Science.gov

Sample records for algorithm ii nsga-ii

  1. Calibration of a polarization navigation sensor using the NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Hu, Xiaoping; Zhang, Lilian; He, Xiaofeng

    2016-10-01

    A bio-inspired polarization navigation sensor is designed based on the polarization sensitivity mechanisms of insects. A new calibration model, formulated as a multi-objective optimization problem, is presented. Unlike existing calibration models, the proposed model makes the calibration problem well-posed. The calibration parameters are optimized with the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) to minimize both angle of polarization (AOP) residuals and degree of linear polarization (DOLP) dispersions. Simulation and experimental results show that the proposed algorithm is more stable than the compared methods for the calibration of polarization navigation sensors.
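
    A minimal sketch of how such a bi-objective calibration could be set up with NSGA-II, here using the open-source pymoo library; the parameter bounds, synthetic data, and the two placeholder objectives are illustrative assumptions, not the paper's actual sensor model:

    ```python
    import numpy as np
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize

    class CalibrationProblem(ElementwiseProblem):
        """Toy stand-in for the sensor calibration: x holds four calibration
        parameters; the two objectives mimic AOP residuals and DOLP dispersion."""

        def __init__(self, measurements):
            super().__init__(n_var=4, n_obj=2, xl=-np.ones(4), xu=np.ones(4))
            self.m = measurements

        def _evaluate(self, x, out, *args, **kwargs):
            fitted = self.m @ x                      # placeholder sensor model
            out["F"] = [np.sum(fitted ** 2),         # "AOP residuals"
                        np.var(fitted)]              # "DOLP dispersion"

    measurements = np.random.rand(100, 4)            # synthetic readings
    res = minimize(CalibrationProblem(measurements), NSGA2(pop_size=50),
                   ("n_gen", 100), seed=1, verbose=False)
    print(res.F[:5])                                 # a few Pareto-front points
    ```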

  2. Design of isolated buildings with S-FBI system subjected to near-fault earthquakes using NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ozbulut, O. E.; Silwal, B.

    2014-04-01

    This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and a performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring-force capability to the isolation system together with additional damping. A three-story building is modeled with the S-FBI isolation system. A multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) to optimize the S-FBI system. Nonlinear time-history analyses of the building with the S-FBI system are performed. A set of 20 near-field ground motion records is used in the numerical simulations. Results show that the S-FBI system successfully controls the response of the building against near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.

  3. Multi-objective optimization of process parameters in Electro-Discharge Diamond Face Grinding based on ANN-NSGA-II hybrid technique

    NASA Astrophysics Data System (ADS)

    Yadav, Ravindra Nath; Yadava, Vinod; Singh, G. K.

    2013-09-01

    The effective study of hybrid machining processes (HMPs), in terms of modeling and optimization, has always been a challenge to researchers. The combined approach of Artificial Neural Network (ANN) and Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) has attracted the attention of researchers for modeling and optimizing complex machining processes. In this paper, a hybrid machining process combining Electrical Discharge Face Grinding (EDFG) and Diamond Face Grinding (DFG), named Electrical Discharge Diamond Face Grinding (EDDFG), has been studied using a hybrid ANN-NSGA-II methodology. ANN is used for modeling, while NSGA-II optimizes the control parameters of the EDDFG process. To observe the input-output relations, experiments were conducted on a self-developed face grinding setup attached to the ram of an EDM machine. During experimentation, wheel speed, pulse current, pulse on-time, and duty factor are taken as input parameters, while the output parameters are material removal rate (MRR) and average surface roughness (Ra). The results show that the developed ANN model can predict the output responses within acceptable limits for a given set of input parameters. It has also been found that the hybrid ANN-NSGA-II approach yields a set of optimal solutions for obtaining appropriate output values under multiple objectives.
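
    The ANN-surrogate-plus-NSGA-II pattern described above can be sketched as follows; the data and network size are hypothetical, and scikit-learn's MLPRegressor stands in for the paper's unspecified ANN:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize

    # Hypothetical experiments: [wheel speed, current, on-time, duty] -> [MRR, Ra]
    X = np.random.rand(30, 4)
    Y = np.random.rand(30, 2)

    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000).fit(X, Y)

    class EDDFGProblem(ElementwiseProblem):
        """ANN surrogate as objective function: maximize MRR, minimize Ra."""

        def __init__(self):
            super().__init__(n_var=4, n_obj=2, xl=np.zeros(4), xu=np.ones(4))

        def _evaluate(self, x, out, *args, **kwargs):
            mrr, ra = ann.predict(x.reshape(1, -1))[0]
            out["F"] = [-mrr, ra]    # negate MRR because pymoo minimizes

    res = minimize(EDDFGProblem(), NSGA2(pop_size=40), ("n_gen", 60), seed=1)
    print(res.X[0], res.F[0])        # one optimal parameter set and its outputs
    ```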

  4. Multi-objective optimization of weld geometry in hybrid fiber laser-arc butt welding using Kriging model and NSGA-II

    NASA Astrophysics Data System (ADS)

    Gao, Zhongmei; Shao, Xinyu; Jiang, Ping; Wang, Chunming; Zhou, Qi; Cao, Longchao; Wang, Yilin

    2016-06-01

    An integrated multi-objective optimization approach combining a Kriging model and the non-dominated sorting genetic algorithm-II (NSGA-II) is proposed to predict and optimize weld geometry in hybrid fiber laser-arc welding of 316L stainless steel. A four-factor, five-level experiment using a Taguchi L25 orthogonal array is conducted, considering laser power (P), welding current (I), distance between laser and arc (D), and traveling speed (V). Kriging models are adopted to approximate the relationship between the process parameters and the weld geometry, namely depth of penetration (DP), bead width (BW), and bead reinforcement (BR). NSGA-II performs the multi-objective optimization, taking the constructed Kriging models as objective functions, and generates a set of optimal solutions along the Pareto-optimal front for the outputs. Meanwhile, the main effects and the first-order interactions between process parameters are analyzed, and the microstructure is discussed. Verification experiments demonstrate that the optimum values obtained by the proposed integrated Kriging/NSGA-II approach are in good agreement with experimental results.
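
    A sketch of the Kriging-as-objective-function idea, with scikit-learn's GaussianProcessRegressor standing in for the paper's Kriging models; the training data, bounds, and assumed optimization directions for DP, BW, and BR are placeholders:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF
    from pymoo.core.problem import ElementwiseProblem
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.optimize import minimize

    X = np.random.rand(25, 4)     # stand-in for the L25 array: P, I, D, V
    Y = np.random.rand(25, 3)     # stand-in for measured DP, BW, BR
    kriging = [GaussianProcessRegressor(kernel=RBF()).fit(X, Y[:, i])
               for i in range(3)]

    class WeldProblem(ElementwiseProblem):
        def __init__(self):
            super().__init__(n_var=4, n_obj=3, xl=np.zeros(4), xu=np.ones(4))

        def _evaluate(self, x, out, *args, **kwargs):
            dp, bw, br = (m.predict(x.reshape(1, -1))[0] for m in kriging)
            # Assumed directions: maximize penetration, minimize width/reinforcement.
            out["F"] = [-dp, bw, br]

    res = minimize(WeldProblem(), NSGA2(pop_size=60), ("n_gen", 80), seed=1)
    ```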

  5. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially reduce the complexity of existing models because they require fewer constraints and variables. Uncertain shipments are also studied in the context of hub maximal covering problems: in many real-world applications, any link on the path from origin to destination may fail due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as a second objective alongside the traditional maximum-coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the exact number of immigrants depends on the results of the other two common NSGA-II operators, i.e., mutation and crossover. Besides validating the proposed models, computational results confirm the better performance of the modified NSGA-II versus the traditional one.

  6. Modified NSGA-II for Solving Continuous Berth Allocation Problem: Using Multiobjective Constraint-Handling Strategy.

    PubMed

    Ji, Bin; Yuan, Xiaohui; Yuan, Yanbin

    2017-02-24

    The continuous berth allocation problem (BAPC) is a major optimization problem in transportation engineering. It aims at minimizing the port stay time of ships by optimally scheduling ships to berthing areas along quays while satisfying several practical constraints. Because the problem is provably NP-hard, most of the previous literature handles the BAPC with heuristics using various constraint-handling strategies. In this paper, we transform the constrained single-objective BAPC (SBAPC) model into an unconstrained multiobjective BAPC (MBAPC) model by converting the constraint violation into another objective, a technique known as multiobjective optimization (MOO) constraint handling. A modified non-dominated sorting genetic algorithm II (MNSGA-II) with bias selection is then proposed to optimize the MBAPC, in which an archive is designed as an efficient complementary mechanism to bias the search toward feasible solutions. Finally, the proposed MBAPC model and the MNSGA-II approach are tested on instances from the literature as well as newly generated ones. We compare the results obtained by MNSGA-II with other MOO algorithms under the MBAPC model, and with single-objective methods under the SBAPC model. The comparison shows the feasibility of the MBAPC model and the advantages of the MNSGA-II algorithm.
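
    The core MOO constraint-handling transform, independent of the berth-allocation specifics, can be sketched in a few lines; the toy objective and constraint below are hypothetical:

    ```python
    import numpy as np

    def to_biobjective(f, constraints):
        """Wrap a constrained single-objective problem as an unconstrained
        bi-objective one: (original cost, total constraint violation)."""
        def evaluate(x):
            # Convention: g(x) <= 0 means feasible, so positive parts are violations.
            violation = sum(max(0.0, g(x)) for g in constraints)
            return np.array([f(x), violation])
        return evaluate

    # Hypothetical toy instance: minimize cost subject to x0 + x1 >= 1
    cost = lambda x: x[0] ** 2 + x[1] ** 2
    gs = [lambda x: 1.0 - x[0] - x[1]]
    evaluate = to_biobjective(cost, gs)
    print(evaluate(np.array([0.2, 0.3])))   # [0.13, 0.5] -> infeasible by 0.5
    ```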

  7. Application of MIMO Disturbance Observer to Control of an Electric Wheelchair Using NSGA-II.

    PubMed

    Saadatzi, Mohammad Nasser; Poshtan, Javad; Saadatzi, Mohammad Sadegh

    2011-05-01

    Electric wheelchairs (EWs) experience various terrain surfaces and slopes as well as occupants with diverse weights. This, in turn, imparts a substantial amount of perturbation to the EW dynamics. In this paper, we make use of a two-degree-of-freedom control architecture called a disturbance observer (DOB), which reduces sensitivity to model uncertainties while enhancing rejection of the disturbances caused by entering slopes. The feedback loop, designed via the characteristic loci method, is then augmented with a DOB with a parameterized low-pass filter. Three performance indices, based on the disturbance rejection, sensitivity reduction, and noise rejection of the whole controller, are defined, enabling us to pick the filter's optimal parameters using a multi-objective optimization approach called the non-dominated sorting genetic algorithm-II. Finally, experimental results show desirable improvement in the stiffness and disturbance rejection of the proposed controller, as well as its robust stability.

  8. Application of MIMO Disturbance Observer to Control of an Electric Wheelchair Using NSGA-II

    PubMed Central

    Saadatzi, Mohammad Nasser; Poshtan, Javad; Saadatzi, Mohammad Sadegh

    2011-01-01

    Electric wheelchairs (EWs) experience various terrain surfaces and slopes as well as occupants with diverse weights. This, in turn, imparts a substantial amount of perturbation to the EW dynamics. In this paper, we make use of a two-degree-of-freedom control architecture called a disturbance observer (DOB), which reduces sensitivity to model uncertainties while enhancing rejection of the disturbances caused by entering slopes. The feedback loop, designed via the characteristic loci method, is then augmented with a DOB with a parameterized low-pass filter. Three performance indices, based on the disturbance rejection, sensitivity reduction, and noise rejection of the whole controller, are defined, enabling us to pick the filter's optimal parameters using a multi-objective optimization approach called the non-dominated sorting genetic algorithm-II. Finally, experimental results show desirable improvement in the stiffness and disturbance rejection of the proposed controller, as well as its robust stability. PMID:22606667

  9. Optimization of multi-reservoir operation with a new hedging rule: application of fuzzy set theory and NSGA-II

    NASA Astrophysics Data System (ADS)

    Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad

    2016-06-01

    Reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factors change abruptly when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors was applied to create a transition zone above and below each rule curve, within which the rationing factor changes gradually. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm to calculate the modified shortage index of two objective functions involving the water supply for minimum flow and for agricultural demands over a long-term simulation period. The Zohre multi-reservoir system in southern Iran is considered as a case study. The proposed hedging rule improved long-term system performance by 10 to 27 percent in comparison with the simple hedging rule, demonstrating that fuzzifying the hedging factors increases the applicability and efficiency of the new hedging rule relative to the conventional rule curve for mitigating water shortage.
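
    The fuzzy transition-zone idea can be illustrated with a simple linear membership function; all thresholds and factors below are hypothetical, and the paper's actual memberships and zone definitions may differ:

    ```python
    def rationing_factor(storage, threshold, band, low_factor, high_factor):
        """Fuzzy hedging: instead of switching the rationing factor abruptly
        when storage crosses a rule curve, interpolate linearly inside a
        transition band of width 2*band centred on the curve."""
        if storage <= threshold - band:
            return low_factor
        if storage >= threshold + band:
            return high_factor
        mu = (storage - (threshold - band)) / (2.0 * band)  # membership in [0, 1]
        return low_factor + mu * (high_factor - low_factor)

    print(rationing_factor(52.0, threshold=50.0, band=5.0,
                           low_factor=0.6, high_factor=1.0))  # 0.88
    ```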

  10. Developing a bi-objective optimization model for solving the availability allocation problem in repairable series-parallel systems by NSGA II

    NASA Astrophysics Data System (ADS)

    Amiri, Maghsoud; Khajeh, Mostafa

    2016-11-01

    This paper addresses the bi-objective optimization of the availability allocation problem in a series-parallel system with repairable components. The two objectives are the availability of the system and its total cost. With respect to previous studies of series-parallel systems, the main contribution of this study is to extend redundancy allocation problems to systems with repairable components. Therefore, the systems considered in this paper have repairable components in their configurations and subsystems. Due to the complexity of the model, a meta-heuristic method, the non-dominated sorting genetic algorithm, is applied to find the Pareto front. After finding the Pareto front, a procedure is used to select the best solution from it.

  11. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  12. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called the non-dominated sorting genetic algorithm (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs to avoid a post-processing step that rounds their values to multiples of the integrated-circuit fabrication technology grid. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final feasible W/L sizes support process, voltage, and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of NSGA-II, while the second optimisation stage guarantees the robustness of the feasible solutions to PVT variations.
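
    The integer-encoding idea is simple to sketch: genes are integers, and decoding multiplies them by the fabrication grid, so every candidate W/L is feasible by construction; the 180 nm grid value below is an assumption:

    ```python
    # Integer genes avoid post-hoc rounding: each W/L is an integer multiple of
    # the fabrication grid (grid value is a hypothetical 180 nm process feature).
    GRID = 0.18e-6  # metres

    def decode(genes):
        """Map integer genes [w1, l1, w2, l2, ...] to physical W/L sizes."""
        return [g * GRID for g in genes]

    genes = [10, 2, 24, 3]            # a candidate individual
    print(decode(genes))              # [1.8e-06, 3.6e-07, 4.32e-06, 5.4e-07]
    ```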

  13. Stochastic resource allocation in emergency departments with a multi-objective simulation optimization algorithm.

    PubMed

    Feng, Yen-Yi; Wu, I-Chin; Chen, Tzu-Li

    2017-03-01

    The number of emergency cases or emergency room visits rapidly increases annually, thus leading to an imbalance in supply and demand and to the long-term overcrowding of hospital emergency departments (EDs). However, current solutions to increase medical resources and improve the handling of patient needs are either impractical or infeasible in the Taiwanese environment. Therefore, EDs must optimize resource allocation given limited medical resources to minimize the average length of stay of patients and medical resource waste costs. This study constructs a multi-objective mathematical model for medical resource allocation in EDs in accordance with emergency flow or procedure. The proposed mathematical model is complex and difficult to solve because its performance value is stochastic; furthermore, the model considers both objectives simultaneously. Thus, this study develops a multi-objective simulation optimization algorithm by integrating a non-dominated sorting genetic algorithm II (NSGA II) with multi-objective computing budget allocation (MOCBA) to address the challenges of multi-objective medical resource allocation. NSGA II is used to investigate plausible solutions for medical resource allocation, and MOCBA identifies effective sets of feasible Pareto (non-dominated) medical resource allocation solutions in addition to effectively allocating simulation or computation budgets. The discrete event simulation model of ED flow is inspired by a Taiwan hospital case and is constructed to estimate the expected performance values of each medical allocation solution as obtained through NSGA II. Finally, computational experiments are performed to verify the effectiveness and performance of the integrated NSGA II and MOCBA method, as well as to derive non-dominated medical resource allocation solutions from the algorithms.

  14. Design of OFDM radar pulses using genetic algorithm based techniques

    NASA Astrophysics Data System (ADS)

    Lellouch, Gabriel; Mishra, Amit Kumar; Inggs, Michael

    2016-08-01

    The merit of evolutionary algorithms (EAs) for solving non-convex optimization problems is widely acknowledged. In this paper, a genetic algorithm (GA) based waveform design framework is used to improve the features of radar pulses built on the orthogonal frequency division multiplexing (OFDM) structure. Our optimization techniques focus on finding optimal phase-code sequences for the OFDM signal. Several optimality criteria are used, since we consider two different radar processing solutions which call for either single- or multiple-objective optimization. When the single objective of minimizing the so-called peak-to-mean envelope power ratio (PMEPR) is tackled, we compare our findings with existing methods and emphasize the merit of our approach. For the two-objective optimization, we first address PMEPR and peak-to-sidelobe level ratio (PSLR) and show that our approach based on the non-dominated sorting genetic algorithm-II (NSGA-II) provides design solutions with noticeable improvements over random sets of phase codes. We then look at another case of interest where the objective functions are two measures of the sidelobe level, namely PSLR and the integrated-sidelobe level ratio (ISLR), and propose to modify NSGA-II to include a constraint on the PMEPR instead. In the last part, we illustrate via a case study how our encoding solution makes it possible to minimize the single objective PMEPR while enabling a target detection enhancement strategy when the SNR metric is chosen for the detection framework.
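
    For reference, the PMEPR of a phase-coded OFDM pulse can be computed from an oversampled IFFT of the subcarrier symbols, which is a standard way to evaluate this objective; the code length and oversampling factor below are arbitrary choices:

    ```python
    import numpy as np

    def pmepr(phase_codes, oversample=4):
        """Peak-to-mean envelope power ratio of an OFDM pulse whose subcarriers
        carry unit-amplitude symbols with the given phases. Zero-padding the
        symbol vector before the IFFT approximates the continuous envelope."""
        n = len(phase_codes)
        symbols = np.exp(1j * np.asarray(phase_codes))
        padded = np.concatenate([symbols, np.zeros((oversample - 1) * n)])
        envelope = np.abs(np.fft.ifft(padded)) ** 2
        return envelope.max() / envelope.mean()

    codes = 2 * np.pi * np.random.rand(64)   # random phase code, 64 subcarriers
    print(10 * np.log10(pmepr(codes)), "dB")
    ```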

  15. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In distributed relational databases, the number of possible query plans increases exponentially with the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost, comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a bi-objective optimization problem, the two objectives being the minimization of total LPC and of total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities.

  16. A master-slave parallel hybrid multi-objective evolutionary algorithm for groundwater remediation design under general hydrogeological conditions

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yang, Y.; Luo, Q.; Wu, J.

    2012-12-01

    This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), in which the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversity of candidate solutions arising from the evolving non-dominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA, coupled with the commonly used groundwater flow and transport codes MODFLOW and MT3DMS, is developed for the multi-objective optimal design of groundwater remediation systems. The proposed methodology is applied to a large-scale field groundwater remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to carry out objective function evaluations in a distributed processor environment, which greatly improves the efficiency of the NPTSGA in finding Pareto-optimal solutions for the real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, can balance the tradeoff between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.

  17. Multi-objective parametric optimization of Inertance type pulse tube refrigerator using response surface methodology and non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Rout, Sachindra K.; Choudhury, Balaji K.; Sahoo, Ranjit K.; Sarangi, Sunil K.

    2014-07-01

    The modeling and optimization of a pulse tube refrigerator is a complicated task due to the complexity of its geometry and physics. The aim of the present work is to optimize the dimensions of the pulse tube and regenerator of an Inertance-Type Pulse Tube Refrigerator (ITPTR) using Response Surface Methodology (RSM) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The Box-Behnken design of the response surface methodology is used in an experimental matrix with four factors and two levels. The diameters and lengths of the pulse tube and regenerator are chosen as the design variables, while the remaining dimensions and operating conditions of the ITPTR are held constant. The output responses are the cold head temperature (Tcold) and the compressor input power (Wcomp). Computational fluid dynamics (CFD) is used to model and solve the ITPTR, and the CFD results agree well with those of a previously published paper. Using the results from the 1-D simulation, RSM is conducted to analyse the effect of the independent variables on the responses. To check the accuracy of the model, the analysis of variance (ANOVA) method is used. Based on the proposed mathematical RSM models, a multi-objective optimization study using NSGA-II is performed to optimize the responses.

  18. Algorithmic Complexity. Volume II.

    DTIC Science & Technology

    1982-06-01

    ...works, give an example, and discuss the inherent weaknesses and their causes. Electrical Network Analysis: Knuth mentions the applicability of... these 3 products of 2-coefficient polynomials can be found by a repeated application of the 3-multiplication scheme, only... scalar... we see another application of this paradigm later. We now investigate the efficiency of the divide-and-conquer polynomial multiplication algorithm. Let M(n)...

  19. A comparative study of three simulation optimization algorithms for solving high dimensional multi-objective optimization problems in water resources

    NASA Astrophysics Data System (ADS)

    Schütze, Niels; Wöhling, Thomas; de Play, Michael

    2010-05-01

    Some real-world optimization problems in water resources have a high-dimensional space of decision variables and more than one objective function. In this work, we compare three general-purpose multi-objective simulation optimization algorithms, namely NSGA-II, AMALGAM, and CMA-ES-MO, in solving three real-world multi-objective optimization problems (MOPs): (i) a high-dimensional soil hydraulic parameter estimation problem; (ii) a multipurpose multi-reservoir operation problem; and (iii) a scheduling problem in deficit irrigation. We analyze the behaviour of the three algorithms on these test problems, with formulations ranging from 40 to 120 decision variables and from 2 to 4 objectives. The computational effort required by each algorithm to reach the true Pareto front is also analyzed.

  20. A new algorithm for the robust optimization of rotor-bearing systems

    NASA Astrophysics Data System (ADS)

    Lopez, R. H.; Ritto, T. G.; Sampaio, Rubens; Souza de Cursi, J. E.

    2014-08-01

    This article presents a new algorithm for the robust optimization of rotor-bearing systems. The goal of the optimization problem is to find the values of a set of parameters for which the natural frequencies of the system are as far away as possible from the rotational speeds of the machine. To accomplish this, the penalization proposed by Ritto, Lopez, Sampaio, and Souza de Cursi in 2011 is employed. Since the rotor-bearing system is subject to uncertainties, this penalization is modelled as a random variable. The robust optimization is performed by minimizing the expected value and variance of the penalization, resulting in a multi-objective optimization problem (MOP). The objective function of this MOP is known to be non-convex, and it is shown that the resulting Pareto front (PF) is also non-convex. Thus, a new algorithm is proposed for solving the MOP: the normal boundary intersection (NBI) method is employed to discretize the PF, handling its non-convexity, while a global optimization algorithm based on a restart procedure and local searches is employed to minimize the NBI subproblems, tackling the non-convexity of the objective function. A numerical analysis section shows the advantage of using the proposed algorithm over the weighted-sum (WS) and NSGA-II approaches. In comparison with the WS, the proposed approach obtains a much more even and useful set of Pareto points. Compared with the NSGA-II approach, the proposed algorithm provides a better approximation of the PF at much lower computational cost.

  1. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    NASA Astrophysics Data System (ADS)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA) based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation, and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems, which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization while maximizing ISR coverage. EAs are uniquely suited to generating solutions in dynamic environments and also allow user feedback; they are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high-performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm, and EAs in general, to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented, along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, an early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology...

  2. Multicomponent, multi-azimuth pre-stack seismic waveform inversion for azimuthally anisotropic media using a parallel and computationally efficient non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Li, Tao; Mallick, Subhashis

    2015-02-01

    Consideration of azimuthal anisotropy, at least to orthorhombic symmetry, is important in exploring naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single-component (P-wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. These multicomponent, multi-azimuthal seismic inversions are non-linear with non-unique solutions; it is therefore appropriate to treat the objectives as a vector and simultaneously optimize each of its components so that the optimal set of solutions can be obtained. The fast non-dominated sorting genetic algorithm (NSGA-II) is a robust stochastic global search method capable of handling multiple objectives, but its computational expense increases with the number of objectives and the number of model parameters to be inverted for. In addition, an accurate extraction of subsurface azimuthal anisotropy requires multicomponent seismic data acquired at fine spatial resolution along many source-to-receiver azimuths. Because routine acquisition of such data is prohibitively expensive, they are typically available along two or at most three azimuthal orientations at a spatial resolution where such an inversion could be applied. This paper proposes a novel multi-objective methodology using a parallelized version of NSGA-II for waveform inversion of multicomponent seismic displacement data along two azimuths. By scaling the objectives prior to ranking, redefining the crowding distance as functions of the scaled objective and model spaces, and varying...

  3. A New Algorithm Using the Non-dominated Tree to improve Non-dominated Sorting.

    PubMed

    Gustavsson, Patrik; Syberfeldt, Anna

    2017-01-19

    Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates as the population size grows. The same drawback also applies to other non-dominated sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS), which works well on a limited number of objectives but deteriorates as the number of objectives grows. This paper presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and large numbers of objectives more efficiently than existing algorithms for non-dominated sorting. In the paper, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
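
    For context, the baseline that all of these methods improve upon, Deb's Fast Non-dominated Sort, can be sketched as a direct O(MN²) implementation for minimization problems:

    ```python
    def fast_non_dominated_sort(F):
        """Deb's fast non-dominated sort. F is a list of objective vectors
        (minimization); returns a list of fronts, each a list of indices."""
        n = len(F)
        dominated_by = [[] for _ in range(n)]   # solutions that i dominates
        counts = [0] * n                        # number of solutions dominating i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if all(a <= b for a, b in zip(F[i], F[j])) and \
                        any(a < b for a, b in zip(F[i], F[j])):
                    dominated_by[i].append(j)   # i dominates j
                elif all(b <= a for a, b in zip(F[i], F[j])) and \
                        any(b < a for a, b in zip(F[i], F[j])):
                    counts[i] += 1              # j dominates i
            if counts[i] == 0:
                fronts[0].append(i)
        k = 0
        while fronts[k]:
            nxt = []
            for i in fronts[k]:
                for j in dominated_by[i]:
                    counts[j] -= 1
                    if counts[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
            k += 1
        return fronts[:-1]

    print(fast_non_dominated_sort([[1, 4], [2, 2], [4, 1], [3, 3]]))
    # [[0, 1, 2], [3]]
    ```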

  4. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequency. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is further developed, based on the bond graph method, to predict the performance of the MR engine mount accurately. A mathematical optimization model is established to minimize the total force transmissibility over several frequency ranges of interest. In this model, the lumped parameters are the design variables, while the maximum force transmissibility and its corresponding frequency in the low-frequency range, as well as the individual lumped parameters, are constrained. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated, and a set of real design parameters is thus obtained through the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. A program flowchart for the improved NSGA-II is given. The results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges of interest.

  5. Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

    PubMed Central

    Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Rubio, José-Miguel; Paredes, Fernando

    2014-01-01

    Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single-objective and, to a lesser extent, multiobjective optimisation problems. To solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that use different evolutionary strategies: the first implements historical knowledge, the second circumstantial knowledge, and the third normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric. PMID:25254257

  6. A hybrid multi-objective particle swarm algorithm for a mixed-model assembly line sequencing problem

    NASA Astrophysics Data System (ADS)

    Rahimi-Vahed, A. R.; Mirghorbani, S. M.; Rabbani, M.

    2007-12-01

    Mixed-model assembly line sequencing is one of the most important strategic problems in the field of production management where diversified customer demands exist. In this article, three major goals are considered: (i) total utility work, (ii) total production rate variation, and (iii) total setup cost. Due to the complexity of the problem, a hybrid multi-objective algorithm based on particle swarm optimization (PSO) and tabu search (TS) is devised to obtain the locally Pareto-optimal frontier, where simultaneous minimization of the above-mentioned objectives is desired. To validate the performance of the proposed algorithm in terms of solution quality and diversity level, the algorithm is applied to various test problems, and its reliability, based on different comparison metrics, is compared with three prominent multi-objective genetic algorithms: PS-NC GA, NSGA-II, and SPEA-II. The computational results show that the proposed hybrid algorithm significantly outperforms the existing genetic algorithms in large-sized problems.

  7. ASMiGA: an archive-based steady-state micro genetic algorithm.

    PubMed

    Nag, Kaustuv; Pal, Tandra; Pal, Nikhil R

    2015-01-01

    We propose a new archive-based steady-state micro genetic algorithm (ASMiGA). In this context, a new archive maintenance strategy is proposed, which maintains a set of nondominated solutions in the archive unless the archive size falls below a minimum allowable size. It makes the archive size adaptive and dynamic. We propose a new environmental selection strategy and a new mating selection strategy. The environmental selection strategy reduces exploration of less probable objective spaces. The mating selection increases searching in more probable search regions by enhancing the exploitation of existing solutions. A new crossover strategy, DE-3, is also proposed. ASMiGA is compared with five well-known multiobjective optimization algorithms of different types: generational evolutionary algorithms (SPEA2 and NSGA-II), an archive-based hybrid scatter search, a decomposition-based evolutionary approach, and an archive-based micro genetic algorithm. For comparison purposes, four performance measures (HV, GD, IGD, and GS) are used on 33 test problems, seven of which are constrained. The proposed algorithm outperforms the other five algorithms.

  8. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described, and aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared with an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms agree within their respective uncertainties.

  9. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and can optimize the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. The first is to maximize the total power production, which is calculated by modeling the wake effects with the Jensen wake model combined with the local wind distribution. The second is to minimize the total electrical cable length, taken as the total length of the minimal spanning tree that connects all turbines and calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also enforced. An ideal test case shows that the proposed algorithm largely outperforms a well-known multi-objective genetic algorithm (NSGA-II). In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
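
    The cable-length objective can be sketched directly: Prim's algorithm over the turbine coordinates returns the minimal-spanning-tree length; the four-turbine layout below is a made-up example:

    ```python
    import numpy as np

    def mst_cable_length(coords):
        """Prim's algorithm on turbine coordinates: total length of the
        minimal spanning tree, used as the electrical-cable-length objective."""
        coords = np.asarray(coords, dtype=float)
        n = len(coords)
        dist = lambda i, j: np.linalg.norm(coords[i] - coords[j])
        in_tree = [0]
        best = np.array([np.inf] + [dist(0, j) for j in range(1, n)])
        total = 0.0
        for _ in range(n - 1):
            # Cheapest edge from the tree to any outside node.
            j = min((j for j in range(n) if j not in in_tree),
                    key=lambda j: best[j])
            total += best[j]
            in_tree.append(j)
            for k in range(n):
                if k not in in_tree:
                    best[k] = min(best[k], dist(j, k))
        return total

    layout = [(0, 0), (0, 500), (400, 0), (400, 500)]   # four turbines, metres
    print(mst_cable_length(layout))                     # 1300.0
    ```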

  10. Environment Sensitivity-based Cooperative Co-evolutionary Algorithms for Dynamic Multi-objective Optimization.

    PubMed

    Xu, Biao; Zhang, Yong; Gong, Dunwei; Guo, Yinan; Rong, Miao

    2017-01-16

    Dynamic multi-objective optimization problems (DMOPs) not only involve multiple conflicting objectives, but these objectives may also vary with time, posing a challenge for researchers. This paper presents a cooperative co-evolutionary strategy based on environment sensitivities for solving DMOPs. In this strategy, a new method for grouping decision variables is first proposed, in which all decision variables are partitioned into two subcomponents according to their interrelation with the environment. Two populations cooperatively optimize the two subcomponents, and two prediction methods, differential prediction and Cauchy mutation, are employed respectively to speed up their responses to environmental changes. Furthermore, two improved dynamic multi-objective optimization algorithms, DNSGAII-CO and DMOPSO-CO, are proposed by incorporating the above strategy into NSGA-II and multi-objective particle swarm optimization, respectively. The proposed algorithms are compared with three state-of-the-art algorithms on seven benchmark DMOPs. Experimental results reveal that the proposed algorithms significantly outperform the compared algorithms in terms of convergence and distribution on most DMOPs.

  11. Global WASF-GA: An Evolutionary Algorithm in Multiobjective Optimization to Approximate the Whole Pareto Optimal Front.

    PubMed

    Saborido, Rubén; Ruiz, Ana B; Luque, Mariano

    2016-02-08

    In this article, we propose a new evolutionary algorithm for multiobjective optimization called Global WASF-GA (global weighting achievement scalarizing function genetic algorithm), which falls within the aggregation-based evolutionary algorithms. The main purpose of Global WASF-GA is to approximate the whole Pareto optimal front. Its fitness function is defined by an achievement scalarizing function (ASF) based on the Tchebychev distance, in which two reference points are considered (the utopian and nadir objective vectors) and the weight vector is taken from a set of weight vectors whose inverses are well distributed. At each iteration, all individuals are classified into different fronts, each formed by the solutions with the lowest values of the ASF for the different weight vectors in the set, using the utopian and nadir vectors as reference points simultaneously. Varying the weight vector in the ASF while considering both reference points at the same time enables the algorithm to obtain a final set of nondominated solutions that approximates the whole Pareto optimal front. We compared Global WASF-GA to MOEA/D (in different versions) and NSGA-II on two-, three-, and five-objective problems. The computational results show that Global WASF-GA achieves better performance, with regard to the hypervolume metric and the epsilon indicator, than the other two algorithms in many cases, especially on three- and five-objective problems.
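
    A sketch of an augmented Tchebychev-type ASF of the kind described, normalising with the utopian and nadir vectors; this is a common textbook formulation, and Global WASF-GA's exact definition may differ in details such as the augmentation term:

    ```python
    import numpy as np

    def asf(f, weights, utopian, nadir, rho=1e-6):
        """Tchebychev-type achievement scalarizing function: smaller is better.
        Objectives are normalised by the utopian-nadir range; the small
        augmentation term rho steers away from weakly optimal points."""
        f, utopian, nadir = map(np.asarray, (f, utopian, nadir))
        norm = (f - utopian) / (nadir - utopian)
        w = np.asarray(weights)
        return np.max(w * norm) + rho * np.sum(w * norm)

    print(asf([2.0, 3.0], weights=[0.5, 0.5],
              utopian=[0.0, 0.0], nadir=[4.0, 4.0]))   # about 0.375
    ```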

  12. A novel multi-objective electromagnetism-like mechanism algorithm with applications in reservoir flood control operation.

    PubMed

    Ouyang, Shuo; Zhou, Jianzhong; Qin, Hui; Liao, Xiang; Wang, Hao

    2014-01-01

    Reservoir flood control operation (RFCO) is a complex problem involving various constraints and purposes, including the safety of the dam, watershed flood control, and navigation. These objectives often conflict with each other, so traditional methods have difficulty solving the multi-objective problem efficiently. In this paper, a multi-objective self-adaptive electromagnetism-like mechanism (MOSEM) algorithm is proposed. To enhance the optimization ability of the electromagnetism-like mechanism, a self-adaptive parameter is applied in the local search operation of MOSEM to adjust parameter values dynamically. MOSEM is tested on several benchmark problems and compared with some well-known multi-objective evolutionary algorithms. A case study of the RFCO problem for the Three Gorges Reservoir is also solved using the multi-objective cultured differential evolution (MOCDE), non-dominated sorting genetic algorithm-II (NSGA-II), and proposed MOSEM methods. The results reveal that MOSEM can provide alternative Pareto-optimal solutions (POS) with better convergence and diversification properties.

  13. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    Gear design is an extremely important area of engineering. In this work, a spur gear reduction unit is considered. A review of the relevant literature on gear design indicates that the compact design of a gearbox involves complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, objectives which are of a conflicting nature. The focus is on developing a design space based on module, pinion teeth, and face width using MATLAB. Feasible points are obtained through different multi-objective algorithms under various constraints drawn from the literature. Attention has been devoted to several novel constraints, such as the critical scoring criterion number, flash temperature, minimum film thickness, involute interference, and contact ratio. The outputs of various algorithms, such as a genetic algorithm, fmincon (constrained nonlinear minimization), and NSGA-II, are compared to identify the best result. This yields a much more precise approach for obtaining practical values of the module, pinion teeth, and face width for minimum centre distance and maximum power transmission for any given material.

  14. Design Optimization of an Axial Fan Blade Through Multi-Objective Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Choi, Jae-Ho; Husain, Afzal; Kim, Kwang-Yong

    2010-06-01

    This paper presents the design optimization of an axial fan blade with a hybrid multi-objective evolutionary algorithm (hybrid MOEA). Reynolds-averaged Navier-Stokes equations with the shear stress transport turbulence model are discretized by finite volume approximations and solved on hexahedral grids for the flow analyses. The numerical results were validated against experimental data for the axial and tangential velocities. Six design variables related to the blade lean angle and blade profile are selected, and Latin hypercube sampling of design of experiments is used to generate design points within the selected design space. Two objective functions, total efficiency and torque, are employed, and multi-objective optimization is carried out to enhance total efficiency and reduce torque. Flow analyses are performed numerically at the design points to obtain values of the objective functions. The Non-dominated Sorting Genetic Algorithm (NSGA-II) with an ɛ-constraint strategy for local search, coupled with a surrogate model, is used for the multi-objective optimization. The Pareto-optimal solutions are presented, and a trade-off analysis is performed between the two competing objectives in view of the design and flow constraints. Total efficiency is enhanced and torque is decreased relative to the reference design by the multi-objective optimization. The Pareto-optimal solutions are analyzed to understand the mechanism of the improvement in total efficiency and the reduction in torque.

  15. Single-objective optimization of thermo-electric coolers using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Khanh, Doan V. K.; Vasant, P.; Elamvazuthi, Irraivan; Dieu, Vo N.

    2014-10-01

    Thermo-electric coolers (TECs) are nowadays applied in a wide range of thermal energy systems, owing to their superior features: no refrigerant or moving parts are needed, they generate no electrical or acoustic noise, and they are environmentally friendly. Over the past decades, much research has been devoted to improving the efficiency of TECs by enhancing their material and design parameters. The material parameters are restricted by currently available materials and module fabrication technologies. Therefore, the main objective of TEC design is to determine a set of design parameters such as leg area, leg length, and the number of legs. Two quantities that play an important role when considering the suitability of TECs in applications are the rate of refrigeration (ROR) and the coefficient of performance (COP). In this paper, previous research is first reviewed to show the diversity of optimization approaches in TEC design for enhancing performance and efficiency. Then, a single-objective optimization problem (SOP) is solved using a genetic algorithm (GA) to optimize the geometric properties so that TECs operate at near-optimal conditions. In future work, multi-objective optimization problems (MOPs) using a hybrid GA with another optimization technique will be considered, and the results compared with previous approaches such as the Non-Dominated Sorting Genetic Algorithm (NSGA-II) to assess the advantages and disadvantages.
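
    A minimal real-coded GA of the kind used for such single-objective sizing can be sketched as follows; the fitness function is a placeholder, not actual thermoelectric physics, and all GA settings are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def toy_ror(x):
        """Hypothetical stand-in for rate of refrigeration as a function of
        (normalised) leg area, leg length and number of legs. NOT real physics."""
        area, length, legs = x
        return area * legs / (0.1 + length)

    def ga(fitness, n_var=3, pop_size=40, gens=100, sigma=0.1):
        """Minimal real-coded GA: tournament selection, blend crossover,
        Gaussian mutation, and elitism of the single best individual."""
        pop = rng.random((pop_size, n_var))
        for _ in range(gens):
            fit = np.array([fitness(x) for x in pop])
            elite = pop[fit.argmax()].copy()
            children = [elite]
            while len(children) < pop_size:
                a, b = (pop[max(rng.integers(0, pop_size, 2),
                                key=lambda i: fit[i])] for _ in range(2))
                w = rng.random(n_var)
                child = w * a + (1 - w) * b + rng.normal(0.0, sigma, n_var)
                children.append(np.clip(child, 0.0, 1.0))
            pop = np.array(children)
        fit = np.array([fitness(x) for x in pop])
        return pop[fit.argmax()]

    print(ga(toy_ror))   # near-optimal (area, length, legs) for the toy model
    ```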

  16. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    PubMed

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure containing many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential to have artificial retina systems able to function in a way as similar as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted to guide the algorithm in the search for the parameters that best match a synthetic retinal model's output to real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses.

  17. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump had already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to further improve its performance with respect to two goals. To limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulations based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model are run in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that unwanted flow structures, such as the secondary flow on the meridian plane, are diminished or eliminated in the optimized pump.

  18. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-01

    For event dynamic K-coverage algorithms, each management node selects its assistant nodes using a greedy algorithm, without considering residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete to become the event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of events those nodes dynamically cover. Third, each management node establishes an optimization model whose targets are the expected energy consumption and residual-energy variance of its neighbor nodes, together with the detection performance for the events it manages. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model, and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm takes into account the effect of harsh underwater environments on information collection and transmission, as well as a node's residual energy and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime. PMID:28106837

  19. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-19

    For event dynamic K-coverage algorithms, each management node selects its assistant nodes using a greedy algorithm, without considering residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete to become the event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of events those nodes dynamically cover. Third, each management node establishes an optimization model whose targets are the expected energy consumption and residual-energy variance of its neighbor nodes, together with the detection performance for the events it manages. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model, and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm takes into account the effect of harsh underwater environments on information collection and transmission, as well as a node's residual energy and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.
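
    The final TOPSIS step, which picks one compromise solution from the NSGA-II Pareto set, can be sketched as follows; the equal weights and the toy two-objective front are assumptions:

    ```python
    import numpy as np

    def topsis(F, weights):
        """Rank Pareto-set rows of F (minimization objectives) by relative
        closeness to the ideal solution; returns the index of the best row."""
        F = np.asarray(F, dtype=float)
        norm = F / np.linalg.norm(F, axis=0)          # vector-normalise columns
        V = norm * np.asarray(weights)
        ideal, worst = V.min(axis=0), V.max(axis=0)   # min is best (minimization)
        d_best = np.linalg.norm(V - ideal, axis=1)
        d_worst = np.linalg.norm(V - worst, axis=1)
        closeness = d_worst / (d_best + d_worst)
        return int(closeness.argmax())

    pareto = [[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]]     # hypothetical front
    print(topsis(pareto, weights=[0.5, 0.5]))         # -> 2 for this toy front
    ```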

  20. On the use of multi-algorithm, genetically adaptive multi-objective method for multi-site calibration of the SWAT model

    SciTech Connect

    Zhang, Xuesong; Srinivasan, Raghavan; Van Liew, M.

    2010-04-15

    With the availability of spatially distributed data, distributed hydrologic models are increasingly used to simulate spatially varied hydrologic processes in order to understand and manage the natural and human activities that affect watershed systems. Multi-objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models increases substantially, selecting efficient and effective multi-objective optimization algorithms becomes a nontrivial issue. In this study, we evaluated a multi-algorithm, genetically adaptive multi-objective method (AMALGAM) for multi-site calibration of a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi-objective optimization (EMO) algorithms, the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Non-dominated Sorted Genetic Algorithm II (NSGA-II). In order to provide insights into each method's overall performance, these three methods were tested in four watersheds with various characteristics. The test results indicate that AMALGAM can consistently provide competitive or superior results compared with the other two methods. The multi-method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi-site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to run this method in multiple trials with a relatively small number of model runs each, rather than once with long iterations. In addition, incorporating different multi-objective optimization algorithms and multi-mode search operators into AMALGAM deserves further research.
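
    To picture what a multi-site objective vector looks like, the illustrative Python sketch below assigns one objective per gauged site (here 1 - NSE, the Nash-Sutcliffe efficiency); the model, sites, and data are toy stand-ins, not the SWAT/AMALGAM setup:

        import numpy as np

        # Hedged sketch: one calibration objective per stream gauge, using
        # 1 - NSE. The 'model' below is a toy stand-in for SWAT.
        def nse(sim, obs):
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def multisite_objectives(params, run_model, observations):
            sims = run_model(params)                    # site -> simulated series
            return np.array([1.0 - nse(sims[s], observations[s])
                             for s in observations])

        rng = np.random.default_rng(5)
        obs = {"site_A": rng.gamma(2.0, 3.0, 365), "site_B": rng.gamma(2.0, 5.0, 365)}
        toy_model = lambda p: {s: p[0] * v for s, v in obs.items()}
        print(multisite_objectives([0.9], toy_model, obs))  # two objectives to minimize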

  1. SAGE Version 7.0 Algorithm: Application to SAGE II

    NASA Technical Reports Server (NTRS)

    Damadeo, R. P.; Zawodny, J. M.; Thomason, L. W.; Iyer, N.

    2013-01-01

    This paper details the Stratospheric Aerosol and Gas Experiment (SAGE) version 7.0 algorithm and how it is applied to SAGE II. Changes made between the previous (v6.2) and current (v7.0) versions are described, and their impacts on the data products are explained for both coincident event comparisons and time-series analysis. Users of the data will notice a general improvement in all of the SAGE II data products, which are now in better agreement with more modern data sets (e.g., SAGE III) and more robust for use in trend studies.

  2. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Application of the proposed algorithm, TSEA, alongside several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  3. Multi-Rate Digital Control Systems with Simulation Applications. Volume II. Computer Algorithms

    DTIC Science & Technology

    1980-09-01

    AFWAL-TR-80-3101, Volume II: Computer Algorithms (Multi-Rate Digital Control Systems with Simulation Applications). Only a fragment of the abstract is recoverable: "... additional options. The analytical basis for the computer algorithms is discussed in Ref. 12. However, to provide a complete description of the program, some ..."

  4. Tracking at CDF: algorithms and experience from Run I and Run II

    SciTech Connect

    Snider, F.D.

    2005-10-01

    The authors describe the tracking algorithms used during Run I and Run II by CDF at the Fermilab Tevatron Collider, covering the time from about 1992 through the present, and discuss the performance of the algorithms at high luminosity. By tracing the evolution of the detectors and algorithms, they reveal some of the successful strategies used by CDF to address the problems of tracking at high luminosities.

  5. Minimizing the total tardiness and makespan in an open shop scheduling problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Noori-Darvish, Samaneh; Tavakkoli-Moghaddam, Reza

    2012-10-01

    We consider an open shop scheduling problem with separate setup and processing times, in which the setup times are not only machine-dependent but also dependent on the sequence of jobs processed on a machine. A novel bi-objective mathematical programming model is designed to minimize the total tardiness and the makespan. Among several multi-objective decision making (MODM) methods, an interactive one, called the TH method, is applied to solve small-sized instances optimally and obtain Pareto-optimal solutions with the Lingo software. To obtain Pareto-optimal sets for medium to large-sized problems, an improved non-dominated sorting genetic algorithm II (NSGA-II) is presented that uses a heuristic method to generate a good initial population. In addition, using design of experiments (DOE), the efficiency of the proposed improved NSGA-II is compared with that of a well-known multi-objective genetic algorithm, namely SPEA-II. Finally, the performance of the improved NSGA-II is examined in a comparison with the performance of the traditional NSGA-II.
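
    The ranking step shared by the improved and the traditional NSGA-II is fast non-dominated sorting; the illustrative Python sketch below (not the authors' code) ranks candidate schedules by their (total tardiness, makespan) pairs:

        # Minimal sketch of NSGA-II's fast non-dominated sort (illustrative).
        # Both objectives are minimized.
        def dominates(a, b):
            """True if a Pareto-dominates b (all <=, at least one <)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def fast_non_dominated_sort(objs):
            """Return a list of fronts; each front is a list of indices into objs."""
            n = len(objs)
            S = [[] for _ in range(n)]      # solutions dominated by i
            counts = [0] * n                # number of solutions dominating i
            fronts = [[]]
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    if dominates(objs[i], objs[j]):
                        S[i].append(j)
                    elif dominates(objs[j], objs[i]):
                        counts[i] += 1
                if counts[i] == 0:
                    fronts[0].append(i)
            k = 0
            while fronts[k]:
                nxt = []
                for i in fronts[k]:
                    for j in S[i]:
                        counts[j] -= 1
                        if counts[j] == 0:
                            nxt.append(j)
                k += 1
                fronts.append(nxt)
            return fronts[:-1]

        # Example: (total tardiness, makespan) for six candidate schedules.
        print(fast_non_dominated_sort([(3, 9), (4, 7), (5, 5), (6, 6), (4, 8), (7, 4)]))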

  6. Measurement of the inclusive jet cross section using the midpoint algorithm in Run II at CDF

    SciTech Connect

    Group, Robert Craig

    2006-01-01

    A measurement is presented of the inclusive jet cross section using the Midpoint jet clustering algorithm in five different rapidity regions. This is the first analysis which measures the inclusive jet cross section using the Midpoint algorithm in the forward region of the detector. The measurement is based on more than 1 fb-1 of integrated luminosity of Run II data taken by the CDF experiment at the Fermi National Accelerator Laboratory. The results are consistent with the predictions of perturbative quantum chromodynamics.

  7. Beam size and position measurement based on logarithm processing algorithm in HLS II

    NASA Astrophysics Data System (ADS)

    Cheng, Chao-Cai; Sun, Bao-Gen; Yang, Yong-Liang; Zhou, Ze-Ran; Lu, Ping; Wu, Fang-Fang; Wang, Ji-Gang; Tang, Kai; Luo, Qing; Li, Hao; Zheng, Jia-Jun; Duan, Qing-Ming

    2016-04-01

    A logarithm processing algorithm to measure beam transverse size and position is proposed, and preliminary experimental results in Hefei Light Source II (HLS II) are given. The algorithm is based on only 4 successive channels of the 16 anode channels of the multianode photomultiplier tube (MAPMT) R5900U-00-L16, which has a typical rise time of 0.6 ns and an effective area of 0.8 mm × 16 mm per anode channel. In the paper, we first present simulation results for the algorithm with and without channel inconsistency. Then we calibrate the channel inconsistency and verify the algorithm using a general current signal processor, Libera Photon, in a low-speed scheme. Finally, we obtain turn-by-turn beam size and position and calculate the vertical tune in a high-speed scheme. The experimental results show that measured values fit well with simulation results after channel differences are calibrated, and the fractional part of the vertical tune is 0.3628, very close to the nominal value of 0.3621. Supported by National Natural Science Foundation of China (11005105, 11175173)
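
    One way to see how a logarithm-processing scheme can recover both position and size from only four channels (an illustrative sketch, not the HLS II implementation): if the light profile sampled by the anodes is approximately Gaussian, the logarithm of the amplitudes is quadratic in position, so a parabola fitted to the four logs yields the centroid and the RMS size. The Gaussian assumption and all names below are illustrative:

        import numpy as np

        # Hedged sketch: Gaussian beam center/size from 4 adjacent channel
        # amplitudes via a parabolic fit to their logarithms. Assumes a
        # Gaussian profile and pre-calibrated channel gains.
        def log_parabola_fit(x, amps):
            c2, c1, c0 = np.polyfit(x, np.log(amps), 2)  # ln V = c2 x^2 + c1 x + c0
            sigma = np.sqrt(-1.0 / (2.0 * c2))           # Gaussian: c2 = -1/(2 sigma^2)
            center = -c1 / (2.0 * c2)                    # vertex of the parabola
            return center, sigma

        x = np.array([-1.5, -0.5, 0.5, 1.5])             # channel centers (mm)
        amps = np.exp(-(x - 0.3) ** 2 / (2 * 0.8 ** 2))  # beam at 0.3 mm, sigma 0.8 mm
        print(log_parabola_fit(x, amps))                 # ~ (0.3, 0.8)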

  8. Inversion of oceanic constituents in case I and II waters with genetic programming algorithms.

    PubMed

    Chami, Malik; Robilliard, Denis

    2002-10-20

    A stochastic inverse technique based on a genetic programming (GP) algorithm was developed to invert oceanic constituents from simulated data for case I and case II water applications. The simulations were carried out with the Ordre Successifs Ocean Atmosphere (OSOA) radiative transfer model. They include the effects of oceanic substances such as algal-related chlorophyll, nonchlorophyllous suspended matter, and dissolved organic matter. The synthetic data set also takes into account the directional effects of particles through a variation of their phase function, which makes the simulated data realistic. It is shown that GP can be successfully applied to the inverse problem with acceptable stability in the presence of realistic noise in the data. GP is compared with neural network methodology for case I waters; GP exhibits similar retrieval accuracy, which is greater than for traditional techniques such as band ratio algorithms. The application of GP to real satellite data [a Sea-viewing Wide Field-of-view Sensor (SeaWiFS)] was also carried out for case I waters as a validation. Good agreement was obtained when GP results were compared with the SeaWiFS empirical algorithm. For case II waters the accuracy of GP is less than 33%, which remains satisfactory, at the present time, for remote-sensing purposes.

  9. Recent Improvements to the Finite-Fault Rupture Detector Algorithm: FinDer II

    NASA Astrophysics Data System (ADS)

    Smith, D.; Boese, M.; Heaton, T. H.

    2015-12-01

    Constraining the finite-fault rupture extent and azimuth is crucial for accurately estimating ground-motion in large earthquakes. Detecting and modeling finite-fault ruptures in real-time is thus essential to both earthquake early warning (EEW) and rapid emergency response. Following extensive real-time and offline testing, the finite-fault rupture detector algorithm, FinDer (Böse et al., 2012 & 2015), was successfully integrated into the California-wide ShakeAlert EEW demonstration system. Since April 2015, FinDer has been scanning real-time waveform data from approximately 420 strong-motion stations in California for peak ground acceleration (PGA) patterns indicative of earthquakes. FinDer analyzes strong-motion data by comparing spatial images of observed PGA with theoretical templates modeled from empirical ground-motion prediction equations (GMPEs). If the correlation between the observed and theoretical PGA is sufficiently high, a report is sent to ShakeAlert including the estimated centroid position, length, and strike, and their uncertainties, of an ongoing fault rupture. Rupture estimates are continuously updated as new data arrives. As part of a joint effort between USGS Menlo Park, ETH Zurich, and Caltech, we have rewritten FinDer in C++ to obtain a faster and more flexible implementation. One new feature of FinDer II is that multiple contour lines of high-frequency PGA are computed and correlated with templates, allowing the detection of both large earthquakes and much smaller (~ M3.5) events shortly after their nucleation. Unlike previous EEW algorithms, FinDer II thus provides a modeling approach for both small-magnitude point-source and larger-magnitude finite-fault ruptures with consistent error estimates for the entire event magnitude range.
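
    The template-matching idea can be pictured with a toy correlation sketch (illustrative Python, not the FinDer code; the 'templates' below stand in for the GMPE-modeled PGA patterns):

        import numpy as np

        # Toy sketch of FinDer-style matching: pick the rupture template whose
        # spatial PGA pattern best correlates with the observed PGA image.
        def best_template(observed, templates):
            obs = (observed - observed.mean()) / observed.std()
            best, best_r = None, -np.inf
            for name, t in templates.items():
                tt = (t - t.mean()) / t.std()
                r = (obs * tt).mean()        # normalized correlation at zero lag
                if r > best_r:
                    best, best_r = name, r
            return best, best_r

        y, x = np.mgrid[0:20, 0:20]          # fake 20x20 PGA maps
        point = np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / 20.0)
        fault = np.exp(-((x - 10) ** 2 / 80.0 + (y - 10) ** 2 / 8.0))
        obs = fault + 0.05 * np.random.default_rng(0).normal(size=fault.shape)
        print(best_template(obs, {"point source": point, "finite fault": fault}))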

  10. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper, together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) dichotomous statistics are used to evaluate the capability of the method to identify rain and no-rain clouds; 2) accuracy statistics are applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR, and HK indices varying in the ranges 0.80-0.82, 0.33-0.36, and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL ...

  11. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times.

    PubMed

    Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering the completion time and the total energy consumption under stochastic processing times. The case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages in solving the proposed model compared with HPSO and PSO+SA. The idea and method of the paper can be generalized widely in the manufacturing industry, because they can reduce the energy consumption of energy-intensive manufacturing enterprises with little investment when the new approach is applied to existing systems.

  12. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times

    PubMed Central

    Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering the completion time and the total energy consumption under stochastic processing times. The case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages in solving the proposed model compared with HPSO and PSO+SA. The idea and method of the paper can be generalized widely in the manufacturing industry, because they can reduce the energy consumption of energy-intensive manufacturing enterprises with little investment when the new approach is applied to existing systems. PMID:27907163

  13. The sloan digital sky Survey-II supernova survey: search algorithm and follow-up observations

    SciTech Connect

    Sako, Masao; Bassett, Bruce; Becker, Andrew; Hogan, Craig J.; Cinabro, David; DeJongh, Fritz; Frieman, Joshua A.; Marriner, John; Miknaitis, Gajus; Depoy, D. L.; Prieto, Jose Luis; Dilday, Ben; Kessler, Richard; Doi, Mamoru; Garnavich, Peter M.; Holtzman, Jon; Jha, Saurabh; Konishi, Kohki; Lampeitl, Hubert; Nichol, Robert C.; and others

    2008-01-01

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg² region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the type Ia SNe, the main driver for the survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  14. Modeling and multi-criteria optimization of an industrial process for continuous lactic acid production.

    PubMed

    Mokeddem, Diab; Khellaf, Abdelhafid

    2014-06-01

    The key feature of this paper is the optimization of an industrial process for continuous production of lactic acid. For this, a two-stage fermentor process integrated with cell recycling has been mathematically modeled and optimized for overall productivity, conversion, and yield simultaneously. The non-dominated sorting genetic algorithm (NSGA-II) was applied to solve the constrained multi-objective optimization problem, as it is capable of finding multiple Pareto-optimal solutions in a single run, thereby avoiding the need to run a single-objective optimization several times. Compared with traditional methods, NSGA-II could find most of the solutions on the true Pareto front, and its use is also direct and convenient. The effects of operating variables on the optimal solutions are discussed in detail. It was observed that the two-stage system, with its greater efficiency, can achieve higher profit at an acceptable compromise among the objectives.

  15. Phasing the mirror segments of the Keck telescopes II: the narrow-band phasing algorithm.

    PubMed

    Chanan, G; Ohara, C; Troy, M

    2000-09-01

    In a previous paper, we described a successful technique, the broadband algorithm, for phasing the primary mirror segments of the Keck telescopes to an accuracy of 30 nm. Here we describe a complementary narrow-band algorithm. Although it has a limited dynamic range, it is much faster than the broadband algorithm and can achieve an unprecedented phasing accuracy of approximately 6 nm. Cross checks between these two independent techniques validate both methods to a high degree of confidence. Both algorithms converge to the edge-minimizing configuration of the segmented primary mirror, which is not the same as the overall wave-front-error-minimizing configuration, but we demonstrate that this distinction disappears as the segment aberrations are reduced to zero.

  16. Experimental validation of improved 3D SBP positioning algorithm in PET applications using UW Phase II Board

    NASA Astrophysics Data System (ADS)

    Jorge, L. S.; Bonifacio, D. A. B.; DeWitt, Don; Miyaoka, R. S.

    2016-12-01

    Continuous scintillator-based detectors have been considered as a competitive and cheaper approach than highly pixelated discrete-crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate the 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm to estimate the 3D interaction position in a continuous-crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique, which requires characterization of the light response functions and of event positions. The algorithm has been implemented in the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm was previously successfully implemented on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Furthermore, the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, the average FWHM (Full Width at Half Maximum) for the whole block is 1.71±0.01 mm, 1.70±0.01 mm, and 1.632±0.005 mm for the x, y, and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside the central area of the detector, which represents 64% of the total block area. The weighted average of the event rate by regional area (corner, border, and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.
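
    An illustrative sketch of the statistics-based positioning idea (not the cMiCE implementation): pick the calibration grid point whose characterized mean light response best explains the observed PMT signals, assuming Gaussian channel statistics. All names and shapes below are illustrative:

        import numpy as np

        # Hedged sketch of statistics-based (maximum-likelihood) positioning:
        # choose the grid point whose calibrated mean response best explains
        # the observed event, assuming independent Gaussian channel noise.
        def sbp_position(event, mean_lrf, sigma_lrf):
            """event: (n_ch,) observed signals;
            mean_lrf, sigma_lrf: (n_x, n_y, n_z, n_ch) calibration tables."""
            ll = -0.5 * np.sum(((event - mean_lrf) / sigma_lrf) ** 2
                               + 2.0 * np.log(sigma_lrf), axis=-1)
            return np.unravel_index(np.argmax(ll), ll.shape)  # (ix, iy, iz)

        # Tiny synthetic example: 4x4x2 grid, 8 channels.
        rng = np.random.default_rng(1)
        mean_lrf = rng.uniform(50, 200, size=(4, 4, 2, 8))
        sigma_lrf = np.sqrt(mean_lrf)                     # Poisson-like spread
        true_idx = (2, 1, 0)
        event = rng.normal(mean_lrf[true_idx], sigma_lrf[true_idx])
        print(sbp_position(event, mean_lrf, sigma_lrf))   # usually (2, 1, 0)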

  17. Development of Algorithms for Nonlinear Physics on Type-II Quantum Computers

    DTIC Science & Technology

    2007-07-01

    Quantum Lattice Algorithms for Nonlinear Physics: Optical Solitons and Bose-Einstein Condensates (AFOSR final report, Jan. 31, 2007). Only fragments of the abstract are recoverable: "... macroscopic nonlinear derivatives by local moments. Chapman-Enskog asymptotics will then, on projecting back into physical space, yield these nonlinear ..."; "... Entropic Lattice Boltzmann Model will be strongly pursued in future proposals."

  18. Experimental analysis and mathematical prediction of Cd(II) removal by biosorption using support vector machines and genetic algorithms.

    PubMed

    Hlihor, Raluca Maria; Diaconu, Mariana; Leon, Florin; Curteanu, Silvia; Tavares, Teresa; Gavrilescu, Maria

    2015-05-25

    We investigated the bioremoval of Cd(II) in batch mode, using dead and living biomass of Trichoderma viride. Kinetic studies revealed three distinct stages of the biosorption process. The pseudo-second-order model and the Langmuir model described well the kinetics and equilibrium of the biosorption process, with a coefficient of determination R² > 0.99. The value of the mean free energy of adsorption, E, is less than 16 kJ/mol at 25 °C, suggesting that, at low temperature, the dominant process involved in Cd(II) biosorption by dead T. viride is chemical ion exchange. With the temperature increasing to 40-50 °C, E values are above 16 kJ/mol, showing that the particle diffusion mechanism could play an important role in Cd(II) biosorption. The studies on T. viride growth in Cd(II) solutions and its bioaccumulation performance showed that the living biomass was able to bioaccumulate 100% of the Cd(II) from a 50 mg/L solution at pH 6.0. The influence of pH, biomass dosage, metal concentration, contact time, and temperature on the bioremoval efficiency was evaluated to further assess the biosorption capability of the dead biosorbent. These complex influences were correlated by means of a modeling procedure consisting of a data-driven approach in which the principles of artificial intelligence were applied with the help of support vector machines (SVM) combined with genetic algorithms (GA). According to our data, the optimal working conditions for the removal of 98.91% of the Cd(II) by T. viride were found for an aqueous solution containing 26.11 mg/L Cd(II) as follows: pH 6.0, contact time of 3833 min, 8 g/L biosorbent, temperature 46.5 °C. The complete characterization of the bioremoval parameters indicates that T. viride is an excellent material for treating wastewater containing low concentrations of metal.

  19. Tangent height registration method for the Version 1.4 data retrieval algorithm of the solar occultation sensor ILAS-II.

    PubMed

    Tanaka, Tomoaki; Nakajima, Hideaki; Sugita, Takafumi; Ejiri, Mitsumu K; Irie, Hitoshi; Saitoh, Naoko; Terao, Yukio; Kawasaki, Hiroyuki; Usami, Masatoshi; Yokota, Tatsuya; Kobayashi, Hirokazu; Sasano, Yasuhiro

    2007-10-10

    The Improved Limb Atmospheric Spectrometer-II (ILAS-II) is a satellite-borne solar occultation sensor onboard the Advanced Earth Observing Satellite-II (ADEOS-II). The ILAS-II succeeded the ILAS. The ILAS-II used four grating spectrometers to observe vertical profiles of gas volume mixing ratios of trace constituents and was also equipped with a Sun-edge sensor to determine tangent heights geometrically with high precision. The accuracy of gas volume mixing ratios depends on the accuracy of the tangent height determination. The combination method is a tangent height registration method that was developed to give appropriate tangent heights for the ILAS-II Version 1.4 data retrieval algorithm. This study describes the method used in the ILAS-II Version 1.4 retrieval algorithm to register tangent heights. The root-sum-square total random error is estimated to be 30 m, and the total systematic error is 180 m at an altitude of 30 km. The influence of the tangent height errors on the vertical profiles of gas volume mixing ratios in ILAS-II Version 1.4 is estimated by using the relative difference. The relative difference for each species is within 7% (20%) for an altitude shift of ±100 m (±300 m).

  20. High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    2004-01-01

    The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R_rs(λ), where R_rs(λ) is defined as the water-leaving radiance, L_w(λ), divided by the downwelling irradiance just above the sea surface, E_d(λ, 0+). The R_rs(λ) model (Section 3) has two free variables: the absorption coefficient due to phytoplankton at 675 nm, a_phi(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a_g(400). The R_rs model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene; these control the spectral shapes of the optical constituents of the model. R_rs(λ_i) values from the MODIS data processing system are placed into the model, the model is inverted, and a_phi(675), a_g(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which, for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi
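
    The inversion step can be pictured as a two-parameter least-squares fit; the sketch below uses a deliberately made-up toy forward model, not the actual MODIS Case 2 R_rs model, and all names are illustrative:

        import numpy as np
        from scipy.optimize import least_squares

        # Hedged sketch of a semi-analytical reflectance inversion. The two
        # free variables mirror a_phi(675) and a_g(400); the forward model is
        # a toy stand-in, NOT the MODIS Case 2 model.
        bands = np.array([412.0, 443.0, 488.0, 531.0, 551.0, 667.0])

        def toy_rrs(a_phi_675, a_g_400):
            a_phi = a_phi_675 * np.exp(-((bands - 675.0) / 150.0) ** 2)  # pigment shape
            a_g = a_g_400 * np.exp(-0.015 * (bands - 400.0))             # CDOM slope
            bb = 0.003                                                   # fixed backscatter
            return 0.05 * bb / (bb + a_phi + a_g)                        # reflectance-like

        def invert(rrs_obs):
            fit = least_squares(lambda p: toy_rrs(*p) - rrs_obs,
                                x0=[0.02, 0.05], bounds=([0.0, 0.0], [1.0, 5.0]))
            return fit.x

        truth = (0.03, 0.12)
        print(invert(toy_rrs(*truth)))   # recovers ~ (0.03, 0.12)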

  1. Graph Theoretic Foundations of Multibody Dynamics Part II: Analysis and Algorithms

    PubMed Central

    Jain, Abhinandan

    2011-01-01

    This second part of a two-part paper uses concepts from graph theory to obtain a deeper understanding of the mathematical foundations of multibody dynamics. The first part [7] established the block-weighted adjacency (BWA) matrix structure of spatial operators associated with serial and tree topology multibody system dynamics, and introduced the notions of spatial kernel operators (SKO) and spatial propagation operators (SPO). This paper builds upon these connections to show that key analytical results and computational algorithms are a direct consequence of these structural properties and require minimal assumptions about the specific nature of the underlying multibody system. We formalize this by introducing SKO models for general tree-topology multibody systems. We show that key analytical results, including mass matrix factorization, inversion, and decomposition, hold for all SKO models. It is also shown that key low-order scatter/gather recursive computational algorithms follow directly from these abstract-level analytical results. Examples are provided to illustrate the concrete application of these general results. The paper also describes a general recipe for developing SKO models. The abstract nature of SKO models allows the application of these techniques to a very broad class of multibody systems. PMID:22102791

  2. Parallel Algorithms and Software for Nuclear, Energy, and Environmental Applications. Part II: Multiphysics Software

    SciTech Connect

    Derek Gaston; Luanjing Guo; Glen Hansen; Hai Huang; Richard Johnson; Dana Knoll; Chris Newman; Hyeong Kae Park; Robert Podgorney; Michael Tonks; Richard Williamson

    2012-09-01

    This paper is the second part of a two part sequence on multiphysics algorithms and software. The first [1] focused on the algorithms; this part treats the multiphysics software framework and applications based on it. Tight coupling is typically designed into the analysis application at inception, as such an application is strongly tied to a composite nonlinear solver that arrives at the final solution by treating all equations simultaneously. The application must also take care to minimize both time and space error between the physics, particularly if more than one mesh representation is needed in the solution process. This paper presents an application framework that was specifically designed to support tightly coupled multiphysics analysis. The Multiphysics Object Oriented Simulation Environment (MOOSE) is based on the Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning to provide the underlying mathematical structure for applications. The report concludes with the presentation of a host of nuclear, energy, and environmental applications that demonstrate the efficacy of the approach and the utility of a well-designed multiphysics framework.

  3. Validation of the ORA spatial inversion algorithm with respect to the Stratospheric Aerosol and Gas Experiment II data.

    PubMed

    Fussen, D; Arijs, E; Nevejans, D; Van Hellemont, F; Brogniez, C; Lenoble, J

    1998-05-20

    We present the results of a comparison of the total extinction altitude profiles measured at the same time and at the same location by the ORA (Occultation Radiometer) and Stratospheric Aerosol and Gas Experiment II solar occultation experiments at three different wavelengths. A series of 25 events for which the grazing points of both experiments lie within a 2 degrees window has been analyzed. The mean relative differences observed over the altitude range 15-45 km are -8.4%, 1.6%, and 3% for the three channels (0.385, 0.6, and 1.02 µm). Some systematic degradation occurs below 20 km (as a result of signal saturation and possible cloud interference) and above 40 km (low absorption). The fair general agreement between the extinction profiles obtained by the two different instruments enhances our confidence in the results of the ORA experiment and of the recently developed vertical inversion algorithm applied to real data.

  4. WORM ALGORITHM PATH INTEGRAL MONTE CARLO APPLIED TO THE 3He-4He II SANDWICH SYSTEM

    NASA Astrophysics Data System (ADS)

    Al-Oqali, Amer; Sakhel, Asaad R.; Ghassib, Humam B.; Sakhel, Roger R.

    2012-12-01

    We present a numerical investigation of the thermal and structural properties of the 3He-4He sandwich system adsorbed on a graphite substrate using the worm algorithm path integral Monte Carlo (WAPIMC) method [M. Boninsegni, N. Prokof'ev and B. Svistunov, Phys. Rev. E74, 036701 (2006)]. For this purpose, we have modified a previously written WAPIMC code originally adapted for 4He on graphite, by including the second 3He-component. To describe the fermions, a temperature-dependent statistical potential has been used. This has proven very effective. The WAPIMC calculations have been conducted in the millikelvin temperature regime. However, because of the heavy computations involved, only 30, 40 and 50 mK have been considered for the time being. The pair correlations, Matsubara Green's function, structure factor, and density profiles have been explored at these temperatures.

  5. Exponential Gaussian approach for spectral modelling: The EGO algorithm II. Band asymmetry

    NASA Astrophysics Data System (ADS)

    Pompilio, Loredana; Pedrazzi, Giuseppe; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.

    2010-08-01

    The present investigation is complementary to a previous paper, which introduced the EGO approach to spectral modelling of reflectance measurements acquired in the visible and near-IR range (Pompilio, L., Pedrazzi, G., Sgavetti, M., Cloutis, E.A., Craig, M.A., Roush, T.L. [2009]. Icarus, 201 (2), 781-794). Here, we show the performance of the EGO model in attempting to account for temperature-induced variations in spectra, specifically band asymmetry. Our main goals are: (1) to recognize and model thermally induced band asymmetry in reflectance spectra; (2) to develop a basic approach for decomposition of remotely acquired spectra from planetary surfaces, where effects due to temperature variations are most prevalent; (3) to reduce the uncertainty related to quantitative estimation of band position and depth when band asymmetry occurs. In order to accomplish these objectives, we tested the EGO algorithm on a number of measurements acquired on powdered pyroxenes at sample temperatures ranging from 80 up to 400 K. The main results arising from this study are: (1) the EGO model is able to numerically account for the occurrence of band asymmetry in reflectance spectra; (2) the returned set of EGO parameters can suggest the influence of some additional effect other than the electronic transition responsible for the absorption feature; (3) the returned set of EGO parameters can help in estimating the surface temperature of a planetary body; (4) the occurrence of absorptions which are less affected by temperature variations can be mapped for minerals and thus used for compositional estimates. Further work is still required in order to analyze the behaviour of the EGO algorithm with respect to temperature-induced band asymmetry using powdered pyroxenes spanning a range of compositions and grain sizes and more complex band shapes.
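
    An illustrative fit of an asymmetric, exponentially widened Gaussian to a synthetic absorption band (the profile below, whose width varies across the band, is a stand-in; the published EGO parameterization differs in detail):

        import numpy as np
        from scipy.optimize import curve_fit

        # Hedged sketch: fit an asymmetric Gaussian to an absorption band.
        # The 'asym' parameter makes the width vary across the band; a
        # nonzero fitted value signals band asymmetry.
        def asym_gauss(x, depth, center, width, asym):
            s = width * np.exp(asym * (x - center))   # position-dependent width
            return -depth * np.exp(-((x - center) ** 2) / (2.0 * s ** 2))

        wl = np.linspace(800, 1300, 200)              # wavelength grid (nm)
        true = (0.4, 1020.0, 60.0, 0.002)             # synthetic asymmetric band
        band = (asym_gauss(wl, *true)
                + 0.005 * np.random.default_rng(2).normal(size=wl.size))

        popt, _ = curve_fit(asym_gauss, wl, band, p0=(0.3, 1000.0, 50.0, 0.0))
        print(popt)   # ~ (0.4, 1020, 60, 0.002)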

  6. Algorithms and novel applications based on the isokinetic ensemble. II. Ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    Minary, Peter; Martyna, Glenn J.; Tuckerman, Mark E.

    2003-02-01

    In this paper (Paper II), the isokinetic dynamics scheme described in Paper I is combined with the plane-wave based Car-Parrinello (CP) ab initio molecular dynamics (MD) method [R. Car and M. Parrinello, Phys. Rev. Lett. 55, 2471 (1985)] to enable the efficient study of chemical reactions and metallic systems. The Car-Parrinello approach employs "on the fly" electronic structure calculations as a means of generating accurate internuclear forces for use in a molecular dynamics simulation. This is accomplished by the introduction of an extended Lagrangian that contains the electronic orbitals as fictitious dynamical variables (often expressed directly in terms of the expansion coefficients of the orbitals in a particular basis set). Thus, rather than quench the expansion coefficients to obtain the ground state energy and nuclear forces at every time step, the orbitals are "propagated" under conditions that allow them to fluctuate rapidly around their global minimum and, hence, generate an accurate approximation to the nuclear forces as the simulation proceeds. Indeed, the CP technique requires the dynamics of the orbitals to be both fast compared to the nuclear degrees of freedom while keeping the fictitious kinetic energy that allows them to be propagated dynamically as small as possible. While these conditions can be easy to achieve in many types of systems, in metals and highly exothermic chemical reactions difficulties arise. (Note, the CP dynamics of metals is incorrect because the nuclear motion does not occur on the ground state electronic surface but it can, nonetheless, provide useful information.) In order to alleviate these difficulties the isokinetic methods of Paper I are applied to derive isokinetic CP equations of motion. The efficacy of the new isokinetic CPMD method is demonstrated on model and realistic systems. The latter include, metallic systems, liquid aluminum, a small silicon sample, the 2×1 reconstruction of the silicon 100 surface, and the

  7. Optimal operational strategies for a day-ahead electricity market in the presence of market power using multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Rodrigo, Deepal

    2007-12-01

    This dissertation introduces a novel approach for optimally operating a day-ahead electricity market not only by economically dispatching the generation resources but also by minimizing the influences of market manipulation attempts by the individual generator-owning companies while ensuring that the power system constraints are not violated. Since economic operation of the market conflicts with the individual profit maximization tactics such as market manipulation by generator-owning companies, a methodology that is capable of simultaneously optimizing these two competing objectives has to be selected. Although numerous previous studies have been undertaken on the economic operation of day-ahead markets and other independent studies have been conducted on the mitigation of market power, the operation of a day-ahead electricity market considering these two conflicting objectives simultaneously has not been undertaken previously. These facts provided the incentive and the novelty for this study. A literature survey revealed that many of the traditional solution algorithms convert multi-objective functions into either a single-objective function using weighting schemas or undertake optimization of one function at a time. Hence, these approaches do not truly optimize the multi-objectives concurrently. Due to these inherent deficiencies of the traditional algorithms, the use of alternative non-traditional solution algorithms for such problems has become popular and widely used. Of these, multi-objective evolutionary algorithms (MOEA) have received wide acceptance due to their solution quality and robustness. In the present research, three distinct algorithms were considered: a non-dominated sorting genetic algorithm II (NSGA II), a multi-objective tabu search algorithm (MOTS) and a hybrid of multi-objective tabu search and genetic algorithm (MOTS/GA). The accuracy and quality of the results from these algorithms for applications similar to the problem investigated here

  8. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model), and initial conditions in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain, with a good level of confidence, is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational resources (compared with the presently used legacy codes, which were developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are "interesting" (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  9. The Sloan Digital Sky Survey-II Supernova Survey:Search Algorithm and Follow-up Observations

    SciTech Connect

    Sako, Masao; Bassett, Bruce; Becker, Andrew; Cinabro, David; DeJongh, Don Frederic; Depoy, D.L.; Doi, Mamoru; Garnavich, Peter M.; Hogan, Craig J.; Holtzman, Jon; Jha, Saurabh; Konishi, Kohki; Lampeitl, Hubert; Marriner, John; Miknaitis, Gajus; Nichol, Robert C.; Prieto, Jose Luis; Richmond, Michael W.; Schneider, Donald P.; Smith, Mathew; SubbaRao, Mark; and others

    2007-09-14

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg² region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  10. Multi-objective optimization of discrete time-cost tradeoff problem in project networks using non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shahriari, Mohammadreza

    2016-03-01

    The time-cost tradeoff problem is one of the most important and applicable problems in project scheduling. Many factors force managers to crash the schedule: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the project finishing time, and project managers intend to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are affected, and the time value of money comes into play: when the starting activities in a project are crashed, the extra investment is tied up until the end date of the project, whereas when the final activities are crashed, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model that balances compressing the project time against delaying activities, providing a suitable tool for decision makers constrained by the available facilities and the project due dates. The model is also brought closer to real-world conditions by considering a nonlinear objective function and the time value of money. The presented problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.
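
    A small worked example of the time-value-of-money effect on crashing decisions (the rate, duration, and cost below are illustrative, not the paper's):

        import math

        # Hedged sketch: value at project completion of a crashing cost C paid
        # at an activity's start, assuming continuous compounding at rate r.
        def crash_cost_at_completion(C, start, T, r=0.10):
            """C spent at 'start' (years), project finishes at time T."""
            return C * math.exp(r * (T - start))

        T = 2.0   # project duration in years
        for name, start in [("early activity", 0.0), ("late activity", 1.8)]:
            print(name, round(crash_cost_at_completion(10_000, start, T), 2))
        # early activity 12214.03 > late activity 10202.01: the same nominal
        # crash expenditure costs more when it is committed earlier.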

  11. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost against life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of the optimization reveal that a good reduction in both operating cost and environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly.

  12. FAST-PT II: an algorithm to calculate convolution integrals of general tensor quantities in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    Fang, Xiao; Blazek, Jonathan A.; McEwen, Joseph E.; Hirata, Christopher M.

    2017-02-01

    Cosmological perturbation theory is a powerful tool to predict the statistics of large-scale structure in the weakly non-linear regime, but even at 1-loop order it results in computationally expensive mode-coupling integrals. Here we present a fast algorithm for computing 1-loop power spectra of quantities that depend on the observer's orientation, thereby generalizing the FAST-PT framework (McEwen et al., 2016) that was originally developed for scalars such as the matter density. This algorithm works for an arbitrary input power spectrum and substantially reduces the time required for numerical evaluation. We apply the algorithm to four examples: intrinsic alignments of galaxies in the tidal torque model; the Ostriker-Vishniac effect; the secondary CMB polarization due to baryon flows; and the 1-loop matter power spectrum in redshift space. Code implementing this algorithm and these applications is publicly available at https://github.com/JoeMcEwen/FAST-PT.

  13. Development of a same-side kaon tagging algorithm of B^0_s decays for measuring delta m_s at CDF II

    SciTech Connect

    Menzemer, Stephanie

    2006-06-01

    The authors developed a Same-Side Kaon Tagging algorithm to determine the production flavor of B_s^0 mesons. Until the B_s^0 mixing frequency is clearly observed, the performance of the Same-Side Kaon Tagging algorithm cannot be measured on data but has to be determined on Monte Carlo simulation. Data and Monte Carlo agreement has been evaluated for both the B_s^0 and the high-statistics B^+ and B^0 modes. Extensive systematic studies were performed to quantify potential discrepancies between data and Monte Carlo. The final optimized tagging algorithm exploits the particle identification capability of the CDF II detector. It achieves a tagging performance of εD² = 4.0 (+0.9/-1.2) on the B_s^0 → D_s^- π^+ sample. The Same-Side Kaon Tagging algorithm presented here has been applied to the ongoing B_s^0 mixing analysis, and has provided a factor of 3-4 increase in the effective statistical size of the sample. This improvement results in the first direct measurement of the B_s^0 mixing frequency.
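
    For context, the figure of merit εD² combines the tagging efficiency ε with the dilution D = 1 - 2w, where w is the mistag probability; a quick illustrative calculation (made-up numbers, not CDF's):

        # Effective tagging power epsilon*D^2: epsilon = fraction of events
        # tagged, D = 1 - 2*w with mistag probability w. Illustrative only.
        def tagging_power(eps, w):
            D = 1.0 - 2.0 * w           # dilution
            return eps * D ** 2

        eps, w = 0.50, 0.36             # tag 50% of events, mistag 36% of those
        print(f"epsilon*D^2 = {100 * tagging_power(eps, w):.1f}%")   # 3.9%
        # An epsilon*D^2 of ~4% means N tagged events carry the statistical
        # weight of 0.04*N perfectly tagged events, hence the phrase
        # 'effective statistical size' of the sample.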

  14. Dimension reduction of decision variables for multireservoir operation: A spectral optimization model

    NASA Astrophysics Data System (ADS)

    Chen, Duan; Leon, Arturo S.; Gibson, Nathan L.; Hosseini, Parnian

    2016-01-01

    Optimizing the operation of a multireservoir system is challenging due to the high dimension of the decision variables, which leads to a large and complex search space. A spectral optimization model (SOM), which transforms the decision variables from the time domain to the frequency domain, is proposed to reduce the dimensionality. The SOM couples a spectral dimensionality-reduction method called Karhunen-Loeve (KL) expansion within the routine of the Non-dominated Sorting Genetic Algorithm (NSGA-II). The KL expansion is used to represent the decision variables as a series of deterministic orthogonal functions with undetermined coefficients. The KL expansion can be truncated to a predetermined number of significant terms and, consequently, fewer coefficients. During optimization, operators of the NSGA-II (e.g., crossover) are applied only to the coefficients of the KL expansion rather than the large number of decision variables, significantly reducing the search space. The SOM is applied to the short-term operation of a 10-reservoir system on the Columbia River in the United States. Two scenarios are considered herein, the first with 140 decision variables and the second with 3360 decision variables. The hypervolume index is used to evaluate the optimization performance in terms of convergence and diversity. The evaluation of optimization performance is conducted for both the conventional optimization model (i.e., NSGA-II without KL) and the SOM with different numbers of KL terms. The results show that the number of decision variables can be greatly reduced in the SOM to achieve a similar or better performance compared to the conventional optimization model. For the scenario with 140 decision variables, the optimal performance of the SOM is found with six KL terms. For the scenario with 3360 decision variables, the optimal performance of the SOM is obtained with 11 KL terms.
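
    An illustrative sketch of the dimension-reduction idea (not the authors' code): build a truncated Karhunen-Loeve (PCA) basis from an ensemble of plausible schedules, so the optimizer searches over a handful of coefficients instead of every hourly decision:

        import numpy as np

        # Hedged sketch: expand candidate decision trajectories in a truncated
        # KL (PCA) basis, so an optimizer acts on k coefficients, not n hours.
        rng = np.random.default_rng(3)
        n, k = 168, 6                            # hourly decisions for a week -> 6 terms

        # Build a KL basis from an ensemble of plausible smooth schedules.
        t = np.linspace(0, 1, n)
        ensemble = np.array([np.sin(2 * np.pi * (f * t + rng.uniform()))
                             for f in rng.uniform(0.5, 3.0, size=200)])
        mean = ensemble.mean(axis=0)
        U, s, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
        basis = Vt[:k]                           # first k KL modes, shape (k, n)

        def decode(coeffs):
            """Map k KL coefficients back to a full n-step trajectory."""
            return mean + coeffs @ basis

        # NSGA-II operators (e.g., crossover) would act on 'coeffs' (length 6)
        # while objectives are evaluated on decode(coeffs) (length 168).
        print(decode(rng.normal(size=k)).shape)  # (168,)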

  15. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  16. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    SciTech Connect

    Stankovski, Z.

    1995-12-31

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is spent in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message-passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128-processor T3D computer, a 16-processor IBM SPI, and a network of workstations, using the public domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SPI. Because of the heterogeneity of the workstation network, the author did not expect high performance from that architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) on a massively parallel computer using several hundred processors.

  17. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  18. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  20. Contaminant detection on poultry carcasses using hyperspectral data: Part II. Algorithms for selection of sets of ratio features

    NASA Astrophysics Data System (ADS)

    Nakariyakul, Songyot; Casasent, David P.

    2007-09-01

    We consider new methods to select useful sets of ratio features in hyperspectral data to detect contaminant regions on chicken carcasses using data provided by ARS (Athens, GA). A ratio feature is the ratio of the response at each pixel for two different wavebands. Ratio features perform a type of normalization and can thus help reduce false alarms when a good normalization algorithm is not available; thus, they are of interest. We present a new algorithm for the general problem of such feature selection in high-dimensional data. The four contaminant types of interest are three types of feces from different gastrointestinal regions (duodenum, ceca, and colon) and ingesta (undigested food) from the gizzard. Selecting the best two sets of ratio features from this 492-band HS data requires an exhaustive search of more than seven billion combinations of two sets of ratio features, which is computationally prohibitive. Thus, we propose a new fast ratio feature selection algorithm that requires evaluating far fewer sets of ratio features and is capable of giving quasi-optimal or optimal sets of ratio features. This feature selection method has not been previously presented. It is shown to offer promise for an excellent detection rate and a low false alarm rate for this application. Our tests use data with different feed types and different contaminant types.
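
    The core objects here, ratio features, are easy to state in code. The sketch below builds all pairwise band ratios on synthetic data and scores each with a Fisher-like class-separation measure; the single-feature search is only a stand-in for the paper's fast quasi-optimal set-selection algorithm, whose details the abstract does not give.

    # Ratio features plus a simple best-feature search on synthetic data.
    from itertools import combinations

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_bands = 500, 20          # toy stand-in for the 492-band data
    X = rng.random((n_pixels, n_bands)) + 0.1
    y = rng.integers(0, 2, n_pixels)     # 1 = contaminant pixel (synthetic)

    def separation(feature):
        """Fisher-like separation of one ratio feature between classes."""
        a, b = feature[y == 1], feature[y == 0]
        return abs(a.mean() - b.mean()) / (a.std() + b.std() + 1e-12)

    pairs = list(combinations(range(n_bands), 2))
    ratios = {(i, j): X[:, i] / X[:, j] for i, j in pairs}
    best = max(pairs, key=lambda p: separation(ratios[p]))
    print("best single ratio feature: bands", best)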

  1. Diabetes Risk Factors, Diabetes Risk Algorithms, and the Prediction of Future Frailty: The Whitehall II Prospective Cohort Study

    PubMed Central

    Bouillon, Kim; Kivimäki, Mika; Hamer, Mark; Shipley, Martin J.; Akbaraly, Tasnime N.; Tabak, Adam; Singh-Manoux, Archana; Batty, G. David

    2013-01-01

    Objective: To examine whether established diabetes risk factors and diabetes risk algorithms are associated with future frailty. Design: Prospective cohort study. Risk algorithms at baseline (1997–1999) were the Framingham Offspring, Cambridge, and Finnish diabetes risk scores. Setting: Civil service departments in London, United Kingdom. Participants: There were 2707 participants (72% men) aged 45 to 69 years at baseline assessment and free of diabetes. Measurements: Risk factors (age, sex, family history of diabetes, body mass index, waist circumference, systolic and diastolic blood pressure, antihypertensive and corticosteroid treatments, history of high blood glucose, smoking status, physical activity, consumption of fruits and vegetables, fasting glucose, HDL-cholesterol, and triglycerides) were used to construct the risk algorithms. Frailty, assessed during a resurvey in 2007–2009, was denoted by the presence of 3 or more of the following indicators: self-reported exhaustion, low physical activity, slow walking speed, low grip strength, and weight loss; “prefrailty” was defined as having 2 or fewer of these indicators. Results: After a mean follow-up of 10.5 years, 2.8% of the sample was classified as frail and 37.5% as prefrail. Increased age, being female, stopping smoking, low physical activity, and not having a daily consumption of fruits and vegetables were each associated with frailty or prefrailty. The Cambridge and Finnish diabetes risk scores were associated with frailty/prefrailty with odds ratios per 1 SD increase (disadvantage) in score of 1.18 (95% confidence interval: 1.09–1.27) and 1.27 (1.17–1.37), respectively. Conclusion: Selected diabetes risk factors and risk scores are associated with subsequent frailty. Risk scores may have utility for frailty prediction in clinical practice. PMID:24103860

  2. ZEUS-2D: A Radiation Magnetohydrodynamics Code for Astrophysical Flows in Two Space Dimensions. II. The Magnetohydrodynamic Algorithms and Tests

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
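
    The divergence-free property of constrained transport can be checked in a few lines: with face-centred field components updated from corner EMFs, the per-cell divergence change telescopes to zero. A minimal 2D sketch with synthetic fields, assuming a uniform staggered grid:

    # Why CT preserves div(B): face fields updated from corner EMFs change
    # the per-cell divergence by an exactly cancelling telescoping sum.
    import numpy as np

    nx = ny = 32
    dx = dy = dt = 1.0
    rng = np.random.default_rng(1)

    bx = rng.standard_normal((nx + 1, ny))       # B_x on x-faces
    by = rng.standard_normal((nx, ny + 1))       # B_y on y-faces
    ez = rng.standard_normal((nx + 1, ny + 1))   # EMF at cell corners

    def divergence(bx, by):
        return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

    div_before = divergence(bx, by)
    bx -= dt * (ez[:, 1:] - ez[:, :-1]) / dy     # curl(E) update, x-faces
    by += dt * (ez[1:, :] - ez[:-1, :]) / dx     # curl(E) update, y-faces

    # Change is at machine precision regardless of the EMF values.
    print("max change in div(B):", np.abs(divergence(bx, by) - div_before).max())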

  3. Hardware Demonstrator of a Level-1 Track Finding Algorithm with FPGAs for the Phase II CMS Experiment

    NASA Astrophysics Data System (ADS)

    Cieri, D.; CMS Collaboration

    2016-10-01

    At the HL-LHC, proton bunches collide every 25 ns, producing an average of 140 pp interactions per bunch crossing. To operate in such an environment, the CMS experiment will need a Level-1 (L1) hardware trigger able to identify interesting events within a latency of 12.5 μs. This novel L1 trigger will make use of data coming from the silicon tracker to constrain the trigger rate. The goal of this new track trigger is to build L1 tracks from the tracker information. The architecture that will be implemented in the future to process tracker data is still under discussion. One possibility is to adopt a system entirely based on FPGA electronics. The proposed track finding algorithm is based on the Hough transform method. The algorithm has been tested using simulated pp collision data and is currently being demonstrated in hardware, using the “MP7”, a μTCA board with a powerful FPGA capable of handling data rates approaching 1 Tb/s. Two different implementations of the Hough transform technique are currently under investigation: one utilizes a systolic array to represent the Hough space, while the other exploits a pipelined approach.
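
    The accumulate-and-find-peak idea behind Hough-transform track finding is easy to illustrate. The toy sketch below uses the classic rho-theta parameterization on synthetic hits; the CMS demonstrator bins tracks in a different parameter space and in FPGA pipelines, so this is only the algorithmic skeleton.

    # Toy rho-theta Hough transform: each hit votes for all lines through it;
    # the accumulator peak identifies the shared track candidate.
    import numpy as np

    rng = np.random.default_rng(2)
    t = rng.random(30)
    hits = np.c_[t, 0.5 * t + 0.2] + 0.005 * rng.standard_normal((30, 2))

    thetas = np.linspace(0, np.pi, 180)
    rhos = np.linspace(-2, 2, 200)
    acc = np.zeros((len(thetas), len(rhos)), dtype=int)

    for x, y in hits:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # rho for every theta
        acc[np.arange(len(thetas)), np.digitize(r, rhos) - 1] += 1

    i, j = np.unravel_index(acc.argmax(), acc.shape)
    print(f"peak: theta={thetas[i]:.2f} rad, rho={rhos[j]:.2f}, votes={acc[i, j]}")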

  4. Image processing algorithms and techniques II; Proceedings of the Meeting, San Jose, CA, Feb. 25-Mar. 1, 1991

    NASA Astrophysics Data System (ADS)

    Civanlar, Mehmet R.; Mitra, Sanjit K.; Moorhead, Robert J., II

    Recent developments and novel ideas in electronic imaging science and technology are examined. Particular attention is given to color imagery, image processing and filtering techniques, image/image sequence restoration and reconstruction, image analysis and pattern recognition, image coding, and parallel architectures for image processing. Consideration is also given to color correction using principal components, optimum intensity-dependent spread filters in image processing, iterative algorithms with fast-convergence rates in nonlinear image restoration, optimal regularization parameter estimation for image restoration, simultaneous object estimation and image reconstruction in a Bayesian setting, automatic recognition of bones in X-ray bone densitometry, a novel nonlinear filter for image enhancement, image compression for digital video tape recording with high-speed playback capability, image-coding based on two-channel conjugate vector quantization, and artificial neural network models for image understanding.

  5. Customized evolutionary optimization procedure for generating minimum weight compliant mechanisms

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak; Deb, Kalyanmoy; Kishore, N. N.

    2014-01-01

    In this article, a customized evolutionary optimization procedure is developed for generating minimum weight compliant mechanisms. A previously-suggested concept of multi-objectivization in which a helper objective is introduced in addition to the primary objective of the original single-objective optimization problem (SOOP) is used here. The helper objective is chosen in a way such that it is in conflict with the primary objective, thereby causing an evolutionary multi-objective optimization algorithm to maintain diversity in its population from one generation to another. The elitist non-dominated sorting genetic algorithm (NSGA-II) is customized with a domain-specific initialization strategy, a domain-specific crossover operator, and a domain-specific solution repairing strategy. To make the search process computationally tractable, the proposed methodology is made suitable for parallel computing. A local search methodology is applied on the evolved non-dominated solutions found by the above-mentioned modified NSGA-II to refine the solutions further. Two case studies for tracing curvilinear and straight-line paths are performed. Results demonstrate that solutions having smaller weight than the reference design solution obtained by SOOP are found by the proposed procedure. Interesting facts and observations brought out by the study are also narrated and conclusions of the study are made.

  6. Autonomous robot navigation based on the evolutionary multi-objective optimization of potential fields

    NASA Astrophysics Data System (ADS)

    Herrera Ortiz, Juan Arturo; Rodríguez-Vázquez, Katya; Padilla Castañeda, Miguel A.; Arámbula Cosío, Fernando

    2013-01-01

    This article presents the application of a new multi-objective evolutionary algorithm called RankMOEA to determine the optimal parameters of an artificial potential field for autonomous navigation of a mobile robot. Autonomous robot navigation is posed as a multi-objective optimization problem with three objectives: minimization of the distance to the goal, maximization of the distance between the robot and the nearest obstacle, and maximization of the distance travelled on each field configuration. Two decision makers were implemented using objective reduction and discrimination in performance trade-off. The performance of RankMOEA is compared with NSGA-II and SPEA2, including both decision makers. Simulation experiments using three different obstacle configurations and 10 different routes were performed using the proposed methodology. RankMOEA clearly outperformed NSGA-II and SPEA2. The robustness of this approach was evaluated with the simulation of different sensor masks and sensor noise. The scheme reported was also combined with the wavefront-propagation algorithm for global path planning.

  7. An Implicit Energy-Conservative 2D Fokker-Planck Algorithm. II. Jacobian-Free Newton-Krylov Solver

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Barnes, D. C.; Knoll, D. A.; Miley, G. H.

    2000-01-01

    Energy-conservative implicit integration schemes for the Fokker-Planck transport equation in multidimensional geometries require inverting a dense, non-symmetric matrix (Jacobian), which is very expensive to store and solve using standard solvers. However, these limitations can be overcome with Newton-Krylov iterative techniques, since they can be implemented Jacobian-free (the Jacobian matrix from Newton's algorithm is never formed nor stored to proceed with the iteration), and their convergence can be accelerated by preconditioning the original problem. In this document, the efficient numerical implementation of an implicit energy-conservative scheme for multidimensional Fokker-Planck problems using multigrid-preconditioned Krylov methods is discussed. Results show that multigrid preconditioning is very effective in speeding convergence and decreasing CPU requirements, particularly in fine meshes. The solver is demonstrated on grids up to 128×128 points in a 2D cylindrical velocity space (vr, vp) with implicit time steps of the order of the collisional time scale of the problem, τ. The method preserves particles exactly, and energy conservation is improved over alternative approaches, particularly in coarse meshes. Typical errors in the total energy over a time period of 10τ remain below a percent.
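
    The Jacobian-free trick is that a Krylov solver such as GMRES only ever needs Jacobian-vector products, which a finite difference of the residual supplies. A minimal sketch on a toy nonlinear system, assuming SciPy and omitting the multigrid preconditioning that the paper shows to be essential on fine meshes:

    # Jacobian-free Newton-Krylov: J is never formed; GMRES sees only
    # J@v, approximated by a finite difference of the residual.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        # Toy nonlinear system: discrete Laplacian, cubic sink, unit source.
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1]                 # fixed boundary values
        r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] - u[1:-1] ** 3 + 1.0
        return r

    u = np.zeros(50)
    for it in range(20):
        F = residual(u)
        if np.linalg.norm(F) < 1e-10:
            break
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v, u=u, F=F:
                               (residual(u + 1e-7 * v) - F) / 1e-7)
        du, info = gmres(J, -F)                   # Krylov solve, matrix-free
        u = u + du
    print("Newton iterations:", it, "residual:", np.linalg.norm(residual(u)))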

  8. Vibrational algorithms for quantitative crystallographic analyses of hydroxyapatite-based biomaterials: II, application to decayed human teeth.

    PubMed

    Adachi, Tetsuya; Pezzotti, Giuseppe; Yamamoto, Toshiro; Ichioka, Hiroaki; Boffelli, Marco; Zhu, Wenliang; Kanamura, Narisato

    2015-05-01

    A systematic investigation, based on highly spectrally resolved Raman spectroscopy, was undertaken to assess the efficacy of vibrational assessments in locating chemical and crystallographic fingerprints for the characterization of dental caries and the early detection of non-cavitated carious lesions. Raman results published by other authors have indicated possible approaches for this method. However, they conspicuously lacked physical insight at the molecular scale and, thus, the rigor necessary to prove the efficacy of this spectroscopy method. After solving basic physical challenges in a companion paper, we apply those solutions here in the form of newly developed Raman algorithms for practical dental research. Relevant differences in mineral crystallite (average) orientation and texture distribution were revealed for diseased enamel at different stages compared with healthy mineralized enamel. Clear spectroscopic features could be directly translated into a rigorous and quantitative classification of the crystallographic and chemical characteristics of diseased enamel structures. The Raman procedure enabled us to trace back otherwise invisible characteristics in early caries, in the translucent zone (i.e., the advancing front of the disease) and in the body of the lesion of cavitated caries.

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. A retrieval algorithm to evaluate the Photosystem I and Photosystem II spectral contributions to leaf chlorophyll fluorescence at physiological temperatures.

    PubMed

    Palombi, Lorenzo; Cecchi, Giovanna; Lognoli, David; Raimondi, Valentina; Toci, Guido; Agati, Giovanni

    2011-09-01

    A new computational procedure to resolve the contributions of Photosystem I (PSI) and Photosystem II (PSII) to the leaf chlorophyll fluorescence emission spectra at room temperature has been developed. It is based on Principal Component Analysis (PCA) of the leaf fluorescence emission spectra measured during the OI photochemical phase of the fluorescence induction kinetics. During this phase, we can assume that only two spectral components are present, one of which is constant (PSI) and the other variable in intensity (PSII). Application of the PCA method to the measured fluorescence emission spectra of Ficus benjamina L. shows that the temporal variation in the spectra can be ascribed to a single spectral component (the first principal component extracted by PCA), which can be considered a good approximation of the PSII fluorescence emission spectrum. The PSI fluorescence emission spectrum was deduced from the difference between the measured spectra and the first principal component. A single-band spectrum for the PSI fluorescence emission, peaked at about 735 nm, and a 2-band spectrum with maxima at 685 and 740 nm for the PSII were obtained. A linear combination of only these two spectral shapes produced a good fit for any measured emission spectrum of the leaf under investigation and can be used to obtain the fluorescence emission contributions of the photosystems under different conditions. With the use of our approach, the dynamics of energy distribution between the two photosystems, such as state transitions, can be monitored in vivo, directly at physiological temperatures. Separation of the PSI and PSII emission components can improve the understanding of the fluorescence signal changes induced by environmental factors or stress conditions on plants.
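
    The decomposition described, S(t) = PSI + c(t)·PSII during the OI phase, reduces to a rank-one PCA problem. A sketch on synthetic spectra follows; the band shapes and timing are invented for illustration.

    # PCA separation of a constant (PSI) and a variable (PSII) component.
    import numpy as np

    wl = np.linspace(650, 800, 151)                    # wavelengths, nm
    psii_true = (np.exp(-((wl - 685) / 10) ** 2)
                 + 0.8 * np.exp(-((wl - 740) / 15) ** 2))   # 2-band PSII
    psi_true = 0.6 * np.exp(-((wl - 735) / 12) ** 2)        # 1-band PSI

    c = np.linspace(0.2, 1.0, 40)                      # rising PSII yield
    spectra = psi_true + np.outer(c, psii_true)        # rows = spectra in time

    # First principal component of the mean-free stack ~ the PSII shape.
    _, _, vt = np.linalg.svd(spectra - spectra.mean(axis=0), full_matrices=False)
    psii_est = vt[0] * np.sign(vt[0].sum())

    # Residual after removing the PSII component ~ the constant PSI part
    # (up to the spectral overlap of the two shapes).
    coef = spectra[-1] @ psii_est / (psii_est @ psii_est)
    psi_est = spectra[-1] - coef * psii_est
    print("corr(PSII estimate, true):",
          round(np.corrcoef(psii_est, psii_true)[0, 1], 4))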

  11. Optimization of Process Parameters of Hybrid Laser-Arc Welding onto 316L Using Ensemble of Metamodels

    NASA Astrophysics Data System (ADS)

    Zhou, Qi; Jiang, Ping; Shao, Xinyu; Gao, Zhongmei; Cao, Longchao; Yue, Chen; Li, Xiongbin

    2016-08-01

    Hybrid laser-arc welding (LAW) provides an effective way to overcome problems commonly encountered during either laser or arc welding such as brittle phase formation, cracking, and porosity. The process parameters of LAW have significant effects on the bead profile and hence the quality of the joint. This paper proposes an optimization methodology combining the non-dominated sorting genetic algorithm (NSGA-II) and an ensemble of metamodels (EMs) to address multi-objective process parameter optimization in LAW onto 316L. Firstly, Taguchi experimental design is adopted to generate the experimental samples. Secondly, the relationships between process parameters (i.e., laser power (P), welding current (A), distance between laser and arc (D), and welding speed (V)) and the bead geometries are fitted using EMs. The comparative results show that the EMs can take advantage of the prediction ability of each stand-alone metamodel and thus decrease the risk of adopting inappropriate metamodels. Then, the NSGA-II is used to facilitate design space exploration. Besides, the main effects and contribution rates of process parameters on bead profile are analyzed. Eventually, verification experiments of the obtained optima are carried out and compared with the un-optimized weld seam for bead geometries, weld appearances, and welding defects. Results illustrate that the proposed hybrid approach exhibits great capability of improving welding quality in LAW.

  12. A new methodology for surcharge risk management in urban areas (case study: Gonbad-e-Kavus city).

    PubMed

    Hooshyaripor, Farhad; Yazdi, Jafar

    2017-02-01

    This research presents a simulation-optimization model for urban flood mitigation integrating the Non-dominated Sorting Genetic Algorithm (NSGA-II) with the Storm Water Management Model (SWMM) hydraulic model under a curve number-based hydrologic model of low impact development technologies in Gonbad-e-Kavus, a small city in the north of Iran. In the developed model, the best performance of the system relies on the optimal layout and capacity of retention ponds over the study area so as to reduce surcharge from the manholes under a set of storm event loads, while the available investment plays a restricting role. Thus, there is a multi-objective optimization problem with two conflicting objectives, solved successfully by NSGA-II to find a set of optimal solutions known as the Pareto front. In order to analyze the results, a new factor, the investment priority index (IPI), is defined, which reflects the risk of surcharging over the network and the priority of mitigation actions. The IPI is calculated using the probability of pond selection for candidate locations and the average depth of the ponds in all Pareto front solutions. The IPI can help decision makers arrange a long-term progressive plan that prioritizes high-risk areas once an optimal solution has been selected.
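
    One plausible reading of the IPI defined above, the selection frequency of each candidate site times the average pond depth across the Pareto solutions that select it, is sketched below on synthetic data; the array names and sizes are hypothetical.

    # Investment priority index (IPI) from a set of Pareto-front solutions.
    import numpy as np

    rng = np.random.default_rng(3)
    n_solutions, n_sites = 60, 12            # hypothetical Pareto set size
    # depth[s, k] = pond depth at site k in solution s (0 means "no pond")
    depth = rng.random((n_solutions, n_sites)) \
        * (rng.random((n_solutions, n_sites)) < 0.4)

    selected = depth > 0
    p_select = selected.mean(axis=0)         # probability of pond selection
    mean_depth = np.where(selected.any(axis=0),
                          depth.sum(axis=0) / np.maximum(selected.sum(axis=0), 1),
                          0.0)               # average depth where selected
    ipi = p_select * mean_depth
    print("highest-priority sites:", np.argsort(ipi)[::-1][:3])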

  13. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multi-objective optimization problem is formulated. The non-dominated sorting genetic algorithm-II (NSGA-II) is used in predicting the Pareto-optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto-optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
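
    The regression-then-NSGA-II step can be reproduced with off-the-shelf tooling. A sketch follows, assuming the third-party pymoo library (not used by the authors) and two toy quadratic surrogates standing in for the fitted BSFC/BTE regression models.

    # Fitted regression models used as NSGA-II objectives (pymoo assumed).
    import numpy as np
    from pymoo.algorithms.moo.nsga2 import NSGA2
    from pymoo.core.problem import Problem
    from pymoo.optimize import minimize

    class EngineSurrogates(Problem):
        def __init__(self):
            super().__init__(n_var=2, n_obj=2, xl=np.zeros(2), xu=np.ones(2))

        def _evaluate(self, x, out, *args, **kwargs):
            # Toy quadratic surrogates; real models come from regression.
            f1 = (x[:, 0] - 0.3) ** 2 + x[:, 1] ** 2           # e.g. BSFC
            f2 = (x[:, 0] - 1.0) ** 2 + (x[:, 1] - 0.7) ** 2   # e.g. -BTE
            out["F"] = np.column_stack([f1, f2])

    res = minimize(EngineSurrogates(), NSGA2(pop_size=50), ("n_gen", 100), seed=1)
    print(res.F.shape[0], "Pareto-optimal trade-offs found")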

  14. Novel system identification method and multi-objective-optimal multivariable disturbance observer for electric wheelchair.

    PubMed

    Nasser Saadatzi, Mohammad; Poshtan, Javad; Sadegh Saadatzi, Mohammad; Tafazzoli, Faezeh

    2013-01-01

    An electric wheelchair (EW) is subject not only to diverse types of terrain and slopes but also to occupants of various weights, which causes the EW to suffer from highly perturbed dynamics. A precise multivariable dynamic model of the EW is obtained using Lagrange equations of motion, which models the effects of slopes as output-additive disturbances. A static pre-compensator is analytically devised which considerably decouples the EW's dynamics and also brings about a more accurate identification of the EW. The controller is designed with a disturbance-observer (DOB) two-degree-of-freedom architecture, which reduces sensitivity to the model uncertainties while enhancing rejection of the disturbances. Based on disturbance rejection, noise reduction, and robust stability of the control system, three fitness functions are presented by which the DOB is tuned using a multi-objective optimization (MOO) approach, namely the non-dominated sorting genetic algorithm-II (NSGA-II). Finally, experimental results show desirable performance and robust stability of the proposed algorithm.

  15. Multi Objective Optimization for Calibration and Efficient Uncertainty Analysis of Computationally Expensive Watershed Models

    NASA Astrophysics Data System (ADS)

    Akhtar, T.; Shoemaker, C. A.

    2011-12-01

    Assessing the sensitivity of calibration results to different calibration criteria can be done through multi-objective optimization that considers multiple calibration criteria. This analysis can be extended to uncertainty analysis by comparing the results of simulating the model with parameter sets from many points along a Pareto front. In this study we employ multi-objective optimization in order to understand which parameter values should be used for the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville Reservoir in upstate New York. The comprehensive analysis procedure encapsulates identification of suitable objectives, analysis of the trade-offs obtained through multi-objective optimization, and assessment of the uncertainty associated with those trade-offs. Examples of multiple criteria include (a) quality of the fit in different seasons, (b) quality of the fit for high flow events and for low flow events, and (c) quality of the fit for different constituents (e.g., water versus nutrients). Many distributed watershed models are computationally expensive and include a large number of parameters that are to be calibrated. Efficient optimization algorithms are hence needed to find good solutions to multi-criteria calibration problems in a feasible amount of time. We apply a new algorithm called Gap Optimized Multi-Objective Optimization using Response Surfaces (GOMORS) for efficient multi-criteria optimization of the Cannonsville SWAT watershed calibration problem. GOMORS is a stochastic optimization method which makes use of radial basis functions for approximation of the computationally expensive objectives. GOMORS performance is also compared against the other multi-objective algorithms ParEGO and NSGA-II. ParEGO is a kriging-based efficient multi-objective optimization algorithm, whereas NSGA-II is a well-known multi-objective evolutionary optimization algorithm. GOMORS is more efficient than both ParEGO and NSGA-II in providing
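
    GOMORS's central idea, replacing expensive objective evaluations with a radial basis function surrogate that is cheap to search, can be illustrated with SciPy's RBFInterpolator standing in for the paper's own RBF machinery; the objective and sample sizes below are synthetic.

    # RBF surrogate of an expensive objective, then a cheap surrogate search.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def expensive_objective(x):              # stand-in for a SWAT run
        return np.sin(3 * x[:, 0]) + (x[:, 1] - 0.5) ** 2

    rng = np.random.default_rng(4)
    X = rng.random((25, 2))                  # 25 "simulations"
    y = expensive_objective(X)

    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

    candidates = rng.random((10000, 2))      # cheap search on the surrogate
    best = candidates[surrogate(candidates).argmin()]
    print("surrogate minimiser:", best.round(3))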

  16. Search for New Quantum Algorithms

    DTIC Science & Technology

    2006-05-01

    Recoverable fragments of this report's front matter and table of contents: “Topological computing for beginners” (slide presentation; lecture notes for Chapter 9, Physics 219, Quantum Computation); II.A.8, “A QHS algorithm for Feynman integrals”; II.A.9, “Non-abelian QHS algorithms”. The surviving text notes that the idea is that NOT all environmentally entangling transformations are equally likely, in particular for spatially separated physical quantum systems.

  17. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines the use of linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of one given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on our target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics or its potential for reservoir monitoring.

  18. Thermal-economic multi-objective optimization of shell and tube heat exchanger using particle swarm optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Ghanei, A.; Assareh, E.; Biglari, M.; Ghanbarzadeh, A.; Noghrehabadi, A. R.

    2014-10-01

    Many studies of the shell and tube heat exchanger (STHE) have been performed by researchers, but the multi-objective particle swarm optimization (PSO) technique has never been used in such studies. This paper presents the application of thermal-economic multi-objective optimization of a STHE using PSO. For optimal design of a STHE, it was first thermally modeled using the effectiveness-number of transfer units (ε-NTU) method, while the Bell-Delaware procedure was applied to estimate its shell side heat transfer coefficient and pressure drop. The multi-objective PSO (MOPSO) method was applied to obtain the maximum effectiveness (heat recovery) and the minimum total cost as two objective functions. The results of the optimal designs were a set of multiple optimum solutions, called 'Pareto optimal solutions'. In order to show the accuracy of the algorithm, a comparison is made between the non-dominated sorting genetic algorithm (NSGA-II) and MOPSO, both developed for the same problem.
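
    The thermal side of the model rests on the effectiveness-NTU relation. For reference, the standard counter-flow closed form is sketched below; shell-and-tube correlations differ in detail but follow the same pattern.

    # Effectiveness-NTU relation, counter-flow case (textbook closed form).
    import numpy as np

    def effectiveness_counterflow(ntu, cr):
        """epsilon(NTU, Cr) with Cr = C_min / C_max."""
        if np.isclose(cr, 1.0):
            return ntu / (1.0 + ntu)       # balanced-stream limit
        e = np.exp(-ntu * (1.0 - cr))
        return (1.0 - e) / (1.0 - cr * e)

    for ntu in (0.5, 1.0, 2.0, 5.0):
        print(f"NTU={ntu}: eps={effectiveness_counterflow(ntu, cr=0.6):.3f}")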

  19. Development of closed-loop supply chain network in terms of corporate social responsibility.

    PubMed

    Pedram, Ali; Pedram, Payam; Yusoff, Nukman Bin; Sorooshian, Shahryar

    2017-01-01

    Due to the rise in awareness of environmental issues and the depletion of virgin resources, many firms have attempted to increase the sustainability of their activities. One efficient way to improve sustainability is the consideration of corporate social responsibility (CSR) in designing a closed loop supply chain (CLSC). This paper develops a mathematical model to increase corporate social responsibility in terms of job creation. Moreover, the model, in addition to increasing total CLSC profit, provides a range of strategic decision solutions for decision makers to select the best action plan for a CLSC. The proposed multi-objective mixed-integer linear programming (MILP) model was solved with the non-dominated sorting genetic algorithm II (NSGA-II). Fuzzy set theory was employed to select the best compromise solution from the Pareto-optimal solutions. A numerical example was used to validate the potential application of the proposed model. The results highlight the effect of CSR in the design of a CLSC.

  20. Multi-Disciplinary Design Optimization of Hypersonic Air-Breathing Vehicle

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Tang, Zhili; Sheng, Jianda

    2016-06-01

    A 2D hypersonic vehicle shape with an idealized scramjet is designed at a cruise regime: Mach number (Ma) = 8.0, angle of attack (AOA) = 0 deg and altitude (H) = 30 km. Then a multi-objective design optimization of the 2D vehicle is carried out by using a Pareto Non-dominated Sorting Genetic Algorithm II (NSGA-II). In the optimization process, the flow around the air-breathing vehicle is simulated by inviscid Euler equations using FLUENT software, and the combustion in the combustor is modeled by a methodology based on the well-known combined effects of area-varying pipe flow and heat-transfer pipe flow. Optimization results reveal tradeoffs among the total pressure recovery coefficient of the forebody, the lift to drag ratio of the vehicle, the specific impulse of the scramjet engine and the maximum temperature on the surface of the vehicle.

  1. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-06-01

    During the last decade, stringent pressures from environmental and social requirements have spurred an interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions of facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet type and the design of a multi-echelon, capacitated reverse logistics network are considered, which may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) model for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. Also, the present work is an effort to effectively implement the ε-constraint method in GAMS software for producing the Pareto-optimal solutions in a BOMP. The results of the proposed algorithm have been compared with the ε-constraint method. The computational results show that the ε-constraint method is able to solve small-size instances to optimality within reasonable computing times, and for medium-to-large-sized problems, the proposed NSGA-II works better than the ε-constraint method.
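
    The ε-constraint method used for the exact comparison optimizes one objective while constraining the other to a sweep of ε levels. A sketch on a toy bi-objective LP, assuming SciPy in place of the authors' GAMS model (the real problem is a MILP, so this only shows the mechanics):

    # Epsilon-constraint sweep on a toy bi-objective LP.
    import numpy as np
    from scipy.optimize import linprog

    # min f1 = x + 2y  and  min f2 = 3x + y,  s.t.  x + y >= 4, 0 <= x, y <= 5
    c1, c2 = np.array([1.0, 2.0]), np.array([3.0, 1.0])
    A_ub_base = [[-1.0, -1.0]]        # x + y >= 4  ->  -x - y <= -4
    b_ub_base = [-4.0]

    pareto = []
    for eps in np.linspace(7.0, 16.0, 10):
        res = linprog(c1,
                      A_ub=A_ub_base + [list(c2)],   # add constraint f2 <= eps
                      b_ub=b_ub_base + [eps],
                      bounds=[(0, 5), (0, 5)])
        if res.success:
            pareto.append((round(res.fun, 2), round(float(c2 @ res.x), 2)))
    print("epsilon-constraint Pareto points:", pareto)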

  2. Explore the impacts of river flow and quality on biodiversity for water resources management by AI techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Chang, Li-Chiu

    2016-04-01

    Water resources development is very challenging in Taiwan due to its diverse geographic environment and climatic conditions. To pursue sustainable water resources development, rationality and integrity are essential for water resources planning. River water quality and flow regimes are closely related to each other and affect river ecosystems simultaneously. This study aims to explore the complex impacts of water quality and flow regimes on the fish community in order to comprehend the situation of the eco-hydrological system in the Danshui River of northern Taiwan. To make an effective and comprehensive strategy for sustainable water resources management, this study first models fish diversity through implementing a hybrid artificial neural network (ANN) based on long-term observational heterogeneity data of water quality, stream flow and fish species in the river. Then we use stream flow to estimate the loss of dissolved oxygen based on back-propagation neural networks (BPNNs). Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is established for river flow management over the Shihmen Reservoir, which is the main reservoir in this study area. In addition to satisfying the water demands of human beings and ecosystems, we also consider water quality for river flow management. The ecosystem requirement takes the form of maximizing fish diversity, which can be estimated by the hybrid ANN. The human requirement is to provide a higher satisfaction degree of water supply, while the water quality requirement is to reduce the loss of dissolved oxygen in the river among flow stations. The results demonstrate that the proposed methodology can offer diversified alternative strategies for reservoir operation and improve reservoir operation strategies for producing downstream flows that could better meet both human and ecosystem needs as well as maintain river water quality. Keywords: Artificial intelligence (AI), Artificial neural networks (ANNs), Non

  3. Investigation on Reservoir Operation of Agricultural Water Resources Management for Drought Mitigation

    NASA Astrophysics Data System (ADS)

    Cheng, C. L.

    2015-12-01

    In Taiwan, population growth and economic development have led to considerable and increasing demands for natural water resources in the last decades. Under such conditions, water shortage problems have frequently occurred in northern Taiwan in recent years, such that water is usually transferred from irrigation sectors to public sectors during drought periods. Facing the uneven spatial and temporal distribution of water resources and the problem of increasing water shortages, it is a primary and critical issue to simultaneously satisfy multiple water uses through adequate reservoir operations for sustainable water resources management. Therefore, we intend to build an intelligent reservoir operation system for the assessment of agricultural water resources management strategy in response to food security during drought periods. This study first uses the grey system to forecast the agricultural water demand between February and April for assessing future agricultural water demands. In the second part, we build an intelligent water resources system by using the non-dominated sorting genetic algorithm-II (NSGA-II), an optimization tool, to search the water allocation series based on different water demand scenarios created in the first part, so as to optimize the water supply operation for different water sectors. The results can serve as a reference guide for adequate agricultural water resources management during drought periods. Keywords: Non-dominated sorting genetic algorithm-II (NSGA-II); Grey System; Optimization; Agricultural Water Resources Management.

  4. Multi-objective optimization for combined quality-quantity urban runoff control

    NASA Astrophysics Data System (ADS)

    Oraei Zare, S.; Saghafian, B.; Shamsai, A.

    2012-12-01

    Urban development affects the quantity and quality of urban surface runoff. In recent years, the best management practices (BMPs) concept has been widely promoted for control of both quality and quantity of urban floods. However, means to optimize BMPs in a conjunctive quantity/quality framework are still under research. In this paper, three objective functions were considered: (1) minimization of the total flood damages, the cost of BMP implementation and the cost of land-use development; (2) reduction of the amounts of TSS (total suspended solids) and BOD5 (biological oxygen demand), representing the pollution characteristics, to below threshold levels; and (3) minimization of the total runoff volume. The biological oxygen demand and total suspended solids values were employed as two measures of urban runoff quality. The total surface runoff volume produced by sub-basins was representative of the runoff quantity. The construction and maintenance costs of the BMPs were also estimated based on local price standards. Urban runoff quantity and quality in the case study watershed were simulated with the Storm Water Management Model (SWMM). The NSGA-II (Non-dominated Sorting Genetic Algorithm II) optimization technique was applied to derive the optimal trade-off curve between the objectives. In the proposed structure for the NSGA-II algorithm, a continuous representation and intermediate crossover were used because they performed better in terms of optimization efficiency. Finally, urban runoff management scenarios were presented based on the optimal trade-off curve using the k-means method. Subsequently, a specific runoff control scenario was proposed to the urban managers.
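
    The final step, condensing the Pareto front into a few representative scenarios with k-means, looks roughly like the sketch below; scikit-learn is assumed, and the synthetic front stands in for the SWMM/NSGA-II output.

    # k-means over a Pareto front to pick representative scenarios.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    f1 = np.sort(rng.random(200))                    # e.g. damages + costs
    f2 = 1.0 - np.sqrt(f1) + 0.01 * rng.standard_normal(200)  # e.g. volume
    front = np.column_stack([f1, f2])

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(front)
    # Use the actual Pareto point nearest each centroid as the scenario.
    scenarios = [front[np.linalg.norm(front - c, axis=1).argmin()]
                 for c in km.cluster_centers_]
    print(np.round(scenarios, 3))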

  5. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It continuously measures over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of the L-MEB (L-Band Emission of the Biosphere) model, the basis for the SMOS Level 2 Land Processor, over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year. While the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proved robust during the whole operation time and will be extended in time as much as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to control the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret eventual anomalies that may obscure hidden sensor biases. In addition, SM and TAU that are currently

  6. Managing Algorithmic Skeleton Nesting Requirements in Realistic Image Processing Applications: The Case of the SKiPPER-II Parallel Programming Environment's Operating Model

    NASA Astrophysics Data System (ADS)

    Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel

    2005-12-01

    SKiPPER is a SKeleton-based Parallel Programming EnviRonment developed since 1996 at the LASMEA Laboratory, Blaise Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, which highlights the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is a 3D face-tracking algorithm from appearance.

  7. Deductive sort and climbing sort: new methods for non-dominated sorting.

    PubMed

    McClymont, Kent; Keedwell, Ed

    2012-01-01

    In recent years an increasing number of real-world many-dimensional optimisation problems have been identified across the spectrum of research fields. Many popular evolutionary algorithms use non-dominance as a measure for selecting solutions for future generations. The process of sorting populations into non-dominated fronts is usually the controlling order of computational complexity and can be expensive for large populations or for a high number of objectives. This paper presents two novel methods for non-dominated sorting: deductive sort and climbing sort. The two new methods are compared to the fast non-dominated sort of NSGA-II and the non-dominated rank sort of the omni-optimizer. The results demonstrate the improved efficiencies of the deductive sort and the reductions in comparisons that can be made when applying inferred dominance relationships defined in this paper.
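
    The baseline both new methods are judged against is NSGA-II's fast non-dominated sort. A minimal reference implementation (all objectives minimized) follows:

    # Fast non-dominated sort (NSGA-II baseline), minimisation convention.
    import numpy as np

    def dominates(a, b):
        return np.all(a <= b) and np.any(a < b)

    def fast_non_dominated_sort(F):
        n = len(F)
        dominated_by = [[] for _ in range(n)]   # points each solution dominates
        counts = np.zeros(n, dtype=int)         # dominators of each solution
        for p in range(n):
            for q in range(n):
                if dominates(F[p], F[q]):
                    dominated_by[p].append(q)
                elif dominates(F[q], F[p]):
                    counts[p] += 1
        fronts, current = [], [p for p in range(n) if counts[p] == 0]
        while current:
            fronts.append(current)
            nxt = []
            for p in current:
                for q in dominated_by[p]:
                    counts[q] -= 1
                    if counts[q] == 0:
                        nxt.append(q)
            current = nxt
        return fronts

    F = np.random.default_rng(6).random((12, 2))
    print("front sizes:", [len(f) for f in fast_non_dominated_sort(F)])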

  8. Fatigue design of a cellular phone folder using regression model-based multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Kim, Young Gyun; Lee, Jongsoo

    2016-08-01

    In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.

  9. GEOFIM: A WebGIS application for integrated geophysical modeling in active volcanic regions

    NASA Astrophysics Data System (ADS)

    Currenti, Gilda; Napoli, Rosalba; Sicali, Antonino; Greco, Filippo; Negro, Ciro Del

    2014-09-01

    We present GEOFIM (GEOphysical Forward/Inverse Modeling), a WebGIS application for integrated interpretation of multiparametric geophysical observations. It has been developed to jointly interpret scalar and vector magnetic data, gravity data, as well as geodetic data from GPS, tiltmeter, strainmeter and InSAR observations, recorded in active volcanic areas. GEOFIM gathers a library of analytical solutions, which provides an estimate of the geophysical signals due to perturbations in the thermal and stress state of the volcano. The integrated geophysical modeling can be performed by simple trial-and-error forward modeling or by an inversion procedure based on the NSGA-II algorithm. The software's capability was tested on the multiparametric data set recorded during the 2008-2009 Etna flank eruption onset. The results encourage exploiting this approach to develop a near-real-time warning system for a quantitative model-based assessment of geophysical observations in areas where different parameters are routinely monitored.

  10. Sensitivity analysis of multi-objective optimization of CPG parameters for quadruped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2012-09-01

    In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Patterns Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations, that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, the wide stability margin and the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives to find different walking gait solutions for the quadruped robot.
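
    A common CPG building block of the kind described is a Hopf oscillator: an autonomous ODE with a stable limit cycle whose amplitude and frequency are exactly the sort of parameters an evolutionary algorithm would tune. A minimal Euler-integrated sketch, with illustrative parameter values:

    # Hopf oscillator: limit cycle of radius sqrt(mu), angular speed omega.
    import numpy as np

    def hopf_step(x, y, mu=1.0, omega=2 * np.pi, dt=1e-3):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        return x + dt * dx, y + dt * dy

    x, y, traj = 0.1, 0.0, []
    for _ in range(5000):
        x, y = hopf_step(x, y)
        traj.append(x)
    print("limit-cycle amplitude ~", round(max(traj[-1000:]), 3))  # ~sqrt(mu)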

  11. Scalable High-Performance Algorithm for the Simulation of Exciton Dynamics. Application to the Light-Harvesting Complex II in the Presence of Resonant Vibrational Modes.

    PubMed

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-09-09

    The accurate simulation of excitonic energy transfer in molecular complexes with coupled electronic and vibrational degrees of freedom is essential for comparing excitonic system parameters obtained from ab initio methods with measured time-resolved spectra. Several exact methods for computing the exciton dynamics within a density-matrix formalism are known but are restricted to small systems with less than 10 sites due to their computational complexity. To study the excitonic energy transfer in larger systems, we adapt and extend the exact hierarchical equation of motion (HEOM) method to various high-performance many-core platforms using the Open Compute Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from predictions of approximate theories and clarify the time scale of the transfer process. We investigate the impact of resonantly coupled vibrations on the relaxation and show that the transfer does not rely on a fine-tuning of specific modes.

  12. Calibrating a Rainfall-Runoff and Routing Model for the Continental United States

    NASA Astrophysics Data System (ADS)

    Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.

    2014-12-01

    Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop country-wide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to obtain realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach, which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multi-objective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and showed the difficulty of simulating areas with sinks, such as karstic or dry areas. Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24 (1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method), J. Hydr. Research, 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation, 6(2), 182-197.
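
    The Muskingum routing step at the heart of the cited scheme (Cunge, 1969) is compact: O2 = C0·I2 + C1·I1 + C2·O1, with coefficients derived from the storage constant K, the weighting factor X and the time step dt. A sketch with fixed K and X follows; the Muskingum-Cunge variant re-derives them from channel properties at each step.

    # One-reach Muskingum routing of an inflow hydrograph.
    import numpy as np

    def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
        d = 2 * K * (1 - X) + dt
        c0 = (dt - 2 * K * X) / d
        c1 = (dt + 2 * K * X) / d
        c2 = (2 * K * (1 - X) - dt) / d      # note c0 + c1 + c2 = 1
        out = np.empty_like(inflow)
        out[0] = inflow[0]
        for t in range(1, len(inflow)):
            out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
        return out

    hydrograph = np.array([10, 30, 80, 60, 40, 25, 15, 10], dtype=float)
    print(np.round(muskingum_route(hydrograph), 1))   # attenuated, delayed peak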

  13. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  14. Adaptation of photosystem II to high and low light in wild-type and triazine-resistant Canola plants: analysis by a fluorescence induction algorithm.

    PubMed

    van Rensen, Jack J S; Vredenberg, Wim J

    2011-09-01

    Plants of wild-type and triazine-resistant Canola (Brassica napus L.) were exposed to very high light intensities and, after 1 day, placed on a laboratory table at low light to recover, in order to study the kinetics of variable fluorescence after light exposure and after dark adaptation. This cycle was repeated several times. The fast OJIP fluorescence rise curve was measured immediately after light exposure and after recovery for 1 day in laboratory room light. A fluorescence induction algorithm was used for the resolution and analysis of these curves. This algorithm includes photochemical and photo-electrochemical quenching release components and a photo-electrically dependent IP component. The analysis revealed a substantial suppression of the photo-electrochemical component (complete in the resistant biotype), a partial suppression of the photochemical component and a decrease in the fluorescence parameter F0 after high light. These effects recovered after 1 day in indoor light.

  15. Measurement of the Inclusive Jet Cross Section using the k(T) algorithm in p anti-p collisions at s**(1/2) = 1.96-TeV with the CDF II Detector

    SciTech Connect

    Abulencia, A.; Adelman, J.; Affolder, Anthony Allen; Akimoto, T.; Albrow, Michael G.; Ambrose, D.; Amerio, S.; Amidei, Dante E.; Anastassov, A.; Anikeev, Konstantin; Annovi, A.; et al.

    2007-01-01

    The authors report on measurements of the inclusive jet production cross section as a function of the jet transverse momentum in p p̄ collisions at √s = 1.96 TeV, using the k_T algorithm and a data sample corresponding to 1.0 fb⁻¹ collected with the Collider Detector at Fermilab in Run II. The measurements are carried out in five different jet rapidity regions with |y^jet| < 2.1 and transverse momentum in the range 54 < p_T^jet < 700 GeV/c. Next-to-leading-order perturbative QCD predictions are in good agreement with the measured cross sections.

  16. A multi-stakeholder framework for urban runoff quality management: Application of social choice and bargaining techniques.

    PubMed

    Ghodsi, Seyed Hamed; Kerachian, Reza; Zahmatkesh, Zahra

    2016-04-15

    In this paper, an integrated framework is proposed for urban runoff management. To control and improve runoff quality and quantity, low impact development (LID) practices are utilized. In order to determine the LIDs' areas and locations, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), which considers the three objective functions of minimizing runoff volume, runoff pollution and the implementation cost of LIDs, is utilized. In this framework, the Storm Water Management Model (SWMM) is used for stream flow simulation. The non-dominated solutions provided by NSGA-II are considered as management scenarios. To select the most preferred scenario, interactions among the main stakeholders in the study area with conflicting utilities are incorporated by utilizing bargaining models, including a non-cooperative game and the Nash model, and the social choice procedures of Borda count and approval voting. Moreover, a new social choice procedure, named the pairwise voting method, is proposed and applied. Based on each conflict resolution approach, a scenario is identified as the ideal solution providing the LIDs' areas, locations and implementation cost. The proposed framework is applied to urban water quality and quantity management in the northern part of the Tehran metropolitan area, Iran. Results show that the proposed pairwise voting method tends to select a scenario with a higher percentage of reduction in TSS (total suspended solids) load and runoff volume, in comparison with the Borda count and approval voting methods. Besides, the Nash method presents a management scenario with the highest cost for LID implementation and the maximum values for percentage of runoff volume reduction and TSS removal. The results also signify that the selection of an appropriate management scenario by stakeholders in the study area depends on the available financial resources and the relative importance of runoff quality improvement in comparison with reducing the runoff volume.
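
    Two of the social choice rules above are simple enough to state directly. A sketch with hypothetical stakeholder rankings follows; the pairwise voting method proposed in the paper is not reproduced here.

    # Borda count and approval voting over candidate management scenarios.
    from collections import Counter

    rankings = [["S2", "S1", "S3", "S4"],    # stakeholder 1, best to worst
                ["S2", "S3", "S1", "S4"],    # stakeholder 2
                ["S1", "S2", "S4", "S3"]]    # stakeholder 3

    # Borda count: (n-1) points for a 1st place, down to 0 for last.
    borda = Counter()
    for r in rankings:
        for points, s in enumerate(reversed(r)):
            borda[s] += points
    print("Borda winner:", borda.most_common(1)[0][0])

    # Approval voting: each stakeholder approves their top two scenarios.
    approval = Counter(s for r in rankings for s in r[:2])
    print("Approval winner:", approval.most_common(1)[0][0])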

  17. Algorithm-directed care by nonphysician practitioners in a pediatric population. Part II. Clinical outcomes, patient satisfaction, and costs of care.

    PubMed

    Wheeler, M F; Wilson, L O; Wilson, F P; Wood, R W

    1983-02-01

    We compared outcome and cost of care for 2234 pediatric patients with upper respiratory tract infections cared for by nonphysician practitioners and 304 similar patients cared for by pediatricians. We found no significant differences (p greater than 0.05) between nonphysician practitioners' patients and pediatricians' patients in the status of the original symptoms, the number of patients reporting new symptoms, the number of return visits, or the reasons for return visits. Approximately 93 per cent of both groups had no complaints about their care. Medication costs were higher for Pamosists than pediatricians, but lower labor costs caused Pamosist care to be 15.5 per cent ($2.64) less expensive than pediatrician care in this setting, even when the costs of Pamosist audit by computer were included. Through use of clinical algorithms with computer audit, relatively untrained nonphysician practitioners can deliver safe, cost-effective health care to pediatric patients with upper respiratory infections.

  18. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
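
    The core idea, reading eigenvalues of a Hamiltonian off a Fourier transform of its time evolution, has a loose classical illustration. The numpy sketch below is an assumed toy analogue, not the quantum algorithm itself: it samples tr exp(-iHt) for a small random Hermitian matrix and recovers eigenvalues as FFT peak locations.

        import numpy as np

        # Classical analogue (toy setup): sample tr exp(-iHt) on a time grid and
        # read eigenvalues off the FFT peak locations.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((4, 4))
        H = (A + A.T) / 2                              # small Hermitian "Hamiltonian"
        evals = np.linalg.eigvalsh(H)                  # used only to synthesize the signal

        n_t, dt = 4096, 0.05                           # resolution ~ 2*pi/(n_t*dt) in omega
        ts = dt * np.arange(n_t)
        signal = np.exp(-1j * np.outer(ts, evals)).sum(axis=1)   # = tr exp(-iHt)

        spectrum = np.abs(np.fft.fft(signal))
        omega = 2 * np.pi * np.fft.fftfreq(n_t, d=dt)  # angular frequency = eigenvalue
        print(omega[np.argmax(spectrum)], evals)       # strongest peak sits near an eigenvalue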

  19. Predicting Protein Structure Using Parallel Genetic Algorithms.

    DTIC Science & Technology

    1994-12-01

    By " Predicting rotein Structure D istribticfiar.. ................ Using Parallel Genetic Algorithms ,Avaiu " ’ •"... Dist THESIS I IGeorge H...iiLite-d Approved for public release; distribution unlimited AFIT/ GCS /ENG/94D-03 Predicting Protein Structure Using Parallel Genetic Algorithms ...1-1 1.2 Genetic Algorithms ......... ............................ 1-3 1.3 The Protein Folding Problem

  20. Coastal aquifer management based on surrogate models and multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Mantoglou, A.; Kourakos, G.

    2011-12-01

    The proposed surrogate-based method is capable of solving complex multi-objective optimization problems effectively, with a significant reduction in computational time compared to previous methods (it requires only 5% of the NSGA-II algorithm time). Further, the Pareto solution obtained by the much faster MOSA(MNN) algorithm is better than the solution obtained by the NSGA-II algorithm.

  1. Scheduling algorithms

    NASA Astrophysics Data System (ADS)

    Wolfe, William J.; Wood, David; Sorensen, Stephen E.

    1996-12-01

    This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.

  2. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O'Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  3. Efficient Learning Algorithms with Limited Information

    ERIC Educational Resources Information Center

    De, Anindya

    2013-01-01

    The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…

  4. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  5. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
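
    As one concrete illustration of an approximation algorithm with a provable guarantee (a textbook example, not drawn from the record above), the maximal-matching heuristic for minimum vertex cover always returns a cover at most twice the optimal size:

        # Sketch: 2-approximation for minimum vertex cover via a maximal matching.
        # The optimum must pick at least one endpoint of each matched edge, so
        # taking both endpoints of every matched edge is at most 2x optimal.
        def vertex_cover_2approx(edges):
            cover = set()
            for u, v in edges:
                if u not in cover and v not in cover:   # edge not yet covered:
                    cover.update((u, v))                # take both endpoints
            return cover

        print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))   # -> {1, 2, 3, 4}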

  6. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.

  7. Sequential Quadratic Programming Algorithms for Optimization

    DTIC Science & Technology

    1989-08-01

    A brief history of the evolution of SQP algorithms is given; surveys of the area can be found in [GMW81], [Po83], and related references. The remainder of the scanned abstract is illegible.

  8. Optimization of the Coverage and Accuracy of an Indoor Positioning System with a Variable Number of Sensors

    PubMed Central

    Domingo-Perez, Francisco; Lazaro-Galilea, Jose Luis; Bravo, Ignacio; Gardel, Alfredo; Rodriguez, David

    2016-01-01

    This paper focuses on optimal sensor deployment for indoor localization with a multi-objective evolutionary algorithm. Our goal is to obtain an algorithm to deploy sensors taking the number of sensors, accuracy and coverage into account. Contrary to most works in the literature, we consider the presence of obstacles in the region of interest (ROI) that can cause occlusions between the target and some sensors. In addition, we aim to obtain all of the Pareto optimal solutions regarding the number of sensors, coverage and accuracy. To deal with a variable number of sensors, we add speciation and structural mutations to the well-known non-dominated sorting genetic algorithm (NSGA-II). Speciation allows one to keep the evolution of sensor sets under control and to apply genetic operators to them so that they compete with other sets of the same size. We show some case studies of the sensor placement of an infrared range-difference indoor positioning system with a fairly complex model of the error of the measurements. The results obtained by our algorithm are compared to sensor placement patterns obtained with random deployment to highlight the relevance of using such a deployment algorithm. PMID:27338414
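
    The speciation idea, letting candidate deployments recombine only with deployments of the same size while a global non-dominated sort drives selection, can be sketched as follows. This is a schematic reading of the approach with hypothetical objective tuples, not the authors' implementation:

        # Sketch: species = candidate sensor sets grouped by cardinality.
        # Each candidate carries objectives (num_sensors, -coverage, error) to minimize.
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def nondominated(candidates, key):
            return [c for c in candidates
                    if not any(dominates(key(o), key(c)) for o in candidates if o is not c)]

        # Hypothetical population: (sensor positions, objective tuple).
        population = [
            (["s1", "s2"],       (2, -0.80, 0.12)),
            (["s1", "s3"],       (2, -0.85, 0.15)),
            (["s1", "s2", "s3"], (3, -0.95, 0.08)),
        ]

        # Speciate by size, so genetic operators act on equal-length genomes...
        species = {}
        for cand in population:
            species.setdefault(len(cand[0]), []).append(cand)

        # ...while Pareto selection still compares candidates across all sizes.
        front = nondominated(population, key=lambda c: c[1])
        print(sorted(species), [c[0] for c in front])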

  9. A Multiobjective Approach to Homography Estimation

    PubMed Central

    Osuna-Enciso, Valentín; Oliva, Diego; Zúñiga, Virgilio; Pérez-Cisneros, Marco; Zaldívar, Daniel

    2016-01-01

    In several machine vision problems, a relevant issue is the estimation of homographies between two different perspectives that hold an extensive set of abnormal data. A method to find such an estimation is the random sampling consensus (RANSAC); in this, the goal is to maximize the number of matching points given a permissible error (Pe), according to a candidate model. However, those objectives are in conflict: a low Pe value increases the accuracy of the model but degrades its generalization ability, that is, the number of matching points that tolerate noisy data, whereas a high Pe value improves the noise tolerance of the model but adversely drives the process to false detections. This work considers the estimation process as a multiobjective optimization problem that seeks to maximize the number of matching points while Pe is simultaneously minimized. In order to solve the multiobjective formulation, two different evolutionary algorithms have been explored: the Nondominated Sorting Genetic Algorithm II (NSGA-II) and the Nondominated Sorting Differential Evolution (NSDE). Results considering acknowledged quality measures among original and transformed images over a well-known image benchmark show superior performance of the proposed approach compared with the Random Sample Consensus algorithm. PMID:26839532
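
    The trade-off described above can be made concrete with a small Pareto-filter sketch over (matching points, permissible error) pairs. Everything below is illustrative: the candidate evaluations are invented, and a real implementation would score candidate homographies against actual point correspondences.

        # Sketch: keep the non-dominated (maximize inliers, minimize Pe) candidates.
        # Each entry is (num_matching_points, permissible_error) for one candidate model.
        candidates = [(120, 3.0), (150, 5.0), (90, 1.0), (150, 6.0), (100, 1.5)]

        def dominates(a, b):
            """a dominates b: at least as many inliers, no larger error, better in one."""
            return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

        pareto = [c for c in candidates
                  if not any(dominates(o, c) for o in candidates if o != c)]
        print(sorted(pareto))   # -> [(90, 1.0), (100, 1.5), (120, 3.0), (150, 5.0)]

    The final model would then be chosen from this front, for example by a decision maker weighing accuracy against noise tolerance.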

  10. Robust Multiobjective Controllability of Complex Neuronal Networks.

    PubMed

    Tang, Yang; Gao, Huijun; Du, Wei; Lu, Jianquan; Vasilakos, Athanasios V; Kurths, Jurgen

    2016-01-01

    This paper addresses robust multiobjective identification of driver nodes in the neuronal network of a cat's brain, in which uncertainties in determination of driver nodes and control gains are considered. A framework for robust multiobjective controllability is proposed by introducing interval uncertainties and optimization algorithms. By appropriate definitions of robust multiobjective controllability, a robust nondominated sorting adaptive differential evolution (NSJaDE) is presented by means of the nondominated sorting mechanism and the adaptive differential evolution (JaDE). The simulation experimental results illustrate the satisfactory performance of NSJaDE for robust multiobjective controllability, in comparison with six statistical methods and two multiobjective evolutionary algorithms (MOEAs): nondominated sorting genetic algorithms II (NSGA-II) and nondominated sorting composite differential evolution. It is revealed that the existence of uncertainties in choosing driver nodes and designing control gains heavily affects the controllability of neuronal networks. We also unveil that driver nodes play a more drastic role than control gains in robust controllability. The developed NSJaDE and obtained results will shed light on the understanding of robustness in controlling realistic complex networks such as transportation networks, power grid networks, biological networks, etc.

  11. Photobilirubin II.

    PubMed Central

    Bonnett, R; Buckley, D G; Hamzetash, D; Hawkes, G E; Ioannou, S; Stoll, M S

    1984-01-01

    An improved preparation of photobilirubin II in ammoniacal methanol is described. Evidence is presented which distinguishes between the two structures proposed earlier for photobilirubin II in favour of the cycloheptadienyl structure. Nuclear-Overhauser-enhancement measurements with bilirubin IX alpha and photobilirubin II in dimethyl sulphoxide are complicated by the occurrence of negative and zero effects. The partition coefficient of photobilirubin II between chloroform and phosphate buffer (pH 7.4) is 0.67. PMID:6743241

  12. Multi-objective optimization of empirical hydrological model for streamflow prediction

    NASA Astrophysics Data System (ADS)

    Guo, Jun; Zhou, Jianzhong; Lu, Jiazheng; Zou, Qiang; Zhang, Huajie; Bi, Sheng

    2014-04-01

    Traditional calibration of hydrological models is performed with a single objective function. Practical experience with the calibration of hydrologic models reveals that single objective functions are often inadequate to properly measure all of the characteristics of the hydrologic system. To circumvent this problem, many recent studies have investigated the automatic calibration of hydrological models with multiple objective functions. In this paper, the multi-objective evolution algorithm MODE-ACM is introduced to solve the multi-objective optimization of hydrologic models. Moreover, to improve the performance of the MODE-ACM, an Enhanced Pareto Multi-Objective Differential Evolution algorithm named EPMODE is proposed. The efficacy of MODE-ACM and EPMODE is compared with two state-of-the-art algorithms, NSGA-II and SPEA2, on two case studies. Five test problems are used as the first case study to generate the true Pareto front. The approach is then tested on a typical empirical hydrological model for monthly streamflow forecasting. The results of these case studies show that EPMODE, as well as MODE-ACM, is effective in solving multi-objective problems and has great potential as an efficient and reliable algorithm for water resources applications.

  13. A simulation-optimization model for Stone column-supported embankment stability considering rainfall effect

    NASA Astrophysics Data System (ADS)

    Deb, Kousik; Dhar, Anirban; Purohit, Sandip

    2016-02-01

    Landslides due to rainfall have been, and continue to be, one of the most important concerns of geotechnical engineering. The paper presents the variation of the factor of safety of a stone column-supported embankment constructed over soft soil due to changes in the water level during an incessant period of rainfall. A combined simulation-optimization methodology is proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using the evolutionary genetic algorithm NSGA-II (Non-Dominated Sorting Genetic Algorithm II). It is observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented to examine the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that, in the case of floating stone columns, the period of infiltration has no effect on the factor of safety; even the critical failure surfaces for a particular floating column length remain the same irrespective of rainfall duration.

  14. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  15. Metal detector depth estimation algorithms

    NASA Astrophysics Data System (ADS)

    Marble, Jay; McMichael, Ian

    2009-05-01

    This paper looks at depth estimation techniques using electromagnetic induction (EMI) metal detectors. Four algorithms are considered. The first utilizes a vertical gradient sensor configuration. The second is a dual-frequency approach. The third makes use of dipole and quadrupole receiver configurations. The fourth looks at coils of different sizes. Each algorithm is described along with its associated sensor. Two figures of merit ultimately define algorithm/sensor performance. The first is the depth of penetration obtainable (that is, the maximum detection depth obtainable); this describes the ability of the method to detect deep targets. The second is the achievable statistical depth resolution; this describes the precision with which depth can be estimated. In this paper depth of penetration and statistical depth resolution are qualitatively determined for each sensor/algorithm. A scientific method is used to make these assessments. A field test was conducted using two lanes with emplaced UXO. The first lane contains 155-mm shells at increasing depths from 0" to 48". The second is more realistic, containing objects of varying size. The first lane is used for algorithm training purposes, while the second is used for testing. The metal detectors used in this study are the Geonics EM61, Geophex GEM5, Minelab STMR II, and Vallon VMV16.
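
    As an illustration of how a vertical-gradient (two-height) measurement can yield depth, the sketch below inverts a power-law falloff model. The r^-6 exponent is the usual small-target EMI dipole approximation and is an assumption here, as are all the numbers:

        # Sketch: depth from the amplitude ratio of two coils at heights h1 < h2,
        # assuming the EMI response of a small target falls off as A ~ k / (d + h)^n.
        def depth_from_ratio(A1, A2, h1, h2, n=6.0):
            """Solve A1/A2 = ((d + h2)/(d + h1))^n for the target depth d below grade."""
            r = (A1 / A2) ** (1.0 / n)          # = (d + h2) / (d + h1)
            return (h2 - r * h1) / (r - 1.0)

        # Synthetic check: bury a target at 0.30 m, sense at 0.10 m and 0.40 m.
        d_true, h1, h2, n = 0.30, 0.10, 0.40, 6.0
        A1, A2 = (d_true + h1) ** -n, (d_true + h2) ** -n
        print(round(depth_from_ratio(A1, A2, h1, h2, n), 3))   # -> 0.3

    The steep falloff is what limits depth of penetration; the sensitivity of d to noise in the amplitude ratio is what limits statistical depth resolution.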

  16. Multiobjective muffler shape optimization with hybrid acoustics modeling.

    PubMed

    Airaksinen, Tuomas; Heikkola, Erkki

    2011-09-01

    This paper considers the combined use of a hybrid numerical method for the modeling of acoustic mufflers and a genetic algorithm for multiobjective optimization. The hybrid numerical method provides accurate modeling of sound propagation in uniform waveguides with non-uniform obstructions. It is based on coupling a wave based modal solution in the uniform sections of the waveguide to a finite element solution in the non-uniform component. Finite element method provides flexible modeling of complicated geometries, varying material parameters, and boundary conditions, while the wave based solution leads to accurate treatment of non-reflecting boundaries and straightforward computation of the transmission loss (TL) of the muffler. The goal of optimization is to maximize TL at multiple frequency ranges simultaneously by adjusting chosen shape parameters of the muffler. This task is formulated as a multiobjective optimization problem with the objectives depending on the solution of the simulation model. NSGA-II genetic algorithm is used for solving the multiobjective optimization problem. Genetic algorithms can be easily combined with different simulation methods, and they are not sensitive to the smoothness properties of the objective functions. Numerical experiments demonstrate the accuracy and feasibility of the model-based optimization method in muffler design.

  17. Multi-component seismic modeling and robust pre-stack seismic waveform inversion for elastic anisotropic media parameters

    NASA Astrophysics Data System (ADS)

    Li, Tao

    Consideration of azimuthal anisotropy, at least to an orthorhombic symmetry, is important in exploring naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single-component (P-wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. In this dissertation, I propose a novel multiobjective methodology using a parallelized version of NSGA-II for waveform inversion of multicomponent seismic data along two azimuths. The proposed methodology also improves on the original NSGA-II in overall computational efficiency, preservation of population diversity, and rapid sampling of the model space. Next, the proposed methodology is applied to wide-azimuth multicomponent vertical seismic profile (VSP) data to provide reliable estimation of subsurface anisotropy at and near the well location. Prestack waveform inversion was applied to the wide-azimuth multicomponent VSP data acquired at the Wattenberg Field, located in the Denver Basin of northeastern Colorado, USA, to characterize the Niobrara formation for azimuthal anisotropy. By comparing the waveform inversion results with an independent study that used a joint slowness-polarization approach to invert the same data, we conclude that waveform inversion is a reliable tool for inverting wide-azimuth multicomponent VSP data for anisotropy estimation. Last but not least, an anisotropic elastic three-dimensional scheme for modeling the elastodynamic wavefield is developed in order to go beyond the 1D layering assumption used in the earlier parts of the work.

  18. New perspectives in the use of ink evidence in forensic science Part II. Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC.

    PubMed

    Neumann, Cedric; Margot, Pierre

    2009-03-10

    In the first part of this research, three stages were stated for a program to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) develop a standard methodology for analysing ink samples by high-performance thin layer chromatography (HPTLC) in a reproducible way, even when ink samples are analysed at different times, in different locations and by different examiners; (b) compare ink samples automatically and objectively; and (c) define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests commonly used in biometrics studies. The results show that different algorithms are best suited for different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
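
    A simple instance of the kind of automatic, objective comparison described, here a Pearson correlation between two digitized chromatogram intensity profiles, is sketched below; the profiles and the decision threshold are invented for illustration:

        import numpy as np

        # Sketch: compare two digitized HPTLC intensity profiles (one value per
        # retention position) with a Pearson correlation similarity score.
        def profile_similarity(profile_a, profile_b):
            a = (profile_a - profile_a.mean()) / profile_a.std()
            b = (profile_b - profile_b.mean()) / profile_b.std()
            return float(np.mean(a * b))        # 1.0 = identical shape

        x = np.linspace(0, 1, 200)
        ink1 = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.03) ** 2)
        ink2 = ink1 + np.random.default_rng(1).normal(0, 0.02, x.size)  # noisy rerun

        score = profile_similarity(ink1, ink2)
        print(score > 0.95)   # True: profiles judged indistinguishable at this threshold

    Systematic evaluation, as in the report, would measure false match and false non-match rates of such scores over many ink pairs, as in biometric testing.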

  19. Algorithm Animation with Galant.

    PubMed

    Stallmann, Matthias F

    2017-01-01

    Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.

  20. Iterative phase retrieval algorithms. I: optimization.

    PubMed

    Guo, Changliang; Liu, Shi; Sheridan, John T

    2015-05-20

    Two modified Gerchberg-Saxton (GS) iterative phase retrieval algorithms are proposed. The first we refer to as the spatial phase perturbation GS algorithm (SPP GSA). The second is a combined GS hybrid input-output algorithm (GS/HIOA). In this paper (Part I), it is demonstrated that the SPP GS and GS/HIO algorithms are both much better at avoiding stagnation during phase retrieval, allowing them to successfully locate superior solutions compared with either the GS or the HIO algorithms. The performances of the SPP GS and GS/HIO algorithms are also compared. Then, the error reduction (ER) algorithm is combined with the HIO algorithm (ER/HIOA) to retrieve the input object image and the phase, given only some knowledge of its extent and the amplitude in the Fourier domain. In Part II, the algorithms developed here are applied to carry out known plaintext and ciphertext attacks on amplitude encoding and phase encoding double random phase encryption systems. Significantly, ER/HIOA is then used to carry out a ciphertext-only attack on AE DRPE systems.
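
    For reference, the baseline GS iteration that these variants modify alternates between the object and Fourier domains, enforcing the known amplitude in each. The numpy sketch below is a generic textbook GS loop under assumed data, not the paper's modified algorithms:

        import numpy as np

        # Sketch: classical Gerchberg-Saxton iteration. Known: the object-domain
        # amplitude and the Fourier-domain amplitude; unknown: the phases.
        rng = np.random.default_rng(0)
        obj_amp = rng.random((64, 64))              # stand-in object amplitude
        fourier_amp = np.abs(np.fft.fft2(obj_amp * np.exp(1j * rng.random((64, 64)))))

        g = obj_amp * np.exp(2j * np.pi * rng.random((64, 64)))  # random starting phase
        for _ in range(200):
            G = np.fft.fft2(g)
            G = fourier_amp * np.exp(1j * np.angle(G))   # impose Fourier amplitude
            g = np.fft.ifft2(G)
            g = obj_amp * np.exp(1j * np.angle(g))       # impose object amplitude

        # The residual between achieved and target Fourier amplitudes shrinks, or
        # stagnates -- which is what the SPP GS and GS/HIO modifications escape.
        print(np.linalg.norm(np.abs(np.fft.fft2(g)) - fourier_amp))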

  1. Prediction of a Flash Flood in Complex Terrain. Part II: A Comparison of Flood Discharge Simulations Using Rainfall Input from Radar, a Dynamic Model, and an Automated Algorithmic System.

    NASA Astrophysics Data System (ADS)

    Yates, David N.; Warner, Thomas T.; Leavesley, George H.

    2000-06-01

    Three techniques were employed for the estimation and prediction of precipitation from a thunderstorm that produced a flash flood in the Buffalo Creek watershed located in the mountainous Front Range near Denver, Colorado, on 12 July 1996. The techniques included 1) quantitative precipitation estimation using the National Weather Service's Weather Surveillance Radar-1988 Doppler and the National Center for Atmospheric Research's S-band, dual-polarization radars, 2) quantitative precipitation forecasting utilizing a dynamic model, and 3) quantitative precipitation forecasting using an automated algorithmic system for tracking thunderstorms. Rainfall data provided by these various techniques at short timescales (6 min) and at fine spatial resolutions (150 m to 2 km) served as input to a distributed-parameter hydrologic model for analysis of the flash flood. The quantitative precipitation estimates from the weather radar demonstrated their ability to aid in simulating a watershed's response to precipitation forcing from small-scale, convective weather in complex terrain. That is, with the radar-based quantitative precipitation estimates employed as input, the simulated peak discharge was similar to that estimated. The dynamic model showed the most promise in providing a significant forecast lead time for this flash-flood event. The algorithmic system did not show as much skill in comparison with the dynamic model in providing precipitation forcing to the hydrologic model. The discharge forecasts based on the dynamic-model and algorithmic-system inputs point to the need to improve the ability to forecast convective storms, especially if models such as these eventually are to be used in operational flood forecasting.

  2. Polynomial Local Improvement Algorithms in Combinatorial Optimization.

    DTIC Science & Technology

    1981-11-01

    Technical report SOL 81-21, Systems Optimization Laboratory, Stanford University (Stanford, CA 94305), November 1981, prepared for the Office of Naval Research. The scanned abstract is largely illegible; the recoverable fragment describes a tree structure on the vertices of a combinatorial problem: (i) each vertex corresponds to a node of the tree; (ii) the father of a vertex is its optimal adjacent vertex, and a vertex that is a local optimum has no father.

  3. Comparison of three methods for the optimal allocation of hydrological model participation in an Ensemble Prediction System

    NASA Astrophysics Data System (ADS)

    Brochero, D.; Anctil, F.; Gagné, C.

    2012-04-01

    Today, the availability of Meteorological Ensemble Prediction Systems (MEPS) and their subsequent coupling with multiple hydrological models offer the possibility of building Hydrological Ensemble Prediction Systems (HEPS) consisting of a large number of members. However, this task is complex both in terms of the coupling of information and of the computational time, which may create an operational barrier. The evaluation of the prominence of each hydrological member can be seen as a non-parametric post-processing stage that seeks to find the optimal participation of the hydrological models (in a fashion similar to the Bayesian model averaging technique), maintaining or improving the quality of probabilistic forecasts based on only x members drawn from a super-ensemble of d members, thus reducing the effort required to issue the probabilistic forecast. The main objective of the current work is to assess the degree of simplification (reduction of the number of hydrological members) that can be achieved with a HEPS configured using 16 lumped hydrological models driven by the 50 weather ensemble forecasts from the European Centre for Medium-range Weather Forecasts (ECMWF), i.e. an 800-member HEPS. In a previous work (Brochero et al., 2011a, b), we demonstrated that the proportion of members allocated to each hydrological model is a sufficient criterion to reduce the number of hydrological members while improving the balance of the scores, taking into account the interchangeability of the ECMWF MEPS. Here, we compare the proportion of members allocated to each hydrological model derived from three non-parametric techniques: correlation analysis of hydrological members, Backward Greedy Selection (BGS) and the Non-dominated Sorting Genetic Algorithm (NSGA-II). The last two allude to techniques developed in machine learning, in a multicriteria framework exploiting the relationship between bias, reliability, and the number of members of the ensemble.
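
    Of the three techniques compared, Backward Greedy Selection is the simplest to sketch: starting from the full super-ensemble, repeatedly drop the member whose removal least degrades (or most improves) a verification score. The fragment below uses ensemble-mean RMSE as a stand-in score; the synthetic data and the stopping size are hypothetical:

        import numpy as np

        # Sketch: Backward Greedy Selection of ensemble members.
        # members: (n_members, n_times) forecasts; obs: (n_times,) observations.
        rng = np.random.default_rng(2)
        obs = np.sin(np.linspace(0, 6, 100))
        members = obs + rng.normal(0, [[0.1], [0.2], [0.5], [0.8], [1.5]], (5, 100))

        def score(idx):
            """Stand-in verification score: RMSE of the ensemble mean (lower = better)."""
            return np.sqrt(np.mean((members[idx].mean(axis=0) - obs) ** 2))

        keep = list(range(len(members)))
        while len(keep) > 2:                       # target ensemble size (arbitrary)
            trials = [[m for m in keep if m != drop] for drop in keep]
            keep = min(trials, key=score)          # drop the least useful member
        print(keep, round(score(keep), 3))

    A probabilistic score such as the CRPS, rather than RMSE, would be used in practice to balance bias and reliability.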

  4. Photosystem II

    ScienceCinema

    James Barber

    2016-07-12

    James Barber, Ernst Chain Professor of Biochemistry at Imperial College, London, gives a BSA Distinguished Lecture titled, "The Structure and Function of Photosystem II: The Water-Splitting Enzyme of Photosynthesis."

  5. Performance comparison of optimal fractional order hybrid fuzzy PID controllers for handling oscillatory fractional order processes with dead time.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu

    2013-07-01

    Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes.
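
    For orientation, the fractional-order PID law underlying these hybrid structures generalizes the integer-order controller with non-integer integro-differential orders. This is the standard PI^λD^μ form from the fractional-control literature, not a formula quoted from the paper:

        u(t) = K_p \, e(t) + K_i \, D_t^{-\lambda} e(t) + K_d \, D_t^{\mu} e(t), \qquad \lambda, \mu > 0

    The integer-order PID is recovered at λ = μ = 1; here the GA tunes the gains, the input and output scaling factors, and the orders λ and μ.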

  6. Optimal colour quality of LED clusters based on memory colours.

    PubMed

    Smet, Kevin; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2011-03-28

    The spectral power distributions of tri- and tetrachromatic clusters of Light-Emitting-Diodes, composed of simulated and commercially available LEDs, were optimized with a genetic algorithm to maximize the luminous efficacy of radiation and the colour quality as assessed by the memory colour quality metric developed by the authors. The trade-off of the colour quality as assessed by the memory colour metric and the luminous efficacy of radiation was investigated by calculating the Pareto optimal front using the NSGA-II genetic algorithm. Optimal peak wavelengths and spectral widths of the LEDs were derived, and over half of them were found to be close to Thornton's prime colours. The Pareto optimal fronts of real LED clusters were always found to be smaller than those of the simulated clusters. The effect of binning on designing a real LED cluster was investigated and was found to be quite large. Finally, a real LED cluster of commercially available AlGaInP, InGaN and phosphor white LEDs was optimized to obtain a higher score on memory colour quality scale than its corresponding CIE reference illuminant.
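
    The luminous efficacy of radiation objective used in such optimizations can be sketched directly: model each LED as a Gaussian spectral band, sum the cluster SPD, and weight by the photopic sensitivity curve. In the fragment below the Gaussian V(λ) is a rough stand-in for the CIE photopic function, and all LED parameters are invented:

        import numpy as np

        # Sketch: luminous efficacy of radiation (LER) of a trichromatic LED cluster.
        wl = np.arange(380.0, 781.0)                      # wavelength grid, nm

        def led(peak, width, power=1.0):
            """Gaussian stand-in for an LED spectral power distribution."""
            return power * np.exp(-0.5 * ((wl - peak) / width) ** 2)

        # Rough Gaussian stand-in for the CIE photopic curve V(lambda), peaking at 555 nm.
        V = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)

        spd = led(450, 10) + led(545, 15, 1.2) + led(630, 10, 0.9)   # invented cluster
        ler = 683.0 * (V * spd).sum() / spd.sum()          # lm per W of radiated power
        print(round(ler, 1))

    In the optimization described above, the peak wavelengths, widths and relative powers are the decision variables, and LER is traded off against the memory colour quality metric along the Pareto front.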

  7. An optimal design of wind turbine and ship structure based on neuro-response surface method

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young

    2015-07-01

    The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems involve multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in engineering research, but RSM exhibits prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), which is considered a Neuro-Response Surface Method (NRSM). The optimization is performed on the generated response surface by the non-dominated sorting genetic algorithm II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine considering hydrodynamic performance, and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems.

  8. A niched Pareto tabu search for multi-objective optimal design of groundwater remediation systems

    NASA Astrophysics Data System (ADS)

    Yang, Yun; Wu, Jianfeng; Sun, Xiaomin; Wu, Jichun; Zheng, Chunmiao

    2013-05-01

    This study presents a new multi-objective optimization method, the niched Pareto tabu search (NPTS), for optimal design of groundwater remediation systems. The proposed NPTS is then coupled with the commonly used flow and transport code, MODFLOW and MT3DMS, to search for the near Pareto-optimal tradeoffs of groundwater remediation strategies. The difference between the proposed NPTS and the existing multiple objective tabu search (MOTS) lies in the use of the niche selection strategy and fitness archiving to maintain the diversity of the optimal solutions along the Pareto front and avoid repetitive calculations of the objective functions associated with the flow and transport model. Sensitivity analysis of the NPTS parameters is evaluated through a synthetic pump-and-treat remediation application involving two conflicting objectives, minimizations of both remediation cost and contaminant mass remaining in the aquifer. Moreover, the proposed NPTS is applied to a large-scale pump-and-treat groundwater remediation system of the field site at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts, involving minimizations of both total pumping rates and contaminant mass remaining in the aquifer. Additional comparison of the results based on the NPTS with those obtained from other two methods, namely the single objective tabu search (SOTS) and the nondominated sorting genetic algorithm II (NSGA-II), further indicates that the proposed NPTS has desirable computation efficiency, stability, and robustness and is a promising tool for optimizing the multi-objective design of groundwater remediation systems.

  9. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of the riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystems requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses issues to better fit riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management over the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies producing downstream flows that could meet both human and ecosystem needs. Applications that make this methodology attractive to water resources managers benefit from the wide spread of Pareto-front (optimal) solutions allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.

  10. Broadband liner impedance eduction for multimodal acoustic propagation in the presence of a mean flow

    NASA Astrophysics Data System (ADS)

    Troian, Renata; Dragna, Didier; Bailly, Christophe; Galland, Marie-Annick

    2017-03-01

    A new broadband impedance eduction method is introduced to identify the surface impedance of acoustic liners from in situ measurements on a test rig. Multimodal acoustic propagation is taken into account in order to reproduce realistic conditions. The present approach is based on the resolution of the linearized 3D Euler equations in the time domain. The broadband impedance time domain boundary condition is prescribed from a multipole impedance model, and is formulated as a differential form well-suited for high-order numerical methods. Numerical values of the model coefficients are determined by minimizing the difference between measured and simulated acoustic quantities, namely the insertion loss and wall pressure fluctuations at a few locations inside the duct. The minimization is performed through a multi-objective optimization thanks to the Non-dominated Sorting Genetic Algorithm-II (NSGA-II). The present eduction method is validated with benchmark data provided by NASA for plane wave propagation, and by synthesized numerical data for multimodal propagation.

  11. LID-BMPs planning for urban runoff control and the case study in China.

    PubMed

    Jia, Haifeng; Yao, Hairong; Tang, Ying; Yu, Shaw L; Field, Richard; Tafuri, Anthony N

    2015-02-01

    Low Impact Development Best Management Practices (LID-BMPs) have in recent years received much recognition as cost-effective measures for mitigating urban runoff impacts. In the present paper, a procedure for LID-BMPs planning and analysis using a comprehensive decision support tool is proposed. A case study was conducted on the planning of an LID-BMPs implementation at a college campus in Foshan, Guangdong Province, China. From the information obtained, potential LID-BMPs were first selected. SUSTAIN was then used to analyze four runoff control scenarios, namely: a pre-development scenario; a basic scenario (existing campus development plan without BMP control); Scenario 1 (least-cost BMPs implementation); and Scenario 2 (maximized BMPs performance). A sensitivity analysis was also performed to assess the impact of the hydrologic and water quality parameters. The optimal solution for each of the two LID-BMPs scenarios was obtained by using the non-dominated sorting genetic algorithm-II (NSGA-II). Finally, the cost-effectiveness of the LID-BMPs implementation scenarios was examined by determining the incremental cost for a unit improvement of control.

  12. Research on Knowledge Based Programming and Algorithm Design.

    DTIC Science & Technology

    1981-08-01

    34prime finding" (including the Sieve of Eratosthenes and linear time prime finding). This research is described in sections 6,7,8, and 9. 4 ii. Summary of...algorithm and several variants on prime finding including the Sieve of Eratosthenes and a more sophisticated linear-time algorithm. In these additional

  13. The BR eigenvalue algorithm

    SciTech Connect

    Geist, G.A.; Howell, G.W.; Watkins, D.S.

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  14. Quantum Lattice Algorithms for 2D and 3D Magnetohydrodynamics

    DTIC Science & Technology

    2007-11-01

    Collaboration with Vahala (William & Mary) on both quantum and entropic lattice algorithms for the solution of nonlinear physics problems. Because of the extreme scalability of the algorithms under development, the project was chosen for CAP-Phase II on the 9000-core SGI-Altix at ASC. Subject terms: nonlinear physics; quantum lattice algorithms; entropic lattice algorithms.

  15. Solving molecular docking problems with multi-objective metaheuristics.

    PubMed

    García-Godoy, María Jesús; López-Camacho, Esteban; García-Nieto, José; Nebro, Antonio J; Aldana-Montes, José F

    2015-06-02

    Molecular docking is a hard optimization problem that has been tackled in the past with metaheuristics, demonstrating new and challenging results when looking for one objective: the minimum binding energy. However, only a few papers can be found in the literature that deal with this problem by means of a multi-objective approach, and no experimental comparisons have been made in order to clarify which of them has the best overall performance. In this paper, we use and compare, for the first time, a set of representative multi-objective optimization algorithms applied to solve complex molecular docking problems. The approach followed is focused on optimizing the intermolecular and intramolecular energies as two main objectives to minimize. Specifically, these algorithms are: two variants of the non-dominated sorting genetic algorithm II (NSGA-II), speed modulation multi-objective particle swarm optimization (SMPSO), third evolution step of generalized differential evolution (GDE3), multi-objective evolutionary algorithm based on decomposition (MOEA/D) and S-metric evolutionary multi-objective optimization (SMS-EMOA). We assess the performance of the algorithms by applying quality indicators intended to measure convergence and the diversity of the generated Pareto front approximations. We carry out a comparison with another reference mono-objective algorithm in the problem domain (Lamarckian genetic algorithm (LGA) provided by the AutoDock tool). Furthermore, the ligand binding site and molecular interactions of computed solutions are analyzed, showing promising results for the multi-objective approaches. In addition, a case study of application for aeroplysinin-1 is performed, showing the effectiveness of our multi-objective approach in drug discovery.

  16. Uses of clinical algorithms.

    PubMed

    Margolis, C Z

    1983-02-04

    The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
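
    To make the flow-chart format concrete, a clinical algorithm can be represented as a small decision graph and executed step by step. The nodes and questions below are invented placeholders, not a real care protocol:

        # Sketch: a clinical algorithm (flow chart) as a dictionary of decision nodes.
        # Each node asks a yes/no question; leaves are action strings.
        algorithm = {
            "start":    ("Fever above 39 C?",      "assess_a",           "assess_b"),
            "assess_a": ("Symptoms for > 3 days?", "refer to physician", "home care, recheck"),
            "assess_b": ("Sore throat present?",   "throat culture",     "home care"),
        }

        def run(node, answers):
            """Walk the chart, consuming one yes/no answer per decision node."""
            while node in algorithm:
                question, if_yes, if_no = algorithm[node]
                node = if_yes if answers.pop(0) else if_no
            return node                       # a leaf = recommended action

        print(run("start", [True, False]))    # -> 'home care, recheck'

    The same data structure supports the auditing use discussed above: each consultation's answers trace a unique path that can be checked against the chart.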

  17. FAQs II

    ERIC Educational Resources Information Center

    Kezar, Adrianna; Frank, Vikki; Lester, Jaime; Yang, Hannah

    2008-01-01

    In their paper entitled "Why should postsecondary institutions consider partnering to offer Individual Development Accounts (IDAs)?" the authors reviewed frequently asked questions they encountered from higher education professionals about IDAs, but as their research continued so did the questions. FAQ II has more in-depth questions and…

  18. SAGE II

    Atmospheric Science Data Center

    2016-02-16

    Measurements of stratospheric aerosols, ozone, nitrogen dioxide, water vapor and cloud occurrence by mapping vertical profiles. The V7.00 release improves comparisons with other instruments (i.e. MLS and SAGE III versus HALOE) and fixes various bugs; details are in the SAGE II V7.00 Release Notes.

  19. Balanced 0, + or - Matrices. Part 2. Recognition Algorithm

    DTIC Science & Technology

    1994-01-22

    Balanced 0, ±1 Matrices, Part II: Recognition Algorithm, by Michele Conforti, Gérard Cornuéjols, Ajai Kapoor and Kristina Vušković (Dipartimento di Matematica Pura ed Applicata, Università di Padova, Via Belzoni 7, 35131 Padova, Italy). The report presents a recognition algorithm for balanced 0, ±1 matrices, based on a decomposition theorem proved in a companion paper. (Most of the scanned cover page is illegible.)

  20. Gamma II

    NASA Astrophysics Data System (ADS)

    Barker, Thurburn; Castelaz, M.; Cline, J.; Owen, L.; Boehme, J.; Rottler, L.; Whitworth, C.; Clavier, D.

    2011-05-01

    GAMMA II is the Guide Star Automatic Measuring MAchine relocated from STScI to the Astronomical Photographic Data Archive (APDA) at the Pisgah Astronomical Research Institute (PARI). GAMMA II is a multi-channel laser-scanning microdensitometer that was used to measure POSS and SERC plates to create the Guide Star Catalog and the Digital Sky Survey. The microdensitometer is designed with submicron accuracy in x and y measurements using a HP 5507 laser interferometer, 15 micron sampling, and the capability to measure plates as large as 0.5-m across. GAMMA II is a vital instrument for the success of digitizing the direct, objective prism, and spectra photographic plate collections in APDA for research. We plan several targeted projects. One is a collaboration with Drs. P.D. Hemenway and R. L. Duncombe who plan to scan 1000 plates of 34 minor planets to identify systematic errors in the Fundamental System of celestial coordinates. Another is a collaboration with Dr. R. Hudec (Astronomical Institute, Academy of Sciences of the Czech Republic) who is working within the Gaia Variability Unit CU7 to digitize objective prism spectra on the Henize plates and Burrell-Schmidt plates located in APDA. These low dispersion spectral plates provide optical counterparts of celestial high-energy sources and cataclysmic variables enabling the simulation of Gaia BP/RP outputs. The astronomical community is invited to explore the more than 140,000 plates from 20 observatories now archived in APDA, and use GAMMA II. The process of relocating GAMMA to APDA, re-commissioning, and starting up the production scan programs will be described. Also, we will present planned research and future upgrades to GAMMA II.

  1. Software For Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steve E.

    1992-01-01

    The SPLICER computer program is a genetic-algorithm software tool used to solve search and optimization problems. It provides the underlying framework and structure for building genetic-algorithm application programs. Written in Think C.

  2. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    Algorithm-development activities at USF continue. The algorithm for determining chlorophyll a concentration (Chl a) and the gelbstoff absorption coefficient for SeaWiFS and MODIS-N radiance data is our current priority.

  3. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
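
    As a minimal concrete instance of the accuracy notion discussed, the sketch below compares second- and fourth-order central-difference approximations of a first derivative; the stencil coefficients are the standard ones, and the test function is arbitrary:

        import numpy as np

        # Sketch: order of accuracy of central-difference first-derivative stencils.
        f, dfdx, x0 = np.sin, np.cos, 1.0

        for h in (0.1, 0.05, 0.025):
            d2 = (f(x0 + h) - f(x0 - h)) / (2 * h)                    # 2nd order
            d4 = (-f(x0 + 2*h) + 8*f(x0 + h)
                  - 8*f(x0 - h) + f(x0 - 2*h)) / (12 * h)             # 4th order
            print(h, abs(d2 - dfdx(x0)), abs(d4 - dfdx(x0)))
        # Halving h cuts the 2nd-order error ~4x and the 4th-order error ~16x.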

  4. Quantum algorithms: an overview

    NASA Astrophysics Data System (ADS)

    Montanaro, Ashley

    2016-01-01

    Quantum computers are designed to outperform standard computers by running quantum algorithms. Areas in which quantum algorithms can be applied include cryptography, search and optimisation, simulation of quantum systems and solving large systems of linear equations. Here we briefly survey some known quantum algorithms, with an emphasis on a broad overview of their applications rather than their technical details. We include a discussion of recent developments and near-term applications of quantum algorithms.

  5. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  6. Clustering algorithm studies

    NASA Astrophysics Data System (ADS)

    Graf, Norman A.

    2001-07-01

    An object-oriented framework for undertaking clustering algorithm studies has been developed. We present here the definitions for the abstract Cells and Clusters as well as the interface for the algorithm. We intend to use this framework to investigate the interplay between various clustering algorithms and the resulting jet reconstruction efficiency and energy resolutions to assist in the design of the calorimeter detector.

  7. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.

  8. PORT II

    NASA Technical Reports Server (NTRS)

    Muniz, Beau

    2009-01-01

    One unique project that the Prototype Lab worked on was PORT I (Post-landing Orion Recovery Test). PORT is designed to test and develop the system and components needed to recover the Orion capsule once it splashes down in the ocean. PORT II is designated as a follow-up to PORT I that will utilize a mock-up pressure vessel that is spatially comparable to the final Orion capsule.

  9. Multimethod evolutionary search for the regional calibration of rainfall-runoff models

    NASA Astrophysics Data System (ADS)

    Lombardi, Laura; Castiglioni, Simone; Toth, Elena; Castellarin, Attilio; Montanari, Alberto

    2010-05-01

    The study focuses on regional calibration for a generic rainfall-runoff model. The maximum likelihood function in the spectral domain proposed by Whittle is approximated in the time domain by maximising the simultaneous fit (through a multiobjective optimisation) of selected statistics of streamflow values, with the aim of proposing a calibration procedure that can be applied at regional scale. The method may in fact be applied without actual time series of streamflow observations, since it is based exclusively on the selected statistics, which are here obtained from the dominant climate and catchment characteristics through regional regression relationships. The multiobjective optimisation was carried out using a recently proposed multimethod evolutionary search algorithm (AMALGAM; Vrugt and Robinson, 2007), which runs simultaneously, for population evolution, a set of different optimisation methods (namely NSGA-II, Differential Evolution, Adaptive Metropolis Search and Particle Swarm Optimisation), combining their respective strengths by adaptively updating the weights of the individual methods based on their reproductive success. This ensures a fast, reliable and computationally efficient solution to multiobjective optimisation problems. The proposed technique is applied to the case study of some catchments located in central Italy, which are treated as ungauged and are located in a region where detailed hydrological and geomorphoclimatic information is available. The results obtained with the regional calibration are compared with those provided by a classical least-squares calibration in the time domain. The outcomes of the analysis confirm the potential of the proposed methodology.
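
    The adaptive-weighting idea behind AMALGAM can be sketched as follows; the sub-method names, population size and minimum share n_min are illustrative assumptions, not Vrugt and Robinson's implementation:

      # Sketch: each variation sub-method gets a share of the N offspring
      # proportional to how many of its previous offspring survived selection.
      methods = ["nsga2", "de", "ams", "pso"]
      N = 100
      counts = {m: N // len(methods) for m in methods}   # equal shares initially

      def update_shares(survivors_by_method, n_min=5):
          total = sum(survivors_by_method.values())
          shares = {}
          for m in methods:
              raw = N * survivors_by_method.get(m, 0) / max(total, 1)
              shares[m] = max(n_min, int(round(raw)))    # keep every method alive
          return shares

      # e.g. after one generation 60 NSGA-II, 25 DE, 10 AMS and 5 PSO
      # children survived into the next population:
      counts = update_shares({"nsga2": 60, "de": 25, "ams": 10, "pso": 5})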

  10. Long Series Multi-objectives Optimal Operation of Water And Sediment Regulation

    NASA Astrophysics Data System (ADS)

    Bai, T.; Jin, W.

    2015-12-01

    Secondary suspended reaches have formed along the Inner Mongolia section of the river, threatening the security of the reach and the ecological health of the river. Research on water-sediment regulation by cascade reservoirs is therefore urgent and necessary. Against this background, multi-objective water and sediment regulation is studied in this paper. Firstly, multi-objective optimal operation models of the Longyangxia and Liujiaxia cascade reservoirs are established. Secondly, based on constraint-handling and feasible-search-space techniques, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is substantially improved to solve the model. Thirdly, four different scenarios are set. It is demonstrated that: (1) scatter diagrams of the Pareto front show the optimal solutions for power generation maximization and sediment maximization, and the global equilibrium solutions between the two; (2) the potential of water-sediment regulation by the Longyangxia and Liujiaxia cascade reservoirs is analyzed; (3) with increasing water supply in the future, a conflict between water supply and water-sediment regulation arises, and the sustainability of water and sediment regulation will be negatively affected by the decreasing transferable water in the cascade reservoirs; (4) the transfer project has little benefit for water-sediment regulation. The results have important practical significance for water-sediment regulation by cascade reservoirs in the Upper Yellow River and for constructing a water and sediment control system in the whole Yellow River basin.
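
    The fast non-dominated sorting at the heart of NSGA-II (Deb et al.) can be rendered in a few lines of Python; this is a generic sketch, not the authors' improved variant:

      def dominates(a, b):
          # a dominates b if it is no worse in every objective and strictly
          # better in at least one (all objectives minimized)
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def fast_nondominated_sort(F):
          """Return Pareto fronts (lists of indices) for objective vectors F."""
          S = [[] for _ in F]          # solutions dominated by i
          n = [0] * len(F)             # number of solutions dominating i
          fronts = [[]]
          for i, fi in enumerate(F):
              for j, fj in enumerate(F):
                  if dominates(fi, fj):
                      S[i].append(j)
                  elif dominates(fj, fi):
                      n[i] += 1
              if n[i] == 0:
                  fronts[0].append(i)
          k = 0
          while fronts[k]:
              nxt = []
              for i in fronts[k]:
                  for j in S[i]:
                      n[j] -= 1
                      if n[j] == 0:
                          nxt.append(j)
              fronts.append(nxt)
              k += 1
          return fronts[:-1]

      # e.g. to maximize generation and sediment transport, negate both:
      # fronts = fast_nondominated_sort([(-gen, -sed) for gen, sed in solutions])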

  11. New model for sustainable management of pressurized irrigation networks. Application to Bembézar MD irrigation district (Spain).

    PubMed

    Carrillo Cobo, M T; Camacho Poyato, E; Montesinos, P; Rodríguez Díaz, J A

    2014-03-01

    Pressurized irrigation networks require large amounts of energy for their operation, which is linked to significant greenhouse gas (GHG) emissions. In recent years, several management strategies have been developed to reduce energy consumption in the agricultural sector. One strategy is the reduction of the water supplied for irrigation, but this implies a reduction in crop yields and farmers' profits. In this work, a new methodology is developed for sustainable management of irrigation networks considering environmental and economic criteria. The multiobjective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been selected to obtain the optimum irrigation pattern that would reduce GHG emissions and increase profits. This methodology has been applied to the Bembézar Margen Derecha (BMD) irrigation district (Spain). Irrigation patterns that reduce GHG emissions or increase actual profits are obtained. The best irrigation pattern reduces current GHG emissions by 8.56% while increasing actual profits by 14.56%. These results confirm that simultaneous improvements in environmental and economic factors are possible.

  12. Hydraulic design of a low-specific speed Francis runner for a hydraulic cooling tower

    NASA Astrophysics Data System (ADS)

    Ruan, H.; Luo, X. Q.; Liao, W. L.; Zhao, Y. P.

    2012-11-01

    The air blower in a cooling tower is normally driven by an electric motor, and the electric energy consumed by the motor is substantial. The remaining energy at the outlet of the cooling cycle is considerable; it can be used to drive a hydraulic turbine and consequently to rotate the air blower. The purpose of this project is to recycle energy, lower energy consumption and reduce pollutant discharge. Firstly, a second-order polynomial is proposed to describe the blade setting angle distribution law along the meridional streamline in the streamline equation. The runner is designed by the point-to-point integration method with a specific blade setting angle distribution. Three different ultra-low-specific-speed Francis runners with different wrap angles are obtained with this method. Secondly, based on CFD numerical simulations, the effects of the blade setting angle distribution on the pressure coefficient distribution and relative efficiency are analyzed. Finally, with the blade inlet and outlet angles and the control coefficients of the blade setting angle distribution law as design variables, and efficiency and minimum pressure as objective functions, a multi-objective optimization of the ultra-low-specific-speed Francis runner is carried out using the NSGA-II algorithm. The results show that the optimal runner has higher efficiency and better cavitation performance.

  13. Evolutionary multiobjective design of a flexible caudal fin for robotic fish.

    PubMed

    Clark, Anthony J; Tan, Xiaobo; McKinley, Philip K

    2015-11-25

    Robotic fish accomplish swimming by deforming their bodies or other fin-like appendages. As an emerging class of embedded computing system, robotic fish are anticipated to play an important role in environmental monitoring, inspection of underwater structures, tracking of hazardous wastes and oil spills, and the study of live fish behaviors. While integration of flexible materials (into the fins and/or body) holds the promise of improved swimming performance (in terms of both speed and maneuverability) for these robots, such components also introduce significant design challenges due to the complex material mechanics and hydrodynamic interactions. The problem is further exacerbated by the need for the robots to meet multiple objectives (e.g., both speed and energy efficiency). In this paper, we propose an evolutionary multiobjective optimization approach to the design and control of a robotic fish with a flexible caudal fin. Specifically, we use the NSGA-II algorithm to investigate morphological and control parameter values that optimize swimming speed and power usage. Several evolved fin designs are validated experimentally with a small robotic fish, where fins of different stiffness values and sizes are printed with a multi-material 3D printer. Experimental results confirm the effectiveness of the proposed design approach in balancing the two competing objectives.
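
    A two-objective search of this kind can be set up compactly with, for example, the pymoo library's NSGA-II implementation; the FinDesign problem below is a hypothetical stand-in with toy surrogate objectives, since the paper's speed and power models come from simulation and experiments:

      import numpy as np
      from pymoo.algorithms.moo.nsga2 import NSGA2
      from pymoo.core.problem import ElementwiseProblem
      from pymoo.optimize import minimize

      class FinDesign(ElementwiseProblem):
          """Hypothetical stand-in: x = (fin stiffness, fin length);
          objectives = (-speed, power). All model terms are toy surrogates."""
          def __init__(self):
              super().__init__(n_var=2, n_obj=2, xl=[0.1, 0.02], xu=[10.0, 0.10])

          def _evaluate(self, x, out, *args, **kwargs):
              stiffness, length = x
              speed = length * stiffness / (1.0 + 0.3 * stiffness)  # toy surrogate
              power = 0.5 * stiffness * length ** 2                 # toy surrogate
              out["F"] = [-speed, power]   # NSGA-II minimizes, so negate speed

      res = minimize(FinDesign(), NSGA2(pop_size=60), ("n_gen", 100), seed=1)
      print(res.F[:5])   # sampled trade-offs between speed and power usage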

  14. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation.

  15. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes the partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that 30 is the best choice for the number of iterations in the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we conduct a comparison experiment between LMCpri and a cloud-assisted architecture; the results reveal that LMCpri offers a better performance advantage than the cloud-assisted architecture.

  16. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs

    PubMed Central

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes the partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that 30 is the best choice for the number of iterations in the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we conduct a comparison experiment between LMCpri and a cloud-assisted architecture; the results reveal that LMCpri offers a better performance advantage than the cloud-assisted architecture. PMID:27419854

  17. Pesticins. II. Production of Pesticin I and II

    PubMed Central

    Brubaker, Robert R.; Surgalla, Michael J.

    1962-01-01

    Brubaker, Robert R. (Fort Detrick, Frederick, Md.) and Michael J. Surgalla. Pesticins. II. Production of pesticin I and II. J. Bacteriol. 84:539–545. 1962.—Pesticin I was separated from pesticin I inhibitor by ion-exchange chromatography of cell-free culture supernatant fluids and by acid precipitation of soluble preparations obtained from mechanically disrupted cells. The latter procedure resulted in formation of an insoluble pesticin I complex which, upon removal by centrifugation and subsequent dissolution in neutral buffer, exhibited a 100- to 1,000-fold increase in antibacterial activity over that originally observed. However, activity returned to the former level upon addition of the acid-soluble fraction, which contained pesticin I inhibitor. Since the presence of pesticin I inhibitor leads to serious errors in the determination of pesticin I, an assay medium containing ethylenediaminetetraacetic acid in excess of Ca++ was developed; this medium eliminated the effect of the inhibitor. By use of the above medium, sufficient pesticin I was found to be contained within 500 nonirradiated cells to inhibit growth of a suitable indicator strain; at least 10^7 cells were required to effect a corresponding inhibition by pesticin II. Although both pesticins are located primarily within the cell during growth, pesticin I may arise extracellularly during storage of static cells. Slightly higher activity of pesticin I inhibitor was found in culture supernatant fluids than occurred in corresponding cell extracts of equal volume. The differences and similarities between pesticin I and some known bacteriocins are discussed. PMID:14016110

  18. SAGE II Version 7.00 Release

    Atmospheric Science Data Center

    2013-07-10

    ... algorithms from SAGE III v4.00 Ceased removal of the water vapor extinction in the 600nm channel due to uncertainty in the H2O ... (i.e. MLS and SAGE III versus HALOE) Fixed various bugs   Details are in the SAGE II V7.00 Release Notes . The ...

  19. License plate detection algorithm

    NASA Astrophysics Data System (ADS)

    Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds

    2013-12-01

    A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera location during our tests, and the resulting geometrical distortion and interference from trees, this result can be considered passable. Correlations between source data, such as license plate dimensions and texture or camera location, and the parameters of the algorithm were also defined.

  20. Distributed Minimum Hop Algorithms

    DTIC Science & Technology

    1982-01-01

    acknowledgement), node d starts iteration i+1, and otherwise the algorithm terminates. A detailed description of the algorithm is given in pidgin algol... The precise behavior of the algorithm under these circumstances is described by the pidgin algol program in the appendix, which is executed by each node... Algorithm D1 in Pidgin Algol...

  1. A New Component Labelling And Merging Algorithm

    NASA Astrophysics Data System (ADS)

    Lochovsky, Amelia F.

    1987-10-01

    Component labelling is an important part of region analysis in image processing. Component labelling consists of assigning labels to pixels in the image such that adjacent pixels are given the same labels. There are various approaches to component labelling. Some require random access to the processed image; some assume a special structure of the image, such as a quad tree. Algorithms based on a sequential scan of the image are attractive for hardware implementation. One method of labelling is based on a fixed-size local window which includes the previous line. Due to the fixed-size window and the sequential fashion of the labelling process, different branches of the same object may be given different labels and later found to be connected to each other. These labels are considered to be equivalent and must later be collected to correctly represent one single object. This approach can be found in [F,FE,R]. Assume an input binary image of size NxM. Using these labelling algorithms, the number of equivalent pairs generated is bounded by O(N*M). The number of distinct labels is also bounded by O(N*M). There is no known algorithm that merges the equivalent label pairs in time linear in the number of pairs, that is, in time bounded by O(N*M). We propose a new labelling algorithm which interleaves the labelling with the merging process: the labelling and the merging are combined in one algorithm. Merged label information is kept in an equivalence table which is used to guide the labelling. In general, the algorithm produces fewer equivalent label pairs. The combined labelling and merging algorithm is O(N*M), where NxM is the size of the image. Section II describes the algorithm. Section III gives some examples. We discuss implementation issues in Section IV, and further discussion and conclusions are given in Section V.
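
    For comparison, the classical approach that the paper improves on can be sketched as a two-pass scan in which equivalences are merged with a union-find structure as they are discovered; this is a generic illustration, not the authors' single-pass interleaved algorithm:

      import numpy as np

      def label_components(img):
          """Two-pass 4-connected labelling of a binary image; equivalent
          labels are merged via union-find as they are found."""
          parent = {}                      # union-find over provisional labels

          def find(a):
              while parent[a] != a:
                  parent[a] = parent[parent[a]]   # path halving
                  a = parent[a]
              return a

          def union(a, b):
              ra, rb = find(a), find(b)
              if ra != rb:
                  parent[max(ra, rb)] = min(ra, rb)

          labels = np.zeros(img.shape, dtype=int)
          nxt = 1
          H, W = img.shape
          for r in range(H):
              for c in range(W):
                  if not img[r, c]:
                      continue
                  up = labels[r - 1, c] if r > 0 else 0
                  left = labels[r, c - 1] if c > 0 else 0
                  if up == 0 and left == 0:
                      parent[nxt] = nxt
                      labels[r, c] = nxt
                      nxt += 1
                  else:
                      labels[r, c] = min(x for x in (up, left) if x)
                      if up and left:
                          union(up, left)  # record the equivalence immediately
          for r in range(H):               # second pass: resolve equivalences
              for c in range(W):
                  if labels[r, c]:
                      labels[r, c] = find(labels[r, c])
          return labels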

  2. Unexpected Ni(II) and Cu(II) polynuclear assemblies--a balance between ligand and metal ion coordination preferences.

    PubMed

    Shuvaev, Konstantin V; Tandon, Santokh S; Dawe, Louise N; Thompson, Laurence K

    2010-07-14

    Polytopic ligand design involves matching the coordination pocket composition with the metal ion coordination 'algorithm', but despite targeting [4 x 4] grids as the final outcome, metal ion preferences and ligand control can lead to widely varying complexes in the self-assembly process with Ni(II) and Cu(II).

  3. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
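
    A minimal sketch of the shift-and-mask search described above might look as follows; the function name, search bounds and example keys are illustrative assumptions:

      def find_shift_mask(keys, max_shift=32, max_bits=8):
          """Brute-force search: find a right shift and a contiguous bit mask
          under which every key maps to a distinct value."""
          for shift in range(max_shift):
              for bits in range(1, max_bits + 1):
                  mask = (1 << bits) - 1
                  hashed = {(k >> shift) & mask for k in keys}
                  if len(hashed) == len(keys):    # a unique number per key
                      return shift, mask
          return None

      # e.g. a static set of key hashes:
      keys = [0x3F2A, 0x9B10, 0x55C4, 0x1E78]
      print(find_shift_mask(keys))  # a (shift, mask) pair giving collision-free lookup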

  4. Transitional Division Algorithms.

    ERIC Educational Resources Information Center

    Laing, Robert A.; Meyer, Ruth Ann

    1982-01-01

    A survey of general mathematics students whose teachers were taking an inservice workshop revealed that they had not yet mastered division. More direct introduction of the standard division algorithm is favored in elementary grades, with instruction of transitional processes curtailed. Weaknesses in transitional algorithms appear to outweigh…

  5. Ultrametric Hierarchical Clustering Algorithms.

    ERIC Educational Resources Information Center

    Milligan, Glenn W.

    1979-01-01

    Johnson has shown that the single linkage and complete linkage hierarchical clustering algorithms induce a metric on the data known as the ultrametric. Johnson's proof is extended to four other common clustering algorithms. Two additional methods also produce hierarchical structures which can violate the ultrametric inequality. (Author/CTM)

  6. The Training Effectiveness Algorithm.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    1988-01-01

    Describes the Training Effectiveness Algorithm, a systematic procedure for identifying the cause of reported training problems which was developed for use in the U.S. Navy. A two-step review by subject matter experts is explained, and applications of the algorithm to other organizations and training systems are discussed. (Author/LRW)

  7. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  8. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  9. Type II Quantum Computing Algorithm For Computational Fluid Dynamics

    DTIC Science & Technology

    2006-03-01

    is the Moore-Penrose pseudoinverse [30]... The second method is to multiply both sides of (4.27) by a "generalized inverse" J^-1_gen, which Yepez has invented. This matrix is similar to the Moore-Penrose pseudoinverse... his generalized inverse. The generalized inverse is analogous to the inverse of a nonsingular square matrix, M^-1 = S Λ^-1 S^-1. Yepez uses an...

  10. Optimum design of phononic crystal perforated plate structures for widest bandgap of fundamental guided wave modes and maximized in-plane stiffness

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Abhary, Kazem; Uddin, Mohammad; Ng, Ching-Tai

    2016-04-01

    This paper presents a topology optimization of a single-material phononic crystal plate (PhP) to be produced by perforation of a uniform background plate. The primary objective of this optimization study is to explore the widest exclusive bandgaps of the fundamental (first-order) symmetric or asymmetric guided wave modes, as well as the widest complete bandgap of mixed wave modes (symmetric and asymmetric). In the case of single-material porous phononic crystals, however, the bandgap width essentially depends on the structural integration introduced by the achieved unitcell topology. Thinner connections of scattering segments (i.e. lower effective stiffness) generally lead to (i) a wider bandgap, due to enhanced interfacial reflections, and (ii) a lower bandgap frequency range, due to lower wave speed. In other words, higher relative bandgap width (RBW) is produced by topologies with lower effective stiffness. Hence, in order to study the bandgap efficiency of the PhP unitcell with respect to its structural worthiness, the in-plane stiffness is incorporated into the optimization algorithm as an opposing objective to be maximized. Thick and relatively thin polysilicon PhP unitcells with square symmetry are studied. The non-dominated sorting genetic algorithm NSGA-II is employed for this multi-objective optimization problem, and modal band analysis of individual topologies is performed through the finite element method. Specialized topology initiation, evaluation and filtering are applied to achieve refined feasible topologies without penalizing the randomness of the genetic algorithm (GA) or the diversity of the search space. Selected Pareto topologies are presented, and the gradients of RBW and elastic properties between the two Pareto-front extremes are investigated. The chosen intermediate Pareto topologies, even though they are not the extreme topologies with the widest bandgap, show superior bandgap efficiency compared with the widest-bandgap topologies for asymmetric guided waves reported in other works available in the literature.

  11. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  12. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

    Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.

  13. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  14. Inclusive Flavour Tagging Algorithm

    NASA Astrophysics Data System (ADS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-10-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.

  15. OpenEIS Algorithms

    SciTech Connect

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  16. Implementation of Parallel Algorithms

    DTIC Science & Technology

    1993-06-30

    their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical... quantization based compression schemes. Photo-refractive crystals, which provide high density recording in real time, are used as our holographic media. The... of Parallel Algorithms (J. Reif, ed.), Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in

  17. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most

  18. Parallel Wolff Cluster Algorithms

    NASA Astrophysics Data System (ADS)

    Bae, S.; Ko, S. H.; Coddington, P. D.

    The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.

  19. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    SciTech Connect

    Grant, C W; Lenderman, J S; Gansemer, J D

    2011-02-24

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect revised deliverables resulting from delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  20. A Cooperative Framework for Fireworks Algorithm.

    PubMed

    Zheng, Shaoqiu; Li, Junzhi; Janecek, Andreas; Tan, Ying

    2017-01-01

    This paper presents a cooperative framework for the fireworks algorithm (CoFFWA). A detailed analysis of the existing fireworks algorithm (FWA) and its recently developed variants has revealed that (i) the current selection strategy has the drawback that the contribution of the firework with the best fitness (denoted as the core firework) overwhelms the contributions of all other fireworks (non-core fireworks) in the explosion operator, and (ii) the Gaussian mutation operator is not as effective as it is designed to be. To overcome these limitations, CoFFWA is proposed, which significantly improves the exploitation capability by using an independent selection method and also increases the exploration capability by incorporating a crowdedness-avoiding cooperative strategy among the fireworks. Experimental results on the CEC2013 benchmark functions indicate that CoFFWA outperforms the state-of-the-art FWA variants, artificial bee colony, differential evolution, and the standard particle swarm optimization SPSO2007/SPSO2011 in terms of convergence performance.

  1. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary-point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
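
    A toy rendering of the self-adapting step-size idea (not Hart's exact adaptation rule or convergence-certified stopping test) is:

      import random

      def epsa_like(f, x, step=1.0, shrink=0.5, grow=2.0, tol=1e-6, max_iter=10000):
          """Expand the mutation step after a success, contract it after a
          failure; stop when the step is small, using step size as a proxy
          for distance to a stationary point."""
          fx = f(x)
          for _ in range(max_iter):
              if step < tol:
                  break                      # stopping rule on the step size
              y = [xi + step * random.uniform(-1, 1) for xi in x]
              fy = f(y)
              if fy < fx:
                  x, fx, step = y, fy, step * grow
              else:
                  step *= shrink
          return x, fx

      xbest, fbest = epsa_like(lambda v: sum(t * t for t in v), [3.0, -2.0])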

  2. Algorithms and Libraries

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our inquiry into algorithms and applications that would benefit from a latency-tolerant approach to algorithm building, including the construction of new algorithms where appropriate. In a multithreaded execution, when a processor reaches a point where remote memory access is necessary, the request is sent out on the network and a context switch occurs to a new thread of computation. This effectively masks a long and unpredictable latency due to remote loads, thereby providing tolerance to remote access latency. We began to develop standards to profile various algorithm and application parameters, such as the degree of parallelism, granularity, precision, instruction set mix, interprocessor communication, latency, etc. These tools will continue to develop and evolve as the Information Power Grid environment matures. To provide a richer context for this research, the project also focused on issues of fault tolerance and computation migration of numerical algorithms and software. During the initial phase we tried to increase our understanding of the bottlenecks in single-processor performance. Our work began by developing an approach for the automatic generation and optimization of numerical software for processors with deep memory hierarchies and pipelined functional units. Based on the results we achieved in this study, we are planning to study other architectures of interest, including development of cost models and code generators appropriate to these architectures.

  3. Algorithmization in Learning and Instruction.

    ERIC Educational Resources Information Center

    Landa, L. N.

    An introduction to the theory of algorithms reviews the theoretical issues of teaching algorithms, the logical and psychological problems of devising algorithms of identification, and the selection of efficient algorithms; and then relates all of these to the classroom teaching process. It also describes some major research on the effectiveness of…

  4. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and were implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how accurate the estimate of the spectra is to the actual spectra. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and graphs used on this data are also included.

  5. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm, but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
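
    Numerically, the described procedure can be sketched as follows; the emissivity tie points and brightness temperatures are illustrative values, not the algorithm's actual coefficients:

      # Sketch of the stated steps (all numbers are made-up placeholders).
      e_ice_6, e_water_6 = 0.92, 0.55        # assumed 6 GHz emissivities
      C_boot = 0.80                           # concentration from current Bootstrap
      Tb6, Tb18, Tb37 = 240.0, 235.0, 230.0   # example brightness temperatures (K)

      e_eff_6 = C_boot * e_ice_6 + (1 - C_boot) * e_water_6   # mixing formulation
      Ts = Tb6 / e_eff_6                      # effective surface (ice) temperature
      e18, e37 = Tb18 / Ts, Tb37 / Ts         # convert Tb to emissivities
      # e18/e37 then replace Tb18/Tb37 in the usual Bootstrap concentration retrieval.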

  6. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on direct manipulation of the pixel intensity outputs of the sensor. In one-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.

  7. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
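
    The kernel least-mean-square algorithm that KAPA builds on (KAPA reuses the most recent K inputs per update instead of one) can be sketched as follows, assuming a Gaussian kernel:

      import numpy as np

      def gauss(a, b, sigma=1.0):
          return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

      def klms(X, d, eta=0.2):
          """Kernel LMS: keep the inputs seen so far as centers; each new
          scaled error becomes the coefficient of a new center."""
          centers, alphas, preds = [], [], []
          for x, target in zip(X, d):
              y = sum(a * gauss(c, x) for a, c in zip(alphas, centers))
              e = target - y              # instantaneous error
              centers.append(x)
              alphas.append(eta * e)      # gradient step in the feature space
              preds.append(y)
          return centers, alphas, preds

      # e.g. learn y = sin(x) online:
      X = np.random.uniform(-3, 3, size=(200, 1))
      _, _, preds = klms(X, np.sin(X[:, 0]))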

  8. Parallel Algorithms and Patterns

    SciTech Connect

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.

  9. Improved Chaff Solution Algorithm

    DTIC Science & Technology

    2009-03-01

    As part of the Technology Demonstration Program (TDP) on the integration of shipboard sensors and weapon systems (SISWS), an algorithm was developed to automatically determine...

  10. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  11. Risk Bounds for Regularized Least-Squares Algorithm with Operator-Value Kernels

    DTIC Science & Technology

    2005-05-16

    for regularized least-squares algorithm with operator-valued kernels. Ernesto De Vito (Dipartimento di Matematica, Università...), Andrea Caponnetto... National Science Foundation (ITR/SYS) Contract No. IIS-0112991, National Science Foundation (ITR) Contract No. IIS-0209289, National Science...

  12. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
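
    The optimize-and-relinearize loop can be sketched with SciPy's linear programming routine; the model terms, bounds and convergence test below are made-up illustrations, not the flight algorithm (a real implementation would re-evaluate the sensitivity matrix from the onboard nonlinear models at each new operating point):

      import numpy as np
      from scipy.optimize import linprog

      def psc_like(y0, S, c, trim_bound=0.05, n_outer=20, tol=1e-6):
          """At each operating point a linear model y ~ y0 + S @ u gives
          sensitivities S of propulsion parameters to control trims u;
          an LP picks small trims, the operating point moves, repeat."""
          u_total = np.zeros(S.shape[1])
          for _ in range(n_outer):
              # minimize c @ y = c @ (y0 + S @ u) subject to per-step trim limits
              res = linprog(c @ S, bounds=[(-trim_bound, trim_bound)] * S.shape[1])
              if not res.success or abs(c @ S @ res.x) < tol:
                  break                  # no further improvement: stop at optimum
              u_total += res.x
              y0 = y0 + S @ res.x        # move the operating point, re-linearize
          return u_total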

  13. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  14. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, as composed of a sequence of generic elementary gates.

  15. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  16. Algorithm for reaction classification.

    PubMed

    Kraut, Hans; Eiblmaier, Josef; Grethe, Guenter; Löw, Peter; Matuszczyk, Heinz; Saller, Heinz

    2013-11-25

    Reaction classification has important applications, and many approaches to classification have been applied. Our own algorithm tests all maximum common substructures (MCS) between all reactant and product molecules in order to find an atom mapping containing the minimum chemical distance (MCD). Recent publications have concluded that new MCS algorithms need to be compared with existing methods in a reproducible environment, preferably on a generalized test set, yet the number of test sets available is small, and they are not truly representative of the range of reactions that occur in real reaction databases. We have designed a challenging test set of reactions and are making it publicly available and usable with InfoChem's software or other classification algorithms. We supply a representative set of example reactions, grouped into different levels of difficulty, from a large number of reaction databases that chemists actually encounter in practice, in order to demonstrate the basic requirements for a mapping algorithm to detect the reaction centers in a consistent way. We invite the scientific community to contribute to the future extension and improvement of this data set, to achieve the goal of a common standard.

  17. Robotic Follow Algorithm

    SciTech Connect

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems, and with thermal or visual tracking as well as other tracking methods such as radio frequency tags.

  18. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic-gradient Bussgang-type algorithm, is given to deduce two low-computation-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two base algorithms are reworked: the Godard algorithm and the decision-controlled algorithm. Using a normalization procedure, and block normalization, their performances are improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms; it thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-controlled algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulations of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing relative to the initial algorithms. The improvement in residual error was much smaller. These performances come close to making autodidactic equalization usable in mobile radio systems.
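
    The Godard (constant-modulus) stochastic-gradient update at the core of such blind equalizers can be sketched as follows; the tap count, step size and modulus constant are illustrative:

      import numpy as np

      def cma_equalizer(x, n_taps=11, mu=1e-3, R2=1.0):
          """Godard/constant-modulus update: no training sequence is needed;
          the error is the deviation of |y|^2 from the constellation modulus R2."""
          w = np.zeros(n_taps, dtype=complex)
          w[n_taps // 2] = 1.0                   # center-spike initialization
          y = np.zeros(len(x), dtype=complex)
          for n in range(n_taps, len(x)):
              xn = x[n - n_taps:n][::-1]         # regressor (most recent first)
              y[n] = np.vdot(w, xn)              # filter output w^H x
              e = y[n] * (np.abs(y[n]) ** 2 - R2)
              w -= mu * e.conjugate() * xn       # stochastic-gradient step
          return w, y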

  19. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  20. Data bank homology search algorithm with linear computation complexity.

    PubMed

    Strelets, V B; Ptitsyn, A A; Milanesi, L; Lim, H A

    1994-06-01

    A new algorithm for data bank homology search is proposed. The principal advantages of the new algorithm are: (i) linear computation complexity; (ii) low memory requirements; and (iii) high sensitivity to the presence of local region homology. The algorithm first calculates indicative matrices of k-tuple 'realization' in the query sequence and then searches for an appropriate number of matching k-tuples within a narrow range in database sequences. It does not require k-tuple coordinates tabulation and in-memory placement for database sequences. The algorithm is implemented in a program for execution on PC-compatible computers and tested on PIR and GenBank databases with good results. A few modifications designed to improve the selectivity are also discussed. As an application example, the search for homology of the mouse homeotic protein HOX 3.1 is given.
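
    A minimal sketch of the k-tuple idea (the function names, window size and threshold below are invented for illustration; this is not the published implementation): precompute the set of k-tuples realized in the query, then scan each database sequence linearly and flag regions sharing enough k-tuples, with no coordinate tabulation for the database side.

      # Hedged sketch of k-tuple matching for database homology search.
      def ktuple_set(seq, k=4):
          """Indicative set of all k-tuples realized in the query."""
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def scan(query_tuples, target, k=4, window=30, threshold=5):
          """Linear scan: report windows sharing enough k-tuples with the query."""
          hits = []
          for start in range(0, max(1, len(target) - window + 1), window // 2):
              region = target[start:start + window]
              shared = sum(1 for i in range(len(region) - k + 1)
                           if region[i:i + k] in query_tuples)
              if shared >= threshold:
                  hits.append((start, shared))
          return hits

      query = "MSSYFVNSTFPVTLASGQESLLG"
      database_seq = "AAAA" + query + "GGGG"      # toy database entry
      print(scan(ktuple_set(query), database_seq))   # [(0, 20)]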

  1. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
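
    A self-contained, hedged sketch of the hybrid idea (the toy objective and all parameters are invented): each offspring produced by crossover and mutation is refined by a greedy local search before re-entering the population.

      import random

      def fitness(x):                        # toy objective: maximize -(x - 3)^2
          return -(x - 3.0) ** 2

      def local_search(x, step=0.1, iters=20):
          for _ in range(iters):             # greedy hill climbing
              for cand in (x + step, x - step):
                  if fitness(cand) > fitness(x):
                      x = cand
          return x

      def hybrid_ga(pop_size=20, gens=50):
          pop = [random.uniform(-10, 10) for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=fitness, reverse=True)
              parents = pop[:pop_size // 2]
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = random.sample(parents, 2)
                  child = (a + b) / 2 + random.gauss(0, 0.5)  # crossover + mutation
                  children.append(local_search(child))        # local refinement
              pop = parents + children
          return max(pop, key=fitness)

      print(hybrid_ga())   # converges near the optimum x = 3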

  2. Ovarian Cancer Stage II

    MedlinePlus

    ... Download Title: Ovarian Cancer Stage II Description: Three-panel drawing of stage IIA, IIB, and stage II primary peritoneal cancer; the first panel (stage IIA) shows cancer inside both ovaries that ...

  3. Factor II deficiency

    MedlinePlus

    ... if one or more of these factors are missing or are not functioning like they should. Factor II is one such coagulation factor. Factor II deficiency runs in families (inherited) and is very rare. Both parents must ...

  4. A MEDLINE categorization algorithm

    PubMed Central

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms

  5. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  6. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  7. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
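
    For orientation, the generic matching pursuit step that such algorithms build on (this is not YAMPA itself; the paper's coherence-adaptive threshold is replaced here by a fixed iteration count, an illustrative assumption):

      import numpy as np

      def matching_pursuit(A, y, n_iters=30):
          """Greedy sparse recovery: repeatedly peel off the best-matching atom."""
          residual = y.astype(float).copy()
          x = np.zeros(A.shape[1])
          for _ in range(n_iters):
              correlations = A.T @ residual           # match columns to residual
              j = int(np.argmax(np.abs(correlations)))
              coef = correlations[j] / (A[:, j] @ A[:, j])
              x[j] += coef
              residual -= coef * A[:, j]
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 200))
      x_true = np.zeros(200); x_true[[5, 42, 120]] = [1.0, -2.0, 0.5]
      x_hat = matching_pursuit(A, A @ x_true)
      print(np.argsort(np.abs(x_hat))[-3:])   # should recover {5, 42, 120}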

  8. JPSS CGS Tools For Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the ground processing component of both POES and the Defense Meteorological Satellite Program (DMSP) replacement known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS). Both are developed by Raytheon Intelligence and Information Systems (IIS). The Interface Data Processing Segment will process NPOESS Preparatory Project, Joint Polar Satellite System and Defense Weather Satellite System satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Under NPOESS, Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization was responsible for the algorithms that produce the EDRs, including their quality aspects. For JPSS, that responsibility has transferred to NOAA's Center for Satellite Applications & Research (STAR). As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent JPSS and DWSS launches, rapid algorithm updates may be required. Raytheon and

  9. Algorithms, games, and evolution

    PubMed Central

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-01-01

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793

  10. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
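
    A toy I&Q regulation loop, sketched under invented gains and a made-up disturbance model (real systems implement this in the FPGA): the measured field phasor is steered to the setpoint with integral action applied independently to the in-phase and quadrature components.

      import numpy as np

      def iq_loop(setpoint=1.0 + 0.0j, gain=0.2, steps=200):
          field, control = 0.0 + 0.0j, 0.0 + 0.0j
          for k in range(steps):
              disturbance = 0.05 * np.exp(1j * 0.1 * k)  # microphonics-like drift
              field = control + disturbance              # measured cavity phasor
              control += gain * (setpoint - field)       # integral action on I & Q
          return field

      print(abs(iq_loop()))   # regulated close to |setpoint| = 1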

  11. Irregular Applications: Architectures & Algorithms

    SciTech Connect

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, domain specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  12. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  13. SPA: Solar Position Algorithm

    NASA Astrophysics Data System (ADS)

    Reda, Ibrahim; Andreas, Afshin

    2015-04-01

    The Solar Position Algorithm (SPA) calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. SPA is implemented in C; in addition to being available for download, an online calculator using this code is available at http://www.nrel.gov/midc/solpos/spa.html.

  14. ARPANET Routing Algorithm Improvements

    DTIC Science & Technology

    1978-10-01

    McQuillan, J. M.; Rosen, E. C. (Report 3940). [Report documentation page largely illegible in the scan.] ... this problem may persist for a very long time, causing extremely bad performance throughout the whole network (for instance, if w' reports that one of ... ). The algorithm may naturally tend to oscillate between bad routing paths and become itself a major contributor to network congestion. These examples show ...

  15. Signal Processing Algorithms.

    DTIC Science & Technology

    1983-10-13

    ... determining the solution using the Moore-Penrose inverse. An expression for the mean square error is derived [8,9]. The expression indicates that ... 10. "An Iterative Algorithm for Finding the Minimum Eigenvalue of a Class of Symmetric Matrices," D. Fuhrmann and B. Liu, submitted to 1984 IEEE Int. Conf. Acoust., Speech, Sig. Proc. 11. "Approximating the Eigenvectors of a Symmetric Toeplitz Matrix," D. Fuhrmann and B. Liu, 1983 Allerton Conf. ...

  16. SIMAS ADM XBT Algorithm

    DTIC Science & Technology

    2016-06-07

    ... XBT's sound speed values instead of temperature values. Studies show that the sound speed at the surface in a specific location varies less than ... be entered at the terminal in metric or English temperatures or sound speeds. The algorithm automatically determines which form each data point was ... sound speeds. Leroy's equation is used to derive sound speed from temperature or temperature from sound speed. The previous, current, and next months

  17. Adaptive continuous twisting algorithm

    NASA Astrophysics Data System (ADS)

    Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid

    2016-09-01

    In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite-time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates the Lipschitz perturbation in finite time, i.e. its value converges to the opposite of the perturbation. ACTA also keeps its convergence properties even when the upper bound of the derivative of the perturbation exists but is unknown.

  18. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  19. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems; depending on the problem at hand, they need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) of the produced solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA, without having or using knowledge of the character of the system, we consciously do a much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  20. Optimal design of tunable phononic bandgap plates under equibiaxial stretch

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Abhary, Kazem; Uddin, M. S.; Guest, James K.

    2016-05-01

    Design and application of phononic crystal (PhCr) acoustic metamaterials has been a topic of tremendous growth of interest in the last decade due to their promising capabilities to manipulate acoustic and elastodynamic waves. Phononic controllability of waves through a particular PhCr is limited to the spectra located within its fixed bandgap frequency. Hence the ability to tune a PhCr is desired, to add functionality over a variable bandgap frequency or for switchability. Deformation-induced bandgap tunability of elastomeric PhCr solids and plates with prescribed topology has been studied by other researchers. In principle, the internal stress state and distorted geometry of a deformed phononic crystal plate (PhP) change its effective stiffness and lead to deformation-induced tunability of the resultant modal band structure. Thus the microstructural topology of a PhP can be altered so that specific tunability features are met through prescribed deformation. In the present study, novel tunable PhPs of this kind, with optimized bandgap efficiency and tunability of guided waves, are computationally explored and evaluated. Low-loss transmission of guided waves throughout thin-walled structures makes them ideal for fabrication of low-loss ultrasound devices and for structural health monitoring purposes. Various tunability targets are defined to enhance or degrade complete bandgaps of plate waves through macroscopic tensile deformation. An elastomeric hyperelastic material is considered, which enables recoverable micromechanical deformation under the tuning finite stretch. Phononic tunability through stable deformation of the phononic lattice is specifically required, so any topology showing buckling instability under the assumed deformation is disregarded. The non-dominated sorting genetic algorithm (GA) NSGA-II is adopted for evolutionary multiobjective topology optimization of the hypothesized tunable PhP with a square symmetric unit-cell, and relevant topologies are analyzed through finite

  1. Multi-objective optimization of gear forging process based on adaptive surrogate meta-models

    NASA Astrophysics Data System (ADS)

    Meng, Fanjuan; Labergere, Carl; Lafon, Pascal; Daniel, Laurent

    2013-05-01

    In the forging industry, net shape or near net shape forging of gears has been the subject of considerable research effort in the last few decades. So in this paper, a multi-objective optimization methodology for net shape gear forging process design is discussed. The study is done in four main parts: building a parametric CAD geometry model, simulating the forging process, fitting surrogate meta-models and optimizing the process using an advanced algorithm. In order to obtain meta-models that best approximate the real response, an adaptive meta-model based design strategy has been applied. This is a continuous process: first, build a preliminary version of the meta-models after the initial simulated calculations; second, improve the accuracy and update the meta-models by adding some new representative samplings. By using this iterative strategy, the number of initial sample points for real numerical simulations is greatly decreased and the time for the forged gear design is significantly shortened. Finally, an optimal design for an industrial application of a 27-teeth gear forging process is introduced, which includes three optimization variables and two objective functions. A 3D FE numerical simulation model is used to realize the process and an advanced thermo-elasto-visco-plastic constitutive equation is considered to represent the material behavior. The meta-model applied for this example is kriging and the optimization algorithm is NSGA-II. At last, a relatively better Pareto optimal front (POF) is obtained by gradually improving the obtained surrogate meta-models.

  2. The Hip Restoration Algorithm

    PubMed Central

    Stubbs, Allston Julius; Atilla, Halis Atil

    2016-01-01

    Summary Background Despite the rapid advancement of imaging and arthroscopic techniques about the hip joint, missed diagnoses are still common. As a deep joint and compared to the shoulder and knee joints, localization of hip symptoms is difficult. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of hip pain. We differentiate intra-articular hip pain as arthritic or non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the etiologies of pain may be various. An algorithmic approach to hip restoration, from diagnosis to rehabilitation, is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734

  3. Algorithm for navigated ESS.

    PubMed

    Baudoin, T; Grgić, M V; Zadravec, D; Geber, G; Tomljenović, D; Kalogjera, L

    2013-12-01

    ENT navigation has given new opportunities in performing Endoscopic Sinus Surgery (ESS) and improving the surgical outcome of patients' treatment. ESS assisted by a navigation system could be called Navigated Endoscopic Sinus Surgery (NESS). As it is generally accepted that NESS should be performed only in cases of complex anatomy and pathology, it has not yet been established as a state-of-the-art procedure and thus is not used on a daily basis. This paper presents an algorithm for the use of a navigation system in basic ESS for the treatment of chronic rhinosinusitis (CRS). The algorithm includes five units that should be highlighted using a navigation system. They are as follows: 1) nasal vestibule unit, 2) OMC unit, 3) anterior ethmoid unit, 4) posterior ethmoid unit, and 5) sphenoid unit. Each unit has the shape of a triangular pyramid and consists of at least four reference points or landmarks. As many landmarks as possible should be marked when determining one of the five units. Navigated orientation in each unit should always precede any surgical intervention. The algorithm should improve the learning curve of trainees and enable surgeons to use the navigation system routinely and systematically.

  4. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  5. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  6. A sparse matrix based full-configuration interaction algorithm.

    PubMed

    Rolik, Zoltán; Szabados, Agnes; Surján, Péter R

    2008-04-14

    We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. Main achievements of the presented sparse FCI (SFCI) algorithm are (i) development of an iteration procedure that avoids the storage of FCI size vectors; (ii) development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations can be skipped which otherwise may be a bottleneck of the procedure. At point (ii) we progress by adopting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] for the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of a SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm permits performing FCI calculations on single-node workstations for systems previously accessible only by supercomputers.

  7. Algorithm development for predicting biodiversity based on phytoplankton absorption

    NASA Astrophysics Data System (ADS)

    Moisan, Tiffany A. H.; Moisan, John R.; Linkswiler, Matthew A.; Steinhardt, Rachel A.

    2013-03-01

    Ocean color remote sensing has provided the scientific community with unprecedented global coverage of chlorophyll a, an indicator of phytoplankton biomass. Together, satellite-derived chlorophyll a and knowledge of Phytoplankton Functional Types (PFTs) will improve our limited understanding of marine ecosystem responses to physiochemical climate drivers involved in carbon cycle dynamics and linkages. Using cruise data from the Gulf of Maine and the Middle Atlantic Bight (N=269 pairs of HPLC and phytoplankton absorption samples), two modeling approaches were utilized to predict phytoplankton absorption and pigments. Algorithm I predicts the chlorophyll-specific absorption coefficient (aph* (m2 mg chl a-1)) using inputs of temperature, light, and chlorophyll a. Modeled r2 values (400-700 nm) ranged from 0.79 to 0.99 when compared to in situ observations with ˜25% lower r2 values in the UV region. Algorithm II-a utilizes matrix inversion analysis to predict a(m-1, 400-700 nm) and r2 values ranged from 0.89 to 0.99. The prediction of phytoplankton pigments with Algorithm II-b produced r2 values that ranged from 0.40 to 0.93. When used in combination, Algorithm I, and Algorithm II-a are able to use satellite products of SST, PAR, and chlorophyll a (Algorithm I) to predict pigment concentrations and ratios to describe the phytoplankton community. The results of this study demonstrate that the spatial variation in modeled pigment ratios differ significantly from the 10-year SeaWiFS average chlorophyll a data set. Contiguous observations of chlorophyll a and phytoplankton biodiversity will elucidate ecosystem responses with unprecedented complexity.

  8. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.
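
    The same triangulate-then-contour idea in modern form (matplotlib stands in for the original FORTRAN IV program; the data here are synthetic):

      import numpy as np
      import matplotlib.pyplot as plt
      import matplotlib.tri as tri

      rng = np.random.default_rng(1)
      x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)  # irregular points
      z = np.sin(4 * x) * np.cos(4 * y)                      # values at the points

      triangulation = tri.Triangulation(x, y)   # straight-line segments -> triangles
      plt.tricontourf(triangulation, z, levels=12)
      plt.colorbar()
      plt.savefig("contours.png")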

  9. Two Meanings of Algorithmic Mathematics.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1984-01-01

    Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…

  10. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
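
    For concreteness, a minimal Prim's algorithm (a generic sketch, not tied to any particular physics code): in the invasion-percolation picture, each accepted edge is the cheapest bond on the growing cluster's boundary.

      import heapq

      def prim(adj, start=0):
          """adj: {node: [(weight, neighbor), ...]}; returns MST edges (u, v, w)."""
          visited = {start}
          frontier = [(w, start, v) for w, v in adj[start]]
          heapq.heapify(frontier)
          tree = []
          while frontier and len(visited) < len(adj):
              w, u, v = heapq.heappop(frontier)   # cheapest boundary edge
              if v in visited:
                  continue
              visited.add(v)
              tree.append((u, v, w))
              for w2, nxt in adj[v]:
                  heapq.heappush(frontier, (w2, v, nxt))
          return tree

      graph = {0: [(4, 1), (1, 2)], 1: [(4, 0), (2, 2)],
               2: [(1, 0), (2, 1), (5, 3)], 3: [(5, 2)]}
      print(prim(graph))   # [(0, 2, 1), (2, 1, 2), (2, 3, 5)]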

  11. Grammar Rules as Computer Algorithms.

    ERIC Educational Resources Information Center

    Rieber, Lloyd

    1992-01-01

    One college writing teacher engaged his class in the revision of a computer program to check grammar, focusing on improvement of the algorithms for identifying inappropriate uses of the passive voice. Process and problems of constructing new algorithms, effects on student writing, and other algorithm applications are discussed. (MSE)

  12. Verifying a Computer Algorithm Mathematically.

    ERIC Educational Resources Information Center

    Olson, Alton T.

    1986-01-01

    Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)

  13. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    NASA Astrophysics Data System (ADS)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins published in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  14. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  15. Multiple Magnetic Dipole Modeling Coupled with a Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lientschnig, G.

    2012-05-01

    Magnetic field measurements of scientific spacecraft can be modelled successfully with the multiple magnetic dipole method. The existing GANEW software [1] uses a modified Gauss-Newton algorithm to find good magnetic dipole models. However, this deterministic approach relies on suitable guesses of the initial parameters, which require a lot of expertise and time-consuming interaction by the user. Here, the use of probabilistic methods employing genetic algorithms is put forward. Stochastic methods like these are well-suited for providing good initial starting points for GANEW. Furthermore, software is reported that was successfully tested and used for a Cluster II satellite.

  16. Some Computer Algorithms to Implement a Reliability Shorthand.

    DTIC Science & Technology

    1982-10-01

    AD-A123 781. Some Computer Algorithms to Implement a Reliability Shorthand. Thesis, Naval Postgraduate School, Monterey, California. Sadan Gursel, October 1982; thesis advisor: J. D. Esary. [Remainder of the scanned report documentation page is illegible.]

  17. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057

  18. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  19. Algorithm performance evaluation

    NASA Astrophysics Data System (ADS)

    Smith, Richard N.; Greci, Anthony M.; Bradley, Philip A.

    1995-03-01

    Traditionally, the performance of adaptive antenna systems is measured using automated antenna array pattern measuring equipment. This measurement equipment produces a plot of the receive gain of the antenna array as a function of angle. However, communications system users more readily accept and understand bit error rate (BER) as a performance measure. The work reported on here was conducted to characterize adaptive antenna receiver performance in terms of overall communications system performance using BER as a performance measure. The adaptive antenna system selected for this work featured a linear array, least mean square (LMS) adaptive algorithm and a high speed phase shift keyed (PSK) communications modem.

  20. An assessment of algorithms to estimate respiratory rate from the electrocardiogram and photoplethysmogram.

    PubMed

    Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J

    2016-04-01

    Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically, it is unclear which algorithm performs best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with impedance pneumography (IP), the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either ECG or PPG, and 44 on only ECG. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and 1.0 bpm when using PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using ECG than PPG. The toolbox of algorithms and data used in this study is publicly available.
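
    A toy version of the middle stage only (RR estimation from an already-extracted respiratory signal; the sampling rate, band limits and test signal are invented): locate the dominant frequency in the plausible breathing band.

      import numpy as np

      def estimate_rr(resp, fs):
          """Return breaths/min from the dominant frequency in the 4-60 bpm band."""
          spectrum = np.abs(np.fft.rfft(resp - np.mean(resp)))
          freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
          band = (freqs >= 4 / 60) & (freqs <= 60 / 60)   # 4-60 bpm in Hz
          return 60.0 * freqs[band][np.argmax(spectrum[band])]

      fs = 4.0                                  # 4 Hz respiratory waveform
      t = np.arange(0, 60, 1 / fs)
      resp = np.sin(2 * np.pi * (15 / 60) * t)  # true rate: 15 breaths/min
      print(estimate_rr(resp, fs))              # ~15 breaths/min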

  1. Computational and performance aspects of PCA-based face-recognition algorithms.

    PubMed

    Moon, H; Phillips, P J

    2001-01-01

    Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA-algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) that changing the similarity measure produced the greatest change in performance, and (ii) that a difference in performance of +/- 10% is needed to distinguish between algorithms.
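
    A miniature version of the generic modular pipeline (random matrices stand in for face images; the module choices here, plain centering and L2 distance, are placeholders, not the FERET protocol): center the gallery, keep the top eigenvectors, and match probes in the reduced space.

      import numpy as np

      def train_pca(images, n_components=20):
          """images: (n_samples, n_pixels); returns mean face and eigenbasis."""
          mean = images.mean(axis=0)
          _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
          return mean, vt[:n_components]

      def match(probe, gallery, mean, basis):
          """Index of the nearest gallery face in eigenspace (L2 similarity)."""
          p = basis @ (probe - mean)
          g = (gallery - mean) @ basis.T
          return int(np.argmin(np.linalg.norm(g - p, axis=1)))

      rng = np.random.default_rng(0)
      gallery = rng.standard_normal((10, 64 * 64))   # 10 flattened "faces"
      mean, basis = train_pca(gallery, n_components=5)
      probe = gallery[3] + 0.01 * rng.standard_normal(64 * 64)
      print(match(probe, gallery, mean, basis))      # 3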

  2. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system with constrained computational resources. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
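
    A stripped-down sketch of strict-priority selection over a single shared resource (the field names and one-dimensional budget are simplifications of the VML goal model, not its actual interface): goals are admitted in priority order, so a lower-priority goal can never pre-empt a committed higher-priority one, and re-running the cheap selection accommodates last-minute changes.

      def select_goals(goals, capacity):
          """goals: [(priority, cost, name)]; returns committed goal names."""
          committed, used = [], 0.0
          for priority, cost, name in sorted(goals, key=lambda g: -g[0]):
              if used + cost <= capacity:      # admit only if resources remain
                  committed.append(name)
                  used += cost
          return committed

      goals = [(10, 5.0, "downlink"), (8, 3.0, "image_A"),
               (8, 4.0, "image_B"), (2, 2.0, "cal_scan")]
      print(select_goals(goals, capacity=10.0))
      # ['downlink', 'image_A', 'cal_scan']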

  3. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
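
    The binarization step might be sketched as follows (the onshore sector is an arbitrary assumption standing in for the local coastline orientation):

      import numpy as np

      def binarize_wind(direction_deg, onshore_from=45.0, onshore_to=225.0):
          """1 = onshore wind, 0 = offshore, per an assumed coastline normal."""
          d = np.mod(direction_deg, 360.0)
          return ((d >= onshore_from) & (d <= onshore_to)).astype(np.uint8)

      grid = np.array([[10.0, 120.0], [200.0, 300.0]])  # degrees on 1.25-km cells
      print(binarize_wind(grid))   # [[0 1] [1 0]]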

  4. Online Pairwise Learning Algorithms.

    PubMed

    Ying, Yiming; Zhou, Ding-Xuan

    2016-04-01

    Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using their associated integral operators and probability inequalities for random variables with values in a Hilbert space.
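
    A heavily simplified linear stand-in for the pairwise update (OPERA itself works in an RKHS and pairs the new example against previous ones; the buffer sampling, linear model and step schedule below are illustrative assumptions, not the paper's algorithm):

      import numpy as np

      rng = np.random.default_rng(0)
      w_true = np.array([1.0, -2.0, 0.5])
      w, buffer = np.zeros(3), []
      for t in range(1, 5001):
          x = rng.standard_normal(3)
          y = w_true @ x
          if buffer:
              x_old, y_old = buffer[rng.integers(len(buffer))]
              dx, dy = x - x_old, y - y_old
              grad = (w @ dx - dy) * dx          # pairwise least-square loss
              w -= (0.1 / t ** 0.5) * grad       # polynomially decaying step
          buffer.append((x, y))
      print(np.round(w, 2))   # should approach w_true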

  5. A computational study of routing algorithms for realistic transportation networks

    SciTech Connect

    Jacob, R.; Marathe, M.V.; Nagel, K.

    1998-12-01

    The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and associated data structures affect the computational performance of software developed especially for realistic transportation networks. For this purpose the authors have used the Dallas-Fort Worth road network with a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-one shortest path algorithms, which include classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm. These extensions were primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include: (i) time dependent networks, (ii) multi-modal networks, (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
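
    The exact one-one baseline, for reference (a generic sketch with a binary heap and early exit at the target; the time-dependent and multi-modal extensions discussed in the paper are layered on top of this core):

      import heapq

      def dijkstra(adj, source, target):
          """adj: {node: [(neighbor, weight), ...]}; shortest source-target cost."""
          dist = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == target:
                  return d                        # early exit for one-one queries
              if d > dist.get(u, float("inf")):
                  continue                        # stale heap entry
              for v, w in adj.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return float("inf")

      road = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0), ("d", 4.0)],
              "c": [("d", 1.0)], "d": []}
      print(dijkstra(road, "a", "d"))   # 4.0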

  6. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  7. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  8. Unsupervised Clustering of Type II Supernova Light Curves

    NASA Astrophysics Data System (ADS)

    Rubin, Adam; Gal-Yam, Avishay

    2016-09-01

    As new facilities come online, the astronomical community will be provided with extremely large data sets of well-sampled light curves (LCs) of transients. This motivates systematic studies of the LCs of supernovae (SNe) of all types, including the early rising phase. We performed unsupervised k-means clustering on a sample of 59 R-band SN II LCs and find that the rise to peak plays an important role in classifying LCs. Our sample can be divided into three classes: slowly rising (II-S), fast rise/slow decline (II-FS), and fast rise/fast decline (II-FF). We also identify three outliers based on the algorithm. The II-FF and II-FS classes are disjoint in their decline rates, while the II-S class is intermediate and “bridges the gap.” This may explain recent conflicting results regarding II-P/II-L populations. The II-FS class is also significantly less luminous than the other two classes. Performing clustering on the first two principal component analysis components gives equivalent results to using the full LC morphologies. This indicates that Type II LCs could possibly be reduced to two parameters. We present several important caveats to the technique, and find that the division into these classes is not fully robust. Moreover, these classes have some overlap, and are defined in the R band only. It is currently unclear if they represent distinct physical classes, and more data is needed to study these issues. However, we show that the outliers are actually composed of slowly evolving SN IIb, demonstrating the potential of such methods. The slowly evolving SNe IIb may arise from single massive progenitors.
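
    Schematically, the clustering step could look like this (synthetic rise/decline curves stand in for the 59 R-band light curves; the two-component reduction follows the paper's observation that two PCs suffice):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 100.0, 50)

      def light_curve(rise, decline):            # toy rise/decline morphology
          return (1 - np.exp(-t / rise)) * np.exp(-t / decline)

      curves = np.array([light_curve(rng.uniform(2, 20), rng.uniform(30, 300))
                         + 0.02 * rng.standard_normal(t.size) for _ in range(59)])

      features = PCA(n_components=2).fit_transform(curves)  # first two PCs
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
      print(np.bincount(labels))   # sizes of the three candidate classes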

  9. Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation. Volume 1. Executive Summary

    DTIC Science & Technology

    1989-01-20

    SA/TR-2/89, A003: Final Report. Computer Algorithms and Architectures for Three-Dimensional Eddy-Current Nondestructive Evaluation. Volume 1: Executive Summary. [Remainder of the scanned report documentation page is illegible.]

  10. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
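    A minimal sketch of discrete-wavelet-transform image fusion in the spirit described above, assuming PyWavelets is available; the fusion rule here (average the approximation bands, keep the larger-magnitude detail coefficients) is one common choice and not necessarily the report's own.

    ```python
    # DWT-based fusion of two co-registered images; random arrays stand in for imagery.
    import numpy as np
    import pywt

    def dwt_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
        cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
        cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
        fuse = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs detail rule
        fused = ((cA_a + cA_b) / 2.0,
                 (fuse(cH_a, cH_b), fuse(cV_a, cV_b), fuse(cD_a, cD_b)))
        return pywt.idwt2(fused, wavelet)

    a = np.random.rand(128, 128)   # stand-ins for, e.g., multispectral/panchromatic bands
    b = np.random.rand(128, 128)
    print(dwt_fuse(a, b).shape)
    ```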

  11. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  12. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best unbiased linear estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistic and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
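    As an illustration of kriging as a best unbiased linear estimator, the sketch below solves the ordinary-kriging system directly with numpy. The covariance model, range parameter, and data are invented, and the report's accelerations (sparse SYMMLQ, covariance tapering, FMM, nearest-neighbor search) are not reproduced.

    ```python
    # Ordinary kriging at a single query point: solve for weights plus a Lagrange
    # multiplier enforcing unbiasedness, then take the weighted sum of observations.
    import numpy as np

    def ordinary_krige(xy, z, query, length=0.3):
        cov = lambda d: np.exp(-(d / length) ** 2)      # assumed Gaussian covariance model
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        n = len(xy)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = cov(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = cov(np.linalg.norm(xy - query, axis=1))
        w = np.linalg.solve(A, b)                        # weights + Lagrange multiplier
        return w[:n] @ z                                 # unbiased linear estimate

    rng = np.random.default_rng(1)
    pts = rng.random((30, 2))                            # scattered sample locations
    vals = np.sin(3 * pts[:, 0]) + np.cos(3 * pts[:, 1])
    print(ordinary_krige(pts, vals, np.array([0.5, 0.5])))
    ```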

  13. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the plain LSB algorithm is easily detected, with high accuracy, by chi-square (X2) and RS steganalysis. We improved the LSB algorithm by selecting the information-embedding locations and modifying the embedding method, combining a sub-affine transformation with matrix coding, and propose a new LSB algorithm. Experimental results show that the improved algorithm can resist both X2 and RS steganalysis effectively.
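    For reference, the baseline LSB embedding that the improved algorithm hardens can be written in a few lines; the sub-affine location scrambling and matrix coding of the proposed method are not reproduced here.

    ```python
    # Plain LSB embedding/extraction over the first len(bits) pixels of an image.
    import numpy as np

    def embed_lsb(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
        flat = pixels.flatten().copy()
        for i, b in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | b        # overwrite the least significant bit
        return flat.reshape(pixels.shape)

    def extract_lsb(pixels: np.ndarray, n: int) -> list[int]:
        return [int(v & 1) for v in pixels.flatten()[:n]]

    img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)   # stand-in cover image
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    assert extract_lsb(embed_lsb(img, msg), len(msg)) == msg
    ```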

  14. Algorithm Diversity for Resilient Systems

    DTIC Science & Technology

    2016-06-27

    Final report, grant N00014-15-1-2208: Algorithm Diversity for Resilient Systems. The project develops techniques to introduce algorithm-level diversity, in contrast to existing work on execution-level diversity, to limit an attacker's ability to predict changes to a program's state during execution; algorithm-level diversity can introduce larger differences between variants than execution-level diversity.

  15. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.

  16. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering relations of order no higher than κ) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  17. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for large genomes, and significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new algorithm also significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
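    A toy illustration of the underlying idea of bit-packing DNA bases: DNABIT Compress assigns variable-length bit codes to larger segments and repeat fragments, whereas the fixed 2-bits-per-base packing below is only the simplest baseline of the same idea.

    ```python
    # Pack a DNA string at 2 bits/base instead of 8 bits/character.
    CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

    def pack(seq: str) -> bytes:
        out, buf, nbits = bytearray(), 0, 0
        for base in seq:
            buf = (buf << 2) | CODE[base]
            nbits += 2
            if nbits == 8:
                out.append(buf)
                buf, nbits = 0, 0
        if nbits:
            out.append(buf << (8 - nbits))     # pad the final partial byte
        return bytes(out)

    seq = "ACGTACGTTGCA"
    print(len(seq), "bases ->", len(pack(seq)), "bytes")   # 12 bases -> 3 bytes
    ```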

  18. Quantum algorithm for data fitting.

    PubMed

    Wiebe, Nathan; Braun, Daniel; Lloyd, Seth

    2012-08-03

    We provide a new quantum algorithm that efficiently determines the quality of a least-squares fit over an exponentially large data set by building upon an algorithm for solving systems of linear equations efficiently [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)]. In many cases, our algorithm can also efficiently find a concise function that approximates the data to be fitted and bound the approximation error. In cases where the input data are pure quantum states, the algorithm can be used to provide an efficient parametric estimation of the quantum state and therefore can be applied as an alternative to full quantum-state tomography given a fault tolerant quantum computer.

  19. Preconditioned quantum linear system algorithm.

    PubMed

    Clader, B D; Jacobs, B C; Sprouse, C R

    2013-06-21

    We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm.

  20. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  1. Multi-objective design optimization of the transverse gaseous jet in supersonic flows

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Yang, Jun; Yan, Li

    2014-01-01

    The mixing process between the injectant and the supersonic crossflow is one of the important issues for the design of the scramjet engine, and efficient mixing has a great impact on the improvement of combustion efficiency. A hovering vortex is formed between the separation region and the barrel shock wave, and this may be induced by the large negative density gradient. The separation region provides a good mixing area for the injectant and the subsonic boundary layer. In the current study, the transverse injection flow field with a freestream Mach number of 3.5 has been optimized by the non-dominated sorting genetic algorithm (NSGA-II) coupled with the Kriging surrogate model, and the variance analysis method and the extreme difference analysis method have been employed to evaluate the values of the objective functions. The obtained results show that the jet-to-crossflow pressure ratio is the most important design variable for the transverse injection flow field, and the injectant molecular weight and the slot width should be considered for the mixing process between the injectant and the supersonic crossflow. There exists an optimal penetration height for the mixing efficiency, and its value is about 14.3 mm in the range considered in the current study. A larger penetration height produces a larger total pressure loss, so there must be a tradeoff between these two objective functions. In addition, this study demonstrates that the multi-objective design optimization method with the data mining technique can be used efficiently to explore the relationship between the design variables and the objective functions.
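    For readers unfamiliar with NSGA-II's machinery, the sketch below implements its core ranking step, fast non-dominated sorting, on a small set of objective vectors. The Kriging surrogate and flow-field evaluations of the study are not reproduced, and the example objective values are invented.

    ```python
    # Fast non-dominated sorting: partition solutions into Pareto fronts
    # (both objectives minimized).
    import numpy as np

    def non_dominated_sort(F: np.ndarray) -> list[list[int]]:
        n = len(F)
        dominated_by = [[] for _ in range(n)]   # indices each solution dominates
        dom_count = np.zeros(n, dtype=int)      # how many solutions dominate each one
        for p in range(n):
            for q in range(n):
                if np.all(F[p] <= F[q]) and np.any(F[p] < F[q]):
                    dominated_by[p].append(q)
                elif np.all(F[q] <= F[p]) and np.any(F[q] < F[p]):
                    dom_count[p] += 1
        fronts, current = [], [p for p in range(n) if dom_count[p] == 0]
        while current:
            fronts.append(current)
            nxt = []
            for p in current:
                for q in dominated_by[p]:
                    dom_count[q] -= 1
                    if dom_count[q] == 0:
                        nxt.append(q)
            current = nxt
        return fronts

    # e.g., columns: (total pressure loss, negative mixing efficiency)
    F = np.array([[1.0, 2.0], [2.0, 1.0], [2.5, 2.5], [1.5, 1.5]])
    print(non_dominated_sort(F))   # front 0 holds the Pareto-optimal trade-offs
    ```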

  2. World War II Homefront.

    ERIC Educational Resources Information Center

    Garcia, Rachel

    2002-01-01

    Presents an annotated bibliography that provides Web sites focusing on the U.S. homefront during World War II. Covers various topics such as the homefront, Japanese Americans, women during World War II, posters, and African Americans. Includes lesson plan sources and a list of additional resources. (CMK)

  3. Wind farm optimization using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Ituarte-Villarreal, Carlos M.

    In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting, and in determining whether a project is economically feasible or not. In the present work, three (3) different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the location of the components in a wind farm to maximize the energy output given a stated wind environment of the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalization for the lack of system reliability. The viral system algorithm utilized in this research solves three (3) well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost, and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a

  4. Hot spots in apolipoprotein A-II misfolding and amyloidosis in mice and men

    PubMed Central

    Gursky, Olga

    2014-01-01

    ApoA-II is the second-major protein of high-density lipoproteins. C-terminal extension in human apoA-II or point substitutions in murine apoA-II cause amyloidosis. The molecular mechanism of apolipoprotein misfolding, from the native predominantly α-helical conformation to cross-β-sheet in amyloid, is unknown. We used 12 sequence-based prediction algorithms to identify two ten-residue segments in apoA-II that probably initiate β-aggregation. Previous studies of apoA-II fragments experimentally verify this prediction. Together, experimental and bioinformatics studies explain why the C-terminal extension in human apoA-II causes amyloidosis and why, unlike murine apoA-II, human apoA-II normally does not cause amyloidosis despite its unusually high sequence propensity for β-aggregation. PMID:24561203

  5. Research on Routing Selection Algorithm Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Guohong; Zhang, Baojian; Li, Xueyong; Lv, Jinna

    The genetic algorithm is a stochastic search and optimization method based on natural selection and genetic mechanisms in living organisms. In recent years, because of its potential for solving complicated problems and its successful applications in industrial projects, the genetic algorithm has attracted wide attention from domestic and international scholars. Routing selection has been defined as a standard communication model of IP version 6. This paper proposes a service model for routing selection communication, and designs and implements a new routing selection algorithm based on a genetic algorithm. The experimental simulation results show that this algorithm can obtain better solutions in less time with a more balanced network load, which enhances the search ratio and the availability of network resources, and improves the quality of service.

  6. LSPRAY-II: A Lagrangian Spray Module

    NASA Technical Reports Server (NTRS)

    Raju, M. S.

    2004-01-01

    LSPRAY-II is a Lagrangian spray solver developed for application with parallel computing and unstructured grids. It is designed to be massively parallel and could easily be coupled with any existing gas-phase flow and/or Monte Carlo Probability Density Function (PDF) solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type for the gas-flow grid representation. It is mainly designed to predict the flow, thermal, and transport properties of a rapidly vaporizing spray because of its importance in aerospace applications. The manual provides the user with an understanding of the various models involved in the spray formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers. With the development of LSPRAY-II, we have advanced the state-of-the-art in spray computations in several important ways.

  7. Using DFX for Algorithm Evaluation

    SciTech Connect

    Beiriger, J.I.; Funkhouser, D.R.; Young, C.J.

    1998-10-20

    Evaluating whether or not a new seismic processing algorithm can improve the performance of the operational system can be problematic: it may be difficult to isolate the comparable piece of the operational system; it may be necessary to duplicate ancillary functions; and comparing results to the tuned, full-featured operational system may be an unsatisfactory basis on which to draw conclusions. Algorithm development and evaluation in an environment that more closely resembles the operational system can be achieved by integrating the algorithm with the custom user library of the Detection and Feature Extraction (DFX) code, developed by Science Applications International Corporation. This integration gives the seismic researcher access to all of the functionality of DFX, such as database access, waveform quality control, and station-specific tuning, and provides a more meaningful basis for evaluation. The goal of this effort is to make the DFX environment more accessible to seismic researchers for algorithm evaluation. Typically, a new algorithm will be developed as a C-language program with an ASCII test parameter file. The integration process should allow the researcher to focus on the new algorithm development, with minimum attention to integration issues. Customizing DFX, however, requires software engineering expertise, knowledge of the Scheme and C programming languages, and familiarity with the DFX source code. We use a C-language spatial coherence processing algorithm with a parameter and recipe file to develop a general process for integrating and evaluating a new algorithm in the DFX environment. To aid in configuring and managing the DFX environment, we develop a simple parameter management tool. We also identify and examine capabilities that could simplify the process further, thus reducing the barriers facing researchers in using DFX. These capabilities include additional parameter management features, a Scheme-language template for algorithm testing, a

  8. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; Nowak, M. A.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the lightcurves is not preserved.

  9. Modular algorithm concept evaluation tool (MACET) sensor fusion algorithm testbed

    NASA Astrophysics Data System (ADS)

    Watson, John S.; Williams, Bradford D.; Talele, Sunjay E.; Amphay, Sengvieng A.

    1995-07-01

    Target acquisition in a high-clutter environment, in all weather, at any time of day represents a much-needed capability for the air-to-surface strike mission. A considerable amount of the research at the Armament Directorate at Wright Laboratory, Advanced Guidance Division (WL/MNG), has been devoted to exploring various seeker technologies, including multi-spectral sensor fusion, that may yield a cost-efficient system with these capabilities. Critical elements of any such seeker are the autonomous target acquisition and tracking algorithms. These algorithms allow the weapon system to operate independently and accurately in realistic battlefield scenarios. In order to assess the performance of the multi-spectral sensor fusion algorithms being produced as part of the seeker technology development programs, the Munition Processing Technology Branch of WL/MN is developing an algorithm testbed. This testbed consists of the Irma signature prediction model; data analysis workstations, such as the TABILS Analysis and Management System (TAMS); and the Modular Algorithm Concept Evaluation Tool (MACET) algorithm workstation. All three of these components are being enhanced to accommodate multi-spectral sensor fusion systems. MACET is being developed to provide a graphical-interface-driven simulation with which to quickly configure algorithm components and conduct performance evaluations. MACET is being developed incrementally, with each release providing an additional channel of operation. To date MACET 1.0, a passive IR algorithm environment, has been delivered. The second release, MACET 1.1, is presented in this paper using MMW/IR data from the Advanced Autonomous Dual Mode Seeker (AADMS) captive flight demonstration. Once completed, the delivered software from past algorithm development efforts will be converted to the MACET library format, thereby providing an on-line database of the algorithm research conducted to date.

  10. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    algorithms by selecting ranges of the argument ω in which the performance is the fastest. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible, since frequent calls are made to a subroutine providing this function (e.g., in the numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied by the previously published algorithm [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By a suitable selection of the number of abscissas in the Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20, while the accuracy of the results was not affected. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function H(x, ω) were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature. It has been found, however, that the use of Gauss-Legendre quadrature considerably improved the performance of both algorithms. Two conditions have to be satisfied: (i) the number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with accuracy as high as possible. The abscissas and weights are available for N = 16, 20, 24, 32, 40, 48, 64, 80, and 96 with an accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing the procedure ROMBERG. Due to the fact that the
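    A hedged sketch of computing the Chandrasekhar H function for isotropic scattering with Gauss-Legendre quadrature, in the spirit of the update described above. It iterates Chandrasekhar's exact identity 1/H(x) = sqrt(1 - ω) + (ω/2) ∫₀¹ μ H(μ)/(x + μ) dμ, not the specific Dudarev-Whelan or Davidović representations of the paper, and the node count and tolerance are arbitrary choices.

    ```python
    # Fixed-point iteration of the H-function identity on a Gauss-Legendre grid.
    import numpy as np

    def chandrasekhar_h(x: float, omega: float, n: int = 64, tol: float = 1e-12) -> float:
        nodes, weights = np.polynomial.legendre.leggauss(n)
        mu = 0.5 * (nodes + 1.0)          # map nodes from [-1, 1] to [0, 1]
        w = 0.5 * weights
        s = np.sqrt(1.0 - omega)
        h = np.ones(n)                    # H(mu) on the quadrature grid
        for _ in range(1000):
            integral = ((w * mu * h) / (mu[:, None] + mu[None, :])).sum(axis=1)
            h_new = 1.0 / (s + 0.5 * omega * integral)
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        return 1.0 / (s + 0.5 * omega * np.sum(w * mu * h / (x + mu)))

    print(chandrasekhar_h(0.5, 0.9))      # H -> 1 as omega -> 0; grows as omega -> 1
    ```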

  11. NSLS-II RF BEAM POSITION MONITOR

    SciTech Connect

    Vetter, K.; Della Penna, A. J.; DeLong, J.; Kosciuk, B.; Mead, J.; Pinayev, I.; Singh, O.; Tian, Y.; Ha, K.; Portmann, G.; Sebek J.

    2011-03-28

    An internal R&D program has been undertaken at BNL to develop a sub-micron RF Beam Position Monitor (BPM) for the NSLS-II 3rd-generation light source that is currently under construction. The BPM R&D program started in August 2009, and successful beam tests were conducted 15 months from the start of the program. The NSLS-II RF BPM has been designed to meet all requirements for the NSLS-II injection system and storage ring. Housing the RF BPMs in racks thermally controlled to ±0.1 °C provides sub-micron stabilization without active correction. An active pilot tone has been incorporated to aid long-term (8-hour minimum) stabilization to 200 nm RMS. The development of a sub-micron BPM for the NSLS-II has successfully demonstrated performance and stability. The pilot-tone calibration combiner and RF synthesizer have been implemented, and algorithm development is underway. The program is currently on schedule to start production development of 60 injection BPMs in the fall of 2011; the production of approximately 250 storage-ring BPMs will overlap the injection schedule.

  12. Algorithms on ensemble quantum computers.

    PubMed

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementation on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements and use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant, measurement-free implementation of the Toffoli and σ_z^(1/4) gates, as these operations cannot be implemented "bitwise" and their standard fault-tolerant implementations require measurement.

  13. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
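    The underflow/overflow protection can be mimicked in a few lines by accumulating the Poisson terms in log space; this sketch illustrates the same idea but is not the CUMPOIS program itself.

    ```python
    # Cumulative Poisson probability computed via log-sum-exp so that very small or
    # very large intermediate factors never underflow or overflow.
    import math

    def log_poisson_cdf(k: int, lam: float) -> float:
        # log P(X <= k), with log pmf terms  -lam + i*log(lam) - log(i!)
        log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1) for i in range(k + 1)]
        m = max(log_terms)
        return m + math.log(sum(math.exp(t - m) for t in log_terms))

    print(math.exp(log_poisson_cdf(5, 3.0)))   # ~0.9161, a well-conditioned case
    print(log_poisson_cdf(900, 1000.0))        # a log-probability, safe far in the tail
    ```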

  14. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  15. Predicting patchy particle crystals: Variable box shape simulations and evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

    We consider several patchy particle models that have been proposed in the literature, and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  16. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

    A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, would predict eight earthquakes in only 2.87% of trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would be achieved in 53% of random trials under the null hypothesis.

  17. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

    Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing; due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.

  18. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

    Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  19. A hybrid algorithm for instant optimization of beam weights in anatomy-based intensity modulated radiotherapy: A performance evaluation study.

    PubMed

    Vaitheeswaran, Ranganathan; Sathiya, Narayanan V K; Bhangle, Janhavi R; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram

    2011-04-01

    The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both the algorithms can be combined, which will lead to an efficient global optimizer solving the problem at a very fast rate. Our hybrid approach combines Gaussian elimination algorithm (exact optimizer) with fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of FSA algorithm; (ii) the convergence (percentage reduction in the cost function) in hybrid algorithm is about 20% improved as compared to that in GEM algorithm; (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) [~ 2% - 5% improvement] and Homogeneity Index (HI) [~ 4% - 10% improvement] as compared to GEM and FSA algorithms; (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are

  20. Fast algorithm for transient current through open quantum systems

    NASA Astrophysics Data System (ADS)

    Cheung, King Tai; Fu, Bin; Yu, Zhizhou; Wang, Jian

    2017-03-01

    Transient current calculation is essential to study the response time and capture the peak transient current for preventing meltdown of nanochips in nanoelectronics. Its calculation is known to be extremely time consuming, with the best scaling TN³, where N is the dimension of the device and T is the number of time steps. The dynamical response of the system is usually probed by sending a steplike pulse and monitoring its transient behavior. Here, we provide a fast algorithm to study the transient behavior due to the steplike pulse. This algorithm consists of two parts: algorithm I reduces the computational complexity to T₀N³ for large systems when the number of time steps is moderate, and algorithm II employs the fast multipole technique to achieve the same T₀N³ scaling when T is large. The fast algorithm allows us to tackle many large-scale transient problems, including magnetic tunneling junctions and ferroelectric tunneling junctions.

  1. SEU-tolerant IQ detection algorithm for LLRF accelerator system

    NASA Astrophysics Data System (ADS)

    Grecki, M.

    2007-08-01

    High-energy accelerators use an RF field to accelerate charged particles, and measurement of the effective field parameters (amplitude and phase) is a task of great importance in these facilities. The RF signal is downconverted in frequency, keeping the information about amplitude and phase, and then sampled in an ADC. One of the several tasks of the LLRF control system is to estimate the amplitude and phase (or I and Q components) of the RF signal; these parameters are further used in the control algorithm. The XFEL accelerator will be built using a single-tunnel concept. Electronic devices (including the LLRF control system) will therefore be exposed to ionizing radiation, particularly to a neutron flux generating SEUs in digital circuits, so the algorithms implemented in FPGAs/DSPs should be SEU-tolerant. This paper presents the application of the WCC method to make the IQ detection algorithm immune to SEUs. The VHDL implementation of this algorithm in a Xilinx Virtex II Pro FPGA is presented, together with simulation results proving the algorithm's suitability for systems operating in the presence of SEUs.
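    The estimation step itself can be illustrated apart from any SEU hardening. The sketch below recovers I and Q (hence amplitude and phase) from ideal ADC samples of a downconverted signal by mixing with a quadrature numerically controlled oscillator and averaging over an integer number of periods; the sample rate, IF, and signal values are assumptions, and the paper's WCC-based hardening is not shown.

    ```python
    # Digital IQ detection on simulated ADC samples of an intermediate-frequency signal.
    import numpy as np

    fs, f_if = 16e6, 1e6                # assumed sample rate and IF (illustrative values)
    n = 160                             # 160 samples = exactly 10 IF periods here
    t = np.arange(n) / fs
    amp_true, phase_true = 0.7, 0.4
    adc = amp_true * np.cos(2 * np.pi * f_if * t + phase_true)   # ideal ADC samples

    # mix with quadrature NCO; averaging over integer periods acts as a low-pass filter
    i = 2.0 * np.mean(adc * np.cos(2 * np.pi * f_if * t))
    q = -2.0 * np.mean(adc * np.sin(2 * np.pi * f_if * t))
    print(np.hypot(i, q), np.arctan2(q, i))    # recovers amplitude 0.7 and phase 0.4
    ```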

  2. An algorithm for the treatment of schizophrenia in the correctional setting: the Forensic Algorithm Project.

    PubMed

    Buscema, C A; Abbasi, Q A; Barry, D J; Lauve, T H

    2000-10-01

    The Forensic Algorithm Project (FAP) was born of the need for a holistic approach in the treatment of the inmate with schizophrenia. Schizophrenia was chosen as the first entity to be addressed by the algorithm because of its refractory nature and high rate of recidivism in the correctional setting. Schizophrenia is regarded as a spectrum disorder, with symptom clusters and behaviors ranging from positive to negative symptoms to neurocognitive dysfunction and affective instability. Furthermore, the clinical picture is clouded by Axis II symptomatology (particularly prominent in the inmate population), comorbid Axis I disorders, and organicity. Four subgroups of schizophrenia were created to coincide with common clinical presentations in the forensic inpatient facility and also to parallel 4 tracks of intervention, consisting of pharmacologic management and programming recommendations. The algorithm begins with any antipsychotic medication and proceeds to atypical neuroleptic usage, augmentation with other psychotropic agents, and, finally, the use of clozapine as the common pathway for refractory schizophrenia. Outcome measurement of pharmacologic intervention is assessed every 6 weeks through the use of a 4-item subscale, specific for each forensic subgroup. A "floating threshold" of 40% symptom severity reduction on Positive and Negative Syndrome Scale and Brief Psychiatric Rating Scale items over a 6-week period is considered an indication for neuroleptic continuation. The forensic algorithm differs from other clinical practice guidelines in that specific programming in certain prison environments is stipulated. Finally, a social commentary on the importance of state-of-the-art psychiatric treatment for all members of society is woven into the clinical tapestry of this article.

  3. FIRE II Cirrus Info

    Atmospheric Science Data Center

    2014-03-18

    Page: FIRE II Main. Grouping: Cirrus. Description: First ISCCP Regional Experiment (FIRE) ... stratocumulus systems, the radiative properties of these clouds and their interactions. Data Products: Cirrus ...

  4. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GAs) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.
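    The combination of precedence feasibility and GA search can be sketched with a random-key encoding: chromosomes are job priorities, decoded greedily into a precedence-feasible sequence. The jobs, durations, precedences, and flowtime fitness below are all invented for illustration; this is not the PPS/POIC scheduler.

    ```python
    # Random-key GA for sequencing jobs under precedence constraints.
    import random

    durations = {"a": 3, "b": 2, "c": 4, "d": 1, "e": 2}
    preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["c"]}
    jobs = list(durations)

    def decode(priority):
        # list scheduling: repeatedly pick the highest-priority job whose
        # predecessors are all done, so every decoded order is feasible
        done, order = set(), []
        while len(order) < len(jobs):
            ready = [j for j in jobs if j not in done and all(p in done for p in preds[j])]
            j = max(ready, key=lambda r: priority[r])
            done.add(j)
            order.append(j)
        return order

    def flowtime(order):
        t, total = 0, 0
        for j in order:
            t += durations[j]
            total += t
        return total

    def ga(pop_size=30, gens=60):
        pop = [{j: random.random() for j in jobs} for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda c: flowtime(decode(c)))
            elite = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(elite)):
                p1, p2 = random.sample(elite, 2)
                child = {j: random.choice((p1[j], p2[j])) for j in jobs}  # uniform crossover
                if random.random() < 0.2:
                    child[random.choice(jobs)] = random.random()          # mutation
                children.append(child)
            pop = elite + children
        best = min(pop, key=lambda c: flowtime(decode(c)))
        return decode(best), flowtime(decode(best))

    print(ga())
    ```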

  5. START II and beyond

    SciTech Connect

    Mendelsohn, J.

    1996-10-01

    The second Strategic Arms Reduction Treaty (START II), signed by President George Bush and Russian President Boris Yeltsin in January 1993, was ratified by the US Senate in January 1996 by an overwhelming vote of 87-4. The treaty, which will slash the strategic arsenals of the United States and Russia to 3,000-3,500 warheads each, is now before the two houses of the Russian Parliament (the Duma and the Federation Council) awaiting ratification amidst confusion and criticism. The Yeltsin administration supports START II and spoke in favor of Russian ratification after the Senate acted on the treaty; the Russian foreign minister and the Russian military believed that START II should be ratified as soon as possible. During the recent presidential campaign and his subsequent illness, President Yeltsin has been virtually silent on the subject of START II and nuclear force reductions. Without a push from the Yeltsin administration, the tone among Duma members has been sharply critical of START II. Voices across the Russian political spectrum have questioned the treaty, linked it to constraints on highly capable theater missile defense (TMD) systems and the continued viability of the ABM Treaty, and urged that START II ratification be held hostage until NATO abandons its plans to expand eastward. Although the START I and START II accords have generated the momentum, opportunity, and expectation, both domestic and international, for additional nuclear arms reductions, the current impasse over ratification in the Duma has cast a shadow over the future of START II and raised questions about the chances for any follow-on (START III) agreement.

  6. Mod II engine development

    NASA Technical Reports Server (NTRS)

    Karl, David W.

    1987-01-01

    The Mod II engine, a four-cylinder, automotive Stirling engine utilizing the Siemens-Rinia double-acting concept, was assembled and became operational in January 1986. This paper describes the Mod II engine, its first assembly, and the subsequent development work done on engine components up to the point that engine performance characterization testing took place. Performance data for the engine are included.

  7. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data, with the ability to augment/modify the data stream (e.g., to inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  8. Status and performance of the CDF Run II silicon detectors

    SciTech Connect

    Nielsen, Jason; /LBL, Berkeley

    2004-11-01

    In 2001, an upgraded silicon detector system was installed in the CDF II experiment on the Tevatron at Fermilab. The complete system consists of three silicon microstrip detectors: SVX II with five layers for precision tracking, Layer 00 with one beampipe-mounted layer for vertexing, and two Intermediate Silicon Layers located between SVX II and the main CDF II tracking chamber. Currently all detectors in the system are operating at or near design levels. The performance of the combined silicon system is excellent in the context of CDF tracking algorithms, and the first useful physics results from the innermost Layer 00 detector have been recently documented. Operational and monitoring efforts have also been strengthened to maintain silicon efficiency through the end of Run 2 at the Tevatron.

  9. A universal symmetry detection algorithm.

    PubMed

    Maurer, Peter M

    2015-01-01

    Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.
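    A tiny illustration of permutation-based symmetry: the brute-force check below finds all pairs of inputs of a Boolean function that can be swapped without changing its output (partial symmetry). It is far simpler than the library-based algorithm described above and is meant only to fix ideas; the example functions are invented.

    ```python
    # Detect symmetric input pairs of a Boolean function by exhaustive swap testing.
    from itertools import combinations, product

    def swap_inputs(x, i, j):
        y = list(x)
        y[i], y[j] = y[j], y[i]
        return tuple(y)

    def symmetric_pairs(f, n):
        # (i, j) is a symmetric pair if swapping inputs i and j never changes f
        return [(i, j) for i, j in combinations(range(n), 2)
                if all(f(x) == f(swap_inputs(x, i, j))
                       for x in product((0, 1), repeat=n))]

    majority = lambda x: int(sum(x) >= 2)       # totally symmetric: every pair qualifies
    mux = lambda x: x[1] if x[0] else x[2]      # multiplexer: no symmetric pairs
    print(symmetric_pairs(majority, 3))         # [(0, 1), (0, 2), (1, 2)]
    print(symmetric_pairs(mux, 3))              # []
    ```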

  10. Review of jet reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Atkin, Ryan

    2015-10-01

    Accurate jet reconstruction is necessary for understanding the link between the unobserved partons and the jets of observed collimated colourless particles the partons hadronise into. Understanding this link sheds light on the properties of these partons. A review of various common jet algorithms is presented, namely the kT, anti-kT, and Cambridge/Aachen algorithms, iterative cones, and SIScone, highlighting their strengths and weaknesses. If one is interested in studying jets, the anti-kT algorithm is the best choice; if one's interest is in jet substructure, however, the Cambridge/Aachen algorithm would be the best option.
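    The kT, Cambridge/Aachen, and anti-kT algorithms are all instances of one generalized-kT recursion, differing only in an exponent p (p = 1 for kT, p = 0 for Cambridge/Aachen, p = -1 for anti-kT). The toy sketch below implements that recursion on (pT, y, φ) tuples with a simple recombination scheme; real analyses use FastJet rather than code like this, and the example particles are invented.

    ```python
    # Generalized-kT sequential recombination on toy (pt, rapidity, phi) particles.
    import math

    def cluster(particles, R=0.4, p=-1):
        parts = list(particles)
        jets = []
        while parts:
            # beam distances d_iB = pt^(2p) and pairwise d_ij = min(pt_i, pt_j)^(2p) * dR^2/R^2
            diB = [(pt ** (2 * p), i) for i, (pt, y, phi) in enumerate(parts)]
            dij = []
            for i in range(len(parts)):
                for j in range(i + 1, len(parts)):
                    (pti, yi, phii), (ptj, yj, phij) = parts[i], parts[j]
                    dphi = min(abs(phii - phij), 2 * math.pi - abs(phii - phij))
                    dr2 = (yi - yj) ** 2 + dphi ** 2
                    dij.append((min(pti ** (2 * p), ptj ** (2 * p)) * dr2 / R ** 2, i, j))
            dmin_beam = min(diB)
            dmin_pair = min(dij) if dij else (float("inf"), -1, -1)
            if dmin_beam[0] <= dmin_pair[0]:
                jets.append(parts.pop(dmin_beam[1]))        # promote to a final jet
            else:
                _, i, j = dmin_pair
                (pti, yi, phii), (ptj, yj, phij) = parts[i], parts[j]
                merged = (pti + ptj,                        # crude pt-weighted recombination
                          (pti * yi + ptj * yj) / (pti + ptj),
                          (pti * phii + ptj * phij) / (pti + ptj))
                for k in sorted((i, j), reverse=True):
                    parts.pop(k)
                parts.append(merged)
        return jets

    print(cluster([(50, 0.0, 0.0), (20, 0.1, 0.05), (30, 2.0, 3.0)], p=-1))
    ```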

  11. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
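    The construct the algorithm relies on is easy to state: nodes u and v share an RNG edge iff no third node w is closer to both of them than they are to each other. A short sketch, with made-up coordinates:

    ```python
    # Build the relative-neighborhood graph (RNG) of a point set by direct test.
    import math
    from itertools import combinations

    def rng_edges(points):
        d = lambda a, b: math.dist(points[a], points[b])
        edges = []
        for u, v in combinations(range(len(points)), 2):
            # keep (u, v) unless some witness w lies in the "lune" between them
            if not any(max(d(u, w), d(v, w)) < d(u, v)
                       for w in range(len(points)) if w not in (u, v)):
                edges.append((u, v))
        return edges

    nodes = [(0, 0), (1, 0), (2, 0.2), (0.5, 1.0)]   # hypothetical node locations
    print(rng_edges(nodes))
    ```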

  12. Numerical Algorithms and Mathematical Software for Linear Control and Estimation Theory.

    DTIC Science & Technology

    1985-05-30

    Final report (AD-A157 525) for ARO Grant DAAG29-82-K-0028, "Numerical Algorithms and Mathematical Software for Linear Control and Estimation Theory," Massachusetts Institute of Technology, Cambridge; reporting period December 14, 1981 to December 13, 1984.

  13. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  14. Carnitine palmitoyltransferase II deficiency

    PubMed Central

    Roe, C R.; Yang, B-Z; Brunengraber, H; Roe, D S.; Wallace, M; Garritson, B K.

    2008-01-01

    Background: Carnitine palmitoyltransferase II (CPT II) deficiency is an important cause of recurrent rhabdomyolysis in children and adults. Current treatment includes dietary fat restriction, with increased carbohydrate intake and exercise restriction to avoid muscle pain and rhabdomyolysis. Methods: CPT II enzyme assay, DNA mutation analysis, quantitative analysis of acylcarnitines in blood and cultured fibroblasts, urinary organic acids, the standardized 36-item Short-Form Health Status survey (SF-36) version 2, and bioelectric impedance for body fat composition. Diet treatment with triheptanoin at 30% to 35% of total daily caloric intake was used for all patients. Results: Seven patients with CPT II deficiency were studied from 7 to 61 months on the triheptanoin (anaplerotic) diet. Five had previous episodes of rhabdomyolysis requiring hospitalizations and muscle pain on exertion prior to the diet (two younger patients had not had rhabdomyolysis). While on the diet, only two patients experienced mild muscle pain with exercise. During short periods of noncompliance, two patients experienced rhabdomyolysis with exercise. None experienced rhabdomyolysis or hospitalizations while on the diet. All patients returned to normal physical activities including strenuous sports. Exercise restriction was eliminated. Previously abnormal SF-36 physical composite scores returned to normal levels that persisted for the duration of the therapy in all five symptomatic patients. Conclusions: The triheptanoin diet seems to be an effective therapy for adult-onset carnitine palmitoyltransferase II deficiency. GLOSSARY ALT = alanine aminotransferase; AST = aspartate aminotransferase; ATP = adenosine triphosphate; BHP = β-hydroxypentanoate; BKP = β-ketopentanoate; BKP-CoA = β-ketopentanoyl–coenzyme A; BUN = blood urea nitrogen; CAC = citric acid cycle; CoA = coenzyme A; CPK = creatine phosphokinase; CPT II = carnitine palmitoyltransferase II; LDL = low-density lipoprotein; MCT

  15. Do You Understand Your Algorithms?

    ERIC Educational Resources Information Center

    Pickreign, Jamar; Rogers, Robert

    2006-01-01

    This article discusses relationships between the development of an understanding of algorithms and algebraic thinking. It also provides some sample activities for middle school teachers of mathematics to help promote students' algebraic thinking. (Contains 11 figures.)

  16. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)

  17. APL simulation of Grover's algorithm

    NASA Astrophysics Data System (ADS)

    Lipovaca, Samir

    2012-02-01

    Grover's algorithm is a fast quantum search algorithm. Classically, to solve the search problem for a search space of size N we need approximately N operations. Grover's algorithm offers a quadratic speedup. Since present quantum computers are not robust enough for code writing and execution, to experiment with Grover's algorithm, we will simulate it using the APL programming language. The APL programming language is especially suited for this task. For example, to compute Walsh-Hadamard transformation matrix for N quantum states via a tensor product of N Hadamard matrices we need to iterate N-1 times only one line of the code. Initial study indicates the quantum mechanical amplitude of the solution is almost independent of the search space size and rapidly reaches 0.999 values with slight variations at higher decimal places.
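    The same simulation is easy to express in other array languages; below is a numpy rendering (the article itself uses APL). The state size and marked index are arbitrary choices.

    ```python
    # Simulate Grover search on N = 2**n basis states with one marked item.
    import numpy as np

    n = 8                                   # qubits -> N = 256 states
    N = 2 ** n
    marked = 42
    psi = np.full(N, 1 / np.sqrt(N))        # uniform superposition (Walsh-Hadamard)

    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        psi[marked] *= -1                   # oracle: flip the marked amplitude
        psi = 2 * psi.mean() - psi          # diffusion: inversion about the mean

    print(iterations, abs(psi[marked]) ** 2)   # success probability close to 1
    ```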

  18. Simplified calculation of distance measure in DP algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Tao; Ren, Xian-yi; Lu, Yu-ming

    2014-01-01

    The distance measure from a point to a segment is one of the determinants of the efficiency of the DP (Douglas-Peucker) polyline simplification algorithm. A zone-divided distance measure, instead of only the perpendicular distance, was proposed by Dan Sunday [1] to address a deficiency of the original DP algorithm. A new, efficient zone-divided distance measure method is proposed in this paper. First, a rotated coordinate system is established based on the two endpoints of the curve. Second, the coordinate value of each point in the rotated system is computed. Finally, the new coordinate values are used to divide points into three zones and to calculate distance: Manhattan distance is adopted in zones I and III, and perpendicular distance in zone II. Compared with Dan Sunday's method, the proposed method can take full advantage of the computation results for the previous point. The calculation amount remains essentially unchanged for points in zones I and III, and is reduced significantly for points in zone II, which holds the highest proportion. Experimental results show that the proposed distance measure method can improve the efficiency of the original DP algorithm.
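    A hedged sketch of the zone-divided measure described above: rotate coordinates so the segment lies on the x-axis, split points into three zones, and use Manhattan distance in zones I and III but perpendicular distance in zone II. The exact zone and distance conventions of the paper may differ in detail.

    ```python
    # Zone-divided point-to-segment distance in rotated coordinates.
    import math

    def zone_distance(p, a, b):
        ax, ay = a
        bx, by = b
        theta = math.atan2(by - ay, bx - ax)
        c, s = math.cos(-theta), math.sin(-theta)
        L = math.hypot(bx - ax, by - ay)
        x = (p[0] - ax) * c - (p[1] - ay) * s    # rotated coordinates: a at origin,
        y = (p[0] - ax) * s + (p[1] - ay) * c    # b at (L, 0)
        if x < 0:                                 # zone I: before endpoint a
            return abs(x) + abs(y)                # Manhattan distance
        if x > L:                                 # zone III: beyond endpoint b
            return (x - L) + abs(y)               # Manhattan distance
        return abs(y)                             # zone II: perpendicular distance

    print(zone_distance((1.0, 1.0), (0.0, 0.0), (4.0, 0.0)))   # zone II  -> 1.0
    print(zone_distance((-1.0, 1.0), (0.0, 0.0), (4.0, 0.0)))  # zone I   -> 2.0
    ```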

  19. What is a Systolic Algorithm?

    NASA Astrophysics Data System (ADS)

    Rao, Sailesh K.; Kollath, T.

    1986-07-01

    In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and, conversely, that every such algorithm can be implemented on a systolic array. This characterization provides us with a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts that were introduced in the sixties by Hennie, Waite and Karp, Miller and Winograd, to the present day concern of systolic array

  20. Programming the gradient projection algorithm

    NASA Technical Reports Server (NTRS)

    Hargrove, A.

    1983-01-01

    The gradient projection method of numerical optimization which is applied to problems having linear constraints but nonlinear objective functions is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large scale systems with severe nonlinearities. In order to verify the theoretical results a digital computer is used to simulate the algorithm.

  1. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  2. Inversion Algorithms for Geophysical Problems

    DTIC Science & Technology

    1987-12-16

    OCR-damaged report documentation page; recoverable details: "Inversion Algorithms for Geophysical Problems" (U), final report by Paolo Lanzano, Space Science Division, Naval Research Laboratory, Washington, DC 20375-5000; NRL Memorandum Report 6138; distribution unclassified/unlimited; the abstract mentions spectral density.

  3. Label Ranking Algorithms: A Survey

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar; Gärtner, Thomas

    Label ranking is a complex prediction task where the goal is to map instances to a total order over a finite set of predefined labels. An interesting aspect of this problem is that it subsumes several supervised learning problems, such as multiclass prediction, multilabel classification, and hierarchical classification. Unsurprisingly, there exists a plethora of label ranking algorithms in the literature due, in part, to this versatile nature of the problem. In this paper, we survey these algorithms.

  4. A retrodictive stochastic simulation algorithm

    SciTech Connect

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-05-20

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  5. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a purpose similar to that of tactics for determining indefinite integrals in calculus: they suggest possible ways to attack the problem.

  6. Rotational Invariant Dimensionality Reduction Algorithms.

    PubMed

    Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David

    2016-06-30

    A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L₂ norm as the metric. In this paper, a series of methods based on the L₂,₁-norm are proposed for linear dimensionality reduction. Since the L₂,₁-norm based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper indicates that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with previous L₂ norm based subspace learning algorithms.
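
    The norm at the heart of the abstract is easy to state concretely. The sketch below (with a synthetic outlier row) only illustrates why row-wise L2 norms summed linearly are less dominated by outliers than the squared Frobenius norm; it is not the authors' algorithm.

      import numpy as np

      def l21_norm(X):
          # L2,1 norm: sum of the row-wise L2 norms; an outlier row contributes
          # linearly, not quadratically as in the squared Frobenius norm.
          return np.sum(np.linalg.norm(X, axis=1))

      X = np.vstack([np.random.randn(99, 5), 100 * np.ones((1, 5))])  # one outlier row
      print(l21_norm(X))              # grows linearly with the outlier magnitude
      print(np.linalg.norm(X) ** 2)   # squared norm is dominated by the outlier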

  7. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces sensitivity to the cluster size in the niching methods. Generating offspring at the niche level by alternating between Gaussian and Cauchy distributions can likewise balance exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
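
    The second technique, alternating Gaussian and Cauchy sampling at the niche level, can be sketched as follows; the niche representation and all parameters are illustrative assumptions, not the paper's configuration.

      import numpy as np

      rng = np.random.default_rng(0)

      def niche_offspring(niche, use_cauchy):
          # Sample offspring around the niche's estimated distribution (the EDA
          # step); heavy-tailed Cauchy draws explore, Gaussian draws exploit.
          mean, std = niche.mean(axis=0), niche.std(axis=0) + 1e-12
          if use_cauchy:
              return mean + std * rng.standard_cauchy(niche.shape)
          return rng.normal(mean, std, size=niche.shape)

      niche = rng.normal(5.0, 0.5, size=(20, 3))   # individuals of one niche
      for gen in range(4):                         # alternate the two samplers
          niche = niche_offspring(niche, use_cauchy=(gen % 2 == 1))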

  8. Multi-Objective Scheduling for the Cluster II Constellation

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Giuliano, Mark

    2011-01-01

    This paper describes the application of the MUSE multiobjective scheduling framework to the Cluster II WBD scheduling domain. Cluster II is an ESA four-spacecraft constellation designed to study the plasma environment of the Earth and its magnetosphere. One of the instruments on each of the four spacecraft is the Wide Band Data (WBD) plasma wave experiment. We have applied the MUSE evolutionary algorithm to the scheduling problem represented by this instrument, and the result has been adopted and utilized by the WBD schedulers for nearly a year. This paper describes the WBD scheduling problem, its representation in MUSE, and some of the visualization elements that provide insight into objective value tradeoffs.

  9. A study of image reconstruction algorithms for hybrid intensity interferometers

    NASA Astrophysics Data System (ADS)

    Crabtree, Peter N.; Murray-Krezan, Jeremy; Picard, Richard H.

    2011-09-01

    Phase retrieval is explored for image reconstruction using outputs from both a simulated intensity interferometer (II) and a hybrid system that combines the II outputs with partially resolved imagery from a traditional imaging telescope. Partially resolved imagery provides an additional constraint for the iterative phase retrieval process, as well as an improved starting point. The benefits of this additional a priori information are explored and include lower residual phase error for SNR values above 0.01, increased sensitivity, and improved image quality. Results are also presented for image reconstruction from II measurements alone, via current state-of-the-art phase retrieval techniques. These results are based on the standard hybrid input-output (HIO) algorithm, as well as a recent enhancement to HIO that optimizes step lengths in addition to step directions. The additional step length optimization yields a reduction in residual phase error, but only for SNR values greater than about 10. Image quality for all algorithms studied is quite good for SNR>=10, but it should be noted that the studied phase-recovery techniques yield useful information even for SNRs that are much lower.
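
    The baseline named in the abstract, Fienup's hybrid input-output (HIO) iteration, is compact enough to sketch. The support mask, nonnegativity constraint, beta value, and toy target below are illustrative assumptions, not the authors' setup.

      import numpy as np

      def hio_step(g, fourier_modulus, support, beta=0.9):
          # One HIO iteration: impose the measured Fourier modulus, then keep
          # pixels satisfying the object-domain constraints and apply negative
          # feedback (g - beta * g') everywhere else.
          G = np.fft.fft2(g)
          G = fourier_modulus * np.exp(1j * np.angle(G))
          g_prime = np.real(np.fft.ifft2(G))
          ok = support & (g_prime >= 0)
          return np.where(ok, g_prime, g - beta * g_prime)

      # Toy usage: recover an image from the modulus of its own transform.
      truth = np.zeros((32, 32)); truth[12:20, 10:22] = 1.0
      modulus = np.abs(np.fft.fft2(truth))
      support = np.zeros_like(truth, dtype=bool); support[8:24, 8:24] = True
      g = np.random.rand(32, 32) * support
      for _ in range(200):
          g = hio_step(g, modulus, support)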

  10. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal
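
    The conventional SA loop the abstract describes (random start, temperature-controlled acceptance of worse moves, shrinking neighborhood) can be sketched as follows; the schedule, neighborhood rule, and test objective are illustrative choices, not the RBSA innovation itself.

      import math, random

      def simulated_annealing(objective, lo, hi, steps=10000):
          # Minimize `objective` on [lo, hi]: accept better moves always, worse
          # moves with Boltzmann probability, while both the temperature and
          # the sampling neighborhood shrink over the run.
          x = random.uniform(lo, hi)
          fx = objective(x)
          for k in range(steps):
              T = 1.0 * (1 - k / steps) + 1e-9            # annealing schedule
              radius = (hi - lo) * (1 - k / steps) / 2    # shrinking neighborhood
              y = min(hi, max(lo, x + random.uniform(-radius, radius)))
              fy = objective(y)
              if fy < fx or random.random() < math.exp(-(fy - fx) / T):
                  x, fx = y, fy
          return x, fx

      print(simulated_annealing(lambda x: (x - 2) ** 2 + math.sin(5 * x), -10, 10))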

  11. About APPLE II Operation

    SciTech Connect

    Schmidt, T.; Zimoch, D.

    2007-01-19

    The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing automated calculation of gap and shift parameters as a function of energy and operation mode. The extension of the linear polarization range from 0 to 180 deg. requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed-gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes, allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.

  12. Mod II engine performance

    NASA Technical Reports Server (NTRS)

    Richey, Albert E.; Huang, Shyan-Cherng

    1987-01-01

    The testing of a prototype of an automotive Stirling engine, the Mod II, is discussed. The Mod II is a one-piece cast block with a V-4 single-crankshaft configuration and an annular regenerator/cooler design. The initial testing of Mod II concentrated on the basic engine, with auxiliaries driven by power sources external to the engine. The performance of the engine was tested at 720 C set temperature and 820 C tube temperature. At 720 C, it is observed that the power deficiency is speed dependent and linear, with a weak pressure dependency, and at 820 C, the power deficiency is speed and pressure dependent. The effects of buoyancy and nozzle spray pattern on the heater temperature spread are investigated. The characterization of the oil pump and the operating cycle and temperature spread tests are proposed for further evaluation of the engine.

  13. Evaluation of chlorophyll-a retrieval algorithms based on MERIS bands for optically varying eutrophic inland lakes.

    PubMed

    Lyu, Heng; Li, Xiaojun; Wang, Yannan; Jin, Qi; Cao, Kai; Wang, Qiao; Li, Yunmei

    2015-10-15

    Fourteen field campaigns were conducted in five inland lakes during different seasons between 2006 and 2013, and a total of 398 water samples with varying optical characteristics were collected. The characteristics were analyzed based on remote sensing reflectance, and an automatic cluster two-step method was applied for water classification. The inland waters could be clustered into three types, labeled water types I, II and III. From water type I to III, the effect of phytoplankton on the optical characteristics gradually decreased. Four chlorophyll-a retrieval algorithms for Case II water (two-band, three-band, four-band, and SCI, the Synthetic Chlorophyll Index) were evaluated for the three water types based on the MERIS bands. Different MERIS bands were used for the three water types in each of the four algorithms. The four algorithms had different levels of retrieval accuracy for each water type, and no single algorithm could be successfully applied to all water types. For water types I and III, the three-band algorithm performed best, while the four-band algorithm had the highest retrieval accuracy for water type II. However, the three-band algorithm is preferable to the two-band algorithm for turbid eutrophic inland waters. The SCI algorithm is recommended for highly turbid water with a higher concentration of total suspended solids. Our research indicates that chlorophyll-a concentration retrieval by remote sensing for optically contrasting inland waters requires an algorithm tailored to the optical characteristics of the water body to obtain higher estimation accuracy.
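
    As a concrete illustration, band-ratio chlorophyll-a models of this family often take a three-band Gitelson-type form using MERIS red and near-infrared bands. The band wavelengths, reflectance values, and the separately calibrated regression mentioned below are assumptions, since the abstract does not give the exact band assignments per water type.

      def three_band_index(r665, r708, r753):
          # General Gitelson-type three-band form, (1/R1 - 1/R2) * R3, with
          # MERIS red/NIR bands near 665, 708 and 753 nm (assumed); chl-a is
          # then obtained from a regression calibrated on in-situ data.
          return (1.0 / r665 - 1.0 / r708) * r753

      # Hypothetical reflectances for a turbid eutrophic sample:
      print(three_band_index(0.020, 0.034, 0.028))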

  14. SAGE II Ozone Analysis

    NASA Technical Reports Server (NTRS)

    Cunnold, Derek; Wang, Ray

    2002-01-01

    Publications from 1999-2002 describing research funded by the SAGE II contract to Dr. Cunnold and Dr. Wang are listed below. Our most recent accomplishments include a detailed analysis of the quality of SAGE II, v6.1, ozone measurements below 20 km altitude (Wang et al., 2002 and Kar et al., 2002) and an analysis of the consistency between SAGE upper stratospheric ozone trends and model predictions with emphasis on hemispheric asymmetry (Li et al., 2001). Abstracts of the 11 papers are attached.

  15. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
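
    The role of the dependency assumptions can be made concrete for conjunction. The sketch below is a generic illustration, not the paper's formulation: it contrasts statistical independence, minimum overlay (the Frechet lower bound), and maximum overlay (the fuzzy-logic minimum).

      def combine(pa, pb, mode):
          # P(A and B) under three different dependency assumptions.
          if mode == "independent":
              return pa * pb                    # statistically independent
          if mode == "min_overlap":
              return max(0.0, pa + pb - 1.0)    # minimum overlay (Frechet bound)
          if mode == "max_overlap":
              return min(pa, pb)                # maximum overlay (fuzzy logic)
          raise ValueError(mode)

      for m in ("independent", "min_overlap", "max_overlap"):
          print(m, combine(0.7, 0.8, m))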

  16. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms that use the LPT. In the proposed algorithm, the star pattern of a given navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithms of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant No. CXY1350(4)).

  17. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  18. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida (Published Proceedings)

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  19. The Icarus challenge - Predicting vulnerability to climate change using an algorithm-based species' trait approach

    EPA Science Inventory

    The Icarus challenge - Predicting vulnerability to climate change using an algorithm-based species' trait approach. Henry Lee II, Christina Folger, Deborah A. Reusser, Patrick Clinton, and Rene Graham, U.S. EPA, Western Ecology Division, Newport, OR, USA. E-mail: lee.henry@ep...

  20. A Fast Algorithm for Exonic Regions Prediction in DNA Sequences

    PubMed Central

    Saberkari, Hamidreza; Shamsi, Mousa; Heravi, Hamed; Sedaaghi, Mohammad Hossein

    2013-01-01

    The main purpose of this paper is to introduce a fast method for gene prediction in DNA sequences based on the period-3 property of exons. First, the symbolic DNA sequences were converted to digital signals using the electron-ion interaction potential method. Then, to reduce the effect of background noise in the period-3 spectrum, the discrete wavelet transform was applied to the input digital signal at three levels. Finally, the Goertzel algorithm was used to extract period-3 components in the filtered DNA sequence. The proposed algorithm decreases the computational complexity and hence increases the speed of the process. Accurate detection of small exons in DNA sequences is another advantage of the algorithm. The proposed algorithm's ability in exon prediction was compared with several existing methods at the nucleotide level using: (i) specificity-sensitivity values; (ii) receiver operating curves (ROC); and (iii) area under the ROC curve. Simulation results confirmed that the proposed method can be used as a promising tool for exon prediction in DNA sequences. PMID:24672762
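
    The final step is the standard Goertzel recurrence evaluated at the period-3 frequency (1/3 cycle per sample), which is cheaper than a full FFT when only one frequency bin is needed. The sketch below is a generic Goertzel implementation, not the paper's code, and the test signal is synthetic.

      import math

      def goertzel_power(x, freq):
          # Power of a single DFT bin at normalized frequency `freq`
          # (cycles per sample); for the period-3 exon property, freq = 1/3.
          coeff = 2.0 * math.cos(2.0 * math.pi * freq)
          s_prev = s_prev2 = 0.0
          for sample in x:
              s = sample + coeff * s_prev - s_prev2
              s_prev2, s_prev = s_prev, s
          return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

      signal = [1.0, 0.0, 0.0] * 60          # strong period-3 component
      print(goertzel_power(signal, 1.0 / 3.0))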

  1. Algorithmic Coordination in Robotic Networks

    DTIC Science & Technology

    2010-11-29

    OCR-damaged front matter (proposal summary and table of contents); recoverable details: report dated November 29, 2010 (motion.me.ucsb.edu); technical accomplishments organized in four main thrusts, the first being dynamic vehicle routing and target assignment, with the supporting journal publications listed in Section 3.

  2. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
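
    The contrast between the two scheduling styles can be sketched in a few lines; the request structure, priority rule, and overlap test below are illustrative assumptions rather than the DSN implementation.

      from dataclasses import dataclass, field

      @dataclass
      class Request:
          name: str
          start: int
          end: int
          priority: int
          conflicts: set = field(default_factory=set)

      def conflict_aware_schedule(requests):
          # Place every request on the timeline and record overlaps instead of
          # rejecting them, so missions can see exactly what to negotiate.
          for i, a in enumerate(requests):
              for b in requests[i + 1:]:
                  if a.start < b.end and b.start < a.end:   # time overlap
                      a.conflicts.add(b.name)
                      b.conflicts.add(a.name)
          return requests

      def to_conflict_free(requests):
          # Derive a conflict-free schedule by keeping higher-priority requests
          # and dropping whatever still conflicts with something already kept.
          keep = []
          for r in sorted(requests, key=lambda r: -r.priority):
              if all(k.name not in r.conflicts for k in keep):
                  keep.append(r)
          return keep

      reqs = conflict_aware_schedule([Request("A", 0, 5, 2), Request("B", 3, 8, 1)])
      print([r.name for r in to_conflict_free(reqs)])       # ['A']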

  3. Fourier Lucas-Kanade algorithm.

    PubMed

    Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha

    2013-06-01

    In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).

  4. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets show that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  5. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and the SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  6. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  7. Parallel job-scheduling algorithms

    SciTech Connect

    Rodger, S.H.

    1989-01-01

    In this thesis, we consider solving job scheduling problems on the CREW PRAM model. We show how to adapt Cole's pipeline merge technique to yield several efficient parallel algorithms for a number of job scheduling problems and one optimal parallel algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and processing times, find a schedule that minimizes the maximum lateness of the jobs and allows preemption when the jobs are scheduled to run on one machine. In addition, we present the first NC algorithm for the following job scheduling problem: Given a set of n jobs defined by release times, deadlines and unit processing times, determine if there is a schedule of jobs on one machine, and calculate the schedule if it exists. We identify the notion of a canonical schedule, which is the type of schedule our algorithm computes if there is a schedule. Our algorithm runs in O((log n){sup 2}) time and uses O(n{sup 2}k{sup 2}) processors, where k is the minimum number of distinct offsets of release times or deadlines.
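
    For context, the core sequential problem (minimizing maximum lateness with release times and preemption on one machine) is solved by the classical preemptive earliest-deadline-first rule. The sketch below is that textbook sequential rule under illustrative job tuples, not the thesis's PRAM algorithm.

      import heapq

      def min_max_lateness(jobs):
          # Preemptive EDF on one machine: always run the ready job with the
          # earliest deadline, preempting when a new release arrives.
          events = sorted(jobs)                    # (release, deadline, processing)
          t, i, ready, max_late = 0, 0, [], float("-inf")
          while i < len(events) or ready:
              if not ready and t < events[i][0]:
                  t = events[i][0]                 # idle until the next release
              while i < len(events) and events[i][0] <= t:
                  _, d, p = events[i]
                  heapq.heappush(ready, (d, p))
                  i += 1
              d, p = heapq.heappop(ready)
              horizon = events[i][0] if i < len(events) else t + p
              run = min(p, horizon - t)            # run until done or next release
              t += run
              if run < p:
                  heapq.heappush(ready, (d, p - run))
              else:
                  max_late = max(max_late, t - d)
          return max_late

      print(min_max_lateness([(0, 10, 4), (1, 5, 3), (2, 12, 2)]))   # -1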

  8. Using Alternative Multiplication Algorithms to "Offload" Cognition

    ERIC Educational Resources Information Center

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  9. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  10. Instant Insanity II

    ERIC Educational Resources Information Center

    Richmond, Tom; Young, Aaron

    2013-01-01

    "Instant Insanity II" is a sliding mechanical puzzle whose solution requires the special alignment of 16 colored tiles. We count the number of solutions of the puzzle's classic challenge and show that the more difficult ultimate challenge has, up to row permutation, exactly two solutions, and further show that no…

  11. Dissecting Diversity Part II

    ERIC Educational Resources Information Center

    Matthews, Frank

    2005-01-01

    This article presents "Dissecting Diversity, Part II," the conclusion of a wide-ranging two-part roundtable discussion on diversity in higher education. The participants were as follows: Lezli Baskerville, J.D., President and CEO of the National Association for Equal Opportunity (NAFEO); Dr. Gerald E. Gipp, Executive Director of the…

  12. Listen & Learn II.

    ERIC Educational Resources Information Center

    Community Building Resources, Spruce Grove (Alberta).

    Six community builders in Edmonton, Alberta, planned, developed, and implemented Listen and Learn II, a reflective research project in asset-based community building, over a 6-month period in 1998. They met regularly over 2 months to plan the research and design a method that was open to participation at any stage, encouraged exchange of…

  13. A la Mode II.

    ERIC Educational Resources Information Center

    Stowe, Richard A.

    This paper describes two modes of educational decision-making: Mode I, in which the instructor makes such decisions as what to teach, to whom, when, in what order, at what pace, and at what complexity level; and Mode II, in which the learner makes the decisions. While Mode I comprises most of what is regarded as formal education, the learner in…

  14. Periodontics II: Course Proposal.

    ERIC Educational Resources Information Center

    Dordick, Bruce

    A proposal is presented for Periodontics II, a course offered at the Community College of Philadelphia to give the dental hygiene/assisting student an understanding of the disease states of the periodontium and their treatment. A standardized course proposal cover form is given, followed by a statement of purpose for the course, a list of major…

  15. Class II Microcins

    NASA Astrophysics Data System (ADS)

    Vassiliadis, Gaëlle; Destoumieux-Garzón, Delphine; Peduzzi, Jean

    Class II microcins are 4.9- to 8.9-kDa polypeptides produced by and active against enterobacteria. They are classified into two subfamilies according to their structure and their gene cluster arrangement. While class IIa microcins undergo no posttranslational modification, class IIb microcins show a conserved C-terminal sequence that carries a salmochelin-like siderophore motif as a posttranslational modification. Aside from this C-terminal end, which is the signature of class IIb microcins, some sequence similarities can be observed within and between class II subclasses, suggesting the existence of common ancestors. Their mechanisms of action are still under investigation, but several class II microcins use inner membrane proteins as cellular targets, and some of them are membrane-active. Like group B colicins, many, if not all, class II microcins are TonB- and energy-dependent and use catecholate siderophore receptors for recognition/translocation across the outer membrane. In that context, class IIb microcins are considered to have developed molecular mimicry to increase their affinity for their outer membrane receptors through their salmochelin-like posttranslational modification.

  16. Inhibitory role of peroxiredoxin II (Prx II) on cellular senescence.

    PubMed

    Han, Ying-Hao; Kim, Hyun-Sun; Kim, Jin-Man; Kim, Sang-Keun; Yu, Dae-Yeul; Moon, Eun-Yi

    2005-08-29

    Reactive oxygen species (ROS) are generated in all oxygen-utilizing organisms. Peroxiredoxin II (Prx II), one of the antioxidant enzymes, may play a protective role against the oxidative damage caused by ROS. In order to define the role of Prx II in organismal aging, we evaluated cellular senescence in Prx II(-/-) mouse embryonic fibroblasts (MEF). As compared to wild type MEF, cellular senescence was accelerated in Prx II(-/-) MEF. Senescence-associated (SA)-beta-galactosidase (Gal)-positive cell formation was about 30% higher in Prx II(-/-) MEF. N-Acetyl-l-cysteine (NAC) treatment attenuated SA-beta-Gal-positive cell formation. Prx II(-/-) MEF exhibited higher G2/M (41%) and lower S (1.6%) phase cell fractions, as compared to 24% and 7.3% [corrected] in wild type MEF, respectively. A large increase in p16 and slight increases in the p21 and p53 levels were detected in Prx II(-/-) MEF cells. The cellular senescence of Prx II(-/-) MEF was correlated with the organismal aging of Prx II(-/-) mouse skin. While extracellular signal-regulated kinase (ERK) and p38 activation was detected in Prx II(-/-) MEF, ERK and c-Jun N-terminal kinase (JNK) activation was detected in Prx II(-/-) skin. These results suggest that Prx II may function as an enzymatic antioxidant to prevent cellular senescence and skin aging.

  17. Fault Tolerant Statistical Signal Processing Algorithms for Parallel Architectures.

    DTIC Science & Technology

    2014-09-26

    OCR-damaged report documentation page; recoverable details: "Fault Tolerant Statistical Signal Processing Algorithms for Parallel Architectures", technical report, Johns Hopkins Univ., Baltimore, MD, Dept. of Electrical ...; keywords: fault tolerance, signal processing, parallel architecture.

  18. A Successive Shortest Path Algorithm for the Assignment Problem.

    DTIC Science & Technology

    1980-08-01

    OCR-damaged abstract; recoverable details: the method is a refinement of the Dinic-Kronrod algorithm [7], and SSP was used to develop a computer code which is very efficient for solving large, sparse ... (text breaks off). The remainder contains a shortest-path-tree example (Fig. 1) and the start of a definition of the modified assignment problem relative to (C,A), which minimizes the sum of c_ij x_ij.

  19. Structure Learning and Statistical Estimation in Distribution Networks - Part II

    SciTech Connect

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    2015-02-13

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), learning the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters of the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  20. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.

  1. Parallel Algorithms for Computer Vision on the Connection Machine.

    DTIC Science & Technology

    1986-11-01

    OCR-damaged front matter and abstract; recoverable details: "Parallel Algorithms for Computer Vision on the Connection Machine", MIT Artificial Intelligence Lab, J. J. Little, AI Memo 928, November 1986; the legible fragments mention connected component labeling and per-step timings for edge detection (convolution, finding zero-crossings, label propagation).

  2. Next Generation Suspension Dynamics Algorithms

    SciTech Connect

    Schunk, Peter Randall; Higdon, Jonathon; Chen, Steven

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  3. Optimizing connected component labeling algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Kesheng; Otoo, Ekow; Shoshani, Arie

    2005-04-01

    This paper presents two new strategies that can be used to greatly improve the speed of connected component labeling algorithms. To assign a label to a new object, most connected component labeling algorithms use a scanning step that examines some of its neighbors. The first strategy exploits the dependencies among the neighbors to reduce the number examined. When considering 8-connected components in a 2D image, this can reduce the number of neighbors examined from four to one in many cases. The second strategy uses an array to store the equivalence information among the labels, replacing the pointer-based rooted trees used to store the same equivalence information. It reduces the memory required and also produces consecutive final labels. Using an array instead of pointer-based rooted trees speeds up the connected component labeling algorithms by a factor of 5 to 100 in our tests on random binary images.
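
    The second strategy (a flat array in place of pointer-based rooted trees for label equivalences) can be sketched with a two-pass labeler. For brevity the sketch uses 4-connectivity and a plain union-find, so the neighbor-pruning of the first strategy is not shown.

      import numpy as np

      def label_components(img):
          # Two-pass 4-connected labeling; label equivalences live in the flat
          # `parent` array (union-find) rather than pointer-based trees, and the
          # second pass renumbers roots so final labels come out consecutive.
          parent = [0]
          def find(i):
              while parent[i] != i:
                  i = parent[i]
              return i
          labels = np.zeros(img.shape, dtype=int)
          next_label = 1
          for r in range(img.shape[0]):
              for c in range(img.shape[1]):
                  if not img[r, c]:
                      continue
                  up = labels[r - 1, c] if r else 0
                  left = labels[r, c - 1] if c else 0
                  if up and left:
                      a, b = find(up), find(left)
                      labels[r, c] = min(a, b)
                      parent[max(a, b)] = min(a, b)    # record equivalence
                  elif up or left:
                      labels[r, c] = find(up or left)
                  else:
                      parent.append(next_label)        # new provisional label
                      labels[r, c] = next_label
                      next_label += 1
          final = {0: 0}
          for i in range(1, next_label):
              final[i] = final.setdefault(find(i), len(final))
          return np.vectorize(lambda v: final[v])(labels)

      print(label_components(np.array([[1, 1, 0, 1], [0, 1, 0, 1], [0, 0, 0, 1]])))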

  4. Learning with the ratchet algorithm.

    SciTech Connect

    Hush, D. R.; Scovel, James C.

    2003-01-01

    This paper presents a randomized algorithm called Ratchet that asymptotically minimizes (with probability 1) functions that satisfy a positive-linear-dependent (PLD) property. We establish the PLD property and a corresponding realization of Ratchet for a generalized loss criterion for both linear machines and linear classifiers. We describe several learning criteria that can be obtained as special cases of this generalized loss criterion, e.g. classification error, classification loss and weighted classification error. We also establish the PLD property and a corresponding realization of Ratchet for the Neyman-Pearson criterion for linear classifiers. Finally we show how, for linear classifiers, the Ratchet algorithm can be derived as a modification of the Pocket algorithm.

  5. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  6. Algorithm Development Library for Environmental Satellite Missions

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, the Joint Polar Satellite System replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by the National Oceanic and Atmospheric Administration and the ground processing component of both Polar-orbiting Operational Environmental Satellites and the Defense Meteorological Satellite Program (DMSP) replacement, previously known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and an Interface Data Processing Segment (IDPS). Both segments are developed by Raytheon Intelligence and Information Systems (IIS). The C3S currently flies the Suomi National Polar Partnership (Suomi NPP) satellite and transfers mission data from Suomi NPP and between the ground facilities. The IDPS processes Suomi NPP satellite data to provide Environmental Data Records (EDRs) to NOAA and DoD processing centers operated by the United States government. When the JPSS-1 satellite is launched in early 2017, the responsibilities of the C3S and the IDPS will be expanded to support both Suomi NPP and JPSS-1. The EDRs for Suomi NPP are currently undergoing an extensive Calibration and Validation (Cal/Val) campaign. As Cal/Val proceeds, changes to the

  7. Application of heuristic optimization techniques and algorithm tuning to multilayered sorptive barrier design.

    PubMed

    Matott, L Shawn; Bartelt-Hunt, Shannon L; Rabideau, Alan J; Fowler, K R

    2006-10-15

    Although heuristic optimization techniques are increasingly applied in environmental engineering applications, algorithm selection and configuration are often approached in an ad hoc fashion. In this study, the design of a multilayer sorptive barrier system served as a benchmark problem for evaluating several algorithm-tuning procedures, as applied to three global optimization techniques (genetic algorithms, simulated annealing, and particle swarm optimization). Each design problem was configured as a combinatorial optimization in which sorptive materials were selected for inclusion in a landfill liner to minimize the transport of three common organic contaminants. Relative to multilayer sorptive barrier design, study results indicate (i) the binary-coded genetic algorithm is highly efficient and requires minimal tuning, (ii) constraint violations must be carefully integrated to avoid poor algorithm convergence, and (iii) search algorithm performance is strongly influenced by the physical-chemical properties of the organic contaminants of concern. More generally, the results suggest that formal algorithm tuning, which has not been widely applied to environmental engineering optimization, can significantly improve algorithm performance and provide insight into the physical processes that control environmental systems.

  8. Two Algorithms for Processing Electronic Nose Data

    NASA Technical Reports Server (NTRS)

    Young, Rebecca; Linnell, Bruce

    2007-01-01

    Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.

  9. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 Hyper Cube with 64 nodes. It is interesting that the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.

  10. Adaptive-feedback control algorithm.

    PubMed

    Huang, Debin

    2006-06-01

    This paper is motivated by giving detailed proofs of, and some interesting remarks on, the results the author obtained in a series of papers [Phys. Rev. Lett. 93, 214101 (2004); Phys. Rev. E 71, 037203 (2005); 69, 067201 (2004)], where an adaptive-feedback algorithm was proposed to effectively stabilize and synchronize chaotic systems. This note proves in detail the rigor of this algorithm from the viewpoint of mathematics, and gives some interesting remarks on its potential applications to chaos control and synchronization. In addition, a significant comment on synchronization-based parameter estimation is given, which shows that some techniques proposed in the literature are less rigorous and ineffective in some cases.
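
    A hedged sketch of an adaptive-feedback synchronization law of the kind analyzed (a feedback gain on each state variable that grows with the squared synchronization error) is shown below for a pair of Lorenz systems. The gain rate gamma, step size, and initial conditions are illustrative, and the scheme is stated from the general literature rather than from the paper's exact equations.

      import numpy as np

      def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      dt, gamma = 1e-3, 2.0
      x = np.array([1.0, 1.0, 1.0])        # drive system state
      y = np.array([-5.0, 3.0, 9.0])       # response system state
      k = np.zeros(3)                      # adaptive feedback gains
      for _ in range(200_000):
          e = x - y
          x = x + dt * lorenz(x)                   # drive evolves freely
          y = y + dt * (lorenz(y) + k * e)         # response plus feedback k*(x-y)
          k = k + dt * gamma * e * e               # gain adaptation law
      print(np.abs(x - y).max())   # error should be near zero after a transient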

  11. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  12. Deceptiveness and genetic algorithm dynamics

    SciTech Connect

    Liepins, G.E. ); Vose, M.D. )

    1990-01-01

    We address deceptiveness, one of at least four reasons genetic algorithms can fail to converge to function optima. We construct fully deceptive functions and other functions of intermediate deceptiveness. For the fully deceptive functions of our construction, we generate linear transformations that induce changes of representation to render the functions fully easy. We further model genetic algorithm selection and recombination as the interleaving of linear and quadratic operators. Spectral analysis of the underlying matrices allows us to draw preliminary conclusions about fixed points and their stability. We also obtain an explicit formula relating the nonuniform Walsh transform to the dynamics of genetic search. 21 refs.

  13. An algorithm for haplotype analysis

    SciTech Connect

    Lin, Shili; Speed, T.P.

    1997-12-01

    This paper proposes an algorithm for haplotype analysis based on a Monte Carlo method. Haplotype configurations are generated according to the distribution of joint haplotypes of individuals in a pedigree given their phenotype data, via a Markov chain Monte Carlo algorithm. The haplotype configuration which maximizes this conditional probability distribution can thus be estimated. In addition, the set of haplotype configurations with relatively high probabilities can also be estimated as possible alternatives to the most probable one. This flexibility enables geneticists to choose the haplotype configurations which are most reasonable to them, allowing them to include their knowledge of the data under analysis. 18 refs., 2 figs., 1 tab.

  14. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
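
    In simulation, the two checks read roughly as below; the probe ordering and background pattern are illustrative, and the original's cycle-efficient 384-cycle ordering is not reproduced.

      def test_memory(mem, word_bits=16):
          # Simulated check that (1) every bit of every word can be cleared and
          # set, and (2) a write to one word does not disturb any other word.
          n = len(mem)
          all_set = (1 << word_bits) - 1
          for pattern in (0, all_set):             # clear all, then set all
              for i in range(n):
                  mem[i] = pattern
              if any(w != pattern for w in mem):
                  return False
          for bit in range(word_bits):             # walking 1 on a cleared block
              probe = 1 << bit
              for i in range(n):
                  mem[i] = 0
              mem[n // 2] = probe                  # write exactly one word
              if mem[n // 2] != probe or any(mem[i] for i in range(n) if i != n // 2):
                  return False
          return True

      print(test_memory([0] * 1024))   # True for this fault-free simulated block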

  15. Gossip algorithms in quantum networks

    NASA Astrophysics Data System (ADS)

    Siomau, Michael

    2017-01-01

    "Gossip algorithms" is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the dissemination of quantum information to be sped up, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.

  16. Role of Bound Zn(II) in the CadC Cd(II)/Pb(II)/Zn(II)-Responsive Repressor

    SciTech Connect

    Kandegedara, A.; Thiyagarajan, S; Kondapalli, K; Stemmler, T; Rosen, B

    2009-01-01

    The Staphylococcus aureus plasmid pI258 cadCA operon encodes a P-type ATPase, CadA, that confers resistance to Cd(II)/Pb(II)/Zn(II). Expression is regulated by CadC, a homodimeric repressor that dissociates from the cad operator/promoter upon binding of Cd(II), Pb(II), or Zn(II). CadC is a member of the ArsR/SmtB family of metalloregulatory proteins. The crystal structure of CadC shows two types of metal binding sites, termed Site 1 and Site 2, and the homodimer has two of each. Site 1 is the physiological inducer binding site. The two Site 2 metal binding sites are formed at the dimerization interface. Site 2 is not regulatory in CadC but is regulatory in the homologue SmtB. Here the role of each site was investigated by mutagenesis. Both sites bind either Cd(II) or Zn(II). However, Site 1 has higher affinity for Cd(II) over Zn(II), and Site 2 prefers Zn(II) over Cd(II). Site 2 is not required for either derepression or dimerization. The crystal structure of the wild type with bound Zn(II) and of a mutant lacking Site 2 was compared with the SmtB structure with and without bound Zn(II). We propose that an arginine residue allows for Zn(II) regulation in SmtB and, conversely, a glycine results in a lack of regulation by Zn(II) in CadC. We further propose that a glycine residue was ancestral, whether the repressor binds Zn(II) at Site 2 like CadC or has no Site 2 like the paralogous ArsR, implying that acquisition of regulatory ability in SmtB was a more recent evolutionary event.

  17. Lattice Boltzmann algorithm for continuum multicomponent flow

    NASA Astrophysics Data System (ADS)

    Halliday, I.; Hollis, A. P.; Care, C. M.

    2007-08-01

    We present a multicomponent lattice Boltzmann simulation for continuum fluid mechanics, paying particular attention to the component segregation part of the underlying algorithm. In the principal result of this paper, the dynamics of a component index, or phase field, is obtained for a segregation method after U. D’Ortona [Phys. Rev. E 51, 3718 (1995)], due to Latva-Kokko and Rothman [Phys. Rev. E 71 056702 (2005)]. The said dynamics accord with a simulation designed to address multicomponent flow in the continuum approximation and underwrite improved simulation performance in two main ways: (i) by reducing the interfacial microcurrent activity considerably and (ii) by facilitating simulational access to regimes of flow with a low capillary number and drop Reynolds number [I. Halliday, R. Law, C. M. Care, and A. Hollis, Phys. Rev. E 73, 056708 (2006)]. The component segregation method studied, used in conjunction with Lishchuk’s method [S. V. Lishchuk, C. M. Care, and I. Halliday, Phys. Rev. E 67, 036701 (2003)], produces an interface, which is distributed in terms of its component index; however, the hydrodynamic boundary conditions which emerge are shown to support the notion of a sharp, unstructured, continuum interface.

  18. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    SciTech Connect

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) same dose coverage to the target volume (named the same dose method) and (ii) same monitor units in all algorithms (named the same monitor units method), were used for studying the performance of seven different dose calculation algorithms in XiO and Eclipse treatment planning systems. The seven dose calculation algorithms include the Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB and Pencil Beam (PB) algorithms. Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off index. In addition, the doses to critical structures like the lungs, heart, oesophagus and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean±stdev of the conformity index for the Superposition, Fast Superposition, Clarkson and FFT Convolution algorithms were 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59 respectively, whereas for AAA, Pencil Beam and Acuros XB they were 1.4±0.27, 1.66±0.27 and 1.35±0.24 respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution and Pencil Beam algorithms showed large differences compared to the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of

  19. Lattice study for the HLS-II storage ring

    NASA Astrophysics Data System (ADS)

    Bai, Zheng-He; Wang, Lin; Jia, Qi-Ka; Li, Wei-Min

    2013-04-01

    The Hefei Light Source (HLS) is undergoing a major upgrade project, named HLS-II, in order to obtain lower emittance and more insertion device straight sections. Undulators are the main insertion devices in the HLS-II storage ring. In this paper, based on the database of lattice parameters built for the HLS-II storage ring obtained by the global scan method, we use the quantity related to the undulator radiation brightness to more directly search for high brightness lattices. Lattice solutions for achromatic and non-achromatic modes are easily found with lower emittance, smaller beta functions at the center of the insertion device straight sections and lower dispersion in nonzero dispersion straight sections compared with the previous lattice solutions. In this paper, the superperiod lattice with alternating high and low horizontal beta functions in long straight sections for the achromatic mode is studied using the multiobjective particle swarm optimization algorithm.

  20. Multiple endocrine neoplasia (MEN) II

    MedlinePlus

    Sipple syndrome; MEN II; Pheochromocytoma - MEN II; Thyroid cancer - pheochromocytoma; Parathyroid cancer - pheochromocytoma ... often not cancerous (benign). Medullary carcinoma of the thyroid is ... fatal cancer, but early diagnosis and surgery can often lead ...

  1. FIRE II - Cirrus Data Sets

    Atmospheric Science Data Center

    2013-07-26

    FIRE II - Cirrus Data Sets First ISCCP Regional Experiment (FIRE) II Cirrus was conducted in southeastern Kansas. It was designed to improve the ... stratocumulus systems, the radiative properties of these clouds and their interactions. Relevant Documents:  FIRE ...

  2. RADTRAN II user guide

    SciTech Connect

    Madsen, M M; Wilmot, E L; Taylor, J M

    1983-02-01

    RADTRAN II is a flexible analytical tool for calculating both the incident-free and accident impacts of transporting radioactive materials. The consequences from incident-free shipments are apportioned among eight population subgroups and can be calculated for several transport modes. The radiological accident risk (probability times consequence summed over all postulated accidents) is calculated in terms of early fatalities, early morbidities, latent cancer fatalities, genetic effects, and economic impacts. Groundshine, inhalation, direct exposure, resuspension, and cloudshine dose pathways are modeled to calculate the radiological health risks from accidents. Economic impacts are evaluated based on costs for emergency response, cleanup, evacuation, income loss, and land use. RADTRAN II can be applied to specific scenario evaluations (individual transport modes or specified combinations), to compare alternative modes or to evaluate generic radioactive material shipments. Unit-risk factors can easily be evaluated to aid in performing generic analyses when several options must be compared with the amount of travel as the only variable.

  3. Results from SAGE II

    SciTech Connect

    Nico, J.S.

    1994-10-01

    The Russian-American Gallium solar neutrino Experiment (SAGE) began the second phase of operation (SAGE II) in September of 1992. Monthly measurements of the integral flux of solar neutrinos have been made with 55 tonnes of gallium. The K-peak results of the first nine runs of SAGE II give a capture rate of 66 +18/-13 (stat) +5/-7 (sys) SNU. Combined with the SAGE I result of 73 +18/-16 (stat) +5/-7 (sys) SNU, the capture rate is 69 +11/-11 (stat) +5/-7 (sys) SNU. This represents only 52%--56% of the capture rate predicted by different Standard Solar Models.

  4. Ribosomal Database Project II

    DOE Data Explorer

    The Ribosomal Database Project (RDP) provides ribosome related data and services to the scientific community, including online data analysis and aligned and annotated Bacterial small-subunit 16S rRNA sequences. As of March 2008, RDP Release 10 is available and currently (August 2009) contains 1,074,075 aligned 16S rRNA sequences. Data that can be downloaded include zipped GenBank and FASTA alignment files, a histogram (in Excel) of the number of RDP sequences spanning each base position, data in the Functional Gene Pipeline Repository, and various user submitted data. The RDP-II website also provides numerous analysis tools. [From the RDP-II home page at http://rdp.cme.msu.edu/index.jsp]

  5. Cognitive Algorithms for Signal Processing

    DTIC Science & Technology

    2011-03-18

    Report AFRL-RY-HS-TR-2011-0013. Only fragments of this record survive extraction; they cite L. I. Perlovsky and R. Kozma (Eds.), Neurodynamics of Higher-Level Cognition and Consciousness, Springer-Verlag, 2007, and an abstract opening on processes in the mind: perception and cognition.

  6. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
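
    For the constant-kernel special case with a monomer-only initial condition, the discrete Smoluchowski equation has a known closed-form solution, which makes the kind of self-check the abstract describes easy to reproduce; a minimal explicit-Euler sketch (parameter values illustrative):

    ```python
    import numpy as np

    def smoluchowski_constant_kernel(n0=1.0, K0=1.0, kmax=50, dt=1e-3, t_end=2.0):
        """Integrate dn_k/dt = (K0/2) sum_{i+j=k} n_i n_j - K0 n_k sum_j n_j
        and compare with the analytic constant-kernel solution."""
        n = np.zeros(kmax + 1)          # n[k] = concentration of k-mers; n[0] unused
        n[1] = n0                       # monodisperse initial condition
        for _ in range(int(round(t_end / dt))):
            gain = np.zeros_like(n)
            for k in range(2, kmax + 1):
                for i in range(1, k):
                    gain[k] += 0.5 * K0 * n[i] * n[k - i]
            loss = K0 * n * n[1:].sum()
            n += dt * (gain - loss)
        f = t_end / (2.0 / (K0 * n0))   # dimensionless time t / tau
        k = np.arange(1, kmax + 1)
        analytic = np.concatenate(([0.0], n0 * f ** (k - 1) / (1 + f) ** (k + 1)))
        return n, analytic
    ```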

  7. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  8. Genetic Algorithms: A gentle introduction

    SciTech Connect

    Jong, K.D.

    1994-12-31

    Information is presented on genetic algorithms in outline form. The following topics are discussed: how are new samples generated, a genotypic viewpoint, a phenotypic viewpoint, an optimization viewpoint, an intuitive view, parameter optimization problems, evolving production rates, genetic programming, GAs and NNs, formal analysis, Lemmas and theorems, discrete Walsh transforms, deceptive problems, Markov chain analysis, and PAC learning analysis.

  9. Aerocapture Guidance Algorithm Comparison Campaign

    NASA Technical Reports Server (NTRS)

    Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric

    2002-01-01

    Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture. A CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA through a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit loads, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.

  10. Algorithms, complexity, and the sciences

    PubMed Central

    Papadimitriou, Christos

    2014-01-01

    Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382

  11. The minimal time detection algorithm

    NASA Technical Reports Server (NTRS)

    Kim, Sungwan

    1995-01-01

    An aerospace vehicle may operate throughout a wide range of flight environmental conditions that affect its dynamic characteristics. Even when the control design incorporates a degree of robustness, system parameters may drift enough to cause its performance to degrade below an acceptable level. The object of this paper is to develop a change detection algorithm so that we can build a highly adaptive control system applicable to aircraft systems. The idea is to detect system changes with minimal time delay. The algorithm developed is called the Minimal Time-Change Detection Algorithm (MT-CDA), which detects the instant of change as quickly as possible with false-alarm probability below a certain specified level. Simulation results for the aircraft lateral motion with a known or unknown change in control gain matrices, in the presence of doublet input, indicate that the algorithm works fairly well, as theory indicates, though there is difficulty in deciding the exact amount of change in some situations. One of MT-CDA's distinguishing properties is that its detection delay is superior to that of the Whiteness Test.

  12. Fission Reaction Event Yield Algorithm

    SciTech Connect

    Hagmann, Christian; Verbeke, Jerome; Vogt, Ramona; Randrup, Jorgen

    2016-05-31

    FREYA (Fission Reaction Event Yield Algorithm) is a code that simulates the decay of a fissionable nucleus at a specified excitation energy. In its present form, FREYA models spontaneous fission and neutron-induced fission up to 20 MeV. It includes the possibility of neutron emission from the nucleus prior to its fission (nth-chance fission).

  13. Associative Algorithms for Computational Creativity

    ERIC Educational Resources Information Center

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  14. Key Concepts in Informatics: Algorithm

    ERIC Educational Resources Information Center

    Szlávi, Péter; Zsakó, László

    2014-01-01

    "The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…

  15. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment is also presented, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.

  16. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  17. Listless zerotree image compression algorithm

    NASA Astrophysics Data System (ADS)

    Lian, Jing; Wang, Ke

    2006-09-01

    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve reconstructed image quality. Moreover, the lists in SPIHT are replaced by flag maps, and a lifting scheme is adopted to realize the wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.

  18. An Algorithm for Suffix Stripping

    ERIC Educational Resources Information Center

    Porter, M. F.

    2006-01-01

    Purpose: The automatic removal of suffixes from words in English is of particular interest in the field of information retrieval. This work was originally published in Program in 1980 and is republished as part of a series of articles commemorating the 40th anniversary of the journal. Design/methodology/approach: An algorithm for suffix stripping…
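
    The stemmer works by applying ordered suffix rewrite rules; a sketch of the first rule group (Step 1a) of Porter's 1980 algorithm:

    ```python
    # Step 1a of Porter's 1980 stemmer: ordered (suffix, replacement) rules;
    # the first matching rule fires and the rest are skipped.
    STEP_1A = [("sses", "ss"), ("ies", "i"), ("ss", "ss"), ("s", "")]

    def step_1a(word):
        for suffix, replacement in STEP_1A:
            if word.endswith(suffix):
                return word[: len(word) - len(suffix)] + replacement
        return word

    assert step_1a("caresses") == "caress" and step_1a("ponies") == "poni"
    assert step_1a("caress") == "caress" and step_1a("cats") == "cat"
    ```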

  19. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Information exchange over insecure networks needs to provide authentication and confidentiality for the database, a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.
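
    The secret-sharing and authentication layers are beyond a short sketch, but the plain ID3 splitting criterion that such extensions build on is simple; a minimal version:

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        total = len(labels)
        return -sum(c / total * math.log2(c / total) for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        """ID3 chooses the attribute with the largest reduction in label
        entropy after partitioning the examples on that attribute's values."""
        parts = {}
        for row, label in zip(rows, labels):
            parts.setdefault(row[attr], []).append(label)
        remainder = sum(len(p) / len(labels) * entropy(p) for p in parts.values())
        return entropy(labels) - remainder
    ```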

  20. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  1. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms have poor converging speed and thus are difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm only involves vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with the traditional CPU-realized Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and thus is more convenient for real-time object reconstruction, which is important given the fast-growing demands of information technology.
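
    The paper's kernels are in C/CUDA, but the iteration itself is compact enough to sketch in NumPy; a CPU sketch of linearized Bregman for basis pursuit (parameter values illustrative):

    ```python
    import numpy as np

    def linearized_bregman(A, b, mu=5.0, delta=1.0, n_iter=500):
        """Linearized Bregman iteration for basis pursuit (min ||u||_1 subject
        to Au = b): each step is two matrix-vector products plus soft
        thresholding, which is what makes the method GPU-friendly."""
        v = np.zeros(A.shape[1])
        u = np.zeros(A.shape[1])
        for _ in range(n_iter):
            v += A.T @ (b - A @ u)                                    # residual step
            u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink(v, mu)
        return u
    ```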

  2. Algorithms, modelling and VO₂ kinetics.

    PubMed

    Capelli, Carlo; Cautero, Michela; Pogliaghi, Silvia

    2011-03-01

    This article summarises the pros and cons of different algorithms developed for estimating breath-by-breath (B-by-B) alveolar O2 transfer (VO2A) in humans. VO2A is the difference between O2 uptake at the mouth and the change in alveolar O2 stores (ΔVO2s), which, for any given breath, is equal to the alveolar volume change at constant alveolar O2 fraction (F_A,i ΔV_A,i) plus the alveolar O2 fraction change at constant volume (V_A,i-1 (F_A,i - F_A,i-1)_O2), where V_A,i-1 is the alveolar volume at the beginning of a breath. Therefore, VO2A can be determined B-by-B provided that V_A,i-1 is: (a) set equal to the subject's functional residual capacity (algorithm of Auchincloss, A) or to zero; (b) measured (optoelectronic plethysmography, OEP); (c) selected according to a procedure that minimises B-by-B variability (algorithm of Busso and Robbins, BR). Alternatively, the respiratory cycle can be redefined as the time between equal FO2 values in two subsequent breaths (algorithm of Grønlund, G), making any assumption about V_A,i-1 unnecessary. All the above methods allow an unbiased estimate of VO2 at steady state, albeit with different precision. Yet the algorithms per se affect the parameters describing the B-by-B kinetics during exercise transitions. Among these approaches, BR and G, by increasing the signal-to-noise ratio of the measurements, reduce the number of exercise repetitions necessary to study VO2 kinetics compared with the A approach. OEP and G (though technically challenging and conceptually still debated), thanks to their ability to track ΔVO2s changes during the early phase of exercise transitions, appear rather promising for investigating B-by-B gas exchange.

  3. Operation Everest II

    PubMed Central

    2010-01-01

    Abstract. Wagner, Peter D. Operation Everest II. High Alt. Med. Biol. 11:111–119, 2010. In October 1985, 25 years ago, 8 subjects and 27 investigators met at the United States Army Research Institute for Environmental Medicine (USARIEM) altitude chambers in Natick, Massachusetts, to study human responses to a simulated 40-day ascent of Mt. Everest, termed Operation Everest II (OE II). Led by Charlie Houston, John Sutton, and Allen Cymerman, these investigators conducted a large number of investigations across several organ systems as the subjects were gradually decompressed over 40 days to the Everest summit equivalent. There the subjects reached a VO2max of 15.3 mL/kg/min (28% of initial sea-level values) at 100 W and arterial PO2 and PCO2 of ∼28 and ∼10 mm Hg, respectively. Cardiac function resisted hypoxia, but the lungs could not: ventilation-perfusion inequality and O2 diffusion limitation reduced arterial oxygenation considerably. Pulmonary vascular resistance was increased, was not reversible after short-term hyperoxia, but was reduced during exercise. Skeletal muscle atrophy occurred, but muscle structure and function were otherwise remarkably unaffected. Neurological deficits (cognition and memory) persisted after return to sea level, more so in those with high hypoxic ventilatory responsiveness, with motor function essentially spared. Nine percent body weight loss (despite an unrestricted diet) was mainly (67%) from muscle and exceeded the 2% predicted from energy intake-expenditure balance. Some immunological and lipid metabolic changes occurred, of uncertain

  4. Birkhoffian symplectic algorithms derived from Hamiltonian symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Xin-Lei, Kong; Hui-Bin, Wu; Feng-Xiang, Mei

    2016-01-01

    In this paper, we focus on the construction of structure-preserving algorithms for Birkhoffian systems, based on existing symplectic schemes for the Hamiltonian equations. The key of the method is to seek an invertible transformation which reduces the Birkhoffian equations to the Hamiltonian equations. When such a transformation exists, applying the corresponding inverse map to a symplectic discretization of the Hamiltonian equations yields difference schemes that are verified to be Birkhoffian symplectic for the original Birkhoffian equations. To illustrate the operation of the method, we construct several desirable algorithms for the linear damped oscillator and the single pendulum with linear dissipation, respectively. All of them exhibit excellent numerical behavior, especially in preserving conserved quantities. Project supported by the National Natural Science Foundation of China (Grant No. 11272050), the Excellent Young Teachers Program of North China University of Technology (Grant No. XN132), and the Construction Plan for Innovative Research Team of North China University of Technology (Grant No. XN129).

  5. A multi-objective optimization tool for the selection and placement of BMPs for pesticide control

    NASA Astrophysics Data System (ADS)

    Maringanti, C.; Chaubey, I.; Arabi, M.; Engel, B.

    2008-07-01

    Pesticides (particularly atrazine used in corn fields) are the foremost source of water contamination in many of the water bodies in the Midwestern corn belt, exceeding the 3 ppb MCL established by the U.S. EPA for drinking water. Best management practices (BMPs), such as buffer strips and land management practices, have been proven to effectively reduce pesticide pollution loads from agricultural areas. However, selecting and placing BMPs in watersheds to achieve an ecologically effective and economically feasible solution is a daunting task. BMP placement decisions under such complex conditions require a multi-objective optimization algorithm that searches for the best possible solution satisfying the given watershed management objectives. Genetic algorithms (GA) have been the most popular optimization algorithms for the BMP selection and placement problem. Most optimization models have also had a dynamic linkage with the water quality model, which increased the computation time considerably, restricting such models to field-scale or relatively small (11- or 14-digit HUC) watersheds. Moreover, most previous works have considered the two objectives individually during the optimization process by introducing a constraint on the other objective, thereby decreasing the degree of freedom available to find the solution. In this study, the optimization for atrazine reduction is performed by considering the two objectives simultaneously using a multi-objective genetic algorithm (NSGA-II). The limitation of the dynamic linkage with a distributed-parameter watershed model was overcome through a BMP tool, a database that stores the pollution reduction and cost information of the different BMPs under consideration. The model was used for the selection and placement of BMPs in the Wildcat Creek Watershed (located in Indiana) for atrazine reduction. The most ecologically effective solution from the model had an annual atrazine concentration reduction
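
    Since NSGA-II is the optimization engine here, a sketch of its core ranking step, fast non-dominated sorting, may help; this is the generic textbook procedure under a minimization convention, not code from the BMP tool.

    ```python
    def fast_nondominated_sort(objs):
        """Partition objective vectors (to be minimized) into Pareto fronts;
        returns fronts as lists of indices into objs."""
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
        n = len(objs)
        dominated = [[] for _ in range(n)]   # dominated[i]: solutions i dominates
        count = [0] * n                      # count[i]: how many solutions dominate i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if dominates(objs[i], objs[j]):
                    dominated[i].append(j)
                elif dominates(objs[j], objs[i]):
                    count[i] += 1
            if count[i] == 0:
                fronts[0].append(i)          # front 0: non-dominated solutions
        while fronts[-1]:
            nxt = []
            for i in fronts[-1]:
                for j in dominated[i]:
                    count[j] -= 1
                    if count[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
        return fronts[:-1]
    ```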

  6. Space complexity of estimation of distribution algorithms.

    PubMed

    Gao, Yong; Culberson, Joseph

    2005-01-01

    In this paper, we investigate the space complexity of the Estimation of Distribution Algorithms (EDAs), a class of sampling-based variants of the genetic algorithm. By analyzing the nature of EDAs, we identify criteria that characterize the space complexity of two typical implementation schemes of EDAs, the factorized distribution algorithm and Bayesian network-based algorithms. Using random additive functions as the prototype, we prove that the space complexity of the factorized distribution algorithm and Bayesian network-based algorithms is exponential in the problem size even if the optimization problem has a very sparse interaction structure.
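
    For concreteness, the simplest EDA is the univariate marginal distribution algorithm (UMDA), which replaces crossover and mutation with sampling from estimated bitwise marginals; a minimal sketch (the paper's analysis concerns the richer factorized and Bayesian-network models):

    ```python
    import numpy as np

    def umda(fitness, n_bits, pop=100, elite=50, gens=60, seed=0):
        """Sample from independent Bernoulli marginals, keep the fittest
        individuals, and re-estimate the marginals from them."""
        rng = np.random.default_rng(seed)
        p = np.full(n_bits, 0.5)                      # marginal P(bit = 1)
        for _ in range(gens):
            X = (rng.random((pop, n_bits)) < p).astype(int)
            scores = np.array([fitness(x) for x in X])
            best = X[np.argsort(-scores)[:elite]]
            p = best.mean(axis=0).clip(0.05, 0.95)    # cap to avoid fixation
        return p

    # example: umda(lambda x: x.sum(), n_bits=20) pushes all marginals toward 1
    ```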

  7. Higher-order force gradient symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of force gradient in addition to three evaluations of the force, when iterated to higher order, yielded algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.

  8. Why is Boris Algorithm So Good?

    SciTech Connect

    Qin, Hong, et al.

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
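
    For reference, the scheme itself is short; a textbook sketch of one Boris step (half electric kick, magnetic rotation via the standard t and s vectors, half electric kick):

    ```python
    import numpy as np

    def boris_push(x, v, E, B, q_over_m, dt):
        """One Boris step for a charged particle; x, v, E, B are 3-vectors."""
        v_minus = v + 0.5 * q_over_m * dt * E          # first half electric kick
        t = 0.5 * q_over_m * dt * B                    # rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation done
        v_new = v_plus + 0.5 * q_over_m * dt * E       # second half electric kick
        return x + dt * v_new, v_new
    ```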

  9. Treatment algorithms in refractory partial epilepsy.

    PubMed

    Jobst, Barbara C

    2009-09-01

    An algorithm is a "step-by-step procedure for solving a problem or accomplishing some end....in a finite number of steps." (Merriam-Webster, 2009). Medical algorithms are decision trees to help with diagnostic and therapeutic decisions. For the treatment of epilepsy there is no generally accepted treatment algorithm, as individual epilepsy centers follow different diagnostic and therapeutic guidelines. This article presents two algorithms to guide decisions in the treatment of refractory partial epilepsy. The treatment algorithm describes a stepwise diagnostic and therapeutic approach to intractable medial temporal and neocortical epilepsy. The surgical algorithm guides decisions in the surgical treatment of neocortical epilepsy.

  10. UCCM Phase II

    DTIC Science & Technology

    2010-09-01

    and possibly a polarization. We assume a return is about 1 KB of data. The radar has three imagery modes: SAR, ISAR, and HRR. All return a grayscale...same scene. ISAR is inverse SAR. This algorithm uses the Doppler histories of the scattering centers in the target area, so the radar is concerned not...fast. An ISAR image requires about 30 seconds of data and processing. HRR is high-resolution radar, and can be thought of as a strip of an ISAR

  11. A genetic algorithm for flexible molecular overlay and pharmacophore elucidation

    NASA Astrophysics Data System (ADS)

    Jones, Gareth; Willett, Peter; Glen, Robert C.

    1995-12-01

    A genetic algorithm (GA) has been developed for the superimposition of sets of flexible molecules. Molecules are represented by a chromosome that encodes angles of rotation about flexible bonds and mappings between hydrogen-bond donor proton, acceptor lone pair and ring centre features in pairs of molecules. The molecule with the smallest number of features in the data set is used as a template, onto which the remaining molecules are fitted with the objective of maximising structural equivalences. The fitness function of the GA is a weighted combination of: (i) the number and the similarity of the features that have been overlaid in this way; (ii) the volume integral of the overlay; and (iii) the van der Waals energy of the molecular conformations defined by the torsion angles encoded in the chromosomes. The algorithm has been applied to a number of pharmacophore elucidation problems, i.e., angiotensin II receptor antagonists, Leu-enkephalin and a hybrid morphine molecule, 5-HT1D agonists, benzodiazepine receptor ligands, 5-HT3 antagonists, dopamine D2 antagonists, dopamine reuptake blockers and FKBP12 ligands. The resulting pharmacophores are generated rapidly and are in good agreement with those derived from alternative means.

  12. Determining the Numerical Stability of Quantum Chemistry Algorithms.

    PubMed

    Knizia, Gerald; Li, Wenbin; Simon, Sven; Werner, Hans-Joachim

    2011-08-09

    We present a simple, broadly applicable method for determining the numerical properties of quantum chemistry algorithms. The method deliberately introduces random numerical noise into computations, which is of the same order of magnitude as the floating point precision. Accordingly, repeated runs of an algorithm give slightly different results, which can be analyzed statistically to obtain precise estimates of its numerical stability. This noise is produced by automatic code injection into regular compiler output, so that no substantial programming effort is required, only a recompilation of the affected program sections. The method is applied to investigate: (i) the numerical stability of the three-center Obara-Saika integral evaluation scheme for high angular momenta, (ii) if coupled cluster perturbative triples can be evaluated with single precision arithmetic, (iii) how to implement the density fitting approximation in Møller-Plesset perturbation theory (MP2) most accurately, and (iv) which parts of density fitted MP2 can be safely evaluated with single precision arithmetic. In the integral case, we find a numerical instability in an equation that is used in almost all integral programs. Due to the results of (ii) and (iv), we conjecture that single precision arithmetic can be applied whenever a calculation is done in an orthogonal basis set and excessively long linear sums are avoided.
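
    The paper injects noise at the compiler level via automatic code injection; a much cruder input-level imitation of the statistical idea, for flavor only, is easy to sketch:

    ```python
    import numpy as np

    def jitter(x, rel=2.0 ** -52, rng=None):
        """Multiply by (1 + noise) with noise on the order of double precision."""
        rng = rng or np.random.default_rng()
        return x * (1.0 + rel * rng.uniform(-1.0, 1.0, np.shape(x)))

    def stability_estimate(algorithm, x, runs=100):
        """The spread of outputs over repeated noisy runs estimates how many
        digits the algorithm actually delivers."""
        results = np.array([algorithm(jitter(x)) for _ in range(runs)])
        return results.mean(axis=0), results.std(axis=0)
    ```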

  13. Algorithmic methods in diffraction microscopy

    NASA Astrophysics Data System (ADS)

    Thibault, Pierre

    Recent diffraction imaging techniques use properties of coherent sources (most notably x-rays and electrons) to transfer a portion of the imaging task to computer algorithms. "Diffraction microscopy" is a method which consists in reconstructing the image of a specimen from its diffraction pattern. Because only the amplitude of a wavefield incident on a detector is measured, reconstruction of the image entails recovering the lost phases. This extension of the "phase problem" commonly met in crystallography is solved only if additional information is available. The main topic of this thesis is the development of algorithmic techniques in diffraction microscopy. In addition to introducing new methods, it is meant to be a review of the algorithmic aspects of the field of diffractive imaging. An overview of the scattering approximations used in the interpretation of diffraction datasets is first given, as well as a numerical propagation tool useful in conditions where known approximations fail. Concepts central to diffraction microscopy---such as oversampling---are then introduced and other similar imaging techniques described. A complete description of iterative reconstruction algorithms follows, with a special emphasis on the difference map, the algorithm used in this thesis. The formalism, based on constraint sets and projection onto these sets, is then defined and explained. Simple projections commonly used in diffraction imaging are then described. The various ways experimental realities can affect reconstruction methods are then enumerated. Among the diverse sources of algorithmic difficulties, one finds that noise, missing data and partial coherence are typically the most important. Other related difficulties discussed are the detrimental effects of crystalline domains in a specimen, and the convergence problems occurring when the support of a complex-valued specimen is not well known. The last part of this thesis presents reconstruction results; an
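
    A minimal example of the projection machinery described here is error reduction, a simpler relative of the difference map: alternate projection onto the measured Fourier moduli and onto the known support (a generic sketch, not the thesis's algorithm):

    ```python
    import numpy as np

    def error_reduction(magnitude, support, n_iter=200, seed=0):
        """Alternate Fourier-magnitude and support projections to recover
        the lost phases from measured diffraction moduli."""
        rng = np.random.default_rng(seed)
        x = rng.random(magnitude.shape) * support
        for _ in range(n_iter):
            F = np.fft.fft2(x)
            F = magnitude * np.exp(1j * np.angle(F))   # project onto measured moduli
            x = np.fft.ifft2(F).real * support         # project onto the support
            np.clip(x, 0.0, None, out=x)               # enforce non-negativity
        return x
    ```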

  14. AWIPS II Extended - Data Delivery

    NASA Astrophysics Data System (ADS)

    Henry, R.; Schotz, S.; Calkins, J.; Gockel, B.; Ortiz, C.; Peter, R.

    2012-12-01

    AWIPS II Technology Infusion is a multiphase program. The first phase is the migration of the Weather Forecast Offices (WFOs) and River Forecast Centers (RFCs) AWIPS I capabilities into a Service Oriented Architecture (SOA), referred to as AWIPS II. AWIPS II is currently being deployed to Operational Test and Evaluation (OTE) and other select deployment sites. The subsequent phases of AWIPS Technology Infusion, known as AWIPS II Extended, include several projects that will improve the technological capabilities of AWIPS II in order to enhance the NWS enterprise and improve services to partners. This paper summarizes the AWIPS II Extended - Data Delivery project and reports on its status. Data Delivery enables AWIPS II users to discover, subscribe to, and access web-enabled data provider systems, including the capability to subset datasets by space, time and parameter.

  15. Photometric Supernova Cosmology with BEAMS and SDSS-II

    NASA Astrophysics Data System (ADS)

    Hlozek, Renée; Kunz, Martin; Bassett, Bruce; Smith, Mat; Newling, James; Varughese, Melvin; Kessler, Rick; Bernstein, Joseph P.; Campbell, Heather; Dilday, Ben; Falck, Bridget; Frieman, Joshua; Kuhlmann, Steve; Lampeitl, Hubert; Marriner, John; Nichol, Robert C.; Riess, Adam G.; Sako, Masao; Schneider, Donald P.

    2012-06-01

    Supernova (SN) cosmology without spectroscopic confirmation is an exciting new frontier, which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for using data from multiple species in statistical inference when one has the probability that each data point belongs to a given species, corresponding in this context to different types of SNe with their probabilities derived from their multi-band light curves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 SNe, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric SN cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples that have been cut using typical selection criteria. The latter typically either are biased due to contamination or have significantly larger contours in the cosmological parameters due to small data sets. We then apply BEAMS to the 792 SDSS-II photometric SNe with host spectroscopic redshifts. In this case, BEAMS reduces the area of the Ωm, ΩΛ contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 SNe). In the case of flatness, the constraints obtained on the matter density applying BEAMS to the photometric SDSS-II data are Ωm^BEAMS = 0.194 ± 0.07. This illustrates the potential power of BEAMS for future large photometric SN surveys such as Large Synoptic Survey Telescope.

  16. PHOTOMETRIC SUPERNOVA COSMOLOGY WITH BEAMS AND SDSS-II

    SciTech Connect

    Hlozek, Renee; Kunz, Martin; Bassett, Bruce; Smith, Mat; Newling, James; Varughese, Melvin; Kessler, Rick; Frieman, Joshua; Bernstein, Joseph P.; Kuhlmann, Steve; Marriner, John; Campbell, Heather; Lampeitl, Hubert; Nichol, Robert C.; Dilday, Ben; Falck, Bridget; Riess, Adam G.; Sako, Masao; Schneider, Donald P.

    2012-06-20

    Supernova (SN) cosmology without spectroscopic confirmation is an exciting new frontier, which we address here with the Bayesian Estimation Applied to Multiple Species (BEAMS) algorithm and the full three years of data from the Sloan Digital Sky Survey II Supernova Survey (SDSS-II SN). BEAMS is a Bayesian framework for using data from multiple species in statistical inference when one has the probability that each data point belongs to a given species, corresponding in this context to different types of SNe with their probabilities derived from their multi-band light curves. We run the BEAMS algorithm on both Gaussian and more realistic SNANA simulations with of order 10^4 SNe, testing the algorithm against various pitfalls one might expect in the new and somewhat uncharted territory of photometric SN cosmology. We compare the performance of BEAMS to that of both mock spectroscopic surveys and photometric samples that have been cut using typical selection criteria. The latter typically either are biased due to contamination or have significantly larger contours in the cosmological parameters due to small data sets. We then apply BEAMS to the 792 SDSS-II photometric SNe with host spectroscopic redshifts. In this case, BEAMS reduces the area of the Ωm, ΩΛ contours by a factor of three relative to the case where only spectroscopically confirmed data are used (297 SNe). In the case of flatness, the constraints obtained on the matter density applying BEAMS to the photometric SDSS-II data are Ωm^BEAMS = 0.194 ± 0.07. This illustrates the potential power of BEAMS for future large photometric SN surveys such as Large Synoptic Survey Telescope.

  17. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method based on Web-based tools is presented to design optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since molecular beacon performance is based on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public domain algorithm described here may be usefully employed to aid in molecular beacon design.

  18. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  19. Algorithms for intravenous insulin delivery.

    PubMed

    Braithwaite, Susan S; Clement, Stephen

    2008-08-01

    This review aims to classify algorithms for intravenous insulin infusion according to design. Essential input data include the current blood glucose (BG(current)), the previous blood glucose (BG(previous)), the test time of BG(current) (test time(current)), the test time of BG(previous) (test time(previous)), and the previous insulin infusion rate (IR(previous)). Output data consist of the next insulin infusion rate (IR(next)) and next test time. The classification differentiates between "IR" and "MR" algorithm types, both defined as a rule for assigning an insulin infusion rate (IR), having a glycemic target. Both types are capable of assigning the IR for the next iteration of the algorithm (IR(next)) as an increasing function of BG(current), IR(previous), and rate-of-change of BG with respect to time, each treated as an independent variable. Algorithms of the IR type directly seek to define IR(next) as an incremental adjustment to IR(previous). At test time(current), under an IR algorithm the differences in values of IR(next) that might be assigned depending upon the value of BG(current) are not necessarily continuously dependent upon, proportionate to, or commensurate with either the IR(previous) or the rate-of-change of BG. Algorithms of the MR type create a family of IR functions of BG differing according to maintenance rate (MR), each being an iso-MR curve. The change of IR(next) with respect to BG(current) is a strictly increasing function of MR. At test time(current), algorithms of the MR type use IR(previous) and the rate-of-change of BG to define the MR, multiplier, or column assignment, which will be used for patient assignment to the right iso-MR curve and as precedent for IR(next). Bolus insulin therapy is especially effective when used in proportion to carbohydrate load to cover anticipated incremental transitory enteral or parenteral carbohydrate exposure. Specific distinguishing algorithm design features and choice of parameters may be important to

  20. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
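
    A generic sketch of the MUSIC pseudospectrum computation (eigendecomposition of the sample covariance, projection of candidate steering vectors onto the noise subspace); the array geometry and steering vectors are left abstract, and the paper's two-stage strong/weak scatterer refinement is not shown:

    ```python
    import numpy as np

    def music_spectrum(R, steering_vectors, n_sources):
        """Score each candidate steering vector by its distance from the
        noise subspace of the covariance matrix R; peaks mark the sources."""
        eigvals, eigvecs = np.linalg.eigh(R)              # ascending eigenvalues
        noise = eigvecs[:, : R.shape[0] - n_sources]      # noise-subspace basis
        spectrum = []
        for a in steering_vectors:                        # one vector per test point
            proj = noise.conj().T @ a
            spectrum.append(1.0 / np.real(np.vdot(proj, proj)))
        return np.array(spectrum)
    ```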

  1. Innovations in Lattice QCD Algorithms

    SciTech Connect

    Konstantinos Orginos

    2006-06-25

    Lattice QCD calculations demand a substantial amount of computing power in order to achieve the high precision results needed to better understand the nature of strong interactions, assist experiment to discover new physics, and predict the behavior of a diverse set of physical systems ranging from the proton itself to astrophysical objects such as neutron stars. However, computer power alone is clearly not enough to tackle the calculations we need to be doing today. A steady stream of recent algorithmic developments has made an important impact on the kinds of calculations we can currently perform. In this talk I am reviewing these algorithms and their impact on the nature of lattice QCD calculations performed today.

  2. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing and provides a rich product set of parameterized line-detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trades made for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.

  3. Delta II Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Final preparations for lift off of the DELTA II Mars Pathfinder Rocket are shown. Activities include loading the liquid oxygen, completing the construction of the Rover, and placing the Rover into the Lander. After the countdown, important visual events include the launch of the Delta Rocket, burnout and separation of the three Solid Rocket Boosters, and the main engine cutoff. The cutoff of the main engine marks the beginning of the second stage engine. After the completion of the second stage, the third stage engine ignites and then cuts off. Once the third stage engine cuts off spacecraft separation occurs.

  4. Run II luminosity progress

    SciTech Connect

    Gollwitzer, K.; /Fermilab

    2007-06-01

    The Fermilab Tevatron Collider Run II program continues at the energy and luminosity frontier of high energy particle physics. Over 3 fb^-1 of integrated luminosity has been delivered to each of the collider experiments, CDF and D0. Upgrades and improvements to the production and collection of antiprotons in the Antiproton Source have led to an increased number of particles stored in the Recycler. Electron cooling and associated improvements have helped make a brighter antiproton beam at collision. Tevatron improvements to handle the increased number of particles and the beam lifetimes have resulted in an increase in luminosity.

  5. Introductory Students, Conceptual Understanding, and Algorithmic Success.

    ERIC Educational Resources Information Center

    Pushkin, David B.

    1998-01-01

    Addresses the distinction between conceptual and algorithmic learning and the clarification of what is meant by a second-tier student. Explores why novice learners in chemistry and physics are able to apply algorithms without significant conceptual understanding. (DDR)

  6. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm that maximizes coherence, based on existing SAR imaging algorithms. The basic idea of conventional SAR imaging is that the output signal attains the maximum signal-to-noise ratio (SNR) when optimal imaging parameters are used. A traditional imaging algorithm can achieve the best focusing effect, but it introduces decoherence in the subsequent interferometric processing. The algorithm proposed in this paper instead applies consistent imaging parameters to the SAR echoes during focusing. Although the SNR of the output signal is reduced slightly, coherence is largely preserved, and an interferogram of high quality is finally obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to test this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  7. Teaching Multiplication Algorithms from Other Cultures

    ERIC Educational Resources Information Center

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.
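
    As an example of one of the methods mentioned, the Russian algorithm reduces multiplication to halving, doubling, and addition; a short sketch:

    ```python
    def russian_peasant(a, b):
        """Russian peasant multiplication: halve one factor, double the other,
        and sum the doubled values wherever the halved factor is odd."""
        total = 0
        while a > 0:
            if a % 2 == 1:
                total += b
            a //= 2
            b *= 2
        return total

    assert russian_peasant(18, 23) == 414
    ```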

  8. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  9. Algorithms for Automated DNA Assembly

    DTIC Science & Technology

    2010-01-01

    polyketide synthase gene cluster. Proc. Natl Acad. Sci. USA, 101, 15573–15578. 16. Shetty,R.P., Endy,D. and Knight,T.F. Jr (2008) Engineering BioBrick vectors...correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and...to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with

  10. Algorithmic deformation of matrix factorisations

    NASA Astrophysics Data System (ADS)

    Carqueville, Nils; Dowdy, Laura; Recknagel, Andreas

    2012-04-01

    Branes and defects in topological Landau-Ginzburg models are described by matrix factorisations. We revisit the problem of deforming them and discuss various deformation methods as well as their relations. We have implemented these algorithms and apply them to several examples. Apart from explicit results in concrete cases, this leads to a novel way to generate new matrix factorisations via nilpotent substitutions, and to criteria whether boundary obstructions can be lifted by bulk deformations.

  11. Consensus Algorithms Over Fading Channels

    DTIC Science & Technology

    2010-10-01

    ... studying the effect of fading and collisions on the performance of wireless consensus gossiping, and comparing its cost (measured in terms of number of ...). The weights are not assumed to be symmetric under A2. There has been a resurgence of interest in characterizing consensus and gossip algorithms ... One approach first builds a spanning tree and then distributes the consensus value with a finite number of exchanges; the price paid is clearly that of finding the appropriate routing ...

  12. Numerical Algorithms and Parallel Tasking.

    DTIC Science & Technology

    1984-07-01

    Principal Investigator: Virginia Klema; Research Staff: George Cybenko and Elizabeth Ducot. During the period May 15, 1983 through May 14, 1984, Virginia Klema and Elizabeth Ducot were supported for four months, and George Cybenko was supported for one month. During this time, system ... algorithms or applications is the responsibility of the user. Virginia Klema and Elizabeth Ducot presented a description of the concurrent computing ...

  13. Network Games and Approximation Algorithms

    DTIC Science & Technology

    2008-01-03

    I also spent time during the last three years writing a textbook on Algorithm Design (with Jon Kleinberg) that has now been adopted by a number of ... We consider the Minimum-Size Bounded-Capacity Cut (MSBCC) problem, in which we are given a graph with an identified source and seek to find a cut minimizing the number ... Distributed Computing (Special Issue PODC 05), Volume 19, Number 4, 2007, 255-266.

  14. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    ... and A. Ekert and C. Macchiavello and M. Mosca, quant-ph/0609160v1. "Phase map decompositions for unitaries," Niel de Beaudrap, Vincent Danos, Elham ... "Quantum Algorithms and Complexity," M. Mosca, Proceedings of NATO ASI Quantum Computation and Information 2005, Chania, Crete, Greece, IOS Press (2006), in press. "Quantum Cellular Automata and Single Spin Measurement," C. Perez, D. Cheung, M. Mosca, P. Cappellaro, D. Cory, Proceedings of Asian Conference on ...

  15. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    Technical report TR-1180, "Parallel Algorithms for Image Analysis," by Azriel Rosenfeld (grant AFOSR-77-3271). Keywords: image processing; image analysis; parallel processing; cellular computers.

  16. Halftoning and Image Processing Algorithms

    DTIC Science & Technology

    1999-02-01

    ... screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone image quality. Our goals in this research were to advance the understanding in image science of our new halftone algorithm and to contribute to ... image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our ...

  17. Principles for Developing Algorithmic Instruction.

    DTIC Science & Technology

    1978-12-01

    ... information-processing theories to test their applicability with instruction directed by learning algorithms. A version of a logical, or familiar, and a ... The intent of our research was to borrow from information-processing theory factors which are known to affect learning in a predictable manner and to apply ... learning studies where processing theories are tested by minute performance or latency differences. It is not surprising that differences are seldom found.

  18. Global Positioning System Navigation Algorithms

    DTIC Science & Technology

    1977-05-01

    Historical remarks on navigation: in Greek mythology, Odysseus sailed safely by the Sirens only to encounter the monsters Scylla and Charybdis ... Bibliography: 1. Pinsent, John. Greek Mythology. Paul Hamlyn, London, 1969. 2. Kline, Morris. Mathematical Thought from Ancient to ... The Global Positioning System (GPS) will be a constellation of ...

  19. Efficient GPS Position Determination Algorithms

    DTIC Science & Technology

    2007-06-01

    ... Geometric Dilution of Precision (GDOP) conditions. The novel differential GPS algorithm for a network of users that has been developed in this research uses a ... performance is achieved, even under high GDOP conditions. The second part of this research investigates a ... A poor spread of satellite geometry with respect to the receiver produces high GDOP, which can adversely affect GPS position solutions [1]. Four ...

  20. Algorithms for optimal redundancy allocation

    SciTech Connect

    Vandenkieboom, J.; Youngblood, R.

    1993-01-01

    Heuristic and exact methods for solving the redundancy allocation problem are compared to an approach based on genetic algorithms. The various methods are applied to the bridge problem, which has been used as a benchmark in earlier work on optimization methods. Comparisons are presented in terms of the best configuration found by each method and the computational effort necessary to find it.

  1. SAGE II aerosol data validation - Comparative studies of SAGE II and SAM II data sets

    NASA Technical Reports Server (NTRS)

    Yue, G. K.; Mccormick, M. P.; Chu, W. P.; Wang, P. H.; Osborn, M. T.

    1989-01-01

    Data from the Stratospheric Aerosol and Gas Experiment (SAGE II) satellite are compared with data from the Stratospheric Aerosol Measurement (SAM II) satellite. Both experiments produce aerosol extinction profiles by measuring the attenuation of solar radiation during each sunrise and sunset observed by the satellite. SAGE II obtains profiles at 1.02 microns and three shorter wavelengths, whereas SAM II measures in only one radiometric channel, at 1.0 micron. It is found that the differences between the two data sets are generally within the error bars associated with each measurement. In addition, the sunrise and sunset data from SAGE II are analyzed.

  2. An improved sink particle algorithm for SPH simulations

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Walch, S.; Whitworth, A. P.

    2013-04-01

    Numerical simulations of star formation frequently rely on the implementation of sink particles: (a) to avoid expending computational resource on the detailed internal physics of individual collapsing protostars, (b) to derive mass functions, binary statistics and clustering kinematics (and hence to make comparisons with observation), and (c) to model radiative and mechanical feedback; sink particles are also used in other contexts, for example to represent accreting black holes in galactic nuclei. We present a new algorithm for creating and evolving sink particles in smoothed particle hydrodynamic (SPH) simulations, which appears to represent a significant improvement over existing algorithms - particularly in situations where sinks are introduced after the gas has become optically thick to its own cooling radiation and started to heat up by adiabatic compression. (i) It avoids spurious creation of sinks. (ii) It regulates the accretion of matter on to a sink so as to mitigate non-physical perturbations in the vicinity of the sink. (iii) Sinks accrete matter, but the associated angular momentum is transferred back to the surrounding medium. With the new algorithm - and modulo the need to invoke sufficient resolution to capture the physics preceding sink formation - the properties of sinks formed in simulations are essentially independent of the user-defined parameters of sink creation, or the number of SPH particles used.

  3. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  4. Algorithms and Requirements for Measuring Network Bandwidth

    SciTech Connect

    Jin, Guojun

    2002-12-08

    This report presents new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thus estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and addresses conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The report also discusses a number of important terminologies and issues for network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.

  5. Efficient Algorithm for Rectangular Spiral Search

    NASA Technical Reports Server (NTRS)

    Brugarolas, Paul; Breckenridge, William

    2008-01-01

    An algorithm generates grid coordinates for a computationally efficient spiral search pattern covering an uncertain rectangular area spanned by a coordinate grid. The algorithm does not require that the grid be fixed; the algorithm can search indefinitely, expanding the grid and spiral, as needed, until the target of the search is found. The algorithm also does not require memory of coordinates of previous points on the spiral to generate the current point on the spiral.
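
    As a concrete illustration of the idea, here is a minimal Python sketch of one way to generate an ever-expanding square-spiral scan over a coordinate grid, with leg lengths 1, 1, 2, 2, 3, 3, and so on. This is not the NASA implementation; in particular, the abstract's algorithm avoids even the small amount of walk state this generator keeps.

```python
from itertools import islice

def spiral_coordinates():
    """Yield (x, y) grid coordinates walking an expanding square spiral
    outward from the origin.  The generator never terminates, so a caller
    can keep expanding the search until the target is found."""
    x, y = 0, 0
    yield x, y
    step = 1
    while True:
        # Two legs per step length: one horizontal, one vertical.
        for dx, dy in ((1, 0), (0, 1)) if step % 2 else ((-1, 0), (0, -1)):
            for _ in range(step):
                x += dx
                y += dy
                yield x, y
        step += 1

print(list(islice(spiral_coordinates(), 10)))
# [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), ...]
```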

  6. The Cartan algorithm in five dimensions

    NASA Astrophysics Data System (ADS)

    McNutt, D. D.; Coley, A. A.; Forget, A.

    2017-03-01

    In this paper, we introduce an algorithm to determine the equivalence of five dimensional spacetimes, which generalizes the Karlhede algorithm for four dimensional general relativity. As an alternative to the Petrov type classification, we employ the alignment classification to algebraically classify the Weyl tensor. To illustrate the algorithm, we discuss three examples: the singly rotating Myers-Perry solution, the Kerr (Anti-) de Sitter solution, and the rotating black ring solution. We briefly discuss some applications of the Cartan algorithm in five dimensions.

  7. Improved LMS algorithm for adaptive beamforming

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    Two adaptive algorithms which make use of all the available samples to estimate the required gradient are proposed and studied. The first algorithm is referred to as the recursive LMS (least mean squares) and is applicable to a general array. The second algorithm is referred to as the improved LMS algorithm and exploits the Toeplitz structure of the ACM (array correlation matrix); it can be used only for an equispaced linear array.
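
    For context, the baseline that both proposed variants refine is the classic single-sample LMS weight update. The sketch below is a generic complex LMS beamformer in NumPy, not the paper's recursive or improved algorithms; the reference signal d and the step size mu belong to the toy setup only.

```python
import numpy as np

def lms_beamformer(X, d, mu=0.01):
    """Standard complex LMS baseline: per-snapshot stochastic-gradient
    update of the array weights w so that y = w^H x tracks the reference d.
    The recursive/improved variants in the record above replace this
    single-sample gradient with estimates that use all available samples."""
    n_ant, n_snap = X.shape
    w = np.zeros(n_ant, dtype=complex)
    for k in range(n_snap):
        x = X[:, k]
        e = d[k] - np.vdot(w, x)     # error, with y = w^H x
        w += mu * np.conj(e) * x     # stochastic-gradient step
    return w

# Toy usage: 4-element array, hypothetical known reference during training.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 2000)) + 1j * rng.normal(size=(4, 2000))
d = X[0] + 0.5 * X[1]
print(np.round(lms_beamformer(X, d, mu=0.005), 2))  # ~[1, 0.5, 0, 0]
```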

  8. Storage capacity of the Tilinglike Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Buhot, Arnaud; Gordon, Mirta B.

    2001-02-01

    The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered.

  9. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  10. Active Processor Scheduling Using Evolutionary Algorithms

    DTIC Science & Technology

    2002-12-01

    Active Processor Scheduling Using Evolutionary Algorithms. I. Introduction: A distributed system offers the ability to run applications across ... calculations are made. This model is sometimes referred to as a form of the island model of evolutionary computation, because each population is evolved ... Evolutionary Algorithms for Solving Multi-Objective Problems. Genetic Algorithms and Evolutionary Computation, New York: Kluwer Academic Publishers, 2002.

  11. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  12. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  13. An algorithm on distributed mining association rules

    NASA Astrophysics Data System (ADS)

    Xu, Fan

    2005-12-01

    With the rapid development of the Internet/Intranet, distributed databases have become a broadly used environment in various areas, and it is a critical task to mine association rules in distributed databases. The algorithms for distributed mining of association rules can be divided into two classes: DD algorithms and CD algorithms. A DD algorithm focuses on optimizing the data partition so as to enhance efficiency. A CD algorithm, on the other hand, considers a setting where the data is arbitrarily partitioned horizontally among the parties to begin with, and focuses on parallelizing the communication. A DD algorithm is not always applicable, however: at the time the data is generated, it is often already partitioned, and in many cases it cannot be gathered and repartitioned, for reasons of security and secrecy, transmission cost, or sheer efficiency. A CD algorithm may therefore be a more appealing solution for systems which are naturally distributed over large expanses, such as stock exchange and credit card systems. The FDM algorithm provides an enhancement of the CD algorithm; however, the CD and FDM algorithms are both based on net structures and execute on non-shareable resources. In practical applications, distributed databases are often star-structured. This paper proposes an algorithm based on star-structured networks, which are more practical in applications, have lower maintenance costs, and are easier to construct. In addition, the algorithm provides high efficiency in communication and good extensibility in parallel computation.
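
    To make the CD-style idea concrete, the following Python sketch simulates one round of count distribution on a star network: each site counts candidate k-itemsets in its local database, and a central node sums the counts and applies the global support threshold. The function names and the two-site toy data are hypothetical; a real miner would iterate over growing itemset sizes in the style of Apriori.

```python
from collections import Counter
from itertools import combinations

def local_counts(transactions, k):
    """Count all k-item candidate itemsets in one site's local database."""
    counts = Counter()
    for t in transactions:
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1
    return counts

def star_mine(sites, k, min_support):
    """Central node of a star topology: sum the local counts from each
    site and keep itemsets whose global support meets the threshold."""
    total = Counter()
    for transactions in sites:
        total.update(local_counts(transactions, k))  # one message per site
    n = sum(len(s) for s in sites)
    return {i: c for i, c in total.items() if c / n >= min_support}

# Toy example: two sites, frequent 2-itemsets at 50% support.
sites = [[{"a", "b", "c"}, {"a", "b"}], [{"a", "b"}, {"b", "c"}]]
print(star_mine(sites, 2, 0.5))  # {('a', 'b'): 3, ('b', 'c'): 2}
```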

  14. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  15. The Belle II Detector

    NASA Astrophysics Data System (ADS)

    Piilonen, Leo; Belle II Collaboration

    2017-01-01

    The Belle II detector is now under construction at the KEK laboratory in Japan. This project represents a substantial upgrade of the Belle detector (and the KEKB accelerator). The Belle II experiment will record 50 ab-1 of data, a factor of 50 more than that recorded by Belle. This large data set, combined with the low backgrounds and high trigger efficiencies characteristic of an e+e- experiment, should provide unprecedented sensitivity to new physics signatures in B and D meson decays, and in τ lepton decays. The detector comprises many forefront subsystems. The vertex detector consists of two inner layers of silicon DEPFET pixels and four outer layers of double-sided silicon strips. These layers surround a beryllium beam pipe having a radius of only 10 mm. Outside of the vertex detector is a large-radius, small-cell drift chamber, an ``imaging time-of-propagation'' detector based on Cerenkov radiation for particle identification, and scintillating fibers and resistive plate chambers used to identify muons. The detector will begin commissioning in 2017.

  16. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to the simulator driver the most accurate human sensation compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper presents the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator drivers. One of the main limitations of classical washout filters is that they are tuned by the worst-case scenario method, which is based on trial and error and affected by the experience of drivers and programmers; this is the most significant obstacle to full motion platform utilisation. It leads to an inflexible structure and the production of false cues, and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues, and the impact of the different parameters of classical washout filters on those cues, remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and the correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software package. The results show increased performance in terms of human sensation and reference shape tracking, exploiting the platform more efficiently without reaching

  17. AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations - Part 1: Algorithm description

    NASA Astrophysics Data System (ADS)

    Vanhellemont, Filip; Mateshvili, Nina; Blanot, Laurent; Étienne Robert, Charles; Bingen, Christine; Sofieva, Viktoria; Dalaudier, Francis; Tétard, Cédric; Fussen, Didier; Dekemper, Emmanuel; Kyrölä, Erkki; Laine, Marko; Tamminen, Johanna; Zehner, Claus

    2016-09-01

    The GOMOS instrument on Envisat has successfully demonstrated that a UV-Vis-NIR spaceborne stellar occultation instrument is capable of delivering quality data on the gaseous and particulate composition of Earth's atmosphere. Still, some problems related to data inversion remained to be examined. In the past, it was found that the aerosol extinction profile retrievals in the upper troposphere and stratosphere are of good quality at a reference wavelength of 500 nm but suffer from anomalous, retrieval-related perturbations at other wavelengths. Identification of algorithmic problems and subsequent improvement was therefore necessary. This work has been carried out; the resulting AerGOM Level 2 retrieval algorithm together with the first data version AerGOMv1.0 forms the subject of this paper. The AerGOM algorithm differs from the standard GOMOS IPF processor in a number of important ways: more accurate physical laws have been implemented, all retrieval-related covariances are taken into account, and the aerosol extinction spectral model is strongly improved. Retrieval examples demonstrate that the previously observed profile perturbations have disappeared, and the obtained extinction spectra look in general more consistent. We present a detailed validation study in a companion paper; here, to give a first idea of the data quality, a worst-case comparison at 386 nm shows SAGE II-AerGOM correlation coefficients that are up to 1 order of magnitude larger than the ones obtained with the GOMOS IPFv6.01 data set.

  18. Modeling fluid dynamics on type II quantum computers

    NASA Astrophysics Data System (ADS)

    Scoville, James; Weeks, David; Yepez, Jeffrey

    2006-03-01

    A quantum algorithm is presented for modeling the time evolution of density and flow fields governed by classical equations, such as the diffusion equation, the nonlinear Burgers equation, and the damped wave equation. The algorithm is intended to run on a type-II quantum computer, a parallel quantum computer consisting of a lattice of small type-I quantum computers undergoing unitary evolution and interacting via information interchanges represented by orthogonal matrices. Information is effectively transferred between adjacent quantum computers over classical communications channels because of controlled state demolition following local quantum mechanical qubit-qubit interactions within each quantum computer. The type-II quantum algorithm presented in this paper describes a methodology for generating quantum logic operations as a generalization of classical operations associated with finite-point group symmetries. The quantum mechanical evolution of multiple qubits within each node is described. A proof is presented that the parallel quantum system obeys a finite-difference quantum Boltzmann equation at the mesoscopic scale, leading in turn to various classical linear and nonlinear effective field theories at the macroscopic scale, depending on the details of the local qubit-qubit interactions.

  19. Empirical Studies of the Value of Algorithm Animation in Algorithm Understanding

    DTIC Science & Technology

    1993-08-01

    A series of studies is presented using algorithm animation to teach computer algorithms. These studies are organized into three components: eliciting ... lecture with experimenter-prepared data sets. This work has implications for the design and use of animated algorithms in teaching computer algorithms ...

  20. Application of a multi-objective optimization method to provide least cost alternatives for NPS pollution control.

    PubMed

    Maringanti, Chetan; Chaubey, Indrajeet; Arabi, Mazdak; Engel, Bernard

    2011-09-01

    Nonpoint source (NPS) pollutants such as phosphorus, nitrogen, sediment, and pesticides are the foremost sources of water contamination in many of the water bodies of Midwestern agricultural watersheds. This problem is expected to increase in the future with the increasing demand to provide corn as grain or stover for biofuel production. Best management practices (BMPs) have been proven to effectively reduce NPS pollutant loads from agricultural areas. However, in a watershed with multiple farms and multiple BMPs feasible for implementation, it becomes a daunting task to choose the right combination of BMPs that provides maximum pollution reduction for the least implementation cost. Multi-objective algorithms capable of searching over a large number of solutions are required to meet the given watershed management objectives. Genetic algorithms have been the most popular optimization algorithms for BMP selection and placement. However, previous BMP optimization models did not study pesticides, which are very commonly used in corn areas. Also, with corn stover being projected as a viable alternative for biofuel production, there might be unintended consequences of the reduced residue in the corn fields on water quality. Therefore, there is a need to study the impact of different levels of residue management in combination with other BMPs at the watershed scale. In this research the following BMPs were selected for placement in the watershed: (a) residue management, (b) filter strips, (c) parallel terraces, (d) contour farming, and (e) tillage. We present a novel method of combining different NPS pollutants into a single objective function, which, along with the net costs, formed the two objective functions used during optimization. In this study we used a BMP tool, a database containing the pollution reduction and cost information of the BMPs under consideration, to provide pollutant loads during optimization. The BMP optimization was performed using an NSGA-II
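
    The ranking step at the heart of NSGA-II is fast non-dominated sorting. The sketch below is a simplified O(n^2)-per-front Python version for two minimization objectives (for instance, implementation cost and pollutant load); it illustrates the mechanism only and omits NSGA-II's crowding distance and elitism.

```python
def dominates(a, b):
    """True if solution a dominates b (minimizing every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Split objective vectors into successive Pareto fronts, the core
    ranking step of NSGA-II (simplified quadratic version)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Toy example: (implementation cost, pollutant load), both minimized.
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(non_dominated_fronts(pts))  # [[0, 1, 3], [2]]; (3,4) is dominated by (2,3)
```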

  2. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
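
    To accompany the introduction to basic genetic algorithm concepts, here is a minimal, self-contained GA in Python (tournament selection, one-point crossover, bit-flip mutation) applied to the toy one-max problem. It is a generic illustration, not Splicer; every parameter value is an arbitrary choice.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=60,
                      p_mut=0.02, p_cross=0.9):
    """Minimal canonical GA: tournament selection, one-point crossover,
    bit-flip mutation.  Maximizes `fitness` over fixed-length bit strings."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]
        def select():  # tournament of size 2
            return max(random.sample(scored, 2))[1]
        nxt = []
        while len(nxt) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < p_cross:  # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [[bit ^ (random.random() < p_mut) for bit in ind] for ind in (a, b)]
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# Toy problem: maximize the number of 1-bits ("one-max").
best = genetic_algorithm(sum)
print(best, sum(best))
```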

  3. The Rational Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, in which Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.

  4. Towards General Algorithms for Grammatical Inference

    NASA Astrophysics Data System (ADS)

    Clark, Alexander

    Many algorithms for grammatical inference can be viewed as instances of a more general algorithm which maintains a set of primitive elements, which distributionally define sets of strings, and a set of features or tests that constrain various inference rules. Using this general framework, which we cast as a process of logical inference, we re-analyse Angluin's famous L* algorithm and several recent algorithms for the inference of context-free grammars and multiple context-free grammars. Finally, to illustrate the advantages of this approach, we extend it to the inference of functional transductions from positive data only, and we present a new algorithm for the inference of finite state transducers.

  5. An Optimal Class Association Rule Algorithm

    NASA Astrophysics Data System (ADS)

    Jean Claude, Turiho; Sheng, Yang; Chuang, Li; Kaia, Xie

    Classification and association rule mining are two important aspects of data mining. Class association rule mining is a promising approach, as it uses association rule mining to discover classification rules. This paper introduces an optimal class association rule mining algorithm known as OCARA. It uses an optimal association rule mining algorithm, and the rule set is sorted by rule priority, resulting in a more accurate classifier. Experimental results show that OCARA outperforms C4.5, CBA, and RMR on eight UCI data sets.

  6. Efficient demultiplexing algorithm for noncontiguous carriers

    NASA Technical Reports Server (NTRS)

    Thanawala, A. A.; Kwatra, S. C.; Jamali, M. M.; Budinger, J.

    1992-01-01

    A channel separation algorithm for the frequency division multiple access/time division multiplexing (FDMA/TDM) scheme is presented. It is shown that implementation using this algorithm can be more effective than the fast Fourier transform (FFT) algorithm when only a small number of carriers need to be selected from many, such as satellite Earth terminals. The algorithm is based on polyphase filtering followed by application of a generalized Walsh-Hadamard transform (GWHT). Comparison of the transform technique used in this algorithm with discrete Fourier transform (DFT) and FFT is given. Estimates of the computational rates and power requirements to implement this system are also given.
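
    The transform at the core of the scheme is a generalized Walsh-Hadamard transform. As a point of comparison with the DFT/FFT, the sketch below implements the standard fast WHT, which needs only additions and subtractions; the generalized variant used in the paper differs in detail.

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (natural/Hadamard ordering).
    Length must be a power of two; the butterfly structure uses only
    additions and subtractions, the main appeal over an FFT for
    low-complexity demultiplexing hardware."""
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```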

  7. Antimicrobial activity of the synthetic peptide scolopendrasin II from the centipede Scolopendra subspinipes mutilans.

    PubMed

    Kwon, Young-Nam; Lee, Joon Ha; Kim, In-Woo; Kim, Sang-Hee; Yun, Eun-Young; Nam, Sung-Hee; Ahn, Mi-Young; Jeong, Mihye; Kang, Dong-Chul; Lee, In Hee; Hwang, Jae Sam

    2013-10-28

    The centipede Scolopendra subspinipes mutilans is a medicinally important arthropod species. However, its transcriptome is not currently available, and transcriptome analysis would be useful in providing insight at the molecular level. Hence, we performed de novo RNA sequencing of S. subspinipes mutilans using next-generation sequencing. We generated a novel peptide (scolopendrasin II) based on an SVM algorithm, and biochemically evaluated the in vitro antimicrobial activity of scolopendrasin II against various microbes. Scolopendrasin II showed antibacterial activity against gram-positive and -negative bacterial strains, including antibiotic-resistant gram-negative bacteria, as well as the yeast Candida albicans, as determined by a radial diffusion assay and a colony count assay, without hemolytic activity. In addition, we confirmed that scolopendrasin II binds to the surface of bacteria through a specific interaction with lipoteichoic acid and lipopolysaccharide, which are bacterial cell-wall components. In conclusion, our results suggest that scolopendrasin II may be useful for developing peptide antibiotics.

  8. Effect of Cu(II), Cd(II) and Zn(II) on Pb(II) biosorption by algae Gelidium-derived materials.

    PubMed

    Vilar, Vítor J P; Botelho, Cidália M S; Boaventura, Rui A R

    2008-06-15

    Biosorption of Pb(II), Cu(II), Cd(II) and Zn(II) from binary metal solutions onto the algae Gelidium sesquipedale, an algal industrial waste and a waste-based composite material was investigated at pH 5.3, in a batch system. Binary Pb(II)/Cu(II), Pb(II)/Cd(II) and Pb(II)/Zn(II) solutions have been tested. For the same equilibrium concentrations of both metal ions (1 mmol l(-1)), approximately 66, 85 and 86% of the total uptake capacity of the biosorbents is taken by lead ions in the systems Pb(II)/Cu(II), Pb(II)/Cd(II) and Pb(II)/Zn(II), respectively. Two-metal results were fitted to a discrete and a continuous model, showing the inhibition of the primary metal biosorption by the co-cation. The model parameters suggest that Cd(II) and Zn(II) have the same decreasing effect on the Pb(II) uptake capacity. The uptake of Pb(II) was highly sensitive to the presence of Cu(II). From the discrete model it was possible to obtain the Langmuir affinity constant for Pb(II) biosorption. The presence of the co-cations decreases the apparent affinity of Pb(II). The experimental results were successfully fitted by the continuous model, at different pH values, for each biosorbent. The following sequence for the equilibrium affinity constants was found: Pb>Cu>Cd approximately Zn.

  9. II Zwicky 23 and Family

    NASA Astrophysics Data System (ADS)

    Wehner, E. H.; Gallagher, J. S.; Rudie, G. C.; Cigan, P. J.

    II Zwicky 23 (UGC 3179) is a luminous (MB ~ -21) nearby compact narrow emission line starburst galaxy with blue optical colors and strong emission lines. We present a photometric and morphological study of II Zw 23 and its interacting companions using data obtained with the WIYN 3.5-m telescope at Kitt Peak, Arizona. II Zwicky 23 has a highly disturbed outer structure with long trails of debris that may be feeding tidal dwarfs.

  10. A comparative analysis of GPU implementations of spectral unmixing algorithms

    NASA Astrophysics Data System (ADS)

    Sanchez, Sergio; Plaza, Antonio

    2011-11-01

    Spectral unmixing is a very important task for remotely sensed hyperspectral data exploitation. It involves the separation of a mixed pixel spectrum into its pure component spectra (called endmembers) and the estimation of the proportion (abundance) of each endmember in the pixel. Over the last years, several algorithms have been proposed for: i) automatic extraction of endmembers, and ii) estimation of the abundance of endmembers in each pixel of the hyperspectral image. The latter step usually imposes two constraints in abundance estimation: the non-negativity constraint (meaning that the estimated abundances cannot be negative) and the sum-to-one constraint (meaning that the sum of endmember fractional abundances for a given pixel must be unity). These two steps comprise a hyperspectral unmixing chain, which can be very time-consuming (particularly for high-dimensional hyperspectral images). Parallel computing architectures have offered an attractive solution for fast unmixing of hyperspectral data sets, but these systems are expensive and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in (near) real-time. In this paper, we perform an inter-comparison of parallel algorithms for automatic extraction of pure spectral signatures or endmembers and for estimation of the abundance of endmembers in each pixel of the scene. The compared techniques are implemented in graphics processing units (GPUs). These hardware accelerators can bridge the gap towards on-board processing of this kind of data. The considered algorithms comprise the orthogonal subspace projection (OSP), iterative error analysis (IEA) and N-FINDR algorithms for endmember extraction, as well as unconstrained, partially constrained and fully constrained abundance estimation. The considered implementations are inter-compared using different GPU architectures and hyperspectral
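
    As an illustration of the constrained abundance-estimation step, the following sketch approximates fully constrained unmixing by running non-negative least squares on a system augmented with a heavily weighted sum-to-one row, a common FCLS trick. It assumes SciPy is available; the GPU implementations compared in the paper are far more elaborate, and the toy endmember matrix is fabricated.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, pixel, delta=1e3):
    """Fully constrained least-squares unmixing sketch: non-negativity via
    NNLS, sum-to-one enforced softly by appending a heavily weighted row
    of ones to the endmember matrix E (bands x endmembers)."""
    A = np.vstack([E, delta * np.ones(E.shape[1])])
    b = np.append(pixel, delta)
    x, _ = nnls(A, b)
    return x

# Toy example: 2 endmembers, 3 bands, pixel is a 30/70 mixture.
E = np.array([[1.0, 0.2], [0.5, 0.8], [0.1, 0.9]])
pix = 0.3 * E[:, 0] + 0.7 * E[:, 1]
print(fcls_abundances(E, pix))  # ~[0.3, 0.7]
```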

  11. The global Minmax k-means algorithm.

    PubMed

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes results in singleton clusters, and when the initial positions are bad, the k-means algorithm can easily be trapped in poor local optima. In this paper, we modify the global k-means algorithm to eliminate singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm, and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.
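
    A minimal NumPy sketch of the incremental center-addition idea underlying global k-means is given below, without the MinMax weighting added by the paper: each new center is trial-initialized at every data point and the lowest-error run is kept. The candidate loop runs one k-means per data point, so this is illustrative rather than efficient.

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Standard Lloyd iterations from the given initial centers."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(len(centers))])
    return centers, labels

def global_kmeans(X, k):
    """Incremental global k-means: grow from 1 to k clusters, trying every
    data point as the initial position of the newly added center and
    keeping the run with the lowest total intra-cluster variance."""
    centers = X.mean(0, keepdims=True)          # optimal 1-means solution
    for _ in range(2, k + 1):
        best = None
        for x in X:                              # candidate new center
            c, lab = kmeans(X, np.vstack([centers, x]))
            err = ((X - c[lab]) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, c)
        centers = best[1]
    return centers

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
print(global_kmeans(X, 2))
```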

  12. Belle II Software

    NASA Astrophysics Data System (ADS)

    Kuhr, T.; Ritter, M.; Belle II Software Group

    2016-10-01

    Belle II is a next generation B factory experiment that will collect 50 times more data than its predecessor, Belle. The higher luminosity at the SuperKEKB accelerator leads to higher background levels and requires a major upgrade of the detector. As a consequence, the simulation, reconstruction, and analysis software must also be upgraded substantially. Most of the software has been redesigned from scratch, taking into account the experience from Belle and other experiments and utilizing new technologies. The large amount of experimental and simulated data requires a high level of reliability and reproducibility, even in parallel environments. Several technologies, tools, and organizational measures are employed to evaluate and monitor the performance of the software during development.

  13. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future program and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix-vector product; (2) a four-index integral transformation; and (3) the calculation of diatomic two-electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.

  14. A hybrid algorithm with GA and DAEM

    NASA Astrophysics Data System (ADS)

    Wan, HongJie; Deng, HaoJiang; Wang, XueWei

    2013-03-01

    Although the expectation-maximization (EM) algorithm has been widely used for finding maximum likelihood estimates of parameters in probabilistic models, it has the problem of being trapped by local maxima. To overcome this problem, the deterministic annealing EM (DAEM) algorithm was proposed; it achieves better performance than the EM algorithm, but it is still not very effective at avoiding local maxima. In this paper, a solution is proposed by integrating GA and DAEM into one procedure to further improve solution quality. The population-based search of the genetic algorithm produces different solutions and thus enlarges the search space of DAEM. Therefore, the proposed algorithm reaches better solutions than DAEM alone. The algorithm retains the properties of DAEM and obtains better solutions through genetic operations. Experimental results on Gaussian mixture model parameter estimation demonstrate that the proposed algorithm achieves better performance.

  15. MM Algorithms for Some Discrete Multivariate Distributions.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2010-09-01

    The MM (minorization-maximization) principle is a versatile tool for constructing optimization algorithms. Every EM algorithm is an MM algorithm but not vice versa. This article derives MM algorithms for maximum likelihood estimation with discrete multivariate distributions such as the Dirichlet-multinomial and Connor-Mosimann distributions, the Neerchal-Morel distribution, the negative-multinomial distribution, certain distributions on partitions, and zero-truncated and zero-inflated distributions. These MM algorithms increase the likelihood at each iteration and reliably converge to the maximum from well-chosen initial values. Because they involve no matrix inversion, the algorithms are especially pertinent to high-dimensional problems. To illustrate the performance of the MM algorithms, we compare them to Newton's method on data used to classify handwritten digits.
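
    A tiny worked example of the MM flavor on one of the zero-truncated cases: for a zero-truncated Poisson with sample mean xbar > 1, the EM/MM update lam <- xbar*(1 - exp(-lam)) is monotone and drives lam to the root of the MLE equation xbar = lam/(1 - exp(-lam)). This is a standard textbook case used for illustration, not code from the article.

```python
import math

def zt_poisson_mle(xbar, tol=1e-12):
    """MM/EM fixed-point iteration for the rate of a zero-truncated Poisson.
    Each step maximizes the minorizing surrogate in closed form; requires
    xbar > 1, since a zero-truncated Poisson has mean lam/(1-exp(-lam)) > 1."""
    lam = xbar  # any positive start works
    while True:
        new = xbar * (1.0 - math.exp(-lam))
        if abs(new - lam) < tol:
            return new
        lam = new

lam = zt_poisson_mle(2.5)
print(lam, lam / (1 - math.exp(-lam)))  # second value recovers xbar = 2.5
```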

  16. Optimal Multistage Algorithm for Adjoint Computation

    SciTech Connect

    Aupy, Guillaume; Herrmann, Julien; Hovland, Paul; Robert, Yves

    2016-01-01

    We reexamine the work of Stumm and Walther on multistage algorithms for adjoint computation. We provide an optimal algorithm for this problem when there are two levels of checkpoints, in memory and on disk. Previously, optimal algorithms for adjoint computations were known only for a single level of checkpoints with no writing and reading costs; a well-known example is the binomial checkpointing algorithm of Griewank and Walther. Stumm and Walther extended that binomial checkpointing algorithm to the case of two levels of checkpoints, but they did not provide any optimality results. We bridge the gap by designing the first optimal algorithm in this context. We experimentally compare our optimal algorithm with that of Stumm and Walther to assess the difference in performance.
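
    For reference, the classical single-level result that the two-level work builds on can be stated in one line: with s checkpoints and at most r recomputation sweeps, binomial (revolve-style) checkpointing can reverse C(s+r, s) forward steps. A small calculator, assuming Python 3.8+ for math.comb:

```python
from math import comb

def max_steps(snaps, sweeps):
    """Griewank-Walther binomial checkpointing bound: with `snaps`
    checkpoints and at most `sweeps` forward sweeps (recomputations),
    an adjoint of up to comb(snaps + sweeps, snaps) steps can be reversed."""
    return comb(snaps + sweeps, snaps)

print(max_steps(3, 4))  # 35 steps reversible with 3 checkpoints and 4 sweeps
```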

  17. A compilation of jet finding algorithms

    SciTech Connect

    Flaugher, B.; Meier, K.

    1992-12-31

    Technical descriptions of jet finding algorithms currently in use in p{anti p} collider experiments (CDF, UA1, UA2), e{sup +}e{sup {minus}} experiments and Monte-Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E{sub T} and P{sub T} of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five variants of this approach are described.
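
    To make the cone-versus-nearest-neighbor distinction concrete, here is a deliberately simplified seeded cone finder in Python over (ET, eta, phi) towers. It is illustrative only: the experiment-specific algorithms surveyed above differ precisely in the details this sketch glosses over (centroid iteration, split/merge handling, and the exact ET definition).

```python
import math

def cone_jets(towers, radius=0.7, seed_et=1.0):
    """Very simplified seeded cone jet finder: take the highest-ET tower as
    a seed, sweep all towers within dR = sqrt(deta^2 + dphi^2) of it into
    one jet, and repeat on the remainder.  For simplicity the jet phi is
    taken as the seed phi rather than a wraparound-aware centroid."""
    towers = sorted(towers, key=lambda t: -t[0])  # (ET, eta, phi)
    jets = []
    while towers and towers[0][0] >= seed_et:
        et0, eta0, phi0 = towers[0]
        in_cone, rest = [], []
        for et, eta, phi in towers:
            dphi = (phi - phi0 + math.pi) % (2 * math.pi) - math.pi
            (in_cone if math.hypot(eta - eta0, dphi) < radius
             else rest).append((et, eta, phi))
        jet_et = sum(t[0] for t in in_cone)
        # ET-weighted eta centroid defines the jet axis in eta.
        jets.append((jet_et,
                     sum(t[0] * t[1] for t in in_cone) / jet_et,
                     phi0))
        towers = rest
    return jets

print(cone_jets([(5, 0.0, 0.0), (2, 0.1, 0.1), (3, 2.0, 2.0)]))
```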

  18. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed, in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied under different computational constraints that incorporate path-based problems, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and it extends naturally to general shortest path problems.

  19. He II-Emitting Galaxies

    NASA Astrophysics Data System (ADS)

    Heap, Sara R.

    2014-01-01

    A small fraction of star-forming galaxies at redshift ~3 show He II at 1640 A as a narrow emission line (Cassata et al. 2012), but the source of this emission is not understood. Does the He II emission arise in the stars or in the surrounding nebula? To answer this question, we use I Zw 18, a well-studied blue compact dwarf galaxy showing narrow He II line emission, as a test case. We consider if and how He II narrow emission lines could originate in the nearby nebulosity, or in the winds of hot, massive stars, both those on the main sequence and in post-MS evolutionary phases.

  20. Mode II fatigue crack propagation.

    NASA Technical Reports Server (NTRS)

    Roberts, R.; Kibler, J. J.

    1971-01-01

    Fatigue crack propagation rates were obtained for 2024-T3 bare aluminum plates subjected to in-plane, mode I, extensional loads and transverse, mode II, bending loads. These results were compared to the results of Iida and Kobayashi for in-plane mode I-mode II extensional loads. The engineering significance of mode I-mode II fatigue crack growth is considered in view of the present results. A fatigue crack growth equation for handling mode I-mode II fatigue crack growth rates from existing mode I data is also discussed.

  1. Phase II Final Report

    SciTech Connect

    Schuknecht, Nate; White, David; Hoste, Graeme

    2014-09-11

    The SkyTrough DSP will advance the state-of-the-art in parabolic troughs for utility applications, with a larger aperture, higher operating temperature, and lower cost. The goal of this project was to develop a parabolic trough collector that enables solar electricity generation in the 2020 marketplace for a 216MWe nameplate baseload power plant. This plant requires an LCOE of 9¢/kWhe, given a capacity factor of 75%, a fossil fuel limit of 15%, a fossil fuel cost of $6.75/MMBtu, $25.00/kWht thermal storage cost, and a domestic installation corresponding to Daggett, CA. The result of our optimization was a trough design of larger aperture and operating temperature than has been fielded in large, utility scale parabolic trough applications: 7.6m width x 150m SCA length (1,118m2 aperture), with four 90mm diameter × 4.7m receivers per mirror module and an operating temperature of 500°C. The results from physical modeling in the System Advisory Model indicate that, for a capacity factor of 75%: The LCOE will be 8.87¢/kWhe. SkyFuel examined the design of almost every parabolic trough component from a perspective of load and performance at aperture areas from 500 to 2,900m2. Aperture-dependent design was combined with fixed quotations for similar parts from the commercialized SkyTrough product, and established an installed cost of $130/m2 in 2020. This project was conducted in two phases. Phase I was a preliminary design, culminating in an optimum trough size and further improvement of an advanced polymeric reflective material. This phase was completed in October of 2011. Phase II has been the detailed engineering design and component testing, which culminated in the fabrication and testing of a single mirror module. Phase II is complete, and this document presents a summary of the comprehensive work.

  2. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  3. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state is developed, based on the intensity changes of the fundus reflex.

  4. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post processing, in order to reduce the false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines. The purpose of this approach was to explore the feasibility of an example-based learning approach for the task of detecting wires from their images. Support Vector Machines (SVMs) have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel-thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning
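
    As a minimal stand-in for the second approach, the sketch below trains a scikit-learn SVM to separate toy 16x16 clutter patches from patches containing a faint one-pixel-wide line. The synthetic patch generator and all parameters are hypothetical; the study itself used real outdoor imagery composited with rendered wires, and it found the approach breaks down for sub-pixel-thick wires.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def patch(with_wire):
    """Toy 16x16 patch: Gaussian clutter, plus a faint one-pixel-wide
    horizontal line when with_wire is True (a stand-in for a wire image)."""
    p = rng.normal(0, 1, (16, 16))
    if with_wire:
        p[rng.integers(16), :] += 2.0
    return p.ravel()

X = np.array([patch(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

clf = SVC(kernel="rbf").fit(X[:300], y[:300])
print("holdout accuracy:", clf.score(X[300:], y[300:]))
```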

  5. Sensor network algorithms and applications.

    PubMed

    Trigoni, Niki; Krishnamachari, Bhaskar

    2012-01-13

    A sensor network is a collection of nodes with processing, communication and sensing capabilities deployed in an area of interest to perform a monitoring task. There has now been about a decade of very active research in the area of sensor networks, with significant accomplishments made in terms of both designing novel algorithms and building exciting new sensing applications. This Theme Issue provides a broad sampling of the central challenges and the contributions that have been made towards addressing these challenges in the field, and illustrates the pervasive and central role of sensor networks in monitoring human activities and the environment.

  6. Cluster Algorithm Special Purpose Processor

    NASA Astrophysics Data System (ADS)

    Talapov, A. L.; Shchur, L. N.; Andreichenko, V. B.; Dotsenko, Vl. S.

    We describe a Special Purpose Processor, realizing the Wolff algorithm in hardware, which is fast enough to study the critical behaviour of 2D Ising-like systems containing more than one million spins. The processor has been checked to produce correct results for a pure Ising model and for Ising model with random bonds. Its data also agree with the Nishimori exact results for spin glass. Only minor changes of the SPP design are necessary to increase the dimensionality and to take into account more complex systems such as Potts models.
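
    For readers unfamiliar with the algorithm the processor implements, a plain-Python Wolff cluster update for the 2D Ising model is sketched below; the SPP realizes this in hardware for lattices of over a million spins, while the toy run here uses a 16 x 16 lattice.

```python
import math, random

def wolff_step(spins, L, beta):
    """One Wolff cluster update for the 2D Ising model on an L x L torus:
    grow a cluster from a random seed, adding aligned neighbours with
    probability p = 1 - exp(-2*beta), then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = random.randrange(L * L)
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for j in ((x + 1) % L + y * L, (x - 1) % L + y * L,
                  x + ((y + 1) % L) * L, x + ((y - 1) % L) * L):
            if j not in cluster and spins[j] == s0 and random.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:
        spins[i] = -s0

# Toy run near the critical coupling (beta_c ~ 0.4407).
L, spins = 16, [random.choice((-1, 1)) for _ in range(16 * 16)]
for _ in range(100):
    wolff_step(spins, L, 0.4407)
print(sum(spins) / (L * L))  # magnetization per spin
```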

  7. The Complexity of Parallel Algorithms,

    DTIC Science & Technology

    1985-11-01

    Much of this work was done in collaboration with my advisor, Ernst Mayr. He was also supported in part by ONR contract N00014-85-C-0731. ... Helmbold and Mayr use this in their algorithm to compute an optimal two-processor schedule [HM2]. One of the promising developments in parallel algorithms is that ... can be solved by a fast parallel algorithm if the numbers are small. Helmbold and Mayr [HM1] have shown that if the job times are ...

  8. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  9. Comparative Analysis of Guidance Algorithms for the Hyper Velocity Missile and AFTI/F-16

    DTIC Science & Technology

    1991-11-12

    latest Kalman Filter modeling tools to the problem of guiding this missile to a target. Capps and Nelson assumed a smart missile, with "a proportional... [Figure 3-7: Pursuit Algorithm Geometry] ...algorithm does.
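
    The surviving fragment mentions a proportional guidance law. As background only, and assuming the report refers to classic proportional navigation, the textbook law is a one-liner; all names below are ours:

```python
def pn_accel(nav_gain, closing_speed, los_rate):
    """Proportional navigation: commanded lateral acceleration
    a = N * Vc * (dLambda/dt), with navigation constant N typically 3-5."""
    return nav_gain * closing_speed * los_rate

# Example: N = 4, closing at 900 m/s, line of sight rotating at 0.02 rad/s.
print(pn_accel(4.0, 900.0, 0.02))   # 72.0 m/s^2, about 7.3 g
```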

  10. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. Comparison with other well-known scoring functions shows that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An Artificial Neural Network (ANN) was also used to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101
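
    A linear empirical scoring function of this kind can be fitted by ordinary least squares. The sketch below is a schematic stand-in, assuming four aggregate descriptors per complex; the descriptor values, pKd targets, and names are made-up placeholders, not LISA's actual terms or training data:

```python
import numpy as np

# Hypothetical descriptors per complex: [VDW contacts, H-bonds,
# desolvation term, metal-chelation term] (placeholder values).
X = np.array([[120.0, 4, 2.5, 0],
              [ 85.0, 2, 1.8, 1],
              [150.0, 6, 3.1, 0],
              [ 60.0, 1, 0.9, 0]])
y = np.array([6.2, 5.1, 7.4, 3.8])            # made-up pKd targets

A = np.hstack([X, np.ones((len(X), 1))])      # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
w, b = coef[:-1], coef[-1]

def predicted_affinity(descriptors):
    """Linear model: predicted pKd = w . descriptors + b."""
    return float(np.dot(w, descriptors) + b)

print(predicted_affinity([100.0, 3, 2.0, 0]))
```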

  11. The Aquarius Salinity Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David

    2012-01-01

    The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing only v-pol TB are used for this last step.
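
    The sequence of corrections reads as a simple pipeline from antenna temperature to surface brightness temperature. The Python sketch below only mirrors the ordering described above; the functional forms, signs, and numbers are placeholders, not the mission's actual processing:

```python
def ta_to_tb(ta, celestial, apc, faraday_rot, atmos, roughness):
    """Schematic TA -> TB chain in the order the abstract describes."""
    tb = ta - celestial        # remove solar/lunar/galactic intrusion
    tb = apc(tb)               # antenna pattern correction
    tb = faraday_rot(tb)       # undo ionospheric Faraday rotation
    tb = tb - atmos            # remove L-band oxygen absorption
    tb = tb - roughness        # remove wind-driven surface roughness signal
    return tb

# Toy numbers and identity-like corrections, for illustration only.
print(ta_to_tb(110.0, 1.2, lambda t: 1.02 * t, lambda t: t, 2.5, 0.8))
```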

  12. Filter selection using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Patel, Devesh

    1996-03-01

    Convolution operators act as matched filters for certain types of variations found in images and have been extensively used in the analysis of images. However, filtering through a bank of N filters generates N filtered images, considerably increasing the amount of data. Moreover, not all of these filters have the same discriminatory capability for the individual images, making the task of any classifier difficult. In this paper, we use genetic algorithms to select a subset of relevant filters. Genetic algorithms are a class of adaptive search techniques whose processes are similar to the natural selection of biological evolution. The steady-state model (GENITOR) is used in this paper. The reduced filter set improves the performance of the classifier (here, a multi-layer perceptron neural network) and reduces the computational requirement. In this study we use the Laws filters, which were proposed for the analysis of texture images. Our aim is to recognize the different textures in the images using the reduced filter set.
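
    A GENITOR-style steady-state GA for subset selection is easy to sketch: each chromosome is a bit mask over the filter bank, and each step breeds one child that replaces the worst member. In the sketch below the fitness function is a placeholder (in the paper it would be the perceptron classifier's accuracy on the filtered images), and all names are ours:

```python
import random

N_FILTERS = 25          # e.g., a 5x5 Laws texture filter bank

def fitness(mask):
    """Placeholder fitness: rewards subsets near a pretend ideal size of 8.
    The real objective would be classifier accuracy on the filtered images."""
    return 1.0 / (1 + abs(sum(mask) - 8))

def steady_state_ga(pop_size=30, steps=500):
    pop = [[random.randint(0, 1) for _ in range(N_FILTERS)]
           for _ in range(pop_size)]
    for _ in range(steps):
        pop.sort(key=fitness, reverse=True)
        p1, p2 = random.sample(pop[:10], 2)          # rank-biased parents
        cut = random.randrange(1, N_FILTERS)
        child = p1[:cut] + p2[cut:]                  # one-point crossover
        if random.random() < 0.1:
            i = random.randrange(N_FILTERS)
            child[i] ^= 1                            # bit-flip mutation
        pop[-1] = child                              # replace the worst (steady state)
    return max(pop, key=fitness)

print(steady_state_ga())
```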

  13. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to current techniques in both areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We conclude that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of an inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying the variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved first, both to provide a starting solution and to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
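
    One way a cheap piecewise-linear approximation can pay off when estimating the mean of an expensive function is as a control variate. The sketch below shows that mechanic with our own toy functions; it illustrates the general variance-reduction idea, not the dissertation's specific scheme:

```python
import math
import random
import statistics

def mean_with_control_variate(f, g, g_mean, sample, n=20000):
    """Estimate E[f(X)] as E[f - g] + E[g]: when the cheap approximation g
    tracks the expensive f, the residual f - g has far smaller variance."""
    xs = [sample() for _ in range(n)]
    return statistics.fmean(f(x) - g(x) for x in xs) + g_mean

random.seed(0)
g = lambda x: abs(x - 0.5)                           # piecewise-linear proxy
f = lambda x: abs(x - 0.5) + 0.05 * math.sin(6 * x)  # "expensive" function
print(mean_with_control_variate(f, g, 0.25, random.random))  # E[g] = 1/4 for U(0,1)
```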

  14. Ligand Identification Scoring Algorithm (LISA).

    PubMed

    Zheng, Zheng; Merz, Kenneth M

    2011-06-27

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects, and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions, and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. Comparison with other well-known scoring functions shows that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An Artificial Neural Network (ANN) was also used to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms.

  15. A multipopulation PSO based memetic algorithm for permutation flow shop scheduling.

    PubMed

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is a production scheduling problem that belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations; each particle evolves by standard PSO, and each subpopulation is then refined using a different local search scheme, namely variable neighborhood search (VNS) or an individual improvement scheme (IIS). The best particle of each subpopulation is then selected to construct a probabilistic model using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely, a PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSP instances taken from the OR-Library, and the experimental results show that it is an effective approach for the PFSSP.
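
    The structure described above (parallel subpopulations, a PSO update, a memetic local-search step, and model-based resampling of the worst member) can be sketched compactly. The following is a continuous-variable skeleton with stand-in components, not the paper's permutation-based MPSOMA; the objective, local search, and resampling rule are all placeholders:

```python
import random

def sphere(x):                        # toy objective standing in for makespan
    return sum(v * v for v in x)

def local_search(x, step=0.05):
    """Placeholder for the paper's VNS/IIS local-search schemes."""
    y = [v + random.uniform(-step, step) for v in x]
    return y if sphere(y) < sphere(x) else x

def mpsoma_skeleton(dim=5, n_subpops=3, size=8, iters=200):
    pops = [[[random.uniform(-5, 5) for _ in range(dim)] for _ in range(size)]
            for _ in range(n_subpops)]
    vels = [[[0.0] * dim for _ in range(size)] for _ in range(n_subpops)]
    gbest = min((p for pop in pops for p in pop), key=sphere)[:]
    for _ in range(iters):
        for s, pop in enumerate(pops):
            best = min(pop, key=sphere)[:]            # subpopulation leader
            for i, x in enumerate(pop):
                for d in range(dim):                  # bare-bones PSO update
                    vels[s][i][d] = (0.7 * vels[s][i][d]
                                     + 1.4 * random.random() * (best[d] - x[d])
                                     + 1.4 * random.random() * (gbest[d] - x[d]))
                    x[d] += vels[s][i][d]
                pop[i] = local_search(x)              # memetic refinement
            worst = max(range(size), key=lambda i: sphere(pop[i]))
            pop[worst] = [b + random.gauss(0.0, 0.1) for b in best]  # resample worst
        gbest = min((p for pop in pops for p in pop), key=sphere)[:]
    return gbest

print(sphere(mpsoma_skeleton()))      # typically near 0
```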

  16. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. In several applications, however, the desired information must be calculated quickly enough for practical use. High computational performance is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques spanning four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.
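
    A common pattern in such cluster implementations is spatial-domain partitioning: the image cube is split into row slabs that workers process independently. The sketch below applies that pattern to a spectral angle map, a standard per-pixel hyperspectral kernel; it is our own generic illustration, not the paper's code:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial
import numpy as np

def sam_slab(ref, slab):
    """Spectral angle between every pixel in a (rows, cols, bands) slab
    and a reference spectrum ref of shape (bands,)."""
    num = (slab * ref).sum(axis=2)
    den = np.linalg.norm(slab, axis=2) * np.linalg.norm(ref) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def parallel_sam(cube, ref, n_workers=4):
    """Spatial-domain partitioning: split the cube's rows into slabs and
    map them across worker processes, then stitch the results back."""
    slabs = np.array_split(cube, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        return np.vstack(list(ex.map(partial(sam_slab, ref), slabs)))

if __name__ == "__main__":
    cube = np.random.rand(512, 512, 64).astype(np.float32)
    ref = cube.mean(axis=(0, 1))          # toy reference spectrum
    print(parallel_sam(cube, ref).shape)  # (512, 512)
```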

  17. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools for teaching different algorithms. Our visualization tool is currently in the prototype stage, and this thesis discusses the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions and then take a quiz assessing their comprehension of the algorithm.
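
    For reference, the algorithm the tool visualizes is the textbook priority-queue form of Dijkstra's shortest-path algorithm:

```python
import heapq

def dijkstra(graph, source):
    """Textbook Dijkstra: graph maps node -> list of (neighbor, weight)."""
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:                      # already settled with a shorter path
            continue
        visited.add(u)
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # relax the edge
                heapq.heappush(heap, (nd, v))
    return dist

# Example: the kind of rule-discovery instance a student might step through.
g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)],
     "C": [("D", 1)], "D": []}
print(dijkstra(g, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```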

  18. Solar Type II Radio Bursts and IP Type II Events

    NASA Technical Reports Server (NTRS)

    Cane, H. V.; Erickson, W. C.

    2005-01-01

    We have examined radio data from the WAVES experiment on the Wind spacecraft in conjunction with ground-based data in order to investigate the relationship between the shocks responsible for metric type II radio bursts and the shocks in front of coronal mass ejections (CMEs). The bow shocks of fast, large CMEs are strong interplanetary (IP) shocks, and the associated radio emissions often consist of single broad bands starting below approx. 4 MHz; such emissions were previously called IP type II events. In contrast, metric type II bursts are usually narrowbanded and display two harmonically related bands. In addition to displaying complete dynamic spectra for a number of events, we also analyze the 135 WAVES 1 - 14 MHz slow-drift time periods in 2001-2003. We find that most of the periods contain multiple phenomena, which we divide into three groups: metric type II extensions, IP type II events, and blobs and bands. About half of the WAVES listings include probable extensions of metric type II radio bursts, but in more than half of these events, there were also other slow-drift features. In the 3 yr study period, there were 31 IP type II events; these were associated with the very fastest CMEs. The most common form of activity in the WAVES events, blobs and bands in the frequency range between 1 and 8 MHz, fall below an envelope consistent with the early signatures of an IP type II event. However, most of this activity lasts only a few tens of minutes, whereas IP type II events last for many hours. In this study we find many examples in the radio data of two shock-like phenomena with different characteristics that occur simultaneously in the metric and decametric/hectometric bands, and no clear example of a metric type II burst that extends continuously down in frequency to become an IP type II event. The simplest interpretation is that metric type II bursts, unlike IP type II events, are not caused by shocks driven in front of CMEs.

  19. Technology II: Implementation Planning Guide.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Office of the Chancellor.

    The California Community Colleges (CCC) are facing a number of challenges, including the explosive use of the Internet, the digital divide, the need for integrating technology into teaching and learning, the impact of Tidal Wave II, and the need to ensure that technology is accessible to persons with disabilities. The CCCs' Technology II Strategic…

  20. PARIS II: DESIGNING GREENER SOLVENTS

    EPA Science Inventory

    PARIS II (the program for assisting the replacement of industrial solvents, version II), developed at the USEPA, is a unique software tool that can be used for customizing the design of replacement solvents and for the formulation of new solvents. This program helps users avoid ...