Science.gov

Sample records for algorithm ii nsga-ii

  1. Calibration of a polarization navigation sensor using the NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Hu, Xiaoping; Zhang, Lilian; He, Xiaofeng

    2016-10-01

    A bio-inspired polarization navigation sensor is designed based on the polarization sensitivity mechanisms of insects. A new calibration model, formulated as a multi-objective optimization problem, is presented. Unlike existing calibration models, the proposed model makes the calibration problem well-posed. The calibration parameters are optimized with the non-dominated sorting genetic algorithm II (NSGA-II) to minimize both angle of polarization (AOP) residuals and degree of linear polarization (DOLP) dispersions. Simulation and experimental results show that the proposed algorithm is more stable than the compared methods in calibration applications for polarization navigation sensors.
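
    As an illustration of how such a two-objective calibration can be posed for NSGA-II, the sketch below uses the pymoo library (an assumption; the record does not name an implementation). The sensor model, parameter bounds and residual definitions are toy placeholders, not the authors' calibration model.

      # Minimal two-objective calibration sketch with pymoo's NSGA-II.
      # The gain/offset model and both objectives are illustrative stand-ins.
      import numpy as np
      from pymoo.algorithms.moo.nsga2 import NSGA2
      from pymoo.core.problem import ElementwiseProblem
      from pymoo.optimize import minimize

      rng = np.random.default_rng(0)
      aop_true = rng.uniform(0, np.pi, 50)             # reference polarization angles
      raw = aop_true + 0.05 * rng.standard_normal(50)  # noisy sensor readings

      class SensorCalibration(ElementwiseProblem):
          """Objective 1: AOP residuals; objective 2: DOLP dispersion (toy)."""
          def __init__(self):
              super().__init__(n_var=2, n_obj=2,
                               xl=np.array([0.5, -0.5]), xu=np.array([1.5, 0.5]))

          def _evaluate(self, x, out, *args, **kwargs):
              gain, offset = x
              aop_est = gain * raw + offset
              dolp = np.cos(2 * (aop_est - aop_true))      # fake DOLP estimates
              out["F"] = [np.mean((aop_est - aop_true) ** 2),  # AOP residual
                          np.var(dolp)]                        # DOLP dispersion

      res = minimize(SensorCalibration(), NSGA2(pop_size=60),
                     ("n_gen", 100), seed=1, verbose=False)
      print(res.F[:3])  # a few points on the resulting Pareto front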

  2. The multi-objective optimization of the horizontal-axis marine current turbine based on NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, G. J.; Guo, P. C.; Luo, X. Q.; Feng, J. J.

    2012-11-01

    This paper describes a hydrodynamic optimization technique for horizontal-axis marine current turbines, for which the pitch angle distribution is an important design feature. The pitch angle distribution curve is parameterized with four control points using the Bézier curve method. The coordinates of the four control points are chosen as optimization variables, and the sample space is constructed according to the Box-Behnken experimental design method (BBD). The power capture coefficient and axial thrust coefficient at the design tip-speed ratio are then obtained for all elements of the sample space by CFD simulation. With the power capture coefficient and axial thrust as objective functions, quadratic polynomial regression equations are constructed, following the response surface model, to fit the relationship between the optimization variables and each objective. Using these regression equations as the performance prediction model, the marine current turbine is optimized with the NSGA-II multi-objective genetic algorithm, which finally yields an improved design.
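
    The response-surface step described above amounts to fitting quadratic polynomials to designed samples. A minimal sketch with scikit-learn (an assumed tool; the record does not say what was used), on synthetic data standing in for the CFD results:

      # Quadratic response-surface (second-order polynomial) surrogate fit.
      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, size=(25, 4))      # 4 control-point coordinates
      cp = 0.4 - 0.01 * (X ** 2).sum(axis=1)    # toy power capture coefficient
      ct = 0.8 + 0.02 * X.sum(axis=1)           # toy axial thrust coefficient

      surrogate_cp = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      surrogate_cp.fit(X, cp)
      surrogate_ct = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      surrogate_ct.fit(X, ct)

      x_new = rng.uniform(-1, 1, size=(1, 4))   # candidate from the optimizer
      print(surrogate_cp.predict(x_new), surrogate_ct.predict(x_new))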

  3. Design of isolated buildings with S-FBI system subjected to near-fault earthquakes using NSGA-II algorithm

    NASA Astrophysics Data System (ADS)

    Ozbulut, O. E.; Silwal, B.

    2014-04-01

    This study investigates the optimum design parameters of a superelastic friction base isolator (S-FBI) system through a multi-objective genetic algorithm and a performance-based evaluation approach. The S-FBI system consists of a flat steel-PTFE sliding bearing and a superelastic NiTi shape memory alloy (SMA) device. The sliding bearing limits the transfer of shear across the isolation interface and provides damping from sliding friction. The SMA device provides restoring-force capability to the isolation system together with additional damping. A three-story building is modeled with the S-FBI isolation system. A multi-objective numerical optimization that simultaneously minimizes isolation-level displacements and superstructure response is carried out with a genetic algorithm (GA) to optimize the S-FBI system. Nonlinear time-history analyses of the building with the S-FBI system are performed using a set of 20 near-field ground motion records. Results show that the S-FBI system successfully controls the response of the building against near-fault earthquakes without sacrificing isolation efficacy or producing large isolation-level deformations.

  4. Optimal locations of piezoelectric patches for supersonic flutter control of honeycomb sandwich panels, using the NSGA-II method

    NASA Astrophysics Data System (ADS)

    Nezami, M.; Gholami, B.

    2016-03-01

    The active flutter control of supersonic sandwich panels with regular honeycomb interlayers under impact load excitation is studied using piezoelectric patches. A non-dominated sorting-based multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm II (NSGA-II), is used to find the optimal locations for different numbers of piezoelectric actuator/sensor pairs. Quasi-steady first-order supersonic piston theory is employed to define the aerodynamic loading, and the p-method is applied to find the flutter bounds. Hamilton’s principle, in conjunction with generalized Fourier expansions and the Galerkin method, is used to develop the dynamical model of the structural system in the state-space domain. The classical Runge-Kutta time-integration algorithm is then used to calculate the open-loop aeroelastic response of the system. The maximum flutter velocity and minimum voltage applied to the actuators are calculated for the optimal patch locations obtained with NSGA-II, and proportional feedback is then used to actively suppress the closed-loop system response. Finally, the control effects of the two different controllers are compared.
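
    The open-loop response computation named above is a classical fourth-order Runge-Kutta integration of a state-space model. A minimal sketch, with an illustrative two-state system in place of the panel model:

      # Classical RK4 integration of x' = A x + B u(t); the matrices and the
      # impact-like forcing are illustrative, not the sandwich-panel model.
      import numpy as np

      A = np.array([[0.0, 1.0], [-4.0, -0.1]])   # toy single-mode dynamics
      B = np.array([0.0, 1.0])

      def u(t):
          return 1.0 if t < 0.01 else 0.0        # short impact excitation

      def rk4_step(x, t, h):
          f = lambda t, x: A @ x + B * u(t)
          k1 = f(t, x)
          k2 = f(t + h / 2, x + h / 2 * k1)
          k3 = f(t + h / 2, x + h / 2 * k2)
          k4 = f(t + h, x + h * k3)
          return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      x, t, h = np.zeros(2), 0.0, 1e-3
      trajectory = []
      for _ in range(5000):                      # 5 s of open-loop response
          x = rk4_step(x, t, h)
          t += h
          trajectory.append(x[0])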

  5. Multi-objective optimization of weld geometry in hybrid fiber laser-arc butt welding using Kriging model and NSGA-II

    NASA Astrophysics Data System (ADS)

    Gao, Zhongmei; Shao, Xinyu; Jiang, Ping; Wang, Chunming; Zhou, Qi; Cao, Longchao; Wang, Yilin

    2016-06-01

    An integrated multi-objective optimization approach combining a Kriging model and the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to predict and optimize weld geometry in hybrid fiber laser-arc welding of 316L stainless steel. A four-factor, five-level experiment using a Taguchi L25 orthogonal array is conducted considering laser power (P), welding current (I), distance between laser and arc (D) and traveling speed (V). Kriging models are adopted to approximate the relationship between the process parameters and the weld geometry, namely depth of penetration (DP), bead width (BW) and bead reinforcement (BR). NSGA-II then performs the multi-objective optimization, taking the constructed Kriging models as objective functions, and generates a set of Pareto-optimal solutions for the outputs. Meanwhile, the main effects and first-order interactions between process parameters are analyzed, and the microstructure is discussed. Verification experiments demonstrate that the optimum values obtained by the proposed integrated Kriging and NSGA-II approach are in good agreement with experimental results.
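
    A Kriging surrogate of the kind used here can be sketched with scikit-learn's Gaussian-process regressor (an assumed stand-in for the authors' Kriging code); the training data below are synthetic:

      # Kriging (Gaussian-process) surrogate for one weld-geometry response.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(2)
      X = rng.uniform(0, 1, size=(25, 4))        # P, I, D, V scaled to [0, 1]
      dp = 3.0 * X[:, 0] - 1.0 * X[:, 3] + 0.05 * rng.standard_normal(25)  # toy DP

      kriging_dp = GaussianProcessRegressor(
          kernel=ConstantKernel() * RBF(length_scale=[0.3] * 4),
          normalize_y=True).fit(X, dp)

      mean, std = kriging_dp.predict(rng.uniform(0, 1, (1, 4)), return_std=True)
      print(mean, std)   # surrogate prediction with its uncertainty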

  6. A modified NSGA-II solution for a new multi-objective hub maximal covering problem under uncertain shipments

    NASA Astrophysics Data System (ADS)

    Ebrahimi Zade, Amir; Sadegheih, Ahmad; Lotfi, Mohammad Mehdi

    2014-07-01

    Hubs are centers for the collection, rearrangement, and redistribution of commodities in transportation networks. In this paper, non-linear multi-objective formulations for single and multiple allocation hub maximal covering problems, as well as their linearized versions, are proposed. The formulations substantially reduce the complexity of existing models through fewer constraints and variables. Uncertain shipments are also studied in the context of hub maximal covering problems: in many real-world applications, any link on the path from origin to destination may fail due to disruption. Therefore, in the proposed bi-objective model, maximizing the safety of the weakest path in the network is considered as a second objective alongside the traditional maximum coverage goal. Furthermore, to solve the bi-objective model, a modified version of NSGA-II with a new dynamic immigration operator is developed, in which the number of immigrants depends on the results of the other two common NSGA-II operators, i.e. mutation and crossover. Besides validating the proposed models, computational results confirm the better performance of the modified NSGA-II over the traditional one.

  7. Application of MIMO Disturbance Observer to Control of an Electric Wheelchair Using NSGA-II.

    PubMed

    Saadatzi, Mohammad Nasser; Poshtan, Javad; Saadatzi, Mohammad Sadegh

    2011-05-01

    Electric wheelchairs (EWs) experience various terrain surfaces and slopes as well as occupants with diverse weights, which imparts a substantial amount of perturbation to the EW dynamics. In this paper, we make use of a two-degree-of-freedom control architecture called a disturbance observer (DOB), which reduces sensitivity to model uncertainties while enhancing rejection of the disturbances caused by entering slopes. The feedback loop, designed via the characteristic loci method, is augmented with a DOB containing a parameterized low-pass filter. Three performance indices, capturing the disturbance rejection, sensitivity reduction, and noise rejection of the whole controller, are defined; these enable the filter's optimal parameters to be picked using a multi-objective optimization approach, the non-dominated sorting genetic algorithm II. Finally, experimental results show a desirable improvement in the stiffness and disturbance rejection of the proposed controller, as well as its robust stability. PMID:22606667

  8. Optimization of multi-reservoir operation with a new hedging rule: application of fuzzy set theory and NSGA-II

    NASA Astrophysics Data System (ADS)

    Ahmadianfar, Iman; Adib, Arash; Taghian, Mehrdad

    2016-06-01

    Reservoir hedging rule curves are used to avoid severe water shortage during drought periods. In this method, reservoir storage is divided into several zones, and the rationing factors change immediately when the water storage level moves from one zone to another. In the present study, a hedging rule with fuzzy rationing factors is applied to create a transition zone above and below each rule curve, within which the rationing factor changes gradually. For this purpose, a monthly simulation model was developed and linked to the non-dominated sorting genetic algorithm (NSGA-II) to calculate the modified shortage index of two objective functions, the water supply of minimum flow and of agricultural demands, over a long-term simulation period. The Zohre multi-reservoir system in southern Iran is considered as a case study. The proposed hedging rule improves long-term system performance by 10 to 27 percent in comparison with the simple hedging rule, demonstrating that fuzzification of the hedging factors increases the applicability and efficiency of the new rule relative to the conventional rule curve for mitigating water shortage.
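
    A minimal sketch of a rationing factor with a gradual transition zone around a rule curve, using a simple linear membership in place of the paper's fuzzy scheme; all levels and factors are illustrative:

      # Hedging rule whose rationing factor blends smoothly across a band
      # around the rule curve instead of jumping at the zone boundary.
      def rationing_factor(storage, curve=50.0, band=5.0,
                           factor_below=0.6, factor_above=1.0):
          """Blend the two zone factors linearly over [curve-band, curve+band]."""
          if storage <= curve - band:
              return factor_below
          if storage >= curve + band:
              return factor_above
          w = (storage - (curve - band)) / (2.0 * band)  # membership in [0, 1]
          return (1.0 - w) * factor_below + w * factor_above

      release_demand = 120.0
      print(release_demand * rationing_factor(48.0))  # inside the transition zone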

  9. Developing a bi-objective optimization model for solving the availability allocation problem in repairable series-parallel systems by NSGA II

    NASA Astrophysics Data System (ADS)

    Amiri, Maghsoud; Khajeh, Mostafa

    2016-11-01

    This paper addresses the bi-objective optimization of the availability allocation problem in a series-parallel system with repairable components. The two objectives are the availability of the system and its total cost. Relative to previous studies of series-parallel systems, the main contribution of this study is to extend redundancy allocation problems to systems with repairable components in their configurations and subsystems. Due to the complexity of the model, a meta-heuristic, the non-dominated sorting genetic algorithm (NSGA-II), is applied to find the Pareto front, after which a selection procedure is used to pick the best solution from the front.
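
    The availability side of such a model follows from standard steady-state formulas: component availability A = MTBF/(MTBF + MTTR), a parallel subsystem fails only when all of its components are down, and subsystems in series multiply. A minimal sketch with illustrative numbers:

      # Steady-state availability of a series-parallel repairable system.
      def availability(mtbf, mttr):
          return mtbf / (mtbf + mttr)

      def subsystem_availability(components):
          """components: list of (mtbf, mttr) tuples in parallel."""
          unavail = 1.0
          for mtbf, mttr in components:
              unavail *= 1.0 - availability(mtbf, mttr)
          return 1.0 - unavail

      def system_availability(subsystems):
          a = 1.0
          for comps in subsystems:   # subsystems connected in series
              a *= subsystem_availability(comps)
          return a

      # Two redundant components in subsystem 1, a single one in subsystem 2.
      print(system_availability([[(1000, 10), (1000, 10)], [(500, 20)]]))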

  10. MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION

    EPA Science Inventory

    In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...

  11. Optimising operational amplifiers by evolutionary algorithms and gm/Id method

    NASA Astrophysics Data System (ADS)

    Tlelo-Cuautle, E.; Sanabria-Borbon, A. C.

    2016-10-01

    The evolutionary algorithm called the non-dominated sorting genetic algorithm II (NSGA-II) is applied herein to the optimisation of operational transconductance amplifiers. NSGA-II is accelerated by applying the gm/Id method to estimate reduced search spaces associated with the widths (W) and lengths (L) of the metal-oxide-semiconductor field-effect transistors (MOSFETs), and to guarantee appropriate bias conditions. In addition, we introduce an integer encoding for the W/L sizes of the MOSFETs, which avoids a post-processing step to round their values to multiples of the integrated-circuit fabrication technology grid. Finally, from the feasible solutions generated by NSGA-II, we introduce a second optimisation stage to guarantee that the final W/L sizes support process, voltage and temperature (PVT) variations. The optimisation results lead us to conclude that the gm/Id method and integer encoding are quite useful for accelerating the convergence of NSGA-II, while the second optimisation stage guarantees the robustness of the feasible solutions to PVT variations.
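
    The integer encoding is straightforward to sketch: each gene is an integer count of technology-grid units, so every decoded W/L value is grid-aligned by construction. The grid value and genes below are illustrative:

      # Integer encoding of MOSFET sizes: genes are integer multiples of the
      # technology grid, so no rounding step is needed after optimization.
      GRID = 0.18e-6            # assumed fabrication grid in meters

      def decode(genes):
          """Map integer genes directly to sizes on the grid."""
          return [g * GRID for g in genes]

      genes = [12, 4]           # candidate individual produced by NSGA-II
      W, L = decode(genes)
      print(W, L)               # 2.16e-06 7.2e-07 -> always grid-aligned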

  12. Multi-objective optimization in spatial planning: Improving the effectiveness of multi-objective evolutionary algorithms (non-dominated sorting genetic algorithm II)

    NASA Astrophysics Data System (ADS)

    Karakostas, Spiros

    2015-05-01

    The multi-objective nature of most spatial planning initiatives and the numerous constraints that are introduced in the planning process by decision makers, stakeholders, etc., synthesize a complex spatial planning context in which the concept of solid and meaningful optimization is a unique challenge. This article investigates new approaches to enhance the effectiveness of multi-objective evolutionary algorithms (MOEAs) via the adoption of a well-known metaheuristic: the non-dominated sorting genetic algorithm II (NSGA-II). In particular, the contribution of a sophisticated crossover operator coupled with an enhanced initialization heuristic is evaluated against a series of metrics measuring the effectiveness of MOEAs. Encouraging results emerge for both the convergence rate of the evolutionary optimization process and the occupation of valuable regions of the objective space by non-dominated solutions, facilitating the work of spatial planners and decision makers. Based on the promising behaviour of both heuristics, topics for further research are proposed to improve their effectiveness.

  13. A hybrid multi-objective evolutionary algorithm for wind-turbine blade optimization

    NASA Astrophysics Data System (ADS)

    Sessarego, M.; Dixon, K. R.; Rival, D. E.; Wood, D. H.

    2015-08-01

    A concurrent-hybrid non-dominated sorting genetic algorithm (hybrid NSGA-II) has been developed and applied to the simultaneous optimization of the annual energy production, flapwise root-bending moment and mass of the NREL 5 MW wind-turbine blade. By hybridizing a multi-objective evolutionary algorithm (MOEA) with gradient-based local search, it is expected that the optimal set of blade designs can be achieved at lower computational cost than with a conventional MOEA. To measure the convergence of the hybrid against the non-hybrid NSGA-II on a wind-turbine blade optimization problem, a computationally intensive case was run with the non-hybrid NSGA-II, yielding a three-dimensional surface representing the optimal trade-off between annual energy production, flapwise root-bending moment and blade mass. The inclusion of local gradients in the blade optimization, however, shows no improvement in convergence for this three-objective problem.

  14. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In distributed relational databases, the number of possible query plans increases exponentially with the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a bi-objective optimization problem, the two objectives being minimization of total LPC and of total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities. PMID:24963513

  15. A Self-adaptive Evolutionary Algorithm for Multi-objective Optimization

    NASA Astrophysics Data System (ADS)

    Cao, Ruifen; Li, Guoli; Wu, Yican

    Evolutionary algorithms have gained worldwide popularity in multi-objective optimization. This paper proposes a self-adaptive evolutionary algorithm (SEA) for multi-objective optimization in which the probabilities of crossover and mutation, Pc and Pm, are varied depending on the fitness values of the solutions. The fitness assignment of SEA realizes the twin goals of maintaining diversity in the population and guiding the population towards the true Pareto front: the fitness value of an individual depends not only on an improved density estimation but also on its non-dominated rank. The density estimation maintains diversity in all instances, including when the scales of the objectives differ greatly. SEA is compared against the non-dominated sorting genetic algorithm II (NSGA-II) on a set of test problems introduced by the MOEA community. Simulation results show that SEA is as effective as NSGA-II on most test functions, but when the objective scales differ greatly, SEA achieves a better distribution of non-dominated solutions.
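
    One common fitness-dependent adaptation scheme of this kind (after Srinivas and Patnaik; the paper's exact rule may differ) can be sketched as follows, for a maximization problem:

      # Fitness-dependent crossover/mutation probabilities: good solutions are
      # protected (low Pc, Pm), poor solutions are perturbed more strongly.
      def adaptive_pc(f_prime, f_max, f_avg, k1=1.0, k3=1.0):
          """f_prime: the larger fitness of the two parents."""
          if f_prime < f_avg:
              return k3                        # below-average parents: always recombine
          return k1 * (f_max - f_prime) / (f_max - f_avg + 1e-12)

      def adaptive_pm(f, f_max, f_avg, k2=0.5, k4=0.5):
          if f < f_avg:
              return k4                        # below-average solutions mutate more
          return k2 * (f_max - f) / (f_max - f_avg + 1e-12)

      print(adaptive_pc(0.9, 1.0, 0.6), adaptive_pm(0.95, 1.0, 0.6))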

  16. A master-slave parallel hybrid multi-objective evolutionary algorithm for groundwater remediation design under general hydrogeological conditions

    NASA Astrophysics Data System (ADS)

    Wu, J.; Yang, Y.; Luo, Q.; Wu, J.

    2012-12-01

    This study presents a new hybrid multi-objective evolutionary algorithm, the niched Pareto tabu search combined with a genetic algorithm (NPTSGA), in which the global search ability of the niched Pareto tabu search (NPTS) is improved by the diversification of candidate solutions arising from the evolving non-dominated sorting genetic algorithm II (NSGA-II) population. The NPTSGA is coupled with the commonly used groundwater flow and transport codes, MODFLOW and MT3DMS, for the multi-objective optimal design of groundwater remediation systems. The methodology is applied to a large-scale field remediation system for cleanup of a large trichloroethylene (TCE) plume at the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts. Furthermore, a master-slave (MS) parallelization scheme based on the Message Passing Interface (MPI) is incorporated into the NPTSGA to distribute objective-function evaluations across processors, which greatly improves the efficiency of finding Pareto-optimal solutions in this real-world application. This study shows that the MS parallel NPTSGA, in comparison with the original NPTS and NSGA-II, balances the trade-off between diversity and optimality of solutions during the search process and is an efficient and effective tool for optimizing the multi-objective design of groundwater remediation systems under complicated hydrogeologic conditions.

  17. Efficient ecologic and economic operational rules for dammed systems by means of nondominated sorting genetic algorithm II

    NASA Astrophysics Data System (ADS)

    Niayifar, A.; Perona, P.

    2015-12-01

    River impoundment by dams is known to strongly affect the natural flow regime and, in turn, river attributes and the related ecosystem biodiversity. Making hydropower sustainable implies seeking innovative operational policies able to generate dynamic environmental flows while maintaining economic efficiency. For dammed systems, we build the ecologic and economic efficiency plot for non-proportional flow-redistribution operational rules and compare them with minimal-flow rules. As for the case of small hydropower plants (e.g., see the companion paper by Gorla et al., this session), we use a four-parameter Fermi-Dirac statistical distribution to mathematically formulate the non-proportional redistribution rules, which allocate a fraction of water to the riverine environment depending on current reservoir inflows and storage. The riverine ecological benefits associated with dynamic environmental flows are computed by integrating the Weighted Usable Area (WUA) for fishes with Richter's hydrological indicators. We then apply the nondominated sorting genetic algorithm II (NSGA-II) to an ensemble of non-proportional and minimal-flow redistribution rules to generate the Pareto frontier showing system performance in the ecologic-economic space. This fast and elitist multi-objective optimization method is finally applied to a case study. It is found that non-proportional dynamic flow releases ensure maximal power production while reconciling ecological sustainability. Much of the improvement in the environmental indicator arises from better use of the reservoir storage dynamics, which allows the system to capture and attenuate flood events while recovering part of them for energy production. In conclusion, adopting such operational policies would unravel a spectrum of globally efficient performances of the dammed system compared with policies based on constant minimum-flow releases.
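
    A four-parameter Fermi-Dirac-shaped release rule can be sketched as below; the parameter values, the normalized state variable and the sigmoid orientation are illustrative, not the paper's calibrated rule:

      # Fermi-Dirac-shaped redistribution rule: the fraction of inflow released
      # to the river varies smoothly with the reservoir state.
      import math

      def release_fraction(s, f_min=0.1, f_max=0.9, s_half=0.5, width=0.08):
          """s: normalized storage-plus-inflow state in [0, 1]."""
          return f_min + (f_max - f_min) / (1.0 + math.exp(-(s - s_half) / width))

      inflow = 20.0  # m^3/s
      for s in (0.2, 0.5, 0.8):
          print(s, inflow * release_fraction(s))  # environmental release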

  18. A non-dominated sorting genetic algorithm for a bi-objective pick-up and delivery problem

    NASA Astrophysics Data System (ADS)

    Velasco, N.; Dejax, P.; Guéret, C.; Prins, C.

    2012-03-01

    Some companies must transport their personnel between facilities. This is especially the case for oil companies that use helicopters to move engineers, technicians and assistant personnel from platform to platform. This operation can become expensive and provide poor quality of service if the transportation routes are not correctly planned. Here the issue is modelled as a pick-up and delivery problem in which a set of transportation requests must be scheduled into routes, minimizing the total transportation cost while the most urgent requests are satisfied first. To solve the problem, a method based on the non-dominated sorting genetic algorithm (NSGA-II) is proposed. The algorithm is tested on both randomly generated instances and real instances provided by a petroleum company. The results show that the proposed algorithm improves the best-known solutions.

  19. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    NASA Astrophysics Data System (ADS)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems, which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization while maximizing ISR coverage. EAs are uniquely suited to generating solutions in dynamic environments and also allow user feedback; they are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high-performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm, and EAs in general, to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology

  20. A new algorithm for the robust optimization of rotor-bearing systems

    NASA Astrophysics Data System (ADS)

    Lopez, R. H.; Ritto, T. G.; Sampaio, Rubens; Souza de Cursi, J. E.

    2014-08-01

    This article presents a new algorithm for the robust optimization of rotor-bearing systems. The goal of the optimization problem is to find the values of a set of parameters for which the natural frequencies of the system are as far away as possible from the rotational speeds of the machine. To accomplish this, the penalization proposed by Ritto, Lopez, Sampaio, and Souza de Cursi in 2011 is employed. Since the rotor-bearing system is subject to uncertainties, this penalization is modelled as a random variable. The robust optimization is performed by minimizing the expected value and variance of the penalization, resulting in a multi-objective optimization problem (MOP). The objective function of this MOP is known to be non-convex, and it is shown that its resulting Pareto front (PF) is also non-convex. Thus, a new algorithm is proposed for solving the MOP: the normal boundary intersection (NBI) is employed to discretize the PF, handling its non-convexity, and a global optimization algorithm based on a restart procedure and local searches is employed to minimize the NBI subproblems, tackling the non-convexity of the objective function. A numerical analysis section shows the advantage of the proposed algorithm over the weighted-sum (WS) and NSGA-II approaches. In comparison with the WS, the proposed approach obtains a much more even and useful set of Pareto points; compared with NSGA-II, it provides a better approximation of the PF at much lower computational cost.

  21. Multicomponent, multi-azimuth pre-stack seismic waveform inversion for azimuthally anisotropic media using a parallel and computationally efficient non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Li, Tao; Mallick, Subhashis

    2015-02-01

    Consideration of azimuthal anisotropy, at least to an orthorhombic symmetry, is important in exploring naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single-component (P wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. These multicomponent and multi-azimuthal seismic inversions are non-linear with non-unique solutions; it is therefore appropriate to treat the objectives as a vector and simultaneously optimize each of its components so that the optimal set of solutions can be obtained. The fast non-dominated sorting genetic algorithm (NSGA-II) is a robust stochastic global search method capable of handling multiple objectives, but its computational expense increases with the number of objectives and the number of model parameters to be inverted for. In addition, accurate extraction of subsurface azimuthal anisotropy requires multicomponent seismic data acquired at a fine spatial resolution along many source-to-receiver azimuths. Because routine acquisition of such data is prohibitively expensive, they are typically available along two or at most three azimuthal orientations at a spatial resolution where such an inversion could be applied. This paper proposes a novel multi-objective methodology using a parallelized version of NSGA-II for waveform inversion of multicomponent seismic displacement data along two azimuths. By scaling the objectives prior to ranking, redefining the crowding distance as functions of the scaled objective and the model spaces, and varying

  22. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequency. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is developed based on the bond graph method to predict the performance of the mount accurately. An optimization model is established to minimize the total force transmissibility over several frequency ranges of interest. In this model, the lumped parameters are the design variables, while the maximum force transmissibility and its corresponding frequency in the low-frequency range, as well as the individual lumped parameters, are constrained. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem, and the program flowchart for the improved NSGA-II is given. A synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated, from which a set of real design parameters is obtained via the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges of interest.

  23. Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

    PubMed Central

    Lagos, Carolina; Crawford, Broderick; Cabrera, Enrique; Rubio, José-Miguel; Paredes, Fernando

    2014-01-01

    Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single-objective and, to a lesser extent, multiobjective optimisation problems. To solve these problems, CAs make use of different knowledge sources such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that use different evolutionary strategies: the first implements historical knowledge, the second circumstantial knowledge, and the third normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP, and none has focused on the impact of the evolutionary strategy on algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using the hypervolume S metric. PMID:25254257
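
    For two minimization objectives, the hypervolume S metric used in this comparison reduces to summing rectangles between consecutive non-dominated points and a reference point. A minimal sketch:

      # 2-D hypervolume for minimization: assumes `front` is non-dominated and
      # every point is dominated by the reference point.
      def hypervolume_2d(front, ref):
          pts = sorted(front)                 # ascending in the first objective
          hv, prev_f2 = 0.0, ref[1]
          for f1, f2 in pts:
              hv += (ref[0] - f1) * (prev_f2 - f2)  # one horizontal strip
              prev_f2 = f2
          return hv

      front = [(1.0, 4.0), (2.0, 2.5), (3.0, 1.0)]   # non-dominated set
      print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 11.5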

  24. Multi-objective Job Shop Rescheduling with Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Hao, Xinchang; Gen, Mitsuo

    In current manufacturing systems, production processes and management face many unexpected events and constantly emerging new requirements. This dynamic environment means that operation rescheduling is usually indispensable, and a wide variety of procedures and heuristics has been developed to improve the quality of rescheduling. However, most proposed approaches rest on simplified assumptions and can therefore be inconsistent with the actual requirements of a real production environment, i.e., they are often unsuitable and too inflexible to respond efficiently to frequent changes. In this paper, a multi-objective job shop rescheduling problem (moJSRP) is formulated to improve the practical applicability of rescheduling. To solve the moJSRP model, an evolutionary algorithm is designed in which a random-key-based representation and an interactive adaptive-weight (i-awEA) fitness assignment are embedded. To verify its effectiveness, the proposed algorithm has been compared with other approaches and benchmarks on the robustness of moJSRP optimization. The comparison shows that iAWGA-A is better than the weighted-fitness method in terms of effectiveness and stability, and similarly outperforms other well-known approaches such as the non-dominated sorting genetic algorithm II (NSGA-II) and the strength Pareto evolutionary algorithm 2 (SPEA2).
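
    The random-key representation mentioned above decodes a real-valued chromosome into a permutation by sorting the keys, so any crossover result remains a feasible ordering. A short sketch with illustrative keys:

      # Random-key decoding: the argsort of the keys gives the dispatch order.
      keys = [0.71, 0.12, 0.55, 0.93, 0.30]            # one chromosome
      order = sorted(range(len(keys)), key=keys.__getitem__)
      print(order)                                      # [1, 4, 2, 0, 3]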

  25. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length, taken as the length of the minimal spanning tree that connects all turbines and calculated using Prim's algorithm. Constraints on the wind farm boundary and wind turbine proximity are also considered. An idealized test case shows that the proposed algorithm largely outperforms the well-known multi-objective genetic algorithm NSGA-II. In a real test case based on the Horns Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto-optimal layouts with different numbers of turbines for a real-life wind farm developer.
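
    The cable-length objective is the weight of a minimum spanning tree over the turbine positions, computed with Prim's algorithm. A minimal sketch with illustrative coordinates:

      # Prim's algorithm: grow the MST from turbine 0, always adding the
      # cheapest edge to a turbine not yet connected.
      import math

      def cable_length(turbines):
          n = len(turbines)
          dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
          in_tree = {0}
          best = [dist(turbines[0], t) for t in turbines]  # cheapest link to tree
          total = 0.0
          for _ in range(n - 1):
              j = min((i for i in range(n) if i not in in_tree),
                      key=best.__getitem__)
              total += best[j]
              in_tree.add(j)
              for i in range(n):
                  if i not in in_tree:
                      best[i] = min(best[i], dist(turbines[j], turbines[i]))
          return total

      print(cable_length([(0, 0), (500, 0), (500, 400), (0, 400)]))  # 1300.0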

  26. SAGE II inversion algorithm. [Stratospheric Aerosol and Gas Experiment

    NASA Technical Reports Server (NTRS)

    Chu, W. P.; Mccormick, M. P.; Lenoble, J.; Brogniez, C.; Pruvost, P.

    1989-01-01

    The operational Stratospheric Aerosol and Gas Experiment II multichannel data inversion algorithm is described. Aerosol and ozone retrievals obtained with the algorithm are discussed. The algorithm is compared to an independently developed algorithm (Lenoble, 1989), showing that the inverted aerosol and ozone profiles from the two algorithms are similar within their respective uncertainties.

  27. Investigation on Multiple Algorithms for Multi-Objective Optimization of Gear Box

    NASA Astrophysics Data System (ADS)

    Ananthapadmanabhan, R.; Babu, S. Arun; Hareendranath, KR; Krishnamohan, C.; Krishnapillai, S.; A, Krishnan

    2016-09-01

    Gear design is an extremely important area in engineering, and in this work a spur gear reduction unit is considered. A review of the relevant literature indicates that compact gearbox design involves complicated engineering analysis. This work deals with the simultaneous optimization of the power and dimensions of a gearbox, objectives which are of a conflicting nature. The focus is on developing a design space based on module, pinion teeth and face-width using MATLAB. Feasible points are obtained through different multi-objective algorithms under various constraints drawn from the literature, with attention devoted to novel constraints such as the critical scoring criterion number, flash temperature, minimum film thickness, involute interference and contact ratio. The outputs of algorithms such as a genetic algorithm, fmincon (constrained nonlinear minimization) and NSGA-II are compared to generate the best result. This yields a much more precise approach for obtaining practical values of the module, pinion teeth and face-width that minimize centre distance and maximize transmitted power for any given material.

  28. Efficiency of Evolutionary Algorithms for Calibration of Watershed Models

    NASA Astrophysics Data System (ADS)

    Ahmadi, M.; Arabi, M.

    2009-12-01

    Since the promulgation of the Clean Water Act in the U.S. and similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision-making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important; for example, improving streamflow predictions at one stream location may degrade model predictions for sediments and/or nutrients at the same location or at other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (NSGA-II) were reconciled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination and Nash-Sutcliffe efficiency for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
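
    Of the error statistics named above, the Nash-Sutcliffe efficiency is defined as NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))². A minimal sketch with illustrative data:

      # Nash-Sutcliffe efficiency: 1.0 is a perfect fit; values <= 0 mean the
      # simulation is no better than the mean of the observations.
      import numpy as np

      def nse(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      print(nse([3.1, 4.7, 2.8, 5.9], [3.0, 4.5, 3.1, 5.5]))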

  29. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    PubMed

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential to have artificial retina systems able to function as similarly as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted to guide the algorithm in the search for the parameters that best approximate a synthetic retinal model output to real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses. PMID:27354187

  30. DNA strand generation for DNA computing by using a multi-objective differential evolution algorithm.

    PubMed

    Chaves-González, José M; Vega-Rodríguez, Miguel A

    2014-02-01

    In this paper, we use an adapted multi-objective version of the differential evolution (DE) metaheuristic for the design and generation of reliable DNA libraries that can be used for computation. DNA sequence design is a very relevant task in many recent research fields, e.g. nanotechnology and DNA computing. Specifically, DNA computing is a computational model that uses DNA molecules as information storage and their possible biological interactions as processing operators; the possible reactions and interactions among molecules must therefore be strictly controlled to prevent incorrect computations. The design of reliable DNA libraries for bio-molecular computing is an NP-hard combinatorial problem involving many heterogeneous and conflicting design criteria. For this reason, we modelled DNA sequence design as a multiobjective optimization problem and solved it with the adapted multi-objective DE metaheuristic, simultaneously considering seven different bio-chemical design criteria to obtain high-quality DNA sequences suitable for molecular computing. Furthermore, we implemented the standard fast non-dominated sorting genetic algorithm (NSGA-II) in order to perform a formal comparative study using multi-objective indicators, and we also compared our results with other relevant results published in the literature. We conclude that our proposal is a promising approach able to generate reliable real-world DNA sequences that significantly improve on DNA libraries previously published in the literature.

  31. Multi-objective optimization of a low specific speed centrifugal pump using an evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    An, Zhao; Zhounian, Lai; Peng, Wu; Linlin, Cao; Dazhuan, Wu

    2016-07-01

    This paper describes the shape optimization of a low specific speed centrifugal pump at the design point. The target pump has already been manually modified on the basis of empirical knowledge. A genetic algorithm (NSGA-II) with certain enhancements is adopted to improve its performance further with respect to two goals. To limit the number of design variables without losing geometric information, the impeller is parametrized using a Bézier curve and a B-spline. Numerical simulations based on a Reynolds-averaged Navier-Stokes (RANS) turbulence model are run in parallel to evaluate the flow field. A back-propagating neural network is constructed as a surrogate for performance prediction to save computing time, with initial samples selected according to an orthogonal array. Global Pareto-optimal solutions are then obtained and analysed. The results show that undesirable flow structures, such as secondary flow on the meridional plane, are diminished or eliminated in the optimized pump.
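
    The Bézier parametrization used for the impeller can be evaluated with de Casteljau's recursion; the control points below are illustrative, not the pump geometry:

      # De Casteljau evaluation of a Bezier curve defined by its control points.
      def bezier(control_points, t):
          pts = list(control_points)
          while len(pts) > 1:   # repeatedly interpolate between neighbors
              pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                     for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
          return pts[0]

      blade_camber = [(0.0, 0.0), (0.3, 0.15), (0.7, 0.2), (1.0, 0.1)]
      print([bezier(blade_camber, t / 4) for t in range(5)])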

  32. On the use of multi-algorithm, genetically adaptive multi-objective method for multi-site calibration of the SWAT model

    SciTech Connect

    Zhang, Xuesong; Srinivasan, Raghavan; Van Liew, M.

    2010-04-15

    With the availability of spatially distributed data, distributed hydrologic models are increasingly used to simulate spatially varied hydrologic processes in order to understand and manage the natural and human activities that affect watershed systems. Multi-objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models increases substantially, selecting efficient and effective multi-objective optimization algorithms becomes a nontrivial issue. In this study, we evaluated a multi-algorithm, genetically adaptive multi-objective method (AMALGAM) for multi-site calibration of a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi-objective optimization (EMO) algorithms, the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Non-dominated Sorted Genetic Algorithm II (NSGA-II). To provide insight into each method's overall performance, the three methods were tested on four watersheds with various characteristics. The test results indicate that AMALGAM consistently provides competitive or superior results compared with the other two methods. The multi-method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi-site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to run the method in multiple trials with a relatively small number of model runs rather than once with long iterations. In addition, incorporating different multiobjective optimization algorithms and multi-mode search operators into AMALGAM deserves further research.

  33. SAGE Version 7.0 Algorithm: Application to SAGE II

    NASA Technical Reports Server (NTRS)

    Damadeo, R. P; Zawodny, J. M.; Thomason, L. W.; Iyer, N.

    2013-01-01

    This paper details the Stratospheric Aerosol and Gas Experiment (SAGE) version 7.0 algorithm and how it is applied to SAGE II. Changes made between the previous (v6.2) and current (v7.0) versions are described, and their impacts on the data products are explained for both coincident event comparisons and time-series analysis. Users of the data will notice a general improvement in all of the SAGE II data products, which are now in better agreement with more modern data sets (e.g. SAGE III) and more robust for use in trend studies.

  34. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  35. Evaluation of RADAP II Severe-Storm-Detection Algorithms.

    NASA Astrophysics Data System (ADS)

    Winston, Herb A.; Ruthi, Larry J.

    1986-02-01

    Computer-generated volumetric radar algorithms have been available at a few operational National Weather Service sites since the mid-1970s under the Digitized Radar Experiment (D/RADEX) and Radar Data Processor (RADAP II) programs. The algorithms were first used extensively for severe-storm warnings at the Oklahoma City National Weather Service Forecast Office (WSFO OKC) in 1983. RADAP II performance in operational severe-weather forecasting was evaluated using objectively derived warnings based on computer-generated output. Statistical scores of probability of detection, false-alarm rate, and critical success index for the objective warnings were found to be significantly higher than the average statistical scores reported for National Weather Service warnings. Even higher scores were achieved by experienced forecasters using RADAP II in addition to conventional data during the 1983 severe-storm season at WSFO OKC. This investigation lends further support to the suggestion that incorporating improved reflectivity-based algorithms along with Doppler data into the future Advanced Weather Interactive Processing System for the 1990s (AWIPS-90) or the Next Generation Weather Radar (NEXRAD) system should greatly enhance severe-storm-detection capabilities.

  36. Nios II hardware acceleration of the epsilon quadratic sieve algorithm

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Uwe; Botella, Guillermo; Castillo, Encarnacion; García, Antonio

    2010-04-01

    The quadratic sieve (QS) algorithm is one of the most powerful algorithms for factoring the large composite numbers used in RSA cryptographic systems, and breaking such systems. The hardware structure of the QS algorithm seems to be a good fit for FPGA acceleration. Our new ɛ-QS algorithm further simplifies the hardware architecture, making it an even better candidate for C2H acceleration. This paper presents our design results, in FPGA resources and performance, for implementing very long arithmetic on the Nios microprocessor platform with C2H acceleration for different libraries (GMP, LIP, FLINT, NRMP) and QS architecture choices for factoring 32-2048 bit RSA numbers.

  37. Tracking at CDF: algorithms and experience from Run I and Run II

    SciTech Connect

    Snider, F.D.; /Fermilab

    2005-10-01

    The authors describe the tracking algorithms used during Run I and Run II by CDF at the Fermilab Tevatron Collider, covering the time from about 1992 through the present, and discuss the performance of the algorithms at high luminosity. By tracing the evolution of the detectors and algorithms, they reveal some of the successful strategies used by CDF to address the problems of tracking at high luminosities.

  38. A TCAS-II Resolution Advisory Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Narkawicz, Anthony; Chamberlain, James

    2013-01-01

    The Traffic Alert and Collision Avoidance System (TCAS) is a family of airborne systems designed to reduce the risk of mid-air collisions between aircraft. TCAS II, the current generation of TCAS devices, provides resolution advisories that direct pilots to maintain or increase vertical separation when aircraft distance and time parameters are beyond designed system thresholds. This paper presents a mathematical model of the TCAS II Resolution Advisory (RA) logic that assumes accurate aircraft state information. Based on this model, an algorithm for RA detection is also presented. This algorithm is analogous to a conflict detection algorithm, but instead of predicting loss of separation, it predicts resolution advisories. It has been formally verified that, for a kinematic model of aircraft trajectories, this algorithm completely and correctly characterizes all encounter geometries between two aircraft that lead to a resolution advisory within a given lookahead time interval. The RA detection algorithm proposed in this paper is a fundamental component of a NASA sense-and-avoid concept for the integration of Unmanned Aircraft Systems in civil airspace.

  39. Iterative projection algorithms in protein crystallography. II. Application.

    PubMed

    Lo, Victor L; Kingston, Richard L; Millane, Rick P

    2015-07-01

    Iterative projection algorithms (IPAs) are a promising tool for protein crystallographic phase determination. Although related to traditional density-modification algorithms, IPAs have better convergence properties, and, as a result, can effectively overcome the phase problem given modest levels of structural redundancy. This is illustrated by applying IPAs to determine the electron densities of two protein crystals with fourfold non-crystallographic symmetry, starting with only the experimental diffraction amplitudes, a low-resolution molecular envelope and the position of the non-crystallographic axes. The algorithm returns electron densities that are sufficiently accurate for model building, allowing automated recovery of the known structures. This study indicates that IPAs should find routine application in protein crystallography, being capable of reconstructing electron densities starting with very little initial phase information. PMID:26131900
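
    The flavor of an iterative projection cycle can be sketched as below: alternately enforce the measured Fourier amplitudes and a real-space constraint. This toy 1-D example uses a simple support mask and positivity as stand-ins for the molecular envelope and non-crystallographic symmetry constraints, and the basic error-reduction update rather than the more sophisticated updates real IPAs use:

      # Toy 1-D projection cycle: project onto the Fourier-amplitude set, then
      # onto the support/positivity set, and repeat.
      import numpy as np

      rng = np.random.default_rng(3)
      true = np.zeros(64); true[20:30] = rng.uniform(1, 2, 10)   # toy "density"
      amplitudes = np.abs(np.fft.fft(true))                      # measured data
      support = np.zeros(64, bool); support[18:32] = True        # known envelope

      rho = rng.uniform(0, 1, 64)                                # random start
      for _ in range(500):
          F = np.fft.fft(rho)
          F = amplitudes * np.exp(1j * np.angle(F))   # keep phases, impose |F|
          rho = np.real(np.fft.ifft(F))
          rho[~support] = 0.0                         # impose the support
          rho[rho < 0] = 0.0                          # and positivity

      # Approximate reconstruction (up to the usual shift/flip ambiguity).
      print(np.round(rho[18:32], 2))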

  3. The dynamic Allan variance II: a fast computational algorithm.

    PubMed

    Galleani, Lorenzo

    2010-01-01

    The stability of an atomic clock can change with time due to several factors, such as temperature, humidity, radiation, aging, and sudden breakdowns. The dynamic Allan variance, or DAVAR, is a representation of the time-varying stability of an atomic clock, and it can be used to monitor the clock's behavior. Unfortunately, the computational time of the DAVAR grows very quickly with the length of the analyzed time series. In this article, we present a fast algorithm for the computation of the DAVAR, and we also extend it to the case of missing data. Numerical simulations show that the fast algorithm dramatically reduces the computational time. The fast algorithm is useful when the analyzed time series is long, when many clocks must be monitored, or when the computational power is low, as happens onboard satellites and space probes.
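
    A minimal sketch of the DAVAR concept, in the direct O(N^2) form rather than the paper's fast algorithm (which instead updates the inner sums recursively as the window slides):

    import numpy as np

    def allan_variance(y, m):
        """Overlapping Allan variance of fractional-frequency data y at factor m."""
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # m-sample running means
        d = ybar[m:] - ybar[:-m]                              # adjacent-average differences
        return 0.5 * np.mean(d ** 2)

    def davar(y, window, m, step=1):
        """Allan variance at factor m over sliding windows of `window` samples."""
        return np.array([allan_variance(y[k:k + window], m)
                         for k in range(0, len(y) - window + 1, step)])

    # White-FM noise: the DAVAR should be roughly constant in time.
    rng = np.random.default_rng(1)
    print(davar(rng.normal(size=2000), window=500, m=10)[:5])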

  4. A survey of fuzzy clustering algorithms for pattern recognition. II.

    PubMed

    Baraldi, A; Blonda, P

    1999-01-01

    For pt. I see ibid., p. 775-85. In part I, an equivalence between the concepts of fuzzy clustering and soft competitive learning in clustering algorithms is proposed on the basis of the existing literature. Moreover, a set of functional attributes is selected for use as dictionary entries in the comparison of clustering algorithms. In this paper, five clustering algorithms taken from the literature are reviewed, assessed and compared on the basis of the selected properties of interest. These clustering models are (1) self-organizing map (SOM); (2) fuzzy learning vector quantization (FLVQ); (3) fuzzy adaptive resonance theory (fuzzy ART); (4) growing neural gas (GNG); (5) fully self-organizing simplified adaptive resonance theory (FOSART). Although our theoretical comparison is fairly simple, it yields observations that may appear paradoxical. First, only FLVQ, fuzzy ART, and FOSART exploit concepts derived from fuzzy set theory (e.g., relative and/or absolute fuzzy membership functions). Second, only SOM, FLVQ, GNG, and FOSART employ soft competitive learning mechanisms, which are affected by asymptotic misbehaviors in the case of FLVQ; i.e., only SOM, GNG, and FOSART are considered effective fuzzy clustering algorithms. PMID:18252358

  5. Algorithms and sensitivity analyses for stratospheric aerosol and gas experiment II water vapor retrieval

    SciTech Connect

    Chu, W.P.; Thomason, L.W.; Buglia, J.J.; McCormick, M.P.; McMaster, L.M. ); Chiou, E.W.; Larsen, J.C. ); Rind, D. ); Oltmans, S. )

    1993-03-20

    This paper provides a detailed description of the current operational inversion algorithm for the retrieval of water vapor vertical profiles from the Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation data at the 0.94-μm wavelength channel. This algorithm differs from the algorithm used for the retrieval of the other species, such as aerosol, ozone, and nitrogen dioxide, because of the nonlinear relationship between concentration and the broadband absorption characteristics of water vapor. Included in the discussion of the retrieval algorithm are problems related to the accuracy of the computational scheme, the accuracy of the removal of other interfering species, and the expected uncertainty of the retrieved profile. A comparative analysis of the computational schemes used for the calculation of the water vapor transmission in the 0.94-μm wavelength region is presented. Analyses are also presented on the sensitivity of the retrievals to interference from the other species which contribute to the total signature observed at the 0.94-μm wavelength channel of the SAGE II instrument. Error analyses of the SAGE II water vapor retrieval are shown, indicating that good quality water vapor data are being produced by the SAGE II measurements. 27 refs., 10 figs., 1 tab.

  6. Measurement of the inclusive jet cross section using the midpoint algorithm in Run II at CDF

    SciTech Connect

    Group, Robert Craig

    2006-01-01

    A measurement is presented of the inclusive jet cross section using the Midpoint jet clustering algorithm in five different rapidity regions. This is the first analysis to measure the inclusive jet cross section using the Midpoint algorithm in the forward region of the detector. The measurement is based on more than 1 fb⁻¹ of integrated luminosity of Run II data taken by the CDF experiment at the Fermi National Accelerator Laboratory. The results are consistent with the predictions of perturbative quantum chromodynamics.

  7. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II); a minimal sketch of its selection machinery follows this paragraph. This approach helps in reducing the drudgery involved in the manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (in which both parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (in which hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
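
    A rough illustration of the NSGA-II machinery invoked above (not the authors' implementation): the two components that drive NSGA-II selection are fast non-dominated sorting and the crowding distance.

    import numpy as np

    def dominates(a, b):
        """Pareto dominance for minimization."""
        return np.all(a <= b) and np.any(a < b)

    def non_dominated_fronts(F):
        """F: (n, m) objective matrix. Returns index lists, best front first."""
        n = len(F)
        dominated_by_me = [[] for _ in range(n)]
        count = np.zeros(n, dtype=int)           # how many solutions dominate i
        fronts = [[]]
        for i in range(n):
            for j in range(n):
                if dominates(F[i], F[j]):
                    dominated_by_me[i].append(j)
                elif dominates(F[j], F[i]):
                    count[i] += 1
            if count[i] == 0:
                fronts[0].append(i)
        k = 0
        while fronts[k]:
            nxt = []
            for i in fronts[k]:
                for j in dominated_by_me[i]:
                    count[j] -= 1
                    if count[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
            k += 1
        return fronts[:-1]

    def crowding_distance(F, front):
        """Spread estimate used to break ties within a front."""
        d = np.zeros(len(front))
        for m in range(F.shape[1]):
            vals = F[front, m]
            order = np.argsort(vals)
            d[order[0]] = d[order[-1]] = np.inf  # always keep the extremes
            span = vals[order[-1]] - vals[order[0]] or 1.0
            for r in range(1, len(front) - 1):
                d[order[r]] += (vals[order[r + 1]] - vals[order[r - 1]]) / span
        return d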

  8. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS), flying on the NOAA-15 to NOAA-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper, together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capability of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter seasonal conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
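
    The dichotomous verification used here reduces to a 2x2 contingency table; a minimal sketch of the POD, FAR and Hanssen-Kuipers (HK) computation:

    import numpy as np

    def categorical_scores(retrieved_rain, radar_rain):
        """Boolean arrays: True where rain is detected (satellite) / observed (radar)."""
        hits = np.sum(retrieved_rain & radar_rain)
        false_alarms = np.sum(retrieved_rain & ~radar_rain)
        misses = np.sum(~retrieved_rain & radar_rain)
        correct_negatives = np.sum(~retrieved_rain & ~radar_rain)
        pod = hits / (hits + misses)                    # probability of detection
        far = false_alarms / (hits + false_alarms)      # false alarm ratio
        pofd = false_alarms / (false_alarms + correct_negatives)
        hk = pod - pofd        # Hanssen-Kuipers score (true skill statistic)
        return pod, far, hk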

  9. Algorithms for deriving crystallographic space-group information. II: Treatment of special positions

    SciTech Connect

    Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2001-10-05

    Algorithms for the treatment of special positions in 3-dimensional crystallographic space groups are presented. These include an algorithm for the determination of the site-symmetry group given the coordinates of a point, an algorithm for the determination of the exact location of the nearest special position, an algorithm for the assignment of a Wyckoff letter given the site-symmetry group, and an alternative algorithm for the assignment of a Wyckoff letter given the coordinates of a point directly. All algorithms are implemented in ISO C++ and are integrated into the Computational Crystallography Toolbox. The source code is freely available.
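
    A hedged sketch (not the Computational Crystallography Toolbox code) of the first algorithm described: collect the symmetry operations (R, t) that map a fractional coordinate onto itself modulo lattice translations.

    import numpy as np

    def site_symmetry(x, ops, tol=1e-4):
        """ops: list of (R, t) in fractional coordinates. Returns the ops fixing x."""
        x = np.asarray(x, dtype=float)
        fixing = []
        for R, t in ops:
            delta = np.asarray(R) @ x + np.asarray(t) - x
            delta -= np.round(delta)             # reduce modulo lattice translations
            if np.all(np.abs(delta) < tol):
                fixing.append((R, t))
        return fixing

    # Space group P-1: identity plus inversion. The origin is a special position.
    ops = [(np.eye(3, dtype=int), np.zeros(3)), (-np.eye(3, dtype=int), np.zeros(3))]
    print(len(site_symmetry([0.0, 0.0, 0.0], ops)))         # 2: special position
    print(len(site_symmetry([0.123, 0.456, 0.789], ops)))   # 1: general position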

  10. Convergence rates of finite difference stochastic approximation algorithms part II: implementation via common random numbers

    NASA Astrophysics Data System (ADS)

    Dai, Liyi

    2016-05-01

    Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function that is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm, when the gradient is approximated using finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, it is shown that the rate can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems, in the iteration number n.
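
    An illustrative sketch of the common-random-numbers device in a Kiefer-Wolfowitz iteration: the two finite-difference evaluations at each step reuse the same random seed, so much of the simulation noise cancels in the gradient estimate. The gain sequences and test function are arbitrary choices.

    import numpy as np

    def kiefer_wolfowitz_crn(f, x0, n_iter=5000, a=1.0, c=1.0, seed=0):
        """f(x, rng) returns a noisy sample of the objective at x."""
        x = np.asarray(x0, dtype=float)
        ss = np.random.SeedSequence(seed)
        for n in range(1, n_iter + 1):
            a_n, c_n = a / n, c / n ** 0.25          # standard KW gain sequences
            grad = np.zeros_like(x)
            for i in range(len(x)):
                e = np.zeros_like(x)
                e[i] = c_n
                child = ss.spawn(1)[0]               # one seed shared by both sides
                f_plus = f(x + e, np.random.default_rng(child))
                f_minus = f(x - e, np.random.default_rng(child))   # common randomness
                grad[i] = (f_plus - f_minus) / (2.0 * c_n)
            x = x - a_n * grad
        return x

    # Additive noise is common to both evaluations here, so it cancels exactly.
    f = lambda x, rng: np.sum((x - 3.0) ** 2) + rng.normal(scale=0.5)
    print(kiefer_wolfowitz_crn(f, np.zeros(2)))      # converges near [3, 3]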

  11. Experimental validation of improved 3D SBP positioning algorithm in PET applications using UW Phase II Board

    NASA Astrophysics Data System (ADS)

    Jorge, L. S.; Bonifacio, D. A. B.; DeWitt, Don; Miyaoka, R. S.

    2016-12-01

    Continuous scintillator-based detectors have been considered as a competitive and cheaper approach than highly pixelated discrete crystal positron emission tomography (PET) detectors, despite the need for algorithms to estimate the 3D gamma interaction position. In this work, we report on the implementation of a positioning algorithm to estimate the 3D interaction position in a continuous crystal PET detector using a Field Programmable Gate Array (FPGA). The evaluated method is the Statistics-Based Processing (SBP) technique, which requires light-response-function and event-position characterization. An algorithm has been implemented in the Verilog language and evaluated using a data acquisition board that contains an Altera Stratix III FPGA. The 3D SBP algorithm had previously been successfully implemented on a Stratix II FPGA using simulated data and a different module design. In this work, improvements were made to the FPGA coding of the 3D positioning algorithm, reducing the total memory usage to around 34%. Further, the algorithm was evaluated using experimental data from a continuous miniature crystal element (cMiCE) detector module. Using our new implementation, the average FWHM (Full Width at Half Maximum) for the whole block is 1.71 ± 0.01 mm, 1.70 ± 0.01 mm and 1.632 ± 0.005 mm for the x, y and z directions, respectively. Using a pipelined architecture, the FPGA is able to process 245,000 events per second for interactions inside the central area of the detector, which represents 64% of the total block area. The weighted average of the event rate by regional area (corner, border and central regions) is about 198,000 events per second. This event rate is greater than the maximum expected coincidence rate for any given detector module in future PET systems using the cMiCE detector design.
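
    A hedged NumPy sketch of the SBP estimator itself (the FPGA implementation evaluates it with fixed-point tables and a pipelined search rather than the brute force shown here): given precharacterized per-sensor mean responses and variances over a grid of candidate positions, pick the position maximizing the Gaussian log-likelihood of the observed signals.

    import numpy as np

    def sbp_estimate(signals, mu, var):
        """signals: (n_sensors,); mu, var: (n_sensors, n_positions) calibration tables."""
        # Gaussian log-likelihood of each candidate position, up to a constant.
        ll = -0.5 * np.sum((signals[:, None] - mu) ** 2 / var + np.log(var), axis=0)
        return int(np.argmax(ll))   # index of the best candidate interaction position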

  12. Modeling heterogeneous materials via two-point correlation functions. II. Algorithmic details and applications.

    PubMed

    Jiao, Y; Stillinger, F H; Torquato, S

    2008-03-01

    In the first part of this series of two papers, we proposed a theoretical formalism that enables one to model and categorize heterogeneous materials (media) via two-point correlation functions S(2) and introduced an efficient heterogeneous-medium (re)construction algorithm called the "lattice-point" algorithm. Here we discuss the algorithmic details of the lattice-point procedure and an algorithm modification using surface optimization to further speed up the (re)construction process. The importance of the error tolerance, which indicates to what accuracy the media are (re)constructed, is also emphasized and discussed. We apply the algorithm to generate three-dimensional digitized realizations of a Fontainebleau sandstone and a boron-carbide/aluminum composite from the two-dimensional tomographic images of their slices through the materials. To ascertain whether the information contained in S(2) is sufficient to capture the salient structural features, we compute the two-point cluster functions of the media, which are superior signatures of the microstructure because they incorporate topological connectedness information. We also study the reconstruction of a binary laser-speckle pattern in two dimensions, in which the algorithm fails to reproduce the pattern accurately. We conclude that in general reconstructions using S(2) only work well for heterogeneous materials with single-scale structures. However, two-point information via S(2) is not sufficient to accurately model multiscale random media. Moreover, we construct realizations of hypothetical materials with desired structural characteristics obtained by manipulating their two-point correlation functions.
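
    A minimal sketch of a Yeong-Torquato-style (re)construction under stated simplifications: S2 is recomputed in full by FFT after every trial pixel swap, whereas the lattice-point algorithm described above updates the correlation locally.

    import numpy as np

    def s2(img):
        """Two-point correlation of a periodic binary image via autocorrelation."""
        F = np.fft.fftn(img.astype(float))
        return np.fft.ifftn(F * np.conj(F)).real / img.size

    def reconstruct(target_s2, shape, fraction, n_steps=20000, t0=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        img = (rng.random(shape) < fraction).astype(int)
        energy = np.sum((s2(img) - target_s2) ** 2)
        for step in range(n_steps):
            temp = t0 * (1.0 - step / n_steps)         # linear cooling schedule
            i1 = tuple(rng.integers(0, s) for s in shape)
            i2 = tuple(rng.integers(0, s) for s in shape)
            if img[i1] == img[i2]:
                continue                               # a swap must exchange phases
            img[i1], img[i2] = img[i2], img[i1]
            new_energy = np.sum((s2(img) - target_s2) ** 2)
            accept = (new_energy < energy or
                      rng.random() < np.exp((energy - new_energy) / max(temp, 1e-12)))
            if accept:
                energy = new_energy
            else:
                img[i1], img[i2] = img[i2], img[i1]    # reject: undo the swap
        return img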

  13. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript (Frolov et al 2014 New J. Phys. 16 art. no.), we developed a novel optimization method for the placement, sizing, and operation of flexible alternating current transmission system (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS devices that provide series compensation (SC) via modification of line inductance. In this sequel manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically sized model of the Polish grid (~2700 nodes and ~3300 lines). The results from the 30-bus network are used to study the general properties of the solutions, including nonlocality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics, which leverage sequential linearization of the power flow constraints and cutting-plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm solves the Polish transmission grid case in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (i) uniform load growth, (ii) multiple overloaded configurations, and (iii) sequential generator retirements.

  14. Experimental analysis and mathematical prediction of Cd(II) removal by biosorption using support vector machines and genetic algorithms.

    PubMed

    Hlihor, Raluca Maria; Diaconu, Mariana; Leon, Florin; Curteanu, Silvia; Tavares, Teresa; Gavrilescu, Maria

    2015-05-25

    We investigated the bioremoval of Cd(II) in batch mode, using dead and living biomass of Trichoderma viride. Kinetic studies revealed three distinct stages of the biosorption process. The pseudo-second-order model and the Langmuir model described the kinetics and equilibrium of the biosorption process well, with a determination coefficient R² > 0.99. The value of the mean free energy of adsorption, E, is less than 16 kJ/mol at 25 °C, suggesting that, at low temperature, the dominant process involved in Cd(II) biosorption by dead T. viride is chemical ion exchange. With the temperature increasing to 40-50 °C, E values are above 16 kJ/mol, showing that the particle diffusion mechanism could play an important role in Cd(II) biosorption. The studies on T. viride growth in Cd(II) solutions and its bioaccumulation performance showed that the living biomass was able to bioaccumulate 100% of the Cd(II) from a 50 mg/L solution at pH 6.0. The influence of pH, biomass dosage, metal concentration, contact time and temperature on the bioremoval efficiency was evaluated to further assess the biosorption capability of the dead biosorbent. These complex influences were correlated by means of a modeling procedure consisting of a data-driven approach in which the principles of artificial intelligence were applied with the help of support vector machines (SVM) combined with genetic algorithms (GA). According to our data, the optimal working conditions for the removal of 98.91% of the Cd(II) by T. viride were found for an aqueous solution containing 26.11 mg/L Cd(II) as follows: pH 6.0, contact time of 3833 min, 8 g/L biosorbent, temperature 46.5 °C. The complete characterization of the bioremoval parameters indicates that T. viride is an excellent material for treating wastewater containing low concentrations of metal.
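
    A hedged sketch of the SVM+GA coupling (scikit-learn's SVR stands in for the authors' SVM, and the GA operators are illustrative): a small real-coded GA searches the (C, gamma, epsilon) space with cross-validated error as fitness.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    def fitness(genes, X, y):
        C, gamma, eps = np.exp(genes)                   # genes live in log space
        model = SVR(C=C, gamma=gamma, epsilon=eps)
        return cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()

    def ga_svr(X, y, pop_size=20, n_gen=15, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-4.0, 4.0, size=(pop_size, 3))
        for _ in range(n_gen):
            scores = np.array([fitness(ind, X, y) for ind in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(3) < 0.5, a, b)      # uniform crossover
                child = child + rng.normal(scale=0.3, size=3)    # Gaussian mutation
                children.append(child)
            pop = np.vstack([parents, np.array(children)])
        best = max(pop, key=lambda ind: fitness(ind, X, y))
        return dict(zip(("C", "gamma", "epsilon"), np.exp(best)))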

  16. Optimal operational strategies for a day-ahead electricity market in the presence of market power using multi-objective evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Rodrigo, Deepal

    2007-12-01

    This dissertation introduces a novel approach for optimally operating a day-ahead electricity market, not only by economically dispatching the generation resources but also by minimizing the influence of market manipulation attempts by individual generator-owning companies, while ensuring that the power system constraints are not violated. Since economic operation of the market conflicts with individual profit-maximization tactics such as market manipulation by generator-owning companies, a methodology capable of simultaneously optimizing these two competing objectives has to be selected. Although numerous previous studies have been undertaken on the economic operation of day-ahead markets, and other independent studies have been conducted on the mitigation of market power, the operation of a day-ahead electricity market considering these two conflicting objectives simultaneously has not been undertaken previously. These facts provided the incentive and the novelty for this study. A literature survey revealed that many of the traditional solution algorithms either convert multi-objective functions into a single-objective function using weighting schemes or optimize one function at a time; hence, these approaches do not truly optimize the multiple objectives concurrently. Due to these inherent deficiencies of the traditional algorithms, the use of non-traditional solution algorithms for such problems has become widespread. Of these, multi-objective evolutionary algorithms (MOEA) have received wide acceptance due to their solution quality and robustness. In the present research, three distinct algorithms were considered: a non-dominated sorting genetic algorithm II (NSGA-II), a multi-objective tabu search algorithm (MOTS) and a hybrid of multi-objective tabu search and genetic algorithm (MOTS/GA). The accuracy and quality of the results from these algorithms for applications similar to the problem investigated here

  17. High Spectral Resolution MODIS Algorithms for Ocean Chlorophyll in Case II Waters

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    2004-01-01

    The Case 2 chlorophyll a algorithm is based on a semi-analytical, bio-optical model of remote sensing reflectance, R_rs(λ), where R_rs(λ) is defined as the water-leaving radiance, L_w(λ), divided by the downwelling irradiance just above the sea surface, E_d(λ,0⁺). The R_rs(λ) model (Section 3) has two free variables, the absorption coefficient due to phytoplankton at 675 nm, a_φ(675), and the absorption coefficient due to colored dissolved organic matter (CDOM) or gelbstoff at 400 nm, a_g(400). The R_rs model has several parameters that are fixed or can be specified based on the region and season of the MODIS scene. These control the spectral shapes of the optical constituents of the model. R_rs(λ_i) values from the MODIS data processing system are placed into the model, the model is inverted, and a_φ(675), a_g(400) (MOD24), and chlorophyll a (MOD21, Chlor_a_3) are computed. Algorithm development is initially focused on tropical, subtropical, and summer temperate environments, and the model is parameterized in Section 4 for three different bio-optical domains: (1) high ratios of photoprotective pigments to chlorophyll and low self-shading, which for brevity, we designate as 'unpackaged'; (2) low ratios and high self-shading, which we designate as 'packaged'; and (3) a transitional or global-average type. These domains can be identified from space by comparing sea-surface temperature to nitrogen-depletion temperatures for each domain (Section 5). Algorithm errors of more than 45% are reduced to errors of less than 30% with this approach, with the greatest effect occurring at the eastern and polar boundaries of the basins. Section 6 provides an expansion of bio-optical domains into high-latitude waters. The 'fully packaged' pigment domain is introduced in this section along with a revised strategy for implementing these variable packaging domains. Chlor_a_3 values derived semi

  18. Combinatorial theory of the semiclassical evaluation of transport moments II: Algorithmic approach for moment generating functions

    SciTech Connect

    Berkolaiko, G.; Kuipers, J.

    2013-12-15

    Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.

  19. Tangent height registration method for the Version 1.4 data retrieval algorithm of the solar occultation sensor ILAS-II.

    PubMed

    Tanaka, Tomoaki; Nakajima, Hideaki; Sugita, Takafumi; Ejiri, Mitsumu K; Irie, Hitoshi; Saitoh, Naoko; Terao, Yukio; Kawasaki, Hiroyuki; Usami, Masatoshi; Yokota, Tatsuya; Kobayashi, Hirokazu; Sasano, Yasuhiro

    2007-10-10

    The Improved Limb Atmospheric Spectrometer-II (ILAS-II) is a satellite-borne solar occultation sensor onboard the Advanced Earth Observing Satellite-II (ADEOS-II). The ILAS-II succeeded the ILAS. The ILAS-II used four grating spectrometers to observe vertical profiles of gas volume mixing ratios of trace constituents and was also equipped with a Sun-edge sensor to determine tangent heights geometrically with high precision. The accuracy of gas volume mixing ratios depends on the accuracy of the tangent height determination. The combination method is a tangent height registration method that was developed to give appropriate tangent heights for the ILAS-II Version 1.4 data retrieval algorithm. This study describes the method used in the ILAS-II Version 1.4 retrieval algorithm to register tangent heights. The root-sum-square total random error is estimated to be 30 m, and the total systematic error is 180 m at an altitude of 30 km. The influence of the tangent height errors on the vertical profiles of gas volume mixing ratios in ILAS-II Version 1.4 is estimated by using the relative difference. The relative difference for each species is within 7% (20%) for an altitude shift of ±100 m (±300 m).

  20. Parallel Algorithms and Software for Nuclear, Energy, and Environmental Applications. Part II: Multiphysics Software

    SciTech Connect

    Derek Gaston; Luanjing Guo; Glen Hansen; Hai Huang; Richard Johnson; Dana Knoll; Chris Newman; Hyeong Kae Park; Robert Podgorney; Michael Tonks; Richard Williamson

    2012-09-01

    This paper is the second part of a two-part sequence on multiphysics algorithms and software. The first [1] focused on the algorithms; this part treats the multiphysics software framework and applications based on it. Tight coupling is typically designed into the analysis application at inception, as such an application is strongly tied to a composite nonlinear solver that arrives at the final solution by treating all equations simultaneously. The application must also take care to minimize both time and space error between the physics, particularly if more than one mesh representation is needed in the solution process. This paper presents an application framework that was specifically designed to support tightly coupled multiphysics analysis. The Multiphysics Object Oriented Simulation Environment (MOOSE) is based on the Jacobian-free Newton-Krylov (JFNK) method combined with physics-based preconditioning to provide the underlying mathematical structure for applications. The paper concludes with the presentation of a host of nuclear, energy, and environmental applications that demonstrate the efficacy of the approach and the utility of a well-designed multiphysics framework.
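
    To illustrate the JFNK idea in miniature (using SciPy's newton_krylov rather than MOOSE itself), only residual evaluations are supplied; Jacobian-vector products are approximated by finite differences inside the Krylov solver.

    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        """Residual of -u'' + u^3 = 1 on a uniform grid with u(0) = u(1) = 0."""
        n = len(u)
        h = 1.0 / (n + 1)
        upad = np.concatenate(([0.0], u, [0.0]))      # Dirichlet boundary values
        lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h ** 2
        return -lap + u ** 3 - 1.0

    u = newton_krylov(residual, np.zeros(100), method="lgmres")
    print(float(u.max()))   # close to 0.125, the peak of the linearized problem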

  1. Graph Theoretic Foundations of Multibody Dynamics Part II: Analysis and Algorithms

    PubMed Central

    Jain, Abhinandan

    2011-01-01

    This second paper of a two-part sequence uses concepts from graph theory to obtain a deeper understanding of the mathematical foundations of multibody dynamics. The first part [7] established the block-weighted adjacency (BWA) matrix structure of spatial operators associated with serial and tree-topology multibody system dynamics, and introduced the notions of spatial kernel operators (SKO) and spatial propagation operators (SPO). This paper builds upon these connections to show that key analytical results and computational algorithms are a direct consequence of these structural properties and require minimal assumptions about the specific nature of the underlying multibody system. We formalize this by introducing the notion of SKO models for general tree-topology multibody systems. We show that key analytical results, including mass matrix factorization, inversion, and decomposition, hold for all SKO models. It is also shown that key low-order scatter/gather recursive computational algorithms follow directly from these abstract-level analytical results. Examples are provided to illustrate the concrete application of these general results. The paper also describes a general recipe for developing SKO models. The abstract nature of SKO models allows the application of these techniques to a very broad class of multibody systems. PMID:22102791

  2. WORM ALGORITHM PATH INTEGRAL MONTE CARLO APPLIED TO THE 3He-4He II SANDWICH SYSTEM

    NASA Astrophysics Data System (ADS)

    Al-Oqali, Amer; Sakhel, Asaad R.; Ghassib, Humam B.; Sakhel, Roger R.

    2012-12-01

    We present a numerical investigation of the thermal and structural properties of the ³He-⁴He sandwich system adsorbed on a graphite substrate using the worm algorithm path integral Monte Carlo (WAPIMC) method [M. Boninsegni, N. Prokof'ev and B. Svistunov, Phys. Rev. E 74, 036701 (2006)]. For this purpose, we have modified a previously written WAPIMC code originally adapted for ⁴He on graphite, by including the second, ³He component. To describe the fermions, a temperature-dependent statistical potential has been used. This has proven very effective. The WAPIMC calculations have been conducted in the millikelvin temperature regime. However, because of the heavy computations involved, only 30, 40 and 50 mK have been considered for the time being. The pair correlations, Matsubara Green's function, structure factor, and density profiles have been explored at these temperatures.

  3. Exponential Gaussian approach for spectral modelling: The EGO algorithm II. Band asymmetry

    NASA Astrophysics Data System (ADS)

    Pompilio, Loredana; Pedrazzi, Giuseppe; Cloutis, Edward A.; Craig, Michael A.; Roush, Ted L.

    2010-08-01

    The present investigation is complementary to a previous paper which introduced the EGO approach to spectral modelling of reflectance measurements acquired in the visible and near-IR range (Pompilio, L., Pedrazzi, G., Sgavetti, M., Cloutis, E.A., Craig, M.A., Roush, T.L. [2009]. Icarus, 201 (2), 781-794). Here, we show the performance of the EGO model in accounting for temperature-induced variations in spectra, specifically band asymmetry. Our main goals are: (1) to recognize and model thermally-induced band asymmetry in reflectance spectra; (2) to develop a basic approach for the decomposition of remotely acquired spectra from planetary surfaces, where effects due to temperature variations are most prevalent; (3) to reduce the uncertainty in quantitative estimates of band position and depth when band asymmetry occurs. To accomplish these objectives, we tested the EGO algorithm on a number of measurements acquired on powdered pyroxenes at sample temperatures ranging from 80 up to 400 K. The main results arising from this study are: (1) the EGO model is able to numerically account for the occurrence of band asymmetry in reflectance spectra; (2) the returned set of EGO parameters can suggest the influence of some additional effect other than the electronic transition responsible for the absorption feature; (3) the returned set of EGO parameters can help in estimating the surface temperature of a planetary body; (4) the occurrence of absorptions which are less affected by temperature variations can be mapped for minerals and thus used for compositional estimates. Further work is still required in order to analyze the behaviour of the EGO algorithm with respect to temperature-induced band asymmetry, using powdered pyroxenes spanning a range of compositions and grain sizes, and more complex band shapes.
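
    As a hedged stand-in for the EGO band model (whose exact parameterization is given in the cited Part I and is not reproduced here), an exponentially modified Gaussian captures the same qualitative ingredient, a Gaussian band with a controllable asymmetric tail, and can be fit by least squares:

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def emg(x, amp, mu, sigma, tau):
        """Exponentially modified Gaussian: a Gaussian band with an exponential tail."""
        z = (mu - x) / (np.sqrt(2.0) * sigma) + sigma / (np.sqrt(2.0) * tau)
        return (amp / (2.0 * tau)) * np.exp((mu - x) / tau +
                                            sigma ** 2 / (2.0 * tau ** 2)) * erfc(z)

    # Synthetic asymmetric band with noise; the fit recovers the parameters.
    x = np.linspace(800.0, 1200.0, 400)          # e.g. wavelength in nm
    rng = np.random.default_rng(0)
    y = emg(x, 50.0, 950.0, 20.0, 40.0) + rng.normal(scale=0.005, size=x.size)
    popt, _ = curve_fit(emg, x, y, p0=(40.0, 940.0, 15.0, 30.0))
    print(popt)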

  4. Biphasic indentation of articular cartilage--II. A numerical algorithm and an experimental study.

    PubMed

    Mow, V C; Gibbs, M C; Lai, W M; Zhu, W B; Athanasiou, K A

    1989-01-01

    Part I (Mak et al., 1987, J. Biomechanics 20, 703-714) presented the theoretical solutions for the biphasic indentation of articular cartilage under creep and stress-relaxation conditions. In this study, using the creep solution, we developed an efficient numerical algorithm to compute all three material coefficients of cartilage in situ on the joint surface from the indentation creep experiment. With this method we determined the average values of the aggregate modulus, Poisson's ratio and permeability for young bovine femoral condylar cartilage in situ to be H_A = 0.90 MPa, ν_s = 0.39 and k = 0.44 × 10⁻¹⁵ m⁴/(N·s), respectively, and those for patellar groove cartilage to be H_A = 0.47 MPa, ν_s = 0.24 and k = 1.42 × 10⁻¹⁵ m⁴/(N·s). One surprising finding from this study is that the in situ Poisson's ratio of cartilage (0.13-0.45) may be much less than those determined from measurements performed on excised osteochondral plugs (0.40-0.49) reported in the literature. We also found the permeability of patellar groove cartilage to be several times higher than that of femoral condyle cartilage. These findings may have important implications for understanding the functional behavior of cartilage in situ and for methods used to determine the elastic moduli of cartilage using indentation experiments.

  5. Multi-objective optimization of discrete time-cost tradeoff problem in project networks using non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shahriari, Mohammadreza

    2016-03-01

    The time-cost tradeoff problem is one of the most important and applicable problems in the project scheduling area. There are many factors that force managers to crash the schedule: early utilization, early commissioning and operation, improving the project cash flow, avoiding unfavorable weather conditions, compensating for delays, and so on. Since extra resources must be allocated to shorten the project finishing time, and project managers want to spend the lowest possible amount of money while achieving the maximum crashing time, both direct and indirect costs are influenced, and the time value of money comes into play. When the starting activities of a project are crashed, the extra investment is tied up until the end date of the project; when the final activities are crashed, the extra investment is tied up for a much shorter period. This study presents a two-objective mathematical model for balancing the compression of the project time against activity delays, providing a suitable tool for decision makers constrained by available facilities and project deadlines. The problem is also drawn closer to real-world conditions by considering a nonlinear objective function and the time value of money. The presented problem was solved using NSGA-II, and the effect of time compression on the non-dominated set is reported.

  6. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach couples system simulator codes with stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, internal parameters of the system codes (i.e., uncertain parameters of the physics model) and initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability, etc.). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable given the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational demands (compared with the presently used legacy codes, which were developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures of Merit (FOM): for example, the failure of the system
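
    A hedged toy of the limit-surface idea (RAVEN's actual acceleration schemes are more sophisticated): run the expensive model on a limited sample, train a cheap classifier on the pass/fail outcomes, and take the grid cells where the predicted label flips as the current estimate of the limit surface, which is where new samples are most informative.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Stand-in for an expensive simulation: "failure" outside the unit circle.
    model_fails = lambda x: x[:, 0] ** 2 + x[:, 1] ** 2 > 1.0

    rng = np.random.default_rng(0)
    samples = rng.uniform(0.0, 1.5, size=(200, 2))        # limited run budget
    clf = KNeighborsClassifier(n_neighbors=5).fit(samples, model_fails(samples))

    xs = np.linspace(0.0, 1.5, 60)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    labels = clf.predict(grid).astype(int).reshape(gx.shape)

    # Cells where the predicted label changes approximate the limit surface.
    boundary = np.diff(labels, axis=0) != 0
    print(int(boundary.sum()), "boundary cells flagged")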

  7. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    PubMed

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that substantial reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly.

  9. The Sloan Digital Sky Survey-II Supernova Survey:Search Algorithm and Follow-up Observations

    SciTech Connect

    Sako, Masao; Bassett, Bruce; Becker, Andrew; Cinabro, David; DeJongh, Don Frederic; Depoy, D.L.; Doi, Mamoru; Garnavich, Peter M.; Hogan, Craig J.; Holtzman, Jon; Jha, Saurabh; Konishi, Kohki; Lampeitl, Hubert; Marriner, John; Miknaitis, Gajus; Nichol, Robert C.; Prieto, Jose Luis; Richmond, Michael W.; Schneider, Donald P.; Smith, Mathew; SubbaRao, Mark

    2007-09-14

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg² region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  10. On the modeling of equilibrium twin interfaces in a single-crystalline magnetic shape memory alloy sample. II: numerical algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Jiong; Steinmann, Paul

    2016-05-01

    This is part II of this series of papers. The aim of the current paper was to solve the governing PDE system derived in part I numerically, such that the procedure of variant reorientation in a magnetic shape memory alloy (MSMA) sample can be simulated. The sample to be considered in this paper has a 3D cuboid shape and is subject to typical magnetic and mechanical loading conditions. To investigate the demagnetization effect on the sample's response, the surrounding space of the sample is taken into account. By considering the different properties of the independent variables, an iterative numerical algorithm is proposed to solve the governing system. The related mathematical formulas and some techniques facilitating the numerical calculations are introduced. Based on the results of numerical simulations, the distributions of some important physical quantities (e.g., magnetization, demagnetization field, and mechanical stress) in the sample can be determined. Furthermore, the properties of configurational force on the twin interfaces are investigated. By virtue of the twin interface movement criteria derived in part I, the whole procedure of magnetic field- or stress-induced variant reorientations in the MSMA sample can be properly simulated.

  11. Evaluation of Optimization Methods for Hydrologic Model Calibration in Ontario Basins

    NASA Astrophysics Data System (ADS)

    Razavi, T.; Coulibaly, P. D.

    2013-12-01

    The Particle Swarm Optimization algorithm (PSO), the Shuffled Complex Evolution algorithm (SCE), the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) and a Monte Carlo procedure are applied to optimize the calibration of two conceptual hydrologic models, namely the Sacramento Soil Moisture Accounting (SAC-SMA) and McMaster University-Hydrologiska Byråns Vattenbalansavdelning (MAC-HBV) models. PSO, SCE, and NSGA-II are inherently evolutionary computational methods with the potential of reaching the global optimum, in contrast to stochastic search procedures such as the Monte Carlo method. The spatial analysis maps of Nash-Sutcliffe Efficiency (NSE) for daily streamflow and Volume Error (VE) for peak and low flows demonstrate that, for both MAC-HBV and SAC-SMA, PSO and SCE are equally superior to NSGA-II and Monte Carlo for all 90 selected basins across Ontario (Canada), using 20 years (1976-1994) of hydrologic records. For peak flows, MAC-HBV with PSO generally performs better than with SCE, whereas SAC-SMA with SCE and PSO show similar performance. For low flows, MAC-HBV with PSO performs better for most of the northern large watersheds, while SCE performs better for southern small watersheds. The temporal variability of NSE values for daily streamflow shows that all the optimization methods perform better for the winter season than for the summer.

  12. Blind decorrelation and deconvolution algorithm for multiple-input multiple-output system: II. Analysis and simulation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ching; Yu, Tommy; Yao, Kung; Pottie, Gregory J.

    1999-11-01

    For single-input multiple-output (SIMO) systems, blind deconvolution based on second-order statistics has been shown to be promising, given that the sources and channels meet certain assumptions. In our previous paper we extended the work to multiple-input multiple-output (MIMO) systems by introducing a blind deconvolution algorithm to remove all channel dispersion, followed by a blind decorrelation algorithm to separate different sources from their instantaneous mixture. In this paper we first explore more details embedded in our algorithm. Then we present simulation results to show that our algorithm is applicable to MIMO systems excited by a broad class of signals such as speech, music and digitally modulated symbols.

  13. Development of a same-side kaon tagging algorithm for B_s^0 decays for measuring Δm_s at CDF II

    SciTech Connect

    Menzemer, Stephanie; /Heidelberg U.

    2006-06-01

    The authors developed a Same-Side Kaon Tagging algorithm to determine the production flavor of B_s^0 mesons. Until the B_s^0 mixing frequency is clearly observed, the performance of the Same-Side Kaon Tagging algorithm cannot be measured on data but has to be determined from Monte Carlo simulation. Data and Monte Carlo agreement has been evaluated for both the B_s^0 and the high-statistics B^+ and B^0 modes. Extensive systematic studies were performed to quantify potential discrepancies between data and Monte Carlo. The final optimized tagging algorithm exploits the particle identification capability of the CDF II detector. It achieves a tagging performance of εD² = 4.0^{+0.9}_{-1.2} on the B_s^0 → D_s^- π^+ sample. The Same-Side Kaon Tagging algorithm presented here has been applied to the ongoing B_s^0 mixing analysis, and has provided a factor of 3-4 increase in the effective statistical size of the sample. This improvement results in the first direct measurement of the B_s^0 mixing frequency.

  14. Autonomous robot navigation based on the evolutionary multi-objective optimization of potential fields

    NASA Astrophysics Data System (ADS)

    Herrera Ortiz, Juan Arturo; Rodríguez-Vázquez, Katya; Padilla Castañeda, Miguel A.; Arámbula Cosío, Fernando

    2013-01-01

    This article presents the application of a new multi-objective evolutionary algorithm called RankMOEA to determine the optimal parameters of an artificial potential field for autonomous navigation of a mobile robot. Autonomous robot navigation is posed as a multi-objective optimization problem with three objectives: minimization of the distance to the goal, maximization of the distance between the robot and the nearest obstacle, and maximization of the distance travelled on each field configuration. Two decision makers were implemented using objective reduction and discrimination in performance trade-off. The performance of RankMOEA is compared with NSGA-II and SPEA2, including both decision makers. Simulation experiments using three different obstacle configurations and 10 different routes were performed using the proposed methodology. RankMOEA clearly outperformed NSGA-II and SPEA2. The robustness of this approach was evaluated with the simulation of different sensor masks and sensor noise. The scheme reported was also combined with the wavefront-propagation algorithm for global path planning.
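
    A minimal sketch of the artificial-potential-field navigation being tuned above; the gains k_att and k_rep and the influence radius rho0 are exactly the kind of parameters a multi-objective EA can optimize (the values below are arbitrary).

    import numpy as np

    def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=5.0, rho0=2.0, dt=0.05):
        force = -k_att * (pos - goal)                    # attractive term
        for obs in obstacles:
            diff = pos - obs
            rho = np.linalg.norm(diff)
            if rho < rho0:                               # repulsion only inside rho0
                force += k_rep * (1.0 / rho - 1.0 / rho0) / rho ** 3 * diff
        return pos + dt * force                          # gradient-descent step

    pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
    obstacles = [np.array([5.0, 4.0])]
    for _ in range(800):
        pos = potential_step(pos, goal, obstacles)
    print(pos)   # should end near the goal after skirting the obstacle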

  15. A Conflict-Resolution Model for the Conjunctive Use of Surface and Groundwater Resources that Considers Water-Quality Issues: A Case Study

    NASA Astrophysics Data System (ADS)

    Bazargan-Lari, Mohammad Reza; Kerachian, Reza; Mansoori, Abbas

    2009-03-01

    The conjunctive use of surface and groundwater resources is one alternative for optimal use of available water resources in arid and semiarid regions. The optimization models proposed for conjunctive water allocation are often complicated, nonlinear, and computationally intensive, especially when different stakeholders are involved that have conflicting interests. In this article, a new conflict-resolution methodology developed for the conjunctive use of surface and groundwater resources using Nondominated Sorting Genetic Algorithm II (NSGA-II) and Young Conflict-Resolution Theory (YCRT) is presented. The proposed model is applied to the Tehran aquifer in the Tehran metropolitan area of Iran. Stakeholders in the study area have conflicting interests related to water supply with acceptable quality, pumping costs, groundwater quality, and groundwater table fluctuations. In the proposed methodology, MODFLOW and MT3D groundwater quantity and quality simulation models are linked with the NSGA-II optimization model to develop Pareto fronts among the objectives. The best solutions on the Pareto fronts are then selected using YCRT. The results of the proposed model show the significance of applying an integrated conflict-resolution approach to conjunctive use of surface and groundwater resources in the study area.

  18. Application of genetic algorithm-kernel partial least square as a novel nonlinear feature selection method: activity of carbonic anhydrase II inhibitors.

    PubMed

    Jalali-Heravi, Mehdi; Kyani, Anahita

    2007-05-01

    This paper introduces the genetic algorithm-kernel partial least square (GA-KPLS) as a novel nonlinear feature selection method. This technique combines genetic algorithms (GAs), as powerful optimization methods, with KPLS, as a robust nonlinear statistical method for variable selection. The feature selection method is combined with an artificial neural network to develop a nonlinear QSAR model for predicting the activities of a series of substituted aromatic sulfonamides as carbonic anhydrase II (CA II) inhibitors. Eight simple one- and two-dimensional descriptors were selected by GA-KPLS and used as inputs for developing artificial neural networks (ANNs). These parameters represent the roles of the acceptor-donor pair, hydrogen bonding, hydrosolubility and lipophilicity of the active sites, and also the size of the inhibitors, in the inhibitor-isozyme interaction. The accuracy of the 8-4-1 networks was demonstrated by leave-one-out (LOO) and leave-multiple-out (LMO) cross-validation and Y-randomization. The superiority of this method (GA-KPLS-ANN) over the linear one (MLR) in a previous work, and also over GA-PLS-ANN, in which a linear feature selection method was used, indicates that the GA-KPLS approach is a powerful method for variable selection in nonlinear systems. PMID:17316919

  19. SEBAL-A: A remote sensing ET algorithm that accounts for advection with limited data. Part II: Test for transferability

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Because the Surface Energy Balance Algorithm for Land (SEBAL) tends to underestimate ET under conditions of advection, the model was modified by incorporating an advection component as part of the energy usable for crop evapotranspiration (ET). The modification involved the estimation of advected en...

  20. Optimization of Process Parameters of Hybrid Laser-Arc Welding onto 316L Using Ensemble of Metamodels

    NASA Astrophysics Data System (ADS)

    Zhou, Qi; Jiang, Ping; Shao, Xinyu; Gao, Zhongmei; Cao, Longchao; Yue, Chen; Li, Xiongbin

    2016-08-01

    Hybrid laser-arc welding (LAW) provides an effective way to overcome problems commonly encountered during either laser or arc welding, such as brittle phase formation, cracking, and porosity. The process parameters of LAW have significant effects on the bead profile and hence the quality of the joint. This paper proposes an optimization methodology that combines the non-dominated sorting genetic algorithm II (NSGA-II) with an ensemble of metamodels (EMs) to address multi-objective process parameter optimization in LAW onto 316L. Firstly, Taguchi experimental design is adopted to generate the experimental samples. Secondly, the relationships between the process parameters (i.e., laser power (P), welding current (A), distance between laser and arc (D), and welding speed (V)) and the bead geometries are fitted using EMs. The comparative results show that the EMs can take advantage of the prediction ability of each stand-alone metamodel and thus decrease the risk of adopting an inappropriate metamodel. Then, NSGA-II is used to facilitate design space exploration. In addition, the main effects and contribution rates of the process parameters on the bead profile are analyzed. Finally, verification experiments on the obtained optima are carried out and compared with the un-optimized weld seam in terms of bead geometry, weld appearance, and welding defects. The results illustrate that the proposed hybrid approach exhibits a great capability of improving welding quality in LAW.
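
    A hedged sketch of the "ensemble of metamodels" idea described above: individual surrogates are combined with weights inversely proportional to their cross-validation errors, so no single (possibly inappropriate) metamodel dominates the prediction. The two toy surrogates and the error values are placeholders, not the paper's actual metamodels.

        import numpy as np

        def ensemble_predict(surrogates, cv_errors, x):
            """Weighted-average prediction; weights ~ 1/error, normalized to sum to 1."""
            w = 1.0 / np.asarray(cv_errors)
            w /= w.sum()
            return sum(wi * s(x) for wi, s in zip(w, surrogates))

        quadratic = lambda x: 1.0 + 0.5 * x + 0.1 * x**2   # toy metamodel 1
        linear = lambda x: 1.2 + 0.45 * x                  # toy metamodel 2
        print(ensemble_predict([quadratic, linear], [0.08, 0.15], 2.0))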

  1. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    SciTech Connect

    Stankovski, Z.

    1995-12-31

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation, about 90% of the computing time is spent in the collision probability evaluations. Consequently, RZ or 3D calculations become prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message-passing host/node programming model. Parallelization was applied to the energy group treatment. This approach permits parallelization of the existing code with only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY C90 coupled to a 128-processor T3D computer, a 16-processor IBM SP1 and a network of workstations, using the public-domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, the author did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) on a massively parallel computer using several hundred processors.

  2. Contaminant detection on poultry carcasses using hyperspectral data: Part II. Algorithms for selection of sets of ratio features

    NASA Astrophysics Data System (ADS)

    Nakariyakul, Songyot; Casasent, David P.

    2007-09-01

    We consider new methods to select useful sets of ratio features in hyperspectral data to detect contaminant regions on chicken carcasses, using data provided by ARS (Athens, GA). A ratio feature is the ratio of the response at each pixel for two different wavebands. Ratio features perform a type of normalization and can thus help reduce false alarms when a good normalization algorithm is not available; they are therefore of interest. We present a new algorithm for the general problem of such feature selection in high-dimensional data. The four contaminant types of interest are three types of feces from different gastrointestinal regions (duodenum, ceca, and colon) and ingesta (undigested food) from the gizzard. Selecting the best two sets of ratio features from these 492-band HS data would require an exhaustive search of more than seven billion combinations of two sets of ratio features, which is computationally prohibitive. We therefore propose a new fast ratio-feature selection algorithm that evaluates far fewer sets of ratio features and is capable of yielding quasi-optimal or optimal sets of ratio features. This feature selection method has not been previously presented. It is shown to offer promise for an excellent detection rate and a low false alarm rate for this application. Our tests use data with different feed types and different contaminant types.
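
    The search-space arithmetic behind the "seven billion combinations" figure, plus how a ratio feature is formed; a hedged sketch only, since the paper's quasi-optimal selection procedure itself is not reproduced here.

        from math import comb
        import numpy as np

        n_bands = 492
        n_ratios = comb(n_bands, 2)   # one ratio feature per waveband pair
        print(n_ratios)               # 120786
        print(comb(n_ratios, 2))      # 7294568505 pairs of ratio sets (~7.3e9)

        def ratio_feature(cube, i, j, eps=1e-9):
            """Per-pixel ratio of band i to band j for a (rows, cols, bands) cube."""
            return cube[..., i] / (cube[..., j] + eps)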

  3. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for the placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristic, which leverages sequential linearization of power flow constraints and cutting-plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  6. A RAY-TRACING ALGORITHM FOR SPINNING COMPACT OBJECT SPACETIMES WITH ARBITRARY QUADRUPOLE MOMENTS. II. NEUTRON STARS

    SciTech Connect

    Bauboeck, Michi; Psaltis, Dimitrios; Oezel, Feryal; Johannsen, Tim

    2012-07-10

    A moderately spinning neutron star acquires an oblate shape and a spacetime with a significant quadrupole moment. These two properties affect its apparent surface area for an observer at infinity, as well as the light curve arising from a hot spot on its surface. In this paper, we develop a ray-tracing algorithm to calculate the apparent surface areas of moderately spinning neutron stars making use of the Hartle-Thorne metric. This analytic metric allows us to calculate various observables of the neutron star in a way that depends only on its macroscopic properties and not on the details of its equation of state. We use this algorithm to calculate the changes in the apparent surface area, which could play a role in measurements of neutron-star radii and, therefore, in constraining their equation of state. We show that whether a spinning neutron star appears larger or smaller than its non-rotating counterpart depends primarily on its equatorial radius. For neutron stars with radii ~10 km, the corrections to the Schwarzschild spacetime cause the apparent surface area to increase with spin frequency. In contrast, for neutron stars with radii ~15 km, the oblateness of the star dominates the spacetime corrections and causes the apparent surface area to decrease with increasing spin frequency. In all cases, the change in the apparent geometric surface area for the range of observed spin frequencies is ≲5% and hence only a small source of error in the measurement of neutron-star radii.

  7. Algorithm for evaluation of temperature distribution of a vapor cell in a diode-pumped alkali laser system (part II).

    PubMed

    Han, Juhong; Wang, You; Cai, He; An, Guofei; Zhang, Wei; Xue, Liangping; Wang, Hongyuan; Zhou, Jie; Jiang, Zhigang; Gao, Ming

    2015-04-01

    With high efficiency and small thermally-induced effects in the near-infrared wavelength region, the diode-pumped alkali laser (DPAL) is regarded as combining the major advantages of solid-state and gas lasers while obviating their main disadvantages. Studying the temperature distribution in the cross-section of an alkali-vapor cell is critical to realizing high-power DPAL systems in both static and flowing states. In this report, a theoretical algorithm has been built to investigate the features of a flowing-gas DPAL system by uniting procedures from kinetics, heat transfer, and fluid dynamics. The thermal features and output characteristics have been obtained simultaneously for different gas velocities. The results demonstrate the great potential of DPALs for extremely high-power laser operation.

  8. Using the Iterative Input variable Selection (IIS) algorithm to assess the relevance of ENSO teleconnections patterns on hydro-meteorological processes at the catchment scale

    NASA Astrophysics Data System (ADS)

    Beltrame, Ludovica; Carbonin, Daniele; Galelli, Stefano; Castelletti, Andrea

    2014-05-01

    Population growth, water scarcity and climate change are three major factors making the understanding of variations in water availability increasingly important. Reliable medium-to-long range forecasts of streamflows are therefore essential to the development of water management policies. To this purpose, recent modelling efforts have been dedicated to seasonal and inter-annual streamflow forecasts based on the teleconnection between "at-site" hydro-meteorological processes and low-frequency climate fluctuations, such as the El Niño Southern Oscillation (ENSO). This work proposes a novel procedure for first detecting the impact of ENSO on hydro-meteorological processes at the catchment scale, and then assessing the potential of ENSO indicators for building medium-to-long range statistical streamflow prediction models. At the core of this procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant forcings of streamflow variability and derive predictive models based on the selected inputs. The procedure is tested on the Columbia (USA) and Williams (Australia) Rivers, where the ENSO influence has been well documented, and then adopted on the unexplored Red River basin (Vietnam). Results show that the IIS outcomes on the Columbia and Williams Rivers are consistent with the results of previous studies, and that ENSO indicators can be effectively used to enhance the capabilities of streamflow forecast models. The experiments on the Red River basin show that the ENSO influence there is less pronounced, inducing little effect on the basin's hydro-meteorological processes.
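
    A hedged sketch of iterative input selection in the spirit of the IIS algorithm: candidate inputs (e.g., lagged flows and ENSO indices) are added greedily, one per iteration, keeping the candidate that most improves a simple score. IIS itself ranks candidates with tree-based models; a linear least-squares residual stands in here for brevity.

        import numpy as np

        def forward_input_selection(X, y, max_inputs=3):
            selected, remaining = [], list(range(X.shape[1]))
            def sse(cols):
                A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
                beta, *_ = np.linalg.lstsq(A, y, rcond=None)
                return float(((y - A @ beta) ** 2).sum())
            while remaining and len(selected) < max_inputs:
                best = min(remaining, key=lambda j: sse(selected + [j]))
                selected.append(best)
                remaining.remove(best)
            return selected

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))
        y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + rng.normal(scale=0.1, size=200)
        print(forward_input_selection(X, y))   # columns 1 and 4 are picked first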

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. An efficient hybrid approach for multiobjective optimization of water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2014-05-01

    An efficient hybrid approach for the design of water distribution systems (WDSs) with multiple objectives is described in this paper. The objectives are the minimization of the network cost and the maximization of the network resilience. A self-adaptive multiobjective differential evolution (SAMODE) algorithm has been developed, in which control parameters are automatically adapted by means of evolution instead of the presetting of fine-tuned parameter values. In the proposed method, a graph algorithm is first used to decompose a looped WDS into a shortest-distance tree (T) or forest, and chords (Ω). The original two-objective optimization problem is then approximated by a series of single-objective optimization problems of the T to be solved by nonlinear programming (NLP), thereby providing an approximate Pareto optimal front for the original whole network. Finally, the solutions at the approximate front are used to seed the SAMODE algorithm to find an improved front for the original entire network. The proposed approach is compared with two other conventional full-search optimization methods (the SAMODE algorithm and the NSGA-II) that seed the initial population with purely random solutions, based on three case studies: a benchmark network and two real-world networks with multiple demand loading cases. Results show that (i) the proposed NLP-SAMODE method consistently generates better-quality Pareto fronts than the full-search methods with significantly improved efficiency; and (ii) the proposed SAMODE algorithm (with no parameter tuning) exhibits better performance than the NSGA-II with calibrated parameter values in efficiently offering optimal fronts.
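
    A hedged sketch of what "self-adaptive" means in algorithms such as SAMODE: each individual carries its own F and CR, occasionally resampled, so good settings propagate with good solutions. The resampling rule below is the common jDE scheme of Brest et al.; SAMODE's exact adaptation mechanism may differ.

        import random

        def adapt_parameters(F, CR, tau1=0.1, tau2=0.1):
            """Resample an individual's own DE control parameters with small probability."""
            if random.random() < tau1:
                F = 0.1 + 0.9 * random.random()   # new F in [0.1, 1.0]
            if random.random() < tau2:
                CR = random.random()              # new CR in [0, 1]
            return F, CR

        def de_rand_1_bin(pop, i, F, CR):
            """Classic DE/rand/1/bin trial vector for individual i."""
            a, b, c = random.sample([k for k in range(len(pop)) if k != i], 3)
            d = len(pop[i])
            jrand = random.randrange(d)           # guarantee one mutated gene
            return [pop[a][j] + F * (pop[b][j] - pop[c][j])
                    if (random.random() < CR or j == jrand) else pop[i][j]
                    for j in range(d)]

        pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
        print(de_rand_1_bin(pop, 0, *adapt_parameters(0.5, 0.9)))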

  11. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters of a single-cylinder direct-injection compression ignition (CI) engine fueled with jatropha biodiesel. Response surface methodology based on a central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake-specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE), and the emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for the simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multi-objective optimization problem is formulated. The non-dominated sorting genetic algorithm-II is used to predict the Pareto-optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto-optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
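
    A hedged sketch of the regression step described above: a full quadratic response surface in two factors fitted by least squares. The CCD-style design points and the response values are placeholders, not the paper's measured engine data.

        import numpy as np

        def quadratic_design_matrix(X):
            """Columns: 1, x1, x2, x1*x2, x1^2, x2^2 (two-factor quadratic model)."""
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

        # Factorial, axial and center points of a two-factor CCD
        X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
                      [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])
        y = np.array([8.2, 7.9, 6.5, 6.1, 5.8, 6.4, 8.8, 6.6, 7.3])  # toy responses

        beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
        print(beta)   # coefficients of the fitted response surface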

  12. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines the use of linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is its use of the multi-objective GA NSGA-II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA-II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration, which motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics, and its potential for reservoir monitoring.

  13. Quantifying tradeoffs between water availability, water quality, food production and bioenergy production in a Central German Catchment

    NASA Astrophysics Data System (ADS)

    Volk, M.; Lautenbach, S.; Strauch, M.; Whittaker, G. W.

    2012-04-01

    Increasing bioenergy production worldwide is on the political agenda. It is well known that bioenergy production comes at a cost: several trade-offs with food production, water quality and quantity issues, biodiversity and ecosystem services are known. However, a quantification of these trade-offs is still missing. Hence, our study presents an analysis of trade-offs between water availability, water quality, bioenergy production and food production in a Central German agricultural catchment. Our analysis is based on SWAT and a multi-objective genetic algorithm (NSGA-II). The genetic algorithm is used to find Pareto-optimal configurations of crop rotation schemes. Pareto-optimality describes solutions in which one objective cannot be improved without degrading another. This allows us to quantify the costs associated with several levels of increased bioenergy production and to derive recommendations for policy makers.

  14. Multi-Disciplinary Design Optimization of Hypersonic Air-Breathing Vehicle

    NASA Astrophysics Data System (ADS)

    Wu, Peng; Tang, Zhili; Sheng, Jianda

    2016-06-01

    A 2D hypersonic vehicle shape with an idealized scramjet is designed for a cruise regime: Mach number (Ma) = 8.0, angle of attack (AOA) = 0 deg and altitude (H) = 30 km. A multi-objective design optimization of the 2D vehicle is then carried out using the Pareto-based Non-dominated Sorting Genetic Algorithm II (NSGA-II). In the optimization process, the flow around the air-breathing vehicle is simulated with the inviscid Euler equations using the FLUENT software, and the combustion in the combustor is modeled by a methodology based on the well-known combined effects of area-varying pipe flow and heat-transfer pipe flow. The optimization results reveal trade-offs among the total pressure recovery coefficient of the forebody, the lift-to-drag ratio of the vehicle, the specific impulse of the scramjet engine and the maximum temperature on the vehicle surface.

  15. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-06-01

    During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet, together with the design of a multi-echelon, capacitated reverse logistics network, is considered; such problems may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. We present a new bi-objective mathematical programming (BOMP) model for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in the GAMS software for producing the Pareto-optimal solutions of a BOMP; the results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, while for medium-to-large-sized problems the proposed NSGA-II performs better than the ɛ-constraint method.
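
    A hedged sketch of the ε-constraint method used above as the exact benchmark: one objective is kept, the other becomes a constraint f2(x) <= ε, and sweeping ε traces the Pareto front. A grid search over toy objectives stands in for the exact solver used in GAMS.

        import numpy as np

        f1 = lambda x: x**2             # e.g. first objective (minimize)
        f2 = lambda x: (x - 2.0)**2     # e.g. second objective (minimize)

        xs = np.linspace(0.0, 2.0, 2001)
        front = []
        for eps in np.linspace(0.0, 4.0, 9):
            feasible = xs[f2(xs) <= eps]        # impose the epsilon constraint
            if feasible.size:
                best = feasible[np.argmin(f1(feasible))]
                front.append((float(f1(best)), float(f2(best))))
        print(front)                            # sampled Pareto-optimal points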

  16. Explore the impacts of river flow and quality on biodiversity for water resources management by AI techniques

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Tsai, Wen-Ping; Chang, Li-Chiu

    2016-04-01

    Water resources development is very challenging in Taiwan due to its diverse geographic environment and climatic conditions. To pursue sustainable water resources development, rationality and integrity are essential for water resources planning. River water quality and flow regimes are closely related to each other and affect river ecosystems simultaneously. This study aims to explore the complex impacts of water quality and flow regimes on the fish community in order to understand the state of the eco-hydrological system in the Danshui River of northern Taiwan. To make an effective and comprehensive strategy for sustainable water resources management, this study first models fish diversity by implementing a hybrid artificial neural network (ANN) based on long-term heterogeneous observational data of water quality, stream flow and fish species in the river. Then we use stream flow to estimate the loss of dissolved oxygen based on back-propagation neural networks (BPNNs). Finally, the non-dominated sorting genetic algorithm II (NSGA-II) is established for river flow management over the Shihmen Reservoir, which is the main reservoir in this study area. In addition to satisfying the water demands of human beings and ecosystems, we also consider water quality in river flow management. The ecosystem requirement takes the form of maximizing fish diversity, which can be estimated by the hybrid ANN. The human requirement is to provide a higher satisfaction degree of water supply, while the water quality requirement is to reduce the loss of dissolved oxygen in the river among flow stations. The results demonstrate that the proposed methodology can offer diversified alternative strategies for reservoir operation and improve reservoir operation strategies for producing downstream flows that could better meet both human and ecosystem needs as well as maintain river water quality. Keywords: Artificial intelligence (AI), Artificial neural networks (ANNs), Non-dominated sorting genetic algorithm II (NSGA-II).

  17. Investigation on Reservoir Operation of Agricultural Water Resources Management for Drought Mitigation

    NASA Astrophysics Data System (ADS)

    Cheng, C. L.

    2015-12-01

    In Taiwan, population growth and economic development have led to considerable and increasing demands for natural water resources over the last decades. Under such conditions, water shortage problems have frequently occurred in northern Taiwan in recent years, such that water is usually transferred from irrigation sectors to public sectors during drought periods. Facing the uneven spatial and temporal distribution of water resources and the problem of increasing water shortages, it is a primary and critical issue to simultaneously satisfy multiple water uses through adequate reservoir operations for sustainable water resources management. We therefore intend to build an intelligent reservoir operation system for the assessment of agricultural water resources management strategies in response to food security during drought periods. This study first uses the grey system to forecast the agricultural water demand during February and April for assessing future agricultural water demands. In the second part, we build an intelligent water resources system using the non-dominated sorting genetic algorithm-II (NSGA-II), an optimization tool, to search for water allocation series based on the different water demand scenarios created in the first part, optimizing the water supply operation for different water sectors. The results can serve as a reference guide for adequate agricultural water resources management during drought periods. Keywords: Non-dominated sorting genetic algorithm-II (NSGA-II); Grey System; Optimization; Agricultural Water Resources Management.

  18. Multi-objective evolutionary optimization of biological pest control with impulsive dynamics in soybean crops.

    PubMed

    Cardoso, Rodrigo T N; da Cruz, André R; Wanner, Elizabeth F; Takahashi, Ricardo H C

    2009-08-01

    Biological pest control in agriculture, an environment-friendly practice, maintains the density of pests below an economic injury level by releasing a suitable quantity of their natural enemies. This work proposes a multi-objective numerical solution to biological pest control for soybean crops, considering both the cost of applying the control action and the cost of economic damages. The system model is nonlinear with impulsive control dynamics, in order to cope more effectively with the actual control action to be applied, which should be performed at a finite number of discrete time instants. The dynamic optimization problem is solved using NSGA-II, a fast and trustworthy multi-objective genetic algorithm. The results suggest a dual pest control policy, in which the relative price of the control action versus the associated additional harvest yield determines the use of either a low control action strategy or a higher one.

  19. Fatigue design of a cellular phone folder using regression model-based multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Kim, Young Gyun; Lee, Jongsoo

    2016-08-01

    In a folding cellular phone, the folding device is repeatedly opened and closed by the user, which eventually results in fatigue damage, particularly to the front of the folder. Hence, it is important to improve the safety and endurance of the folder while also reducing its weight. This article presents an optimal design for the folder front that maximizes its fatigue endurance while minimizing its thickness. Design data for analysis and optimization were obtained experimentally using a test jig. Multi-objective optimization was carried out using a nonlinear regression model. Three regression methods were employed: back-propagation neural networks, logistic regression and support vector machines. The AdaBoost ensemble technique was also used to improve the approximation. Two-objective Pareto-optimal solutions were identified using the non-dominated sorting genetic algorithm (NSGA-II). Finally, a numerically optimized solution was validated against experimental product data, in terms of both fatigue endurance and thickness index.

  20. A multi-stakeholder framework for urban runoff quality management: Application of social choice and bargaining techniques.

    PubMed

    Ghodsi, Seyed Hamed; Kerachian, Reza; Zahmatkesh, Zahra

    2016-04-15

    In this paper, an integrated framework is proposed for urban runoff management. To control and improve runoff quality and quantity, Low Impact Development (LID) practices are utilized. In order to determine the LIDs' areas and locations, the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), which considers the three objective functions of minimizing runoff volume, runoff pollution and the implementation cost of LIDs, is utilized. In this framework, the Storm Water Management Model (SWMM) is used for stream flow simulation. The non-dominated solutions provided by the NSGA-II are considered as management scenarios. To select the most preferred scenario, interactions among the main stakeholders in the study area with conflicting utilities are incorporated by utilizing bargaining models, including a non-cooperative game and the Nash model, and the social choice procedures of Borda count and approval voting. Moreover, a new social choice procedure, named the pairwise voting method, is proposed and applied. Based on each conflict resolution approach, a scenario is identified as the ideal solution providing the LIDs' areas, locations and implementation cost. The proposed framework is applied to urban water quality and quantity management in the northern part of the Tehran metropolitan city, Iran. Results show that the proposed pairwise voting method tends to select a scenario with a higher percentage of reduction in TSS (Total Suspended Solid) load and runoff volume, in comparison with the Borda count and approval voting methods. In addition, the Nash method presents a management scenario with the highest cost for LIDs' implementation and the maximum values for the percentage of runoff volume reduction and TSS removal. The results also signify that the selection of an appropriate management scenario by stakeholders in the study area depends on the available financial resources and the relative importance of runoff quality improvement in comparison with reducing the runoff volume. PMID:26849322
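
    A hedged sketch of two of the social-choice rules named above, applied to stakeholder rankings of candidate management scenarios (best first): Borda assigns m-1, m-2, ... points down each ranking, and a pairwise rule counts head-to-head majority wins. The paper's own "pairwise voting method" may differ in its details.

        from itertools import combinations

        rankings = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]

        def borda(rankings):
            m = len(rankings[0])
            score = {s: 0 for s in rankings[0]}
            for r in rankings:
                for pos, s in enumerate(r):
                    score[s] += m - 1 - pos
            return score

        def pairwise_wins(rankings):
            wins = {s: 0 for s in rankings[0]}
            for a, b in combinations(rankings[0], 2):
                a_pref = sum(r.index(a) < r.index(b) for r in rankings)
                if a_pref * 2 > len(rankings):      # a wins the head-to-head vote
                    wins[a] += 1
                elif a_pref * 2 < len(rankings):
                    wins[b] += 1
            return wins

        print(borda(rankings))          # {'A': 5, 'B': 3, 'C': 1}
        print(pairwise_wins(rankings))  # A beats B and C; B beats C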

  2. Long-term ELBARA-II Assistance to SMOS Land Product and Algorithm Validation at the Valencia Anchor Station (MELBEX Experiment 2010-2013)

    NASA Astrophysics Data System (ADS)

    Lopez-Baeza, Ernesto; Wigneron, Jean-Pierre; Schwank, Mike; Miernecki, Maciej; Kerr, Yann; Casal, Tania; Delwart, Steven; Fernandez-Moran, Roberto; Mecklenburg, Susanne; Coll Pajaron, M. Amparo; Salgado Hernanz, Paula

    The main activity of the Valencia Anchor Station (VAS) is currently to support the validation of SMOS (Soil Moisture and Ocean Salinity) Level 2 and 3 land products (soil moisture, SM, and vegetation optical depth, TAU). With this aim, the European Space Agency (ESA) has provided the Climatology from Satellites Group of the University of Valencia with an ELBARA-II microwave radiometer under a loan agreement since September 2009. During this time, brightness temperatures (TB) have been acquired continuously, except during normal maintenance or minor repair interruptions. ELBARA-II is an L-band dual-polarization radiometer with two channels (1400-1418 MHz, 1409-1427 MHz). It continuously measures over a vineyard field (El Renegado, Caudete de las Fuentes, Valencia) from a 15 m platform, with a constant protocol for calibration and angular scanning measurements, with the aim of assisting the validation of SMOS land products and the calibration of L-MEB (L-band Emission of the Biosphere) - the basis for the SMOS Level 2 Land Processor - over the VAS validation site. One of the advantages of the VAS site is the possibility of studying two different environmental conditions over the year. While the vine cycle extends mainly between April and October, during the rest of the year the area remains under bare soil conditions, adequate for the calibration of the soil model. The measurement protocol currently running has proven robust over the whole operation time and will be extended as long as possible to continue providing a long-term data set of ELBARA-II TB measurements and retrieved SM and TAU. This data set is also proving useful in support of SMOS scientific activities: the VAS area and, specifically, the ELBARA-II site offer good conditions to monitor the long-term evolution of SMOS Level 2 and Level 3 land products and to interpret possible anomalies that may conceal hidden sensor biases. In addition, SM and TAU that are currently

  3. Performance simulation of a combustion engine charged by a variable geometry turbocharger. I - Prerequirements, boundary conditions and model development. II - Simulation algorithm, computed results

    NASA Astrophysics Data System (ADS)

    Malobabic, M.; Buttschardt, W.; Rautenberg, M.

    The paper presents a theoretical derivation of the relationship between a variable geometry turbocharger and the combustion engine, using simplified boundary conditions and model constraints and taking into account the combustion process itself as well as the nonadiabatic operating conditions of the turbine and the compressor. The simulation algorithm is described, and the results computed with this algorithm are compared with measurements performed on a test engine in combination with a controllable turbocharger with adjustable turbine inlet guide vanes. In addition, the results of theoretical parameter studies are presented, which include the simulation of a given variable-geometry turbocharger in combination with different-sized combustion engines and the simulation of different-sized variable-geometry turbochargers in combination with a given combustion engine.

  4. Managing Algorithmic Skeleton Nesting Requirements in Realistic Image Processing Applications: The Case of the SKiPPER-II Parallel Programming Environment's Operating Model

    NASA Astrophysics Data System (ADS)

    Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel

    2005-12-01

    SKiPPER is a SKeleton-based Parallel Programming EnviRonment that has been under development since 1996 at the LASMEA laboratory, Blaise Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, which highlights the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities to the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is an appearance-based 3D face-tracking algorithm.

  5. Evaluation of the applicability of nonlinear programming algorithms to a typical commercial process flow-sheeting simulator (Volumes I and II)

    SciTech Connect

    Richard, M.J.

    1987-01-01

    An efficient methodology for using commercial flowsheeting programs with advanced mathematical programming algorithms was developed for the optimization of operating plants. The methodology was demonstrated and validated using ChemShare Corporation's DESIGN/2000 simulation of the Freeport Chemical Company's plant for sulfuric acid manufacture and three nonlinear programming techniques: successive linear programming, successive quadratic programming, and the generalized reduced-gradient method. The application of this methodology begins with the development of a feasible base-case simulation. Partial derivatives of the economic model and constraint equations are computed using fully converged simulations. This information is used to formulate an optimization problem that can be solved with the NLP algorithms, yielding improved values of the economic model. A line search is then constructed through the point found by the nonlinear programming algorithm to locate the best feasible point at which to repeat the procedure. The procedure is repeated, using the ChemShare simulation program and the NLP code, until convergence criteria are met. This method was applied to three flowsheeting problems: a plant-scale contact sulfuric acid process model, a packed-bed-reactor design model, and an adiabatic-flash problem.

  6. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
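
    A hedged sketch of the basic GA loop the report introduces: tournament selection, one-point crossover, and bit-flip mutation on binary strings, applied to a toy "count the ones" fitness.

        import random

        def genetic_algorithm(n_bits=20, pop_size=30, generations=50, p_mut=0.02):
            fitness = lambda ind: sum(ind)        # toy "OneMax" objective
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                nxt = []
                while len(nxt) < pop_size:
                    p1 = max(random.sample(pop, 3), key=fitness)   # tournament
                    p2 = max(random.sample(pop, 3), key=fitness)
                    cut = random.randrange(1, n_bits)              # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [b ^ (random.random() < p_mut) for b in child]  # mutation
                    nxt.append(child)
                pop = nxt
            return max(pop, key=fitness)

        print(sum(genetic_algorithm()))   # typically close to the maximum of 20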

  7. Measurement of the Inclusive Jet Cross Section using the k(T) algorithm in p anti-p collisions at s**(1/2) = 1.96-TeV with the CDF II Detector

    SciTech Connect

    Abulencia, A.; Adelman, J.; Affolder, Anthony Allen; Akimoto, T.; Albrow, Michael G.; Ambrose, D.; Amerio, S.; Amidei, Dante E.; Anastassov, A.; Anikeev, Konstantin; Annovi, A.; et al.

    2007-01-01

    The authors report on measurements of the inclusive jet production cross section as a function of the jet transverse momentum in p anti-p collisions at √s = 1.96 TeV, using the k_T algorithm and a data sample corresponding to 1.0 fb⁻¹ collected with the Collider Detector at Fermilab in Run II. The measurements are carried out in five different jet rapidity regions with |y_jet| < 2.1 and transverse momentum in the range 54 < p_T,jet < 700 GeV/c. Next-to-leading-order perturbative QCD predictions are in good agreement with the measured cross sections.

  8. Spiral interpolation algorithms for multislice spiral CT--part II: measurement and evaluation of slice sensitivity profiles and noise at a clinical multislice system.

    PubMed

    Fuchs, T; Krause, J; Schaller, S; Flohr, T; Kalender, W A

    2000-09-01

    The recently introduced multislice data acquisition for computed tomography (CT) is based on a multirow detector design, increased rotation speed, and advanced z-interpolation and z-filtering algorithms. We evaluated the slice sensitivity profiles (SSPs) and noise of a clinical multislice spiral CT (MSCT) scanner with M = 4 simultaneously acquired slices and adaptive axial interpolator (AAI) reconstruction software. SSPs were measured with a small gold disk of 50 microm thickness and 2-mm diameter located at the center of rotation (COR) and 100 mm off center. The standard deviation of CT values within a 20-cm water phantom was used as a measure of image noise. With a detector slice collimation of S = 1.0 mm, we varied the spiral pitch p from 0.25 to 2.0 in steps of 0.025. Nominal reconstructed slice thicknesses were 1.25, 1.5, and 2.0 mm. For all possible pitch values, we found the full-width at half maximum (FWHM) of the respective sensitivity profile at the COR to be equivalent to the selected nominal slice thickness. The profiles at 100 mm off center are broadened by less than 7% on average compared with the FWHM at the COR. In addition, the variation of the full-width at tenth maximum (FWTM) at the COR was below 10% for p <= 1.75. Within this range, image noise varied by less than 10% with respect to the mean noise level. The slight increase in measured slice-width above p = 1.75 for nominal slice-widths of 1.25 and 1.50 mm is accompanied by a decrease in noise according to the inverse square root relationship. The MSCT system that we scrutinized provides reconstructed slice-widths and image noise that can be regarded as constant within a wide range of table speeds; in this respect, MSCT is superior to single-slice spiral CT. These facts can be exploited when defining and optimizing clinical protocols: the spiral pitch can be selected almost freely, and scan protocols can follow the diagnostic requirements without technical restrictions. In summary, MSCT offers
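
    A hedged sketch of extracting the FWHM of a slice sensitivity profile, the quantity compared against the nominal slice thickness above; the synthetic Gaussian stands in for an SSP measured with the gold-disk phantom.

        import numpy as np

        def fwhm(z, profile):
            """Full-width at half maximum via linear interpolation of the crossings."""
            half = profile.max() / 2.0
            above = np.where(profile >= half)[0]
            i, j = above[0], above[-1]
            left = np.interp(half, [profile[i - 1], profile[i]], [z[i - 1], z[i]])
            right = np.interp(half, [profile[j + 1], profile[j]], [z[j + 1], z[j]])
            return right - left

        z = np.linspace(-3.0, 3.0, 601)                    # mm along the z-axis
        sigma = 1.25 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian with 1.25 mm FWHM
        print(fwhm(z, np.exp(-z**2 / (2 * sigma**2))))     # ~1.25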

  9. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company, and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  11. Optimization of the Coverage and Accuracy of an Indoor Positioning System with a Variable Number of Sensors.

    PubMed

    Domingo-Perez, Francisco; Lazaro-Galilea, Jose Luis; Bravo, Ignacio; Gardel, Alfredo; Rodriguez, David

    2016-06-22

    This paper focuses on optimal sensor deployment for indoor localization with a multi-objective evolutionary algorithm. Our goal is to obtain an algorithm to deploy sensors taking the number of sensors, accuracy and coverage into account. Contrary to most works in the literature, we consider the presence of obstacles in the region of interest (ROI) that can cause occlusions between the target and some sensors. In addition, we aim to obtain all of the Pareto optimal solutions regarding the number of sensors, coverage and accuracy. To deal with a variable number of sensors, we add speciation and structural mutations to the well-known non-dominated sorting genetic algorithm (NSGA-II). Speciation allows one to keep the evolution of sensor sets under control and to apply genetic operators to them so that they compete with other sets of the same size. We show some case studies of the sensor placement of an infrared range-difference indoor positioning system with a fairly complex model of the error of the measurements. The results obtained by our algorithm are compared to sensor placement patterns obtained with random deployment to highlight the relevance of using such a deployment algorithm. PMID:27338414
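
    A hedged sketch of the structural mutations described above for a variable-length genome: an individual is a list of (x, y) sensor positions, and mutation may jitter, add, or remove a sensor, letting the modified NSGA-II explore deployments with different sensor counts. The names and rates below are illustrative assumptions.

        import random

        def structural_mutation(sensors, region=(0.0, 10.0), p_add=0.1, p_del=0.1):
            sensors = [(x + random.gauss(0, 0.2), y + random.gauss(0, 0.2))
                       for x, y in sensors]                   # positional jitter
            if random.random() < p_add:                       # grow the genome
                sensors.append((random.uniform(*region), random.uniform(*region)))
            if len(sensors) > 1 and random.random() < p_del:  # shrink the genome
                sensors.pop(random.randrange(len(sensors)))
            return sensors

        print(structural_mutation([(1.0, 2.0), (5.0, 5.0), (8.0, 3.0)]))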

  13. A Multiobjective Approach to Homography Estimation

    PubMed Central

    Osuna-Enciso, Valentín; Oliva, Diego; Zúñiga, Virgilio; Pérez-Cisneros, Marco; Zaldívar, Daniel

    2016-01-01

    In several machine vision problems, a relevant issue is the estimation of homographies between two different perspectives that hold an extensive set of abnormal data. A method to find such an estimation is random sampling consensus (RANSAC); in this, the goal is to maximize the number of matching points given a permissible error (Pe), according to a candidate model. However, those objectives are in conflict: a low Pe value increases the accuracy of the model but degrades its generalization ability, that is, the number of matching points that tolerate noisy data, whereas a high Pe value improves the noise tolerance of the model but adversely drives the process to false detections. This work considers the estimation process as a multiobjective optimization problem that seeks to maximize the number of matching points while Pe is simultaneously minimized. In order to solve the multiobjective formulation, two different evolutionary algorithms have been explored: the Nondominated Sorting Genetic Algorithm II (NSGA-II) and Nondominated Sorting Differential Evolution (NSDE). Results considering acknowledged quality measures between original and transformed images over a well-known image benchmark show the superior performance of the proposal over the random sample consensus algorithm. PMID:26839532
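
    A hedged sketch of the RANSAC trade-off discussed above, using 2-D line fitting in place of full homography estimation (the consensus logic is the same): a larger permissible error Pe admits more matching points but tolerates worse models.

        import random

        def ransac_line(points, pe, iters=200):
            best, best_inliers = None, []
            for _ in range(iters):
                (x1, y1), (x2, y2) = random.sample(points, 2)
                if x1 == x2:
                    continue
                m = (y2 - y1) / (x2 - x1)
                c = y1 - m * x1
                inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) <= pe]
                if len(inliers) > len(best_inliers):
                    best, best_inliers = (m, c), inliers
            return best, len(best_inliers)

        pts = [(float(x), 2 * x + 1 + random.gauss(0, 0.1)) for x in range(20)]
        pts += [(random.uniform(0, 20), random.uniform(0, 40)) for _ in range(5)]
        for pe in (0.05, 0.3, 2.0):   # sweeping Pe exposes the accuracy/consensus conflict
            print(pe, ransac_line(pts, pe))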

  14. Robust Multiobjective Controllability of Complex Neuronal Networks.

    PubMed

    Tang, Yang; Gao, Huijun; Du, Wei; Lu, Jianquan; Vasilakos, Athanasios V; Kurths, Jurgen

    2016-01-01

    This paper addresses the robust multiobjective identification of driver nodes in the neuronal network of a cat's brain, in which uncertainties in the determination of driver nodes and control gains are considered. A framework for robust multiobjective controllability is proposed by introducing interval uncertainties and optimization algorithms. With appropriate definitions of robust multiobjective controllability, a robust nondominated sorting adaptive differential evolution (NSJaDE) is presented by means of the nondominated sorting mechanism and the adaptive differential evolution (JaDE). The simulation experimental results illustrate the satisfactory performance of NSJaDE for robust multiobjective controllability, in comparison with six statistical methods and two multiobjective evolutionary algorithms (MOEAs): the nondominated sorting genetic algorithm II (NSGA-II) and nondominated sorting composite differential evolution. It is revealed that the existence of uncertainties in choosing driver nodes and designing control gains heavily affects the controllability of neuronal networks. We also unveil that driver nodes play a more drastic role than control gains in robust controllability. The developed NSJaDE and the obtained results will shed light on the understanding of robustness in controlling realistic complex networks such as transportation networks, power grid networks, biological networks, etc.

  15. Multi-Objective Differential Evolution for Automatic Clustering with Application to Micro-Array Data Analysis

    PubMed Central

    Suresh, Kaushik; Kundu, Debarati; Ghosh, Sayan; Das, Swagatam; Abraham, Ajith; Han, Sang Yong

    2009-01-01

    This paper applies the Differential Evolution (DE) algorithm to the task of automatic fuzzy clustering in a Multi-objective Optimization (MO) framework. It compares the performances of two multi-objective variants of DE on the fuzzy clustering problem, where two conflicting fuzzy validity indices are simultaneously optimized. The resulting Pareto-optimal set of solutions from each algorithm consists of a number of non-dominated solutions, from which the user can choose the most promising ones according to the problem specifications. A real-coded representation of the search variables, accommodating a variable number of cluster centers, is used for DE. The performances of the multi-objective DE variants have also been contrasted with those of two of the most well-known schemes of MO clustering, namely the Non-Dominated Sorting Genetic Algorithm (NSGA II) and Multi-Objective Clustering with an unknown number of Clusters K (MOCK). Experimental results using six artificial and four real-life datasets of varying complexity indicate that DE holds immense promise as a candidate algorithm for devising MO clustering schemes. PMID:22412346
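
    A hedged sketch of a real-coded encoding that accommodates a variable number of cluster centers, as described above: each DE vector holds a fixed maximum number of candidate centers plus one activation value per center, and decoding keeps the centers whose activation exceeds 0.5. The threshold rule is an illustrative assumption.

        import numpy as np

        def decode(vector, max_k, dim):
            acts = vector[:max_k]
            centers = vector[max_k:].reshape(max_k, dim)
            active = centers[acts > 0.5]
            return active if len(active) >= 2 else centers[:2]  # keep >= 2 clusters

        v = np.array([0.9, 0.2, 0.7,                 # activations for up to 3 centers
                      0.0, 0.0, 5.0, 5.0, -4.0, 2.0])
        print(decode(v, max_k=3, dim=2))             # rows 0 and 2 are active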

  16. An archived multi-objective simulated annealing for a dynamic cellular manufacturing system

    NASA Astrophysics Data System (ADS)

    Shirazi, Hossein; Kia, Reza; Javadian, Nikbakhsh; Tavakkoli-Moghaddam, Reza

    2014-05-01

    To design a group layout of a cellular manufacturing system (CMS) in a dynamic environment, a multi-objective mixed-integer non-linear programming model is developed. The model integrates cell formation, group layout and production planning (PP) as three interrelated decisions involved in the design of a CMS. This paper provides extensive coverage of important manufacturing features used in the design of CMSs and enhances the flexibility of an existing model in handling the fluctuations of part demands more economically by adding machine depot and PP decisions. The two conflicting objectives to be minimized are the total costs and the imbalance of workload among cells. As the objectives considered in this model are in conflict with each other, an archived multi-objective simulated annealing (AMOSA) algorithm is designed to find Pareto-optimal solutions. A matrix-based solution representation, a heuristic procedure generating an initial feasible solution, and efficient mutation operators are the advantages of the designed AMOSA. To demonstrate the efficiency of the proposed algorithm, the performance of AMOSA is compared with an exact algorithm (i.e., the ε-constraint method) solved by the GAMS software and a well-known evolutionary algorithm, namely NSGA-II, for some randomly generated problems, based on several comparison metrics. The obtained results show that the designed AMOSA can obtain satisfactory solutions for the multi-objective model.
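
    A hedged sketch of a dominance-based simulated-annealing acceptance step of the kind AMOSA builds on: a dominating neighbour is always accepted, a dominated one only with a temperature-dependent probability. AMOSA's actual rule also weighs the amount of domination against an archive, which is omitted here.

        import math
        import random

        def accept(current_f, new_f, temperature):
            better = all(n <= c for n, c in zip(new_f, current_f))
            worse = all(n >= c for n, c in zip(new_f, current_f))
            if better:
                return True
            if worse:
                delta = sum(n - c for n, c in zip(new_f, current_f))
                return random.random() < math.exp(-delta / temperature)
            return random.random() < 0.5   # mutually non-dominated: coin flip here

        print(accept((3.0, 2.0), (2.5, 1.8), 1.0))   # dominating move is accepted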

  17. Algorithm development

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Lomax, Harvard

    1987-01-01

    The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.

  18. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
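
    Two of the performance metrics named above are easy to state precisely. The following illustrative Python definitions (our own naming, not HOME's benchmarking code) show the centered root mean square error and the linear-trend error for a single station series.

        import numpy as np

        def centered_rmse(homogenized, truth):
            """Centered RMSE: each series' mean is removed first, so the metric
            measures errors in variability rather than a constant offset."""
            h = homogenized - np.mean(homogenized)
            t = truth - np.mean(truth)
            return np.sqrt(np.mean((h - t) ** 2))

        def trend_error(homogenized, truth):
            """Difference between fitted linear-trend slopes (per time step)."""
            x = np.arange(len(truth))
            return np.polyfit(x, homogenized, 1)[0] - np.polyfit(x, truth, 1)[0]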

  19. Novel system identification method and multi-objective-optimal multivariable disturbance observer for electric wheelchair.

    PubMed

    Nasser Saadatzi, Mohammad; Poshtan, Javad; Sadegh Saadatzi, Mohammad; Tafazzoli, Faezeh

    2013-01-01

    An electric wheelchair (EW) is subject not only to diverse terrains and slopes but also to occupants of various weights, which causes the EW to suffer from highly perturbed dynamics. A precise multivariable dynamics model of the EW is obtained using Lagrange equations of motion, which models the effects of slopes as output-additive disturbances. A static pre-compensator is analytically devised which considerably decouples the EW's dynamics and also brings about a more accurate identification of the EW. The controller is designed with a disturbance-observer (DOB) two-degree-of-freedom architecture, which reduces sensitivity to model uncertainties while enhancing rejection of disturbances. Based on disturbance rejection, noise reduction, and robust stability of the control system, three fitness functions are presented by which the DOB is tuned using a multi-objective optimization (MOO) approach, namely the non-dominated sorting genetic algorithm-II (NSGA-II). Finally, experimental results show desirable performance and robust stability of the proposed algorithm. PMID:22959528

  20. A simulation-optimization model for Stone column-supported embankment stability considering rainfall effect

    NASA Astrophysics Data System (ADS)

    Deb, Kousik; Dhar, Anirban; Purohit, Sandip

    2016-02-01

    Rainfall-induced landslides have been and continue to be one of the most important concerns of geotechnical engineering. The paper presents the variation of the factor of safety of a stone column-supported embankment constructed over soft soil due to the change in water level during an incessant period of rainfall. A combined simulation-optimization based methodology has been proposed to predict the critical failure surface of the embankment and to optimize the corresponding factor of safety under rainfall conditions using an evolutionary genetic algorithm, NSGA-II (Non-Dominated Sorting Genetic Algorithm-II). It has been observed that the position of the water table can be reliably estimated for varying periods of infiltration using the developed numerical method. A parametric study is presented to examine the optimum factor of safety of the embankment and its corresponding critical failure surface under the steady-state infiltration condition. Results show that in the case of floating stone columns, the period of infiltration has no effect on the factor of safety. Even the critical failure surfaces for a particular floating column length remain the same irrespective of rainfall duration.

  1. Multi-component seismic modeling and robust pre-stack seismic waveform inversion for elastic anisotropic media parameters

    NASA Astrophysics Data System (ADS)

    Li, Tao

    Consideration of azimuthal anisotropy, at least to orthorhombic symmetry, is important in exploring naturally fractured and unconventional hydrocarbon reservoirs. Full waveform inversion of multicomponent seismic data can, in principle, provide more robust estimates of subsurface elastic parameters and density than the inversion of single-component (P-wave) seismic data. In addition, azimuthally dependent anisotropy can only be resolved by carefully studying multicomponent seismic displacement data acquired and processed along different azimuths. Such an analysis needs an inversion algorithm capable of simultaneously optimizing multiple objectives, one for each data component along each azimuth. In this dissertation, I propose a novel multiobjective methodology using a parallelized version of NSGA-II for waveform inversion of multicomponent seismic data along two azimuths. The proposed methodology also improves on the original NSGA-II in overall computational efficiency, preservation of population diversity, and rapid sampling of the model space. Next, the proposed methodology is applied to wide-azimuth multicomponent vertical seismic profile (VSP) data to provide reliable estimation of subsurface anisotropy at and near the well location. Prestack waveform inversion was applied to the wide-azimuth multicomponent VSP data acquired at the Wattenberg Field, located in the Denver Basin of northeastern Colorado, USA, to characterize the Niobrara formation for azimuthal anisotropy. By comparing the waveform inversion results with an independent study that used a joint slowness-polarization approach to invert the same data, we conclude that waveform inversion is a reliable tool for inverting wide-azimuth multicomponent VSP data for anisotropy estimation. Last but not least, an anisotropic elastic three-dimensional scheme for modeling the elastodynamic wavefield is developed in order to go beyond the 1D layering assumption being used in previous

  2. A preference-based multi-objective model for the optimization of best management practices

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Qiu, Jiali; Wei, Guoyuan; Shen, Zhenyao

    2015-01-01

    The optimization of best management practices (BMPs) at the watershed scale is notably complex because of the social nature of the decision process, which incorporates information reflecting the preferences of decision makers. In this study, a preference-based multi-objective model was designed by modifying the commonly used Non-dominated Sorting Genetic Algorithm (NSGA-II). Reference points, achievement scalarizing functions and an indicator-based optimization principle were integrated for searching a set of preferred Pareto-optimal solutions. Pareto preference ordering was also used for reducing the number of objectives in the final decision-making process. The proposed model was then tested in a typical watershed in the Three Gorges Region, China. The results indicated that more desirable solutions were generated, which reduced the decision effort of watershed managers. Compared to a traditional Genetic Algorithm (GA), the preferred solutions were concentrated in a narrow region close to the projection point instead of spanning the entire Pareto front. Based on Pareto preference ordering, the solutions with the best objective function values were often the more desirable solutions (i.e., the minimum-cost solution and the minimum-pollutant-load solution). In the authors' view, this new model provides a useful tool for optimizing BMPs at the watershed scale and is therefore of great benefit to watershed managers.
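
    The achievement scalarizing function mentioned above is the standard device for steering a Pareto search toward a decision maker's reference point. A minimal illustrative Python version for minimized objectives is sketched below; the augmentation weight rho and the function name are our assumptions, not the paper's exact formulation.

        import numpy as np

        def asf(F, ref_point, weights=None, rho=1e-4):
            """Augmented achievement scalarizing function (minimization).

            F         : (n, m) array of objective vectors
            ref_point : (m,) aspiration levels from the decision maker
            Smaller values indicate solutions closer to the preferred region,
            so ranking by asf() biases the search toward the reference point.
            """
            F = np.asarray(F, dtype=float)
            z = np.asarray(ref_point, dtype=float)
            w = np.ones(F.shape[1]) if weights is None else np.asarray(weights, dtype=float)
            diff = (F - z) * w
            return diff.max(axis=1) + rho * diff.sum(axis=1)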

  3. Modeling and optimization of a multi-product biosynthesis factory for multiple objectives.

    PubMed

    Lee, Fook Choon; Pandu Rangaiah, Gade; Lee, Dong-Yup

    2010-05-01

    Genetic algorithms, and optimization in general, enable us to probe deeper into the metabolic pathway recipe for multi-product biosynthesis. An augmented model for simultaneously optimizing serine and tryptophan flux ratios in Escherichia coli was developed by linking the dynamic tryptophan operon model and the aromatic amino acid-tryptophan biosynthesis pathways to the central carbon metabolism model. Six new kinetic parameters of the augmented model were estimated with consideration of available experimental data and other published works. Major differences between calculated and reference concentrations and fluxes were explained. Sensitivities and the underlying competition among fluxes for carbon sources were consistent with intuitive expectations based on the metabolic network and previous results. Biosynthesis rates of serine and tryptophan were simultaneously maximized using the augmented model via concurrent gene knockout and manipulation. The optimization results were obtained using the elitist non-dominated sorting genetic algorithm (NSGA-II) supported by pattern recognition heuristics. A range of Pareto-optimal enzyme activities regulating the amino acids' biosynthesis was successfully obtained and elucidated wherever possible vis-à-vis fermentation work based on recombinant DNA technology. The predicted potential improvements in various metabolic pathway recipes using the multi-objective optimization strategy were highlighted and discussed in detail. PMID:20051269

  4. Pareto Optimization Identifies Diverse Set of Phosphorylation Signatures Predicting Response to Treatment with Dasatinib.

    PubMed

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2015-01-01

    Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature - integrin β4 (ITGB4) - was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance.

  5. Integrated water and sediment flow simulation and forecasting models for river reaches

    NASA Astrophysics Data System (ADS)

    Choudhury, Parthasarathi; Sil, Briti Sundar

    2010-05-01

    In the present study, integrated water and sediment flow simulation and forecasting models for a river reach have been developed. The new models combine the Muskingum model and the sediment rating model, leading to an integrated water discharge-sediment concentration model (WSCM) and a water discharge-sediment discharge model (WSDM) for a reach. The models depict coherence in water discharge and sediment load variations at a site; incorporate two hydrologic variables, water discharge and sediment load, for the gauge sites; and represent revised forms of the basic Muskingum model. The models can be recast into a forecasting form useful for obtaining downstream water and sediment flow forecasts Δt' = 2kx time units ahead. During calibration the models can select a commensurate inflow-outflow set depending on the upstream and downstream relative sediment discharge characteristics of a reach. The models can be used for developing the Muskingum model for river reaches having no water discharge records. With forecasting capabilities, the present models are useful in the real-time management of sediment-related pollution hazards in water courses. The study indicates that a single model can be used to describe both water and sediment flow in river reaches. The proposed model formulations are demonstrated by simulating and forecasting sediment concentration, sediment discharge and water discharge in the Mississippi River Basin, USA. Model parameters are estimated using the non-dominated sorting Genetic Algorithm II (NSGA-II). Comparison of the models' performances with reported works shows better performance by the present models.
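
    For readers unfamiliar with the underlying routing scheme, the basic Muskingum recursion that these models extend fits in a few lines. The Python sketch below is a generic textbook version under our own naming, not the WSCM/WSDM formulation; k, x and dt must share consistent units, and 2kx is exactly the forecast lead time quoted above.

        def muskingum_route(inflow, k, x, dt, outflow0=None):
            """Route an inflow hydrograph through a reach with the basic
            Muskingum model O[t] = C0*I[t] + C1*I[t-1] + C2*O[t-1];
            k is the storage constant (same time units as dt) and x the
            dimensionless weighting factor (0 <= x <= 0.5)."""
            denom = 2.0 * k * (1.0 - x) + dt
            c0 = (dt - 2.0 * k * x) / denom
            c1 = (dt + 2.0 * k * x) / denom
            c2 = (2.0 * k * (1.0 - x) - dt) / denom
            out = [inflow[0] if outflow0 is None else outflow0]
            for t in range(1, len(inflow)):
                out.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[-1])
            return out

        # Example: a simple triangular flood wave, k = 12 h, x = 0.2, dt = 6 h.
        hydrograph = [10, 30, 60, 90, 70, 50, 30, 20, 10, 10]
        print(muskingum_route(hydrograph, k=12.0, x=0.2, dt=6.0))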

  6. A niched Pareto tabu search for multi-objective optimal design of groundwater remediation systems

    NASA Astrophysics Data System (ADS)

    Yang, Yun; Wu, Jianfeng; Sun, Xiaomin; Wu, Jichun; Zheng, Chunmiao

    2013-05-01

    This study presents a new multi-objective optimization method, the niched Pareto tabu search (NPTS), for the optimal design of groundwater remediation systems. The proposed NPTS is then coupled with the commonly used flow and transport codes MODFLOW and MT3DMS to search for near Pareto-optimal tradeoffs of groundwater remediation strategies. The difference between the proposed NPTS and the existing multiple objective tabu search (MOTS) lies in the use of a niche selection strategy and fitness archiving to maintain the diversity of the optimal solutions along the Pareto front and to avoid repetitive calculations of the objective functions associated with the flow and transport model. Sensitivity analysis of the NPTS parameters is evaluated through a synthetic pump-and-treat remediation application involving two conflicting objectives, minimization of both remediation cost and contaminant mass remaining in the aquifer. Moreover, the proposed NPTS is applied to a large-scale pump-and-treat groundwater remediation system at the field site of the Massachusetts Military Reservation (MMR) in Cape Cod, Massachusetts, involving minimization of both total pumping rates and contaminant mass remaining in the aquifer. Additional comparison of the results based on the NPTS with those obtained from two other methods, namely the single objective tabu search (SOTS) and the nondominated sorting genetic algorithm II (NSGA-II), further indicates that the proposed NPTS has desirable computational efficiency, stability, and robustness and is a promising tool for optimizing the multi-objective design of groundwater remediation systems.

  7. LID-BMPs planning for urban runoff control and the case study in China.

    PubMed

    Jia, Haifeng; Yao, Hairong; Tang, Ying; Yu, Shaw L; Field, Richard; Tafuri, Anthony N

    2015-02-01

    Low Impact Development Best Management Practices (LID-BMPs) have in recent years received much recognition as cost-effective measures for mitigating urban runoff impacts. In the present paper, a procedure for LID-BMPs planning and analysis using a comprehensive decision support tool is proposed. A case study was conducted on the planning of an LID-BMPs implementation effort at a college campus in Foshan, Guangdong Province, China. By examining the information obtained, potential LID-BMPs were first selected. SUSTAIN was then used to analyze four runoff control scenarios, namely: a pre-development scenario; a basic scenario (the existing campus development plan without BMP control); Scenario 1 (least-cost BMPs implementation); and Scenario 2 (maximized BMPs performance). A sensitivity analysis was also performed to assess the impact of the hydrologic and water quality parameters. The optimal solution for each of the two LID-BMPs scenarios was obtained by using the non-dominated sorting genetic algorithm-II (NSGA-II). Finally, the cost-effectiveness of the LID-BMPs implementation scenarios was examined by determining the incremental cost for a unit improvement of control. PMID:25463572

  8. An optimal design of wind turbine and ship structure based on neuro-response surface method

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young

    2015-07-01

    The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM exhibits prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), referred to here as the Neuro-Response Surface Method (NRSM). The optimization is performed on the generated response surface by the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine considering hydrodynamic performance, and bulk carrier bottom stiffened panels considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems.

  9. AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Tsai, Wen-Ping; Chang, Fi-John; Chang, Li-Chiu; Herricks, Edwin E.

    2015-11-01

    Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimized reservoir operation. The approach seeks to better reconcile riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was applied to river flow management of the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve operational strategies, producing downstream flows that could meet both human and ecosystem needs. The wide spread of Pareto-front (optimal) solutions makes this methodology attractive to water resources managers, allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs.

  11. Response of a quarter car model with optimal magnetorheological damper parameters

    NASA Astrophysics Data System (ADS)

    Prabakar, R. S.; Sujatha, C.; Narayanan, S.

    2013-04-01

    In this paper, the control of the stationary response of a quarter car model to random road excitation with a magnetorheological (MR) damper as a semi-active suspension device is considered. The MR damper is a hypothetical analytical damper whose parameters are determined optimally using a multi-objective optimization technique, the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The hysteretic behaviour of the MR damper is characterized using the Bingham and modified Bouc-Wen models. The multi-objective optimization problem is solved by minimizing the difference between the root mean square (rms) sprung mass acceleration, suspension stroke and road holding responses of the quarter car model with the MR damper and those of an active suspension system based on linear quadratic regulator (LQR) control, with the constraint that the MR damper control force lies within ±5 percent of the LQR control force. It is observed that MR damper suspension systems with optimal parameters perform an order of magnitude better than the passive suspension, perform as well as active suspensions with limited state feedback, and come close to the performance of fully active suspensions.
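
    Of the two hysteresis models named above, the Bingham idealization is simple enough to state in one line: viscous damping plus a Coulomb friction term that switches sign with piston velocity. The Python sketch below is that textbook form under our own naming; the modified Bouc-Wen model, which adds an internal hysteretic state, is deliberately omitted.

        import numpy as np

        def bingham_force(v, c0, fc, f0=0.0):
            """Bingham model of an MR damper.

            v  : piston velocity (scalar or array)
            c0 : post-yield viscous damping coefficient
            fc : frictional (yield) force set by the applied magnetic field
            f0 : constant offset force due to the accumulator
            """
            return c0 * np.asarray(v) + fc * np.sign(v) + f0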

  12. Algorithmic chemistry

    SciTech Connect

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  14. GPU Accelerated Event Detection Algorithm

    SciTech Connect

    2011-05-25

    Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. State-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) there is a need for event detection algorithms that can scale with the size of the data; (ii) a need for algorithms that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) a need for algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
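
    Step (a) above, reducing a multi-dimensional window sequence to a univariate change series via SVD, can be illustrated in a few lines of Python. The sketch below is our own toy version, not GAEDA: it compares the leading right singular vector of successive non-overlapping windows, so a change in the dominant direction of the data shows up as a peak in the output series.

        import numpy as np

        def svd_change_series(X, win):
            """Convert a multivariate sequence X of shape (T, d) into a
            univariate change series. For each pair of successive windows,
            report 1 - |cos angle| between their leading right singular
            vectors (abs() removes the sign ambiguity of the SVD)."""
            scores, prev = [], None
            for start in range(0, X.shape[0] - win + 1, win):
                W = X[start:start + win]
                v = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)[2][0]
                if prev is not None:
                    scores.append(1.0 - abs(np.dot(prev, v)))
                prev = v
            return np.array(scores)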

  15. Nonorthogonal orbital based N-body reduced density matrices and their applications to valence bond theory. II. An efficient algorithm for matrix elements and analytical energy gradients in VBSCF method.

    PubMed

    Chen, Zhenhua; Chen, Xun; Wu, Wei

    2013-04-28

    In this paper, by applying the reduced density matrix (RDM) approach for nonorthogonal orbitals developed in the first paper of this series, efficient algorithms for matrix elements between VB structures and energy gradients in the valence bond self-consistent field (VBSCF) method were presented. Both algorithms scale only as nm^4 for integral transformation and d^2*n_β^2 for VB matrix elements and 3-RDM evaluation, while the computational costs of other procedures are negligible, where n, m, d, and n_β are the numbers of variable occupied active orbitals, basis functions, determinants, and active β electrons, respectively. Using tensor properties of the energy gradients with respect to the orbital coefficients presented in the first paper of this series, a partially orthogonal auxiliary orbital set was introduced to reduce the computational cost of VBSCF calculations in which orbitals are flexibly defined. Test calculations on the Diels-Alder reaction of butadiene and ethylene have shown that the novel algorithm is very efficient for VBSCF calculations. PMID:23635124

  16. Tyrosinaemia II.

    PubMed

    Colditz, P B; Yu, J S; Billson, F A; Rogers, M; Molloy, H F; O'Halloran, M; Wilcken, B

    1984-08-18

    Four cases of tyrosinaemia type II (Richner-Hanhart syndrome) are reported. This syndrome consists of corneal erosions, palmar and plantar hyperkeratoses, and sometimes mental retardation. Presentation with photophobia and dendritic corneal ulceration or circumscribed palmoplantar keratoderma should alert the physician to the possible diagnosis of tyrosinaemia II. Early diagnosis is important, as the clinical picture can be modified by dietary restriction.

  17. Evolutionary multiobjective design of a flexible caudal fin for robotic fish.

    PubMed

    Clark, Anthony J; Tan, Xiaobo; McKinley, Philip K

    2015-12-01

    Robotic fish accomplish swimming by deforming their bodies or other fin-like appendages. As an emerging class of embedded computing system, robotic fish are anticipated to play an important role in environmental monitoring, inspection of underwater structures, tracking of hazardous wastes and oil spills, and the study of live fish behaviors. While integration of flexible materials (into the fins and/or body) holds the promise of improved swimming performance (in terms of both speed and maneuverability) for these robots, such components also introduce significant design challenges due to the complex material mechanics and hydrodynamic interactions. The problem is further exacerbated by the need for the robots to meet multiple objectives (e.g., both speed and energy efficiency). In this paper, we propose an evolutionary multiobjective optimization approach to the design and control of a robotic fish with a flexible caudal fin. Specifically, we use the NSGA-II algorithm to investigate morphological and control parameter values that optimize swimming speed and power usage. Several evolved fin designs are validated experimentally with a small robotic fish, where fins of different stiffness values and sizes are printed with a multi-material 3D printer. Experimental results confirm the effectiveness of the proposed design approach in balancing the two competing objectives. PMID:26601975

  18. Long Series Multi-objectives Optimal Operation of Water And Sediment Regulation

    NASA Astrophysics Data System (ADS)

    Bai, T.; Jin, W.

    2015-12-01

    A secondary suspended river has formed in the Inner Mongolia reaches, and the safety of the reach and the ecological health of the river are threatened. Therefore, research on water-sediment regulation by cascade reservoirs is urgent and necessary. Against this background, multi-objective water and sediment regulation is studied in this paper. Firstly, multi-objective optimal operation models of the Longyangxia and Liujiaxia cascade reservoirs are established. Secondly, based on constraint handling and feasible-search-space techniques, the Non-dominated Sorting Genetic Algorithm (NSGA-II) is greatly improved to solve the model. Thirdly, four different scenarios are set. It is demonstrated that: (1) scatter diagrams of the Pareto front are obtained, showing the optimal solutions for power generation maximization and sediment maximization as well as the global equilibrium solutions between the two; (2) the potential of water-sediment regulation by the Longyangxia and Liujiaxia cascade reservoirs is analyzed; (3) with increasing water supply in the future, a conflict between water supply and water-sediment regulation arises, and the sustainability of water and sediment regulation will be negatively affected by the decreasing transferable water in the cascade reservoirs; (4) the transfer project has little benefit for water-sediment regulation. The research results have important practical significance for water-sediment regulation by cascade reservoirs in the Upper Yellow River and for constructing a water and sediment control system in the whole Yellow River basin.

  19. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions according to the result of auction processing, and then executes partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and shortens execution time. Finally, we compare LMCpri with a cloud-assisted architecture, and the results reveal that LMCpri presents a clear performance advantage over the cloud-assisted architecture. PMID:27419854
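
    The dynamic priority queue at the heart of LMCpri can be illustrated with Python's standard heapq module. The sketch below is a generic stand-in under our own class and job names, not the paper's auction-driven implementation: lower priority numbers are served first, and ties fall back to arrival order.

        import heapq
        import itertools

        class PriorityJobQueue:
            """Minimal cloudlet-style job queue: jobs with a smaller
            priority number are dispatched first; ties are broken by
            arrival order so equal-priority jobs stay FIFO."""

            def __init__(self):
                self._heap = []
                self._arrival = itertools.count()

            def submit(self, priority, job):
                heapq.heappush(self._heap, (priority, next(self._arrival), job))

            def next_job(self):
                return heapq.heappop(self._heap)[2]

        q = PriorityJobQueue()
        q.submit(2, "batch upload")
        q.submit(1, "urgent render")   # e.g., a requester who won the auction
        q.submit(2, "video transcode")
        print(q.next_job())            # -> urgent render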

  1. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation.

  2. New model for sustainable management of pressurized irrigation networks. Application to Bembézar MD irrigation district (Spain).

    PubMed

    Carrillo Cobo, M T; Camacho Poyato, E; Montesinos, P; Rodríguez Díaz, J A

    2014-03-01

    Pressurized irrigation networks require large amounts of energy for their operation, which is linked to significant greenhouse gas (GHG) emissions. In recent years, several management strategies have been developed to reduce energy consumption in the agricultural sector. One strategy is the reduction of the water supplied for irrigation, but it implies a reduction in crop yields and farmers' profits. In this work, a new methodology is developed for the sustainable management of irrigation networks considering environmental and economic criteria. The multiobjective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been selected to obtain the optimum irrigation pattern that would reduce GHG emissions and increase profits. This methodology has been applied to the Bembézar Margen Derecha (BMD) irrigation district (Spain). Irrigation patterns that reduce GHG emissions or increase actual profits are obtained. The best irrigation pattern reduces current GHG emissions by 8.56% while increasing actual profits by 14.56%. These results confirm that simultaneous improvements in environmental and economic factors are possible.

  3. Optimization of PHEV Power Split Gear Ratio to Minimize Fuel Consumption and Operation Cost

    NASA Astrophysics Data System (ADS)

    Li, Yanhe

    A Plug-in Hybrid Electric Vehicle (PHEV) is a vehicle powered by a combination of an internal combustion engine and an electric motor with a battery pack. The battery pack can be charged by plugging the vehicle into the electric grid and by using excess engine power. The research activity performed in this thesis focused on the development of an innovative approach for optimizing the PHEV Power Split Device (PSD) gear ratio with the aim of minimizing vehicle operation costs. Three research activity lines were followed: • Activity 1: PHEV control strategy optimization using Dynamic Programming (DP) and the development of a PHEV rule-based control strategy based on the DP results. • Activity 2: PHEV rule-based control strategy parameter optimization using the Non-dominated Sorting Genetic Algorithm (NSGA-II). • Activity 3: A comprehensive analysis of the single-mode PHEV architecture to offer an innovative approach for optimizing the PHEV PSD gear ratio.

  6. Prediction of a Flash Flood in Complex Terrain. Part II: A Comparison of Flood Discharge Simulations Using Rainfall Input from Radar, a Dynamic Model, and an Automated Algorithmic System.

    NASA Astrophysics Data System (ADS)

    Yates, David N.; Warner, Thomas T.; Leavesley, George H.

    2000-06-01

    Three techniques were employed for the estimation and prediction of precipitation from a thunderstorm that produced a flash flood in the Buffalo Creek watershed located in the mountainous Front Range near Denver, Colorado, on 12 July 1996. The techniques included 1) quantitative precipitation estimation using the National Weather Service's Weather Surveillance Radar-1988 Doppler and the National Center for Atmospheric Research's S-band, dual-polarization radars, 2) quantitative precipitation forecasting utilizing a dynamic model, and 3) quantitative precipitation forecasting using an automated algorithmic system for tracking thunderstorms. Rainfall data provided by these various techniques at short timescales (6 min) and at fine spatial resolutions (150 m to 2 km) served as input to a distributed-parameter hydrologic model for analysis of the flash flood. The quantitative precipitation estimates from the weather radar demonstrated their ability to aid in simulating a watershed's response to precipitation forcing from small-scale, convective weather in complex terrain. That is, with the radar-based quantitative precipitation estimates employed as input, the simulated peak discharge was similar to that estimated. The dynamic model showed the most promise in providing a significant forecast lead time for this flash-flood event. The algorithmic system did not show as much skill in comparison with the dynamic model in providing precipitation forcing to the hydrologic model. The discharge forecasts based on the dynamic-model and algorithmic-system inputs point to the need to improve the ability to forecast convective storms, especially if models such as these eventually are to be used in operational flood forecasting.

  7. Library of Continuation Algorithms

    2005-03-01

    LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
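
    To make the idea of parameter continuation concrete, here is a toy natural-parameter continuation loop in Python (our own naming; LOCA itself is C++ and also offers more robust schemes such as pseudo-arclength continuation, which handles the turning points where this simple version fails): step the parameter, and reuse the previous solution as the Newton starting guess.

        import numpy as np

        def continuation(f, dfdx, x0, lambdas, tol=1e-10, max_newton=20):
            """Trace a solution branch x(lambda) of f(x, lambda) = 0 by
            natural-parameter continuation; f and dfdx are scalar-valued
            here for brevity."""
            branch, x = [], x0
            for lam in lambdas:
                for _ in range(max_newton):       # Newton's method at fixed lambda
                    r = f(x, lam)
                    if abs(r) < tol:
                        break
                    x -= r / dfdx(x, lam)
                branch.append(x)
            return branch

        # Follow the positive root of x**2 - lam = 0 from lam = 1 to lam = 4.
        path = continuation(lambda x, lam: x**2 - lam,
                            lambda x, lam: 2.0 * x,
                            x0=1.0, lambdas=np.linspace(1.0, 4.0, 7))
        print(path[-1])   # ~2.0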

  8. Photosystem II

    ScienceCinema

    James Barber

    2016-07-12

    James Barber, Ernst Chain Professor of Biochemistry at Imperial College, London, gives a BSA Distinguished Lecture titled, "The Structure and Function of Photosystem II: The Water-Splitting Enzyme of Photosynthesis."

  9. I. Thermal evolution of Ganymede and implications for surface features. II. Magnetohydrodynamic constraints on deep zonal flow in the giant planets. III. A fast finite-element algorithm for two-dimensional photoclinometry

    SciTech Connect

    Kirk, R.L.

    1987-01-01

    Thermal evolution of Ganymede from a hot start is modeled. On cooling, ice I forms above the liquid H2O and dense ices at higher entropy form below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H2 (Jupiter, Saturn) and H2O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e. reconstruction of a surface from its image, is formulated in terms of finite elements, and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.

  10. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  11. Algorithm-development activities

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.

    1994-01-01

    The task of algorithm development at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.

  12. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  13. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
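
    As a concrete illustration of how wider explicit stencils buy higher order of accuracy, the Python snippet below (a generic textbook formula, not one of the paper's schemes) implements the fourth-order central difference for a first derivative and checks it against a known answer.

        import numpy as np

        def d1_central4(f, x, h):
            """Fourth-order-accurate central difference for f'(x);
            the leading truncation error scales as h**4."""
            return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

        # d/dx sin(x) = cos(x); at x = 1 with h = 1e-2 the error is ~3e-10.
        print(abs(d1_central4(np.sin, 1.0, 1e-2) - np.cos(1.0)))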

  14. SAGE II

    Atmospheric Science Data Center

    2016-02-16

    ... of stratospheric aerosols, ozone, nitrogen dioxide, water vapor and cloud occurrence by mapping vertical profiles and calculating ... (i.e. MLS and SAGE III versus HALOE). Fixed various bugs. Details are in the SAGE II V7.00 Release Notes.

  15. Neural network cloud screening algorithm Part II: global synthetic cases using high resolution spectra in O2 and CO2 near infrared absorption bands in nadir and sun glint

    NASA Astrophysics Data System (ADS)

    Taylor, Thomas E.; O'Brien, D. M.

    2010-03-01

    In Part I, a set of two-layer feed-forward neural networks, trained via back propagation of sensitivities, was applied to a synthetic set of radiances in micro-windows of the near-infrared to make predictions of cloud water (cw), cloud ice (ci), effective scattering heights of cloud water and ice (pcw and pci, respectively), and column water vapor (w). A threshold test, using 2 g m^-2 for cloud water and 10 g m^-2 for cloud ice, was applied to the retrieved values to distinguish clear from cloudy scenes. In that work the discussion was limited to the nadir viewing geometry and was applied only to land surfaces, excluding deserts and snow and ice fields. Part II describes the extension to a set of high-resolution radiances, as might be measured by a grating spectrometer from space, in both nadir and sun glint viewing geometries. Furthermore, results are given for all land surface types as well as scenes over ocean. Prior to neural network training, a Principal Component Analysis (PCA) is applied to the high-resolution spectra, which consist of three bands centered at 0.76 μm (O2 A-band), 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), each with 1016 channels. Analysis shows that the five leading EOFs together capture 99.9% of the variance in each band, reducing the data size by more than two orders of magnitude. Application of the trained neural networks to an independent data set, generated using CloudSat and Calipso cloud and aerosol profiles as well as carbon dioxide profiles from a chemical transport model, was used to quantify the skill of the retrieval. The results vary significantly with surface type, viewing mode and cloud properties. Accuracies range from 7% to 100% (typically close to 75%), with confidence levels almost always greater than 90%.
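
    The PCA compression step described above, projecting 1016-channel spectra onto a handful of leading EOFs, is straightforward to sketch. The Python version below uses our own naming and a plain SVD; it is illustrative rather than the authors' processing code.

        import numpy as np

        def leading_eofs(spectra, n_eof=5):
            """Project spectra of shape (n_samples, n_channels) onto their
            n_eof leading EOFs. Returns the EOFs, the per-sample scores,
            and the fraction of total variance the truncation captures."""
            anomalies = spectra - spectra.mean(axis=0)
            U, s, Vh = np.linalg.svd(anomalies, full_matrices=False)
            var_frac = (s[:n_eof] ** 2).sum() / (s ** 2).sum()
            scores = anomalies @ Vh[:n_eof].T    # e.g. 1016 channels -> 5 numbers
            return Vh[:n_eof], scores, var_frac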

  16. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design-space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment, and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.

  17. Optimum design of phononic crystal perforated plate structures for widest bandgap of fundamental guided wave modes and maximized in-plane stiffness

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Abhary, Kazem; Uddin, Mohammad; Ng, Ching-Tai

    2016-04-01

    This paper presents a topology optimization of a single-material phononic crystal plate (PhP) to be produced by perforation of a uniform background plate. The primary objective of this optimization study is to explore the widest exclusive bandgaps of fundamental (first-order) symmetric or asymmetric guided wave modes as well as the widest complete bandgap of mixed wave modes (symmetric and asymmetric). However, in the case of single-material porous phononic crystals the bandgap width essentially depends on the structural integration introduced by the achieved unit-cell topology. Thinner connections between scattering segments (i.e. lower effective stiffness) generally lead to (i) a wider bandgap due to enhanced interfacial reflections, and (ii) a lower bandgap frequency range due to lower wave speed. In other words, a higher relative bandgap width (RBW) is produced by a topology with lower effective stiffness. Hence, in order to study the bandgap efficiency of the PhP unit cell with respect to its structural worthiness, the in-plane stiffness is incorporated into the optimization algorithm as an opposing objective to be maximized. Thick and relatively thin polysilicon PhP unit cells with square symmetry are studied. The non-dominated sorting genetic algorithm NSGA-II is employed for this multi-objective optimization problem, and modal band analysis of individual topologies is performed through the finite element method. Specialized topology initiation, evaluation and filtering are applied to achieve refined feasible topologies without penalizing the randomness of the genetic algorithm (GA) and the diversity of the search space. Selected Pareto topologies are presented, and the gradient of RBW and elastic properties between the two Pareto-front extremes is investigated. Chosen intermediate Pareto topologies, even though they are not the extreme topologies with the widest bandgap, show superior bandgap efficiency compared with the widest-bandgap topologies of asymmetric guided waves reported in other works available in the literature.

  18. PORT II

    NASA Technical Reports Server (NTRS)

    Muniz, Beau

    2009-01-01

    One unique project that the Prototype Lab worked on was PORT I (Post-landing Orion Recovery Test). PORT is designed to test and develop the system and components needed to recover the Orion capsule once it splashes down in the ocean. PORT II is designated as a follow-up to PORT I that will utilize a mock-up pressure vessel that is spatially comparable to the final Orion capsule.

  19. Semioptimal practicable algorithmic cooling

    NASA Astrophysics Data System (ADS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-04-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  1. BORE II

    SciTech Connect

    2015-08-01

    Bore II, co-developed by Berkeley Lab researchers Frank Hale, Chin-Fu Tsang, and Christine Doughty, provides vital information for solving water quality and supply problems and for improving remediation of contaminated sites. Termed "hydrophysical logging," this technology is based on the concept of measuring repeated depth profiles of fluid electric conductivity in a borehole that is pumping. As fluid enters the wellbore, its distinct electric conductivity causes peaks in the conductivity log that grow and migrate upward with time. Analysis of the evolution of the peaks enables characterization of groundwater flow distribution more quickly, more cost effectively, and with higher resolution than ever before. Combining the unique interpretation software Bore II with advanced downhole instrumentation (the hydrophysical logging tool), the method quantifies inflow and outflow locations, their associated flow rates, and the basic water quality parameters of the associated formation waters (e.g., pH, oxidation-reduction potential, temperature). In addition, when applied in conjunction with downhole fluid sampling, Bore II makes possible a complete assessment of contaminant concentration within groundwater.

  2. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  3. Competing Sudakov veto algorithms

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2016-07-01

    We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
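
    The basic veto algorithm analyzed in the paper can be stated compactly: to generate the next emission scale t with true density f(t) under a Sudakov (no-emission) factor, draw trial scales from a simpler overestimate g(t) >= f(t) and accept each trial with probability f(t)/g(t). A minimal sketch with a constant overestimate follows; f, g_max and the scales are illustrative assumptions.

        import math
        import random

        def veto(t_start, t_cut, f, g_max):
            """Sample the next scale below t_start with the Sudakov veto algorithm.

            f: true emission density in t; g_max: constant with g_max >= f(t).
            Returns the accepted scale, or None if evolution falls below t_cut.
            """
            t = t_start
            while True:
                # sample from the overestimate: no-emission probability
                # between scales is exp(-g_max * (t_prev - t))
                t = t + math.log(random.random()) / g_max
                if t <= t_cut:
                    return None                    # no emission above the cutoff
                if random.random() < f(t) / g_max:
                    return t                       # accepted: distributed as f

        # example: f(t) = 0.5/(1+t) is bounded by g_max = 0.5 for t >= 0
        print(veto(10.0, 0.1, lambda t: 0.5 / (1.0 + t), 0.5))

    Because each trial is accepted with probability f/g, the accepted scales follow f with its Sudakov factor regardless of the overestimate chosen; a tighter g simply wastes fewer trials, which is the performance question the paper examines.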

  4. Progress in AMSR Snow Algorithm Development

    NASA Technical Reports Server (NTRS)

    Chang, Alfred; Koike, Toshio

    1998-01-01

    Advanced Microwave Scanning Radiometer (AMSR) will be flown on board the Japanese Advanced Earth Observing Satellite-II (ADEOS-II) and the United States Earth Observation System (EOS) PM-1 satellite. AMSR is a passive microwave radiometer with frequencies ranging from 6.9 GHz to 89 GHz. It scans conically with a constant incidence angle of 55 deg at the Earth's surface. The swath width is about 1600 km. With a large antenna, AMSR will provide the best spatial resolution of any multi-frequency radiometer flown in space. This provides an opportunity to improve snow parameter retrieval. Accurate determination of snow parameters from space is a challenging effort. Over the years, many different techniques have been used to account for complicated snow-pack parameters such as density, stratigraphy, grain size, and temperature variation. Forest type, fractional forest cover and land use type also need to be considered in developing an improved retrieval algorithm. However, snow is a dynamic variable: snow-pack parameters keep changing once the snow is deposited on the Earth's surface. Currently, NASDA and NASA are developing AMSR snow retrieval algorithms. These algorithms are now being carefully tested and evaluated using SSM/I data. Due to the limited snow-pack data available for comparison, this activity is progressing slowly. However, it is clear that in order to improve the snow retrieval algorithm, it is necessary to model the metamorphism history of the snow-pack.

  5. Algorithm That Synthesizes Other Algorithms for Hashing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2010-01-01

    An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear, monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively for a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated; in a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
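
    The first family of subalgorithms amounts to searching for a shift and mask under which every key in the static set maps to a distinct value. A brute-force sketch of that search is shown below (illustrative only; the synthesized production code would embed the found constants rather than search at run time).

        def find_shift_mask(keys, max_shift=32, max_mask_bits=16):
            """Search for (shift, mask) making (k >> shift) & mask unique per key."""
            for shift in range(max_shift):
                for bits in range(1, max_mask_bits + 1):
                    mask = (1 << bits) - 1
                    mapped = {(k >> shift) & mask for k in keys}
                    if len(mapped) == len(keys):   # collision-free: a perfect map
                        return shift, mask
            return None                            # no solution in this family

        keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
        print(find_shift_mask(keys))               # e.g. (0, 255): low byte suffices

    Once such constants are found, membership testing is a shift, a mask and a table look-up, which is how the synthesized algorithms achieve the constant-time guarantee described above.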

  6. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  7. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.

    2013-09-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.

  8. Cyclic cooling algorithm

    SciTech Connect

    Rempp, Florian; Mahler, Guenter; Michel, Mathias

    2007-09-15

    We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact we calculate the density matrix of the system after a given number of applications of the algorithm.

  9. Firefly algorithm with chaos

    NASA Astrophysics Data System (ADS)

    Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.

    2013-01-01

    A recently developed metaheuristic optimization algorithm, the firefly algorithm (FA), mimics the social behavior of fireflies based on their flashing and attraction characteristics. In the present study, we introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
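
    A minimal illustration of the idea: a logistic map replaces the fixed randomization weight alpha in the standard firefly update, so the step size varies chaotically from iteration to iteration. Parameter values and the sphere benchmark are illustrative assumptions, not the paper's settings.

        import numpy as np

        def chaotic_firefly(obj, dim=2, n=20, iters=100, beta0=1.0, gamma=1.0):
            rng = np.random.default_rng(1)
            x = rng.uniform(-5, 5, size=(n, dim))
            alpha = 0.7                                  # seed of the logistic map
            for _ in range(iters):
                alpha = 4.0 * alpha * (1.0 - alpha)      # chaotic (logistic) map
                f = np.apply_along_axis(obj, 1, x)
                for i in range(n):
                    for j in range(n):
                        if f[j] < f[i]:                  # move i toward brighter j
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) \
                                    + alpha * rng.normal(size=dim)
            f = np.apply_along_axis(obj, 1, x)
            return x[np.argmin(f)]

        print(chaotic_firefly(lambda v: np.sum(v ** 2)))  # sphere benchmark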

  10. Network-Control Algorithm

    NASA Technical Reports Server (NTRS)

    Chan, Hak-Wai; Yan, Tsun-Yee

    1989-01-01

    Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.

  11. New stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo

    1999-05-01

    This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.

  12. Cloud model bat algorithm.

    PubMed

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425

  13. OpenEIS Algorithms

    2013-07-29

    The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.

  14. The Superior Lambert Algorithm

    NASA Astrophysics Data System (ADS)

    der, G.

    2011-09-01

    Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
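
    The robustness claimed above comes in part from the Laguerre iteration. To illustrate that iteration on a simpler, standard problem (this is not the lambert2 code), the sketch below applies Laguerre's root-finding scheme, with the conventional fixed degree n = 5, to Kepler's equation E - e sin E = M.

        import math

        def kepler_laguerre(M, e, tol=1e-12, n=5, max_iter=50):
            """Solve E - e*sin(E) = M for the eccentric anomaly E."""
            E = M if e < 0.8 else math.pi          # standard starting guess
            for _ in range(max_iter):
                f = E - e * math.sin(E) - M
                fp = 1.0 - e * math.cos(E)         # f'
                fpp = e * math.sin(E)              # f''
                disc = abs((n - 1) ** 2 * fp * fp - n * (n - 1) * f * fpp)
                root = math.sqrt(disc)
                # choose the sign that maximizes the denominator magnitude
                denom = fp + root if abs(fp + root) > abs(fp - root) else fp - root
                dE = n * f / denom
                E -= dE
                if abs(dE) < tol:
                    return E
            return E

        print(kepler_laguerre(M=2.0, e=0.7))

    The guarded discriminant and the sign choice are what give Laguerre-style iterations their robustness to poor starting guesses, the same property exploited above.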

  15. Analysis of estimation algorithms for CDTI and CAS applications

    NASA Technical Reports Server (NTRS)

    Goka, T.

    1985-01-01

    Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y), range, and altitude estimation algorithms. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
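
    To give a flavor of the kind of estimator evaluated in such studies, the sketch below implements a generic alpha-beta filter that smooths noisy range measurements and estimates range rate; it is an illustrative stand-in, not the Enhanced TCAS II algorithms, and the gains and data are assumptions.

        def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.1):
            """Track position and velocity from noisy scalar measurements."""
            x, v = measurements[0], 0.0
            estimates = []
            for z in measurements[1:]:
                x_pred = x + v * dt              # predict ahead one sample
                r = z - x_pred                   # measurement residual
                x = x_pred + alpha * r           # position update
                v = v + (beta / dt) * r          # velocity update
                estimates.append((x, v))
            return estimates

        # noisy range samples (illustrative): true range closing at about -10 m/s
        zs = [1000.0, 991.2, 979.5, 971.1, 958.8, 951.0]
        for x, v in alpha_beta_track(zs, dt=1.0):
            print(f"range {x:7.1f} m   range-rate {v:6.1f} m/s")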

  16. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    SciTech Connect

    Grant, C W; Lenderman, J S; Gansemer, J D

    2011-02-24

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect revised deliverables resulting from delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  17. Pareto Optimization Identifies Diverse Set of Phosphorylation Signatures Predicting Response to Treatment with Dasatinib

    PubMed Central

    Klammer, Martin; Dybowski, J. Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2015-01-01

    Multivariate biomarkers that can predict the effectiveness of targeted therapy in individual patients are highly desired. Previous biomarker discovery studies have largely focused on the identification of single biomarker signatures, aimed at maximizing prediction accuracy. Here, we present a different approach that identifies multiple biomarkers by simultaneously optimizing their predictive power, number of features, and proximity to the drug target in a protein-protein interaction network. To this end, we incorporated NSGA-II, a fast and elitist multi-objective optimization algorithm that is based on the principle of Pareto optimality, into the biomarker discovery workflow. The method was applied to quantitative phosphoproteome data of 19 non-small cell lung cancer (NSCLC) cell lines from a previous biomarker study. The algorithm successfully identified a total of 77 candidate biomarker signatures predicting response to treatment with dasatinib. Through filtering and similarity clustering, this set was trimmed to four final biomarker signatures, which then were validated on an independent set of breast cancer cell lines. All four candidates reached the same good prediction accuracy (83%) as the originally published biomarker. Although the newly discovered signatures were diverse in their composition and in their size, the central protein of the originally published signature — integrin β4 (ITGB4) — was also present in all four Pareto signatures, confirming its pivotal role in predicting dasatinib response in NSCLC cell lines. In summary, the method presented here allows for a robust and simultaneous identification of multiple multivariate biomarkers that are optimized for prediction performance, size, and relevance. PMID:26083411

  18. Optimal design of tunable phononic bandgap plates under equibiaxial stretch

    NASA Astrophysics Data System (ADS)

    Hedayatrasa, Saeid; Abhary, Kazem; Uddin, M. S.; Guest, James K.

    2016-05-01

    The design and application of phononic crystal (PhCr) acoustic metamaterials has been a topic of tremendous interest in the last decade due to their promising capabilities to manipulate acoustic and elastodynamic waves. Phononic controllability of waves through a particular PhCr is limited to the spectrum located within its fixed bandgap frequency. Hence the ability to tune a PhCr is desired, to add functionality over a variable bandgap frequency or for switchability. Deformation-induced bandgap tunability of elastomeric PhCr solids and plates with prescribed topology has been studied by other researchers. In principle, the internal stress state and distorted geometry of a deformed phononic crystal plate (PhP) change its effective stiffness and lead to deformation-induced tunability of the resultant modal band structure. Thus the microstructural topology of a PhP can be altered so that specific tunability features are met through prescribed deformation. In the present study, novel tunable PhPs of this kind with optimized bandgap efficiency-tunability of guided waves are computationally explored and evaluated. Low-loss transmission of guided waves through thin-walled structures makes them ideal for the fabrication of low-loss ultrasound devices and for structural health monitoring purposes. Various tunability targets are defined to enhance or degrade complete bandgaps of plate waves through macroscopic tensile deformation. An elastomeric hyperelastic material is considered, which enables recoverable micromechanical deformation under finite tuning stretch. Phononic tunability through stable deformation of the phononic lattice is specifically required, and so any topology showing buckling instability under the assumed deformation is disregarded. The non-dominated sorting genetic algorithm (GA) NSGA-II is adopted for evolutionary multiobjective topology optimization of the hypothesized tunable PhP with square-symmetric unit-cell, and relevant topologies are analyzed through finite

  19. Development of an uncertainty technique using Bayesian methods to study the impact of climate change and land use change on solutions obtained by the BMP selection and placement optimization tool

    NASA Astrophysics Data System (ADS)

    Maringanti, C.; Chaubey, I.

    2009-12-01

    A multi-objective genetic algorithm (NSGA-II) in combination with a watershed model (the Soil and Water Assessment Tool, SWAT) is used in an optimization framework for making Best Management Practice (BMP) selection and placement decisions to reduce nonpoint source (NPS) pollutants and the net cost of BMP implementation. The Shuffled Complex Evolution Metropolis uncertainty analysis (SCEM-UA) method will be used to quantify the uncertainty of the BMP selection and placement tool. The sources of input uncertainty for the tool include the uncertainties in the estimation of economic costs for the implementation of BMPs and in the field-level SWAT model predictions. The SWAT model predictions are in turn influenced by the model parameters and the input climate forcing, such as precipitation and temperature, which are affected by the changing climate and the changing land use in the watershed. The optimization tool is also influenced by the operational parameters of the genetic algorithm. The SCEM-UA method will be initiated using a uniform distribution over the range of the model parameters and the input sources of uncertainty to estimate the posterior probability distribution of the model response variables. This methodology will be applied to estimate the uncertainty in BMP selection and placement in the Wildcat Creek Watershed located in north-central Indiana. Nitrogen, phosphorus, sediment, and pesticides are the NPS pollutants that will be reduced through the implementation of BMPs in the watershed. The uncertainty bounds around the Pareto-optimal fronts after optimization will give watershed management groups a clear insight into how the desired water quality goals could realistically be met for the least amount of money available for BMP implementation in the watershed.

  20. SLAP lesions: a treatment algorithm.

    PubMed

    Brockmeyer, Matthias; Tompkins, Marc; Kohn, Dieter M; Lorbach, Olaf

    2016-02-01

    Tears of the superior labrum involving the biceps anchor are a common entity, especially in athletes, and may highly impair shoulder function. If conservative treatment fails, successful arthroscopic repair of symptomatic SLAP lesions has been described in the literature, particularly for young athletes. However, the results in throwing athletes are less successful, with a significant proportion of patients who will not regain their pre-injury level of performance. The clinical results of SLAP repairs in middle-aged and older patients are mixed, with worse results and higher revision rates as compared to younger patients. In this population, tenotomy or tenodesis of the biceps tendon is a viable alternative to SLAP repair in order to improve clinical outcomes. The present article introduces a treatment algorithm for SLAP lesions based upon the recent literature as well as the authors' clinical experience. The type of lesion, age of patient, concomitant lesions, and functional requirements, as well as the sport activity level of the patient, need to be considered. Moreover, normal variations and degenerative changes in the SLAP complex have to be distinguished from "true" SLAP lesions in order to improve results and avoid overtreatment. The suggested treatment algorithm is: type I: conservative treatment or arthroscopic debridement; type II: SLAP repair or biceps tenotomy/tenodesis; type III: resection of the unstable bucket-handle tear; type IV: SLAP repair (biceps tenotomy/tenodesis if >50 % of the biceps tendon is affected); type V: Bankart repair and SLAP repair; type VI: resection of the flap and SLAP repair; and type VII: refixation of the anterosuperior labrum and SLAP repair.

  1. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary-point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
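
    The core EPSA mechanism, adapting the mutation step size to the success of previous steps, can be illustrated with a (1+1)-style scheme using a one-fifth-rule-like adaptation (an illustrative relative of EPSAs, not Hart's exact algorithm; all constants are assumptions).

        import random

        def adaptive_pattern_search(obj, x0, sigma=1.0, iters=200):
            """Minimize obj by mutation with success-driven step-size adaptation."""
            x, fx = list(x0), obj(x0)
            for _ in range(iters):
                cand = [xi + random.gauss(0.0, sigma) for xi in x]
                fc = obj(cand)
                if fc < fx:                 # success: accept and expand the step
                    x, fx = cand, fc
                    sigma *= 1.5
                else:                       # failure: contract the step
                    sigma *= 0.9
                if sigma < 1e-8:            # stopping rule: step has collapsed
                    break                   # near a stationary point
            return x, fx, sigma

        sphere = lambda v: sum(t * t for t in v)
        print(adaptive_pattern_search(sphere, [3.0, -2.0]))

    The collapsing step size is also what motivates the stopping rule mentioned above: once sigma shrinks below a threshold, the search is, with high probability, close to a stationary point.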

  2. Efficient Controls for Finitely Convergent Sequential Algorithms

    PubMed Central

    Chen, Wei; Herman, Gabor T.

    2010-01-01

    Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
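
    The flavor of such cyclic feasibility schemes can be conveyed by a simple cyclic-projection sketch for the linear feasibility problem, find x with a_i . x <= b_i for all i: cycle through the constraints and orthogonally project onto each violated half-space. This is a generic illustration in the spirit of ART-type methods, not ART3 itself.

        import numpy as np

        def cyclic_feasibility(A, b, x0, sweeps=100, tol=1e-10):
            """Cyclically project x onto half-spaces a_i . x <= b_i."""
            x = np.asarray(x0, dtype=float)
            for _ in range(sweeps):
                violated = False
                for a, bi in zip(A, b):
                    r = a @ x - bi
                    if r > tol:                      # constraint violated:
                        x = x - (r / (a @ a)) * a    # project onto its bounding
                        violated = True              # hyperplane
                if not violated:
                    return x                         # feasible point found
            return x

        A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
        b = np.array([1.0, 0.0, 0.0])                # x + y <= 1, x >= 0, y >= 0
        print(cyclic_feasibility(A, b, [3.0, 3.0]))

    The paper's transformation keeps the finite-convergence guarantee of such schemes while reordering which constraint is visited next to speed up convergence in practice.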

  3. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm, but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.

  4. Power spectral estimation algorithms

    NASA Technical Reports Server (NTRS)

    Bhatia, Manjit S.

    1989-01-01

    Algorithms to estimate the power spectrum using Maximum Entropy Methods were developed. These algorithms were coded in FORTRAN 77 and implemented on the VAX 780. The important considerations in this analysis are: (1) resolution, i.e., how close in frequency two spectral components can be spaced and still be identified; (2) dynamic range, i.e., how small a spectral peak can be, relative to the largest, and still be observed in the spectra; and (3) variance, i.e., how close the estimated spectrum is to the actual spectrum. The application of the algorithms based on Maximum Entropy Methods to a variety of data shows that these criteria are met quite well. Additional work in this direction would help confirm the findings. All of the software developed was turned over to the technical monitor. A copy of a typical program is included. Some of the actual data and the graphs based on those data are also included.

  5. Optical rate sensor algorithms

    NASA Astrophysics Data System (ADS)

    Uhde-Lacovara, Jo A.

    1989-12-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  6. Optical rate sensor algorithms

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, Jo A.

    1989-01-01

    Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.

  7. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with those of the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
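
    The classic sequential 1/2-approximation these algorithms are measured against is greedy matching: scan edges in order of decreasing weight and keep an edge whenever both endpoints are still free. A short generic sketch (not the paper's multithreaded variant) follows.

        def greedy_matching(edges):
            """edges: list of (weight, u, v). Returns a matching whose total
            weight is at least half the optimum (classic greedy guarantee)."""
            matched = set()
            matching = []
            for w, u, v in sorted(edges, reverse=True):   # heaviest edge first
                if u not in matched and v not in matched:
                    matching.append((u, v, w))
                    matched.update((u, v))
            return matching

        edges = [(4.0, "a", "b"), (3.0, "b", "c"), (2.5, "c", "d"), (1.0, "a", "d")]
        print(greedy_matching(edges))   # picks (a, b) then (c, d)

    The global sort is exactly what makes this algorithm hard to parallelize, which is the tension the paper's locally dominant-edge approach is designed to resolve.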

  8. Automatic design of decision-tree algorithms with evolutionary algorithms.

    PubMed

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.

  9. Contact solution algorithms

    NASA Technical Reports Server (NTRS)

    Tielking, John T.

    1989-01-01

    Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.

  10. Comprehensive eye evaluation algorithm

    NASA Astrophysics Data System (ADS)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM) using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk of age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  11. PSC algorithm description

    NASA Technical Reports Server (NTRS)

    Nobbs, Steven G.

    1995-01-01

    An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.

  12. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  13. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of a quantum logical circuit. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, as composed of a sequence of generic elementary "gates".

  14. Robotic Follow Algorithm

    SciTech Connect

    2005-03-30

    The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.

  15. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  16. The Lure of Algorithms

    ERIC Educational Resources Information Center

    Drake, Michael

    2011-01-01

    One debate that periodically arises in mathematics education is the issue of how to teach calculation more effectively. "Modern" approaches seem to initially favour mental calculation, informal methods, and the development of understanding before introducing written forms, while traditionalists tend to champion particular algorithms. The debate is…

  17. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  18. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
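
    A hybrid GA in the sense described here simply interleaves a local-improvement step with the genetic operators. The sketch below hill-climbs each offspring on a one-max bit-string problem; population size, rates and the fitness function are illustrative assumptions, and the geometric model matching application is not reproduced.

        import random

        def hill_climb(bits, fitness, tries=20):
            """Local search: keep single-bit flips that do not hurt fitness."""
            best = fitness(bits)
            for _ in range(tries):
                i = random.randrange(len(bits))
                bits[i] ^= 1
                f = fitness(bits)
                if f >= best:
                    best = f
                else:
                    bits[i] ^= 1        # revert the flip
            return bits

        def hybrid_step(population, fitness):
            """One generation: selection + crossover, then local search."""
            population.sort(key=fitness, reverse=True)
            parents = population[: len(population) // 2]
            offspring = []
            for _ in range(len(population) - len(parents)):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(a))
                offspring.append(hill_climb(a[:cut] + b[cut:], fitness))
            return parents + offspring

        pop = [[random.randint(0, 1) for _ in range(24)] for _ in range(20)]
        for _ in range(30):
            pop = hybrid_step(pop, sum)      # one-max: fitness = number of 1s
        print(max(sum(ind) for ind in pop))

    The appeal noted in the presentation is that such hybrids can still be analyzed within the simple-GA theoretical framework while the local search sharpens solutions between generations.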

  19. Reactive Collision Avoidance Algorithm

    NASA Technical Reports Server (NTRS)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  20. Design and optimization of a bend-and-sweep compliant mechanism

    NASA Astrophysics Data System (ADS)

    Tummala, Y.; Frecker, M. I.; Wissa, A. A.; Hubbard, J. E., Jr.

    2013-09-01

    A novel contact-aided compliant mechanism called the bend-and-sweep compliant mechanism is presented in this paper. This mechanism has nonlinear stiffness properties in two orthogonal directions. An angled compliant joint (ACJ) is the fundamental element of this mechanism, and the geometric parameters of the ACJs determine the stiffness of the compliant mechanism. This paper presents the design and optimization of the bend-and-sweep compliant mechanism. A multi-objective optimization problem was formulated whose objectives were to maximize or minimize the bending and sweep displacements, depending on the situation, while minimizing the von Mises stress and mass of each mechanism. This optimization problem was solved using NSGA-II (a genetic algorithm). The results of this optimization for a single ACJ during upstroke and downstroke are presented in this paper, along with results for two different loading conditions used during optimization of a single ACJ for upstroke. Finally, optimization results comparing the performance of compliant mechanisms with one and two ACJs are also presented. It can be inferred from these results that the number of ACJs and the design of each ACJ determine the stiffness of the bend-and-sweep compliant mechanism. These mechanisms can be used in various applications. The goal of this research is to improve the performance of ornithopters by passively morphing their wings. In order to achieve a bio-inspired wing gait called the continuous vortex gait, the wings of the ornithopter need to bend and sweep simultaneously. This can be achieved by inserting the bend-and-sweep compliant mechanism into the leading-edge wing spar of the ornithopter.

  1. Discovery of a phosphor for light emitting diode applications and its structural determination, Ba(Si,Al)5(O,N)8:Eu2+.

    PubMed

    Park, Woon Bae; Singh, Satendra Pal; Sohn, Kee-Sun

    2014-02-12

    Most of the novel phosphors that appear in the literature are either a variant of well-known materials or a hybrid material consisting of well-known materials. This situation has led to intellectual property (IP) complications in industry, and several lawsuits have been the result. Therefore, the definition of a novel phosphor for use in light-emitting diodes should be clarified. A recent trend in phosphor-related IP applications has been to focus on the novel crystallographic structure, so that a slight composition variance and/or a hybrid of a well-known material would not qualify from either a scientific or an industrial point of view. In our previous studies, we employed a systematic materials discovery strategy combining heuristic optimization and a high-throughput process to secure the discovery of genuinely novel and brilliant phosphors that would be immediately ready for use in light emitting diodes. Despite such an achievement, this strategy requires further refinement to prove its versatility under any circumstance. To meet such demands, we improved our discovery strategy in the present investigation by incorporating an elitism-involved nondominated sorting genetic algorithm (NSGA-II) that would guarantee the discovery of truly novel phosphors. Using the improved discovery strategy, we discovered an Eu(2+)-doped AB5X8 (A = Sr or Ba, B = Si and Al, X = O and N) phosphor in an orthorhombic structure (A21am) with lattice parameters a = 9.48461(3) Å, b = 13.47194(6) Å, c = 5.77323(2) Å, α = β = γ = 90°, which cannot be found in any of the existing inorganic compound databases. PMID:24437942

  2. Multi-objective design optimization of the transverse gaseous jet in supersonic flows

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Yang, Jun; Yan, Li

    2014-01-01

    The mixing process between the injectant and the supersonic crossflow is one of the important issues for the design of the scramjet engine, and efficient mixing has a great impact on the improvement of combustion efficiency. A hovering vortex is formed between the separation region and the barrel shock wave, and this may be induced by the large negative density gradient. The separation region provides a good mixing area for the injectant and the subsonic boundary layer. In the current study, the transverse injection flow field with a freestream Mach number of 3.5 has been optimized by the non-dominated sorting genetic algorithm (NSGA-II) coupled with a Kriging surrogate model, and the variance analysis method and the extreme difference analysis method have been employed to evaluate the values of the objective functions. The obtained results show that the jet-to-crossflow pressure ratio is the most important design variable for the transverse injection flow field, and that the injectant molecular weight and the slot width should be considered for the mixing process between the injectant and the supersonic crossflow. There exists an optimal penetration height for the mixing efficiency, and its value is about 14.3 mm in the range considered in the current study. A larger penetration height produces a larger total pressure loss, so there must be a tradeoff between these two objective functions. In addition, this study demonstrates that the multi-objective design optimization method with the data mining technique can be used efficiently to explore the relationship between the design variables and the objective functions.
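
    Surrogate-assisted optimization of this kind replaces expensive CFD evaluations with a cheap statistical model that the genetic algorithm queries instead. A minimal sketch of the Kriging (Gaussian process) ingredient using scikit-learn follows; the two-variable objective is a placeholder standing in for the jet flow-field simulation, not the actual CFD model.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # placeholder expensive objective standing in for a CFD evaluation
        expensive = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        rng = np.random.default_rng(3)
        X_train = rng.uniform(0, 1, size=(30, 2))     # sampled design points
        y_train = expensive(X_train)

        # fit the Kriging surrogate
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                                      normalize_y=True)
        gp.fit(X_train, y_train)

        # the optimizer (e.g., NSGA-II) then queries the surrogate, not the solver
        X_new = rng.uniform(0, 1, size=(5, 2))
        mean, std = gp.predict(X_new, return_std=True)
        for m, s in zip(mean, std):
            print(f"predicted objective {m:6.3f} +/- {s:5.3f}")

    The predictive standard deviation is the practical benefit of Kriging over simpler regressions: it tells the optimizer where the surrogate is untrustworthy and new CFD samples are worth adding.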

  3. An efficient algorithm for function optimization: modified stem cells algorithm

    NASA Astrophysics Data System (ADS)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give near-optimal solutions to linear and non-linear problems in many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  4. Algorithm Visualization System for Teaching Spatial Data Algorithms

    ERIC Educational Resources Information Center

    Nikander, Jussi; Helminen, Juha; Korhonen, Ari

    2010-01-01

    TRAKLA2 is a web-based learning environment for data structures and algorithms. The system delivers automatically assessed algorithm simulation exercises that are solved using a graphical user interface. In this work, we introduce a novel learning environment for spatial data algorithms, SDA-TRAKLA2, which has been implemented on top of the…

  5. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
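
    For context, the "linear" barriers discussed above are centralized counter barriers. Here is a minimal sense-reversing variant sketched in Python threads (illustrative only; the Flex/32 implementations in the report are lower-level, and the class structure here is our own):

        import threading

        class SenseBarrier:
            """Centralized (linear) sense-reversing barrier."""
            def __init__(self, n):
                self.n = n
                self.count = 0
                self.sense = False
                self.cond = threading.Condition()

            def wait(self):
                with self.cond:
                    my_sense = not self.sense
                    self.count += 1
                    if self.count == self.n:     # last arrival: flip sense, wake all
                        self.count = 0
                        self.sense = my_sense
                        self.cond.notify_all()
                    else:                        # earlier arrivals: wait for the flip
                        while self.sense != my_sense:
                            self.cond.wait()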

  6. Algorithms, games, and evolution.

    PubMed

    Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh

    2014-07-22

    Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: "What algorithm could possibly achieve all this in a mere three and a half billion years?" In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution.
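
    The MWUA itself is simple to state: keep one weight per strategy, play in proportion to the weights, and scale each weight exponentially by its loss after every round. A generic sketch in Python (the correspondence with the population-genetics equations is the subject of the paper and is not reproduced here; the losses callback and its signature are our assumption):

        import math

        def mwua(losses, n_actions, rounds, eta=0.1):
            """Multiplicative weights: losses(t, p) returns a list of per-action
            losses in [0, 1] for round t given the current mixed strategy p."""
            w = [1.0] * n_actions
            p = [1.0 / n_actions] * n_actions
            total = 0.0
            for t in range(rounds):
                s = sum(w)
                p = [wi / s for wi in w]                        # play proportionally
                loss = losses(t, p)
                total += sum(pi * li for pi, li in zip(p, loss))
                w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
            return p, total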

  7. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.

  8. Adaptive continuous twisting algorithm

    NASA Astrophysics Data System (ADS)

    Moreno, Jaime A.; Negrete, Daniel Y.; Torres-González, Victor; Fridman, Leonid

    2016-09-01

    In this paper, an adaptive continuous twisting algorithm (ACTA) is presented. For the double integrator, ACTA produces a continuous control signal ensuring finite-time convergence of the states to zero. Moreover, the control signal generated by ACTA compensates for the Lipschitz perturbation in finite time, i.e. its value converges to the opposite value of the perturbation. ACTA also keeps its convergence properties even in the case where an upper bound on the derivative of the perturbation exists but is unknown.

  9. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  10. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  11. NOSS altimeter algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Forsythe, R. G.; Mcmillan, J. D.

    1982-01-01

    A description of all algorithms required for altimeter processing is given. Each description includes title, description, inputs/outputs, general algebraic sequences and data volume. All required input/output data files are described and the computer resources required for the entire altimeter processing system were estimated. The majority of the data processing requirements for any radar altimeter of the Seasat-1 type are scoped. Additions and deletions could be made for the specific altimeter products required by other projects.

  12. The Loop Algorithm

    NASA Astrophysics Data System (ADS)

    Evertz, Hans Gerd

    1998-03-01

    Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond-mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.

  13. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA) inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  14. JPSS CGS Tools For Rapid Algorithm Updates

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, JPSS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the ground processing component of both POES and the Defense Meteorological Satellite Program (DMSP) replacement known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS). Both are developed by Raytheon Intelligence and Information Systems (IIS). The Interface Data Processing Segment will process NPOESS Preparatory Project, Joint Polar Satellite System and Defense Weather Satellite System satellite data to provide environmental data products to NOAA and DoD processing centers operated by the United States government. Under NPOESS, Northrop Grumman Aerospace Systems Algorithms and Data Products (A&DP) organization was responsible for the algorithms that produce the EDRs, including their quality aspects. For JPSS, that responsibility has transferred to NOAA's Center for Satellite Applications & Research (STAR). As the Calibration and Validation (Cal/Val) activities move forward following both the NPP launch and subsequent JPSS and DWSS launches, rapid algorithm updates may be required. Raytheon and

  15. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  16. RADFLO physics and algorithms

    SciTech Connect

    Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.

    1995-09-01

    This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.

  17. Evaluating super resolution algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Park, Jong Hyun; Shin, Gun Shik; Lee, Hyun-Seung; Kim, Dong-Hyun; Park, Se Hyeok; Kim, Jaehyun

    2011-01-01

    This study intends to establish a sound testing and evaluation methodology based upon human visual characteristics for assessing image restoration accuracy, in addition to comparing the subjective results with predictions by some objective evaluation methods. In total, six different super resolution (SR) algorithms - such as iterative back-projection (IBP), robust SR, maximum a posteriori (MAP), projections onto convex sets (POCS), a non-uniform interpolation, and frequency domain approach - were selected. The performance comparison between the SR algorithms in terms of their restoration accuracy was carried out both subjectively and objectively. The former methodology relies upon the paired comparison method that involves the simultaneous scaling of two stimuli with respect to image restoration accuracy. For the latter, both conventional image quality metrics and color difference methods are implemented. Consequently, POCS and the non-uniform interpolation outperformed the others for an ideal situation, while restoration-based methods appear more faithful to the HR image in a real-world case where prior information about the blur kernel remains unknown. However, the noise-added image could not be restored successfully by any of those methods. The latest International Commission on Illumination (CIE) standard color difference equation, CIEDE2000, was found to predict the subjective results accurately and outperformed conventional methods for evaluating the restoration accuracy of those SR algorithms.

  18. YAMPA: Yet Another Matching Pursuit Algorithm for compressive sensing

    NASA Astrophysics Data System (ADS)

    Lodhi, Muhammad A.; Voronin, Sergey; Bajwa, Waheed U.

    2016-05-01

    State-of-the-art sparse recovery methods often rely on the restricted isometry property for their theoretical guarantees. However, they cannot explicitly incorporate metrics such as restricted isometry constants within their recovery procedures due to the computational intractability of calculating such metrics. This paper formulates an iterative algorithm, termed yet another matching pursuit algorithm (YAMPA), for recovery of sparse signals from compressive measurements. YAMPA differs from other pursuit algorithms in that: (i) it adapts to the measurement matrix using a threshold that is explicitly dependent on two computable coherence metrics of the matrix, and (ii) it does not require knowledge of the signal sparsity. Performance comparisons of YAMPA against other matching pursuit and approximate message passing algorithms are made for several types of measurement matrices. These results show that while state-of-the-art approximate message passing algorithms outperform other algorithms (including YAMPA) in the case of well-conditioned random matrices, they completely break down in the case of ill-conditioned measurement matrices. On the other hand, YAMPA and comparable pursuit algorithms not only result in reasonable performance for well-conditioned matrices, but their performance also degrades gracefully for ill-conditioned matrices. The paper also shows that YAMPA uniformly outperforms other pursuit algorithms for the case of thresholding parameters chosen in a clairvoyant fashion. Further, when combined with a simple and fast technique for selecting thresholding parameters in the case of ill-conditioned matrices, YAMPA outperforms other pursuit algorithms in the regime of low undersampling, although some of these algorithms can outperform YAMPA in the regime of high undersampling in this setting.
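
    For context, the matching pursuit family that YAMPA extends is built around a greedy select-then-refit loop. The Python sketch below is plain orthogonal matching pursuit (OMP), not YAMPA itself; YAMPA replaces the fixed sparsity input k with a threshold derived from coherence metrics of A, which is not reproduced here:

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit (assumes k >= 1 and normalized columns):
            greedily pick the column most correlated with the residual, then
            re-fit the coefficients on the chosen support by least squares."""
            residual, support = y.astype(float).copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x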

  19. Comparison and improvement of algorithms for computing minimal cut sets

    PubMed Central

    2013-01-01

    Background Constrained minimal cut sets (cMCSs) have recently been introduced as a framework to enumerate minimal genetic intervention strategies for targeted optimization of metabolic networks. Two different algorithmic schemes (adapted Berge algorithm and binary integer programming) have been proposed to compute cMCSs from elementary modes. However, in their original formulation both algorithms are not fully comparable. Results Here we show that by a small extension to the integer program both methods become equivalent. Furthermore, based on well-known preprocessing procedures for integer programming we present efficient preprocessing steps which can be used for both algorithms. We then benchmark the numerical performance of the algorithms in several realistic medium-scale metabolic models. The benchmark calculations reveal (i) that these preprocessing steps can lead to an enormous speed-up under both algorithms, and (ii) that the adapted Berge algorithm outperforms the binary integer approach. Conclusions Generally, both of our new implementations are by at least one order of magnitude faster than other currently available implementations. PMID:24191903
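
    The Berge scheme referred to above enumerates minimal hitting sets incrementally: process the target sets one at a time, extend each current hitting set that misses the new set, and prune non-minimal candidates. A compact Python sketch of the textbook version, without the preprocessing steps the paper introduces (names are ours):

        def minimal_hitting_sets(sets):
            """Berge-style incremental enumeration of all minimal hitting sets."""
            hs = [frozenset()]                   # minimal hitting sets so far
            for target in map(frozenset, sets):
                new = []
                for h in hs:
                    if h & target:               # h already hits the new set
                        new.append(h)
                    else:                        # extend h by each way to hit it
                        new.extend(h | {e} for e in target)
                candidates = set(new)
                hs = [h for h in candidates
                      if not any(g < h for g in candidates)]   # keep only minimal
            return hs

        # e.g. minimal_hitting_sets([{1, 2}, {2, 3}]) yields {2} and {1, 3}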

  20. Design of robust systolic algorithms

    SciTech Connect

    Varman, P.J.; Fussell, D.S.

    1983-01-01

    A primary reason for the susceptibility of systolic algorithms to faults is their strong dependence on the interconnection between the processors in a systolic array. A technique to transform any linear systolic algorithm into an equivalent pipelined algorithm that executes on arbitrary trees is presented. 5 references.

  1. Multipartite entanglement in quantum algorithms

    SciTech Connect

    Bruss, D.; Macchiavello, C.

    2011-05-15

    We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyze the multipartite entanglement properties in the Deutsch-Jozsa, Grover, and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.

  2. Two Meanings of Algorithmic Mathematics.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1984-01-01

    Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…

  3. Algorithm for Constructing Contour Plots

    NASA Technical Reports Server (NTRS)

    Johnson, W.; Silva, F.

    1984-01-01

    General computer algorithm developed for construction of contour plots. Algorithm accepts as input data values at set of points irregularly distributed over plane. Algorithm based on interpolation scheme: points in plane connected by straight-line segments to form set of triangles. Program written in FORTRAN IV.

  4. The clinical algorithm nosology: a method for comparing algorithmic guidelines.

    PubMed

    Pearson, S D; Margolis, C Z; Davis, S; Schreier, L K; Gottlieb, L K

    1992-01-01

    Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline: the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (80%), weighted kappa statistic, k = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs, and to support providers and purchasers in choosing among alternative clinical guidelines.
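
    The interrater agreement above is summarized with the weighted kappa statistic, which corrects for chance agreement and weights disagreements by how far apart the two ratings are. A sketch of the standard computation from a confusion matrix of rating pairs in Python (this is the generic statistic, not anything CAN-specific):

        import numpy as np

        def weighted_kappa(confusion, quadratic=False):
            """Cohen's weighted kappa from a k-by-k confusion matrix of counts."""
            C = np.asarray(confusion, dtype=float)
            k = C.shape[0]
            i, j = np.indices((k, k))
            W = np.abs(i - j) / (k - 1)          # linear disagreement weights
            if quadratic:
                W = W ** 2
            O = C / C.sum()                      # observed proportions
            E = np.outer(O.sum(axis=1), O.sum(axis=0))   # chance-expected proportions
            return 1.0 - (W * O).sum() / (W * E).sum()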

  5. Obstacle Detection Algorithms for Aircraft Navigation: Performance Characterization of Obstacle Detection Algorithms for Aircraft Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Coraor, Lee

    2000-01-01

    The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.

  6. BIKMAS-II: A Knowledge Management System for Biomedical Informatics

    PubMed Central

    López-Alonso, V.; Moreno, L.; Lopez-Campos, G.; Maojo, V.; Martín-Sanchez, F.

    2002-01-01

    We present here BIKMAS-II (Biomedical Informatics Knowledge Management System), a system that allows scientific information to be processed and filtered efficiently. The system aids and assists in some common tasks carried out in a biomedical research unit. We have designed BIKMAS-II as a modular system that can be easily adapted to different information sources and biomedical domains, and that implements an algorithm to decide whether incoming information should be discarded or stored and what to do with it.

  7. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  8. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f_i(x) - t <= 0 for all i is examined. An active set strategy is designed that classifies the functions as active, semi-active, or non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.

  9. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  10. MLP iterative construction algorithm

    NASA Astrophysics Data System (ADS)

    Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.

    1997-04-01

    The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds Hidden Layer Nodes one at a time, separating classes on a pair-wise basis, until the data are projected into a linearly separable space by class. Then MICA trains the Output Layer Nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature selection technique and a hidden node pruning technique.

  11. Online Planning Algorithm

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute and thereby enabling shorter response times and greater autonomy for the system under control.

  12. Contour Error Map Algorithm

    NASA Technical Reports Server (NTRS)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.

  13. FIRE II - Cirrus Data Sets

    Atmospheric Science Data Center

    2013-07-26

    First ISCCP Regional Experiment (FIRE) II ... stratocumulus systems, the radiative properties of these clouds and their interactions. Relevant documents: FIRE Project Guide, FIRE II - Cirrus Home Page, FIRE II - Cirrus Mission Summaries.

  14. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  15. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  16. A computational study of routing algorithms for realistic transportation networks

    SciTech Connect

    Jacob, R.; Marathe, M.V.; Nagel, K.

    1998-12-01

    The authors carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (Transportation Analysis and Simulation System) project. The main focus of the paper is to study how various heuristic and exact solutions and their associated data structures affected the computational performance of the software developed especially for realistic transportation networks. For this purpose the authors have used the Dallas-Fort Worth road network with a very high degree of resolution. The following general results are obtained: (1) they discuss and experimentally analyze various one-to-one shortest path algorithms, which include classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure of the input instances; (2) they describe a number of extensions to the basic shortest path algorithm. These extensions were primarily motivated by practical problems arising in TRANSIMS and ITS (Intelligent Transportation Systems) related technologies. Extensions discussed include: (i) time dependent networks, (ii) multi-modal networks, (iii) networks with public transportation and associated schedules. Computational results are provided to empirically compare the efficiency of various algorithms. The studies indicate that a modified Dijkstra's algorithm is computationally fast and an excellent candidate for use in various transportation planning applications as well as ITS related technologies.
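
    The modified Dijkstra's algorithm favored by the study builds on the standard priority-queue formulation, sketched below in Python; the TRANSIMS-specific modifications (time-dependent arc costs, multi-modal arcs) are not shown:

        import heapq

        def dijkstra(graph, source):
            """graph: {node: [(neighbor, weight), ...]} with non-negative weights.
            Returns shortest-path distances from source."""
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                     # stale queue entry, skip
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist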

  17. Multisensor data fusion algorithm development

    SciTech Connect

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  18. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
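
    As a reminder of the system those faster solvers replace, here is a dense ordinary-kriging sketch in Python with an assumed Gaussian covariance model; a real implementation would use the iterative solvers, tapering, and FMM described in the abstract rather than the direct dense solve below:

        import numpy as np

        def ordinary_kriging(xs, ys, xq, length=1.0):
            """xs: (n, d) sample sites, ys: (n,) values, xq: (d,) query point.
            Gaussian covariance with the given length scale (an assumption)."""
            xs, ys, xq = np.asarray(xs, float), np.asarray(ys, float), np.asarray(xq, float)
            def cov(a, b):
                d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * length ** 2))
            n = len(xs)
            K = np.ones((n + 1, n + 1))          # bordered system: the Lagrange
            K[:n, :n] = cov(xs, xs)              # multiplier row/column enforces
            K[-1, -1] = 0.0                      # weights summing to one
            rhs = np.ones(n + 1)
            rhs[:n] = cov(xs, xq[None, :])[:, 0]
            w = np.linalg.solve(K, rhs)
            return float(w[:n] @ ys)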

  19. Audio detection algorithms

    NASA Astrophysics Data System (ADS)

    Neta, B.; Mansager, B.

    1992-08-01

    Audio information concerning targets generally includes direction, frequencies, and energy levels. One use of audio cueing is to use direction information to help determine where more sensitive visual direction and acquisition sensors should be directed. Generally, use of audio cueing will shorten times required for visual detection, although there could be circumstances where the audio information is misleading and degrades visual performance. Audio signatures can also be useful for helping classify the emanating platform, as well as to provide estimates of its velocity. The Janus combat simulation is the premier high resolution model used by the Army and other agencies to conduct research. This model has a visual detection model which essentially incorporates algorithms as described by Hartman(1985). The model in its current form does not have any sound cueing capability. This report is part of a research effort to investigate the utility of developing such a capability.

  20. Fighting Censorship with Algorithms

    NASA Astrophysics Data System (ADS)

    Mahdian, Mohammad

    In countries such as China or Iran where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  1. Ozone Uncertainties Study Algorithm (OUSA)

    NASA Technical Reports Server (NTRS)

    Bahethi, O. P.

    1982-01-01

    An algorithm to carry out sensitivity, uncertainty, and overall imprecision studies for a set of input parameters to a one-dimensional steady ozone photochemistry model is described. This algorithm can be used to evaluate steady state perturbations due to point source or distributed ejection of H2O, CLX, and NOx, as well as varying the incident solar flux. This algorithm is operational on the IBM OS/360-91 computer at NASA/Goddard Space Flight Center's Science and Applications Computer Center (SACC).

  2. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA)--an O(Λ^κ(ℓ² + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  3. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
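
    The natural baseline for DNA compressors is the trivial packing at exactly 2 bits/base, since there are four bases; DNABIT Compress improves on this, down to 1.58 bits/base, by assigning variable-length bit codes to repeat fragments, which is not reproduced here. A minimal Python sketch of the 2-bit baseline:

        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
        BASE = "ACGT"

        def pack(seq):
            """Pack a DNA string into bytes at exactly 2 bits per base."""
            bits = 0
            for ch in seq:
                bits = (bits << 2) | CODE[ch]
            nbytes = (2 * len(seq) + 7) // 8     # round up to whole bytes
            return len(seq), bits.to_bytes(nbytes, "big")

        def unpack(n, data):
            """Invert pack: recover the n-base string from the packed bytes."""
            bits = int.from_bytes(data, "big")
            return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 3] for i in range(n))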

  4. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  5. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results.

    SciTech Connect

    Swiler, Laura Painton; Eldred, Michael Scott

    2009-09-01

    This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.

  6. Type II universal spacetimes

    NASA Astrophysics Data System (ADS)

    Hervik, S.; Málek, T.; Pravda, V.; Pravdová, A.

    2015-12-01

    We study type II universal metrics of the Lorentzian signature. These metrics simultaneously solve vacuum field equations of all theories of gravitation with the Lagrangian being a polynomial curvature invariant constructed from the metric, the Riemann tensor and its covariant derivatives of an arbitrary order. We provide examples of type II universal metrics for all composite number dimensions. On the other hand, we have no examples for prime number dimensions and we prove the non-existence of type II universal spacetimes in five dimensions. We also present type II vacuum solutions of selected classes of gravitational theories, such as Lovelock, quadratic and L(Riemann) gravities.

  7. Angiotensin II receptor signalling.

    PubMed

    Daniels, Derek; Yee, Daniel K; Fluharty, Steven J

    2007-05-01

    Angiotensin II plays a key role in the regulation of body fluid homeostasis. To correct body fluid deficits that occur during hypovolaemia, an animal needs to ingest both water and electrolytes. Thus, it is not surprising that angiotensin II, which is synthesized in response to hypovolaemia, acts centrally to increase both water and NaCl intake. Here, we review findings relating to the properties of angiotensin II receptors that give rise to changes in behaviour. Data are described to suggest that divergent signal transduction pathways are responsible for separable behavioural responses to angiotensin II, and a hypothesis is proposed to explain how this divergence may map onto neural circuits in the brain.

  8. Unsupervised Clustering of Type II Supernova Light Curves

    NASA Astrophysics Data System (ADS)

    Rubin, Adam; Gal-Yam, Avishay

    2016-09-01

    As new facilities come online, the astronomical community will be provided with extremely large data sets of well-sampled light curves (LCs) of transients. This motivates systematic studies of the LCs of supernovae (SNe) of all types, including the early rising phase. We performed unsupervised k-means clustering on a sample of 59 R-band SN II LCs and find that the rise to peak plays an important role in classifying LCs. Our sample can be divided into three classes: slowly rising (II-S), fast rise/slow decline (II-FS), and fast rise/fast decline (II-FF). We also identify three outliers based on the algorithm. The II-FF and II-FS classes are disjoint in their decline rates, while the II-S class is intermediate and “bridges the gap.” This may explain recent conflicting results regarding II-P/II-L populations. The II-FS class is also significantly less luminous than the other two classes. Performing clustering on the first two principal component analysis components gives equivalent results to using the full LC morphologies. This indicates that Type II LCs could possibly be reduced to two parameters. We present several important caveats to the technique, and find that the division into these classes is not fully robust. Moreover, these classes have some overlap, and are defined in the R band only. It is currently unclear if they represent distinct physical classes, and more data is needed to study these issues. However, we show that the outliers are actually composed of slowly evolving SN IIb, demonstrating the potential of such methods. The slowly evolving SNe IIb may arise from single massive progenitors.
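
    The clustering step itself is standard Lloyd-style k-means applied to light curves resampled onto a common epoch grid. A self-contained numpy sketch (illustrative; the paper's sample selection, interpolation, and the PCA reduction it discusses are omitted):

        import numpy as np

        def kmeans(X, k, iters=100, seed=0):
            """Plain k-means; X is (n_curves, n_epochs) of resampled magnitudes."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                # assign each curve to its nearest center (squared Euclidean)
                labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
                new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
                if np.allclose(new, centers):
                    break
                centers = new
            return labels, centers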

  9. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    algorithms by selecting ranges of the argument omega in which the performance is the fastest. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of the Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible since frequent calls to a subroutine providing this function are made (e.g., numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied in the algorithm previously published [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By suitable selection of the number of abscissas in Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20. Simultaneously, the accuracy of results has not been affected. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function, H(x,omega), were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature. It has been found, however, that the use of Gauss-Legendre quadrature considerably improved the performance of both algorithms. Two conditions have to be satisfied. (i) The number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with accuracy as high as possible. The abscissas and weights are available for N=16, 20, 24, 32, 40, 48, 64, 80, and 96 with accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing procedure ROMBERG. Due to the fact that the
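
    To make the role of the quadrature concrete: one common route to H(x, omega), separate from the two integral representations benchmarked above, is fixed-point iteration on Chandrasekhar's nonlinear integral equation with the integral evaluated by Gauss-Legendre quadrature, so the number of abscissas directly sets the cost per iteration. A hedged Python sketch for isotropic scattering (albedo omega < 1 assumed; names are ours):

        import numpy as np

        def chandrasekhar_H(omega, n=48, max_iter=100000, tol=1e-12):
            """Fixed-point solve of H(mu) = 1 / (1 - mu * I(mu)), where
            I(mu) = int_0^1 (omega/2) H(t) / (mu + t) dt, for albedo omega < 1."""
            x, w = np.polynomial.legendre.leggauss(n)
            mu = 0.5 * (x + 1.0)                 # nodes mapped from [-1, 1] to [0, 1]
            wt = 0.5 * w
            H = np.ones(n)
            for _ in range(max_iter):
                integral = ((omega / 2.0) * (wt * H)[None, :]
                            / (mu[:, None] + mu[None, :])).sum(axis=1)
                H_new = 1.0 / (1.0 - mu * integral)
                if np.max(np.abs(H_new - H)) < tol:
                    return mu, H_new             # H evaluated at the quadrature nodes
                H = H_new
            raise RuntimeError("fixed-point iteration did not converge")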

  10. Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms

    NASA Technical Reports Server (NTRS)

    Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.; Nowak, M. A.

    2010-01-01

    INTEGRAL is one of the few instruments capable of detecting X-rays above 20keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii-lc-extract) and from calculating the pixel illuminated fraction (ii-light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii-light produces meaningful results, although the overall variance of the lightcurves is not preserved.

  11. Algorithm Engineering - An Attempt at a Definition

    NASA Astrophysics Data System (ADS)

    Sanders, Peter

    This paper defines algorithm engineering as a general methodology for algorithmic research. The main process in this methodology is a cycle consisting of algorithm design, analysis, implementation and experimental evaluation that resembles Popper’s scientific method. Important additional issues are realistic models, algorithm libraries, benchmarks with real-world problem instances, and a strong coupling to applications. Algorithm theory with its process of subsequent modelling, design, and analysis is not a competing approach to algorithmics but an important ingredient of algorithm engineering.

  12. Algorithm Calculates Cumulative Poisson Distribution

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
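
    One alternative remedy for the same under/overflow problem is to work in log space: compute the logarithm of each Poisson term with lgamma and combine them with a log-sum-exp. This Python sketch illustrates that route; it is not the temporary-factor method CUMPOIS uses:

        import math

        def poisson_cdf(k, lam):
            """P(X <= k) for X ~ Poisson(lam), lam > 0, computed in log space so
            that neither exp(-lam) nor lam**i under/overflows for large lam."""
            log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                         for i in range(k + 1)]
            m = max(log_terms)                   # log-sum-exp for a stable total
            return math.exp(m + math.log(sum(math.exp(t - m) for t in log_terms)))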

  13. Interpolation algorithms for machine tools

    SciTech Connect

    Burleson, R.R.

    1981-08-01

    There are three types of interpolation algorithms presently used in most numerical control systems: digital differential analyzer, pulse-rate multiplier, and binary-rate multiplier. A method for higher order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
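
    Of the three interpolators listed, the digital differential analyzer is the easiest to illustrate in software: step once per iteration along the major axis while accumulating the fractional position on the minor axis. A toy Python version for straight-line moves between integer endpoints (real NC hardware does this with incremental adder registers):

        def dda_line(x0, y0, x1, y1):
            """Digital differential analyzer for a straight move between integer
            endpoints: one unit step per iteration along the major axis."""
            dx, dy = x1 - x0, y1 - y0
            steps = max(abs(dx), abs(dy))
            if steps == 0:
                return [(x0, y0)]
            x_inc, y_inc = dx / steps, dy / steps
            x, y, points = float(x0), float(y0), []
            for _ in range(steps + 1):
                points.append((round(x), round(y)))
                x += x_inc                       # accumulate fractional position
                y += y_inc
            return points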

  14. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  15. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.

  16. The New Algorithm for Symbolic Network Analysis.

    NASA Astrophysics Data System (ADS)

    Chow, John Tsai-Chiang

    A new and highly efficient tree identification algorithm is derived here for obtaining the determinant and the cofactors of a circuit's node admittance matrix, and hence, for obtaining various symbolic network functions for one-port and two-port reciprocal and nonreciprocal networks, with the network's topological description as its input. The algorithm is so devised that it is practically memory-storage free, and it is simple enough that even a microcomputer can obtain symbolic network functions for a fairly large circuit in a reasonably short time. It is worth noting that the algorithm can handle topological branches with infinite admittance values. Making use of this special feature, we have derived a simple topological model for the ideal operational amplifier, hence providing the ability to obtain various topological formulas of operational amplifier circuits in a reasonable time. By choosing appropriate symbolic network functions, along with some measured transfer function data, the circuit's nominal element values, and a nonlinear-equation solving subroutine, we have constructed a computer program to perform analog circuit fault diagnosis. This program can identify which of a circuit's elements are faulty or out of design tolerances. In the course of this research we have also identified an application to a biological problem, one in which the resistor values of an electrical model of the guinea-pig cochlea can easily be deduced even when some nodes are inaccessible for measurements. All these features have been implemented on a very modest microcomputer, the Apple II. Obviously, a larger computer will not only accomplish the same result faster but also it will be capable of analyzing much larger circuits.

  17. Algorithm for Autonomous Landing

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki

    2011-01-01

Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload size, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This is a measurement of position and velocity (but not of absolute distance), and can avoid obstacles as well as facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with the height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
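
    The time-to-collision computation is described only qualitatively above. One common way to estimate it is from the radial expansion of tracked image features about the focus of expansion; the sketch below assumes the focus of expansion sits at the image origin, and the function and its inputs are illustrative rather than the flight algorithm:

    ```python
    import numpy as np

    def time_to_collision(points, flows):
        """Estimate time-to-collision from radial feature expansion.
        For pure approach toward a surface, a feature at image radius r
        expands with dr/dt = r / tau, so tau = r / (dr/dt)."""
        points = np.asarray(points, dtype=float)   # (N, 2) image coordinates
        flows = np.asarray(flows, dtype=float)     # (N, 2) optical flow
        r = np.linalg.norm(points, axis=1)
        # Radial flow component: projection of the flow onto r-hat
        r_dot = np.einsum('ij,ij->i', flows, points) / np.maximum(r, 1e-9)
        valid = r_dot > 1e-9
        return np.median(r[valid] / r_dot[valid])

    pts = [(10, 0), (0, 20), (-15, 5)]
    flw = [(2, 0), (0, 4), (-3, 1)]
    print(time_to_collision(pts, flw))   # 5.0: everything expands at rate r/5
    ```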

  18. Panniculitides, an algorithmic approach.

    PubMed

    Zelger, B

    2013-08-01

The issue of inflammatory diseases of the subcutis and its mimicries is generally considered a difficult field of dermatopathology. Yet, in my experience, with appropriate biopsies and good clinicopathological correlation, a specific diagnosis of panniculitides can usually be made. Knowledge of some basic anatomic and pathological issues is essential. Anatomically, the panniculus is differentiated into fatty lobules separated by fibrous septa. Pathologically, inflammation of the panniculus is defined and recognized by an inflammatory process which leads to tissue damage and necrosis. Several types of fat necrosis are observed: xanthomatized macrophages in lipophagic necrosis; granular fat necrosis and fat micropseudocysts in liquefactive fat necrosis; mummified adipocytes in "hyalinizing" fat necrosis with/without saponification and/or calcification; and lipomembranous membranes in membranous fat necrosis. In an algorithmic approach, an inflammatory process identified by the features elaborated above is best analyzed in three steps: recognition of the pattern, then of the subpattern, and finally of the presence and composition of inflammatory cells. Pattern differentiates a mostly septal or mostly lobular distribution at scanning magnification. In the subpattern category one looks for the presence or absence of vasculitis and, if present, the size and nature of the involved blood vessel: arterioles and small arteries or veins; capillaries or postcapillary venules. The third step is to identify the nature of the cells present in the inflammatory infiltrate and, finally, to look for additional histopathologic features that allow a specific final diagnosis in the language of clinical dermatology of disease involving the subcutaneous fat.

  19. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method; the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  20. Testing an earthquake prediction algorithm

    USGS Publications Warehouse

    Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.

    1997-01-01

A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
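
    The 2.87% figure comes from the null hypothesis of randomly assigned predictions. A Monte Carlo sketch of that style of test (the alarm fraction below is an invented placeholder, not the actual space-time coverage of the M8 alarms):

    ```python
    import random

    def null_success_rate(n_events=10, n_hits=8, alarm_fraction=0.4,
                          trials=100_000, seed=1):
        """Fraction of random-prediction trials that match or beat the
        algorithm: each earthquake falls inside an alarm independently
        with probability `alarm_fraction`."""
        rng = random.Random(seed)
        wins = 0
        for _ in range(trials):
            hits = sum(rng.random() < alarm_fraction for _ in range(n_events))
            if hits >= n_hits:
                wins += 1
        return wins / trials

    print(null_success_rate())   # small for 8 of 10, as in the M8 test
    ```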

  1. Algorithmic advances in stochastic programming

    SciTech Connect

    Morton, D.P.

    1993-07-01

Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.

  2. Ovarian Cancer Stage II

    MedlinePlus

Title: Ovarian Cancer Stage II. Description: Three-panel drawing of stage ...

  3. World War II Homefront.

    ERIC Educational Resources Information Center

    Garcia, Rachel

    2002-01-01

    Presents an annotated bibliography that provides Web sites focusing on the U.S. homefront during World War II. Covers various topics such as the homefront, Japanese Americans, women during World War II, posters, and African Americans. Includes lesson plan sources and a list of additional resources. (CMK)

  4. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    PubMed

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-01

We consider several patchy particle models that have been proposed in the literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems. PMID:22697525

  5. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned for the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which has been shown to be more efficient in storage space requirements and query-processing time than other sparse matrix formats for information retrieval applications. Although the inverted index has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of the text collection in a sparse matrix structure is gaining attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves substantial efficiency over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
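
    The CSR kernel at the heart of the design is standard; a plain-Python reference version of the sparse matrix-vector product is shown below (the FPGA design parallelizes the per-row dot products, which this sketch performs sequentially):

    ```python
    import numpy as np

    def csr_matvec(values, col_idx, row_ptr, x):
        """y = A @ x for a matrix stored in Compressed Sparse Row form.
        values  -- nonzero entries, row by row
        col_idx -- column index of each entry in `values`
        row_ptr -- row_ptr[i]:row_ptr[i+1] slices out row i"""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows)
        for i in range(n_rows):
            lo, hi = row_ptr[i], row_ptr[i + 1]
            y[i] = np.dot(values[lo:hi], x[col_idx[lo:hi]])
        return y

    # 3x3 example:  [[1, 0, 2],
    #                [0, 3, 0],
    #                [4, 0, 5]]
    vals = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    cols = np.array([0, 2, 1, 0, 2])
    ptr = np.array([0, 2, 3, 5])
    print(csr_matvec(vals, cols, ptr, np.ones(3)))   # [3. 3. 9.]
    ```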

  6. An algorithm for simulating fracture of cohesive-frictional materials

    SciTech Connect

    Nukala, Phani K; Sampath, Rahul S; Barai, Pallab

    2010-01-01

Fracture of disordered frictional granular materials is dominated by interfacial failure response that is characterized by de-cohesion followed by frictional sliding response. To capture such an interfacial failure response, we introduce a cohesive-friction random fuse model (CFRFM), wherein the cohesive response of the interface is represented by a linear stress-strain response until a failure threshold, which is then followed by a constant response at a threshold lower than the initial failure threshold to represent the interfacial frictional sliding mechanism. This paper presents an efficient algorithm for simulating fracture of such disordered frictional granular materials using the CFRFM. We note that, when applied to perfectly plastic disordered materials, our algorithm is both theoretically and numerically equivalent to the traditional tangent algorithm (Roux and Hansen 1992 J. Physique II 2 1007) used for such simulations. However, the algorithm is general and is capable of modeling discontinuous interfacial response. Our numerical simulations using the algorithm indicate that the local and global roughness exponents (ζ_loc and ζ, respectively) of the fracture surface are equal to each other, and the two-dimensional crack roughness exponent is estimated to be ζ_loc = ζ = 0.69 ± 0.03.

  7. An algorithm for the treatment of schizophrenia in the correctional setting: the Forensic Algorithm Project.

    PubMed

    Buscema, C A; Abbasi, Q A; Barry, D J; Lauve, T H

    2000-10-01

    The Forensic Algorithm Project (FAP) was born of the need for a holistic approach in the treatment of the inmate with schizophrenia. Schizophrenia was chosen as the first entity to be addressed by the algorithm because of its refractory nature and high rate of recidivism in the correctional setting. Schizophrenia is regarded as a spectrum disorder, with symptom clusters and behaviors ranging from positive to negative symptoms to neurocognitive dysfunction and affective instability. Furthermore, the clinical picture is clouded by Axis II symptomatology (particularly prominent in the inmate population), comorbid Axis I disorders, and organicity. Four subgroups of schizophrenia were created to coincide with common clinical presentations in the forensic inpatient facility and also to parallel 4 tracks of intervention, consisting of pharmacologic management and programming recommendations. The algorithm begins with any antipsychotic medication and proceeds to atypical neuroleptic usage, augmentation with other psychotropic agents, and, finally, the use of clozapine as the common pathway for refractory schizophrenia. Outcome measurement of pharmacologic intervention is assessed every 6 weeks through the use of a 4-item subscale, specific for each forensic subgroup. A "floating threshold" of 40% symptom severity reduction on Positive and Negative Syndrome Scale and Brief Psychiatric Rating Scale items over a 6-week period is considered an indication for neuroleptic continuation. The forensic algorithm differs from other clinical practice guidelines in that specific programming in certain prison environments is stipulated. Finally, a social commentary on the importance of state-of-the-art psychiatric treatment for all members of society is woven into the clinical tapestry of this article. PMID:11078038

  8. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.

  9. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
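
    Equation (2) of the abstract, the identity between the normalized geometric mean of logistic outputs and the logistic of the mean input, is easy to check numerically; a small demonstration (variable names are mine):

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def nwgm(p):
        """Normalized geometric mean of probabilities: G / (G + G'),
        where G, G' are the geometric means of p and 1 - p."""
        g = np.exp(np.mean(np.log(p)))
        g_bar = np.exp(np.mean(np.log(1.0 - p)))
        return g / (g + g_bar)

    # For logistic outputs, the NWGM over dropout sub-networks equals the
    # logistic of the mean input, exactly (up to floating-point rounding).
    x = np.random.randn(10_000) * 2.0
    print(nwgm(sigmoid(x)), sigmoid(x.mean()))
    ```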

  10. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  11. NSLS-II RF BEAM POSITION MONITOR

    SciTech Connect

    Vetter, K.; Della Penna, A. J.; DeLong, J.; Kosciuk, B.; Mead, J.; Pinayev, I.; Singh, O.; Tian, Y.; Ha, K.; Portmann, G.; Sebek J.

    2011-03-28

An internal R&D program has been undertaken at BNL to develop a sub-micron RF Beam Position Monitor (BPM) for the NSLS-II 3rd generation light source that is currently under construction. The BPM R&D program started in August 2009. Successful beam tests were conducted 15 months from the start of the program. The NSLS-II RF BPM has been designed to meet all requirements for the NSLS-II injection system and storage ring. Housing the RF BPMs in ±0.1 °C thermally controlled racks provides sub-micron stabilization without active correction. An active pilot tone has been incorporated to aid long-term (8 hr minimum) stabilization to 200 nm RMS. The development of a sub-micron BPM for the NSLS-II has successfully demonstrated performance and stability. A pilot-tone calibration combiner and RF synthesizer have been implemented, and algorithm development is underway. The program is currently on schedule to start production development of 60 injection BPMs starting in the fall of 2011. The production of approximately 250 storage ring BPMs will overlap the injection schedule.

  12. Linearization algorithms for line transfer

    SciTech Connect

    Scott, H.A.

    1990-11-06

    Complete linearization is a very powerful technique for solving multi-line transfer problems that can be used efficiently with a variety of transfer formalisms. The linearization algorithm we describe is computationally very similar to ETLA, but allows an effective treatment of strongly-interacting lines. This algorithm has been implemented (in several codes) with two different transfer formalisms in all three one-dimensional geometries. We also describe a variation of the algorithm that handles saturable laser transport. Finally, we present a combination of linearization with a local approximate operator formalism, which has been implemented in two dimensions and is being developed in three dimensions. 11 refs.

  13. Review of jet reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Atkin, Ryan

    2015-10-01

Accurate jet reconstruction is necessary for understanding the link between the unobserved partons and the jets of observed collimated colourless particles the partons hadronise into. Understanding this link sheds light on the properties of these partons. A review of various common jet algorithms is presented, namely the kT, anti-kT, Cambridge/Aachen, iterative cone and SISCone algorithms, highlighting their strengths and weaknesses. If one is interested in studying jets, the anti-kT algorithm is the best choice; however, if one's interest is in jet substructure, the Cambridge/Aachen algorithm would be the best option.
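
    The algorithms in the kT family differ only in the exponent of the transverse-momentum factor in their distance measures; a sketch of the generalized measures (p = 1 gives kT, p = 0 Cambridge/Aachen, p = -1 anti-kT; the default radius is illustrative):

    ```python
    def pairwise_distance(pt_i, pt_j, delta_r2, p, radius=0.4):
        """Generalized-kT distance d_ij between two pseudojets, with
        delta_r2 the squared rapidity-azimuth separation."""
        return min(pt_i ** (2 * p), pt_j ** (2 * p)) * delta_r2 / radius ** 2

    def beam_distance(pt_i, p):
        """d_iB; when it is the smallest distance in the event,
        pseudojet i is promoted to a final jet."""
        return pt_i ** (2 * p)
    ```

    Clustering then repeatedly merges the pair with the smallest d_ij, or promotes a pseudojet to a jet when its d_iB is smallest.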

  14. Routing Algorithm Exploits Spatial Relations

    NASA Technical Reports Server (NTRS)

    Okino, Clayton; Jennings, Esther

    2004-01-01

    A recently developed routing algorithm for broadcasting in an ad hoc wireless communication network takes account of, and exploits, the spatial relationships among the locations of nodes, in addition to transmission power levels and distances between the nodes. In contrast, most prior algorithms for discovering routes through ad hoc networks rely heavily on transmission power levels and utilize limited graph-topology techniques that do not involve consideration of the aforesaid spatial relationships. The present algorithm extracts the relevant spatial-relationship information by use of a construct denoted the relative-neighborhood graph (RNG).
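
    The RNG construct can be stated in a few lines: an edge between two nodes survives only if no third node is closer to both endpoints than they are to each other. A brute-force sketch (fine for small networks; node coordinates are illustrative):

    ```python
    import itertools, math

    def relative_neighborhood_graph(nodes):
        """Edges (u, v) such that no witness w satisfies
        max(d(u, w), d(v, w)) < d(u, v)."""
        d = math.dist
        edges = []
        for u, v in itertools.combinations(nodes, 2):
            blocked = any(max(d(u, w), d(v, w)) < d(u, v)
                          for w in nodes if w not in (u, v))
            if not blocked:
                edges.append((u, v))
        return edges

    print(relative_neighborhood_graph([(0, 0), (2, 0), (1, 2)]))
    ```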

  15. A universal symmetry detection algorithm.

    PubMed

    Maurer, Peter M

    2015-01-01

Research on symmetry detection focuses on identifying and detecting new types of symmetry. The paper presents an algorithm that is capable of detecting any type of permutation-based symmetry, including many types for which there are no existing algorithms. General symmetry detection is library-based, but symmetries that can be parameterized (i.e., total, partial, rotational, and dihedral symmetry) can be detected without using libraries. In many cases it is faster than existing techniques. Furthermore, it is simpler than most existing techniques, and can easily be incorporated into existing software. The algorithm can also be used with virtually any type of matrix-based symmetry, including conjugate symmetry.

  16. Multiprojection algorithms with generalized projections

    SciTech Connect

    Censor, J.; Elfving, T.

    1994-12-31

Generalized distances give rise to generalized projections onto convex sets. An important question is whether or not one can use, within the same projection algorithm, different types of such generalized projections. This question has practical consequences in the areas of signal detection and image recovery, in situations that can be formulated mathematically as convex feasibility problems. We show here that a simultaneous multiprojection algorithmic scheme converges. Different specific multiprojection algorithms can be derived from our scheme by a judicious choice of the Bregman functions which govern the process. As a by-product of the investigation we also obtain block-iterative schemes for certain kinds of linearly constrained optimization problems.
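
    As a structural sketch of the simultaneous scheme (with plain Euclidean projections standing in for the Bregman projections of the paper; the sets and weights are illustrative):

    ```python
    import numpy as np

    def simultaneous_projections(x0, projections, weights=None, iters=200):
        """Simultaneous projection scheme for a convex feasibility
        problem: each iterate is a weighted average of the projections
        of the current point onto all the sets."""
        x = np.asarray(x0, dtype=float)
        w = weights or [1.0 / len(projections)] * len(projections)
        for _ in range(iters):
            x = sum(wi * p(x) for wi, p in zip(w, projections))
        return x

    # Feasibility: intersect the unit ball with the half-space x + y >= 1
    ball = lambda v: v if np.linalg.norm(v) <= 1 else v / np.linalg.norm(v)
    half = lambda v: v if v.sum() >= 1 else v + (1 - v.sum()) / 2.0
    print(simultaneous_projections(np.array([2.0, -2.0]), [ball, half]))
    ```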

  17. Dynamic Programming Algorithm vs. Genetic Algorithm: Which is Faster?

    NASA Astrophysics Data System (ADS)

    Petković, Dušan

The article compares two different approaches to the optimization problem of large join queries (LJQs). Almost all commercial database systems use a form of the dynamic programming algorithm to solve the ordering of join operations for large join queries, i.e. joins with more than a dozen join operations. A property of the dynamic programming algorithm is that its execution time increases significantly when the number of join operations in a query is large. Genetic algorithms (GAs), as a data mining technique, have been shown to be promising in solving the ordering of join operations in LJQs. Using an existing implementation of a GA, we compare the dynamic programming algorithm implemented in commercial database systems with the corresponding GA module. Our results show that the use of a genetic algorithm is a better solution for optimization of large join queries, i.e., that such a technique outperforms the implementations of the dynamic programming algorithm in conventional query optimization components for very large join queries.
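
    The dynamic programming algorithm in question enumerates optimal plans over subsets of relations; a simplified sketch with an invented cost model (real optimizers cost plans far more carefully), which also makes the exponential growth with join count visible:

    ```python
    from itertools import combinations

    def best_join_order(card, sel):
        """Selinger-style DP over subsets.  card[i] is the cardinality of
        relation i; sel[i][j] is the pairwise join selectivity.  The
        (simplified) cost of a join is the cardinality of its result."""
        n = len(card)
        best = {frozenset([i]): (0.0, float(card[i]), (i,)) for i in range(n)}
        for size in range(2, n + 1):
            for subset in combinations(range(n), size):
                s, entry = frozenset(subset), None
                for last in s:                    # relation joined last
                    cost_r, card_r, order = best[s - {last}]
                    out = card_r * card[last]
                    for i in s - {last}:
                        out *= sel[i][last]
                    if entry is None or cost_r + out < entry[0]:
                        entry = (cost_r + out, out, order + (last,))
                best[s] = entry
        return best[frozenset(range(n))]

    card = [1000, 100, 10]
    sel = [[1, 0.01, 0.1], [0.01, 1, 0.05], [0.1, 0.05, 1]]
    print(best_join_order(card, sel))   # cost 100.0, e.g. order (1, 2, 0)
    ```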

  18. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  19. A method for the dynamic analysis of the heart using a Lyapounov based denoising algorithm.

    PubMed

    Nascimento, Jacinto C; Sanches, João M; Marques, Jorge S

    2006-01-01

    Heart tracking in ultrasound sequences is a difficult task due to speckle noise, low SNR and lack of contrast. Therefore it is usually difficult to obtain robust estimates of the heart cavities since feature detectors produce a large number of outliers. This paper presents an algorithm which combines two main operations: i) a novel denoising algorithm based on the Lyapounov equation and ii) a robust tracker, recently proposed by the authors, based on a model of the outlier features. Experimental results are provided, showing that the proposed algorithm is computationally efficient and leads to accurate estimates of the left ventricle during the cardiac cycle.

  20. Multiple endocrine neoplasia (MEN) II

    MedlinePlus

    Sipple syndrome; MEN II; Pheochromocytoma - MEN II; Thyroid cancer - pheochromocytoma; Parathyroid cancer - pheochromocytoma ... The cause of MEN II is a defect in a gene called RET. This defect causes many tumors to appear in the same ...

  1. Multikernel least mean square algorithm.

    PubMed

    Tobar, Felipe A; Kung, Sun-Yuan; Mandic, Danilo P

    2014-02-01

    The multikernel least-mean-square algorithm is introduced for adaptive estimation of vector-valued nonlinear and nonstationary signals. This is achieved by mapping the multivariate input data to a Hilbert space of time-varying vector-valued functions, whose inner products (kernels) are combined in an online fashion. The proposed algorithm is equipped with novel adaptive sparsification criteria ensuring a finite dictionary, and is computationally efficient and suitable for nonstationary environments. We also show the ability of the proposed vector-valued reproducing kernel Hilbert space to serve as a feature space for the class of multikernel least-squares algorithms. The benefits of adaptive multikernel (MK) estimation algorithms are illuminated in the nonlinear multivariate adaptive prediction setting. Simulations on nonlinear inertial body sensor signals and nonstationary real-world wind signals of low, medium, and high dynamic regimes support the approach. PMID:24807027
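
    The full multikernel algorithm with adaptive sparsification is more than a few lines, but its single-kernel ancestor (kernel LMS) shows the functional update at its core. The class below is a stripped-down sketch with one Gaussian kernel and no dictionary pruning; the multikernel version maintains such an expansion per kernel and adapts their combination:

    ```python
    import numpy as np

    class KernelLMS:
        """Minimal kernel LMS: the estimate is a growing kernel expansion
        updated by the instantaneous error, f <- f + step * e * k(x, .)"""

        def __init__(self, step=0.2, width=1.0):
            self.step, self.width = step, width
            self.centers, self.coeffs = [], []

        def _kernel(self, a, b):
            return np.exp(-np.sum((a - b) ** 2) / (2 * self.width ** 2))

        def predict(self, x):
            return sum(c * self._kernel(x, u)
                       for c, u in zip(self.coeffs, self.centers))

        def update(self, x, d):
            e = d - self.predict(x)            # instantaneous error
            self.centers.append(np.asarray(x, dtype=float))
            self.coeffs.append(self.step * e)
            return e

    klms = KernelLMS()
    for _ in range(300):                       # learn d = x**2 online
        x = np.random.uniform(-1, 1, size=1)
        klms.update(x, float(x[0] ** 2))
    print(klms.predict(np.array([0.5])))       # approaches 0.25
    ```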

  2. Parallel algorithms for matrix computations

    SciTech Connect

    Plemmons, R.J.

    1990-01-01

    The present conference on parallel algorithms for matrix computations encompasses both shared-memory systems and distributed-memory systems, as well as combinations of the two, to provide an overall perspective on parallel algorithms for both dense and sparse matrix computations in solving systems of linear equations, dense or structured problems related to least-squares computations, eigenvalue computations, singular-value computations, and rapid elliptic solvers. Specific issues addressed include the influence of parallel and vector architectures on algorithm design, computations for distributed-memory architectures such as hypercubes, solutions for sparse symmetric positive definite linear systems, symbolic and numeric factorizations, and triangular solutions. Also addressed are reference sources for parallel and vector numerical algorithms, sources for machine architectures, and sources for programming languages.

  3. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)

  4. The Origins of Counting Algorithms

    PubMed Central

    Cantlon, Jessica F.; Piantadosi, Steven T.; Ferrigno, Stephen; Hughes, Kelly D.; Barnard, Allison M.

    2015-01-01

    Humans’ ability to ‘count’ by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that non-human primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. Monkeys saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set approximately outnumbered the first set, monkeys spontaneously moved to choose the second set even before it was completely baited. Using a novel Bayesian analysis, we show that monkeys used an approximate counting algorithm to increment and compare quantities in sequence. This algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. PMID:25953949

  5. What is a Systolic Algorithm?

    NASA Astrophysics Data System (ADS)

    Rao, Sailesh K.; Kollath, T.

    1986-07-01

In this paper, we show that every systolic array executes a Regular Iterative Algorithm with a strongly separating hyperplane and, conversely, that every such algorithm can be implemented on a systolic array. This characterization provides us with a unified framework for describing the contributions of other authors. It also exposes the relevance of many fundamental concepts that were introduced in the sixties by Hennie, Waite and Karp, Miller and Winograd, to the present-day concern of systolic arrays.

  6. Genetic algorithms as discovery programs

    SciTech Connect

    Hilliard, M.R.; Liepins, G.

    1986-01-01

    Genetic algorithms are mathematical counterparts to natural selection and gene recombination. As such, they have provided one of the few significant breakthroughs in machine learning. Used with appropriate reward functions and apportionment of credit, they have been successfully applied to gas pipeline operation, x-ray registration and mathematical optimization problems. This paper discusses the basics of genetic algorithms, describes a few successes, and reports on current progress at Oak Ridge National Laboratory in applications to set covering and simulated robots.

  7. An Efficient Pattern Matching Algorithm

    NASA Astrophysics Data System (ADS)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data, including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.

  8. Unidirectional rotating coordinate rotation digital computer algorithm based on rotational phase estimation.

    PubMed

    Zhang, Chaozhu; Han, Jinan; Yan, Huizhi

    2015-06-01

The improved coordinate rotation digital computer (CORDIC) algorithm gives high-precision, high-resolution phase rotation, but it has shortcomings such as a high iteration count and large system delay. This paper puts forward a unidirectional rotating CORDIC algorithm to solve these problems. First, using under-damping theory, a portion of the unidirectional phase rotations is carried out. Then, the threshold value of the angle is determined based on a phase rotation estimation method. Finally, rotation phase estimation completes the remaining angle iterations. Furthermore, the paper simulates and implements a numerically controlled oscillator using the Quartus II and ModelSim software. According to the experimental results, the algorithm reduces the number of iterations and the judgment of the sign bit, so that it decreases system delay and resource utilization and improves throughput. We also analyze the error introduced by this algorithm. The results show that the algorithm has good application prospects in global navigation satellite systems and channelized receivers. PMID:26133856
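
    For contrast with the unidirectional variant the paper proposes, here is the classical bidirectional CORDIC rotation it improves on, including the per-iteration sign decision that the rotational-phase-estimation scheme removes (iteration count is illustrative):

    ```python
    import math

    def cordic_sin_cos(angle, iterations=24):
        """Rotate the unit vector through `angle` using shift-and-add
        micro-rotations; returns (sin, cos) after gain compensation."""
        atans = [math.atan(2.0 ** -i) for i in range(iterations)]
        k = 1.0                                   # accumulated gain
        for i in range(iterations):
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = 1.0, 0.0, angle
        for i in range(iterations):
            d = 1.0 if z >= 0 else -1.0           # the sign test removed
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * atans[i]
        return y * k, x * k

    print(cordic_sin_cos(0.5))                    # ~ (sin 0.5, cos 0.5)
    ```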

  9. Network II Database

    1994-11-07

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network II Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Rail Administration (FRA) rail database.

  10. Factor II deficiency

    MedlinePlus

    ... blood. It leads to problems with blood clotting (coagulation). Factor II is also known as prothrombin. ... blood clots form. This process is called the coagulation cascade. It involves special proteins called coagulation, or ...

  11. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is suggesting possible ways to attack the problem.

  12. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, and that the algorithm tests other configurations with the goal of finding the globally optimal configuration.
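
    The recursive-branching structure itself is not spelled out in the record, but the conventional SA loop it parallelizes is; a minimal sketch of that baseline (cooling schedule and neighborhood function are illustrative choices):

    ```python
    import math, random

    def simulated_annealing(objective, start, neighbor,
                            t0=1.0, cooling=0.995, steps=20_000, seed=0):
        """Conventional SA: accept worse configurations with probability
        exp(-delta / T), with the temperature T shrinking over time."""
        rng = random.Random(seed)
        current, f_cur = start, objective(start)
        best, f_best = current, f_cur
        t = t0
        for _ in range(steps):
            cand = neighbor(current, rng)
            f_cand = objective(cand)
            delta = f_cand - f_cur
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current, f_cur = cand, f_cand
                if f_cur < f_best:
                    best, f_best = current, f_cur
            t *= cooling
        return best, f_best

    # Minimize a 1-D multimodal function
    obj = lambda x: math.sin(5 * x) + 0.1 * x * x
    nbr = lambda x, rng: x + rng.gauss(0.0, 0.3)
    print(simulated_annealing(obj, start=3.0, neighbor=nbr))
    ```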

  13. A multi-objective optimization tool for the selection and placement of BMPs for pesticide control

    NASA Astrophysics Data System (ADS)

    Maringanti, C.; Chaubey, I.; Arabi, M.; Engel, B.

    2008-07-01

Pesticides (particularly atrazine used in corn fields) are the foremost source of water contamination in many of the water bodies of the Midwestern corn belt, exceeding the 3 ppb MCL established by the U.S. EPA for drinking water. Best management practices (BMPs), such as buffer strips and land management practices, have been proven to effectively reduce pesticide pollution loads from agricultural areas. However, selection and placement of BMPs in watersheds to achieve an ecologically effective and economically feasible solution is a daunting task. BMP placement decisions under such complex conditions require a multi-objective optimization algorithm that searches for the best possible solution satisfying the given watershed management objectives. Genetic algorithms (GA) have been the most popular optimization algorithms for the BMP selection and placement problem. Most optimization models also had a dynamic linkage with the water quality model, which increased the computation time considerably, restricting their application to field-scale or relatively small (11- or 14-digit HUC) watersheds. Moreover, most previous works have considered the two objectives individually during the optimization process by introducing a constraint on the other objective, thereby decreasing the degrees of freedom available to find the solution. In this study, the optimization for atrazine reduction is performed by considering the two objectives simultaneously using a multi-objective genetic algorithm (NSGA-II). The limitation of the dynamic linkage with a distributed-parameter watershed model was overcome through the use of a BMP tool, a database that stores the pollution reduction and cost information of the different BMPs under consideration. The model was used for the selection and placement of BMPs in the Wildcat Creek Watershed (located in Indiana) for atrazine reduction. The most ecologically effective solution from the model had an annual atrazine concentration reduction
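
    The ranking step that lets NSGA-II pursue both objectives simultaneously, rather than constraining one, is fast non-dominated sorting; a compact sketch (the cost/load pairs in the example are invented):

    ```python
    def non_dominated_sort(objectives):
        """Fronts of mutually non-dominated solutions, NSGA-II style.
        `objectives` holds (f1, f2, ...) tuples, all minimized; front 0
        is the Pareto-optimal set of the population."""
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and a != b

        n = len(objectives)
        dominated_by = [[] for _ in range(n)]   # solutions that i dominates
        counts = [0] * n                        # how many dominate i
        for i in range(n):
            for j in range(n):
                if dominates(objectives[i], objectives[j]):
                    dominated_by[i].append(j)
                elif dominates(objectives[j], objectives[i]):
                    counts[i] += 1
        fronts = [[i for i in range(n) if counts[i] == 0]]
        while fronts[-1]:
            nxt = []
            for i in fronts[-1]:
                for j in dominated_by[i]:
                    counts[j] -= 1
                    if counts[j] == 0:
                        nxt.append(j)
            fronts.append(nxt)
        return fronts[:-1]

    # (cost, pollutant-load) pairs for four candidate BMP placements
    print(non_dominated_sort([(1, 9), (3, 3), (2, 4), (5, 5)]))   # [[0, 1, 2], [3]]
    ```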

  14. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. Issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift registers, and ripple registers are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.

  15. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion in a statement of logical implication.
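
    The role of the exclusivity/independence conditions is easy to see in miniature: each condition fixes a different conjunction rule. A toy sketch (function names are mine):

    ```python
    def and_independent(p, q):
        """Conjunction of statistically independent assertions."""
        return p * q

    def and_mutually_exclusive(p, q):
        """Mutually exclusive assertions can never both hold."""
        return 0.0

    def and_fuzzy(p, q):
        """Maximum-overlap (fuzzy logic) conjunction."""
        return min(p, q)

    def or_fuzzy(p, q):
        """Maximum-overlap (fuzzy logic) disjunction."""
        return max(p, q)

    print(and_independent(0.8, 0.5), and_fuzzy(0.8, 0.5))   # 0.4 0.5
    ```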

  16. Improved autonomous star identification algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong

    2015-06-01

    The log-polar transform (LPT) is introduced into the star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed in the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which makes it able to reduce the star identification time. The logarithmic values of the plane distances between the navigation and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some efforts are made to make it able to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate the star identification. Moreover, the recognition rate and robustness by the proposed algorithm are better than those by the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi’an Science and Technology Plan, China (Grant. No CXY1350(4)).

  17. Adaptive Routing Algorithm in Wireless Communication Networks Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Wu, Qinghua; Cai, Zhihua

At present, mobile communications traffic routing designs are complicated because more systems are interconnecting with one another. For example, mobile communication in wireless communication networks has two routing design conditions to consider, i.e. circuit switching and packet switching. The problem in packet-switching routing design is its use of high-speed transmission links and its dynamic routing nature. In this paper, evolutionary algorithms are used to determine the best solution and the shortest communication paths. We developed a genetic optimization process that can help network planners solve for the best routing-table paths in wireless communication networks easily and quickly. The experimental results show that the evolutionary algorithm not only finds good solutions, but also has a more predictable running time when compared to a sequential genetic algorithm.

  18. Carnitine palmitoyltransferase II deficiency

    PubMed Central

    Roe, C R.; Yang, B-Z; Brunengraber, H; Roe, D S.; Wallace, M; Garritson, B K.

    2008-01-01

    Background: Carnitine palmitoyltransferase II (CPT II) deficiency is an important cause of recurrent rhabdomyolysis in children and adults. Current treatment includes dietary fat restriction, with increased carbohydrate intake and exercise restriction to avoid muscle pain and rhabdomyolysis. Methods: CPT II enzyme assay, DNA mutation analysis, quantitative analysis of acylcarnitines in blood and cultured fibroblasts, urinary organic acids, the standardized 36-item Short-Form Health Status survey (SF-36) version 2, and bioelectric impedance for body fat composition. Diet treatment with triheptanoin at 30% to 35% of total daily caloric intake was used for all patients. Results: Seven patients with CPT II deficiency were studied from 7 to 61 months on the triheptanoin (anaplerotic) diet. Five had previous episodes of rhabdomyolysis requiring hospitalizations and muscle pain on exertion prior to the diet (two younger patients had not had rhabdomyolysis). While on the diet, only two patients experienced mild muscle pain with exercise. During short periods of noncompliance, two patients experienced rhabdomyolysis with exercise. None experienced rhabdomyolysis or hospitalizations while on the diet. All patients returned to normal physical activities including strenuous sports. Exercise restriction was eliminated. Previously abnormal SF-36 physical composite scores returned to normal levels that persisted for the duration of the therapy in all five symptomatic patients. Conclusions: The triheptanoin diet seems to be an effective therapy for adult-onset carnitine palmitoyltransferase II deficiency. GLOSSARY ALT = alanine aminotransferase; AST = aspartate aminotransferase; ATP = adenosine triphosphate; BHP = β-hydroxypentanoate; BKP = β-ketopentanoate; BKP-CoA = β-ketopentanoyl–coenzyme A; BUN = blood urea nitrogen; CAC = citric acid cycle; CoA = coenzyme A; CPK = creatine phosphokinase; CPT II = carnitine palmitoyltransferase II; LDL = low-density lipoprotein; MCT

  19. Evaluation of chlorophyll-a retrieval algorithms based on MERIS bands for optically varying eutrophic inland lakes.

    PubMed

    Lyu, Heng; Li, Xiaojun; Wang, Yannan; Jin, Qi; Cao, Kai; Wang, Qiao; Li, Yunmei

    2015-10-15

    Fourteen field campaigns were conducted in five inland lakes during different seasons between 2006 and 2013, and a total of 398 water samples with varying optical characteristics were collected. The characteristics were analyzed based on remote sensing reflectance, and an automatic cluster two-step method was applied for water classification. The inland waters could be clustered into three types, which we labeled water types I, II and III. From water types I to III, the effect of the phytoplankton on the optical characteristics gradually decreased. Four chlorophyll-a retrieval algorithms for Case II water, a two-band, three-band, four-band and SCI (Synthetic Chlorophyll Index) algorithm were evaluated for three water types based on the MERIS bands. Different MERIS bands were used for the three water types in each of the four algorithms. The four algorithms had different levels of retrieval accuracy for each water type, and no single algorithm could be successfully applied to all water types. For water types I and III, the three-band algorithm performed the best, while the four-band algorithm had the highest retrieval accuracy for water type II. However, the three-band algorithm is preferable to the two-band algorithm for turbid eutrophic inland waters. The SCI algorithm is recommended for highly turbid water with a higher concentration of total suspended solids. Our research indicates that the chlorophyll-a concentration retrieval by remote sensing for optically contrasted inland water requires a specific algorithm that is based on the optical characteristics of inland water bodies to obtain higher estimation accuracy.
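
    As a hedged illustration of the band arithmetic involved, the general shape of a Gitelson-style three-band index on MERIS red/NIR bands is sketched below; the regression coefficients are placeholders, since (as the study argues) they must be calibrated separately for each optical water type:

    ```python
    def three_band_chla(r665, r708, r753, a=117.4, b=23.2):
        """Three-band chlorophyll-a index on MERIS-like bands:
        chl-a regressed against (1/R665 - 1/R708) * R753.
        a and b are illustrative placeholders, not fitted values."""
        index = (1.0 / r665 - 1.0 / r708) * r753
        return a * index + b
    ```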

  20. Multi-Objective Scheduling for the Cluster II Constellation

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Giuliano, Mark

    2011-01-01

This paper describes the application of the MUSE multiobjective scheduling framework to the Cluster II WBD scheduling domain. Cluster II is an ESA four-spacecraft constellation designed to study the plasma environment of the Earth and its magnetosphere. One of the instruments on each of the four spacecraft is the Wide Band Data (WBD) plasma wave experiment. We have applied the MUSE evolutionary algorithm to the scheduling problem represented by this instrument, and the result has been adopted and utilized by the WBD schedulers for nearly a year. This paper describes the WBD scheduling problem, its representation in MUSE, and some of the visualization elements that provide insight into objective value tradeoffs.

  1. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  2. Using Genetic Algorithm and MODFLOW to Characterize Aquifer System of Northwest Florida (Published Proceedings)

    EPA Science Inventory

    By integrating Genetic Algorithm and MODFLOW2005, an optimizing tool is developed to characterize the aquifer system of Region II, Northwest Florida. The history and the newest available observation data of the aquifer system is fitted automatically by using the numerical model c...

  3. Algorithms, complexity, and the sciences.

    PubMed

    Papadimitriou, Christos

    2014-11-11

Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
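
    A minimal sketch of the MWU rule discussed above, reading actions as alleles and per-round gains as fitnesses (the payoff matrix and learning rate are illustrative):

    ```python
    import numpy as np

    def multiplicative_weights(gains, eta=0.1):
        """Multiplicative weights update: every round, each action's
        weight is scaled by (1 + eta * gain); normalized weights can be
        read as allele frequencies under weak selection."""
        w = np.ones(gains.shape[1])
        for g in gains:              # g: this round's gain per action
            w = w * (1.0 + eta * g)
        return w / w.sum()

    rng = np.random.default_rng(0)
    print(multiplicative_weights(rng.uniform(-1.0, 1.0, size=(500, 4))))
    ```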

  4. Ensemble algorithms in reinforcement learning.

    PubMed

    Wiering, Marco A; van Hasselt, Hado

    2008-08-01

    This paper describes several ensemble methods that combine multiple different reinforcement learning (RL) algorithms in a single agent. The aim is to enhance learning speed and final performance by combining the chosen actions or action probabilities of different RL algorithms. We designed and implemented four different ensemble methods combining the following five different RL algorithms: Q-learning, Sarsa, actor-critic (AC), QV-learning, and AC learning automaton. The intuitively designed ensemble methods, namely, majority voting (MV), rank voting, Boltzmann multiplication (BM), and Boltzmann addition, combine the policies derived from the value functions of the different RL algorithms, in contrast to previous work where ensemble methods have been used in RL for representing and learning a single value function. We show experiments on five maze problems of varying complexity; the first problem is simple, but the other four maze tasks are of a dynamic or partially observable nature. The results indicate that the BM and MV ensembles significantly outperform the single RL algorithms.
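
    Of the ensemble rules, majority voting is the simplest to state; a sketch of that one rule (not the paper's full agent):

    ```python
    import numpy as np

    def majority_vote(action_choices):
        """Each RL algorithm proposes an action; the ensemble plays the
        most frequently chosen one (ties broken by lowest index)."""
        return int(np.argmax(np.bincount(action_choices)))

    # Five hypothetical learners propose actions for the current state
    print(majority_vote(np.array([2, 0, 2, 1, 2])))   # -> 2
    ```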

  5. POSE Algorithms for Automated Docking

    NASA Technical Reports Server (NTRS)

    Heaton, Andrew F.; Howard, Richard T.

    2011-01-01

    POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.

  6. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and, finally, an algorithm that uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
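
    Of the three estimators, the linear straight line estimator is the simplest. As a hedged sketch of the idea (the abstract gives no model coefficients, telemetry names, or calibration procedure, so all of these are assumptions), an ordinary least-squares fit of input power against AGC reading and temperature might look like this:

```python
import numpy as np

def fit_linear_power_estimator(agc_counts, temps_c, power_dbm):
    """Fit P_in ~ a*AGC + b*T + c by least squares over calibration data.

    A stand-in for the 'linear straight line estimator' idea described
    above; the model form and variable names are assumptions.
    """
    agc = np.asarray(agc_counts, float)
    t = np.asarray(temps_c, float)
    X = np.column_stack([agc, t, np.ones_like(agc)])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(power_dbm, float), rcond=None)
    return coeffs  # (a, b, c)

def estimate_power(coeffs, agc, temp_c):
    """Apply the fitted model to a new AGC reading and temperature."""
    a, b, c = coeffs
    return a * agc + b * temp_c + c
```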

  8. SDR input power estimation algorithms

    NASA Astrophysics Data System (ADS)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and, finally, an algorithm that uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  9. Algorithms for automated DNA assembly

    PubMed Central

    Densmore, Douglas; Hsiau, Timothy H.-C.; Kittleson, Joshua T.; DeLoache, Will; Batten, Christopher; Anderson, J. Christopher

    2010-01-01

    Generating a defined set of genetic constructs within a large combinatorial space provides a powerful method for engineering novel biological functions. However, the process of assembling more than a few specific DNA sequences can be costly, time consuming and error prone. Even if a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and formal approaches are needed for exploring these vast design spaces. By automating the design of DNA fabrication schemes using computational algorithms, we can eliminate human error while reducing redundant operations, thus minimizing the time and cost required for conducting biological engineering experiments. Here, we provide algorithms that optimize the simultaneous assembly of a collection of related DNA sequences. We compare our algorithms to an exhaustive search on a small synthetic dataset and our results show that our algorithms can quickly find an optimal solution. Comparison with random search approaches on two real-world datasets shows that our algorithms can also quickly find lower-cost solutions for large datasets. PMID:20335162

  10. An energy basin finding algorithm for kinetic Monte Carlo acceleration.

    PubMed

    Puchala, Brian; Falk, Michael L; Garikipati, Krishna

    2010-04-01

    We present an energy basin finding algorithm for identifying the states in absorbing Markov chains used for accelerating kinetic Monte Carlo (KMC) simulations out of trapping energy basins. The algorithm saves groups of states corresponding to basic energy basins in which there is (i) a minimum energy saddle point and (ii) no decrease in saddle point energies between successive moves away from the minimum. When necessary, these groups are merged to help the system escape basins of basins. Energy basins are identified either as the system visits states, or by exploring surrounding states before the system visits them. We review exact and approximate methods for accelerating KMC simulations out of trapping energy basins and implement them within our algorithm. Its flexibility to store varying numbers of states, and its ability to merge sets of saved states as the program runs, allow it to efficiently escape complicated trapping energy basins. Through simulations of vacancy-As cluster dissolution in Si, we demonstrate that our algorithm can be several orders of magnitude faster than standard KMC simulations.

  11. Joint optimization of algorithmic suites for EEG analysis.

    PubMed

    Santana, Eder; Brockmeier, Austin J; Principe, Jose C

    2014-01-01

    Electroencephalogram (EEG) data analysis algorithms consist of multiple processing steps each with a number of free parameters. A joint optimization methodology can be used as a wrapper to fine-tune these parameters for the patient or application. This approach is inspired by deep learning neural network models, but differs because the processing layers for EEG are heterogeneous with different approaches used for processing space and time. Nonetheless, we treat the processing stages as a neural network and apply backpropagation to jointly optimize the parameters. This approach outperforms previous results on the BCI Competition II - dataset IV; additionally, it outperforms the common spatial patterns (CSP) algorithm on the BCI Competition III dataset IV. In addition, the optimized parameters in the architecture are still interpretable. PMID:25570621

  12. The "Juggler" algorithm: a hybrid deformable image registration algorithm for adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Xia, Junyi; Chen, Yunmei; Samant, Sanjiv S.

    2007-03-01

    Fast deformable registration can potentially facilitate the clinical implementation of adaptive radiation therapy (ART), which allows for daily organ deformations not accounted for in radiotherapy treatment planning, which typically utilizes a static organ model, to be incorporated into the fractionated treatment. Existing deformable registration algorithms typically utilize a specific diffusion model, and require a large number of iterations to achieve convergence. This limits the online applications of deformable image registration for clinical radiotherapy, such as daily patient setup variations involving organ deformation, where high registration precision is required. We propose a hybrid algorithm, the "Juggler", based on a multi-diffusion model to achieve fast convergence. The Juggler achieves fast convergence by applying two different diffusion models: i) one being optimized quickly for matching high gradient features, i.e. bony anatomies; and ii) the other being optimized for further matching low gradient features, i.e. soft tissue. The regulation of these two competing criteria is achieved using a threshold of a similarity measure, such as cross correlation or mutual information. A multi-resolution scheme was applied for faster convergence involving large deformations. Comparisons of the Juggler algorithm were carried out with the demons method, the accelerated demons method, and free-form deformable registration using 4D CT lung imaging from 5 patients. Based on comparisons of difference images and similarity measure computations, the Juggler produced a superior registration result. It achieved the desired convergence within 30 iterations, and typically required <90 sec to register two 3D image sets of size 256×256×40 using a 3.2 GHz PC. This hybrid registration strategy successfully incorporates the benefits of different diffusion models into a single unified model.

  13. Seamless Merging of Hypertext and Algorithm Animation

    ERIC Educational Resources Information Center

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  14. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in the software package PyChemia (https://github.com/MaterialsDiscovery/PyChemia), an open-source Python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694
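
    For readers unfamiliar with the method, a generic firefly update (dimmer fireflies move toward brighter ones with exponentially decaying attractiveness, plus a small random walk) can be sketched as a toy minimizer. The parameter values are illustrative, and this is not the PyChemia structure-search implementation.

```python
import numpy as np

def firefly_minimize(f, bounds, n=20, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimal firefly-algorithm sketch for minimizing f over a box.

    bounds: (lower, upper) coordinate arrays; lower cost = brighter firefly.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n, lo.size))
    light = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # firefly j is brighter (lower cost)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, lo.size)
                    x[i] = np.clip(x[i], lo, hi)
                    light[i] = f(x[i])
    k = np.argmin(light)
    return x[k], light[k]

# e.g. firefly_minimize(lambda v: float(np.sum(v ** 2)), ([-5.0, -5.0], [5.0, 5.0]))
```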

  15. Some nonlinear space decomposition algorithms

    SciTech Connect

    Tai, Xue-Cheng; Espedal, M.

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  16. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
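
    As a small worked example of the dominance reasoning the abstract mentions, consider activity selection: among partial schedules of equal size, one whose last activity finishes earlier dominates the others, which licenses the greedy choice of the earliest-finishing compatible activity. A sketch, illustrative only and not the synthesis framework itself:

```python
def select_activities(intervals):
    """Classic greedy activity selection: sort by finish time and keep
    each activity compatible with the last one chosen.

    Dominance relation: a partial schedule whose last finish time is
    earlier dominates one that ends later, justifying the greedy choice.
    """
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10)])
# -> [(1, 4), (5, 7)]
```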

  17. PEP-II Status

    SciTech Connect

    Sullivan, M.; Bertsche, K.; Browne, M.; Cai, Y.; Cheng, W.; Colocho, W.; Decker, F.-J.; Donald, M.; Ecklund, S.; Erickson, R.; Fisher, A.S.; Fox, J.; Heifets, S.; Himel, T.; Iverson, R.; Kulikov, A.; Novokhatski, A.; Pacak, V.; Pivi, M.; Rivetta, C.; Ross, M.; /SLAC /Saclay /Frascati

    2008-07-25

    PEP-II and BaBar have just finished run 7, the last run of the SLAC B-factory. PEP-II was one of the few high-current e+e- colliding accelerators and holds the present world record for stored electrons and stored positrons. It has stored 2.07 A of electrons, nearly 3 times the design current of 0.75 A, and it has stored 3.21 A of positrons, 1.5 times the design current of 2.14 A. High-current beams require careful design of several systems. The feedback systems that control instabilities, the RF system stability loops, and especially the vacuum systems have to handle the higher power demands. We present here some of the accomplishments of the PEP-II accelerator and some of the problems we encountered while running high-current beams.

  18. About APPLE II Operation

    SciTech Connect

    Schmidt, T.; Zimoch, D.

    2007-01-19

    The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing an automated calculation of gap and shift parameter as a function of energy and operation mode. The extension of the linear polarization range from 0 to 180 deg. requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.

  19. About APPLE II Operation

    NASA Astrophysics Data System (ADS)

    Schmidt, T.; Zimoch, D.

    2007-01-01

    The operation of an APPLE II based undulator beamline with all its polarization states (linear horizontal and vertical, circular and elliptical, and continuous variation of the linear vector) requires an effective description allowing an automated calculation of gap and shift parameter as a function of energy and operation mode. The extension of the linear polarization range from 0 to 180° requires 4 shiftable magnet arrays, permitting use of the APU (adjustable phase undulator) concept. Studies for a pure fixed gap APPLE II for the SLS revealed surprising symmetries between circular and linear polarization modes allowing for simplified operation. A semi-analytical model covering all types of APPLE II and its implementation will be presented.

  20. Mod II engine performance

    NASA Technical Reports Server (NTRS)

    Richey, Albert E.; Huang, Shyan-Cherng

    1987-01-01

    The testing of a prototype of an automotive Stirling engine, the Mod II, is discussed. The Mod II is a one-piece cast block with a V-4 single-crankshaft configuration and an annular regenerator/cooler design. The initial testing of Mod II concentrated on the basic engine, with auxiliaries driven by power sources external to the engine. The performance of the engine was tested at 720 C set temperature and 820 C tube temperature. At 720 C, it is observed that the power deficiency is speed dependent and linear, with a weak pressure dependency, and at 820 C, the power deficiency is speed and pressure dependent. The effects of buoyancy and nozzle spray pattern on the heater temperature spread are investigated. The characterization of the oil pump and the operating cycle and temperature spread tests are proposed for further evaluation of the engine.

  1. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  2. Algorithm Development Library for Environmental Satellite Missions

    NASA Astrophysics Data System (ADS)

    Smith, D. C.; Grant, K. D.; Miller, S. W.; Jamilkowski, M. L.

    2012-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation civilian weather and environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will contribute the afternoon orbit component and ground processing system of the restructured National Polar-orbiting Operational Environmental Satellite System (NPOESS). As such, the Joint Polar Satellite System replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by the National Oceanic and Atmospheric Administration and the ground processing component of both Polar-orbiting Operational Environmental Satellites and the Defense Meteorological Satellite Program (DMSP) replacement, previously known as the Defense Weather Satellite System (DWSS), managed by the Department of Defense (DoD). The JPSS satellites will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground processing system for JPSS is known as the JPSS Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and an Interface Data Processing Segment (IDPS). Both segments are developed by Raytheon Intelligence and Information Systems (IIS). The C3S currently flies the Suomi National Polar Partnership (Suomi NPP) satellite and transfers mission data from Suomi NPP and between the ground facilities. The IDPS processes Suomi NPP satellite data to provide Environmental Data Records (EDRs) to NOAA and DoD processing centers operated by the United States government. When the JPSS-1 satellite is launched in early 2017, the responsibilities of the C3S and the IDPS will be expanded to support both Suomi NPP and JPSS-1. The EDRs for Suomi NPP are currently undergoing an extensive Calibration and Validation (Cal/Val) campaign. As Cal/Val proceeds, changes to the

  3. An Efficient Reachability Analysis Algorithm

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2008-01-01

    A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a powerful tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.

  4. A generalized memory test algorithm

    NASA Technical Reports Server (NTRS)

    Milner, E. J.

    1982-01-01

    A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
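
    A software simulation of the two checks listed above might look like the following. This is a naive illustrative version: it re-verifies all of memory after each write, so it does not reproduce the cycle-efficient behavior of the published algorithm.

```python
def walking_bit_test(mem, word_bits=16):
    """Check that every bit of every word can be set and cleared without
    disturbing the rest of memory.

    `mem` is any mutable sequence of integers standing in for memory words
    (e.g. a memory-mapped buffer or a simulated faulty memory).
    Returns (True, None, None) on success or (False, address, bit) on the
    first failure detected.
    """
    mask = (1 << word_bits) - 1
    background = [w & mask for w in mem]
    for addr in range(len(mem)):
        for bit in range(word_bits):
            # single bit set, then single bit cleared (all other bits set)
            for pattern in (1 << bit, ~(1 << bit) & mask):
                mem[addr] = pattern
                if mem[addr] != pattern:
                    return False, addr, bit
                # verify no other word was disturbed by the write
                for other in range(len(mem)):
                    if other != addr and mem[other] != background[other]:
                        return False, other, bit
        mem[addr] = background[addr]  # restore original contents
    return True, None, None
```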

  5. A swaying object detection algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Shidong; Rong, Jianzhong; Zhou, Dechuang; Wang, Jian

    2013-07-01

    Moving object detection is one of the most important preliminary steps in video analysis. Some moving objects, such as spitting steam, fire and smoke, have a unique motion feature: the lower part stays basically unchanged while the upper part moves back and forth. Based on this unique motion feature, a swaying object detection algorithm is presented in this paper. Firstly, fuzzy integral was adopted to integrate color features for extracting moving objects from video frames. Secondly, a swaying identification algorithm based on centroid calculation was used to distinguish swaying objects from other moving objects. Experiments show that the proposed method is effective at detecting swaying objects.

  6. ALGORITHM DEVELOPMENT FOR SPATIAL OPERATORS.

    USGS Publications Warehouse

    Claire, Robert W.

    1984-01-01

    An approach is given that develops spatial operators about the basic geometric elements common to spatial data structures. In this fashion, a single set of spatial operators may be accessed by any system that reduces its operands to such basic generic representations. Algorithms based on this premise have been formulated to perform operations such as separation, overlap, and intersection. Moreover, this generic approach is well suited for algorithms that exploit concurrent properties of spatial operators. The results may provide a framework for a geometry engine to support fundamental manipulations within a geographic information system.

  7. Born approximation, scattering, and algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun

    2015-05-01

    In the past few decades, many imaging algorithms have been designed under the assumption that multiple scattering is absent. Recently, we discussed an algorithm for removing high order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna, and remove it from the collected data.

  8. Parallel algorithms for unconstrained optimizations by multisplitting

    SciTech Connect

    He, Qing

    1994-12-31

    In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses the existing sequential algorithms without any parallelization. Some convergence and numerical results for this algorithm are presented. The experiments are performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that, if the problem is split properly, the algorithm converges much faster than it does without splitting.

  9. Blind Alley Aware ACO Routing Algorithm

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Masaya; Otani, Kazuo

    2010-10-01

    The routing problem is applied to various engineering fields, and many researchers have studied it. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys, enabling it to find the shortest route even if the map data contains blind alleys. Experiments using map data demonstrate its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.

  10. Two Algorithms for Processing Electronic Nose Data

    NASA Technical Reports Server (NTRS)

    Young, Rebecca; Linnell, Bruce

    2007-01-01

    Two algorithms for processing the digitized readings of electronic noses, and computer programs to implement the algorithms, have been devised in a continuing effort to increase the utility of electronic noses as means of identifying airborne compounds and measuring their concentrations. One algorithm identifies the two vapors in a two-vapor mixture and estimates the concentration of each vapor (in principle, this algorithm could be extended to more than two vapors). The other algorithm identifies a single vapor and estimates its concentration.

  11. Formalization of algorithms for relational database machines

    SciTech Connect

    Ryvkin, V.M.; Komarov, P.I.; Nazarov, A.S.

    1986-11-01

    This paper applies the apparatus of algorithmic algebras to formalize the mapping of the relational algebra language into the internal database processor language. The apparatus is a popular tool for formal structured description of parallel algorithms. The MUL'TIPROTSESSIST automatic parallel program design system using systems of algorithmic algebras may be applied to automate the design of database machine operating algorithms in experimental research and to formalize the parallel organization of interpretation algorithms for the relational algebraic operations.

  12. The Eutelsat II programme

    NASA Astrophysics Data System (ADS)

    Burgio, Claude; Dumesnil, Jean-Jacques

    Eutelsat II is designed to provide Europe with Ku-band communication and TV services with 16 active channels of 50 W power output. In-orbit reconfigurable antenna feed networks permit customized transmission offering either medium-gain over the whole of Europe or high-gain over tailored geographic areas, allowing TV reception on dishes as small as 60 cm. The payload design makes use of only two antennas, each comprising a dual dish reflector and two reconfigurable primary feed arrays. This paper gives an overview of the Eutelsat II mission, and presents a technical description of the satellite, the program schedule, and future prospects.

  13. SAGE II Ozone Analysis

    NASA Technical Reports Server (NTRS)

    Cunnold, Derek; Wang, Ray

    2002-01-01

    Publications from 1999-2002 describing research funded by the SAGE II contract to Dr. Cunnold and Dr. Wang are listed below. Our most recent accomplishments include a detailed analysis of the quality of SAGE II, v6.1, ozone measurements below 20 km altitude (Wang et al., 2002 and Kar et al., 2002) and an analysis of the consistency between SAGE upper stratospheric ozone trends and model predictions with emphasis on hemispheric asymmetry (Li et al., 2001). Abstracts of the 11 papers are attached.

  14. Application of a Multi-Objective Optimization Method to Provide Least Cost Alternatives for NPS Pollution Control

    NASA Astrophysics Data System (ADS)

    Maringanti, Chetan; Chaubey, Indrajeet; Arabi, Mazdak; Engel, Bernard

    2011-09-01

    Nonpoint source (NPS) pollutants such as phosphorus, nitrogen, sediment, and pesticides are the foremost sources of water contamination in many of the water bodies in the Midwestern agricultural watersheds. This problem is expected to increase in the future with the increasing demand to provide corn as grain or stover for biofuel production. Best management practices (BMPs) have been proven to effectively reduce the NPS pollutant loads from agricultural areas. However, in a watershed with multiple farms and multiple BMPs feasible for implementation, it becomes a daunting task to choose the right combination of BMPs that provides maximum pollution reduction for the least implementation cost. Multi-objective algorithms capable of searching from a large number of solutions are required to meet the given watershed management objectives. Genetic algorithms have been the most popular optimization algorithms for BMP selection and placement. However, previous BMP optimization models did not study pesticides, which are very commonly used in corn areas. Also, with corn stover being projected as a viable alternative for biofuel production, there might be unintended consequences of the reduced residue in the corn fields on water quality. Therefore, there is a need to study the impact of different levels of residue management in combination with other BMPs at a watershed scale. In this research the following BMPs were selected for placement in the watershed: (a) residue management, (b) filter strips, (c) parallel terraces, (d) contour farming, and (e) tillage. We present a novel method of combining different NPS pollutants into a single objective function, which, along with the net costs, were used as the two objective functions during optimization. In this study we used a BMP tool, a database that contains the pollution reduction and cost information of the different BMPs under consideration and provides pollutant loads during optimization. The BMP optimization was performed using a NSGA-II
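
    The abstract breaks off above. As background for the NSGA-II step it refers to, the non-dominated sorting that ranks candidate solutions into Pareto fronts can be sketched generically as follows; the objective tuples are placeholders, and this is not the authors' watershed tool.

```python
def non_dominated_sort(points):
    """Rank solutions into Pareto fronts, minimizing all objectives.

    points: list of objective tuples, e.g. (pollutant_load, net_cost).
    Returns a list of fronts, each a list of indices into `points`;
    front 0 is the Pareto-optimal set. Simple O(n^2) version of the
    fast non-dominated sort used inside NSGA-II.
    """
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    n = len(points)
    dominated_by = [set() for _ in range(n)]  # indices that i dominates
    count = [0] * n                           # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].add(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts
```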

  15. Applying various algorithms for species distribution modelling.

    PubMed

    Li, Xinhai; Wang, Yuan

    2013-06-01

    Species distribution models have been used extensively in many fields, including climate change biology, landscape ecology and conservation biology. In the past 3 decades, a number of new models have been proposed, yet researchers still find it difficult to select appropriate models for data and objectives. In this review, we aim to provide insight into the prevailing species distribution models for newcomers in the field of modelling. We compared 11 popular models, including regression models (the generalized linear model, the generalized additive model, the multivariate adaptive regression splines model and hierarchical modelling), classification models (mixture discriminant analysis, the generalized boosting model, and classification and regression tree analysis) and complex models (artificial neural network, random forest, genetic algorithm for rule set production and maximum entropy approaches). Our objectives are: (i) to compare the strengths and weaknesses of the models, their characteristics and identify suitable situations for their use (in terms of data type and species-environment relationships) and (ii) to provide guidelines for model application, including 3 steps: model selection, model formulation and parameter estimation. PMID:23731809

  17. Quartic Rotation Criteria and Algorithms.

    ERIC Educational Resources Information Center

    Clarkson, Douglas B.; Jennrich, Robert I.

    1988-01-01

    Most of the current analytic rotation criteria for simple structure in factor analysis are summarized and identified as members of a general symmetric family of quartic criteria. A unified development of algorithms for orthogonal and direct oblique rotation using arbitrary criteria from this family is presented. (Author/TJH)

  18. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.

  19. Algorithms, complexity, and the sciences

    PubMed Central

    Papadimitriou, Christos

    2014-01-01

    Algorithms, perhaps together with Moore’s law, compose the engine of the information technology revolution, whereas complexity—the antithesis of algorithms—is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal—and therefore less compelling—than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene’s cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution. PMID:25349382

  20. Associative Algorithms for Computational Creativity

    ERIC Educational Resources Information Center

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  1. Coagulation algorithms with size binning

    NASA Technical Reports Server (NTRS)

    Statton, David M.; Gans, Jason; Williams, Eric

    1994-01-01

    The Smoluchowski equation describes the time evolution of an aerosol particle size distribution due to aggregation or coagulation. Any algorithm for computerized solution of this equation requires a scheme for describing the continuum of aerosol particle sizes as a discrete set. One standard form of the Smoluchowski equation accomplishes this by restricting the particle sizes to integer multiples of a basic unit particle size (the monomer size). This can be inefficient when particle concentrations over a large range of particle sizes must be calculated. Two algorithms employing a geometric size binning convention are examined: the first assumes that the aerosol particle concentration as a function of size can be considered constant within each size bin; the second approximates the concentration as a linear function of particle size within each size bin. The output of each algorithm is compared to an analytical solution in a special case of the Smoluchowski equation for which an exact solution is known. The range of parameters more appropriate for each algorithm is examined.
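
    For orientation, the monomer-unit discretization that the binned algorithms are designed to improve on advances each concentration with a gain term over coalescing pairs and a loss term over all partners. A naive explicit Euler step, assuming a symmetric coagulation kernel K, could read:

```python
import numpy as np

def smoluchowski_step(n, kernel, dt):
    """One explicit Euler step of the discrete Smoluchowski equation.

    n[i] is the concentration of (i+1)-mers (integer multiples of the
    monomer size); kernel[i][j] is the symmetric coagulation rate
    K(i+1, j+1). Illustrative only; a real solver would use the binned
    schemes described above and a stable time integrator.
    """
    n = np.asarray(n, float)
    m = len(n)
    gain = np.zeros(m)
    loss = np.zeros(m)
    for i in range(m):
        for j in range(m):
            loss[i] += kernel[i][j] * n[i] * n[j]
            if i + j + 1 < m:  # an (i+1)-mer plus a (j+1)-mer lands in bin i+j+1
                gain[i + j + 1] += 0.5 * kernel[i][j] * n[i] * n[j]
    return n + dt * (gain - loss)
```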

  2. Key Concepts in Informatics: Algorithm

    ERIC Educational Resources Information Center

    Szlávi, Péter; Zsakó, László

    2014-01-01

    "The system of key concepts contains the most important key concepts related to the development tasks of knowledge areas and their vertical hierarchy as well as the links of basic key concepts of different knowledge areas." (Vass 2011) One of the most important of these concepts is the algorithm. In everyday life, when learning or…

  3. Document Organization Using Kohonen's Algorithm.

    ERIC Educational Resources Information Center

    Guerrero Bote, Vicente P.; Moya Anegon, Felix de; Herrero Solana, Victor

    2002-01-01

    Discussion of the classification of documents from bibliographic databases focuses on a method of vectorizing reference documents from LISA (Library and Information Science Abstracts) which permits their topological organization using Kohonen's algorithm. Analyzes possibilities of this type of neural network with respect to the development of…

  4. The origins of counting algorithms.

    PubMed

    Cantlon, Jessica F; Piantadosi, Steven T; Ferrigno, Stephen; Hughes, Kelly D; Barnard, Allison M

    2015-06-01

    Humans' ability to count by verbally labeling discrete quantities is unique in animal cognition. The evolutionary origins of counting algorithms are not understood. We report that nonhuman primates exhibit a cognitive ability that is algorithmically and logically similar to human counting. Monkeys were given the task of choosing between two food caches. First, they saw one cache baited with some number of food items, one item at a time. Then, a second cache was baited with food items, one at a time. At the point when the second set was approximately equal to the first set, the monkeys spontaneously moved to choose the second set even before that cache was completely baited. Using a novel Bayesian analysis, we show that the monkeys used an approximate counting algorithm for comparing quantities in sequence that is incremental, iterative, and condition controlled. This proto-counting algorithm is structurally similar to formal counting in humans and thus may have been an important evolutionary precursor to human counting. PMID:25953949

  5. Threshold extended ID3 algorithm

    NASA Astrophysics Data System (ADS)

    Kumar, A. B. Rajesh; Ramesh, C. Phani; Madhusudhan, E.; Padmavathamma, M.

    2012-04-01

    Providing authentication and confidentiality for databases exchanged over insecure networks is a significant problem in data mining. In this paper we propose a novel authenticated multiparty ID3 algorithm used to construct a multiparty secret-sharing decision tree for implementation in medical transactions.

  6. Algorithm Visualization in Teaching Practice

    ERIC Educational Resources Information Center

    Törley, Gábor

    2014-01-01

    This paper presents the history of algorithm visualization (AV), highlighting teaching-methodology aspects. A combined, two-group pedagogical experiment is presented as well, which measured the efficiency of AV and its impact on abstract thinking. According to the results, students who learned with AV performed better in the experiment.

  7. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  8. Hyperspectral image compressive projection algorithm

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Allen, David W.

    2009-05-01

    We describe a compressive projection algorithm and experimentally assess its performance when used with a Hyperspectral Image Projector (HIP). The HIP is being developed by NIST for system-level performance testing of hyperspectral and multispectral imagers. It projects a two-dimensional image into the unit under test (UUT), whereby each pixel can have an independently programmable arbitrary spectrum. To efficiently project a single frame of dynamic realistic hyperspectral imagery through the collimator into the UUT, a compression algorithm has been developed whereby the series of abundance images and corresponding endmember spectra that comprise the image cube of that frame are first computed using an automated endmember-finding algorithm such as the Sequential Maximum Angle Convex Cone (SMACC) endmember model. Then these endmember spectra are projected sequentially on the HIP spectral engine in sync with the projection of the abundance images on the HIP spatial engine, during the single-frame exposure time of the UUT. The integrated spatial image captured by the UUT is the endmember-weighted sum of the abundance images, which results in the formation of a datacube for that frame. Compressive projection enables a much smaller set of broadband spectra to be projected than monochromatic projection, and thus utilizes the inherent multiplex advantage of the HIP spectral engine. As a result, radiometric brightness and projection frame rate are enhanced. In this paper, we use a visible breadboard HIP to experimentally assess the compressive projection algorithm performance.
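
    The core reconstruction, the endmember-weighted sum of abundance images, is a single tensor contraction; a sketch with assumed array shapes:

```python
import numpy as np

def reconstruct_datacube(abundances, endmembers):
    """Form the datacube as the endmember-weighted sum of abundance images,
    mirroring the compressive projection described above.

    abundances: (n_endmembers, rows, cols) abundance images
    endmembers: (n_endmembers, n_bands) endmember spectra
    Returns a (rows, cols, n_bands) cube; the shapes and names are
    assumptions, not the HIP interface.
    """
    return np.einsum("krc,kb->rcb", np.asarray(abundances), np.asarray(endmembers))
```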

  9. An Algorithm for Suffix Stripping

    ERIC Educational Resources Information Center

    Porter, M. F.

    2006-01-01

    Purpose: The automatic removal of suffixes from words in English is of particular interest in the field of information retrieval. This work was originally published in Program in 1980 and is republished as part of a series of articles commemorating the 40th anniversary of the journal. Design/methodology/approach: An algorithm for suffix stripping…
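
    To give a flavor of rule-based suffix stripping (the full Porter algorithm applies ordered rule lists in several passes, with conditions on the measure of the remaining stem), a toy single-pass stripper might look like this; it is emphatically not the published algorithm.

```python
def strip_suffix(word):
    """Apply the longest matching rule from a small ordered rule list,
    keeping a minimum stem length. A toy illustration of the idea only."""
    rules = [("sses", "ss"), ("ies", "i"), ("ing", ""), ("ed", ""), ("s", "")]
    for suffix, replacement in rules:
        # require a stem of more than two characters after stripping
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + replacement
    return word

# strip_suffix("caresses") -> "caress"; strip_suffix("hopping") -> "hopp"
```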

  10. Randomized approximate nearest neighbors algorithm.

    PubMed

    Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir

    2011-09-20

    We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k^2·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.

  11. Understanding Algorithms in Different Presentations

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Abari, Kálmán; Máth, János

    2015-01-01

    Within the framework of the Testing Algorithmic and Application Skills project we tested first year students of Informatics at the beginning of their tertiary education. We were focusing on the students' level of understanding in different programming environments. In the present paper we provide the results from the University of Debrecen, the…

  12. Some Practical Payments Clearance Algorithms

    NASA Astrophysics Data System (ADS)

    Kumlander, Deniss

    The globalisation of corporations' operations has produced a huge volume of inter-company invoices. Optimisation of these transfers, known as payment clearance, can produce significant savings in the costs associated with the transfers and their handling. The paper reviews some common practical approaches to the payment clearance problem and proposes some novel algorithms based on graph theory and heuristic distribution of totals.

  13. College Algebra II.

    ERIC Educational Resources Information Center

    Benjamin, Carl; And Others

    Presented are student performance objectives, a student progress chart, and assignment sheets with objective and diagnostic measures for the stated performance objectives in College Algebra II. Topics covered include: differencing and complements; real numbers; factoring; fractions; linear equations; exponents and radicals; complex numbers,…

  14. Listen & Learn II.

    ERIC Educational Resources Information Center

    Community Building Resources, Spruce Grove (Alberta).

    Six community builders in Edmonton, Alberta, planned, developed, and implemented Listen and Learn II, a reflective research project in asset-based community building, over a 6-month period in 1998. They met regularly over 2 months to plan the research and design a method that was open to participation at any stage, encouraged exchange of…

  15. Instant Insanity II

    ERIC Educational Resources Information Center

    Richmond, Tom; Young, Aaron

    2013-01-01

    "Instant Insanity II" is a sliding mechanical puzzle whose solution requires the special alignment of 16 colored tiles. We count the number of solutions of the puzzle's classic challenge and show that the more difficult ultimate challenge has, up to row permutation, exactly two solutions, and further show that no…

  16. Dissecting Diversity Part II

    ERIC Educational Resources Information Center

    Matthews, Frank

    2005-01-01

    This article presents "Dissecting Diversity, Part II," the conclusion of a wide-ranging two-part roundtable discussion on diversity in higher education. The participants were as follows: Lezli Baskerville, J.D., President and CEO of the National Association for Equal Opportunity (NAFEO); Dr. Gerald E. Gipp, Executive Director of the American…

  17. Periodontics II: Course Proposal.

    ERIC Educational Resources Information Center

    Dordick, Bruce

    A proposal is presented for Periodontics II, a course offered at the Community College of Philadelphia to give the dental hygiene/assisting student an understanding of the disease states of the periodontium and their treatment. A standardized course proposal cover form is given, followed by a statement of purpose for the course, a list of major…

  18. Structure Learning and Statistical Estimation in Distribution Networks - Part II

    SciTech Connect

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    2015-02-13

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. Then the structure learning algorithm is extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  19. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are therefore difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear Bregman algorithm is an iterative reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, it involves only vector and matrix multiplication and a thresholding operation, and is therefore simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU implementation of the Bregman algorithm, as well as with other CS reconstruction algorithms such as the OMP and TwIST algorithms. The results show that the parallel Bregman algorithm requires less time, making it more suitable for real-time object reconstruction, which is important given the fast-growing demands of information technology.
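
    The two-line update that makes the algorithm attractive for GPU implementation can be sketched on the CPU as follows; this is a generic version of the linearized Bregman iteration with assumed parameter values, not the paper's CUDA code.

```python
import numpy as np

def linearized_bregman(A, b, mu=5.0, delta=None, iters=500):
    """Linearized Bregman iteration for min ||u||_1 subject to A u = b.

    Standard two-line update (acceleration/"kicking" omitted):
        v <- v + A^T (b - A u)
        u <- delta * shrink(v, mu)
    The step size delta defaults to 1/||A||^2 for stability; mu and the
    iteration count are illustrative.
    """
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    for _ in range(iters):
        v += A.T @ (b - A @ u)   # gradient-like update of the dual variable
        u = delta * shrink(v, mu)  # soft-thresholding promotes sparsity
    return u
```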

  20. EMGeo-II

    2009-01-01

    An algorithm that improves both the computational capabilities of joint 3D electromagnetic (EM) and magnetotelluric (MT) field simulation and inverse modeling. It is based upon nonlinear conjugate gradients for the imaging component and a 3D finite difference methodology for EM and MT field simulation. Improving the modeling efficiency of the algorithm involves the separation of the modeling/imaging grid from the simulation grid. This grid separation method allows for the treatment of very large data sets and imaging volumes. Further computational efficiency is obtained by combining different levels of parallelization using the Message Passing Interface (MPI). Bound constraints are employed in the imaging process to ensure stability. Additional acceleration of the inverse modeling is achieved by preconditioning the conjugate gradient optimizer with an approximate Hessian. The algorithm includes improved capabilities to accurately treat models that exhibit transverse anisotropy in electrical conductivity in the presence of topography and bathymetry. Background anisotropic Earth models are assigned to each transmitter-receiver set, resulting in solutions of the scattering equations at much improved accuracy. The software also includes a set of pre- and post-processing tools for designing input model meshes and plotting data.

  1. Padé approximations for Painlevé I and II transcendents

    NASA Astrophysics Data System (ADS)

    Novokshenov, V. Yu.

    2009-06-01

    We use a version of the Fair-Luke algorithm to find the Padé approximate solutions of the Painlevé I and II equations. We find the distributions of poles for the well-known Ablowitz-Segur and Hastings-McLeod solutions of the Painlevé II equation. We show that the Boutroux tritronquée solution of the Painlevé I equation has poles only in the critical sector of the complex plane. The algorithm allows checking other analytic properties of the Painlevé transcendents, such as the asymptotic behavior at infinity in the complex plane.

  2. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)

    PubMed Central

    Li, Isaac TS; Shum, Warren; Truong, Kevin

    2007-01-01

    Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation, by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computation of genomic database searching. PMID:17555593
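
    The per-cell recurrence that the FPGA grid parallelizes is the standard Smith-Waterman dynamic program. A pure-software reference with a linear gap penalty and illustrative scoring parameters:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between sequences a and b.

    Each cell takes the max of zero, a diagonal match/mismatch step,
    and gap steps from above and from the left; the best local alignment
    score is the maximum over all cells.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# smith_waterman("ACACACTA", "AGCACACA") returns the best local alignment score
```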

  3. Design and Implementation of Hybrid CORDIC Algorithm Based on Phase Rotation Estimation for NCO

    PubMed Central

    Zhang, Chaozhu; Han, Jinan; Li, Ke

    2014-01-01

    The numerically controlled oscillator has wide application in radar, digital receivers, and software radio systems. Firstly, this paper introduces the traditional CORDIC algorithm. Then, in order to improve computing speed and save resources, this paper proposes a kind of hybrid CORDIC algorithm based on phase rotation estimation applied in numerically controlled oscillators (NCOs). By estimating the direction of part of the phase rotation, the algorithm reduces the number of phase rotations and add-subtract units, thereby decreasing delay. Furthermore, the paper simulates and implements the numerically controlled oscillator with the Quartus II and Modelsim software tools. Finally, simulation results indicate that an improvement over the traditional CORDIC algorithm is achieved in terms of ease of computation, resource utilization, and computing speed/delay while maintaining the precision. It is suitable for high-speed, high-precision digital modulation and demodulation. PMID:25110750
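
    For contrast with the hybrid scheme, the conventional rotation-mode CORDIC iteration that computes sine and cosine from shift-add micro-rotations can be sketched as follows; floating-point is used for clarity, whereas a hardware NCO would use fixed-point arithmetic.

```python
import math

def cordic_sin_cos(angle, n_iters=24):
    """Rotation-mode CORDIC returning (cos(angle), sin(angle)).

    Each iteration rotates by +/- atan(2^-i), chosen to drive the residual
    angle z toward zero; starting x at the gain correction K makes the
    final vector unit length. Valid for |angle| within the CORDIC
    convergence range of about 1.74 rad.
    """
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
    K = 1.0
    for i in range(n_iters):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, angle
    for i in range(n_iters):
        d = 1.0 if z >= 0 else -1.0  # direction of this micro-rotation
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y
```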

  4. Unsupervised and stable LBG algorithm for data classification: application to aerial multicomponent images

    NASA Astrophysics Data System (ADS)

    Taher, A.; Chehdi, K.; Cariou, C.

    2015-10-01

    In this paper a stable and unsupervised Linde-Buzo-Gray (LBG) algorithm named LBGO is presented. The originality of the proposed algorithm lies in: i) the use of an adaptive incremental technique to initialize the class centres, which revisits the intermediate initializations; this makes the algorithm stable and deterministic, so the classification results do not vary from one run to another; and ii) an unsupervised evaluation criterion applied to the intermediate classification results to estimate the optimal number of classes, which makes the algorithm unsupervised. The efficiency of this optimized version of LBG is shown through experimental results on synthetic and real aerial hyperspectral data. More precisely, the proposed classification approach is tested regarding three aspects: its stability, its correct classification rate, and its correct estimation of the number of classes.
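
    For orientation, the classic LBG codebook design that LBGO modifies is sketched below: the codebook is grown by splitting each centre in two and refining with Lloyd iterations until the distortion stabilises. The random-perturbation splitting and the parameter names are the textbook version's, not the adaptive incremental initialization proposed in the paper.

      import numpy as np

      def lbg(data, n_codes=8, eps=1e-3, perturb=1e-2):
          """Classic LBG: grow the codebook by splitting, then refine.
          n_codes should be a power of two for exact doubling."""
          codebook = data.mean(axis=0, keepdims=True)
          while len(codebook) < n_codes:
              codebook = np.vstack([codebook * (1 + perturb),
                                    codebook * (1 - perturb)])  # split step
              prev = np.inf
              while True:  # Lloyd refinement
                  d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                  labels = d.argmin(axis=1)
                  dist = d.min(axis=1).mean()
                  for k in range(len(codebook)):
                      if np.any(labels == k):
                          codebook[k] = data[labels == k].mean(axis=0)
                  if prev - dist < eps * dist:
                      break
                  prev = dist
          return codebook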

  5. Why is Boris algorithm so good?

    SciTech Connect

    Qin, Hong; Zhang, Shuangxi; Xiao, Jianyuan; Liu, Jian; Sun, Yajuan; Tang, William M.

    2013-08-15

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this paper, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.
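
    For concreteness, a single Boris step is sketched below: a half electric kick, a norm-preserving magnetic rotation, and a second half kick. The rotation being exactly norm-preserving is what underlies the phase-space-volume conservation discussed above. The function signature and units (SI, non-relativistic) are illustrative assumptions.

      import numpy as np

      def boris_push(x, v, E, B, q, m, dt):
          """Advance one particle by one time step with the Boris scheme.
          x, v, E, B are length-3 numpy arrays; q, m, dt are scalars."""
          qmdt2 = q * dt / (2.0 * m)
          v_minus = v + qmdt2 * E                      # first half electric kick
          t = qmdt2 * B                                # rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))           # keeps |v| exactly unchanged
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)      # magnetic rotation done
          v_new = v_plus + qmdt2 * E                   # second half electric kick
          return x + dt * v_new, v_new                 # leapfrog position update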

  6. Why is Boris Algorithm So Good?

    SciTech Connect

    et al, Hong Qin

    2013-03-03

    Due to its excellent long term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. Despite its popularity, up to now there has been no convincing explanation why the Boris algorithm has this advantageous feature. In this letter, we provide an answer to this question. We show that the Boris algorithm conserves phase space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas.

  7. Higher-order force gradient symplectic algorithms

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.; Kidwell, Donald W.

    2000-12-01

    We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of force gradient in addition to three evaluations of the force, when iterated to higher order, yielded algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10^3, 10^4, 10^4, and 10^5 better.

  8. Role of Bound Zn(II) in the CadC Cd(II)/Pb(II)/Zn(II)-Responsive Repressor

    SciTech Connect

    Kandegedara, A.; Thiyagarajan, S; Kondapalli, K; Stemmler, T; Rosen, B

    2009-01-01

    The Staphylococcus aureus plasmid pI258 cadCA operon encodes a P-type ATPase, CadA, that confers resistance to Cd(II)/Pb(II)/Zn(II). Expression is regulated by CadC, a homodimeric repressor that dissociates from the cad operator/promoter upon binding of Cd(II), Pb(II), or Zn(II). CadC is a member of the ArsR/SmtB family of metalloregulatory proteins. The crystal structure of CadC shows two types of metal binding sites, termed Site 1 and Site 2, and the homodimer has two of each. Site 1 is the physiological inducer binding site. The two Site 2 metal binding sites are formed at the dimerization interface. Site 2 is not regulatory in CadC but is regulatory in the homologue SmtB. Here the role of each site was investigated by mutagenesis. Both sites bind either Cd(II) or Zn(II). However, Site 1 has higher affinity for Cd(II) over Zn(II), and Site 2 prefers Zn(II) over Cd(II). Site 2 is not required for either derepression or dimerization. The crystal structure of the wild type with bound Zn(II) and of a mutant lacking Site 2 was compared with the SmtB structure with and without bound Zn(II). We propose that an arginine residue allows for Zn(II) regulation in SmtB and that, conversely, a glycine results in a lack of regulation by Zn(II) in CadC. We further propose that a glycine residue was ancestral, whether the repressor binds Zn(II) at a Site 2 like CadC or has no Site 2 like the paralogous ArsR; this implies that acquisition of regulatory ability in SmtB was a more recent evolutionary event.

  9. Introducing CAML II

    SciTech Connect

    Pelaia II, Tom; Boyes, Matthew

    2009-01-01

    Channel Access Markup Language (CAML) is an XML-based markup language and implementation for displaying EPICS channel access controls within a web browser. The CAML II project expanded upon the work of CAML I, adding more features and greater integration with other web technologies. The most dramatic new feature in CAML II is a namespace that allows CAML controls to be embedded within XHTML documents. A repetition template with macro substitution allows for rapid coding of arbitrary XHTML repetitions. Enhancements have been made to several controls, including more powerful plotting options. Advanced formatting options were introduced for text controls. Virtual process variables allow for custom calculations. An EDL-to-CAML translator eases the transition from EDM screens to CAML pages.

  10. RADTRAN II user guide

    SciTech Connect

    Madsen, M M; Wilmot, E L; Taylor, J M

    1983-02-01

    RADTRAN II is a flexible analytical tool for calculating both the incident-free and accident impacts of transporting radioactive materials. The consequences from incident-free shipments are apportioned among eight population subgroups and can be calculated for several transport modes. The radiological accident risk (probability times consequence summed over all postulated accidents) is calculated in terms of early fatalities, early morbidities, latent cancer fatalities, genetic effects, and economic impacts. Groundshine, inhalation, direct exposure, resuspension, and cloudshine dose pathways are modeled to calculate the radiological health risks from accidents. Economic impacts are evaluated based on costs for emergency response, cleanup, evacuation, income loss, and land use. RADTRAN II can be applied to specific scenario evaluations (individual transport modes or specified combinations), to compare alternative modes or to evaluate generic radioactive material shipments. Unit-risk factors can easily be evaluated to aid in performing generic analyses when several options must be compared with the amount of travel as the only variable.

  11. Results from SAGE II

    SciTech Connect

    Nico, J.S.

    1994-10-01

    The Russian-American Gallium solar neutrino Experiment (SAGE) began the second phase of operation (SAGE II) in September of 1992. Monthly measurements of the integral flux of solar neutrinos have been made with 55 tonnes of gallium. The K-peak results of the first nine runs of SAGE II give a capture rate of 66^{+18}_{-13} (stat) ^{+5}_{-7} (sys) SNU. Combined with the SAGE I result of 73^{+18}_{-16} (stat) ^{+5}_{-7} (sys) SNU, the capture rate is 69^{+11}_{-11} (stat) ^{+5}_{-7} (sys) SNU. This represents only 52%-56% of the capture rate predicted by different Standard Solar Models.

  12. TARN II project

    SciTech Connect

    Katayama, T.

    1985-04-01

    On the basis of the achievements of the accelerator studies at the present TARN, it has been decided to construct the new ring TARN II, which will be operated as an accumulator, accelerator, cooler, and stretcher. It has a maximum magnetic rigidity of 7 T·m, corresponding to a proton energy of 1.3 GeV, and the ring diameter is around 23 m. Light and heavy ions from the SF cyclotron will be injected and accelerated to the working energy, where the ring will be operated in the desired mode, for example a cooler-ring mode. In cooler-ring operation, strong cooling devices such as stochastic and electron-beam cooling will work together with the internal gas-jet target for precise nuclear experiments. TARN II is currently under construction, with completion scheduled for 1986. In this paper the general features of the project are presented.

  13. Ribosomal Database Project II

    DOE Data Explorer

    The Ribosomal Database Project (RDP) provides ribosome-related data and services to the scientific community, including online data analysis and aligned and annotated Bacterial small-subunit 16S rRNA sequences. As of March 2008, RDP Release 10 is available and currently (August 2009) contains 1,074,075 aligned 16S rRNA sequences. Data that can be downloaded include zipped GenBank and FASTA alignment files, a histogram (in Excel) of the number of RDP sequences spanning each base position, data in the Functional Gene Pipeline Repository, and various user-submitted data. The RDP-II website also provides numerous analysis tools. [From the RDP-II home page at http://rdp.cme.msu.edu/index.jsp]

  14. Systolic algorithms and their implementation

    SciTech Connect

    Kung, H.T.

    1984-01-01

    Very high performance computer systems must rely heavily on parallelism since there are severe physical and technological limits on the ultimate speed of any single processor. The systolic array concept developed in the last several years allows effective use of a very large number of processors in parallel. This article illustrates the basic ideas by reviewing a systolic array design for matrix triangularization and describing its use in the on-the-fly updating of Cholesky decomposition of covariance matrices-a crucial computation in adaptive signal processing. Following this are discussions on issues related to the hardware implementation of systolic algorithms in general, and some guidelines for designing systolic algorithms that will be convenient for implementation. 33 references.

  15. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size as compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment challenging for detection purposes as strong scatterers tend to mask the weak ones. Consequently, the detection of more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting data is of a relatively high level. To overcome this drawback, here a new technique is proposed, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage strong scatterers are detected. Then, information concerning their number and location is employed in the second stage focusing only on the weak scatterers. The role of an adequate scattering model is emphasized to improve drastically detection performance in realistic scenarios.
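
    For reference, the core MUSIC step is sketched below: eigendecompose the data covariance, keep the noise-subspace eigenvectors, and score each candidate steering vector by the inverse of its projection onto that subspace; peaks of the pseudospectrum mark scatterer locations. The two-stage strong/weak procedure of the paper would apply this step twice with different assumed source counts; array shapes and names here are illustrative.

      import numpy as np

      def music_spectrum(R, steering, n_sources):
          """MUSIC pseudospectrum. R: (M, M) covariance matrix;
          steering: (n_candidates, M) model vectors; n_sources: assumed count."""
          w, V = np.linalg.eigh(R)                   # eigenvalues ascending
          En = V[:, : R.shape[0] - n_sources]        # noise-subspace eigenvectors
          proj = En.conj().T @ steering.T            # projections onto noise space
          denom = (np.abs(proj) ** 2).sum(axis=0)
          return 1.0 / denom                         # large where projection vanishes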

  16. An NOy* Algorithm for SOLVE

    NASA Technical Reports Server (NTRS)

    Loewenstein, M.; Greenblatt, B. J.; Jost, H.; Podolske, J. R.; Elkins, Jim; Hurst, Dale; Romanashkin, Pavel; Atlas, Elliott; Schauffler, Sue; Donnelly, Steve; Condon, Estelle (Technical Monitor)

    2000-01-01

    De-nitrification and excess re-nitrification were widely observed by ER-2 instruments in the Arctic vortex during SOLVE in winter/spring 2000. Analyses of these events require a knowledge of the initial or pre-vortex state of the sampled air masses. The canonical relationship of NOy to the long-lived tracer N2O observed in the unperturbed stratosphere is generally used for this purpose. In this paper we attempt to establish the current unperturbed NOy:N2O relationship (NOy* algorithm) using the ensemble of extra-vortex data from in situ instruments flying on the ER-2 and DC-8, and from the Mark IV remote measurements on the OMS balloon. Initial analysis indicates a change in the SOLVE NOy* from the values predicted by the 1994 Northern Hemisphere NOy* algorithm, which was derived from observations in the ASHOE/MAESA campaign.

  17. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low latency meteor detection algorithm for use with fast steering mirrors had been previously developed to track and telescopically follow meteors in real-time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets both the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.

  18. Authenticated algorithms for Byzantine agreement

    SciTech Connect

    Dolev, D.; Strong, H.R.

    1983-11-01

    Reaching agreement in a distributed system in the presence of faulty processors is a central issue for reliable computer systems. Using an authentication protocol, one can limit the undetected behavior of faulty processors to a simple failure to relay messages to all intended targets. In this paper the authors show that, in spite of such an ability to limit faulty behavior, and no matter what message types or protocols are allowed, reaching (Byzantine) agreement requires at least t+1 phases or rounds of information exchange, where t is an upper bound on the number of faulty processors. They present algorithms for reaching agreement based on authentication that require a total number of messages sent by correctly operating processors that is polynomial in both t and the number of processors, n. The best algorithm uses only t+1 phases and O(nt) messages. 9 references.

  19. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method based on Web-based tools is presented to design optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since molecular beacon performance is based on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public domain algorithm described here may be usefully employed to aid in molecular beacon design.

  20. Algorithm refinement for fluctuating hydrodynamics

    SciTech Connect

    Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.

    2007-07-03

    This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.

  1. PEP-II Transverse Feedback Electronics Upgrade

    SciTech Connect

    Weber, J.; Chin, M.; Doolittle, L.; Akre, R.

    2005-05-09

    The PEP-II B Factory at the Stanford Linear Accelerator Center (SLAC) requires an upgrade of the transverse feedback system electronics. The new electronics require 12-bit resolution and a minimum sampling rate of 238 Msps. A Field Programmable Gate Array (FPGA) is used to implement the feedback algorithm. The FPGA also contains an embedded PowerPC 405 (PPC-405) processor to run control system interface software for data retrieval, diagnostics, and system monitoring. The design of this system is based on the Xilinx(R) ML300 Development Platform, a circuit board set containing an FPGA with an embedded processor, a large memory bank, and other peripherals. This paper discusses the design of a digital feedback system based on an FPGA with an embedded processor. Discussion will include specifications, component selection, and integration with the ML300 design.

  2. PEP-II Transverse Feedback Electronics Upgrade

    SciTech Connect

    Weber, J.M.; Chin, M.J.; Doolittle, L.R.; Akre, R.; /SLAC

    2006-03-13

    The PEP-II B Factory at the Stanford Linear Accelerator Center (SLAC) requires an upgrade of the transverse feedback system electronics. The new electronics require 12-bit resolution and a minimum sampling rate of 238 Msps. A Field Programmable Gate Array (FPGA) is used to implement the feedback algorithm. The FPGA also contains an embedded PowerPC 405 (PPC-405) processor to run control system interface software for data retrieval, diagnostics, and system monitoring. The design of this system is based on the Xilinx® ML300 Development Platform, a circuit board set containing an FPGA with an embedded processor, a large memory bank, and other peripherals. This paper discusses the design of a digital feedback system based on an FPGA with an embedded processor. Discussion will include specifications, component selection, and integration with the ML300 design.

  3. RISTA II trials

    NASA Astrophysics Data System (ADS)

    Martin, John R.

    1998-11-01

    Northrop Grumman Corporation has developed an advanced 2nd generation IR sensor system under the guidance of the US Army's Night Vision and Electronic Sensors Directorate (NVESD) as part of an Advanced Concept Technology Demonstration (ACTD) called Counter Mobile Rocket Launcher (CMRL). Designed to support rapid counter fire against mobile targets from an unmanned aerial vehicle (UAV), the sensor system, called reconnaissance IR surveillance target acquisition (RISTA II), consists of a 2nd generation FLIR/line scanner, a digital data link, a ground processing facility, and an aided target recognizer (AiTF). The concept of operation together with component details was reported at the passive sensors IRIS in March, 1996. The performance testing of the RISTA II System was reported at the National IRIS in November, 1997. The RISTA II sensor has subsequently undergone performance testing on a Royal Netherlands Air Force F-16 for a manned reconnaissance application in August and October, 1997, at Volkel Airbase, Netherlands. That testing showed performance compatible with the medium altitude IR sensor performance. The results of that testing, together with flight test imagery, will be presented.

  4. Systolic systems: algorithms and complexity

    SciTech Connect

    Chang, J.H.

    1986-01-01

    This thesis has two main contributions. The first is the design of efficient systolic algorithms for solving recurrence equations, dynamic programming problems, scheduling problems, as well as new systolic implementation of data structures such as stacks, queues, priority queues, and dictionary machines. The second major contribution is the investigation of the computational power of systolic arrays in comparison to sequential models and other models of parallel computation.

  5. Algorithms Could Automate Cancer Diagnosis

    NASA Technical Reports Server (NTRS)

    Baky, A. A.; Winkler, D. G.

    1982-01-01

    Five new algorithms are a complete statistical procedure for quantifying cell abnormalities from digitized images. Procedure could be basis for automated detection and diagnosis of cancer. Objective of procedure is to assign each cell an atypia status index (ASI), which quantifies level of abnormality. It is possible that ASI values will be accurate and economical enough to allow diagnoses to be made quickly and accurately by computer processing of laboratory specimens extracted from patients.

  6. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.

  7. Feature and Statistical Model Development in Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Inho

    , are trained and utilized to interpret nonlinear far-field wave patterns. Next, a novel bridge scour estimation approach that comprises advantages of both empirical and data-driven models is developed. Two field datasets from the literature are used, and a Support Vector Machine (SVM), a machine-learning algorithm, is used to fuse the field data samples and classify the data with physical phenomena. The Fast Non-dominated Sorting Genetic Algorithm (NSGA-II) is evaluated on the model performance objective functions to search for Pareto optimal fronts.
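
    For context, the fast non-dominated sorting step at the heart of NSGA-II is sketched below: the population is partitioned into successive Pareto fronts using domination counts. This is the generic textbook procedure (objectives assumed minimized), not the specific setup of this study.

      def fast_non_dominated_sort(objs):
          """Rank solutions into Pareto fronts; front 0 is non-dominated.
          objs is a list of objective tuples, all to be minimised."""
          n = len(objs)
          dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
          S = [[] for _ in range(n)]   # solutions that i dominates
          counts = [0] * n             # number of solutions dominating i
          fronts = [[]]
          for i in range(n):
              for j in range(n):
                  if dominates(objs[i], objs[j]):
                      S[i].append(j)
                  elif dominates(objs[j], objs[i]):
                      counts[i] += 1
              if counts[i] == 0:
                  fronts[0].append(i)
          k = 0
          while fronts[k]:
              nxt = []
              for i in fronts[k]:
                  for j in S[i]:
                      counts[j] -= 1
                      if counts[j] == 0:
                          nxt.append(j)
              fronts.append(nxt)
              k += 1
          return fronts[:-1]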

  8. Summing It All Up: Pre-1900 Algorithms.

    ERIC Educational Resources Information Center

    Pearson, Eleanor S.

    1986-01-01

    Computational algorithms from American textbooks copyrighted prior to 1900 are presented--some that convey the concept, some just for special cases, and some just for fun. Algorithms for each operation with whole numbers are presented and analyzed. (MNS)

  9. Spaceborne SAR Imaging Algorithm for Coherence Optimized.

    PubMed

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on the existing SAR imaging algorithm. The basic idea of SAR imaging processing is that the output signal can attain the maximum signal-to-noise ratio (SNR) by using the optimal imaging parameters. The traditional imaging algorithm can acquire the best focusing effect, but it brings a decoherence phenomenon into the subsequent interference process. In the algorithm proposed in this paper, the SAR echoes adopt consistent imaging parameters in focusing processing. Although the SNR of the output signal is reduced slightly, the coherence is greatly preserved, and finally an interferogram of high quality is obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to conduct experiments on this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  10. Algorithmic complexity and entanglement of quantum states.

    PubMed

    Mora, Caterina E; Briegel, Hans J

    2005-11-11

    We define the algorithmic complexity of a quantum state relative to a given precision parameter, and give upper bounds for various examples of states. We also establish a connection between the entanglement of a quantum state and its algorithmic complexity.

  11. An algorithm for generating abstract syntax trees

    NASA Technical Reports Server (NTRS)

    Noonan, R. E.

    1985-01-01

    The notion of an abstract syntax is discussed. An algorithm is presented for automatically deriving an abstract syntax directly from a BNF grammar. The implementation of this algorithm and its application to the grammar for Modula are discussed.

  13. Spaceborne SAR Imaging Algorithm for Coherence Optimized

    PubMed Central

    Qiu, Zhiwei; Yue, Jianping; Wang, Xueqin; Yue, Shun

    2016-01-01

    This paper proposes a SAR imaging algorithm with the largest coherence, based on the existing SAR imaging algorithm. The basic idea of SAR imaging processing is that the output signal can attain the maximum signal-to-noise ratio (SNR) by using the optimal imaging parameters. The traditional imaging algorithm can acquire the best focusing effect, but it brings a decoherence phenomenon into the subsequent interference process. In the algorithm proposed in this paper, the SAR echoes adopt consistent imaging parameters in focusing processing. Although the SNR of the output signal is reduced slightly, the coherence is greatly preserved, and finally an interferogram of high quality is obtained. In this paper, two scenes of Envisat ASAR data over Zhangbei are employed to conduct experiments on this algorithm. Compared with the interferogram from the traditional algorithm, the results show that this algorithm is more suitable for SAR interferometry (InSAR) research and application. PMID:26871446

  14. Teaching Multiplication Algorithms from Other Cultures

    ERIC Educational Resources Information Center

    Lin, Cheng-Yao

    2007-01-01

    This article describes a number of multiplication algorithms from different cultures around the world: Hindu, Egyptian, Russian, Japanese, and Chinese. Students can learn these algorithms and better understand the operation and properties of multiplication.

  15. Concurrent algorithms for transient FE analysis

    NASA Technical Reports Server (NTRS)

    Ortiz, M.; Nour-Omid, B.

    1989-01-01

    Information on concurrent algorithms for transient finite element analysis is given in viewgraph form. Information is given on concurrent dynamic algorithms, interprocessor communication, the performance of the BAR problem on the 32 Processor Hypercube, computational efficiency and accuracy analysis.

  16. Algorithmic Strategies in Combinatorial Chemistry

    SciTech Connect

    GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN

    2000-08-01

    Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems-formulated as (chemical) graph reconstruction problems-related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

  17. Algorithms and Requirements for Measuring Network Bandwidth

    SciTech Connect

    Jin, Guojun

    2002-12-08

    This report unveils new algorithms for actively measuring (not estimating) available bandwidth with very low intrusion and for computing cross traffic, thus estimating the physical bandwidth; it provides mathematical proof that the algorithms are accurate, and addresses conditions, requirements, and limitations of new and existing algorithms for measuring network bandwidth. The report also discusses a number of important terminologies and issues for network bandwidth measurement, and introduces a fundamental parameter, the Maximum Burst Size, that is critical for implementing algorithms based on multiple packets.

  18. The performance of asynchronous algorithms on hypercubes

    SciTech Connect

    Womble, D.E.

    1988-12-01

    Many asynchronous algorithms have been developed for parallel computers. Most implementations of asynchronous algorithms, however, have been for shared memory machines. In this paper, we study the implementation and performance of some common asynchronous algorithms on the NCUBE/ten, a 1024 node hypercube. In addition, we summarize existing theoretical work and discuss some classes of algorithms that can be made asynchronous and some that cannot. 16 refs., 3 figs.

  19. Algorithmic approach to intelligent robot mobility

    SciTech Connect

    Kauffman, S.

    1983-05-01

    This paper presents Sutherland's algorithm, plus an alternative algorithm, which allows mobile robots to move about intelligently in environments resembling the rooms and hallways in which we move around. The main hardware requirements for a robot to use the algorithms presented are mobility and an ability to sense distances with some type of non-contact scanning device. This article does not discuss the actual robot construction. The emphasis is on heuristics and algorithms. 1 reference.

  20. An algorithm for segmenting range imagery

    SciTech Connect

    Roberts, R.S.

    1997-03-01

    This report describes the technical accomplishments of the FY96 Cross Cutting and Advanced Technology (CC&AT) project at Los Alamos National Laboratory. The project focused on developing algorithms for segmenting range images. The image segmentation algorithm developed during the project is described here. In addition to segmenting range images, the algorithm can fuse multiple range images thereby providing true 3D scene models. The algorithm has been incorporated into the Rapid World Modelling System at Sandia National Laboratory.

  1. A generic algorithm for Monte Carlo simulation of proton transport

    NASA Astrophysics Data System (ADS)

    Salvat, Francesc

    2013-12-01

    A mixed (class II) algorithm for Monte Carlo simulation of the transport of protons, and other heavy charged particles, in matter is presented. The emphasis is on the electromagnetic interactions (elastic and inelastic collisions) which are simulated using strategies similar to those employed in the electron-photon code PENELOPE. Elastic collisions are described in terms of numerical differential cross sections (DCSs) in the center-of-mass frame, calculated from the eikonal approximation with the Dirac-Hartree-Fock-Slater atomic potential. The polar scattering angle is sampled by employing an adaptive numerical algorithm which allows control of interpolation errors. The energy transferred to the recoiling target atoms (nuclear stopping) is consistently described by transformation to the laboratory frame. Inelastic collisions are simulated from DCSs based on the plane-wave Born approximation (PWBA), making use of the Sternheimer-Liljequist model of the generalized oscillator strength, with parameters adjusted to reproduce (1) the electronic stopping power read from the input file, and (2) the total cross sections for impact ionization of inner subshells. The latter were calculated from the PWBA including screening and Coulomb corrections. This approach provides quite a realistic description of the energy-loss distribution in single collisions, and of the emission of X-rays induced by proton impact. The simulation algorithm can be readily modified to include nuclear reactions, when the corresponding cross sections and emission probabilities are available, and bremsstrahlung emission.

  2. Algorithmic Processes for Increasing Design Efficiency.

    ERIC Educational Resources Information Center

    Terrell, William R.

    1983-01-01

    Discusses the role of algorithmic processes as a supplementary method for producing cost-effective and efficient instructional materials. Examines three approaches to problem solving in the context of developing training materials for the Naval Training Command: application of algorithms, quasi-algorithms, and heuristics. (EAO)

  3. Learning Intelligent Genetic Algorithms Using Japanese Nonograms

    ERIC Educational Resources Information Center

    Tsai, Jinn-Tsong; Chou, Ping-Yi; Fang, Jia-Cen

    2012-01-01

    An intelligent genetic algorithm (IGA) is proposed to solve Japanese nonograms and is used as a method in a university course to learn evolutionary algorithms. The IGA combines the global exploration capabilities of a canonical genetic algorithm (CGA) with effective condensed encoding, improved fitness function, and modified crossover and…

  4. In-Trail Procedure (ITP) Algorithm Design

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Siminiceanu, Radu I.

    2007-01-01

    The primary objective of this document is to provide a detailed description of the In-Trail Procedure (ITP) algorithm, which is part of the Airborne Traffic Situational Awareness In-Trail Procedure (ATSA-ITP) application. To this end, the document presents a high level description of the ITP Algorithm and a prototype implementation of this algorithm in the programming language C.

  5. Improvements of HITS Algorithms for Spam Links

    NASA Astrophysics Data System (ADS)

    Asano, Yasuhito; Tezuka, Yu; Nishizeki, Takao

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. In the days when the algorithm was proposed, most of the pages given a high score by the algorithm were really related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to an increase in spam links. In this paper, we first propose three methods to find “linkfarms,” that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages which are not spam pages with a high probability. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, using the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
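
    For reference, the original HITS iteration that these variants build on is sketched below: hub and authority scores are refined by alternating multiplications with the adjacency matrix, normalising each round. The trust-score and linkfarm-detection stages of the paper are not included; the dense adjacency-matrix representation is an illustrative simplification.

      import numpy as np

      def hits(adj, iters=100):
          """Kleinberg's HITS by power iteration. adj[i, j] = 1 if page i
          links to page j. Returns (hub, authority) score vectors."""
          n = adj.shape[0]
          hub = np.ones(n)
          auth = np.ones(n)
          for _ in range(iters):
              auth = adj.T @ hub                     # authorities are pointed to by hubs
              auth /= np.linalg.norm(auth) or 1.0
              hub = adj @ auth                       # hubs point to authorities
              hub /= np.linalg.norm(hub) or 1.0
          return hub, auth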

  6. A Robustly Stabilizing Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  7. An adaptive algorithm for noise rejection.

    PubMed

    Lovelace, D E; Knoebel, S B

    1978-01-01

    An adaptive algorithm for the rejection of noise artifact in 24-hour ambulatory electrocardiographic recordings is described. The algorithm is based on increased amplitude distortion or increased frequency of fluctuations associated with an episode of noise artifact. The results of application of the noise rejection algorithm on a high noise population of test tapes are discussed.

  8. New Results in Astrodynamics Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Coverstone-Carroll, V.; Hartmann, J. W.; Williams, S. N.; Mason, W. J.

    1998-01-01

    Genetic algorithms have gained popularity as an effective procedure for obtaining solutions to traditionally difficult space mission optimization problems. In this paper, a brief survey of the use of genetic algorithms to solve astrodynamics problems is presented and is followed by new results obtained from applying a Pareto genetic algorithm to the optimization of low-thrust interplanetary spacecraft missions.

  9. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.

  10. Optimisation of nonlinear motion cueing algorithm based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid

    2015-04-01

    Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver the most accurate human sensation to the simulator driver compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper provides the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of the classical washout filters is that they are tuned by the worst-case-scenario method, which is based on trial and error and affected by the driving and programming experience involved; this is the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to successfully modify the performance of the filters. The proposed optimised MCA is implemented in the MATLAB/Simulink software packages. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently without reaching

  11. AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations - Part 1: Algorithm description

    NASA Astrophysics Data System (ADS)

    Vanhellemont, Filip; Mateshvili, Nina; Blanot, Laurent; Étienne Robert, Charles; Bingen, Christine; Sofieva, Viktoria; Dalaudier, Francis; Tétard, Cédric; Fussen, Didier; Dekemper, Emmanuel; Kyrölä, Erkki; Laine, Marko; Tamminen, Johanna; Zehner, Claus

    2016-09-01

    The GOMOS instrument on Envisat has successfully demonstrated that a UV-Vis-NIR spaceborne stellar occultation instrument is capable of delivering quality data on the gaseous and particulate composition of Earth's atmosphere. Still, some problems related to data inversion remained to be examined. In the past, it was found that the aerosol extinction profile retrievals in the upper troposphere and stratosphere are of good quality at a reference wavelength of 500 nm but suffer from anomalous, retrieval-related perturbations at other wavelengths. Identification of algorithmic problems and subsequent improvement was therefore necessary. This work has been carried out; the resulting AerGOM Level 2 retrieval algorithm together with the first data version AerGOMv1.0 forms the subject of this paper. The AerGOM algorithm differs from the standard GOMOS IPF processor in a number of important ways: more accurate physical laws have been implemented, all retrieval-related covariances are taken into account, and the aerosol extinction spectral model is strongly improved. Retrieval examples demonstrate that the previously observed profile perturbations have disappeared, and the obtained extinction spectra look in general more consistent. We present a detailed validation study in a companion paper; here, to give a first idea of the data quality, a worst-case comparison at 386 nm shows SAGE II-AerGOM correlation coefficients that are up to 1 order of magnitude larger than the ones obtained with the GOMOS IPFv6.01 data set.

  12. Subjective audio quality evaluation of embedded-optimization-based distortion precompensation algorithms.

    PubMed

    Defraene, Bruno; van Waterschoot, Toon; Diehl, Moritz; Moonen, Marc

    2016-07-01

    Subjective audio quality evaluation experiments have been conducted to assess the performance of embedded-optimization-based precompensation algorithms for mitigating perceptible linear and nonlinear distortion in audio signals. It is concluded with statistical significance that the perceived audio quality is improved by applying an embedded-optimization-based precompensation algorithm, both in the case where (i) nonlinear distortion is present and (ii) where a combination of linear and nonlinear distortion is present. Moreover, a significant positive correlation is reported between the collected subjective and objective PEAQ audio quality scores, supporting the validity of using PEAQ to predict the impact of linear and nonlinear distortion on the perceived audio quality. PMID:27475197

  14. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicles (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions are calculated for each single-objective scenario. To get a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established considering the fuzzy factors of the system, such that a proper compromise trajectory can be acquired. In addition, NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in terms of dealing with the multi-objective skip trajectory optimization for the SMV.

  16. Pricing resources in LTE networks through multiobjective optimization.

    PubMed

    Lai, Yung-Liang; Jiang, Jehn-Ruey

    2014-01-01

    The LTE technology offers versatile mobile services that use different numbers of resources. This enables operators to provide subscribers or users with differential quality of service (QoS) to boost their satisfaction. On one hand, LTE operators need to price the resources high for maximizing their profits. On the other hand, pricing also needs to consider user satisfaction with allocated resources and prices to avoid "user churn," which means subscribers will unsubscribe services due to dissatisfaction with allocated resources or prices. In this paper, we study the pricing resources with profits and satisfaction optimization (PRPSO) problem in the LTE networks, considering the operator profit and subscribers' satisfaction at the same time. The problem is modelled as nonlinear multiobjective optimization with two optimal objectives: (1) maximizing operator profit and (2) maximizing user satisfaction. We propose to solve the problem based on the framework of the NSGA-II. Simulations are conducted for evaluating the proposed solution. PMID:24526889
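
    As background on the NSGA-II framework mentioned above, the sketch below shows the crowding-distance operator NSGA-II uses to preserve diversity when choosing among solutions of the same Pareto rank: for each objective the front is sorted and each solution accumulates the normalised gap between its neighbours, with boundary solutions kept unconditionally. This is the generic operator, not the paper's profit/satisfaction formulation.

      def crowding_distance(objs):
          """Crowding distance within one front. objs is a list of
          objective tuples; larger distance = less crowded."""
          n, m = len(objs), len(objs[0])
          dist = [0.0] * n
          for k in range(m):
              order = sorted(range(n), key=lambda i: objs[i][k])
              lo, hi = objs[order[0]][k], objs[order[-1]][k]
              dist[order[0]] = dist[order[-1]] = float("inf")  # keep extremes
              if hi == lo:
                  continue
              for idx in range(1, n - 1):
                  gap = objs[order[idx + 1]][k] - objs[order[idx - 1]][k]
                  dist[order[idx]] += gap / (hi - lo)
          return dist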

  17. Parallelized dilate algorithm for remote sensing image.

    PubMed

    Zhang, Suli; Hu, Haoran; Pan, Xin

    2014-01-01

    As an important algorithm, the dilate (morphological dilation) algorithm can give a more connected view of a remote sensing image that has broken lines or objects. However, with the technological progress of satellite sensors, the resolution of remote sensing images has been increasing and data volumes have become very large. This slows the algorithm down, or prevents it from obtaining a result within limited memory or time. To solve this problem, our research proposed a parallelized dilate algorithm for remote sensing images based on MPI and OpenMP. Experiments show that our method runs faster than the traditional single-process algorithm.
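
    For concreteness, a minimal serial binary dilation with a square structuring element is sketched below; a parallelization along the lines of the paper would split the image into row blocks across processes, each exchanging a halo of k//2 boundary rows with its neighbours. This generic sketch is not the authors' implementation.

      import numpy as np

      def dilate(img, k=3):
          """Binary dilation with a k x k square structuring element:
          an output pixel is set if any input pixel under the window is set."""
          r = k // 2
          h, w = img.shape
          padded = np.pad(img, r, mode="constant")   # zero border
          out = np.zeros_like(img)
          for dy in range(k):                        # OR together all k*k shifts
              for dx in range(k):
                  out |= padded[dy : dy + h, dx : dx + w]
          return out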

  18. Alternative learning algorithms for feedforward neural networks

    SciTech Connect

    Vitela, J.E.

    1996-03-01

    The efficiency of the back-propagation algorithm for training feedforward multilayer neural networks has given rise to the erroneous belief among many neural network users that it is the only possible way to obtain the gradient of the error in this type of network. The purpose of this paper is to show how alternative algorithms can be obtained within the framework of ordered partial derivatives. Two alternative forward-propagating algorithms are derived in this work which are mathematically equivalent to the BP algorithm. This systematic way of obtaining learning algorithms, illustrated here with this particular type of neural network, can also be used with other types, such as recurrent neural networks.

  19. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
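
    For readers new to the underlying mechanics, a minimal generational genetic algorithm on bit strings is sketched below, with tournament selection, one-point crossover, and per-bit mutation; it illustrates the kind of machinery a general-purpose tool like Splicer packages up, and is not Splicer's own code. All parameter values are illustrative.

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=50, gens=100,
                            p_cross=0.9, p_mut=0.01):
          """Maximise fitness(bit_list) over bit strings of length n_bits."""
          pop = [[random.randint(0, 1) for _ in range(n_bits)]
                 for _ in range(pop_size)]
          for _ in range(gens):
              scored = [(fitness(ind), ind) for ind in pop]
              select = lambda: max(random.sample(scored, 3))[1]  # 3-way tournament
              nxt = []
              while len(nxt) < pop_size:
                  a, b = select(), select()
                  if random.random() < p_cross:                  # one-point crossover
                      cut = random.randrange(1, n_bits)
                      a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
                  nxt += [[bit ^ (random.random() < p_mut) for bit in a],
                          [bit ^ (random.random() < p_mut) for bit in b]]
              pop = nxt[:pop_size]
          return max(pop, key=fitness)

      # e.g. genetic_algorithm(sum) evolves toward the all-ones string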

  20. Is there a best hyperspectral detection algorithm?

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Lockwood, R.; Cooley, T.; Jacobson, J.

    2009-05-01

    A large number of hyperspectral detection algorithms have been developed and used over the last two decades. Some algorithms are based on highly sophisticated mathematical models and methods; others are derived using intuition and simple geometrical concepts. The purpose of this paper is threefold. First, we discuss the key issues involved in the design and evaluation of detection algorithms for hyperspectral imaging data. Second, we present a critical review of existing detection algorithms for practical hyperspectral imaging applications. Finally, we argue that the "apparent" superiority of sophisticated algorithms with simulated data or in laboratory conditions, does not necessarily translate to superiority in real-world applications.

  1. NSLS II Vacuum System

    SciTech Connect

    Ferreira, M.; Doom, L.; Hseuh, H.; Longo, C.; Settepani, P.; Wilson, K.; Hu, J.

    2009-09-13

    National Synchrotron Light Source II, being constructed at Brookhaven, is a 3-GeV, 500 mA, 3rd generation synchrotron radiation facility with ultra-low-emittance electron beams. The storage ring vacuum system has a circumference of 792 m and consists of over 250 vacuum chambers, with a simulated average operating pressure of less than 1 x 10^-9 mbar. A summary of the updated design of the vacuum system, including girder supports of the chambers, gauges, vacuum pumps, bellows, beam position monitors, and a simulation of the average pressure, will be shown. A brief description of the techniques and procedures for cleaning and mounting the chambers is given.

  2. Delta II Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Final preparations for liftoff of the Delta II Mars Pathfinder rocket are shown. Activities include loading the liquid oxygen, completing the construction of the Rover, and placing the Rover into the Lander. After the countdown, important visual events include the launch of the Delta rocket, burnout and separation of the three solid rocket boosters, and the main engine cutoff. The cutoff of the main engine marks the beginning of the second-stage engine. After the completion of the second stage, the third-stage engine ignites and then cuts off. Once the third-stage engine cuts off, spacecraft separation occurs.

  3. Run II luminosity progress

    SciTech Connect

    Gollwitzer, K.; /Fermilab

    2007-06-01

    The Fermilab Tevatron Collider Run II program continues at the energy and luminosity frontier of high energy particle physics. Over 3 fb^-1 of integrated luminosity has been delivered to each of the collider experiments CDF and D0. Upgrades and improvements to the production and collection of antiprotons in the Antiproton Source have led to an increased number of particles stored in the Recycler. Electron cooling and associated improvements have helped make a brighter antiproton beam at collision. Tevatron improvements to handle the increased number of particles and the beam lifetimes have resulted in an increase in luminosity.

  4. Color sorting algorithm based on K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, BaoFeng; Huang, Qian

    2009-11-01

    In the process of raisin production there are a variety of color impurities, which need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for image pre-processing, and the gray-scale distribution characteristics of the raisin image were found. In order to obtain the color-difference image and reduce disturbance, frame subtraction was performed, subtracting the background image data from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors and external features such as mildew and spots, the characteristics of the raisin images were calculated so as to fully reflect the quality differences between raisins of different types. After the processing above, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; on this basis, the image data were divided into different categories, so that the categories of abnormal colors were distinguished. By the use of this algorithm, raisins of abnormal colors and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins among the sorted-out grains was less than one eighth.
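
    For reference, the K-means step underlying such a colour sorter is sketched below: pixel feature vectors (e.g. RGB triples) are alternately assigned to the nearest cluster centre and the centres are recomputed as cluster means. The pre-processing and feature-extraction stages of the paper are not included; parameter values are illustrative.

      import numpy as np

      def kmeans(pixels, k=3, iters=20, seed=0):
          """Plain K-means on an (N, d) array of pixel feature vectors.
          Returns per-pixel labels and the k cluster centres."""
          rng = np.random.default_rng(seed)
          centres = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
          for _ in range(iters):
              d = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
              labels = d.argmin(axis=1)              # nearest-centre assignment
              for j in range(k):
                  if np.any(labels == j):
                      centres[j] = pixels[labels == j].mean(axis=0)
          return labels, centres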

  5. Empirical study of parallel LRU simulation algorithms

    NASA Technical Reports Server (NTRS)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. The other two algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
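
    For context, a sketch of the serial stack-distance computation that these parallel algorithms accelerate is given below; it is a textbook baseline under illustrative assumptions, not one of the five algorithms from the paper.

        # For each reference, the LRU stack distance is the number of distinct
        # addresses touched since its last use; a cold miss has infinite distance.
        def lru_stack_distances(trace):
            stack = []          # most-recently-used address at the front
            distances = []
            for addr in trace:
                if addr in stack:
                    depth = stack.index(addr) + 1   # 1-based stack distance
                    stack.remove(addr)
                else:
                    depth = float("inf")            # cold miss
                stack.insert(0, addr)
                distances.append(depth)
            return distances

        # A reference hits in a cache of size C exactly when its distance <= C,
        # so one pass yields miss ratios for every cache size simultaneously.
        print(lru_stack_distances(["a", "b", "a", "c", "b"]))  # [inf, inf, 2, inf, 3]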

  6. New algorithms for binary wavefront optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolong; Kner, Peter

    2015-03-01

    Binary amplitude modulation promises to allow rapid focusing through strongly scattering media with a large number of segments due to the faster update rates of digital micromirror devices (DMDs) compared to spatial light modulators (SLMs). While binary amplitude modulation has a lower theoretical enhancement than phase modulation, the faster update rate should more than compensate for the difference, a factor of π²/2. Here we present two new algorithms, a genetic algorithm and a transmission matrix algorithm, for optimizing the focus with binary amplitude modulation that achieve enhancements close to the theoretical maximum. Genetic algorithms have been shown to work well in noisy environments and we show that the genetic algorithm performs better than a stepwise algorithm. Transmission matrix algorithms allow complete characterization and control of the medium but require phase control either at the input or output. Here we introduce a transmission matrix algorithm that works with only binary amplitude control and intensity measurements. We apply these algorithms to binary amplitude modulation using a Texas Instruments digital micromirror device. We report an enhancement of 152 with 1536 segments (9.9%×N) using the genetic algorithm and an enhancement of 136 with 1536 segments (8.9%×N) using the intensity-only transmission matrix algorithm.
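
    A hedged sketch of genetic optimization of a binary amplitude mask is shown below; a random complex transmission vector stands in for the scattering medium, and the segment count, population size, and mutation rate are illustrative choices rather than the experimental values.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 256                                              # number of DMD segments
        t = rng.normal(size=N) + 1j * rng.normal(size=N)     # simulated transmission

        def fitness(mask):
            # In the experiment this would be a measured focus intensity.
            return abs(t @ mask) ** 2

        pop = rng.integers(0, 2, size=(30, N))               # random binary masks
        for gen in range(200):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[scores.argsort()[::-1][:10]]       # keep the fittest masks
            children = []
            for _ in range(len(pop) - len(parents)):
                a, b = parents[rng.integers(0, 10, 2)]
                cross = rng.integers(0, 2, N).astype(bool)
                child = np.where(cross, a, b)                # uniform crossover
                flip = rng.random(N) < 0.01                  # 1% mutation rate
                children.append(np.where(flip, 1 - child, child))
            pop = np.vstack([parents, children])

        best = max(pop, key=fitness)
        random_mean = np.mean([fitness(rng.integers(0, 2, N)) for _ in range(100)])
        print("enhancement over mean random mask:", fitness(best) / random_mean)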

  7. Algorithms versus architectures for computational chemistry

    NASA Technical Reports Server (NTRS)

    Partridge, H.; Bauschlicher, C. W., Jr.

    1986-01-01

    The algorithms employed are computationally intensive and, as a result, increased performance (both algorithmic and architectural) is required to improve accuracy and to treat larger molecular systems. Several benchmark quantum chemistry codes are examined on a variety of architectures. While these codes are only a small portion of a typical quantum chemistry library, they illustrate many of the computationally intensive kernels and data manipulation requirements of some applications. Furthermore, understanding the performance of the existing algorithms on present and proposed supercomputers serves as a guide for future programs and algorithm development. The algorithms investigated are: (1) a sparse symmetric matrix vector product; (2) a four index integral transformation; and (3) the calculation of diatomic two electron Slater integrals. Vectorization strategies for these algorithms are examined for both the Cyber 205 and the Cray XMP. In addition, multiprocessor implementations of the algorithms are examined on the Cray XMP and on the MIT static data flow machine proposed by Dennis.

  8. A compilation of jet finding algorithms

    SciTech Connect

    Flaugher, B.; Meier, K.

    1992-12-31

    Technical descriptions of jet finding algorithms currently in use in p{anti p} collider experiments (CDF, UA1, UA2), e{sup +}e{sup {minus}} experiments and Monte-Carlo event generators (LUND programs, ISAJET) have been collected. For the hadron collider experiments, the clustering methods fall into two categories: cone algorithms and nearest-neighbor algorithms. In addition, UA2 has employed a combination of both methods for some analyses. While there are clearly differences between the cone and nearest-neighbor algorithms, the authors have found that there are also differences among the cone algorithms in the details of how the centroid of a cone cluster is located and how the E{sub T} and P{sub T} of the jet are defined. The most commonly used jet algorithm in electron-positron experiments is the JADE-type cluster algorithm. Five incarnations of this approach are described.

  9. A synthesized heuristic task scheduling algorithm.

    PubMed

    Dai, Yanyan; Zhang, Xiangli

    2014-01-01

    Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: first, critical tasks have the highest priority; second, tasks with a longer path to the exit task are selected; and third, tasks with fewer predecessors are chosen to schedule. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance.
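
    A minimal sketch of the three-level prioritizing rule as described is given below; the Task fields and their values are hypothetical, and computing the critical path and the longest path to the exit task from a DAG is omitted, so this shows only the tie-breaking order, not the full HCPPEFT heuristic.

        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            critical: bool          # lies on the critical path of the task DAG
            path_to_exit: float     # longest-path length from this task to the exit task
            n_predecessors: int

        def priority_key(t):
            # Descending sort on this tuple encodes the three priority levels:
            # critical tasks first, then longer path to exit, then fewer predecessors.
            return (t.critical, t.path_to_exit, -t.n_predecessors)

        tasks = [Task("A", False, 12.0, 2), Task("B", True, 9.0, 3), Task("C", False, 12.0, 1)]
        order = sorted(tasks, key=priority_key, reverse=True)
        print([t.name for t in order])   # ['B', 'C', 'A']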

  10. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is employed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied to computational problems that incorporate path-based constraints, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery as well as many practical optimization problems, and it can be extended to general shortest path problems.

  11. Wire Detection Algorithms for Navigation

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia I.

    2002-01-01

    In this research we addressed the problem of obstacle detection for low altitude rotorcraft flight. In particular, the problem of detecting thin wires in the presence of image clutter and noise was studied. Wires present a serious hazard to rotorcraft. Since they are very thin, detecting them early enough for the pilot to take evasive action is difficult, as their images can be less than one or two pixels wide. Two approaches were explored for this purpose. The first approach involved a technique for sub-pixel edge detection and subsequent post-processing to reduce false alarms. After reviewing the line detection literature, an algorithm for sub-pixel edge detection proposed by Steger was identified as having good potential to solve the considered task. The algorithm was tested using a set of images synthetically generated by combining real outdoor images with computer generated wire images. The performance of the algorithm was evaluated both at the pixel and the wire levels. It was observed that the algorithm performs well, provided that the wires are not too thin (or distant) and that some post-processing is performed to remove false alarms due to clutter. The second approach involved the use of an example-based learning scheme, namely Support Vector Machines (SVMs). The purpose of this approach was to explore the feasibility of an example-based learning approach to detecting wires from their images. SVMs have emerged as a promising pattern classification tool and have been used in various applications. It was found that this approach is not suitable for very thin wires and, of course, not suitable at all for sub-pixel thick wires. High dimensionality of the data as such does not present a major problem for SVMs. However, it is desirable to have a large number of training examples, especially for high dimensional data. The main difficulty in using SVMs (or any other example-based learning

  12. ALFA: Automated Line Fitting Algorithm

    NASA Astrophysics Data System (ADS)

    Wesson, R.

    2015-12-01

    ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.

  13. Algorithms for skiascopy measurement automatization

    NASA Astrophysics Data System (ADS)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to a 180 Hz refresh rate, equipped with a long-focus objective, and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state is developed based on the intensity changes of the fundus reflex.

  14. An efficient parallel termination detection algorithm

    SciTech Connect

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  15. Evaluation of the computerized procedures Manual II (COPMA II)

    SciTech Connect

    Converse, S.A.

    1995-11-01

    The purpose of this study was to evaluate the effects of a computerized procedure system, the Computerized Procedure Manual II (COPMA-II), on the performance and mental workload of licensed reactor operators. To evaluate COPMA-II, eight teams of two operators were trained to operate a scaled pressurized water reactor facility (SPWRF) with traditional paper procedures and with COPMA-II. Following training, each team operated the SPWRF under normal operating conditions with both paper procedures and COPMA-II. The teams then performed one of two accident scenarios with paper procedures and the remaining accident scenario with COPMA-II. Performance measures and subjective estimates of mental workload were recorded for each performance trial. The most important finding of the study was that the operators committed only half as many errors during the accident scenarios with COPMA-II as they committed with paper procedures. However, in the accident scenario trials, time to initiate a procedure was fastest with paper procedures. For performance under normal operating conditions, there was no difference in time to initiate or to complete a procedure, or in the number of errors committed, with paper procedures and with COPMA-II. There were no consistent differences in the mental workload ratings operators recorded for trials with paper procedures and COPMA-II.

  16. Region processing algorithm for HSTAMIDS

    NASA Astrophysics Data System (ADS)

    Ngan, Peter; Burke, Sean; Cresci, Roger; Wilson, Joseph N.; Gader, Paul; Ho, Dominic K. C.

    2006-05-01

    The AN/PSS-14 (a.k.a. HSTAMIDS) has been tested for its performance in Southeast Asia (Thailand), southern Africa (Namibia), and, in November of 2005, in Southwest Asia (Afghanistan). The system has proven effective in manual demining, particularly in discriminating indigenous metallic artifacts in the minefields. The Humanitarian Demining Research and Development (HD R&D) Program has sought to further improve the system to address specific needs in several areas. One particular area of these improvement efforts is the development of a mine detection/discrimination software algorithm called Region Processing (RP). RP is an innovative processing technique designed to work on a set of data acquired in a unique sweep pattern over a region-of-interest (ROI). The RP team is a joint effort of three universities (the University of Florida, the University of Missouri, and Duke University), currently led by the University of Florida. This paper describes the state-of-the-art Region Processing algorithm, its implementation in the current HSTAMIDS system, and its most recent test results.

  17. Enhanced algorithms for stochastic programming

    SciTech Connect

    Krishna, A.S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in the method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient, as it reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function; this method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to provide a starting solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.

  18. Quantum Algorithms for Fermionic Simulations

    NASA Astrophysics Data System (ADS)

    Ortiz, Gerardo

    2001-06-01

    The probabilistic simulation of quantum systems in classical computers is known to be limited by the so-called sign or phase problem, a problem believed to be of exponential complexity. This ``disease'' manifests itself by the exponentially hard task of estimating the expectation value of an observable with a given error. Therefore, probabilistic simulations on a classical computer do not seem to qualify as a practical computational scheme for general quantum many-body problems. The limiting factors, for whatever reasons, are negative or complex-valued probabilities whether the simulations are done in real or imaginary time. In 1981 Richard Feynman raised some provocative questions in connection to the ``exact imitation'' of such systems using a special device named a ``quantum computer.'' Feynman hesitated about the possibility of imitating fermion systems using such a device. Here we address some of his concerns and, in particular, investigate the simulation of fermionic systems. We show how quantum algorithms avoid the sign problem by reducing the complexity from exponential to polynomial. Our demonstration is based upon the use of isomorphisms of *-algebras (spin-particle transformations) which connect different models of quantum computation. In particular, we present fermionic models (the fabled ``Grassmann Chip''); but, of course, these models are not the only ones since our spin-particle connections allow us to introduce more ``esoteric'' models of computation. We present specific quantum algorithms that illustrate the main points of our algebraic approach.

  19. Ligand Identification Scoring Algorithm (LISA)

    PubMed Central

    Zheng, Zheng; Merz, Kenneth M.

    2011-01-01

    A central problem in de novo drug design is determining the binding affinity of a ligand with a receptor. A new scoring algorithm is presented that estimates the binding affinity of a protein-ligand complex given a three-dimensional structure. The method, LISA (Ligand Identification Scoring Algorithm), uses an empirical scoring function to describe the binding free energy. Interaction terms have been designed to account for van der Waals (VDW) contacts, hydrogen bonding, desolvation effects and metal chelation to model the dissociation equilibrium constants using a linear model. Atom types have been introduced to differentiate the parameters for VDW, H-bonding interactions and metal chelation between different atom pairs. A training set of 492 protein-ligand complexes was selected for the fitting process. Different test sets have been examined to evaluate its ability to predict experimentally measured binding affinities. Comparison with other well-known scoring functions shows that LISA has advantages over many existing scoring functions in simulating protein-ligand binding affinity, especially metalloprotein-ligand binding affinity. An artificial neural network (ANN) was also used to demonstrate that the energy terms in LISA are well designed and do not require extra cross terms. PMID:21561101

  20. The Aquarius Salinity Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank; Hilburn, Kyle; Lagerloef, Gary; Le Vine, David

    2012-01-01

    The first part of this presentation gives an overview of the Aquarius salinity retrieval algorithm. The instrument calibration [2] converts Aquarius radiometer counts into antenna temperatures (TA). The salinity retrieval algorithm converts those TA into brightness temperatures (TB) at a flat ocean surface. As a first step, contributions arising from the intrusion of solar, lunar and galactic radiation are subtracted. The antenna pattern correction (APC) removes the effects of cross-polarization contamination and spillover. The Aquarius radiometer measures the 3rd Stokes parameter in addition to vertical (v) and horizontal (h) polarizations, which allows for an easy removal of ionospheric Faraday rotation. The atmospheric absorption at L-band is almost entirely due to molecular oxygen, which can be calculated based on auxiliary input fields from numerical weather prediction models and then successively removed from the TB. The final step in the TA to TB conversion is the correction for the roughness of the sea surface due to wind, which is addressed in more detail in section 3. The TB of the flat ocean surface can now be matched to a salinity value using a surface emission model that is based on a model for the dielectric constant of sea water [3], [4] and an auxiliary field for the sea surface temperature. In the current processing only v-pol TB are used for this last step.

  1. A signal processing approach for enriched region detection in RNA polymerase II ChIP-seq data

    PubMed Central

    2012-01-01

    Background RNA polymerase II (PolII) is essential in gene transcription, and ChIP-seq experiments have been used to study PolII binding patterns over the entire genome. However, since PolII-enriched regions in the genome can be very long, existing peak-finding algorithms for ChIP-seq data are not adequate for identifying such regions. Methods Here we propose an enriched-region detection method for ChIP-seq data that identifies long enriched regions by combining a signal denoising algorithm with a false discovery rate (FDR) approach. The binned ChIP-seq data for PolII are first processed using a non-local means (NL-means) algorithm for denoising. Then, an FDR approach is developed to determine the threshold for marking enriched regions in the binned histogram. Results We first test our method using a public PolII ChIP-seq dataset and compare our results with published results obtained using the published algorithm HPeak. Our results show high consistency with the published results (80-100%). Then, we apply our proposed method to PolII ChIP-seq data generated in our own study on the effects of hormone on the breast cancer cell line MCF7. The results demonstrate that our method can effectively identify long enriched regions in ChIP-seq datasets. Specifically, in MCF7 control samples we identified 5,911 segments of length at least 4 kbp (maximum 233,000 bp), and in E2-treated MCF7 samples we identified 6,200 such segments (maximum 325,000 bp). Conclusions We demonstrated the effectiveness of this method in studying the binding patterns of PolII in cancer cells, which enables further deep analysis of transcription regulation and epigenetics. Our method complements existing peak detection algorithms for ChIP-seq experiments. PMID:22536865
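
    A hedged sketch of the overall pipeline (bin, denoise, threshold) is given below; a simple moving average stands in for the NL-means denoiser and a permutation null stands in for the paper's FDR procedure, so it illustrates the structure of the method rather than reproducing it.

        import numpy as np

        def enriched_regions(bin_counts, window=5, fdr=0.05, n_perm=200, seed=0):
            rng = np.random.default_rng(seed)
            kernel = np.ones(window) / window
            smooth = np.convolve(bin_counts, kernel, mode="same")   # denoise
            # Null distribution: smoothed versions of bin-shuffled signals.
            null = np.concatenate([np.convolve(rng.permutation(bin_counts),
                                               kernel, mode="same")
                                   for _ in range(n_perm)])
            threshold = np.quantile(null, 1 - fdr)   # permutation-based cutoff
            return smooth >= threshold               # boolean mask of enriched bins

        counts = np.concatenate([np.random.poisson(2, 500),
                                 np.random.poisson(10, 50),   # a long enriched region
                                 np.random.poisson(2, 500)])
        mask = enriched_regions(counts)
        print("enriched bins:", mask.sum())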

  2. Antimicrobial activity of the synthetic peptide scolopendrasin ii from the centipede Scolopendra subspinipes mutilans.

    PubMed

    Kwon, Young-Nam; Lee, Joon Ha; Kim, In-Woo; Kim, Sang-Hee; Yun, Eun-Young; Nam, Sung-Hee; Ahn, Mi-Young; Jeong, Mihye; Kang, Dong-Chul; Lee, In Hee; Hwang, Jae Sam

    2013-10-28

    The centipede Scolopendra subspinipes mutilans is a medicinally important arthropod species. However, its transcriptome is not currently available, and transcriptome analysis would be useful in providing insight at the molecular level. Hence, we performed de novo RNA sequencing of S. subspinipes mutilans using next-generation sequencing. We generated a novel peptide (scolopendrasin II) based on an SVM algorithm and biochemically evaluated the in vitro antimicrobial activity of scolopendrasin II against various microbes. Scolopendrasin II showed antimicrobial activity against gram-positive and -negative bacterial strains, including antibiotic-resistant gram-negative bacteria, and the yeast Candida albicans, as determined by radial diffusion and colony count assays, without hemolytic activity. In addition, we confirmed that scolopendrasin II binds to the surface of bacteria through a specific interaction with lipoteichoic acid and lipopolysaccharide, which are bacterial cell-wall components. In conclusion, our results suggest that scolopendrasin II may be useful for developing peptide antibiotics.

  3. A Breeder Algorithm for Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Wang, S.; Ware, A. S.; Hirshman, S. P.; Spong, D. A.

    2003-10-01

    An optimization algorithm that combines the global parameter space search properties of a genetic algorithm (GA) with the local parameter search properties of a Levenberg-Marquardt (LM) algorithm is described. Optimization algorithms used in the design of stellarator configurations are often classified as either global (such as GA and differential evolution) or local (such as LM). While nonlinear least-squares methods such as LM are effective at minimizing a cost function based on desirable plasma properties such as quasi-symmetry and ballooning stability, whether the result is a local or global minimum is unknown. The advantage of evolutionary algorithms such as GA is that they search a wider range of parameter space and are not susceptible to getting stuck in a local minimum of the cost function. Their disadvantage is that in some cases they are ineffective at finding a minimum state. Here, we describe the initial development of the Breeder Algorithm (BA). BA consists of a genetic algorithm outer loop with an inner loop in which each generation is refined using an LM step. Initial results for a quasi-poloidal stellarator optimization will be presented, along with a comparison to existing optimization algorithms.
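
    A hedged sketch of the BA structure (GA outer loop, LM inner refinement) on a toy least-squares cost is shown below; scipy.optimize.least_squares with method='lm' supplies the LM step, and the cost function and GA settings are illustrative stand-ins for a stellarator cost function.

        import numpy as np
        from scipy.optimize import least_squares

        def residuals(x):
            # Toy least-squares target with multiple local minima.
            return np.array([x[0] ** 2 + x[1] ** 2 - 1.0, np.sin(3 * x[0]) - x[1]])

        rng = np.random.default_rng(0)
        pop = rng.uniform(-2, 2, size=(20, 2))

        for gen in range(10):
            # Inner loop: refine every member with a Levenberg-Marquardt step.
            pop = np.array([least_squares(residuals, x, method="lm").x for x in pop])
            cost = np.array([np.sum(residuals(x) ** 2) for x in pop])
            parents = pop[cost.argsort()[:5]]
            # Outer loop: recombine parents with Gaussian mutation.
            children = [0.5 * parents[rng.integers(5)] + 0.5 * parents[rng.integers(5)]
                        + rng.normal(0, 0.2, 2) for _ in range(15)]
            pop = np.vstack([parents, children])

        best = pop[np.argmin([np.sum(residuals(x) ** 2) for x in pop])]
        print("best cost:", np.sum(residuals(best) ** 2))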

  4. Effects of visualization on algorithm comprehension

    NASA Astrophysics Data System (ADS)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms that are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis discusses the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.

  5. Medical target prediction from genome sequence: combining different sequence analysis algorithms with expert knowledge and input from artificial intelligence approaches.

    PubMed

    Dandekar, T; Du, F; Schirmer, R H; Schmidt, S

    2001-12-01

    By exploiting the rapid increase in available sequence data, the definition of medically relevant protein targets has been improved by a combination of: (i) differential genome analysis (target list); and (ii) analysis of individual proteins (target analysis). Fast sequence comparisons, data mining, and genetic algorithms further promote these procedures. Mycobacterium tuberculosis proteins were chosen as applied examples.

  6. Mod II Stirling engine overviews

    NASA Technical Reports Server (NTRS)

    Farrell, Roger A.

    1988-01-01

    The Mod II engine is a second-generation automotive Stirling engine (ASE) optimized for part-power operation. It has been designed specifically to meet the fuel economy and exhaust emissions objectives of the ASE development program. The design, test experience, performance, and comparison of data to analytical performance estimates of the Mod II engine to date are reviewed. Estimates of Mod II performance in its final configuration are also given.

  7. A Multipopulation PSO Based Memetic Algorithm for Permutation Flow Shop Scheduling

    PubMed Central

    Liu, Ruochen; Ma, Chenlin; Ma, Wenping; Li, Yangyang

    2013-01-01

    The permutation flow shop scheduling problem (PFSSP) is part of production scheduling and belongs to the hardest class of combinatorial optimization problems. In this paper, a multipopulation particle swarm optimization (PSO) based memetic algorithm (MPSOMA) is proposed. In the proposed algorithm, the whole particle swarm population is divided into three subpopulations, in each of which every particle evolves by standard PSO; each subpopulation is then updated using different local search schemes, namely variable neighborhood search (VNS) and an individual improvement scheme (IIS). Then, the best particle of each subpopulation is selected to construct a probabilistic model by using an estimation of distribution algorithm (EDA), and three particles are sampled from the probabilistic model to update the worst individual in each subpopulation. The best particle in the entire particle swarm is used to update the global optimal solution. The proposed MPSOMA is compared with two recently proposed algorithms, namely a PSO based memetic algorithm (PSOMA) and hybrid particle swarm optimization with estimation of distribution algorithm (PSOEDA), on 29 well-known PFSSPs taken from the OR-library, and the experimental results show that it is an effective approach for the PFSSP. PMID:24453841

  8. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques spanning four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.

  9. PEP-II Alignment

    SciTech Connect

    Gaydosh, Michael

    2003-05-14

    The PEP-II Asymmetric B-factory consists of two independent storage rings, one located atop the other in the 2200m-circumference PEP tunnel. The high-energy ring, which stores a 9-GeV electron beam, is an upgrade of the existing PEP collider. It re-utilizes all of the PEP magnets and incorporates a state-of-the-art copper vacuum chamber and a new RF system capable of supporting a one-amp stored beam. The low-energy ring, which stores 3.1-GeV positrons, is new construction. Injection is achieved by extracting electrons and positrons at collision energies from the SLC and transporting them each in a dedicated bypass line. The low-emittance SLC beams will be used for the injection process.

  10. Phase II Final Report

    SciTech Connect

    Schuknecht, Nate; White, David; Hoste, Graeme

    2014-09-11

    The SkyTrough DSP will advance the state of the art in parabolic troughs for utility applications, with a larger aperture, higher operating temperature, and lower cost. The goal of this project was to develop a parabolic trough collector that enables solar electricity generation in the 2020 marketplace for a 216 MWe nameplate baseload power plant. This plant requires an LCOE of 9¢/kWhe, given a capacity factor of 75%, a fossil fuel limit of 15%, a fossil fuel cost of $6.75/MMBtu, a $25.00/kWht thermal storage cost, and a domestic installation corresponding to Daggett, CA. The result of our optimization was a trough design of larger aperture and operating temperature than has been fielded in large, utility-scale parabolic trough applications: 7.6 m width x 150 m SCA length (1,118 m2 aperture), with four 90 mm diameter x 4.7 m receivers per mirror module and an operating temperature of 500°C. The results from physical modeling in the System Advisor Model indicate that, for a capacity factor of 75%, the LCOE will be 8.87¢/kWhe. SkyFuel examined the design of almost every parabolic trough component from a perspective of load and performance at aperture areas from 500 to 2,900 m2. Aperture-dependent design was combined with fixed quotations for similar parts from the commercialized SkyTrough product, and established an installed cost of $130/m2 in 2020. This project was conducted in two phases. Phase I was a preliminary design, culminating in an optimum trough size and further improvement of an advanced polymeric reflective material; this phase was completed in October of 2011. Phase II was the detailed engineering design and component testing, which culminated in the fabrication and testing of a single mirror module. Phase II is complete, and this document presents a summary of the comprehensive work.

  11. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    SciTech Connect

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-12-31

    The ARIES {number_sign}1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as `acceptable` or `suspect`. Specific topics described include the vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  12. Optimization of floodplain monitoring sensors through an entropy approach

    NASA Astrophysics Data System (ADS)

    Ridolfi, E.; Yan, K.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.; Russo, F.; Bates, P. D.

    2012-04-01

    minimization of total correlation (a measure of redundancy). From analysis of the Pareto optimal solutions, the optimal number of sensors is evaluated. Finally, the LISFLOOD-FP model is calibrated considering both the whole sensor network and the set of sensors chosen by the non-dominated sorting genetic algorithm (NSGA-II) that solved the MOOP. In the absence of a suitable observed data set, the calibration is performed by treating results provided by a physically based 2D hydrodynamic model (TELEMAC 2D) as 'truth', in order to evaluate the potential value of the model based on the coarse-resolution DEM (SRTM).

  13. On mapping systolic algorithms onto the hypercube

    SciTech Connect

    Ibarra, O.H.; Sohn, S.M. )

    1990-01-01

    Much effort has been devoted toward developing efficient algorithms for systolic arrays. Here the authors consider the problem of mapping these algorithms into efficient algorithms for a fixed-size hypercube architecture. They describe in detail several optimal implementations of algorithms given for one-way one- and two-dimensional systolic arrays. Since interprocessor communication is many times slower than local computation in parallel computers built to date, the problem of efficient communication is specifically addressed for these mappings. In order to experimentally validate the technique, five systolic algorithms were mapped in various ways onto a 64-node NCUBE/7 MIMD hypercube machine. The algorithms are for the following problems: the shuffle scheduling problem, finite impulse response filtering, linear context-free language recognition, matrix multiplication, and computing the Boolean transitive closure. Experimental evidence indicates that good performance is obtained for the mappings.

  14. Fast training algorithms for multilayer neural nets.

    PubMed

    Brent, R P

    1991-01-01

    An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.

  15. Visualizing output for a data learning algorithm

    NASA Astrophysics Data System (ADS)

    Carson, Daniel; Graham, James; Ternovskiy, Igor

    2016-05-01

    This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One proposed application of this algorithm is traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of existing data and related research material with which to work. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm; flexibility is the end goal. However, we still needed somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from running a few of the traffic-based scenarios we designed.

  16. A novel chaos danger model immune algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Qingyang; Wang, Song; Zhang, Li; Liang, Ying

    2013-11-01

    Making use of the ergodicity and randomness of chaos, a novel chaos danger model immune algorithm (CDMIA) is presented by combining the benefits of chaos and the danger model immune algorithm (DMIA). To maintain the diversity of antibodies and ensure the performance of the algorithm, two chaotic operators are proposed: chaotic disturbance is used to update the danger antibody and exploit the local solution space, and chaotic regeneration is applied to the safe antibody to explore the entire solution space. In addition, the performance of the algorithm is examined on several benchmark problems. The experimental results indicate that the diversity of the population is improved noticeably and that the CDMIA exhibits a higher efficiency than the danger model immune algorithm and other optimization algorithms.

  17. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.

  18. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that introduces allowed positions for the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
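
    A hedged sketch in the spirit of LPMP is given below: each iteration matches one Lorentzian to the residual spectrum and subtracts it. A grid search over candidate centers and widths stands in for the paper's matching step, and all parameters are illustrative.

        import numpy as np

        def lorentzian(x, x0, gamma):
            return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

        def lpmp(x, spectrum, n_peaks=3, widths=(0.05, 0.1, 0.2)):
            residual = spectrum.copy()
            peaks = []
            for _ in range(n_peaks):
                best = None
                for x0 in x:                    # candidate centers on the grid
                    for g in widths:            # candidate half-widths
                        atom = lorentzian(x, x0, g)
                        amp = residual @ atom / (atom @ atom)   # least-squares amplitude
                        gain = amp ** 2 * (atom @ atom)         # energy captured
                        if best is None or gain > best[0]:
                            best = (gain, amp, x0, g)
                _, amp, x0, g = best
                residual -= amp * lorentzian(x, x0, g)          # subtract the matched peak
                peaks.append((x0, g, amp))
            return peaks, residual

        x = np.linspace(0, 1, 400)
        spec = (2 * lorentzian(x, 0.3, 0.05) + lorentzian(x, 0.7, 0.1)
                + 0.05 * np.random.randn(400))
        peaks, _ = lpmp(x, spec)
        print(peaks)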

  19. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with many objectives (especially more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms, with minimal computational overhead, for MO optimization. PMID:24470795

  20. Realization of a scalable Shor algorithm.

    PubMed

    Monz, Thomas; Nigg, Daniel; Martinez, Esteban A; Brandl, Matthias F; Schindler, Philipp; Rines, Richard; Wang, Shannon X; Chuang, Isaac L; Blatt, Rainer

    2016-03-01

    Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, hardware, quantum error correction, and the algorithmic realization itself need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four "cache qubits" and by implementing generalized arithmetic operations, known as modular multipliers. This algorithm has been realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%. PMID:26941315
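
    For context, the classical number-theoretic shell of Shor's algorithm can be sketched for N = 15 as below; the order-finding step is done here by brute force, whereas in the experiment it is the quantum part performed by the modular multipliers.

        from math import gcd
        from random import randrange

        def find_order(a, n):
            # Smallest r with a**r == 1 (mod n); the quantum step in Shor's algorithm.
            r, x = 1, a % n
            while x != 1:
                x = (x * a) % n
                r += 1
            return r

        def shor_classical(n):
            while True:
                a = randrange(2, n)
                if gcd(a, n) != 1:
                    return gcd(a, n), n // gcd(a, n)   # lucky guess shares a factor
                r = find_order(a, n)
                if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                    p = gcd(pow(a, r // 2) - 1, n)
                    if 1 < p < n:
                        return p, n // p

        print(shor_classical(15))   # (3, 5) or (5, 3)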

  1. Orbital objects detection algorithm using faint streaks

    NASA Astrophysics Data System (ADS)

    Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya

    2016-02-01

    This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities from optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise-ratio (e.g., 3 or more) is required, whereas in our proposed algorithm, the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
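
    A minimal sketch of the skew-and-sum idea behind steps (1)-(3) is given below: shearing the image so a candidate streak direction becomes vertical and then summing columns makes a sub-noise streak accumulate while the background averages out. The synthetic frame and slope are illustrative assumptions.

        import numpy as np

        def streak_response(image, slope):
            # Shear rows by `slope` pixels per row, then sum along columns;
            # a streak matching the slope piles up in a single column.
            sheared = np.empty_like(image, dtype=float)
            for row in range(image.shape[0]):
                sheared[row] = np.roll(image[row], -int(round(slope * row)))
            return sheared.sum(axis=0)

        # Synthetic frame: a faint diagonal streak (SNR < 1 per pixel) in noise.
        rng = np.random.default_rng(0)
        img = rng.normal(0.0, 1.0, (200, 200))
        for row in range(200):
            img[row, 50 + int(round(0.5 * row))] += 0.5

        profile = streak_response(img, 0.5)
        col = profile.argmax()
        snr = (profile[col] - profile.mean()) / profile.std()
        print(f"streak found at column {col} with summed SNR {snr:.1f}")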

  2. [Algorithm for treating preoperative anemia].

    PubMed

    Bisbe Vives, E; Basora Macaya, M

    2015-06-01

    Hemoglobin optimization and treatment of preoperative anemia in surgery with a moderate to high risk of surgical bleeding reduces the rate of transfusions and improves hemoglobin levels at discharge and can also improve postoperative outcomes. To this end, we need to schedule preoperative visits sufficiently in advance to treat the anemia. The treatment algorithm we propose comes with a simple checklist to determine whether we should refer the patient to a specialist or if we can treat the patient during the same visit. With the blood count test and additional tests for iron metabolism, inflammation parameter and glomerular filtration rate, we can decide whether to start the treatment with intravenous iron alone or erythropoietin with or without iron. With significant anemia, a visit after 15 days might be necessary to observe the response and supplement the treatment if required. The hemoglobin objective will depend on the type of surgery and the patient's characteristics.

  3. Energy functions for regularization algorithms

    NASA Technical Reports Server (NTRS)

    Delingette, H.; Hebert, M.; Ikeuchi, K.

    1991-01-01

    Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.

  4. Parallel algorithms for message decomposition

    SciTech Connect

    Teng, S.H.; Wang, B.

    1987-06-01

    The authors consider the deterministic and random parallel complexity (time and processors) of message decoding: an essential problem in communication systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P with 1 <= P <= n/log n), deterministically as well as randomly, on the weakest version of parallel random access machines, in which concurrent reads and concurrent writes to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums problem.

  5. Improved Heat-Stress Algorithm

    NASA Technical Reports Server (NTRS)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

    NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
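
    For reference, the standard outdoor WBGT weighting that such estimators approximate combines the three thermometer readings as follows; the example inputs are illustrative.

        def wbgt_outdoor(t_wet_bulb, t_globe, t_dry_bulb):
            # Standard outdoor weighting: 70% natural wet-bulb, 20% black-globe,
            # 10% dry-bulb temperature (all in deg C).
            return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

        # Hot desert afternoon: 0.7*25 + 0.2*45 + 0.1*38 = 30.3 deg C
        print(wbgt_outdoor(25.0, 45.0, 38.0))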

  6. Online Planning Algorithms for POMDPs

    PubMed Central

    Ross, Stéphane; Pineau, Joelle; Paquet, Sébastien; Chaib-draa, Brahim

    2009-01-01

    Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems due to their complexity. Here, we focus on online approaches that alleviate the computational complexity by computing good local policies at each decision step during the execution. Online algorithms generally consist of a lookahead search to find the best action to execute at each time step in an environment. Our objectives here are to survey the various existing online POMDP methods, analyze their properties and discuss their advantages and disadvantages; and to thoroughly evaluate these online approaches in different environments under various metrics (return, error bound reduction, lower bound improvement). Our experimental results indicate that state-of-the-art online heuristic search methods can handle large POMDP domains efficiently. PMID:19777080

  7. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute cycle of traditional processors and possibly exploiting a greater level of parallelism, achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming; using a higher level of abstraction and a high-level synthesis compiler can reduce implementation time. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tools.

  8. Evolutionary Algorithm for Optimal Vaccination Scheme

    NASA Astrophysics Data System (ADS)

    Parousis-Orthodoxou, K. J.; Vlachos, D. S.

    2014-03-01

    The following work uses the dynamic capabilities of an evolutionary algorithm to obtain an optimal immunization strategy in a user-specified network. The produced algorithm uses a basic genetic algorithm with crossover and mutation techniques to locate certain nodes in the input network. These nodes are immunized in an SIR epidemic spreading process, and the performance of each immunization scheme is evaluated by the level of containment it provides against the spreading of the disease.

  9. An Intrusion Detection Algorithm Based On NFPA

    NASA Astrophysics Data System (ADS)

    Anming, Zhong

    A process-oriented intrusion detection algorithm based on a Probabilistic Automaton with No Final probabilities (NFPA) is introduced, with the system call sequences of processes used as the source data. By using information in the system call sequences of normal processes and anomalous processes, anomaly detection and misuse detection are efficiently combined. Experiments show better performance of our algorithm compared to the classical algorithms in this field.

  10. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
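
    A hedged sketch of block-wise Chebyshev compression in NumPy is given below; the block length of 64 samples and polynomial degree 7 (an 8:1 compression) are illustrative choices, not the flight parameters.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def compress(series, block=64, degree=7):
            # Fit a Chebyshev series on each fitting interval, keeping only
            # (degree + 1) coefficients per block of samples.
            coeffs = []
            for i in range(0, len(series), block):
                y = series[i:i + block]
                x = np.linspace(-1, 1, len(y))   # map the interval onto [-1, 1]
                coeffs.append(C.chebfit(x, y, degree))
            return coeffs

        def decompress(coeffs, block=64):
            # Evaluate each fitted series on the same sample grid.
            x = np.linspace(-1, 1, block)
            return np.concatenate([C.chebval(x, c) for c in coeffs])

        t = np.linspace(0, 4 * np.pi, 512)
        signal = np.sin(t) + 0.05 * t            # smooth telemetry-like stream
        restored = decompress(compress(signal))
        print("8:1 compression, max error:", np.max(np.abs(restored - signal)))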

  11. Testing block subdivision algorithms on block designs

    NASA Astrophysics Data System (ADS)

    Wiseman, Natalie; Patterson, Zachary

    2016-01-01

    Integrated land use-transportation models predict future transportation demand, taking into account how households and firms arrange themselves partly as a function of the transportation system. Recent integrated models require parcels as inputs and produce household and employment predictions at the parcel scale. Block subdivision algorithms automatically generate parcel patterns within blocks. Block subdivision algorithms are evaluated by generating parcels and comparing them to those in a parcel database. Three block subdivision algorithms are evaluated on how closely they reproduce parcels of different block types found in a parcel database from Montreal, Canada. While the authors who developed each of the algorithms have evaluated them, each used their own metrics and block types, which makes it difficult to compare the algorithms' strengths and weaknesses. The contribution of this paper is in resolving this difficulty with the aim of finding a better algorithm suited to subdividing each block type. The proposed hypothesis is that, given the different approaches that block subdivision algorithms take, it is likely that different algorithms are better adapted to subdividing different block types. To test this, a standardized block type classification is used that consists of mutually exclusive and comprehensive categories. A statistical method is used for finding a better algorithm and the probability it will perform well for a given block type. Results suggest the oriented bounding box algorithm performs better for warped non-uniform sites, as well as gridiron and fragmented uniform sites. It also produces more similar parcel areas and widths. The Generalized Parcel Divider 1 algorithm performs better for gridiron non-uniform sites. The Straight Skeleton algorithm performs better for loop and lollipop networks as well as fragmented non-uniform and warped uniform sites. It also produces more similar parcel shapes and patterns.

  12. MRCK_3D contact detection algorithm

    SciTech Connect

    Rougier, Esteban; Munjiza, Antonio

    2010-01-01

    Large-scale Combined Finite-Discrete Element Method (FEM-DEM) and Discrete Element Method (DEM) simulations involving contact of a large number of separate bodies need an efficient, robust and flexible contact detection algorithm. In this work the MRCK-3D search algorithm is outlined and its main CPU performances are evaluated. One of the most important aspects of this newly developed search algorithm is that it is applicable to systems consisting of many bodies of different shapes and sizes.

  13. Frontal optimization algorithms for multiprocessor computers

    SciTech Connect

    Sergienko, I.V.; Gulyanitskii, L.F.

    1981-11-01

    The authors describe one of the approaches to the construction of locally optimal optimization algorithms on multiprocessor computers. Algorithms of this type, called frontal, have been realized previously on single-processor computers, although this configuration does not fully exploit the specific features of their computational scheme. Experience with a number of practical discrete optimization problems confirms that the frontal algorithms are highly successful even with single-processor computers. 9 references.

  14. Robustness of Tree Extraction Algorithms from LIDAR

    NASA Astrophysics Data System (ADS)

    Dumitru, M.; Strimbu, B. M.

    2015-12-01

    Forest inventory faces a new era as unmanned aerial systems (UAS) have increased the precision of measurements while reducing the field effort and price of data acquisition. A large number of algorithms were developed to identify various forest attributes from UAS data. The objective of the present research is to assess the robustness of two types of tree identification algorithms when UAS data are combined with digital elevation models (DEM). The algorithms use as input a photogrammetric point cloud, which is subsequently rasterized. The first type of algorithm associates a tree crown with an inverted watershed (subsequently referred to as watershed-based), while the second type is based on the simultaneous representation of a tree crown as an individual entity and its relation with neighboring crowns (subsequently referred to as simultaneous representation). A DJI platform equipped with a SONY a5100 was used to acquire images over an area in central Louisiana. The images were processed with Pix4D, and a photogrammetric point cloud with 50 points/m2 was attained. The DEM was obtained from a flight executed in 2013, which also supplied a LIDAR point cloud with 30 points/m2. The algorithms were tested on two plantations with different species and crown class complexities: one homogeneous (i.e., a mature loblolly pine plantation) and one heterogeneous (i.e., an unmanaged uneven-aged stand with mixed pine-hardwood species). Tree identification on the photogrammetric point cloud revealed that the simultaneous representation algorithm outperforms the watershed algorithm, irrespective of stand complexity. The watershed algorithm exhibits robustness to its parameters, but its results were worse than those of the majority of parameter sets used by the simultaneous representation algorithm. The simultaneous representation algorithm is a better alternative to the watershed algorithm even when parameters are not accurately estimated. Similar results were obtained when the two algorithms were run on the LIDAR point cloud.
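
    A minimal sketch of the inverted-watershed idea, assuming an already rasterized canopy height model and illustrative parameter values (the paper's exact implementation and parameters are not reproduced):

      import numpy as np
      from scipy import ndimage
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def crowns_from_chm(chm, min_height=2.0, min_distance=5):
          smoothed = ndimage.gaussian_filter(chm, sigma=1.0)   # suppress noise
          mask = smoothed > min_height                         # ignore ground
          tops = peak_local_max(smoothed, min_distance=min_distance, labels=mask)
          markers = np.zeros_like(chm, dtype=int)
          markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)
          # Inverting the CHM turns each tree crown into a catchment basin,
          # so the watershed transform delineates one segment per treetop.
          return watershed(-smoothed, markers, mask=mask)

      chm = np.random.rand(100, 100) * 20                      # stand-in raster
      labels = crowns_from_chm(chm)
      print("trees found:", labels.max())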

  15. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested-loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary-dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the newly introduced ZERO-ONE-INFINITE property. Using this classification, the first complete set of necessary and sufficient conditions for the correct transformation of a nested-loop algorithm onto a given systolic array of arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  16. Streamwise Upwind, Moving-Grid Flow Algorithm

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.; Guruswamy, Guru P.; Obayashi, Shigeru

    1992-01-01

    Extension to moving grids enables computation of transonic flows about moving bodies. Algorithm computes unsteady transonic flow on basis of nondimensionalized thin-layer Navier-Stokes equations in conservation-law form. Solves equations by use of computational grid based on curvilinear coordinates conforming to, and moving with, surface(s) of solid body or bodies in flow field. Simulates such complicated phenomena as transonic flow (including shock waves) about oscillating wing. Algorithm developed by extending prior streamwise upwind algorithm solving equations on fixed curvilinear grid described in "Streamwise Algorithm for Simulation of Flow" (ARC-12718).

  17. Compression algorithm for multideterminant wave functions.

    PubMed

    Weerasinghe, Gihan L; Ríos, Pablo López; Needs, Richard J

    2014-02-01

    A compression algorithm is introduced for multideterminant wave functions which can greatly reduce the number of determinants that need to be evaluated in quantum Monte Carlo calculations. We have devised an algorithm with three levels of compression, the least costly of which yields excellent results in polynomial time. We demonstrate the usefulness of the compression algorithm for evaluating multideterminant wave functions in quantum Monte Carlo calculations, whose computational cost is reduced by factors of between about 2 and over 25 for the examples studied. We have found evidence of sublinear scaling of quantum Monte Carlo calculations with the number of determinants when the compression algorithm is used.

  18. Java implementation of Class Association Rule algorithms

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper, UCRL-JRNL-232466-DRAFT, and will be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software will be in genome analysis; however, it could be applied more generally.

  19. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

    The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
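
    A small sketch of the B-spline command representation, with placeholder knots and coefficients: the NLP solver would treat the coefficient vector as its decision variables and reshape the steering profile by adjusting it:

      import numpy as np
      from scipy.interpolate import BSpline

      k = 3                                             # cubic spline
      # Clamped knot vector over a 120-second ascent segment (placeholder values).
      knots = np.r_[[0.0] * (k + 1), 30.0, 60.0, 90.0, [120.0] * (k + 1)]
      # Seven coefficients (deg): these are the NLP decision variables.
      coeffs = np.array([0.0, 5.0, 12.0, 15.0, 10.0, 6.0, 4.0])
      pitch_cmd = BSpline(knots, coeffs, k)

      t = np.linspace(0.0, 120.0, 5)
      print(pitch_cmd(t))   # steering command profile sampled along the ascent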

  20. Monte Carlo algorithm for free energy calculation.

    PubMed

    Bi, Sheng; Tong, Ning-Hua

    2015-07-01

    We propose a Monte Carlo algorithm for the free energy calculation based on configuration space sampling. An upward or downward temperature scan can be used to produce F(T). We implement this algorithm for the Ising model on a square lattice and triangular lattice. Comparison with the exact free energy shows an excellent agreement. We analyze the properties of this algorithm and compare it with the Wang-Landau algorithm, which samples in energy space. This method is applicable to general classical statistical models. The possibility of extending it to quantum systems is discussed.
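
    For context, a standard Metropolis sampler for the square-lattice Ising model, the test system used in the paper; the paper's specific configuration-space free-energy estimator and temperature scan are not reproduced here:

      import numpy as np

      def metropolis_ising(L=16, T=2.5, sweeps=200, seed=0):
          rng = np.random.default_rng(seed)
          s = rng.choice([-1, 1], size=(L, L))
          for _ in range(sweeps * L * L):
              i, j = rng.integers(0, L, size=2)
              nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                    + s[i, (j + 1) % L] + s[i, (j - 1) % L])
              dE = 2.0 * s[i, j] * nb          # energy change of a spin flip
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  s[i, j] *= -1                # Metropolis acceptance rule
          return s

      spins = metropolis_ising()
      print("magnetization per site:", spins.mean())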

  1. Algorithm to search for genomic rearrangements

    NASA Astrophysics Data System (ADS)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2013-10-01

    The aim of this article is to discuss the issue of comparing nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements have been described: the Smith-Waterman algorithm, as well as a new method of searching for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program in client-server architecture was developed. The algorithms' properties were examined on the Escherichia coli and Arabidopsis thaliana genomes, and the tools are prepared to compare the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.
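
    A self-contained version of the classic Knuth-Morris-Pratt search used as a component of the marker-based method, finding all occurrences of a marker in a nucleotide sequence in O(n + m) time:

      def kmp_search(text, pattern):
          # Failure table: longest proper prefix that is also a suffix,
          # per pattern position.
          fail = [0] * len(pattern)
          k = 0
          for i in range(1, len(pattern)):
              while k and pattern[i] != pattern[k]:
                  k = fail[k - 1]
              if pattern[i] == pattern[k]:
                  k += 1
              fail[i] = k
          # Scan the text, reusing the failure table on mismatches.
          hits, k = [], 0
          for i, c in enumerate(text):
              while k and c != pattern[k]:
                  k = fail[k - 1]
              if c == pattern[k]:
                  k += 1
              if k == len(pattern):
                  hits.append(i - k + 1)
                  k = fail[k - 1]
          return hits

      print(kmp_search("ACGTACGTGACGT", "ACGT"))   # [0, 4, 9]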

  2. A simple greedy algorithm for reconstructing pedigrees.

    PubMed

    Cowell, Robert G

    2013-02-01

    This paper introduces a simple greedy algorithm for searching for high likelihood pedigrees using micro-satellite (STR) genotype information on a complete sample of related individuals. The core idea behind the algorithm is not new, but it is believed that putting it into a greedy search setting, and specifically the application to pedigree learning, is novel. The algorithm does not require age or sex information, but this information can be incorporated if desired. The algorithm is applied to human and non-human genetic data and in a simulation study. PMID:23164633

  3. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320
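
    For comparison, a sketch of the classic Nosé-Hoover equations the abstract alludes to, applied to the same one-dimensional harmonic oscillator. This is not the paper's contact density dynamics; plain Nosé-Hoover is known to have ergodicity problems on this system, which is part of the motivation for such alternatives:

      import numpy as np

      def nose_hoover(T=1.0, Q=1.0, dt=1e-3, steps=200_000):
          x, p, zeta = 1.0, 0.0, 0.0
          xs = np.empty(steps)
          for n in range(steps):
              # Euler step of dx/dt = p, dp/dt = -x - zeta*p,
              # dzeta/dt = (p**2 - T) / Q.
              x, p, zeta = (x + dt * p,
                            p + dt * (-x - zeta * p),
                            zeta + dt * (p * p - T) / Q)
              xs[n] = x
          return xs

      xs = nose_hoover()
      # For a truly canonical ensemble <x^2> would equal T (unit mass and
      # frequency); deviations here reflect the non-ergodicity noted above.
      print("<x^2> =", xs.var(), "target T =", 1.0)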

  4. Generation of attributes for learning algorithms

    SciTech Connect

    Hu, Yuh-Jyh; Kibler, D.

    1996-12-31

    Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results that demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.

  5. Java implementation of Class Association Rule algorithms

    SciTech Connect

    Tamura, Makio

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper, UCRL-JRNL-232466-DRAFT, and will be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software will be in genome analysis; however, it could be applied more generally.

  6. Distilling the Verification Process for Prognostics Algorithms

    NASA Technical Reports Server (NTRS)

    Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai

    2013-01-01

    The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.

  7. Automatic control algorithm effects on energy production

    NASA Technical Reports Server (NTRS)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
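
    A toy sketch of the effect being modeled, with an assumed synthetic wind series, toy power curve, and assumed start/stop thresholds (none of which correspond to the Sandia 17-m VAWT data):

      import random

      def energy(wind, v_start=5.0, v_stop=4.0, rated=100.0):
          running, total = False, 0.0
          for v in wind:
              if not running and v >= v_start:
                  running = True           # start-up threshold reached
              elif running and v < v_stop:
                  running = False          # shut down below the stop threshold
              if running:
                  total += min(rated, max(0.0, 8.0 * (v - 3.0)))  # toy power curve
          return total

      random.seed(1)
      wind = [6.0 + random.gauss(0, 2) for _ in range(10_000)]
      # A poorly chosen start threshold can noticeably reduce captured energy:
      for v_start in (4.5, 5.5, 7.0):
          print("v_start =", v_start, "energy =", round(energy(wind, v_start=v_start)))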

  8. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  9. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  10. A parallel algorithm for global routing

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall J.; Banerjee, Prithviraj

    1990-01-01

    A Parallel Hierarchical algorithm for Global Routing (PHIGURE) is presented. The router is based on the work of Burstein and Pelavin, but has many extensions for general global routing and parallel execution. Main features of the algorithm include structured hierarchical decomposition into separate independent tasks which are suitable for parallel execution and adaptive simplex solution for adding feedthroughs and adjusting channel heights for row-based layout. Alternative decomposition methods and the various levels of parallelism available in the algorithm are examined closely. The algorithm is described and results are presented for a shared-memory multiprocessor implementation.

  11. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, for 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current-generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for poststack and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth-migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations that depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM-series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
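
    A toy sketch of the flexi-depth idea, with all sizes assumed for illustration: the number of depth iterations follows from how many depth steps' worth of traveltime tables fit in node memory:

      def plan_depth_iterations(n_depth_steps, bytes_per_depth_step,
                                node_memory_bytes):
          # How many depth steps can be migrated per iteration on one node.
          steps_per_iteration = max(1, node_memory_bytes // bytes_per_depth_step)
          slabs, top = [], 0
          while top < n_depth_steps:
              slabs.append((top, min(top + steps_per_iteration, n_depth_steps)))
              top += steps_per_iteration
          return slabs

      # e.g. 2000 depth steps, 3 GiB of tables per step, 64 GiB usable per node:
      slabs = plan_depth_iterations(2000, 3 * 2**30, 64 * 2**30)
      for z0, z1 in slabs:
          pass  # a hypothetical migrate(data, depth_range=(z0, z1)) would run here
      print(len(slabs), "depth iterations")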

  12. Performance study of a new time-delay estimation algorithm in ultrasonic echo signals and ultrasound elastography.

    PubMed

    Shaswary, Elyas; Xu, Yuan; Tavakkoli, Jahan

    2016-07-01

    Time-delay estimation has countless applications in ultrasound medical imaging. Previously, we proposed a new time-delay estimation algorithm based on the summation of the sign function to compute the time-delay estimate (Shaswary et al., 2015). We reported that the proposed algorithm performs similarly to the normalized cross-correlation (NCC) and sum squared differences (SSD) algorithms, even though it is significantly more computationally efficient. In this paper, we study the performance of the proposed algorithm using statistical analysis and image quality analysis in ultrasound elastography imaging. Field II simulation software was used for the generation of ultrasound radio frequency (RF) echo signals for the statistical analysis, and a clinical ultrasound scanner (Sonix® RP scanner, Ultrasonix Medical Corp., Richmond, BC, Canada) was used to scan a commercial ultrasound elastography tissue-mimicking phantom for the image quality analysis. The statistical analysis results confirmed that, overall, the proposed algorithm performs similarly to the NCC and SSD algorithms. The image quality analysis results indicated that the proposed algorithm produces strain images with marginally higher signal-to-noise and contrast-to-noise ratios than the NCC and SSD algorithms. PMID:27010697
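
    A minimal sketch of the normalized cross-correlation baseline against which the proposed estimator is compared (window handling and parameters are simplified assumptions):

      import numpy as np

      def ncc_delay(a, b, max_lag=50):
          """Return the integer lag maximizing the NCC between signals a and b."""
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          def ncc(lag):
              if lag >= 0:
                  x, y = a[lag:], b[:len(b) - lag]
              else:
                  x, y = a[:len(a) + lag], b[-lag:]
              n = min(len(x), len(y))
              return np.dot(x[:n], y[:n]) / n
          return max(range(-max_lag, max_lag + 1), key=ncc)

      t = np.linspace(0, 1, 1000)
      sig = np.sin(2 * np.pi * 12 * t) * np.exp(-3 * t)   # toy RF-like echo
      delayed = np.roll(sig, 7) + 0.05 * np.random.default_rng(0).standard_normal(1000)
      print("estimated delay:", ncc_delay(delayed, sig))   # about 7 samples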

  13. An algorithmically optimized combinatorial library screened by digital imaging spectroscopy.

    PubMed

    Goldman, E R; Youvan, D C

    1992-12-01

    Combinatorial cassettes based on a phylogenetic "target set" were used to simultaneously mutagenize seven amino acid residues on one face of a transmembrane alpha helix comprising a bacteriochlorophyll binding site in the light harvesting II antenna of Rhodobacter capsulatus. This pigmented protein provides a model system for developing complex mutagenesis schemes, because simple absorption spectroscopy can be used to assay protein expression, structure, and function. Colony screening by Digital Imaging Spectroscopy showed that 6% of the optimized library bound bacteriochlorophyll in two distinct spectroscopic classes. This is approximately 200 times the throughput (ca. 0.03%) of conventional combinatorial cassette mutagenesis using [NN(G/C)]. "Doping" algorithms evaluated in this model system are generally applicable and should enable simultaneous mutagenesis at more positions in a protein than currently possible, or alternatively, decrease the screening size of combinatorial libraries.

  14. Radiance-ratio algorithm wavelengths for remote oceanic chlorophyll determination

    NASA Technical Reports Server (NTRS)

    Hoge, Frank E.; Wright, C. Wayne; Swift, Robert N.

    1987-01-01

    Two-band radiance-ratio in-water algorithms in the visible spectrum have been evaluated for remote oceanic chlorophyll determination. Airborne active-passive (laser-solar) data from coastal, shelf-slope, and blue-water regions were used to generate two-dimensional chlorophyll-fluorescence and radiance-ratio statistical correlation matrices containing all possible two-band ratio combinations from the thirty-two available contiguous 11.25-nm passive bands. The principal finding was that closely spaced radiance-ratio bands yield chlorophyll estimates which are highly correlated with laser-induced chlorophyll fluorescence within several distinct regions of the ocean color spectrum. Band combinations in the yellow and orange-red spectral regions showed considerable promise for satisfactory chlorophyll pigment estimation in near-coastal Case II waters. Pigment recovery in Case I waters was best accomplished using blue-green radiance ratios in the 490/500-nm region.
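
    A hedged sketch of the generic power-law form such two-band ratio algorithms commonly take; the coefficients and band pair below are illustrative assumptions, not values from this study:

      import numpy as np

      def chl_band_ratio(r_blue, r_green, a=0.5, b=-2.0):
          """Toy chlorophyll estimate (mg m^-3) from a blue/green radiance ratio,
          of the common form chl = a * (R_blue / R_green) ** b; a, b are
          illustrative placeholders that would be fitted to in situ data."""
          return a * (np.asarray(r_blue) / np.asarray(r_green)) ** b

      # Higher blue/green ratio (clearer water) maps to lower chlorophyll:
      print(chl_band_ratio([0.010, 0.006], [0.005, 0.006]))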

  15. BTH:an Efficient Parsing Algorithm for Keyword Spotting

    NASA Astrophysics Data System (ADS)

    Yano, Takehide; Sasajima, Munehiko; Kono, Yasuyuki

    In this paper, we propose BTH, a parsing algorithm which is able to efficiently parse a keyword lattice that contains a large number of false candidates. In BTH, the grammar is written in template form and then compiled into a hash table. BTH analyzes the lattice without unfolding it into keyword sequences, by propagating acceptable templates among the linked keywords and filtering through the hash table at each keyword. It has a time bound proportional to n² (where n is the number of keywords in the lattice), even though the number of false candidates increases exponentially. Simulation results show that BTH can parse a lattice containing over 100 billion false candidates within 0.35 s, with a grammar corresponding to 2 million templates, on a notebook PC (Pentium II, 266 MHz).

  16. Utilizing clouds for Belle II

    NASA Astrophysics Data System (ADS)

    Sobie, R. J.

    2015-12-01

    This paper describes the use of cloud computing resources for the Belle II experiment. A number of different methods are used to exploit the private and opportunistic clouds. Clouds are making significant contributions to the generation of Belle II MC data samples and it is expected that their impact will continue to grow over the coming years.

  17. PARIS II: DESIGNING GREENER SOLVENTS

    EPA Science Inventory

    PARIS II (the program for assisting the replacement of industrial solvents, version II), developed at the USEPA, is a unique software tool that can be used for customizing the design of replacement solvents and for the formulation of new solvents. This program helps users avoid ...

  18. [Modified Class II tunnel preparation].

    PubMed

    Rimondini, L; Baroni, C

    1991-05-15

    Tunnel preparations for the restoration of Class II carious lesions in primary molars preserve the marginal ridge and minimize the sacrifice of healthy tooth structure. Materials with improved bonding to tooth structure and increased potential for fluoride release allow Class II restorations without "extension for prevention". PMID:1864420

  19. Technology II: Implementation Planning Guide.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Office of the Chancellor.

    The California Community Colleges (CCC) are facing a number of challenges, including the explosive use of the Internet, the digital divide, the need for integrating technology into teaching and learning, the impact of Tidal Wave II, and the need to ensure that technology is accessible to persons with disabilities. The CCCs' Technology II Strategic…

  20. ACRIM II Data and Information

    Atmospheric Science Data Center

    2015-12-30

    ACRIM II Data and Information: Active Cavity Radiometer Irradiance Monitor (ACRIM) II total solar irradiance data, available through the ASDC Order Tool and FTP (Data Pool) access. Related resources include the ACRIM II instrument page, the ACRIM III data sets, and the readme files.