Science.gov

Sample records for hard constraint algorithm

  1. Hard Constraints in Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2008-01-01

    This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
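
    A minimal sketch of the set-sizing idea described above, under simplifying assumptions: a made-up scalar requirement g(p) <= 0, a hyper-rectangular uncertainty model centred at a nominal parameter, and a sampled (rather than rigorous) worst-case check. The names g, p_nominal, and half_width are illustrative and not from the paper; the paper's method yields analytically verifiable assessments, whereas this toy version only estimates the critical set size by bisection.

    ```python
    import numpy as np

    # Illustrative design requirement: g(p) <= 0 must hold for every parameter
    # realization p in the uncertainty set (a "hard" constraint).
    def g(p):
        return p[0] ** 2 + 0.5 * p[1] - 1.0

    p_nominal = np.array([0.3, 0.4])      # nominal parameter value
    half_width = np.array([0.2, 0.3])     # hyper-rectangle half-widths at unit scale

    def worst_case_violation(scale, n_samples=20000, seed=0):
        """Approximate the maximum of g over the scaled hyper-rectangle by sampling."""
        rng = np.random.default_rng(seed)
        u = rng.uniform(-1.0, 1.0, size=(n_samples, p_nominal.size))
        samples = p_nominal + scale * half_width * u
        return max(g(p) for p in samples)

    # Bisection on the scaling factor: largest scaling with no sampled violation.
    lo, hi = 0.0, 4.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if worst_case_violation(mid) <= 0.0:
            lo = mid          # requirement still met, try a larger set
        else:
            hi = mid          # requirement violated, shrink the set
    print(f"estimated critical scaling of the uncertainty set: {lo:.3f}")
    print("hard constraint holds for the given (unit-scale) model:", lo >= 1.0)
    ```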

  2. Gaining Algorithmic Insight through Simplifying Constraints.

    ERIC Educational Resources Information Center

    Ginat, David

    2002-01-01

    Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic of simplifying constraints, which involves simplifying a given problem to one in which constraints are imposed on the input data. Presents three examples involving…

  3. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.

  4. Easy and hard testbeds for real-time search algorithms

    SciTech Connect

    Koenig, S.; Simmons, R.G.

    1996-12-31

    Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a superset of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.

  5. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    PubMed Central

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645
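
    As a point of reference for the attraction/randomization mechanics mentioned above, here is a bare-bones sketch of the canonical firefly update on a toy objective. It is not the paper's modified FA and omits the cardinality, budget, and entropy constraints; the objective and the parameters beta0, gamma, and alpha are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def objective(w):
        # Toy surrogate to minimize; a real CCMV model would use expected returns,
        # the covariance matrix, and cardinality/entropy terms.
        return np.sum((w - 0.2) ** 2)

    n_fireflies, dim = 15, 5
    beta0, gamma, alpha = 1.0, 1.0, 0.05
    X = rng.random((n_fireflies, dim))          # candidate portfolio weights
    F = np.array([objective(x) for x in X])

    for _ in range(100):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if F[j] < F[i]:                 # firefly j is "brighter" (better)
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    X[i] = np.clip(X[i], 1e-9, 1.0)
                    X[i] /= X[i].sum()          # keep the weights on the simplex
                    F[i] = objective(X[i])

    print("best objective found:", F.min())
    ```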

  6. Time-reversible molecular dynamics algorithms with bond constraints

    NASA Astrophysics Data System (ADS)

    Toxvaerd, Søren; Heilmann, Ole J.; Ingebrigtsen, Trond; Schrøder, Thomas B.; Dyre, Jeppe C.

    2009-08-01

    Time-reversible molecular dynamics algorithms with bond constraints are derived. The algorithms are stable with and without a thermostat and in double precision as well as in single-precision arithmetic. Time reversibility is achieved by applying a central-difference expression for the velocities in the expression for Gauss' principle of least constraint. The imposed time symmetry results in a quadratic expression for the Lagrange multiplier. For a system of complex molecules with connected constraints the corresponding set of coupled quadratic equations is easily solved by a consecutive iteration scheme. The algorithms were tested on two models: one is a dumbbell model of toluene; the other consists of molecules with four connected constraints forming a triangle and a branch point of constraints. The equilibrium particle distributions and the mean-square particle displacements for the dumbbell model were compared to the corresponding functions obtained by GROMACS. The agreement is perfect within statistical error.
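
    The key point above is that imposing a bond constraint leads to a quadratic equation for the Lagrange multiplier that can be solved directly. The sketch below is not the paper's time-reversible central-difference scheme; it is a generic, SHAKE-style single-bond illustration in which the constraint on the corrected positions yields a quadratic in the multiplier, solved analytically. The masses, positions, and bond length are made up.

    ```python
    import numpy as np

    def constrain_bond(r1_old, r2_old, r1_new, r2_new, m1, m2, d):
        """Correct unconstrained new positions so that |r1 - r2| = d.

        The correction acts along the old bond vector; substituting it into the
        constraint gives a quadratic a*lam**2 + b*lam + c = 0 in the multiplier lam.
        """
        s_old = r1_old - r2_old                  # old bond vector (correction direction)
        s_new = r1_new - r2_new                  # unconstrained new bond vector
        mu = 1.0 / m1 + 1.0 / m2
        a = mu ** 2 * np.dot(s_old, s_old)
        b = 2.0 * mu * np.dot(s_new, s_old)
        c = np.dot(s_new, s_new) - d ** 2
        lam = (-b + np.sqrt(b ** 2 - 4.0 * a * c)) / (2.0 * a)   # smaller-magnitude root for this geometry
        return r1_new + lam / m1 * s_old, r2_new - lam / m2 * s_old

    # Toy dumbbell: the old positions satisfy the bond length, the new ones drifted.
    r1o, r2o = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    r1n, r2n = np.array([0.02, 0.01, 0.0]), np.array([1.08, -0.01, 0.0])
    r1c, r2c = constrain_bond(r1o, r2o, r1n, r2n, m1=1.0, m2=1.0, d=1.0)
    print("constrained bond length:", np.linalg.norm(r1c - r2c))   # ~1.0
    ```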

  7. Effective hybrid teaching-learning-based optimization algorithm for balancing two-sided assembly lines with multiple constraints

    NASA Astrophysics Data System (ADS)

    Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun

    2015-09-01

    Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, multiple constraints existing in real applications are less studied, especially when one task is involved with several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation in order to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
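
    To make the random-keys decoding mentioned above concrete, here is a minimal sketch: a continuous vector of the kind TLBO manipulates is sorted into a task permutation, and a simple greedy repair enforces an illustrative precedence relation. The repair is not the paper's constraint-handling mechanism (which also treats direction, zoning, synchronism, and positional constraints), and the task data are made up.

    ```python
    import numpy as np

    def decode_random_keys(keys):
        """Map a continuous random-key vector to a task permutation (0-based)."""
        return list(np.argsort(keys))

    # Illustrative precedence relation: task 0 before task 3, task 1 before task 4.
    precedence = {3: {0}, 4: {1}}

    def repair_precedence(perm, precedence):
        """Greedy repair: place a task only once all of its predecessors are placed."""
        placed, result, pending = set(), [], list(perm)
        while pending:
            for t in pending:
                if precedence.get(t, set()) <= placed:
                    result.append(t)
                    placed.add(t)
                    pending.remove(t)
                    break
        return result

    keys = np.array([0.73, 0.10, 0.55, 0.05, 0.91])   # e.g. one TLBO "learner"
    perm = decode_random_keys(keys)                   # -> [3, 1, 2, 0, 4]
    print(repair_precedence(perm, precedence))        # -> [1, 2, 0, 3, 4]
    ```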

  8. A synthetic dataset for evaluating soft and hard fusion algorithms

    NASA Astrophysics Data System (ADS)

    Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey

    2011-06-01

    There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.

  9. Constraint identification and algorithm stabilization for degenerate nonlinear programs.

    SciTech Connect

    Wright, S. J.; Mathematics and Computer Science

    2003-01-01

    In the vicinity of a solution of a nonlinear programming problem at which both strict complementarity and linear independence of the active constraints may fail to hold, we describe a technique for distinguishing weakly active from strongly active constraints. We show that this information can be used to modify the sequential quadratic programming algorithm so that it exhibits superlinear convergence to the solution under assumptions weaker than those made in previous analyses.

  10. An active set algorithm for nonlinear optimization with polyhedral constraints

    NASA Astrophysics Data System (ADS)

    Hager, William W.; Zhang, Hongchao

    2016-08-01

    A polyhedral active set algorithm PASA is developed for solving a nonlinear optimization problem whose feasible set is a polyhedron. Phase one of the algorithm is the gradient projection method, while phase two is any algorithm for solving a linearly constrained optimization problem. Rules are provided for branching between the two phases. Global convergence to a stationary point is established, while asymptotically PASA performs only phase two when either a nondegeneracy assumption holds, or the active constraints are linearly independent and a strong second-order sufficient optimality condition holds.
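
    Phase one of PASA is the gradient projection method; the sketch below applies projected gradient steps to a toy quadratic over simple bound constraints (a special case of a polyhedron, where the projection is just a componentwise clip). The matrix, bounds, and step rule are made up; this is a toy analog, not the PASA implementation.

    ```python
    import numpy as np

    # Minimize f(x) = 0.5 x'Qx - b'x subject to lo <= x <= hi.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    lo, hi = np.zeros(2), 0.4 * np.ones(2)

    def grad(x):
        return Q @ x - b

    def project(x):
        return np.clip(x, lo, hi)          # projection onto a box is a clip

    x = np.array([1.0, -1.0])
    step = 1.0 / np.linalg.norm(Q, 2)      # safe step size for this quadratic
    for _ in range(200):
        x = project(x - step * grad(x))

    print("approximate stationary point:", x)
    print("active bounds:", (x <= lo + 1e-9) | (x >= hi - 1e-9))
    ```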

  11. An iterative hard thresholding algorithm for CS MRI

    NASA Astrophysics Data System (ADS)

    Rajani, S. R.; Reddy, M. Ramasubba

    2012-02-01

    The recently proposed compressed sensing theory equips us with methods to recover, exactly or approximately, high resolution images from very few encoded measurements of the scene. The traditionally ill-posed problem of MRI image recovery from heavily under-sampled κ-space data can thus be solved using CS theory. Differing from the soft thresholding methods that have been used earlier for CS MRI, we suggest a simple iterative hard thresholding algorithm which efficiently recovers diagnostic quality MRI images from highly incomplete κ-space measurements. The new multi-scale redundant systems, curvelets and contourlets, having high directionality and anisotropy and thus best suited for curved-edge representation, are used in this iterative hard thresholding framework for CS MRI reconstruction and their performance is compared. κ-space under-sampling schemes such as variable density sampling and the more conventional radial sampling are tested at the same sampling rate, and the effect of the encoding scheme on iterative hard thresholding compressed sensing reconstruction is studied.
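
    A minimal iterative hard thresholding sketch on a generic compressed sensing problem: a random Gaussian measurement matrix and a canonically sparse signal stand in for the κ-space sampling and the curvelet/contourlet representations used in the paper. The sizes, sparsity level, and step size are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, s = 256, 100, 8                          # signal length, measurements, sparsity

    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement (encoding) matrix
    y = A @ x_true                                 # under-sampled measurements

    def hard_threshold(v, s):
        """Keep the s largest-magnitude entries of v and zero the rest."""
        out = np.zeros_like(v)
        idx = np.argpartition(np.abs(v), -s)[-s:]
        out[idx] = v[idx]
        return out

    x = np.zeros(n)
    mu = 1.0 / np.linalg.norm(A, 2) ** 2           # conservative gradient step size
    for _ in range(300):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)   # gradient step + hard threshold

    print("relative reconstruction error:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```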

  12. An algorithm for enforcement of contact constraints in quasistatic applications using matrix-free solution algorithms

    SciTech Connect

    Heinstein, M.W.

    1997-10-01

    A contact enforcement algorithm has been developed for matrix-free quasistatic finite element techniques. Matrix-free (iterative) solution algorithms such as nonlinear Conjugate Gradients (CG) and Dynamic Relaxation (DR) are distinctive in that the number of iterations required for convergence is typically of the same order as the number of degrees of freedom of the model. From iteration to iteration the contact normal and tangential forces vary significantly making contact constraint satisfaction tenuous. Furthermore, global determination and enforcement of the contact constraints every iteration could be questioned on the grounds of efficiency. This work addresses this situation by introducing an intermediate iteration for treating the active gap constraint and at the same time exactly (kinematically) enforcing the linearized gap rate constraint for both frictionless and frictional response.

  13. Leaf Sequencing Algorithm Based on MLC Shape Constraint

    NASA Astrophysics Data System (ADS)

    Jing, Jia; Pei, Xi; Wang, Dong; Cao, Ruifen; Lin, Hui

    2012-06-01

    Intensity modulated radiation therapy (IMRT) requires the determination of the appropriate multileaf collimator settings to deliver an intensity map. The purpose of this work was to attempt to regulate the shape between adjacent multileaf collimator apertures by a leaf sequencing algorithm. To qualify and validate this algorithm, the integral test for the segment of the multileaf collimator of ARTS was performed with clinical intensity map experiments. By comparisons and analyses of the total number of monitor units and number of segments with benchmark results, the proposed algorithm performed well while the segment shape constraint produced segments with more compact shapes when delivering the planned intensity maps, which may help to reduce the multileaf collimator's specific effects.

  14. Evolutionary algorithm based structure search for hard ruthenium carbides

    NASA Astrophysics Data System (ADS)

    Harikrishnan, G.; Ajith, K. M.; Chandra, Sharat; Valsakumar, M. C.

    2015-12-01

    An exhaustive structure search employing evolutionary algorithm and density functional theory has been carried out for ruthenium carbides, for the three stoichiometries Ru1C1, Ru2C1 and Ru3C1, yielding five lowest energy structures. These include the structures from the two reported syntheses of ruthenium carbides. Their emergence in the present structure search in stoichiometries, unlike the previously reported ones, is plausible in the light of the high temperature required for their synthesis. The mechanical stability and ductile character of all these systems are established by their elastic constants, and the dynamical stability of three of them by the phonon data. The rhombohedral structure (R-3m) is found to be energetically the most stable one in the Ru1C1 stoichiometry and the hexagonal structure (P-6m2) the most stable in the Ru3C1 stoichiometry. The RuC zinc blende system is a semiconductor with a band gap of 0.618 eV while the other two stable systems are metallic. Employing a semi-empirical model based on the bond strength, the hardness of RuC zinc blende is found to be a significantly large value of ~37 GPa while a fairly large value of ~21 GPa is obtained for the RuC rhombohedral system. The positive formation energies of these systems show that high temperature and possibly high pressure are necessary for their synthesis.

  15. Emissivity range constraints algorithm for multi-wavelength pyrometer (MWP).

    PubMed

    Xing, Jian; Rana, R S; Gu, Weihong

    2016-08-22

    In order to realize rapid and true temperature measurement of high temperature targets by multi-wavelength pyrometer (MWP), a data processing algorithm based on emissivity range constraints, which is unaffected by the unknown emissivity, has been developed. By exploring the relation between emissivity deviation and true temperature through fitting a large number of data from target models with different emissivity distributions, the effective search range of emissivity for each iteration is obtained, so the data processing time is greatly reduced. Simulation and experimental results indicate that the calculation time is less than 0.2 seconds with a 25 K absolute error at a true temperature of 1800 K, and the efficiency is improved by more than 90% compared with the previous algorithm. The method has the advantages of simplicity, rapidity, and suitability for in-line high temperature measurement. PMID:27557198

  16. A multiagent evolutionary algorithm for constraint satisfaction problems.

    PubMed

    Liu, Jing; Zhong, Weicai; Jiao, Licheng

    2006-02-01

    With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queen problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs with n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance when n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queen problems, MAEA-CSPs finds solutions within only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained. PMID:16468566
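
    MAEA-CSPs itself combines agent behaviors with evolution; as a much simpler point of reference for conflict-driven search on the permutation CSP used in the experiments, the sketch below runs the classic min-conflicts heuristic on n-queens. It is not the paper's algorithm, and the board size and step budget are arbitrary.

    ```python
    import random

    def conflicts(cols, row, col):
        """Number of queens attacking square (row, col), excluding the queen in `row`."""
        return sum(
            1 for r, c in enumerate(cols)
            if r != row and (c == col or abs(c - col) == abs(r - row))
        )

    def min_conflicts_nqueens(n, max_steps=100000, seed=0):
        rng = random.Random(seed)
        cols = [rng.randrange(n) for _ in range(n)]        # one queen per row
        for _ in range(max_steps):
            conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
            if not conflicted:
                return cols                                # consistent assignment found
            row = rng.choice(conflicted)
            # move this queen to the column with the fewest conflicts
            cols[row] = min(range(n), key=lambda c: conflicts(cols, row, c))
        return None

    solution = min_conflicts_nqueens(50)
    print("solved:", solution is not None)
    ```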

  17. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility is caused by the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform a chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady state condition is achieved. A case study in a manufacturing SME is used, with minimizing tardiness as the objective function. The algorithm has shown a 25.6% reduction of tardiness, equal to 43.5 hours.

  18. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or

  19. Predicting side-chain conformations of methionine using a hard-sphere model with stereochemical constraints

    NASA Astrophysics Data System (ADS)

    Virrueta, A.; Gaines, J.; O'Hern, C. S.; Regan, L.

    2015-03-01

    Current research in the O'Hern and Regan laboratories focuses on the development of hard-sphere models with stereochemical constraints for protein structure prediction as an alternative to molecular dynamics methods that utilize knowledge-based corrections in their force-fields. Beginning with simple hydrophobic dipeptides like valine, leucine, and isoleucine, we have shown that our model is able to reproduce the side-chain dihedral angle distributions derived from sets of high-resolution protein crystal structures. However, methionine remains an exception - our model yields a chi-3 side-chain dihedral angle distribution that is relatively uniform from 60 to 300 degrees, while the observed distribution displays peaks at 60, 180, and 300 degrees. Our goal is to resolve this discrepancy by considering clashes with neighboring residues, and averaging the reduced distribution of allowable methionine structures taken from a set of crystallized proteins. We will also re-evaluate the electron density maps from which these protein structures are derived to ensure that the methionines and their local environments are correctly modeled. This work will ultimately serve as a tool for computing side-chain entropy and protein stability. A. V. is supported by an NSF Graduate Research Fellowship and a Ford Foundation Fellowship. J. G. is supported by NIH training Grant NIH-5T15LM007056-28.

  20. Approximation algorithms for NEXPTIME-hard periodically specified problems and domino problems

    SciTech Connect

    Marathe, M.V.; Hunt, H.B., III; Stearns, R.E.; Rosenkrantz, D.J.

    1996-02-01

    We study the efficient approximability of two general classes of problems: (1) optimization versions of the domino problems studied in [Ha85, Ha86, vEB83, SB84] and (2) graph and satisfiability problems when specified using various kinds of periodic specifications. Both easiness and hardness results are obtained. Our efficient approximation algorithms and schemes are based on extensions of these ideas. Two properties of the results obtained here are: (1) for the first time, efficient approximation algorithms and schemes have been developed for natural NEXPTIME-complete problems; (2) our results are the first polynomial time approximation algorithms with good performance guarantees for 'hard' problems specified using the various kinds of periodic specifications considered in this paper. Our results significantly extend the results in [HW94, Wa93, MH+94].

  1. Model predictive driving simulator motion cueing algorithm with actuator-based constraints

    NASA Astrophysics Data System (ADS)

    Garrett, Nikhil J. I.; Best, Matthew C.

    2013-08-01

    The simulator motion cueing problem has been considered extensively in the literature; approaches based on linear filtering and optimal control have been presented and shown to perform reasonably well. More recently, model predictive control (MPC) has been considered as a variant of the optimal control approach; MPC is perhaps an obvious candidate for motion cueing due to its ability to deal with constraints, in this case the platform workspace boundary. This paper presents an MPC-based cueing algorithm that, unlike other algorithms, uses the actuator positions and velocities as the constraints. The result is a cueing algorithm that can make better use of the platform workspace whilst ensuring that its bounds are never exceeded. The algorithm is shown to perform well against the classical cueing algorithm and an algorithm previously proposed by the authors, both in simulation and in tests with human drivers.

  2. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.

  3. Parallelized event chain algorithm for dense hard sphere and polymer systems

    SciTech Connect

    Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan

    2015-01-15

    We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.

  4. A fast multigrid algorithm for energy minimization under planar density constraints.

    SciTech Connect

    Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science

    2010-09-07

    The two-dimensional layout optimization problem reinforced by the efficient space utilization demand has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear time multigrid algorithm for solving a correction to this problem. The method is demonstrated in various graph drawing (visualization) instances.

  5. A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem

    SciTech Connect

    Bayardo, R.J. Jr.; Miranker, D.P.

    1996-12-31

    Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.

  6. On-line reentry guidance algorithm with both path and no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Da; Liu, Lei; Wang, Yongji

    2015-12-01

    This study proposes an on-line predictor-corrector reentry guidance algorithm that satisfies path and no-fly zone constraints for hypersonic vehicles with a high lift-to-drag ratio. The proposed guidance algorithm can generate a feasible trajectory at each guidance cycle during the entry flight. In the longitudinal profile, numerical predictor-corrector approaches are used to predict the flight capability from current flight states to expected terminal states and to generate an on-line reference drag acceleration profile. The path constraints on heat rate, aerodynamic load, and dynamic pressure are implemented as a part of the predictor-corrector algorithm. A tracking control law is then designed to track the reference drag acceleration profile. In the lateral profile, a novel guidance algorithm is presented. The velocity azimuth angle error threshold and artificial potential field method are used to reduce heading error and to avoid the no-fly zone. Simulated results for nominal and dispersed cases show that the proposed guidance algorithm not only can avoid the no-fly zone but can also steer a typical entry vehicle along a feasible 3D trajectory that satisfies both terminal and path constraints.

  7. NEW CONSTRAINTS ON THE BLACK HOLE LOW/HARD STATE INNER ACCRETION FLOW WITH NuSTAR

    SciTech Connect

    Miller, J. M.; King, A. L.; Tomsick, J. A.; Boggs, S. E.; Bachetti, M.; Wilkins, D.; Christensen, F. E.; Craig, W. W.; Fabian, A. C.; Kara, E.; Grefenstette, B. W.; Harrison, F. A.; Hailey, C. J.; Stern, D. K; Zhang, W. W.

    2015-01-20

    We report on an observation of the Galactic black hole candidate GRS 1739–278 during its 2014 outburst, obtained with NuSTAR. The source was captured at the peak of a rising "low/hard" state, at a flux of ∼0.3 Crab. A broad, skewed iron line and disk reflection spectrum are revealed. Fits to the sensitive NuSTAR spectra with a number of relativistically blurred disk reflection models yield strong geometrical constraints on the disk and hard X-ray "corona". Two models that explicitly assume a "lamp post" corona find its base to have a vertical height above the black hole of h = 5_{-2}^{+7} GM/c^2 and h = 18 ± 4 GM/c^2 (90% confidence errors); models that do not assume a "lamp post" return emissivity profiles that are broadly consistent with coronae of this size. Given that X-ray microlensing studies of quasars and reverberation lags in Seyferts find similarly compact coronae, observations may now signal that compact coronae are fundamental across the black hole mass scale. All of the models fit to GRS 1739–278 find that the accretion disk extends very close to the black hole—the least stringent constraint is r_in = 5_{-4}^{+3} GM/c^2. Only two of the models deliver meaningful spin constraints, but a = 0.8 ± 0.2 is consistent with all of the fits. Overall, the data provide especially compelling evidence of an association between compact hard X-ray coronae and the base of relativistic radio jets in black holes.

  8. Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas

    NASA Technical Reports Server (NTRS)

    Smith, Barbara M.; Bennett, Sean

    1992-01-01

    A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.

  9. The Successive Projection Algorithm (SPA), an Algorithm with a Spatial Constraint for the Automatic Search of Endmembers in Hyperspectral Data

    PubMed Central

    Zhang, Jinkai; Rivard, Benoit; Rogge, D.M.

    2008-01-01

    Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem, and it assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on the convex geometry and orthogonal projection common to other endmember search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (the AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (as in IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets have spatial continuity in imagery (e.g., bedrock lithologies) and thus a spatial constraint would be beneficial in the endmember search. An additional product of the SPA is data describing the change of the simplex volume ratio between successive iterations during the endmember extraction. It illustrates the influence of a new endmember on the data structure and provides information on the convergence of the algorithm. It can provide a general guideline to constrain the total number of endmembers in a search.
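
    As a bare-bones illustration of successive projection, the sketch below repeatedly selects the pixel spectrum with the largest residual norm and projects its direction out of all remaining spectra. The spatial-adjacency constraint and the spectral-angle test that distinguish the paper's SPA are omitted, and the synthetic scene (endmembers, abundances, noise level) is made up.

    ```python
    import numpy as np

    def successive_projection(X, n_endmembers):
        """X: (n_pixels, n_bands) pixel spectra. Returns indices of selected endmember pixels."""
        residual = X.astype(float).copy()
        selected = []
        for _ in range(n_endmembers):
            norms = np.linalg.norm(residual, axis=1)
            idx = int(np.argmax(norms))                   # most "extreme" remaining pixel
            selected.append(idx)
            v = residual[idx] / norms[idx]
            residual -= np.outer(residual @ v, v)         # project out its direction
        return selected

    # Synthetic scene: 3 endmembers mixed with random abundances plus noise.
    rng = np.random.default_rng(2)
    E = rng.random((3, 50))                               # 3 endmember spectra, 50 bands
    abundances = rng.dirichlet(np.ones(3), size=500)      # 500 mixed pixels
    X = abundances @ E + 0.01 * rng.standard_normal((500, 50))

    print("selected pixel indices:", successive_projection(X, 3))
    ```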

  10. An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints

    PubMed Central

    Sung, Jinmo; Jeong, Bongju

    2014-01-01

    The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158

  11. Precise algorithm to generate random sequential addition of hard hyperspheres at saturation.

    PubMed

    Zhang, G; Torquato, S

    2013-11-01

    The study of the packing of hard hyperspheres in d-dimensional Euclidean space R^{d} has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions, the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces packings that are guaranteed to contain no available space in a large simulation box using finite computational time with heretofore unattained precision and across the widest range of dimensions (2≤d≤8). We have also calculated the packing and covering densities, pair correlation function g_2(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed "decorrelation" principle, and the degree of "hyperuniformity" (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the second moment of inertia of the average
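
    The saturation-precise algorithm described above tracks the remaining available space exactly; for context, the sketch below is only the naive fixed-trial RSA process for hard disks in a two-dimensional periodic box, which stops well short of saturation. The disk radius and trial count are arbitrary.

    ```python
    import numpy as np

    def rsa_disks_2d(box=1.0, radius=0.05, n_trials=20000, seed=0):
        """Naive RSA: propose random centers, reject any that overlap an accepted disk."""
        rng = np.random.default_rng(seed)
        centers = np.empty((0, 2))
        d2 = (2.0 * radius) ** 2                         # squared contact distance
        for _ in range(n_trials):
            p = rng.random(2) * box
            if len(centers):
                delta = np.abs(centers - p)
                delta = np.minimum(delta, box - delta)   # periodic boundary conditions
                if np.min(np.sum(delta ** 2, axis=1)) < d2:
                    continue                             # overlap: reject this trial
            centers = np.vstack([centers, p])
        return centers

    centers = rsa_disks_2d()
    coverage = len(centers) * np.pi * 0.05 ** 2          # fraction of the box covered
    print(f"{len(centers)} disks accepted, covered fraction ≈ {coverage:.3f} "
          "(2D RSA saturates near 0.547)")
    ```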

  12. Precise algorithm to generate random sequential addition of hard hyperspheres at saturation

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Torquato, S.

    2013-11-01

    The study of the packing of hard hyperspheres in d-dimensional Euclidean space Rd has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions, the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces packings that are guaranteed to contain no available space in a large simulation box using finite computational time with heretofore unattained precision and across the widest range of dimensions (2≤d≤8). We have also calculated the packing and covering densities, pair correlation function g2(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed “decorrelation” principle, and the degree of “hyperuniformity” (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the

  13. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    PubMed

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms. PMID:25126605

  14. A Novel Artificial Immune Algorithm for Spatial Clustering with Obstacle Constraint and Its Applications

    PubMed Central

    Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji

    2014-01-01

    An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with innovative obstacle distance measure for spatial clustering under obstacle constraints. Firstly, we present a path searching algorithm to approximate the obstacle distance between two points for dealing with obstacles and facilitators. Taking obstacle distance as similarity metric, we subsequently propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of AICOE algorithm and the classical clustering algorithms. Our clustering model based on artificial immune system is also applied to the case of public facility location problem in order to establish the practical applicability of our approach. By using the clone selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and better clustering effect. PMID:25435862

  15. Solving hard computational problems efficiently: asymptotic parametric complexity 3-coloring algorithm.

    PubMed

    Martín H, José Antonio

    2013-01-01

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object that can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by the parameter α ∈ N. Nevertheless, here it is proved that the probability of requiring a value of α > k to obtain a solution for a random graph decreases exponentially: P(α > k) ≤ 2^{-(k+1)}, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs, and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711

  16. Multiobjective inverse planning for intensity modulated radiotherapy with constraint-free gradient-based optimization algorithms

    NASA Astrophysics Data System (ADS)

    Lahanas, Michael; Schreibmann, Eduard; Baltas, Dimos

    2003-09-01

    We consider the behaviour of the limited memory L-BFGS algorithm as a representative constraint-free gradient-based algorithm which is used for multiobjective (MO) dose optimization for intensity modulated radiotherapy (IMRT). Using a parameter transformation, the positivity constraint problem of negative beam fluences is entirely eliminated: a feature which to date has not been fully understood by all investigators. We analyse the global convergence properties of L-BFGS by searching for the existence and the influence of possible local minima. With a fast simulated annealing (FSA) algorithm we examine whether the L-BFGS solutions are globally Pareto optimal. The three examples used in our analysis are a brain tumour, a prostate tumour and a test case with a C-shaped PTV. In 1% of the optimizations global convergence is violated. A simple mechanism practically eliminates the influence of this failure and the obtained solutions are globally optimal. A single-objective dose optimization requires less than 4 s for 5400 parameters and 40 000 sampling points. The elimination of the problem of negative beam fluences and the high computational speed permit constraint-free gradient-based optimization algorithms to be used for MO dose optimization. In this situation, a representative spectrum of possible solutions is obtained which contains information such as the trade-off between the objectives and range of dose values. Using simple decision making tools the best of all the possible solutions can be chosen. We perform an MO dose optimization for the three examples and compare the spectra of solutions, firstly using recommended critical dose values for the organs at risk and secondly, setting these dose values to zero.
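
    The central trick above is the parameter transformation that removes the positivity constraint on beam fluences so that an unconstrained method such as L-BFGS can be used. The sketch below illustrates one common choice of such a transformation (squaring the variables) on a made-up nonnegative least-squares problem solved with SciPy's L-BFGS implementation; the matrix and target are toy data, not an IMRT dose model, and the squaring map is an assumption rather than the paper's exact transformation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    A = rng.random((40, 10))                 # toy "dose deposition" matrix
    d = A @ np.abs(rng.standard_normal(10))  # toy prescribed dose

    # Fluences must be nonnegative; writing x = w**2 removes the constraint on w.
    def objective(w):
        x = w ** 2
        r = A @ x - d
        f = 0.5 * r @ r
        grad_w = 2.0 * w * (A.T @ r)         # chain rule through the transformation
        return f, grad_w

    res = minimize(objective, x0=np.ones(10), jac=True, method="L-BFGS-B")
    fluences = res.x ** 2                    # nonnegative by construction
    print("objective:", res.fun, "minimum fluence:", fluences.min())
    ```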

  17. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation.

    PubMed

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  18. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The developments and applications of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data and computing intensive process. To improve the computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain on different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical

  19. Constraint satisfaction using a hybrid evolutionary hill-climbing algorithm that performs opportunistic arc and path revision

    SciTech Connect

    Bowen, J.; Dozier, G.

    1996-12-31

    This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to realize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.

  20. A constraint-based search algorithm for parameter identification of environmental models

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Shafiei, M.; Hrachowitz, M.; Kumar, R.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.

    2014-12-01

    Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available (e.g., when making predictions in ungauged basins). In this study, we provide an alternative approach for parameter identification using constraints based on two types of restrictions derived from prior (or expert) knowledge. The first, called parameter constraints, restricts the solution space based on realistic relationships that must hold between the different model parameters, while the second, called process constraints, requires that additional realism relationships between the fluxes and state variables be satisfied. Specifically, we propose a search algorithm for finding parameter sets that simultaneously satisfy such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.
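
    The idea can be illustrated, in much simplified form, by rejection sampling that keeps only parameter sets satisfying both kinds of constraints. The `simulate`, `parameter_ok` and `process_ok` callables below are hypothetical stand-ins, not the authors' stepwise algorithm.

      import numpy as np

      def constrained_sample(bounds, parameter_ok, process_ok, simulate,
                             n_target=100, max_tries=100_000, seed=0):
          """Keep random parameter sets that satisfy (i) parameter constraints and
          (ii) process constraints evaluated on the simulated fluxes/states."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          kept = []
          for _ in range(max_tries):
              theta = rng.uniform(lo, hi)
              if not parameter_ok(theta):          # cheap check first
                  continue
              fluxes = simulate(theta)             # model run (toy stand-in here)
              if process_ok(theta, fluxes):
                  kept.append(theta)
                  if len(kept) == n_target:
                      break
          return np.array(kept)

      # toy 2-parameter "model": parameter constraint theta0 < theta1,
      # process constraint: mean simulated flux stays below 0.2
      bounds = [(0.0, 1.0), (0.0, 2.0)]
      simulate = lambda th: th[0] * np.exp(-np.linspace(0, 5, 50) / th[1])
      sets = constrained_sample(bounds, lambda th: th[0] < th[1],
                                lambda th, f: f.mean() < 0.2, simulate)
      print(len(sets), "feasible parameter sets")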

  1. Incorporating chemical modification constraints into a dynamic programming algorithm for prediction of RNA secondary structure

    PubMed Central

    Mathews, David H.; Disney, Matthew D.; Childs, Jessica L.; Schroeder, Susan J.; Zuker, Michael; Turner, Douglas H.

    2004-01-01

    A dynamic programming algorithm for prediction of RNA secondary structure has been revised to accommodate folding constraints determined by chemical modification and to include free energy increments for coaxial stacking of helices when they are either adjacent or separated by a single mismatch. Furthermore, free energy parameters are revised to account for recent experimental results for terminal mismatches and hairpin, bulge, internal, and multibranch loops. To demonstrate the applicability of this method, in vivo modification was performed on 5S rRNA in both Escherichia coli and Candida albicans with 1-cyclohexyl-3-(2-morpholinoethyl) carbodiimide metho-p-toluene sulfonate, dimethyl sulfate, and kethoxal. The percentage of known base pairs in the predicted structure increased from 26.3% to 86.8% for the E. coli sequence by using modification constraints. For C. albicans, the accuracy remained 87.5% both with and without modification data. On average, for these sequences and a set of 14 sequences with known secondary structure and chemical modification data taken from the literature, accuracy improves from 67% to 76%. This enhancement primarily reflects improvement for three sequences that are predicted with <40% accuracy on the basis of energetics alone. For these sequences, inclusion of chemical modification constraints improves the average accuracy from 28% to 78%. For the 11 sequences with <6% pseudoknotted base pairs, structures predicted with constraints from chemical modification contain on average 84% of known canonical base pairs. PMID:15123812

  2. Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.

    PubMed

    Jaśkowski, Wojciech; Krawiec, Krzysztof

    2011-01-01

    Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has been recently shown that every test of such a problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in-depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and come up with a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and for some problems converges to the intrinsic parameter of the problem--its a priori dimension. PMID:21815770

  3. Suzaku Constraints on Soft and Hard Excess Emissions from Abell 2199

    NASA Astrophysics Data System (ADS)

    Kawaharada, Madoka; Makishima, Kazuo; Kitaguchi, Takao; Okuyama, Sho; Nakazawa, Kazuhiro; Fukazawa, Yasushi

    2010-02-01

    The nearby (z = 0.03015) cluster of galaxies Abell 2199 was observed by Suzaku in X-rays, with five pointings for ~20 ks each. From the XIS data, the temperature and metal abundance profiles were derived out to ~700 kpc (0.4 times the virial radius). Both of these quantities decrease gradually from the center to peripheries by a factor of ~2, while the oxygen abundance tends to be flat. The temperature within 12' (~430 kpc) is ~4 keV, and the 0.5-10 keV X-ray luminosity integrated up to 30' is (2.9±0.1) × 10^44 erg s^-1, in agreement with previous XMM-Newton measurements. Above this thermal emission, no significant excess was found either in the XIS range below ~1 keV, or in the HXD-PIN range above ~15 keV. The 90%-confidence upper limit on the emission measure of an assumed 0.2 keV warm gas is (3.7-7.5) × 10^62 cm^-3 arcmin^-2, which is 3.7-7.6 times tighter than the detection reported with XMM-Newton. The 90%-confidence upper limit on the 20-80 keV luminosity of any power-law component is 1.8 × 10^43 erg s^-1, assuming a photon index of 2.0. Although this upper limit does not reject the possible 2.1σ detection by the BeppoSAX PDS, it is a factor of 2.1 tighter than that of the PDS if both are considered upper limits. The non-detection of the hard excess can be reconciled with the upper limit on diffuse radio emission, without invoking very low magnetic fields (<0.073 μG) which were suggested previously.

  4. Comparison of multiobjective evolutionary algorithms for operations scheduling under machine availability constraints.

    PubMed

    Frutos, M; Méndez, M; Tohmé, F; Broz, D

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world Job-Shop contexts using three algorithms, NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be a better choice of tool, since it yields more solutions in the approximate Pareto frontier. PMID:24489502

  5. Comparison of Multiobjective Evolutionary Algorithms for Operations Scheduling under Machine Availability Constraints

    PubMed Central

    Frutos, M.; Méndez, M.; Tohmé, F.; Broz, D.

    2013-01-01

    Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world Job-Shop contexts using three algorithms, NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be a better choice of tool, since it yields more solutions in the approximate Pareto frontier. PMID:24489502
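
    For reference, the hypervolume indicator mentioned above (the R2 indicator is not shown) can be computed exactly for two minimisation objectives with a simple sweep. The sketch below is illustrative, with made-up fronts and reference point: it filters dominated points and sums the rectangle each surviving point adds.

      import numpy as np

      def pareto_filter(points):
          """Keep the non-dominated points of a minimisation problem."""
          pts = np.asarray(points, dtype=float)
          keep = [i for i, p in enumerate(pts)
                  if not any(np.all(q <= p) and np.any(q < p) for q in pts)]
          return pts[keep]

      def hypervolume_2d(points, ref):
          """Hypervolume (to be maximised) of a 2-objective minimisation front."""
          front = pareto_filter(points)
          front = front[front[:, 1].argsort()]          # increasing f2, decreasing f1
          hv, prev_f1 = 0.0, ref[0]
          for f1, f2 in front:
              hv += max(prev_f1 - f1, 0.0) * max(ref[1] - f2, 0.0)
              prev_f1 = f1
          return hv

      # two approximate fronts (e.g. makespan vs. tardiness) scored against ref (10, 10)
      front_a = [(2, 8), (4, 5), (7, 3)]
      front_b = [(3, 7), (5, 6), (8, 4)]
      print(hypervolume_2d(front_a, (10, 10)), hypervolume_2d(front_b, (10, 10)))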

  6. Iterative reconstruction algorithm for analyzer-based phase-contrast computed tomography of hard and soft tissue

    NASA Astrophysics Data System (ADS)

    Sunaguchi, Naoki; Yuasa, Tetsuya; Ando, Masami

    2013-09-01

    We propose a reconstruction algorithm for analyzer-based phase-contrast computed tomography (CT) applicable to biological samples including hard tissue that may generate conspicuous artifacts with the conventional reconstruction method. The algorithm is an iterative procedure that goes back and forth between a tomogram and its sinogram through the Radon transform and CT reconstruction, while imposing a priori information in individual regions. We demonstrate the efficacy of the algorithm using synthetic data generated by computer simulation reflecting actual experimental conditions and actual data acquired from a rat foot by a dark field imaging system.

  7. RNAiFOLD: a constraint programming algorithm for RNA inverse folding and molecular design.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-04-01

    Synthetic biology is a rapidly emerging discipline with long-term ramifications that range from single-molecule detection within cells to the creation of synthetic genomes and novel life forms. Truly phenomenal results have been obtained by pioneering groups--for instance, the combinatorial synthesis of genetic networks, genome synthesis using BioBricks, and hybridization chain reaction (HCR), in which stable DNA monomers assemble only upon exposure to a target DNA fragment, biomolecular self-assembly pathways, etc. Such work strongly suggests that nanotechnology and synthetic biology together seem poised to constitute the most transformative development of the 21st century. In this paper, we present a Constraint Programming (CP) approach to solve the RNA inverse folding problem. Given a target RNA secondary structure, we determine an RNA sequence which folds into the target structure; i.e. whose minimum free energy structure is the target structure. Our approach represents a step forward in RNA design--we produce the first complete RNA inverse folding approach which allows for the specification of a wide range of design constraints. We also introduce a Large Neighborhood Search approach which allows us to tackle larger instances at the cost of losing completeness, while retaining the advantages of meeting design constraints (motif, GC-content, etc.). Results demonstrate that our software, RNAiFold, performs as well or better than all state-of-the-art approaches; nevertheless, our approach is unique in terms of completeness, flexibility, and the support of various design constraints. The algorithms presented in this paper are publicly available via the interactive webserver http://bioinformatics.bc.edu/clotelab/RNAiFold; additionally, the source code can be downloaded from that site. PMID:23600819

  8. Truss optimization on shape and sizing with frequency constraints based on orthogonal multi-gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Khatibinia, Mohsen; Sadegh Naseralavi, Seyed

    2014-12-01

    Structural optimization on shape and sizing with frequency constraints is well known as a highly nonlinear dynamic optimization problem with several local optimum solutions; hence, efficient optimization algorithms should be utilized to solve it. In this study, the orthogonal multi-gravitational search algorithm (OMGSA), a meta-heuristic algorithm, is introduced to solve truss optimization on shape and sizing with frequency constraints. The OMGSA is a hybrid approach combining a multi-gravitational search algorithm (multi-GSA) with an orthogonal crossover (OC). In multi-GSA, the population is split into several sub-populations, and each sub-population is independently evaluated by an improved gravitational search algorithm (IGSA). Furthermore, the OC is used in the proposed OMGSA in order to find and exploit the global solution in the search space. The capability of the OMGSA is demonstrated through six benchmark examples. Numerical results show that the proposed OMGSA outperforms the other optimization techniques.

  9. Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example

    PubMed Central

    Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio

    2014-01-01

    Abstract Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized for having extremely high dimensionalities. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable

  10. Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint

    SciTech Connect

    Hermant, Audrey

    2010-02-15

    This paper deals with optimal control problems with a regular second-order state constraint and a scalar control, satisfying the strengthened Legendre-Clebsch condition. We study the stability of the structure of stationary points. It is shown that under a uniform strict complementarity assumption, boundary arcs are stable under sufficiently smooth perturbations of the data. On the contrary, nonreducible touch points are not stable under perturbations. We show that under some reasonable conditions, either a boundary arc or a second touch point may appear. These results allow us to design a homotopy algorithm which automatically detects the structure of the trajectory and initializes the shooting parameters associated with boundary arcs and touch points.

  11. Line Matching Algorithm for Aerial Image Combining image and object space similarity constraints

    NASA Astrophysics Data System (ADS)

    Wang, Jingxue; Wang, Weixi; Li, Xiaoming; Cao, Zhenyu; Zhu, Hong; Li, Miao; He, Biao; Zhao, Zhigang

    2016-06-01

    A new straight line matching method for aerial images is proposed in this paper. Compared to previous works, similarity constraints combining radiometric information in the image and geometric attributes in the object plane are employed. Firstly, initial candidate lines and the elevation values of the line projection plane are determined from corresponding points in the neighborhoods of the reference lines. Secondly, the reference line and candidate lines are projected back onto the plane, and similarity measure constraints are then enforced to reduce the number of candidates and to determine the final corresponding lines in a hierarchical way. Thirdly, "one-to-many" and "many-to-one" matching results are transformed into "one-to-one" results by merging multiple lines into a new one, and the associated errors are eliminated simultaneously. Finally, the endpoints of corresponding lines are detected by a line expansion process combined with an "image-object-image" mapping mode. Experimental results show that the proposed algorithm is able to obtain reliable line matching results for aerial images.

  12. Research on imaging ranging algorithm base on constraint matching of trinocular vision

    NASA Astrophysics Data System (ADS)

    Ye, Pan; Li, Li; Jin, Wei-Qi; Jiang, Yu-tong

    2014-11-01

    Binocular stereo vision is a common passive ranging method that directly mimics the human visual system and can flexibly measure stereo information in complex conditions. However, binocular ranging accuracy is often limited, one reason being the low precision of stereo image pair matching. In this paper, based on a trinocular imaging and ranging algorithm with constraint matching, we use a trinocular ranging system composed of three parallel-mounted cameras to image a target and measure its distance. The cameras are calibrated with Zhang's method: the three cameras are first calibrated individually, and the results are then used to obtain three pairwise binocular calibrations, which give the relative position of each camera. The information obtained from the third camera reduces the ambiguity of corresponding-point matching in a binocular camera system, limits the search space through the epipolar constraint to improve matching speed, and filters the distance information to eliminate interference from feature points in the foreground and background, yielding a more accurate distance result for the target. Experimental results show that this method can overcome the limitations of binocular ranging and effectively improve range accuracy.

  13. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    ERIC Educational Resources Information Center

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…

  14. A Greedy reassignment algorithm for the PBS minimum monitor unit constraint

    NASA Astrophysics Data System (ADS)

    Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin

    2016-06-01

    Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning systems, and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low-weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest-weight spot in the entire field, reassigns its weight to its nearest neighbor(s), and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criterion was the γ-index pass rate that compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1% the planning metric has a standard deviation equal to 18% of the centroid value, showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy reassignment method has a 1.8 times better planning metric at a 90% pass rate compared to the other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for implementing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5

  15. A Greedy reassignment algorithm for the PBS minimum monitor unit constraint.

    PubMed

    Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin

    2016-06-21

    Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning systems, and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low-weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest-weight spot in the entire field, reassigns its weight to its nearest neighbor(s), and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criterion was the γ-index pass rate that compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1% the planning metric has a standard deviation equal to 18% of the centroid value, showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy reassignment method has a 1.8 times better planning metric at a 90% pass rate compared to the other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for implementing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5
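
    A minimal sketch of the reassignment rule described in the abstract, assuming spots are given as 2-D positions with scalar weights (NumPy only; the threshold and data below are hypothetical, and a real implementation would operate on the delivery system's spot map):

      import numpy as np

      def greedy_reassign(positions, weights, min_mu):
          """Repeatedly delete the smallest-weight spot that falls below the
          minimum MU and hand its weight to its nearest surviving neighbour."""
          pos = np.asarray(positions, dtype=float)
          w = np.asarray(weights, dtype=float).copy()
          alive = w > 0
          while True:
              low = np.where(alive & (w < min_mu))[0]
              if low.size == 0:
                  break
              i = low[np.argmin(w[low])]                 # smallest offending spot
              alive[i] = False
              others = np.where(alive)[0]
              if others.size == 0:                       # nothing left to absorb it
                  break
              d = np.linalg.norm(pos[others] - pos[i], axis=1)
              j = others[np.argmin(d)]                   # nearest neighbour
              w[j] += w[i]
              w[i] = 0.0
          return w

      spots = [(0, 0), (1, 0), (0, 1), (5, 5)]
      print(greedy_reassign(spots, [0.4, 2.0, 0.3, 1.5], min_mu=1.0))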

  16. A modified generalized extremal optimization algorithm for the quay crane scheduling problem with interference constraints

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Cheng, Wenming; Wang, Yi

    2014-10-01

    The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain near-optimal solutions and overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints; the resulting algorithm is termed the modified GEO. A randomized method for searching the task-to-QC assignments neighbouring an incumbent assignment is developed for executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment into an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experimental results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.

  17. Seismic small-scale discontinuity sparsity-constraint inversion method using a penalty decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Jingtao; Peng, Suping; Du, Wenfeng

    2016-02-01

    We consider a sparsity-constrained inversion method for detecting seismic small-scale discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these small-scale discontinuities are hard to identify using currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smooth features, we propose an effective L2-L0 norm model to improve their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build the L2-L0 norm model. In searching for its solution, an efficient and fast-convergent penalty decomposition method is employed. The proposed method achieves a significant improvement in enhancing seismic small-scale discontinuities. A numerical experiment and a field data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
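
    The penalty decomposition idea behind an L2-L0 model can be sketched on a generic sparse-recovery problem: alternate a ridge-like least-squares step with a hard-thresholding step while the coupling penalty grows. This is a generic illustration under simple assumptions, not the authors' seismic implementation.

      import numpy as np

      def penalty_decomposition_l0(A, b, k, rho=1.0, rho_growth=2.0, outer=15, inner=10):
          """Minimise ||Ax - b||^2 subject to ||x||_0 <= k by alternating between a
          ridge-like x-step and a hard-thresholding z-step while growing rho."""
          n = A.shape[1]
          x = np.zeros(n)
          z = np.zeros(n)
          for _ in range(outer):
              for _ in range(inner):
                  # x-step: (A^T A + rho I) x = A^T b + rho z
                  x = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b + rho * z)
                  # z-step: keep the k largest-magnitude entries of x
                  z = np.zeros(n)
                  top = np.argsort(np.abs(x))[-k:]
                  z[top] = x[top]
              rho *= rho_growth
          return z

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 120))
      x_true = np.zeros(120); x_true[[5, 40, 99]] = [2.0, -1.5, 3.0]
      b = A @ x_true + 0.01 * rng.standard_normal(60)
      x_hat = penalty_decomposition_l0(A, b, k=3)
      print(np.nonzero(x_hat)[0])   # recovered support, expected close to {5, 40, 99}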

  18. [Multispectral Radiation Algorithm Based on Emissivity Model Constraints for True Temperature Measurement].

    PubMed

    Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng

    2015-10-01

    Temperature measurement is one of the important factors for ensuring product quality, reducing production cost and ensuring experimental safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement, and the second measurement (SM) method is one of the common methods in multispectral radiation thermometry. However, the SM method cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true temperature measurement is proposed in which constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it follows from the relationship of brightness temperatures at different wavelengths that emissivity is an increasing function in an interval if the brightness temperature is an increasing or constant function in that range, and that emissivity satisfies an inequality involving emissivity and wavelength in an interval if the brightness temperature is a decreasing function in that range. With these emissivity model constraint conditions built on the brightness temperature information, the construction of assumed emissivity values is reduced from multiple classes to one class, and unnecessary emissivity constructions are avoided. Simulation experiments and comparisons for two different temperature points are carried out based on five measured targets with five representative variation trends of real emissivity: decreasing monotonically, increasing monotonically, first decreasing and then increasing with wavelength, first increasing and then decreasing, and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results

  19. Gradient flipping algorithm: introducing non-convex constraints in wavefront reconstructions with the transport of intensity equation.

    PubMed

    Parvizi, A; Van den Broek, W; Koch, C T

    2016-04-18

    The transport of intensity equation (TIE) is widely applied for recovering wave fronts from an intensity measurement and a measurement of its variation along the direction of propagation. In order to get around the problem of non-uniqueness and ill-conditionedness of the solution of the TIE in the very common case of unspecified boundary conditions or noisy data, additional constraints on the solution are necessary. Although from a numerical optimization point of view convex constraints, such as those imposed by total variation minimization, are preferable, we will show that in many cases non-convex constraints are necessary to overcome the low-frequency artifacts so typical of convex constraints. We provide simulated and experimental examples that demonstrate the superiority of solutions to the TIE obtained by our recently introduced gradient flipping algorithm over a total variation constrained solution. PMID:27137272

  20. Genetic algorithm to design Laue lenses with optimal performance for focusing hard X- and γ-rays

    NASA Astrophysics Data System (ADS)

    Camattari, Riccardo; Guidi, Vincenzo

    2014-10-01

    To focus hard X- and γ-rays it is possible to use a Laue lens as a concentrator. With this optics it is possible to improve the detection of radiation for several applications, from the observation of the most violent phenomena in the sky to nuclear medicine applications for diagnostic and therapeutic purposes. We implemented a code named LaueGen, which is based on a genetic algorithm and aims to design optimized Laue lenses. The genetic algorithm was selected because optimizing a Laue lens is a complex and discretized problem. The output of the code consists of the design of a Laue lens, which is composed of diffracting crystals that are selected and arranged in such a way as to maximize the lens performance. The code allows managing crystals of any material and crystallographic orientation. The program is structured in such a way that the user can control all the initial lens parameters. As a result, LaueGen is highly versatile and can be used to design very small lenses, for example, for nuclear medicine, or very large lenses, for example, for satellite-borne astrophysical missions.

  1. Improving chemical mapping algorithm and visualization in full-field hard x-ray spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong

    2013-12-01

    X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-Dimensional (2D) chemical mapping has been successfully applied to study many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with the weighted sum of standard spectra of individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute force enumeration method, and (ii) a constrained least square minimization algorithm proposed by us. Next, as 2D spectral fitting can be conducted pixel by pixel, both methods can theoretically be implemented in parallel. In order to demonstrate the feasibility of parallel computing in the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists could grasp the percentage differences easily without looking into the real data.
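
    A minimal per-pixel version of the constrained least-squares fitting can be written with SciPy's non-negative least squares, normalising the fitted weights to percentages. The toy stack and reference spectra below are made up, and a production version would parallelise the pixel loop as the paper describes.

      import numpy as np
      from scipy.optimize import nnls

      def chemical_map(spectra_stack, references):
          """Fit each pixel's absorption spectrum as a non-negative combination of
          reference spectra, then normalise the weights to percentages."""
          n_energy, height, width = spectra_stack.shape
          A = np.asarray(references).T                 # (n_energy, n_components)
          out = np.zeros((A.shape[1], height, width))
          for i in range(height):                      # embarrassingly parallel over pixels
              for j in range(width):
                  w, _ = nnls(A, spectra_stack[:, i, j])
                  total = w.sum()
                  out[:, i, j] = w / total if total > 0 else 0.0
          return out

      # toy example: two reference spectra, a 4 x 4 "image" that is a 30/70 mixture
      energies = np.linspace(0, 1, 50)
      refs = np.stack([np.exp(-energies), np.sqrt(energies)])
      stack = (0.3 * refs[0] + 0.7 * refs[1])[:, None, None] * np.ones((1, 4, 4))
      print(chemical_map(stack, refs)[:, 0, 0])        # ~[0.3, 0.7]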

  2. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.

  3. Total variation iterative constraint algorithm for limited-angle tomographic reconstruction of non-piecewise-constant structures

    NASA Astrophysics Data System (ADS)

    Krauze, W.; Makowski, P.; Kujawińska, M.

    2015-06-01

    Standard tomographic algorithms applied to optical limited-angle tomography produce reconstructions with highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which extends the applicability of TV regularization to non-piecewise-constant samples, such as biological cells. The approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image that is converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic the basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition, including optical diffraction tomography solvers. The obtained reconstructions show the resolution uniformity and overall shape accuracy expected from TV-regularization-based solvers while keeping the smooth internal structures of the object. A comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on the image quality.

  4. CCM Continuity Constraint Method: A finite-element computational fluid dynamics algorithm for incompressible Navier-Stokes fluid flows

    SciTech Connect

    Williams, P.T.

    1993-09-01

    As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H^1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.

  5. Constraints on hard spectator scattering and annihilation corrections in Bu,d → PV decays within QCD factorization

    NASA Astrophysics Data System (ADS)

    Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling

    2015-04-01

    In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ² analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, φ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ² analyses, it is observed that (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. The above-described findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, φ_{A,H}^i [°]) = (2.87^{+0.66}_{-1.95}, -145^{+14}_{-21}) and (ρ_A^f, φ_A^f [°]) = (0.91^{+0.12}_{-0.13}, -37^{+10}_{-9}) at 68% C.L. With the best-fit values, most of the theoretical results are in good agreement with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α_2 related to a large ρ_H result in the wrong sign for A_CP^dir(B⁻ → π⁰K*⁻) compared with the most recent BABAR data, which presents a new obstacle in solving the "ππ" and "πK" puzzles through α_2. A crosscheck with measurements at Belle (or Belle II) and LHCb, which offer higher precision, is urgently expected to confirm or refute such a possible mismatch.

  6. Computational stability ranking of mutated hydrophobic cores in staphylococcal nuclease and T4 lysozyme using hard-sphere and stereochemical constraints

    NASA Astrophysics Data System (ADS)

    Virrueta, Alejandro; Zhou, Alice; O'Hern, Corey; Regan, Lynne

    2014-03-01

    Molecular dynamics methods have significantly advanced the understanding of protein folding and stability. However, current force-fields cannot accurately calculate and rank the stability of modified or de novo proteins. One possible reason is that current force-fields use knowledge-based corrections that improve dihedral angle sampling, but do not satisfy the stereochemical constraints for amino acids. I propose the use of simple hard-sphere models for amino acids with stereochemical constraints taken from high-resolution protein crystal structures. This model can enable a correct consideration of the entropy of side-chain rotations, and may be sufficient to predict the effects of single residue mutations in the hydrophobic cores of staphylococcal nuclease and T4 lysozyme on stability changes. I will computationally count the total number of allowed side-chain conformations Ω and calculate the associated entropy, S = k_B ln(Ω), before and after each mutation. I will then rank the stability of the mutated cores based on my computed entropy changes, and compare my results with structural and thermodynamic data published by the Stites and Matthews groups. If successful, this project will provide a novel framework for the evaluation of entropic protein stabilities, and serve as a possible tool for computational protein design.
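
    The counting step can be illustrated with a deliberately tiny model: one terminal atom rotating about a single chi axis, tested for hard-sphere overlap against fixed neighbouring atoms, with S = k_B ln(Ω) evaluated from the count. All geometry and radii below are hypothetical, not taken from the proposed crystal-structure-derived constraints.

      import numpy as np

      K_B = 1.380649e-23  # J/K

      def allowed_rotamers(env_centers, env_radii, bond_length, probe_radius,
                           n_angles=360):
          """Count chi rotations (about the z-axis) of a single terminal atom that
          produce no hard-sphere overlap with fixed environment atoms."""
          chis = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
          atom = np.stack([bond_length * np.cos(chis),
                           bond_length * np.sin(chis),
                           np.zeros_like(chis)], axis=1)          # (n_angles, 3)
          ok = np.ones(n_angles, dtype=bool)
          for c, r in zip(env_centers, env_radii):
              d = np.linalg.norm(atom - np.asarray(c, dtype=float), axis=1)
              ok &= d >= (r + probe_radius)                        # hard-sphere test
          return int(ok.sum())

      def conformational_entropy(omega):
          return K_B * np.log(omega) if omega > 0 else float("-inf")

      # toy core: two fixed atoms crowding part of the rotation circle
      env = [(1.5, 0.0, 0.0), (0.0, 1.5, 0.0)]
      omega = allowed_rotamers(env, env_radii=[1.0, 1.0],
                               bond_length=1.5, probe_radius=0.7)
      print(omega, conformational_entropy(omega))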

  7. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest." Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method for solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions
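
    One common way to build such a penalty function is the static exterior penalty, in which squared constraint violations are added to the raw objective so that infeasible candidates receive worse fitness. The sketch below is a generic illustration with a made-up objective and constraint, not one of the penalty functions analyzed in the study.

      def penalized_objective(f, inequality_constraints, r=1.0e3):
          """Exterior (static) penalty: add r * sum(max(0, g_i(x))^2) to the raw
          objective so infeasible candidates look worse to the GA's selection."""
          def fitness(x):
              violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
              return f(x) + r * violation
          return fitness

      # toy structural-style problem: minimise a weight-like objective subject to
      # a stress-like limit g(x) = 1 - x[0]*x[1] <= 0  (i.e. x[0]*x[1] >= 1)
      f = lambda x: x[0] + 2.0 * x[1]
      g = lambda x: 1.0 - x[0] * x[1]
      fit = penalized_objective(f, [g])
      print(fit([2.0, 1.0]), fit([0.5, 0.5]))   # feasible vs. heavily penalised point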

  8. Algorithm for finding partitionings of hard variants of boolean satisfiability problem with application to inversion of some cryptographic functions.

    PubMed

    Semenov, Alexander; Zaikin, Oleg

    2016-01-01

    In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used for solving the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time required to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space, and the estimated effectiveness of the partitioning is the value of a predictive function at the corresponding point of this space. The problem of searching for an effective partitioning can thus be formulated as a problem of optimizing the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inverting some cryptographic functions. Several of these SAT instances with realistic predicted solving times were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving times agree well with the estimates obtained by the proposed method. PMID:27190753
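
    The Monte Carlo estimate at the heart of such a predictive function can be sketched as follows: solve a random sample of the simplified instances and scale the mean solving time by the partitioning size. The `solve_time` callable below is a hypothetical stand-in for running a real SAT solver.

      import random

      def estimate_partitioning_time(partitioning, solve_time, n_samples=100, seed=0):
          """Monte Carlo estimate of the total time needed to process a partitioning:
          solve a random sample of its simplified instances and scale up the mean.
          `solve_time` is a hypothetical stand-in for running a real SAT solver."""
          rng = random.Random(seed)
          sample = rng.sample(partitioning, min(n_samples, len(partitioning)))
          mean = sum(solve_time(inst) for inst in sample) / len(sample)
          return mean * len(partitioning)

      # toy usage: "instances" are integers, solving one takes 0.001*inst seconds;
      # a metaheuristic would minimise this estimate over candidate partitionings
      candidates = [list(range(1, 1001)), list(range(1, 2001, 2))]
      toy_time = lambda inst: 0.001 * inst
      print([round(estimate_partitioning_time(p, toy_time), 1) for p in candidates])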

  9. Blind identification and restoration of turbulence degraded images based on the nonnegativity and support constraints recursive inverse filtering algorithm

    NASA Astrophysics Data System (ADS)

    Li, Dongxing; Zhao, Yan; Dong, Xu

    2008-03-01

    In general image restoration, the point spread function (PSF) of the imaging system and the observation noise are known a priori. The aero-optics effect arises when objects (e.g., missiles or aircraft) fly at high or supersonic speed; in this situation, the PSF and the observation noise are unknown a priori, and the identification and restoration of turbulence-degraded images is a challenging problem. An algorithm based on nonnegativity and support constraints recursive inverse filtering (NAS-RIF) is proposed in order to identify and restore turbulence-degraded images. The NAS-RIF technique applies to situations in which the scene consists of a finite-support object against a uniformly black, grey, or white background. The restoration procedure of NAS-RIF involves recursive filtering of the blurred image to minimize a convex cost function. In the algorithm proposed in this paper, the turbulence-degraded image is filtered before it passes through the recursive filter, and the conjugate gradient minimization routine is used to minimize the NAS-RIF cost function. The algorithm is applied to identify and restore wind-tunnel test images. The experimental results show that the restoration effect is obviously improved.

  10. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.

  11. A Case Study on Investigating the Effect of Genetic Algorithm Operators on Predicting the Global Minimum Hardness Value of Biomaterial Extrudate

    SciTech Connect

    Shankar, T.J.; Sokhansanj, Shahabaddine

    2010-02-01

    Crossover and mutation are the main search operators of a genetic algorithm and one of the most important features that distinguish it from other search algorithms such as simulated annealing. The present work aimed to examine the effect of the genetic algorithm operators crossover and mutation (Pc and Pm), population size (n), and number of iterations (I) on predicting the minimum hardness (N) of a biomaterial extrudate. The second-order polynomial regression equation developed for the extrudate hardness in terms of the independent variables barrel temperature, screw speed, fish content of the feed, and feed moisture content was used as the objective function in the GA analysis. A simple genetic algorithm (SGA) with crossover and mutation operators was used in the present study, and a program was developed in C for an SGA with a rank-based fitness selection method. The upper limits of the population size and the number of iterations were fixed at 100. It was observed that increasing the population size and the number of iterations drastically improved the prediction of the function minimum. Minimum predicted hardness values were achievable with a medium population of 50, 50 iterations, and crossover and mutation probabilities of 50% and 0.5%. Further, the Pareto charts indicated that the effect of Pc was more significant when the population is 50, while Pm played a major role at low population (10). A crossover probability of 50% and a mutation probability of 0.5% are the threshold values for the convergence of the GA to a global search space. A minimum predicted hardness value of 3.82 (N) was observed for n = 60, I = 100, and Pc and Pm of 85% and 0.5%.
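
    A bare-bones real-coded SGA with the operators and probabilities discussed above might look like the sketch below. The quadratic "hardness" response surface is invented for illustration and is not the regression model fitted in the study; selection, crossover and mutation details are simple generic choices.

      import random

      def sga_minimize(objective, bounds, n=50, iters=50, pc=0.5, pm=0.005, seed=0):
          """Simple real-coded GA: rank-based truncation selection, arithmetic
          crossover applied with probability pc, per-gene mutation with probability pm."""
          rng = random.Random(seed)
          pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
          for _ in range(iters):
              pop.sort(key=objective)                      # best (smallest) first
              parents = pop[: max(2, n // 2)]              # rank-based truncation
              children = []
              while len(children) < n:
                  a, b = rng.sample(parents, 2)
                  child = list(a)
                  if rng.random() < pc:                    # arithmetic crossover
                      w = rng.random()
                      child = [w * x + (1 - w) * y for x, y in zip(a, b)]
                  for g, (lo, hi) in enumerate(bounds):    # gene-wise mutation
                      if rng.random() < pm:
                          child[g] = rng.uniform(lo, hi)
                  children.append(child)
              pop = children
          return min(pop, key=objective)

      # hypothetical second-order response surface standing in for extrudate hardness
      hardness = lambda x: 3.8 + 0.02 * (x[0] - 120.0) ** 2 + 0.05 * (x[1] - 60.0) ** 2
      best = sga_minimize(hardness, bounds=[(80.0, 160.0), (20.0, 100.0)])
      print([round(v, 1) for v in best], round(hardness(best), 3))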

  12. Sequential quadratic programming-based fast path planning algorithm subject to no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang

    2016-08-01

    Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem, and it is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem as a static optimization problem in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
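
    The reformulation can be imitated, in a much reduced form, with SciPy's SLSQP solver: waypoints at fixed stations, lateral offsets as decision variables, path length as the objective, and circular no-fly zones as inequality constraints. This is a simplified stand-in for the article's formulation (notably, the straight segments between waypoints are not themselves checked against the zones).

      import numpy as np
      from scipy.optimize import minimize

      def plan_path(x_start, x_goal, zones, n_waypoints=9):
          """Plan lateral offsets y_i for waypoints at fixed stations between
          (x_start, 0) and (x_goal, 0) so that every waypoint stays outside the
          circular no-fly zones, minimising total path length with SLSQP."""
          xs = np.linspace(x_start, x_goal, n_waypoints + 2)

          def length(y):
              pts = np.column_stack([xs, np.concatenate([[0.0], y, [0.0]])])
              return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

          cons = [{"type": "ineq",
                   "fun": lambda y, cx=cx, cy=cy, r=r:
                       (xs[1:-1] - cx) ** 2 + (y - cy) ** 2 - r ** 2}
                  for (cx, cy, r) in zones]

          res = minimize(length, x0=np.zeros(n_waypoints), constraints=cons,
                         method="SLSQP")
          return np.column_stack([xs, np.concatenate([[0.0], res.x, [0.0]])])

      # one no-fly zone of radius 2 sitting (slightly offset) on the direct route
      print(np.round(plan_path(0.0, 10.0, zones=[(5.0, 0.5, 2.0)]), 2))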

  13. Constraints on silicates formation in the Si-Al-Fe system: Application to hard deposits in steam generators of PWR nuclear reactors

    NASA Astrophysics Data System (ADS)

    Berger, Gilles; Million-Picallion, Lisa; Lefevre, Grégory; Delaunay, Sophie

    2015-04-01

    Introduction: The hydrothermal crystallization of silicate phases in the Si-Al-Fe system may create industrial constraints that are encountered in the nuclear industry in at least two contexts: the geological repository for nuclear wastes and the formation of hard sludges in the steam generators of PWR nuclear plants. In the first situation, the chemical reactions between the Fe canister and the surrounding clays have been extensively studied in laboratory [1-7] and pilot experiments [8]. These studies demonstrated that the high reactivity of metallic iron leads to the formation of Fe-silicates, berthierine-like, over a wide range of temperatures. By contrast, the formation of deposits in the steam generators of PWR plants, called hard sludges, is a newer and less studied issue which can affect reactor performance. Experiments: We present here a preliminary set of experiments reproducing the formation of hard sludges under conditions representative of the steam generator of a PWR power plant: 275°C, dilute solutions maintained at low potential by hydrazine addition and at alkaline pH by low concentrations of amines and ammonia. Magnetite, a corrosion by-product of the secondary circuit, is the source of iron, while aqueous Si and Al, the major impurities in this system, are supplied either as trace elements in the circulating solution or by addition of amorphous silica and alumina when considering confined zones. The fluid chemistry is monitored by sampling aliquots of the solution; Eh and pH are continuously measured by hydrothermal Cormet© electrodes implanted in a titanium hydrothermal reactor. The transformation, or not, of the solid fraction was examined post-mortem. These experiments evidenced the role of Al colloids as precursors of cements composed of kaolinite and boehmite, and the passivation of amorphous silica (which becomes unreactive), likely by sorption of aqueous iron. However, no Fe-bearing phase was formed, in contrast to many published studies on the Fe

  14. Using heuristic algorithms for capacity leasing and task allocation issues in telecommunication networks under fuzzy quality of service constraints

    NASA Astrophysics Data System (ADS)

    Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin

    2014-03-01

    Nowadays, every firm uses telecommunication networks in different amounts and ways in order to complete its daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which several network providers offer different pricing and quality of service (QoS) schemes. The QoS level guaranteed by the network providers and the minimum service quality level needed to accomplish the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures that are capable of solving the resulting nonlinear mixed-integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested on several test instances to demonstrate the applicability of the methodology.

  15. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    SciTech Connect

    Lonchampt, J.; Fessart, K.

    2013-07-01

    The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements or logistic investments such as spare parts purchases. The methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and gives its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare part model: although components are indeed independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description

  16. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
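
    Activity selection, one of the examples cited above, has a particularly transparent greedy solution whose correctness is naturally phrased as a dominance argument: among the remaining compatible activities, the one finishing earliest dominates the others. A minimal, self-contained sketch:

      # Greedy activity selection: sort by finish time, keep each activity that
      # starts no earlier than the last selected one finishes.
      def select_activities(intervals):
          """intervals: list of (start, finish) pairs."""
          chosen, last_finish = [], float("-inf")
          for start, finish in sorted(intervals, key=lambda iv: iv[1]):
              if start >= last_finish:
                  chosen.append((start, finish))
                  last_finish = finish
          return chosen

      print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))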

  17. A Neighbourhood Algorithm Analysis of the Constraints Provided by Surface Wave Dispersion Data on Upper Mantle Structure

    NASA Astrophysics Data System (ADS)

    Beghein, C.; Lebedev, S.; van der Hilst, R.

    2005-12-01

    Interstation dispersion curves can be used to obtain regional 1D profiles of the crust and upper mantle. Unlike phase velocity maps, dispersion curves can be determined with small errors and for a broad frequency band. We want to determine what features interstation surface wave dispersion curves can constrain. Using synthetic data and the Neighbourhood Algorithm, a direct search approach that provides a full statistical assessment of model uncertainties and trade-offs, we investigate how well crustal and upper mantle structure can be recovered with fundamental Love and Rayleigh waves. We also determine how strong the trade-offs between the different parameters are and what depth resolution we can expect to achieve with the current level of precision of this type of data. Synthetic dispersion curves between approximately 7 and 340 s were assigned realistic error bars, i.e. a relative uncertainty that increases with the period but with an amplitude consistent with the one achieved in "real" measurements. These dispersion curves were generated by two types of isotropic models differing only by their crustal structure. One represents an oceanic region (shallow Moho) and the other corresponds to an Archean continental area with a larger Moho depth. Preliminary results show that while the Moho depth, the shear-velocity structure in the transition zone, between 200 and 410 km depth, and between the base of the crust and 50 km depth are generally well recovered, crustal structure and Vs between 50 and 200 km depth are more difficult to constrain with Love waves or Rayleigh waves alone because of some trade-off between the two layers. When these two layers are put together, the resolution of Vs between 50 and 100 km depth appears to improve. Structure deeper than the transition zone is not constrained by the data because of a lack of sensitivity. We explore the possibility of differentiating between an upper and lower crust as well, and we investigate whether a joint

  18. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite. PMID:18558530
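
    The unification rests on standard reductions to the maximum clique problem; for instance, a maximum independent set of a graph is exactly a maximum clique of its complement. The brute-force sketch below (exponential, tiny graphs only) illustrates that mapping; it is not the IEA-PTS search itself.

      # Maximum independent set of a 5-cycle via maximum clique of its complement.
      from itertools import combinations

      nodes = range(5)
      edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}          # a 5-cycle

      def adjacent(u, v, E):
          return (u, v) in E or (v, u) in E

      def max_clique(V, E):
          for size in range(len(list(V)), 0, -1):               # try largest sizes first
              for cand in combinations(V, size):
                  if all(adjacent(u, v, E) for u, v in combinations(cand, 2)):
                      return set(cand)
          return set()

      complement = {(u, v) for u, v in combinations(nodes, 2) if not adjacent(u, v, edges)}
      print(max_clique(nodes, complement))   # a maximum independent set of the 5-cycle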

  19. Crystal-structure prediction via the Floppy-Box Monte Carlo algorithm: Method and application to hard (non)convex particles

    NASA Astrophysics Data System (ADS)

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-01

    In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
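
    Of the two overlap tests mentioned, the method of separating axes is the simpler to illustrate. The sketch below shows the two-dimensional convex-polygon version, where the candidate axes are the edge normals; the three-dimensional polyhedron test used with FBMC additionally needs face normals and edge-edge cross products, which are omitted here.

      # 2D separating-axis test: two convex polygons are disjoint iff their projections
      # onto some edge normal do not overlap.
      def project(poly, axis):
          dots = [x * axis[0] + y * axis[1] for x, y in poly]
          return min(dots), max(dots)

      def overlap_sat(poly_a, poly_b):
          for poly in (poly_a, poly_b):
              n = len(poly)
              for i in range(n):
                  x1, y1 = poly[i]
                  x2, y2 = poly[(i + 1) % n]
                  axis = (-(y2 - y1), x2 - x1)          # normal of the current edge
                  min_a, max_a = project(poly_a, axis)
                  min_b, max_b = project(poly_b, axis)
                  if max_a < min_b or max_b < min_a:    # a separating axis exists
                      return False
          return True                                   # no separating axis found: overlap

      square   = [(0, 0), (2, 0), (2, 2), (0, 2)]
      triangle = [(3, 0), (5, 0), (4, 2)]
      print(overlap_sat(square, triangle))              # False: the shapes are disjoint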

  20. Crystal-structure prediction via the floppy-box Monte Carlo algorithm: method and application to hard (non)convex particles.

    PubMed

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-01

    In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009)] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011)] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations. PMID:23231211

  1. Statistical Physics of Hard Optimization Problems

    NASA Astrophysics Data System (ADS)

    Zdeborová, Lenka

    2008-06-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult; it is believed that in the most difficult cases the number of operations required to minimize the cost function is exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this thesis is: How to recognize if an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.

  2. Statistical physics of hard optimization problems

    NASA Astrophysics Data System (ADS)

    Zdeborová, Lenka

    2009-06-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult; it is believed that in the most difficult cases the number of operations required to minimize the cost function is exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.

  3. A note on: A modified generalized extremal optimization algorithm for the quay crane scheduling problem with interference constraints

    NASA Astrophysics Data System (ADS)

    Trunfio, Roberto

    2015-06-01

    In a recent article, Guo, Cheng and Wang proposed a randomized search algorithm, called modified generalized extremal optimization (MGEO), to solve the quay crane scheduling problem for container groups under the assumption that schedules are unidirectional. The authors claim that the proposed algorithm is capable of finding new best solutions with respect to a well-known set of benchmark instances taken from the literature. However, as shown in this note, there are some errors in their work that can be detected by analysing the Gantt charts of two solutions provided by MGEO. In addition, some comments on the method used to evaluate the schedule corresponding to a task-to-quay crane assignment and on the search scheme of the proposed algorithm are provided. Finally, to assess the effectiveness of the proposed algorithm, the computational experiments are repeated and additional computational experiments are provided.

  4. Closing in on a Short-Hard Burst Progenitor: Constraints From Early-Time Optical Imaging and Spectroscopy of a Possible Host Galaxy of GRB 050509b

    SciTech Connect

    Bloom, Joshua S.; Prochaska, J.X.; Pooley, D.; Blake, C.W.; Foley, R.J.; Jha, S.; Ramirez-Ruiz, E.; Granot, J.; Filippenko, A.V.; Sigurdsson, S.; Barth, A.J.; Chen, H.-W.; Cooper, M.C.; Falco, E.E.; Gal, R.R.; Gerke, B.F.; Gladders, M.D.; Greene, J.E.; Hennanwi, J.; Ho, L.C.; Hurley, K.; /UC, Berkeley, Astron. Dept. /Lick Observ. /Harvard-Smithsonian Ctr. Astrophys. /Princeton, Inst. Advanced Study /KIPAC, Menlo Park /Penn State U., Astron. Astrophys. /UC, Irvine /MIT, MKI /UC, Davis /UC, Berkeley /Carnegie Inst. Observ. /UC, Berkeley, Space Sci. Dept. /Michigan U. /LBL, Berkeley /Spitzer Space Telescope

    2005-06-07

    The localization of the short-duration, hard-spectrum gamma-ray burst GRB050509b by the Swift satellite was a watershed event. Never before had a member of this mysterious subclass of classic GRBs been rapidly and precisely positioned in a sky accessible to the bevy of ground-based follow-up facilities. Thanks to the nearly immediate relay of the GRB position by Swift, we began imaging the GRB field 8 minutes after the burst and have continued during the 8 days since. Though the Swift X-ray Telescope (XRT) discovered an X-ray afterglow of GRB050509b, the first ever of a short-hard burst, thus far no convincing optical/infrared candidate afterglow or supernova has been found for the object. We present a re-analysis of the XRT afterglow and find an absolute position of R.A. = 12h36m13.59s, Decl. = +28°59'04.9'' (J2000), with a 1σ uncertainty of 3.68'' in R.A., 3.52'' in Decl.; this is about 4'' to the west of the XRT position reported previously. Close to this position is a bright elliptical galaxy with redshift z = 0.2248 ± 0.0002, about 1' from the center of a rich cluster of galaxies. This cluster has detectable diffuse emission, with a temperature of kT = 5.25 (+3.36/−1.68) keV. We also find several (~11) much fainter galaxies consistent with the XRT position from deep Keck imaging and have obtained Gemini spectra of several of these sources. Nevertheless we argue, based on positional coincidences, that the GRB and the bright elliptical are likely to be physically related. We thus have discovered reasonable evidence that at least some short-duration, hard-spectra GRBs are at cosmological distances. We also explore the connection of the properties of the burst and the afterglow, finding that GRB050509b was underluminous in both of these relative to long-duration GRBs. However, we also demonstrate that the ratio of the blast-wave energy to the γ-ray energy is consistent with that of long-duration GRBs. We thus find plausible

  5. Boosting Set Constraint Propagation for Network Design

    NASA Astrophysics Data System (ADS)

    Yip, Justin; van Hentenryck, Pascal; Gervet, Carmen

    This paper reconsiders the deployment of synchronous optical networks (SONET), an optimization problem naturally expressed in terms of set variables. Earlier approaches, using either MIP or CP technologies, focused on symmetry breaking, including the use of SBDS, and the design of effective branching strategies. This paper advocates an orthogonal approach and argues that the thrashing behavior experienced in earlier attempts is primarily due to a lack of pruning. It studies how to improve domain filtering by taking a more global view of the application and imposing redundant global constraints. The technical results include novel hardness results, propagation algorithms for global constraints, and inference rules. The paper also evaluates the contributions experimentally by presenting a novel model with static symmetry-breaking constraints and a static variable ordering which is many orders of magnitude faster than existing approaches.

  6. Temporal Constraint Reasoning With Preferences

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca

    2001-01-01

    A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
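
    Without preferences, a set of simple temporal constraints of the form l <= t_j - t_i <= u reduces to shortest paths on a distance graph, which is the base case the paper generalizes. A minimal consistency check via Bellman-Ford negative-cycle detection, on an invented three-event example:

      # Simple Temporal Network consistency: encode each bound as an edge of the
      # distance graph and look for a negative cycle with Bellman-Ford.
      def stn_consistent(num_events, constraints):
          """constraints: list of (i, j, lower, upper) meaning lower <= t_j - t_i <= upper."""
          edges = []
          for i, j, lo, up in constraints:
              edges.append((i, j, up))     # t_j - t_i <= up
              edges.append((j, i, -lo))    # t_i - t_j <= -lo
          dist = [0.0] * num_events        # implicit source at distance 0 to every node
          for _ in range(num_events - 1):
              for u, v, w in edges:
                  if dist[u] + w < dist[v]:
                      dist[v] = dist[u] + w
          return all(dist[u] + w >= dist[v] for u, v, w in edges)   # no negative cycle

      # Event 1 follows event 0 by 5..10 units, event 2 follows event 1 by 3..4 units,
      # yet event 2 is (inconsistently) required to be within 6 units of event 0.
      print(stn_consistent(3, [(0, 1, 5, 10), (1, 2, 3, 4), (0, 2, 0, 6)]))   # False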

  7. NGC 5548: LACK OF A BROAD Fe K{alpha} LINE AND CONSTRAINTS ON THE LOCATION OF THE HARD X-RAY SOURCE

    SciTech Connect

    Brenneman, L. W.; Elvis, M.; Krongold, Y.; Liu, Y.; Mathur, S.

    2012-01-01

    We present an analysis of the co-added and individual 0.7-40 keV spectra from seven Suzaku observations of the Sy 1.5 galaxy NGC 5548 taken over a period of eight weeks. We conclude that the source has a moderately ionized, three-zone warm absorber, a power-law continuum, and exhibits contributions from cold, distant reflection. Relativistic reflection signatures are not significantly detected in the co-added data, and we place an upper limit on the equivalent width of a relativistically broad Fe Kα line at EW ≤ 26 eV at 90% confidence. Thus NGC 5548 can be labeled as a 'weak' type 1 active galactic nucleus (AGN) in terms of its observed inner disk reflection signatures, in contrast to sources with very broad, strong iron lines such as MCG-6-30-15, which are likely much fewer in number. We compare physical properties of NGC 5548 and MCG-6-30-15 that might explain this difference in their reflection properties. Though there is some evidence that NGC 5548 may harbor a truncated inner accretion disk, this evidence is inconclusive, so we also consider light bending of the hard X-ray continuum emission in order to explain the lack of relativistic reflection in our observation. If the absence of a broad Fe Kα line is interpreted in the light-bending context, we conclude that the source of the hard X-ray continuum lies at radii r_s ≳ 100 r_g. We note, however, that light-bending models must be expanded to include a broader range of physical parameter space in order to adequately explain the spectral and timing properties of average AGNs, rather than just those with strong, broad iron lines.

  8. Optimization of automated segmentation of monkeypox virus-induced lung lesions from normal lung CT images using hard C-means algorithm

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie

    2013-03-01

    Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. A previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox virus induced disease before inoculation and after disease progression. Automated estimation is in close agreement with manual segmentation.
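
    Hard C-means is the crisp-assignment counterpart of fuzzy C-means (essentially k-means): each sample is assigned to its nearest center, then the centers are recomputed, and the two steps alternate until convergence. A toy one-dimensional sketch on synthetic intensity values, which merely stand in for the CT voxel intensities used in the study:

      # Hard C-means with C = 2 on synthetic "air-like" vs "lesion-like" intensities.
      import random
      random.seed(1)

      data = [random.gauss(-800, 40) for _ in range(200)] + \
             [random.gauss(-100, 60) for _ in range(60)]
      centers = [min(data), max(data)]                    # simple initialization

      for _ in range(20):
          clusters = [[], []]
          for x in data:                                  # hard (all-or-nothing) assignment
              clusters[0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1].append(x)
          centers = [sum(c) / len(c) if c else centers[k] for k, c in enumerate(clusters)]

      print(centers)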

  9. Simultaneous multi-vehicle detection and tracking framework with pavement constraints based on machine learning and particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Huang, Zhi; Zhong, Zhihua

    2014-11-01

    Due to the large variations of the environment, with an ever-changing background and vehicles of different shapes, colors and appearances, implementing a real-time on-board vehicle recognition system with high adaptability, efficiency and robustness in complicated environments remains challenging. This paper introduces a simultaneous detection and tracking framework for robust on-board vehicle recognition based on monocular vision technology. The framework utilizes a novel layered machine learning and particle filter to build a multi-vehicle detection and tracking system. In the vehicle detection stage, a layered machine learning method is presented, which combines coarse-search and fine-search to obtain the target using the AdaBoost-based training algorithm. The pavement segmentation method based on characteristic similarity is proposed to estimate the most likely pavement area. Efficiency and accuracy are enhanced by restricting vehicle detection within the downsized area of pavement. In the vehicle tracking stage, a multi-objective tracking algorithm based on target state management and particle filter is proposed. The proposed system is evaluated on roadway video captured in a variety of traffic, illumination, and weather conditions. The evaluation results show that, under conditions of proper illumination and clear vehicle appearance, the proposed system achieves a 91.2% detection rate and a 2.6% false detection rate. Experiments compared to typical algorithms show that the presented algorithm reduces the false detection rate nearly by half at the cost of a 2.7%-8.6% decrease in detection rate. This paper proposes a multi-vehicle detection and tracking system which is promising for implementation in an on-board vehicle recognition system with high precision, strong robustness and low computational cost.

  10. ICA analysis of fMRI with real-time constraints: an evaluation of fast detection performance as function of algorithms, parameters and a priori conditions

    PubMed Central

    Soldati, Nicola; Calhoun, Vince D.; Bruzzone, Lorenzo; Jovicich, Jorge

    2013-01-01

    Independent component analysis (ICA) techniques offer a data-driven possibility to analyze brain functional MRI data in real-time. Typical ICA methods used in functional magnetic resonance imaging (fMRI), however, have been until now mostly developed and optimized for the off-line case in which all data is available. Real-time experiments are ill-posed for ICA in that several constraints are added: limited data, limited analysis time and dynamic changes in the data and computational speed. Previous studies have shown that particular choices of ICA parameters can be used to monitor real-time fMRI (rt-fMRI) brain activation, but it is unknown how other choices would perform. In this rt-fMRI simulation study we investigate and compare the performance of 14 different publicly available ICA algorithms systematically sampling different growing window lengths (WLs), model order (MO) as well as a priori conditions (none, spatial or temporal). Performance is evaluated by computing the spatial and temporal correlation to a target component as well as computation time. Four algorithms are identified as best performing (constrained ICA, fastICA, amuse, and evd), with their corresponding parameter choices. Both spatial and temporal priors are found to provide equal or improved performances in similarity to the target compared with their off-line counterpart, with greatly reduced computation costs. This study suggests parameter choices that can be further investigated in a sliding-window approach for a rt-fMRI experiment. PMID:23378835
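
    A toy version of the growing-window evaluation can be put together with scikit-learn's FastICA (one publicly available implementation of the kind of algorithm compared in the study) on synthetic mixtures; the spatial/temporal priors, the other algorithms, and real fMRI data are all omitted, and every signal below is invented.

      # Growing-window ICA on synthetic data; similarity to a known target time course.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      t = np.arange(400)
      target = (np.sin(t / 8.0) > 0).astype(float)            # block-design "activation"
      mixed = rng.normal(size=(400, 20)) + np.outer(target, rng.normal(size=20))

      for window in (100, 200, 400):                          # growing window lengths
          ica = FastICA(n_components=5, random_state=0, max_iter=500)
          sources = ica.fit_transform(mixed[:window])         # shape: (window, n_components)
          corr = max(abs(np.corrcoef(sources[:, k], target[:window])[0, 1])
                     for k in range(sources.shape[1]))
          print(window, round(corr, 2))                       # temporal similarity to the target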

  11. The free energy of the metastable supersaturated vapor via restricted ensemble simulations. III. An extension to the Corti and Debenedetti subcell constraint algorithm

    NASA Astrophysics Data System (ADS)

    Nie, Chu; Geng, Jun; Marlow, William H.

    2016-04-01

    In order to improve the sampling of restricted microstates in our previous work [C. Nie, J. Geng, and W. H. Marlow, J. Chem. Phys. 127, 154505 (2007); 128, 234310 (2008)] and quantitatively predict thermal properties of supersaturated vapors, an extension is made to the Corti and Debenedetti subcell constraint algorithm [D. S. Corti and P. Debenedetti, Chem. Eng. Sci. 49, 2717 (1994)], which restricts the maximum allowed local density at any point in a simulation box. The maximum allowed local density at a point in a simulation box is defined by the maximum number of particles Nm allowed to appear inside a sphere of radius R, with this point as the center of the sphere. Both Nm and R serve as extra thermodynamic variables for maintaining a certain degree of spatial homogeneity in a supersaturated system. In a restricted canonical ensemble, at a given temperature and an overall density, series of local minima on the Helmholtz free energy surface F(Nm, R) are found subject to different (Nm, R) pairs. The true equilibrium metastable state is identified through the analysis of the formation free energies of Stillinger clusters of various sizes obtained from these restricted states. The simulation results of a supersaturated Lennard-Jones vapor at reduced temperature 0.7 including the vapor pressure isotherm, formation free energies of critical nuclei, and chemical potential differences are presented and analyzed. In addition, with slight modifications, the current algorithm can be applied to computing thermal properties of superheated liquids.
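
    The restriction can be pictured as a rejection test applied to trial configurations: if any probe sphere of radius R contains more than Nm particles, the configuration violates the constraint. In the sketch below the probe centres are taken to be the particle positions themselves, which is only one possible, assumed reading of "any point in the simulation box"; the production algorithm is more involved.

      # Local-density constraint check with minimum-image distances in a cubic box.
      import math

      def violates_local_density(positions, box, R, Nm):
          def dist(p, q):
              return math.sqrt(sum(min(abs(a - b), box - abs(a - b)) ** 2
                                   for a, b in zip(p, q)))
          for center in positions:
              count = sum(1 for p in positions if dist(center, p) <= R)
              if count > Nm:                 # more than Nm particles in this probe sphere
                  return True
          return False

      positions = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (0.15, 0.2, 0.1), (5.0, 5.0, 5.0)]
      print(violates_local_density(positions, box=10.0, R=0.5, Nm=2))   # True: 3 particles cluster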

  12. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    PubMed

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV usually are not able to form the radiation beam towards the target user precisely and not good enough to reduce the interference by placing null at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) technique is explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation result demonstrates that received signal to interference and noise ratio (SINR) of target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired direction. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented using Matlab program. PMID:25147859
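
    The underlying LCMV weights (before any PSO, DM-AIS or GSA refinement) have the closed form w = R^{-1} C (C^H R^{-1} C)^{-1} f. A small NumPy sketch for a uniform linear array with a single look-direction constraint and one interferer; the array geometry, powers and angles are invented:

      # LCMV weights for an 8-element half-wavelength-spaced linear array.
      import numpy as np

      def steering(theta_deg, n_elem):
          n = np.arange(n_elem)
          return np.exp(-1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

      n_elem = 8
      desired, interferer = 0.0, 40.0
      a_i = steering(interferer, n_elem)[:, None]
      R = 10.0 * (a_i @ a_i.conj().T) + np.eye(n_elem)   # toy interference-plus-noise covariance

      C = steering(desired, n_elem)[:, None]             # constraint: unit gain at 0 degrees
      f = np.array([1.0])
      Rinv = np.linalg.inv(R)
      w = Rinv @ C @ np.linalg.inv(C.conj().T @ Rinv @ C) @ f

      print(abs(w.conj() @ steering(desired, n_elem)))     # ~1: look direction preserved
      print(abs(w.conj() @ steering(interferer, n_elem)))  # << 1: interference suppressed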

  13. Null Steering of Adaptive Beamforming Using Linear Constraint Minimum Variance Assisted by Particle Swarm Optimization, Dynamic Mutated Artificial Immune System, and Gravitational Search Algorithm

    PubMed Central

    Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques that is commonly applied to cancel interfering signals and steer or produce a strong beam to the desired signal through its computed weight vectors. However, weights computed by LCMV usually are not able to form the radiation beam towards the target user precisely and not good enough to reduce the interference by placing null at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) technique is explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation result demonstrates that received signal to interference and noise ratio (SINR) of target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired direction. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented using Matlab program. PMID:25147859

  14. The free energy of the metastable supersaturated vapor via restricted ensemble simulations. III. An extension to the Corti and Debenedetti subcell constraint algorithm.

    PubMed

    Nie, Chu; Geng, Jun; Marlow, William H

    2016-04-14

    In order to improve the sampling of restricted microstates in our previous work [C. Nie, J. Geng, and W. H. Marlow, J. Chem. Phys. 127, 154505 (2007); 128, 234310 (2008)] and quantitatively predict thermal properties of supersaturated vapors, an extension is made to the Corti and Debenedetti subcell constraint algorithm [D. S. Corti and P. Debenedetti, Chem. Eng. Sci. 49, 2717 (1994)], which restricts the maximum allowed local density at any point in a simulation box. The maximum allowed local density at a point in a simulation box is defined by the maximum number of particles Nm allowed to appear inside a sphere of radius R, with this point as the center of the sphere. Both Nm and R serve as extra thermodynamic variables for maintaining a certain degree of spatial homogeneity in a supersaturated system. In a restricted canonical ensemble, at a given temperature and an overall density, series of local minima on the Helmholtz free energy surface F(Nm, R) are found subject to different (Nm, R) pairs. The true equilibrium metastable state is identified through the analysis of the formation free energies of Stillinger clusters of various sizes obtained from these restricted states. The simulation results of a supersaturated Lennard-Jones vapor at reduced temperature 0.7 including the vapor pressure isotherm, formation free energies of critical nuclei, and chemical potential differences are presented and analyzed. In addition, with slight modifications, the current algorithm can be applied to computing thermal properties of superheated liquids. PMID:27083734

  15. Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
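
    The hard-constraint method can be sketched generically as an ordinary Kalman update followed by a projection of the estimate onto the feasible set, where the projection is a small quadratic program. The code below is such a generic sketch using SciPy's SLSQP solver, not the turbofan implementation; the weighting matrix, the constraint and all numbers are illustrative.

      # Project an unconstrained estimate xhat onto {x : D x <= d}, minimizing
      # (x - xhat)^T W (x - xhat); W is typically the inverse error covariance.
      import numpy as np
      from scipy.optimize import minimize

      def project_estimate(xhat, W, D, d):
          objective = lambda x: (x - xhat) @ W @ (x - xhat)
          cons = {"type": "ineq", "fun": lambda x: d - D @ x}   # encodes D x <= d
          return minimize(objective, xhat, constraints=[cons]).x

      # Example: a health parameter that physically cannot be positive
      # (e.g., an efficiency scalar can only degrade from its nominal value).
      xhat = np.array([0.8, -1.2])                 # unconstrained Kalman estimate
      W = np.linalg.inv(np.diag([0.5, 0.5]))       # inverse of a toy error covariance
      D = np.array([[1.0, 0.0]])
      d = np.array([0.0])
      print(project_estimate(xhat, W, D, d))       # first component pulled back to ~0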

  16. New Detection Systems of Bacteria Using Highly Selective Media Designed by SMART: Selective Medium-Design Algorithm Restricted by Two Constraints

    PubMed Central

    Kawanishi, Takeshi; Shiraishi, Takuya; Okano, Yukari; Sugawara, Kyoko; Hashimoto, Masayoshi; Maejima, Kensaku; Komatsu, Ken; Kakizawa, Shigeyuki; Yamaji, Yasuyuki; Hamamoto, Hiroshi; Oshima, Kenro; Namba, Shigetou

    2011-01-01

    Culturing is an indispensable technique in microbiological research, and culturing with selective media has played a crucial role in the detection of pathogenic microorganisms and the isolation of commercially useful microorganisms from environmental samples. Although numerous selective media have been developed in empirical studies, unintended microorganisms often grow on such media probably due to the enormous numbers of microorganisms in the environment. Here, we present a novel strategy for designing highly selective media based on two selective agents, a carbon source and antimicrobials. We named our strategy SMART for highly Selective Medium-design Algorithm Restricted by Two constraints. To test whether the SMART method is applicable to a wide range of microorganisms, we developed selective media for Burkholderia glumae, Acidovorax avenae, Pectobacterium carotovorum, Ralstonia solanacearum, and Xanthomonas campestris. The series of media developed by SMART specifically allowed growth of the targeted bacteria. Because these selective media exhibited high specificity for growth of the target bacteria compared to established selective media, we applied three notable detection technologies: paper-based, flow cytometry-based, and color change-based detection systems for target bacteria species. SMART facilitates not only the development of novel techniques for detecting specific bacteria, but also our understanding of the ecology and epidemiology of the targeted bacteria. PMID:21304596

  17. Robust H∞ stabilization of a hard disk drive system with a single-stage actuator

    NASA Astrophysics Data System (ADS)

    Harno, Hendra G.; Kiin Woon, Raymond Song

    2015-04-01

    This paper considers a robust H∞ control problem for a hard disk drive system with a single stage actuator. The hard disk drive system is modeled as a linear time-invariant uncertain system where its uncertain parameters and high-order dynamics are considered as uncertainties satisfying integral quadratic constraints. The robust H∞ control problem is transformed into a nonlinear optimization problem with a pair of parameterized algebraic Riccati equations as nonconvex constraints. The nonlinear optimization problem is then solved using a differential evolution algorithm to find stabilizing solutions to the Riccati equations. These solutions are used for synthesizing an output feedback robust H∞ controller to stabilize the hard disk drive system with a specified disturbance attenuation level.

  18. Deformation Time-Series of the Lost-Hills Oil Field using a Multi-Baseline Interferometric SAR Inversion Algorithm with Finite Difference Smoothing Constraints

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.

    2012-12-01

    The Lost-Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in troposphere delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations are formulated for each spatial point that are functions of the deformation velocities
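
    For a single pixel, the inversion described above amounts to a regularized least-squares problem: each interferogram contributes a row mapping epoch displacements to an observed phase difference, and second-difference rows weighted by a smoothing factor penalize rough time series. A synthetic single-pixel sketch (dates, pairs and noise are invented, and the DEM-error term of the full algorithm is omitted):

      # Regularized time-series inversion for one pixel with finite-difference smoothing.
      import numpy as np

      n_epochs = 6
      pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5)]
      true_defo = -1.5 * np.arange(n_epochs)                 # cm, linear subsidence
      obs = np.array([true_defo[j] - true_defo[i] for i, j in pairs])
      obs += np.random.default_rng(0).normal(0, 0.1, obs.size)

      A = np.zeros((len(pairs), n_epochs))
      for row, (i, j) in enumerate(pairs):
          A[row, i], A[row, j] = -1.0, 1.0                   # phase difference of pair (i, j)

      gamma = 1.0                                            # smoothing weight
      D2 = np.zeros((n_epochs - 2, n_epochs))                # second-difference operator
      for k in range(n_epochs - 2):
          D2[k, k:k + 3] = [1.0, -2.0, 1.0]

      # Pin the first epoch to zero; the plain difference system is rank deficient.
      A_aug = np.vstack([A, gamma * D2, np.eye(1, n_epochs)])
      b_aug = np.concatenate([obs, np.zeros(n_epochs - 2), [0.0]])
      solution, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
      print(np.round(solution, 2))                           # close to the true time series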

  19. Order-to-chaos transition in the hardness of random Boolean satisfiability problems

    NASA Astrophysics Data System (ADS)

    Varga, Melinda; Sumi, Róbert; Toroczkai, Zoltán; Ercsey-Ravasz, Mária

    2016-05-01

    Transient chaos is a ubiquitous phenomenon characterizing the dynamics of phase-space trajectories evolving towards a steady-state attractor in physical systems as diverse as fluids, chemical reactions, and condensed matter systems. Here we show that transient chaos also appears in the dynamics of certain efficient algorithms searching for solutions of constraint satisfaction problems that include scheduling, circuit design, routing, database problems, and even Sudoku. In particular, we present a study of the emergence of hardness in Boolean satisfiability (k-SAT), a canonical class of constraint satisfaction problems, by using an analog deterministic algorithm based on a system of ordinary differential equations. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos of the dynamical system corresponding to the analog algorithm, and it expresses the rate at which the trajectory approaches a solution. We show that for a given density of constraints and fixed number of Boolean variables N, the hardness of formulas in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_χ in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic. A similar behavior is found in 4-SAT as well, however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter, κ ∼ N^{B|α − α_χ|^{1−γ}} with 0 < γ < 1. We demonstrate that the transition is generated by the appearance of metastable basins in the solution space as the density of constraints α is increased.

  20. Order-to-chaos transition in the hardness of random Boolean satisfiability problems.

    PubMed

    Varga, Melinda; Sumi, Róbert; Toroczkai, Zoltán; Ercsey-Ravasz, Mária

    2016-05-01

    Transient chaos is a ubiquitous phenomenon characterizing the dynamics of phase-space trajectories evolving towards a steady-state attractor in physical systems as diverse as fluids, chemical reactions, and condensed matter systems. Here we show that transient chaos also appears in the dynamics of certain efficient algorithms searching for solutions of constraint satisfaction problems that include scheduling, circuit design, routing, database problems, and even Sudoku. In particular, we present a study of the emergence of hardness in Boolean satisfiability (k-SAT), a canonical class of constraint satisfaction problems, by using an analog deterministic algorithm based on a system of ordinary differential equations. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos of the dynamical system corresponding to the analog algorithm, and it expresses the rate at which the trajectory approaches a solution. We show that for a given density of constraints and fixed number of Boolean variables N, the hardness of formulas in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_{χ} in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic. A similar behavior is found in 4-SAT as well, however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter κ∼N^{B|α-α_{χ}|^{1-γ}} with 0<γ<1. We demonstrate that the transition is generated by the appearance of metastable basins in the solution space as the density of constraints α is increased. PMID:27300884

  1. Order-to-chaos transition in the hardness of random Boolean satisfiability problems

    NASA Astrophysics Data System (ADS)

    Varga, Melinda; Sumi, Robert; Ercsey-Ravasz, Maria; Toroczkai, Zoltan

    Transient chaos is a phenomenon characterizing the dynamics of phase space trajectories evolving towards an attractor in physical systems. We show that transient chaos also appears in the dynamics of certain algorithms searching for solutions of constraint satisfaction problems (e.g., Sudoku). We present a study of the emergence of hardness in Boolean satisfiability (k-SAT) using an analog deterministic algorithm. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos, and it expresses the rate at which the trajectory approaches a solution. We show that the hardness in random k-SAT ensembles has a wide variation approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at αc in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic, however, such transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter. We demonstrate that the transition is generated by the appearance of non-solution basins in the solution space as the density of constraints is increased.

  2. Portable Health Algorithms Test System

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  3. Compact location problems with budget and communication constraints

    SciTech Connect

    Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.

    1995-05-01

    We consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first distance function under diameter or sum-constraints with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal function value, violating the constraint with respect to the second distance function by a factor of at most β. We observe that in general obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. We present efficient approximation algorithms for the case when both edge-weight functions obey the triangle inequality. For the problem of minimizing the diameter under a diameter constraint with respect to the second weight function, we provide a (2,2)-approximation algorithm. We also show that no polynomial time algorithm can provide an (α, 2 − ε)- or (2 − ε, β)-approximation for any fixed ε > 0 and α, β ≥ 1, unless P = NP. This result is proved to remain true, even if one fixes ε′ > 0 and allows the algorithm to place only 2p|V|^(6 − ε′) facilities. Our techniques can be extended to the case when either the objective or the constraint is of sum-type and also to handle additional weights on the nodes of the graph.

  4. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.

  5. Constraints in Genetic Programming

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.

    1996-01-01

    Genetic programming refers to a class of genetic algorithms utilizing generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements -- one reduces only the number of program instances, the other reduces both the space of program structures as well as their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are not less efficient than the standard genetic programming operators if one preprocesses the constraints - the necessary mechanisms are identified.

  6. Must "Hard Problems" Be Hard?

    ERIC Educational Resources Information Center

    Kolata, Gina

    1985-01-01

    To determine how hard it is for computers to solve problems, researchers have classified groups of problems (polynomial hierarchy) according to how much time they seem to require for their solutions. A difficult and complex proof is offered which shows that a combinatorial approach (using Boolean circuits) may resolve the problem. (JN)

  7. Rigorous location of phase transitions in hard optimization problems.

    PubMed

    Achlioptas, Dimitris; Naor, Assaf; Peres, Yuval

    2005-06-01

    It is widely believed that for many optimization problems, no algorithm is substantially more efficient than exhaustive search. This means that finding optimal solutions for many practical problems is completely beyond any current or projected computational capacity. To understand the origin of this extreme 'hardness', computer scientists, mathematicians and physicists have been investigating for two decades a connection between computational complexity and phase transitions in random instances of constraint satisfaction problems. Here we present a mathematically rigorous method for locating such phase transitions. Our method works by analysing the distribution of distances between pairs of solutions as constraints are added. By identifying critical behaviour in the evolution of this distribution, we can pinpoint the threshold location for a number of problems, including the two most-studied ones: random k-SAT and random graph colouring. Our results prove that the heuristic predictions of statistical physics in this context are essentially correct. Moreover, we establish that random instances of constraint satisfaction problems have solutions well beyond the reach of any analysed algorithm. PMID:15944693

  8. "Wood already touched by fire is not hard to set alight": Comment on "Constraints to applying systems thinking concepts in health systems: A regional perspective from surveying stakeholders in Eastern Mediterranean countries".

    PubMed

    Agyepong, Irene Akua

    2015-03-01

    A major constraint to the application of any form of knowledge and principles is the awareness, understanding and acceptance of the knowledge and principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be an awareness and understanding of ST and how to apply it. This is a fundamental constraint and in the increasing desire to enable the application of ST concepts in health systems in LMIC and understand and evaluate the effects; an essential first step is going to be enabling of a wide spread as well as deeper understanding of ST and how to apply this understanding. PMID:25774378

  9. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  10. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints that make it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to promote exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710
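
    As a rough illustration of the two ingredients named above, the sketch below shows a linear schedule for HMCR and PAR and a standard polynomial-mutation operator; the exact update rules and parameter ranges used by the DHSPM authors are not given here, so these forms are assumptions.

```python
import random

def dynamic_hmcr_par(it, max_it, hmcr_range=(0.7, 0.99), par_range=(0.1, 0.5)):
    """Illustrative linear schedules: HMCR grows and PAR shrinks with iterations
    (the paper's exact update rules may differ)."""
    frac = it / max_it
    hmcr = hmcr_range[0] + frac * (hmcr_range[1] - hmcr_range[0])
    par = par_range[1] - frac * (par_range[1] - par_range[0])
    return hmcr, par

def polynomial_mutation(x, lo, hi, eta=20.0, rng=random):
    """Standard polynomial mutation of a single real-valued decision variable."""
    u = rng.random()
    if u < 0.5:
        delta = (2 * u) ** (1 / (eta + 1)) - 1
    else:
        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
    return min(hi, max(lo, x + delta * (hi - lo)))
```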

  11. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea
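
    The conversion of a particle cloud into a GMM mentioned above can be illustrated with an off-the-shelf EM fit; the particle cloud below is synthetic and the units are only placeholders for range (km) and range-rate (km/s), not data from the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical particle cloud in (range, range-rate) space; in practice the
# particles would come from sampling the constrained admissible region.
rng = np.random.default_rng(1)
particles = np.vstack([rng.normal([7000.0, 1.0], [150.0, 0.2], size=(500, 2)),
                       rng.normal([7600.0, -0.5], [200.0, 0.3], size=(500, 2))])

# EM fit of a Gaussian Mixture Model to the cloud, as described above.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(particles)
print(gmm.weights_, gmm.means_)
```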

  12. Improvements to the stand and hit algorithm

    SciTech Connect

    Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.

    1994-12-31

    The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving towards the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point. We also present the 'undetected first' rule for determining the hit constraints.
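
    For a linear feasible region {x : Ax <= b}, the hit step can be sketched as follows; the interface and the redundancy interpretation are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def stand_and_hit(A, b, x0, n_directions=1000, rng=None):
    """Count which constraints of {x : A x <= b} are hit first along random
    directions from an interior standing point x0 (a rough sketch of the idea)."""
    rng = rng or np.random.default_rng(0)
    hit_counts = np.zeros(len(b), dtype=int)
    slack = b - A @ x0          # positive for a strictly interior point
    for _ in range(n_directions):
        d = rng.standard_normal(A.shape[1])
        rates = A @ d           # how fast each constraint's slack is consumed
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(rates > 0, slack / rates, np.inf)
        hit_counts[np.argmin(t)] += 1
    return hit_counts           # constraints never hit are candidates for redundancy

# Example: the box 0 <= x <= 1 in 2-D plus a redundant constraint x0 + x1 <= 10.
A = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1], [1, 1]])
b = np.array([1.0, 0, 1, 0, 10])
print(stand_and_hit(A, b, np.array([0.5, 0.5])))
```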

  13. What dominates the X-ray emission of Andromeda at E>20 keV? New constraints from NuSTAR and Swift on a very bright, hard X-ray source

    NASA Astrophysics Data System (ADS)

    Yukita, Mihoko; Ptak, Andrew; Maccarone, Thomas J.; Hornschemeier, Ann E.; Wik, Daniel R.; Pottschmidt, Katja; Antoniou, Vallia; Baganoff, Frederick K.; Lehmer, Bret; Zezas, Andreas; Boyd, Patricia T.; Kennea, Jamie; Page, Kim L.

    2016-04-01

    Thanks to its better sensitivity and spatial resolution, NuSTAR allows us to investigate the E>10 keV properties of nearby galaxies. We now know that starburst galaxies, containing very young stellar populations, have X-ray spectra which drop quickly above 10 keV. We extend our investigation of hard X-ray properties to an older stellar population system, the bulge of M31. The NuSTAR and Swift simultaneous observations reveal a bright hard source dominating the M31 bulge above 20 keV, which is likely to be a counterpart of Swift J0042.6+4112 previously detected (but not classified) in the Swift BAT All-sky Hard X-ray Survey. This source had been classified as an XRB candidate in various Chandra and XMM-Newton studies; however, since it was not clear that it is the counterpart to the strong Swift J0042.6+4112 source at higher energies, the previous E < 10 keV observations did not generate much attention. The NuSTAR and Swift spectra of this source drop quickly at harder energies as observed in sources in starburst galaxies. The X-ray spectral properties of this source are very similar to those of an accreting pulsar; yet, we do not find a pulsation in the NuSTAR data. The existing deep HST images indicate no high mass donors at the location of this source, further suggesting that this source has an intermediate or low mass companion. The most likely scenario for the nature of this source is an X-ray pulsar with an intermediate/low mass companion similar to the Galactic Her X-1 system. We will also discuss other possibilities in more detail.

  14. Improving Steiner trees of a network under multiple constraints

    SciTech Connect

    Krumke, S.O.; Noltemeier, H.; Marathe, M.V.; Ravi, R.; Ravi, S.S.

    1996-07-01

    The authors consider the problem of decreasing the edge weights of a given network so that the modified network has a Steiner tree in which two performance measures are simultaneously optimized. They formulate these problems, referred to as bicriteria network improvement problems, by specifying a budget on the total modification cost, a constraint on one of the performance measures and using the other performance measure as a minimization objective. Network improvement problems are known to be NP-hard even when only one performance measure is considered. The authors present the first polynomial time approximation algorithms for bicriteria network improvement problems. The approximation algorithms are for two pairs of performance measures, namely (diameter, total cost) and (degree, total cost). These algorithms produce solutions which are within a logarithmic factor of the optimum value of the minimization objective while violating the constraints only by a logarithmic factor. The techniques also yield approximation schemes when the given network has bounded treewidth. Many of the approximation results can be extended to more general network design problems.

  15. Constraint Handling in Transmission Network Expansion Planning

    NASA Astrophysics Data System (ADS)

    Mallipeddi, R.; Verma, Ashu; Suganthan, P. N.; Panigrahi, B. K.; Bijwe, P. R.

    Transmission network expansion planning (TNEP) is a very important and complex problem in power systems. Recently, the use of metaheuristic techniques to solve TNEP has gained importance because of their effectiveness, compared with conventional gradient-based methods, in handling inequality constraints and discrete variables. Evolutionary algorithms (EAs) generally perform an unconstrained search and require some additional mechanism to handle constraints. In the EA literature, various constraint handling techniques have been proposed. However, to solve TNEP the penalty function approach is commonly used, while the other constraint handling methods remain untested. In this paper, we evaluate the performance of different constraint handling methods, namely Superiority of Feasible Solutions (SF), Self-adaptive Penalty (SP), E-Constraint (EC), Stochastic Ranking (SR) and the ensemble of constraint handling techniques (ECHT), on TNEP. The potential of the different constraint handling methods and their ensemble is evaluated using an IEEE 24-bus system with and without security constraints.
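
    Of the methods listed above, the Superiority of Feasible Solutions rule is the simplest to state; a minimal sketch of the comparison it uses, assuming inequality constraints g_i(x) <= 0, is given below.

```python
def total_violation(g_values):
    """Sum of constraint violations for inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def sf_better(cand_a, cand_b):
    """Superiority-of-feasible-solutions rule, sketched: feasible beats infeasible;
    two feasible candidates compare by objective; two infeasible candidates compare
    by total violation. Each candidate is (objective_value, list_of_g_values)."""
    fa, ga = cand_a
    fb, gb = cand_b
    va, vb = total_violation(ga), total_violation(gb)
    if va == 0 and vb == 0:
        return fa < fb
    if va == 0 or vb == 0:
        return va == 0
    return va < vb
```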

  16. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help solve this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem that cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), the tabu search algorithm (TSA), the genetic algorithm (GA) and the hierarchical multiplex structure (HIMS). Most of these methods are time consuming and run a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP instances.

  17. Constraint-based scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte

    1991-01-01

    The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocations for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its applications to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.

  18. Constraint-based scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte

    1991-01-01

    The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.

  19. Constraint-based scheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte

    1993-01-01

    The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
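
    A generic constraint-based iterative repair loop of the kind described in these records might look like the sketch below; the constraint and repair-move interfaces are assumptions of ours, and this is not the GERRY implementation itself.

```python
import random

def iterative_repair(schedule, constraints, repair_move, max_iters=1000, rng=random):
    """Generic constraint-based iterative repair: repeatedly pick a violated
    constraint and apply a repair move, accepting it when the number of
    violations does not increase (an illustrative sketch only)."""
    def violations(s):
        return [c for c in constraints if not c(s)]   # each c(s) is True when satisfied

    for _ in range(max_iters):
        violated = violations(schedule)
        if not violated:
            return schedule                  # all hard rules satisfied
        target = rng.choice(violated)
        candidate = repair_move(schedule, target)
        if len(violations(candidate)) <= len(violated):
            schedule = candidate
    return schedule
```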

  20. FATIGUE OF BIOMATERIALS: HARD TISSUES

    PubMed Central

    Arola, D.; Bajaj, D.; Ivancik, J.; Majd, H.; Zhang, D.

    2009-01-01

    The fatigue and fracture behavior of hard tissues are topics of considerable interest today. This special group of organic materials comprises the highly mineralized and load-bearing tissues of the human body, and includes bone, cementum, dentin and enamel. An understanding of their fatigue behavior and the influence of loading conditions and physiological factors (e.g. aging and disease) on the mechanisms of degradation are essential for achieving lifelong health. But there is much more to this topic than the immediate medical issues. There are many challenges to characterizing the fatigue behavior of hard tissues, much of which is attributed to size constraints and the complexity of their microstructure. The relative importance of the constituents on the type and distribution of defects, rate of coalescence, and their contributions to the initiation and growth of cracks, are formidable topics that have not reached maturity. Hard tissues also provide a medium for learning and a source of inspiration in the design of new microstructures for engineering materials. This article briefly reviews fatigue of hard tissues with shared emphasis on current understanding, the challenges and the unanswered questions. PMID:20563239

  1. On Constraints in Assembly Planning

    SciTech Connect

    Calton, T.L.; Jones, R.E.; Wilson, R.H.

    1998-12-17

    Constraints on assembly plans vary depending on product, assembly facility, assembly volume, and many other factors. Assembly costs and other measures to optimize vary just as widely. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their products and production environments. We begin this article by surveying the types of user criteria, both constraints and quality measures, that have been accepted by assembly planning systems to date. The survey is organized along several dimensions, including strategic vs. tactical criteria; manufacturing requirements vs. requirements of the automated planning process itself; and the information needed to assess compliance with each criterion. The latter strongly influences the efficiency of planning. We then focus on constraints. We describe a framework to support a wide variety of user constraints for intuitive and efficient assembly planning. Our framework expresses all constraints on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. Constraints are implemented as simple procedures that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to a number of complex assemblies, including one with 472 parts.

  2. Reformulating Constraints for Compilability and Efficiency

    NASA Technical Reports Server (NTRS)

    Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin

    1992-01-01

    KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability-- Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency-- If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.

  3. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    When a genetic algorithm (GA) is used to solve the winner determination problem (WDP) with large numbers of bids and items drawn from different distributions, the large search space and complex constraints make it easy to produce infeasible solutions, which degrades the efficiency and solution quality of the algorithm. This paper presents an improved Monkey-King Genetic Algorithm (MKGA) that includes three operators: preprocessing, bid insertion and exchange recombination, together with a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA outperforms a standard GA (SGA) in terms of population size and computation. Instances that a traditional branch-and-bound algorithm finds hard to solve can be solved by the improved MKGA with better results.

  4. A Scheduling Algorithm for Replicated Real-Time Tasks

    NASA Technical Reports Server (NTRS)

    Yu, Albert C.; Lin, Kwei-Jay

    1991-01-01

    We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerant requirement. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, the redundancy and masking technique are more appropriate than the rollback techniques which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.

  5. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  6. Wear of hard materials by hard particles

    SciTech Connect

    Hawk, Jeffrey A.

    2003-10-01

    Hard materials, such as WC-Co, boron carbide, titanium diboride and composite carbide made up of Mo2C and WC, have been tested in abrasion and erosion conditions. These hard materials showed negligible wear in abrasion against SiC particles and erosion using Al2O3 particles. The WC-Co materials have the highest wear rate of these hard materials and a very different material removal mechanism. Wear mechanisms for these materials were different for each material with the overall wear rate controlled by binder composition and content and material grain size.

  7. Data assimilation with inequality constraints

    NASA Astrophysics Data System (ADS)

    Thacker, W. C.

    If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
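
    A minimal sketch of the "enforce violated bounds as error-free data and re-solve" idea, for simple bound constraints and a Gaussian background error covariance B, is shown below; the active-set bookkeeping is an illustrative choice, not the paper's exact algorithm.

```python
import numpy as np

def enforce_bounds(xb, B, lower, upper, max_passes=10):
    """Enforce bound constraints on a background state xb with error covariance B
    by treating violated bounds as error-free data (a sketch of the idea above)."""
    n = len(xb)
    active = np.zeros(n, dtype=bool)
    bound = np.zeros(n)
    x = xb.copy()
    for _ in range(max_passes):
        low_viol = (x < lower) & ~active
        up_viol = (x > upper) & ~active
        if not (low_viol.any() or up_viol.any()):
            break
        bound[low_viol] = lower[low_viol]
        bound[up_viol] = upper[up_viol]
        active |= low_viol | up_viol
        idx = np.where(active)[0]
        H = np.zeros((len(idx), n))
        H[np.arange(len(idx)), idx] = 1.0
        # minimum-variance update with exact (error-free) constraints H x = c
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T)
        x = xb + K @ (bound[idx] - H @ xb)
    return x

# Two correlated variables; pinning the first to its lower bound also moves the second.
B = np.array([[1.0, 0.8], [0.8, 1.0]])
print(enforce_bounds(np.array([-0.5, 0.2]), B,
                     lower=np.array([0.0, -np.inf]),
                     upper=np.array([np.inf, np.inf])))
```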

  8. Foundations of support constraint machines.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-02-01

    The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses. PMID:25380338

  9. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  10. Simulation results for the Viterbi decoding algorithm

    NASA Technical Reports Server (NTRS)

    Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.

    1972-01-01

    Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
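
    As a concrete reference point for the hard-decision case, the sketch below implements a rate-1/2, constraint-length-3 convolutional encoder (generators 7 and 5 octal) and a hard-decision Viterbi decoder; the code parameters are chosen for brevity and are not those of the study.

```python
import itertools

G = [0b111, 0b101]            # generator polynomials (7, 5) octal, K = 3
K = 3
N_STATES = 1 << (K - 1)

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3, zero-flushed."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):
        reg = (b << (K - 1)) | state
        out.extend([bin(reg & g).count("1") & 1 for g in G])
        state = reg >> 1
    return out

def viterbi_hard(received):
    """Hard-decision Viterbi decoding of the code above."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state, b in itertools.product(range(N_STATES), [0, 1]):
            if metric[state] == INF:
                continue
            reg = (b << (K - 1)) | state
            expected = [bin(reg & g).count("1") & 1 for g in G]
            m = metric[state] + sum(x != y for x, y in zip(r, expected))
            nxt = reg >> 1
            if m < new_metric[nxt]:
                new_metric[nxt] = m
                new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    # the flush bits force the encoder back to state 0
    return paths[0][:-(K - 1)]

msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode(msg)
coded[3] ^= 1                                  # inject a single channel error
print(viterbi_hard(coded) == msg)              # the decoder corrects it
```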

  11. Network interdiction with budget constraints

    SciTech Connect

    Santhi, Nankakishore; Pan, Feng

    2009-01-01

    Several scenarios exist in the modern interconnected world which call for efficient network interdiction algorithms. Applications are varied, including computer network security, prevention of spreading of Internet worms, policing international smuggling networks, controlling spread of diseases and optimizing the operation of large public energy grids. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs. Many of these questions turn out to be computationally hard to tackle. We present a particularly interesting practical form of the interdiction question which we show to be computationally tractable. A polynomial time algorithm is then presented for this problem.

  12. Parallel-Batch Scheduling and Transportation Coordination with Waiting Time Constraint

    PubMed Central

    Gong, Hua; Chen, Daheng; Xu, Ke

    2014-01-01

    This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing, with a waiting time constraint. The orders located at the different suppliers are transported by some vehicles to a manufacturing facility for further processing. One vehicle can load only one order in one shipment. Each order arriving at the facility must be processed within the limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule to minimize the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a dynamic programming algorithm in pseudopolynomial time is provided to prove its ordinary NP-hardness. An optimal algorithm in polynomial time is presented to solve a special case with equal processing times and equal transportation times for each order. PMID:24883385

  13. Object-oriented algorithmic laboratory for ordering sparse matrices

    SciTech Connect

    Kumfert, G K

    2000-05-01

    We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
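
    As an example of the envelope/wavefront-reduction problem mentioned above, the sketch below applies SciPy's reverse Cuthill-McKee ordering (one classical heuristic, not the algorithms developed in this work) to a random symmetric sparsity pattern and compares the matrix bandwidth before and after reordering.

```python
import numpy as np
from scipy.sparse import random as sprandom, csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Maximum |i - j| over the nonzeros of a sparse matrix."""
    coo = A.tocoo()
    return int(np.abs(coo.row - coo.col).max())

# Envelope-reducing ordering on a random symmetric sparse pattern.
A = sprandom(200, 200, density=0.02, random_state=0, format="csr")
A = csr_matrix(A + A.T)                        # symmetrize the pattern
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_perm = A[perm, :][:, perm]
print(bandwidth(A), bandwidth(A_perm))
```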

  14. Ordering of hard particles between hard walls

    NASA Astrophysics Data System (ADS)

    Chrzanowska, A.; Teixeira, P. I. C.; Ehrentraut, H.; Cleaver, D. J.

    2001-05-01

    The structure of a fluid of hard Gaussian overlap particles of elongation κ = 5, confined between two hard walls, has been calculated from density-functional theory and Monte Carlo simulations. By using the exact expression for the excluded volume kernel (Velasco E and Mederos L 1998 J. Chem. Phys. 109 2361) and solving the appropriate Euler-Lagrange equation entirely numerically, we have been able to extend our theoretical predictions into the nematic phase, which had up till now remained relatively unexplored due to the high computational cost. Simulation reveals a rich adsorption behaviour with increasing bulk density, which is described semi-quantitatively by the theory without any adjustable parameters.

  15. Constraint Embedding for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan

    2009-01-01

    This paper describes a constraint embedding approach for the handling of local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert the system into tree-topology systems. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics and avoiding the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms, the extensions for handling embedded constraints, and concludes with some examples of such constraints.

  16. Constraint-based interactive assembly planning

    SciTech Connect

    Jones, R.E.; Wilson, R.H.; Calton, T.L.

    1997-03-01

    The constraints on assembly plans vary depending on the product, assembly facility, assembly volume, and many other factors. This paper describes the principles and implementation of a framework that supports a wide variety of user-specified constraints for interactive assembly planning. Constraints from many sources can be expressed on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. All constraints are implemented as filters that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Replanning is fast enough to enable a natural plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to several complex assemblies. 12 refs., 2 figs., 3 tabs.
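
    The filter style of constraint described above can be sketched in a few lines; the function and constraint names are illustrative, not the system's API.

```python
def feasible_operations(proposed_ops, constraint_filters):
    """Keep only the assembly operations accepted by every user constraint,
    mirroring the accept/reject filter style described above."""
    return [op for op in proposed_ops
            if all(accepts(op) for accepts in constraint_filters)]
```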

  17. Using constraints to model disjunctions in rule-based reasoning

    SciTech Connect

    Liu, Bing; Jaffar, Joxan

    1996-12-31

    Rule-based systems have long been widely used for building expert systems to perform practical knowledge-intensive tasks. One important issue that has not been addressed satisfactorily is disjunction, and this significantly limits their problem-solving power. In this paper, we show that some important types of disjunction can be modeled with Constraint Satisfaction Problem (CSP) techniques, employing their simple representation schemes and efficient algorithms. A key idea is that disjunctions are represented as constraint variables, relations among disjunctions are represented as constraints, and rule chaining is integrated with constraint solving. In this integration, a constraint variable or a constraint is regarded as a special fact, and rules can be written with constraints and information about constraints. Chaining of rules may trigger constraint propagation, and constraint propagation may cause firing of rules. A prototype system (called CFR) based on this idea has been implemented.

  18. Extensions of output variance constrained controllers to hard constraints

    NASA Technical Reports Server (NTRS)

    Skelton, R.; Zhu, G.

    1989-01-01

    Covariance Controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upper bounds on the H infinity, L infinity, and L (sub 2) norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples. These results are illustrated for continuous and discrete time systems.

  19. Hardness Tester for Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hauser, D. L.; Buras, D. F.; Corbin, J. M.

    1987-01-01

    Rubber-hardness tester modified for use on rigid polyurethane foam. Provides objective basis for evaluation of improvements in foam manufacturing and inspection. Typical acceptance criterion requires minimum hardness reading of 80 on modified tester. With adequate correlation tests, modified tester used to measure indirectly tensile and compressive strengths of foam.

  20. Session: Hard Rock Penetration

    SciTech Connect

    Tennyson, George P. Jr.; Dunn, James C.; Drumheller, Douglas S.; Glowka, David A.; Lysne, Peter

    1992-01-01

    This session at the Geothermal Energy Program Review X: Geothermal Energy and the Utility Market consisted of five presentations: ''Hard Rock Penetration - Summary'' by George P. Tennyson, Jr.; ''Overview - Hard Rock Penetration'' by James C. Dunn; ''An Overview of Acoustic Telemetry'' by Douglas S. Drumheller; ''Lost Circulation Technology Development Status'' by David A. Glowka; ''Downhole Memory-Logging Tools'' by Peter Lysne.

  1. Adiabatic Quantum Programming: Minor Embedding With Hard Faults

    SciTech Connect

    Klymko, Christine F; Sullivan, Blair D; Humble, Travis S

    2013-01-01

    Adiabatic quantum programming defines the time-dependent mapping of a quantum algorithm into the hardware or logical fabric. An essential programming step is the embedding of problem-specific information into the logical fabric to define the quantum computational transformation. We present algorithms for embedding arbitrary instances of the adiabatic quantum optimization algorithm into a square lattice of specialized unit cells. Our methods are shown to be extensible in fabric growth, linear in time, and quadratic in logical footprint. In addition, we provide methods for accommodating hard faults in the logical fabric without invoking approximations to the original problem. These hard fault-tolerant embedding algorithms are expected to prove useful for benchmarking the adiabatic quantum optimization algorithm on existing quantum logical hardware. We illustrate this versatility through numerical studies of embeddability versus hard fault rates in square lattices of complete bipartite unit cells.

  2. The hard metal diseases.

    PubMed

    Cugell, D W

    1992-06-01

    Hard metal is a mixture of tungsten carbide and cobalt, to which small amounts of other metals may be added. It is widely used for industrial purposes whenever extreme hardness and high temperature resistance are needed, such as for cutting tools, oil well drilling bits, and jet engine exhaust ports. Cobalt is the component of hard metal that can be a health hazard. Respiratory diseases occur in workers exposed to cobalt--either in the production of hard metal, from machining hard metal parts, or from other sources. Adverse pulmonary reactions include asthma, hypersensitivity pneumonitis, and interstitial fibrosis. A peculiar, almost unique form of lung fibrosis, giant cell interstitial pneumonia, is closely linked with cobalt exposure. PMID:1511554

  3. Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Tein, Lim Huai; Ramli, Razamin

    2014-12-01

    Over the years, nurse scheduling has been a persistent problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. Current undesirable work schedules are partly responsible for that working condition. Fundamentally, there is a lack of complementarity between the head nurse's obligations and the nurses' needs. In particular, because nurse preferences weigh heavily, the challenge in nurse scheduling is to encourage tolerant behavior between both parties during shift assignment in real working scenarios. Flexibility in shift assignment is hard to achieve when diverse nurse requests must be satisfied while imperative ward coverage is upheld. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of the basic EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. The problem involves three types of constraints, namely hard, semi-hard and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling and fitness computation as well as flexibility in the search, corresponding to the principles of exploration and exploitation.
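
    One common way to let hard, semi-hard and soft constraints coexist in an EA fitness function is a tiered penalty; the sketch below is an assumption about how such a scheme might be set up, not the authors' formulation.

```python
def roster_fitness(roster, hard_cs, semi_hard_cs, soft_cs,
                   w_hard=1000.0, w_semi=100.0, w_soft=1.0):
    """Illustrative penalty-based fitness for a nurse roster: hard constraints
    dominate semi-hard ones, which dominate soft preferences. Each constraint
    is a callable returning a violation count for the given roster."""
    penalty = 0.0
    penalty += w_hard * sum(c(roster) for c in hard_cs)
    penalty += w_semi * sum(c(roster) for c in semi_hard_cs)
    penalty += w_soft * sum(c(roster) for c in soft_cs)
    return -penalty          # higher fitness means fewer (weighted) violations
```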

  4. Data Structures and Algorithms.

    ERIC Educational Resources Information Center

    Wirth, Niklaus

    1984-01-01

    Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)

  5. The Algorithm Selection Problem

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)

    1994-01-01

    Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.

  6. Boolean constraint satisfaction problems for reaction networks

    NASA Astrophysics Data System (ADS)

    Seganti, A.; De Martino, A.; Ricci-Tersenghi, F.

    2013-09-01

    We define and study a class of (random) Boolean constraint satisfaction problems representing minimal feasibility constraints for networks of chemical reactions. The constraints we consider encode, respectively, for hard mass-balance conditions (where the consumption and production fluxes of each chemical species are matched) and for soft mass-balance conditions (where a net production of compounds is in principle allowed). We solve these constraint satisfaction problems under the Bethe approximation and derive the corresponding belief propagation equations, which involve eight different messages. The statistical properties of ensembles of random problems are studied via the population dynamics methods. By varying a chemical potential attached to the activity of reactions, we find first-order transitions and strong hysteresis, suggesting a non-trivial structure in the space of feasible solutions.

  7. Organizing Your Hard Disk.

    ERIC Educational Resources Information Center

    Stocker, H. Robert; Hilton, Thomas S. E.

    1991-01-01

    Suggests strategies that make hard disk organization easy and efficient, such as making, changing, and removing directories; grouping files by subject; naming files effectively; backing up efficiently; and using PATH. (JOW)

  8. A Space-Bounded Anytime Algorithm for the Multiple Longest Common Subsequence Problem

    PubMed Central

    Yang, Jiaoyun; Xu, Yun; Shang, Yi; Chen, Guoliang

    2014-01-01

    The multiple longest common subsequence (MLCS) problem, related to the identification of sequence similarity, is an important problem in many fields. As an NP-hard problem, its exact algorithms have difficulty in handling large-scale data, and time- and space-efficient algorithms are required in real-world applications. To deal with time constraints, anytime algorithms have been proposed to generate good solutions within a reasonable time. However, there exists little work on space-efficient MLCS algorithms. In this paper, we formulate the MLCS problem as a graph search problem and present two space-efficient anytime MLCS algorithms, SA-MLCS and SLA-MLCS. SA-MLCS uses an iterative beam widening search strategy to reduce space usage during the iterative process of finding better solutions. Based on SA-MLCS, SLA-MLCS, a space-bounded algorithm, is developed to avoid space usage from exceeding available memory. SLA-MLCS uses a replacing strategy when SA-MLCS reaches a given space bound. Experimental results show SA-MLCS and SLA-MLCS use an order of magnitude less space and time than the state-of-the-art approximate algorithm MLCS-APP while finding better solutions. Compared to the state-of-the-art anytime algorithm Pro-MLCS, SA-MLCS and SLA-MLCS can solve an order of magnitude larger size instances. Furthermore, SLA-MLCS can find much better solutions than SA-MLCS on large size instances. PMID:25400485
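
    A much-simplified sketch of the iterative beam-widening idea follows: a beam search over MLCS extension points is rerun with a doubling beam width, keeping the best common subsequence found so far. The scoring heuristic and data structures are illustrative choices, not those of SA-MLCS.

```python
def mlcs_beam(seqs, max_width=64):
    """Anytime MLCS via iterative beam widening (a simplified sketch)."""
    alphabet = set(seqs[0]).intersection(*seqs[1:])
    # successor tables: nxt[k][i][c] = smallest j >= i with seqs[k][j] == c
    nxt = []
    for s in seqs:
        table = [dict() for _ in range(len(s) + 1)]
        for i in range(len(s) - 1, -1, -1):
            table[i] = dict(table[i + 1])
            table[i][s[i]] = i
        nxt.append(table)

    best = ""
    width = 1
    while width <= max_width:
        beam = [((0,) * len(seqs), "")]
        while beam:
            children = []
            for pos, sub in beam:
                for c in alphabet:
                    hits = [nxt[k][p].get(c) for k, p in enumerate(pos)]
                    if all(h is not None for h in hits):
                        children.append((tuple(h + 1 for h in hits), sub + c))
            if not children:
                break
            # keep the `width` children with the most remaining room
            children.sort(key=lambda x: (-len(x[1]),
                                         -min(len(s) - p for s, p in zip(seqs, x[0]))))
            beam = children[:width]
            for _, sub in beam:
                if len(sub) > len(best):
                    best = sub
        width *= 2                              # widen the beam and retry
    return best

print(mlcs_beam(["ACTAGCTA", "TACGCTTA", "AAGCTATC"]))
```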

  9. Genetic algorithm-based neural fuzzy decision tree for mixed scheduling in ATM networks.

    PubMed

    Lin, Chin-Teng; Chung, I-Fang; Pu, Her-Chang; Lee, Tsern-Huei; Chang, Jyh-Yeong

    2002-01-01

    Future broadband integrated services networks based on asynchronous transfer mode (ATM) technology are expected to support multiple types of multimedia information with diverse statistical characteristics and quality of service (QoS) requirements. To meet these requirements, efficient scheduling methods are important for traffic control in ATM networks. Among general scheduling schemes, the rate monotonic algorithm is simple enough to be used in high-speed networks, but does not attain the high system utilization of the deadline driven algorithm. However, the deadline driven scheme is computationally complex and hard to implement in hardware. The mixed scheduling algorithm is a combination of the rate monotonic algorithm and the deadline driven algorithm; thus it can provide most of the benefits of these two algorithms. In this paper, we use the mixed scheduling algorithm to achieve high system utilization under the hardware constraint. Because there is no analytic method for schedulability testing of mixed scheduling, we propose a genetic algorithm-based neural fuzzy decision tree (GANFDT) to realize it in a real-time environment. The GANFDT combines a GA and a neural fuzzy network into a binary classification tree. This approach also exploits the power of the classification tree. Simulation results show that the GANFDT provides an efficient way of carrying out mixed scheduling in ATM networks. PMID:18244889
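
    For reference, the classical schedulability tests behind the two base policies mentioned above (assuming periodic tasks with deadlines equal to periods) can be written as follows; this is background material, not part of the GANFDT method.

```python
def rm_utilization_bound(n):
    """Liu & Layland sufficient utilization bound for rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def schedulable(tasks, policy="RM"):
    """tasks: list of (compute_time, period). The deadline-driven (EDF) policy is
    schedulable iff U <= 1; RM is guaranteed schedulable if U <= n(2^(1/n) - 1)."""
    u = sum(c / t for c, t in tasks)
    if policy == "EDF":
        return u <= 1.0
    return u <= rm_utilization_bound(len(tasks))

print(schedulable([(1, 4), (2, 6), (1, 10)], "RM"),
      schedulable([(1, 4), (2, 6), (1, 10)], "EDF"))
```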

  10. How 'hard' are hard-rock deformations?

    NASA Astrophysics Data System (ADS)

    van Loon, A. J.

    2003-04-01

    The study of soft-rock deformations has received increasing attention during the past two decades, and much progress has been made in the understanding of their genesis. It is also recognized now that soft-rock deformations—which show a wide variety in size and shape—occur frequently in sediments deposited in almost all types of environments. In spite of this, deformations occurring in lithified rocks are still relatively rarely attributed to sedimentary or early-diagenetic processes. Particularly faults in hard rocks are still commonly ascribed to tectonics, commonly without a discussion about a possible non-tectonic origin at a stage that the sediments were still unlithified. Misinterpretations of both the sedimentary and the structural history of hard-rock successions may result from the negligence of a possible soft-sediment origin of specific deformations. It is therefore suggested that a re-evaluation of these histories, keeping the present-day knowledge about soft-sediment deformations in mind, may give new insights into the geological history of numerous sedimentary successions in which the deformations have not been studied from both a sedimentological and a structural point of view.

  11. Linear-time algorithms for scheduling on parallel processors

    SciTech Connect

    Monma, C.L.

    1982-01-01

    Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints. This improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints, the number-of-late-tasks problem without precedence constraints, and the one machine maximum lateness problem with general precedence constraints. 5 references.

  12. Constraint monitoring in TOSCA

    NASA Technical Reports Server (NTRS)

    Beck, Howard

    1992-01-01

    The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.

  13. A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints

    SciTech Connect

    Xu, You; Chen, Yixin

    2008-06-28

    We present a novel parallel framework for solving large-scale continuous nonlinear optimization problems based on constraint partitioning. The framework distributes constraints and variables to parallel processors and uses an existing solver to handle the partitioned subproblems. In contrast to most previous decomposition methods that require either separability or convexity of constraints, our approach is based on a new constraint partitioning theory and can handle nonconvex problems with inseparable global constraints. We also propose a hypergraph partitioning method to recognize the problem structure. Experimental results show that the proposed parallel algorithm can efficiently solve some difficult test cases.

  14. Hard tissue laser procedures.

    PubMed

    Gimbel, C B

    2000-10-01

    A more conservative, less invasive treatment of the carious lesion has intrigued researchers and clinicians for decades. With over 170 million restorations placed worldwide each year, many of which could be treated using a laser, there exists an increasing need for understanding hard tissue laser procedures. A historical review of past scientific and clinical hard tissue research, biophysics, and histology is presented. A complete review of present applications and procedures along with their capabilities and limitations will give the clinician a better understanding. Clinical case studies, along with guidelines for tooth preparation and hard tissue laser applications and technological advances for diagnosis and treatment, will give the clinician a look into the future. PMID:11048281

  15. Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems

    PubMed Central

    Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Extensive simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016

  16. Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems.

    PubMed

    Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Extensive simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
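
    The power-allocation principle quoted above is the usual water-filling condition on a user's assigned subchannels; a minimal sketch is given below, where the function name and the example numbers are illustrative.

```python
import numpy as np

def waterfill(total_power, cnr):
    """Water-filling across one user's subchannels: choose p_k >= 0 so that
    p_k + 1/cnr_k is equal (the 'water level') on all active subchannels,
    subject to sum(p_k) = total_power."""
    cnr = np.asarray(cnr, dtype=float)
    inv = 1.0 / cnr
    order = np.argsort(inv)                    # best channels first
    for m in range(len(cnr), 0, -1):
        active = order[:m]
        level = (total_power + inv[active].sum()) / m
        if level >= inv[active].max():         # all active powers non-negative
            p = np.zeros_like(cnr)
            p[active] = level - inv[active]
            return p
    return np.zeros_like(cnr)

print(waterfill(2.0, [10.0, 2.0, 0.5]))        # the weakest subchannel may get no power
```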

  17. Running in Hard Times

    ERIC Educational Resources Information Center

    Berry, John N., III

    2009-01-01

    Roberta Stevens and Kent Oliver are campaigning hard for the presidency of the American Library Association (ALA). Stevens is outreach projects and partnerships officer at the Library of Congress. Oliver is executive director of the Stark County District Library in Canton, Ohio. They have debated, discussed, and posted web sites, Facebook pages,…

  18. CSI: Hard Drive

    ERIC Educational Resources Information Center

    Sturgeon, Julie

    2008-01-01

    Acting on information from students who reported seeing a classmate looking at inappropriate material on a school computer, school officials used forensics software to plunge the depths of the PC's hard drive, searching for evidence of improper activity. Images were found in a deleted Internet Explorer cache as well as deleted file space.…

  19. Budgeting in Hard Times.

    ERIC Educational Resources Information Center

    Parrino, Frank M.

    2003-01-01

    Interviews with school board members and administrators produced a list of suggestions for balancing a budget in hard times. Among these are changing calendars and schedules to reduce heating and cooling costs; sharing personnel; rescheduling some extracurricular activities; and forming cooperative agreements with other districts. (MLF)

  20. Diffractive hard scattering

    SciTech Connect

    Berger, E.L.; Collins, J.C.; Soper, D.E.; Sterman, G.

    1986-03-01

    I discuss events in high energy hadron collisions that contain a hard scattering, in the sense that very heavy quarks or high P/sub T/ jets are produced, yet are diffractive, in the sense that one of the incident hadrons is scattered with only a small energy loss. 8 refs.

  1. Quantum defragmentation algorithm

    SciTech Connect

    Burgarth, Daniel; Giovannetti, Vittorio

    2010-08-15

    In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.

  2. Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search

    PubMed Central

    Rajalakshmi, S.; Maguteeswaran, R.

    2015-01-01

    A hybrid computational intelligence algorithm, formed by integrating the salient features of two different heuristic techniques, is proposed to solve the multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is a difficult problem: an optimum route must satisfy a variety of necessary constraints in a MANET, and the problem is NP-hard owing to the constantly varying topology of MANETs. A solution technique that addresses these challenges is therefore needed. This paper proposes a hybrid algorithm that modifies the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, where the candidates learn from diversified search regions. The CSA thus acts as the main search procedure guided by the updating mechanism derived from DE, and the result is called tuned CSA (TCSA). Numerical simulations on MANETs demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with existing techniques from the literature, establishing the superiority of the proposed method. PMID:26495429
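
    The record describes TCSA only at the level of its position-updating rule, so the sketch below is a generic continuous-optimization toy rather than the routing algorithm itself: a cuckoo-search-style population whose update is the DE/rand/1 mutation x_r1 + F*(x_r2 - x_r3), with a fraction pa of the worst nests abandoned each iteration. The test function and all parameter values are illustrative assumptions.

        import numpy as np

        def tcsa_like(objective, dim, n_nests=25, iters=200, F=0.5, pa=0.25, seed=0):
            """Toy cuckoo-search variant whose position update is the
            DE/rand/1 mutation, in the spirit of the CSA/DE hybrid."""
            rng = np.random.default_rng(seed)
            nests = rng.uniform(-5, 5, size=(n_nests, dim))
            fit = np.array([objective(x) for x in nests])
            for _ in range(iters):
                for i in range(n_nests):
                    r1, r2, r3 = rng.choice(n_nests, size=3, replace=False)
                    trial = nests[r1] + F * (nests[r2] - nests[r3])   # DE-style step
                    f_trial = objective(trial)
                    if f_trial < fit[i]:                              # greedy replacement
                        nests[i], fit[i] = trial, f_trial
                # abandon a fraction pa of the worst nests (cuckoo-search step)
                worst = np.argsort(fit)[-int(pa * n_nests):]
                nests[worst] = rng.uniform(-5, 5, size=(len(worst), dim))
                fit[worst] = [objective(x) for x in nests[worst]]
            best = int(np.argmin(fit))
            return nests[best], fit[best]

        # example: minimize the sphere function in five dimensions
        x_best, f_best = tcsa_like(lambda v: float(np.sum(v**2)), dim=5)
        print(x_best, f_best)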

  4. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    SciTech Connect

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be an NP-hard problem. In a DHC system, task execution time depends on the machine to which the task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. A randomized task ordering is effectively a topological sort whose outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
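
    The randomized task-ordering step can be read as Kahn's topological sort with the next task drawn at random from the currently ready set, so that any precedence-respecting order can occur. The sketch below illustrates only that step, not the RS heuristic that maps an order to a schedule; the task and edge names are made up.

        import random
        from collections import defaultdict

        def random_topological_order(tasks, edges, rng=random.Random(0)):
            """Return a random task order consistent with the precedence
            edges (u, v), meaning u must run before v."""
            indeg = {t: 0 for t in tasks}
            succ = defaultdict(list)
            for u, v in edges:
                succ[u].append(v)
                indeg[v] += 1
            ready = [t for t in tasks if indeg[t] == 0]
            order = []
            while ready:
                t = ready.pop(rng.randrange(len(ready)))  # random pick among ready tasks
                order.append(t)
                for v in succ[t]:
                    indeg[v] -= 1
                    if indeg[v] == 0:
                        ready.append(v)
            if len(order) != len(tasks):
                raise ValueError("precedence graph contains a cycle")
            return order

        print(random_topological_order(["a", "b", "c", "d"],
                                       [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))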

  5. Work Hard. Be Nice

    ERIC Educational Resources Information Center

    Mathews, Jay

    2009-01-01

    In 1994, fresh from a two-year stint with Teach for America, Mike Feinberg and Dave Levin inaugurated the Knowledge Is Power Program (KIPP) in Houston with an enrollment of 49 5th graders. By this Fall, 75 KIPP schools will be up and running, setting children from poor and minority families on a path to college through a combination of hard work,…

  6. Hard Times Hit Schools

    ERIC Educational Resources Information Center

    McNeil, Michele

    2008-01-01

    Hard-to-grasp dollar amounts are forcing real cuts in K-12 education at a time when the cost of fueling buses and providing school lunches is increasing and the demands of the federal No Child Left Behind Act still loom larger over states and districts. "One of the real challenges is to continue progress in light of the economy," said Gale Gaines,…

  7. SUPER HARD SURFACED POLYMERS

    SciTech Connect

    Mansur, Louis K; Bhattacharya, R; Blau, Peter Julian; Clemons, Art; Eberle, Cliff; Evans, H B; Janke, Christopher James; Jolly, Brian C; Lee, E H; Leonard, Keith J; Trejo, Rosa M; Rivard, John D

    2010-01-01

    High energy ion beam surface treatments were applied to a selected group of polymers. Of the six materials in the present study, four were thermoplastics (polycarbonate, polyethylene, polyethylene terephthalate, and polystyrene) and two were thermosets (epoxy and polyimide). The particular epoxy evaluated in this work is one of the resins used in formulating fiber reinforced composites for military helicopter blades. Measures of mechanical properties of the near surface regions were obtained by nanoindentation hardness and pin on disk wear. Attempts were also made to measure erosion resistance by particle impact. All materials were hardness tested. Pristine materials were very soft, having values in the range of approximately 0.1 to 0.5 GPa. Ion beam treatment increased hardness by up to 50 times compared to untreated materials. For reference, all materials were hardened to values higher than those typical of stainless steels. Wear tests were carried out on three of the materials, PET, PI and epoxy. On the ion beam treated epoxy no wear could be detected, whereas the untreated material showed significant wear.

  8. Direct handling of equality constraints in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Renaud, John E.; Gabriele, Gary A.

    1990-01-01

    In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present work, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.

  9. Highly irregular quantum constraints

    NASA Astrophysics Data System (ADS)

    Klauder, John R.; Little, J. Scott

    2006-05-01

    Motivated by a recent paper of Louko and Molgado, we consider a simple system with a single classical constraint R(q) = 0. If q_l denotes a generic solution to R(q) = 0, our examples include cases where R'(q_l) ≠ 0 (regular constraint) and R'(q_l) = 0 (irregular constraint) of varying order, as well as the case where R(q) = 0 on an interval, such as a ≤ q ≤ b. Quantization of irregular constraints is normally not considered; however, using the projection operator formalism we provide a satisfactory quantization which reduces to the constrained classical system when ℏ → 0. It is noteworthy that irregular constraints change the observable aspects of a theory as compared to strictly regular constraints.

  10. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded-inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), which combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that has allowed connections with approximate algorithms from statistical physics. IJGP is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms, on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well-known classes of constraint propagation schemes. PMID:20740057

  11. Constructive neural network learning algorithms

    SciTech Connect

    Parekh, R.; Yang, Jihoon; Honavar, V.

    1996-12-31

    Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, and the barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
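
    Of the stable perceptron variants listed, the pocket algorithm is the easiest to illustrate: run ordinary perceptron updates but keep ("pocket") the weight vector that has classified the most training points correctly so far. The sketch below is a minimal generic version, assuming numpy arrays and labels in {-1, +1}; it is not tied to any particular constructive architecture from the record.

        import numpy as np

        def pocket_perceptron(X, y, epochs=100, seed=0):
            """Pocket algorithm: perceptron updates plus a 'pocket' holding
            the best weight vector found so far.  X: (n, d), y in {-1, +1}."""
            rng = np.random.default_rng(seed)
            Xb = np.hstack([X, np.ones((len(X), 1))])      # absorb the bias term
            w = np.zeros(Xb.shape[1])
            pocket_w, pocket_score = w.copy(), np.sum(np.sign(Xb @ w) == y)
            for _ in range(epochs):
                for i in rng.permutation(len(Xb)):
                    if np.sign(Xb[i] @ w) != y[i]:          # misclassified: update
                        w = w + y[i] * Xb[i]
                        score = np.sum(np.sign(Xb @ w) == y)
                        if score > pocket_score:            # better than the pocket
                            pocket_w, pocket_score = w.copy(), score
            return pocket_w

        X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 1.0]])
        y = np.array([-1, -1, 1, 1])
        print(pocket_perceptron(X, y))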

  12. On the Complexity of Constraint-Based Theory Extraction

    NASA Astrophysics Data System (ADS)

    Boley, Mario; Gärtner, Thomas

    In this paper we rule out output polynomial listing algorithms for the general problem of discovering theories for a conjunction of monotone and anti-monotone constraints as well as for the particular subproblem in which all constraints are frequency-based. For the general problem we prove a concrete exponential lower time bound that holds for any correct algorithm and even in cases in which the size of the theory as well as the only previous bound are constant. For the case of frequency-based constraints our result holds unless P = NP. These findings motivate further research to identify tractable subproblems and justify approaches with exponential worst case complexity.

  13. Ultrasonic characterization of materials hardness

    PubMed

    Badidi Bouda A; Benchaala; Alem

    2000-03-01

    In this paper, an experimental technique has been developed to measure velocities and attenuation of ultrasonic waves through a steel with a variable hardness. A correlation between ultrasonic measurements and steel hardness was investigated. PMID:10829663

  14. Quiet planting in the locked constraints satisfaction problems

    SciTech Connect

    Zdeborova, Lenka; Krzakala, Florent

    2009-01-01

    We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and first and second moment considerations; in particular the connection with the reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region instances have with high probability a single satisfying assignment.

  15. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning of a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the objective. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. At last, a Pareto optimal set is acquired. Evaluation functions based on validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locating fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and
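
    The pseudo-random-proportional rule that the authors modify is a standard Ant Colony System ingredient. The sketch below shows only the classical form of that rule (exploit the best pheromone-heuristic product with probability q0, otherwise sample in proportion to it); the paper's modified rule, reachability matrix, and multi-objective bookkeeping are not reproduced, and the numbers are illustrative.

        import numpy as np

        def choose_next_node(tau, eta, q0=0.9, beta=2.0, rng=np.random.default_rng(0)):
            """Classical pseudo-random-proportional transition rule:
            tau holds pheromone values and eta heuristic desirabilities
            for the currently feasible nodes."""
            attractiveness = np.asarray(tau) * np.asarray(eta) ** beta
            if rng.random() < q0:
                return int(np.argmax(attractiveness))          # exploitation
            probs = attractiveness / attractiveness.sum()      # biased exploration
            return int(rng.choice(len(probs), p=probs))

        print(choose_next_node(tau=[0.5, 1.0, 0.2], eta=[2.0, 1.0, 3.0]))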

  16. Monte Carlo algorithm for least dependent non-negative mixture decomposition.

    PubMed

    Astakhov, Sergey A; Stögbauer, Harald; Kraskov, Alexander; Grassberger, Peter

    2006-03-01

    We propose a simulated annealing algorithm (stochastic non-negative independent component analysis, SNICA) for blind decomposition of linear mixtures of non-negative sources with non-negative coefficients. The demixing is based on a Metropolis-type Monte Carlo search for least dependent components, with the mutual information between recovered components as a cost function and their non-negativity as a hard constraint. Elementary moves are shears in two-dimensional subspaces and rotations in three-dimensional subspaces. The algorithm is geared at decomposing signals whose probability densities peak at zero, the case typical in analytical spectroscopy and multivariate curve resolution. The decomposition performance on large samples of synthetic mixtures and experimental data is much better than that of traditional blind source separation methods based on principal component analysis (MILCA, FastICA, RADICAL) and chemometrics techniques (SIMPLISMA, ALS, BTEM). PMID:16503615
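
    The search described above is a Metropolis-type annealing in which non-negativity acts as a hard constraint. The sketch below shows only that acceptance pattern in generic form: any proposal with a negative entry is rejected outright, otherwise the usual exp(-delta/T) rule applies. The cost and proposal functions here are toy placeholders, not the SNICA mutual-information estimator or its shear and rotation moves.

        import math
        import random

        def anneal_step(state, cost, propose, temperature, rng):
            """One Metropolis step with a hard non-negativity constraint."""
            candidate = propose(state, rng)
            if any(x < 0 for x in candidate):      # hard constraint: reject outright
                return state
            delta = cost(candidate) - cost(state)
            if delta <= 0 or rng.random() < math.exp(-delta / temperature):
                return candidate
            return state

        # toy usage: minimize a sum of squares over non-negative 3-vectors
        rng = random.Random(0)
        propose = lambda s, r: [x + r.uniform(-0.1, 0.1) for x in s]
        state = [1.0, 2.0, 0.5]
        for t in range(1000):
            state = anneal_step(state, cost=lambda s: sum(x * x for x in s),
                                propose=propose, temperature=1.0 / (1 + t), rng=rng)
        print(state)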

  17. On Reformulating Planning as Dynamic Constraint Satisfaction

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)

    2000-01-01

    In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.

  18. Hard-pan soils - Management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hard pans, hard layers, or compacted horizons, either surface or subsurface, are universal problems that limit crop production. Hard layers can be caused by traffic or soil genetic properties that result in horizons with high density or cemented soil particles; these horizons have elevated penetrati...

  19. How Do You Like Your Equilibrium Selection Problems? Hard, or Very Hard?

    NASA Astrophysics Data System (ADS)

    Goldberg, Paul W.

    The PPAD-completeness of Nash equilibrium computation is taken as evidence that the problem is computationally hard in the worst case. This evidence is necessarily rather weak, in the sense that PPAD is only known to lie "between P and NP", and there is not a strong prospect of showing it to be as hard as NP. Of course, the problem of finding an equilibrium that has certain sought-after properties should be at least as hard as finding an unrestricted one; thus we have, for example, the NP-hardness of finding equilibria that are socially optimal (or indeed that have various efficiently checkable properties), the results of Gilboa and Zemel [6], and Conitzer and Sandholm [3]. In the talk I will give an overview of this topic, and a summary of recent progress showing that the equilibria that are found by the Lemke-Howson algorithm, as well as related homotopy methods, are PSPACE-complete to compute. Thus we show that there are no short cuts to the Lemke-Howson solutions, subject only to the hardness of PSPACE. I mention some open problems.

  20. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
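
    One way to see the relaxation is to write the relaxed constraints directly in a convex-optimization modeling tool. The sketch below is a deliberately simplified, constant-mass, forward-Euler powered-descent problem using a slack variable Gamma: the thrust magnitude satisfies ||T|| <= Gamma, the magnitude bounds and the pointing cone act on Gamma, and the fuel proxy sum(Gamma)*dt is minimized. It assumes the cvxpy and numpy packages, and every numerical value is made up for illustration rather than taken from the flight software.

        import numpy as np
        import cvxpy as cp

        # illustrative, simplified instance (constant mass, Euler dynamics)
        N, dt, m = 60, 1.0, 1900.0
        g = np.array([0.0, 0.0, -3.71])          # Mars gravity [m/s^2]
        rho1, rho2 = 2000.0, 12000.0             # thrust magnitude bounds [N]
        theta_max = np.deg2rad(30.0)             # pointing half-angle about n_hat
        n_hat = np.array([0.0, 0.0, 1.0])
        r0, v0 = np.array([0.0, 0.0, 1000.0]), np.zeros(3)

        r, v = cp.Variable((3, N + 1)), cp.Variable((3, N + 1))
        T, Gamma = cp.Variable((3, N)), cp.Variable(N)

        cons = [r[:, 0] == r0, v[:, 0] == v0, r[:, N] == 0, v[:, N] == 0]
        for k in range(N):
            cons += [r[:, k + 1] == r[:, k] + dt * v[:, k],
                     v[:, k + 1] == v[:, k] + dt * (T[:, k] / m + g),
                     cp.norm(T[:, k]) <= Gamma[k],        # relaxed (convex) thrust bound
                     Gamma[k] >= rho1, Gamma[k] <= rho2,  # bounds act on the slack
                     n_hat @ T[:, k] >= np.cos(theta_max) * Gamma[k]]  # pointing cone

        prob = cp.Problem(cp.Minimize(dt * cp.sum(Gamma)), cons)
        prob.solve()
        print(prob.status, prob.value)

    When the relaxation is lossless, the optimum satisfies ||T|| = Gamma, so the original non-convex thrust bounds are recovered from the convex solution.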

  1. Hard metal composition

    DOEpatents

    Sheinberg, Haskell

    1986-01-01

    A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 weight percent boron carbide and the remainder a metal mixture comprising from 70 to 90 percent tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 to 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.

  2. Hard metal composition

    DOEpatents

    Sheinberg, H.

    1983-07-26

    A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 wt % boron carbide and the remainder a metal mixture comprising from 70 to 90% tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 and 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.

  3. Timeline-Based Space Operations Scheduling with External Constraints

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Tran, Daniel; Rabideau, Gregg; Schaffer, Steve; Mandl, Daniel; Frye, Stuart

    2010-01-01

    We describe a timeline-based scheduling algorithm developed for mission operations of the EO-1 earth observing satellite. We first describe the range of operational constraints, focusing on maneuver and thermal constraints that cannot be modeled in typical planner/schedulers. We then describe a greedy heuristic scheduling algorithm and compare its performance to the prior scheduling algorithm, documenting an over 50% increase in scenes scheduled, with an estimated value of millions of US dollars. We also compare to a relaxed optimal scheduler, showing that the greedy scheduler produces schedules with scene counts within 15% of an upper bound on the optimal schedule.
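
    The EO-1 scheduler itself is not spelled out in this record, so the sketch below is only a generic illustration of greedy, priority-ordered timeline scheduling on a single resource: walk the requests in decreasing priority and accept each one whose time window does not conflict with what is already on the timeline. The request tuples are made up.

        def greedy_schedule(requests):
            """Greedy single-resource scheduler.
            requests: list of (priority, start, end) tuples."""
            accepted = []
            for priority, start, end in sorted(requests, reverse=True):
                if all(end <= s or start >= e for _, s, e in accepted):
                    accepted.append((priority, start, end))    # no overlap: keep it
            return sorted(accepted, key=lambda item: item[1])  # return in time order

        requests = [(5, 0, 10), (9, 5, 15), (7, 12, 20), (3, 18, 25)]
        print(greedy_schedule(requests))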

  4. Creating Positive Task Constraints

    ERIC Educational Resources Information Center

    Mally, Kristi K.

    2006-01-01

    Constraints are characteristics of the individual, the task, or the environment that mold and shape movement choices and performances. Constraints can be positive--encouraging proficient movements or negative--discouraging movement or promoting ineffective movements. Physical educators must analyze, evaluate, and determine the effect various…

  5. Credit Constraints in Education

    ERIC Educational Resources Information Center

    Lochner, Lance; Monge-Naranjo, Alexander

    2012-01-01

    We review studies of the impact of credit constraints on the accumulation of human capital. Evidence suggests that credit constraints have recently become important for schooling and other aspects of households' behavior. We highlight the importance of early childhood investments, as their response largely determines the impact of credit…

  6. Constraint Reasoning Over Strings

    NASA Technical Reports Server (NTRS)

    Koga, Dennis (Technical Monitor); Golden, Keith; Pang, Wanlin

    2003-01-01

    This paper discusses an approach to representing and reasoning about constraints over strings. We discuss how many string domains can often be concisely represented using regular languages, and how constraints over strings, and domain operations on sets of strings, can be carried out using this representation.

  7. Evolutionary Algorithm for Calculating Available Transfer Capability

    NASA Astrophysics Data System (ADS)

    Šošić, Darko; Škokljev, Ivan

    2013-09-01

    The paper presents an evolutionary algorithm for calculating available transfer capability (ATC). ATC is a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses. In this paper, MATLAB software is used to determine the ATC between any two buses in deregulated power systems without violating system constraints such as thermal, voltage, and stability constraints. The algorithm is applied to the IEEE 5-bus system and to the IEEE 30-bus system.

  8. Constraints in Quantum Geometrodynamics

    NASA Astrophysics Data System (ADS)

    Gentle, Adrian P.; George, Nathan D.; Miller, Warner A.; Kheyfets, Arkady

    We compare different treatments of the constraints in canonical quantum gravity. The standard approach on the superspace of 3-geometries treats the constraints as the sole carriers of the dynamic content of the theory, thus rendering the traditional dynamical equations obsolete. Quantization of the constraints in both the Dirac and ADM square root Hamiltonian approaches leads to the well known problems of time evolution. These problems of time are of both an interpretational and technical nature. In contrast, the geometrodynamic quantization procedure on the superspace of the true dynamical variables separates the issues of quantization from the enforcement of the constraints. The resulting theory takes into account states that are off-shell with respect to the constraints, and thus avoids the problems of time. We develop, for the first time, the geometrodynamic quantization formalism in a general setting and show that it retains all essential features previously illustrated in the context of homogeneous cosmologies.

  9. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
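
    As a concrete companion to the concepts introduced above, the sketch below is a minimal generational genetic algorithm on a bit-string "one-max" toy problem, with tournament selection, one-point crossover, and bit-flip mutation. It is a generic textbook-style illustration, not the software tool described in the record, and all parameter values are arbitrary.

        import random

        def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                              p_cross=0.9, p_mut=0.02, rng=random.Random(0)):
            """Minimal generational GA: tournament selection, one-point
            crossover, bit-flip mutation."""
            pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                scored = [(fitness(ind), ind) for ind in pop]
                def tournament():
                    a, b = rng.sample(scored, 2)
                    return (a if a[0] >= b[0] else b)[1]
                children = []
                while len(children) < pop_size:
                    p1, p2 = tournament(), tournament()
                    if rng.random() < p_cross:                  # one-point crossover
                        cut = rng.randrange(1, n_bits)
                        p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                    children += [[1 - b if rng.random() < p_mut else b for b in child]
                                 for child in (p1, p2)]         # bit-flip mutation
                pop = children[:pop_size]
            return max(pop, key=fitness)

        # toy example: maximize the number of ones ("one-max")
        print(genetic_algorithm(fitness=sum))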

  10. Hard Metal Disease

    PubMed Central

    Bech, A. O.; Kipling, M. D.; Heather, J. C.

    1962-01-01

    In Great Britain there have been no published reports of respiratory disease occurring amongst workers in the hard metal (tungsten carbide) industry. In this paper the clinical and radiological findings in six cases and the pathological findings in one are described. In two cases physiological studies indicated mild alveolar diffusion defects. Histological examination in a fatal case revealed diffuse pulmonary interstitial fibrosis with marked peribronchial and perivascular fibrosis and bronchial epithelial hyperplasia and metaplasia. Radiological surveys revealed the sporadic occurrence and low incidence of the disease. The alterations in respiratory mechanics which occurred in two workers following a day's exposure to dust are described. Airborne dust concentrations are given. The industrial process is outlined and the literature is reviewed. The toxicity of the metals is discussed, and our findings are compared with those reported from Europe and the United States. We are of the opinion that the changes which we would describe as hard metal disease are caused by the inhalation of dust at work and that the component responsible may be cobalt. PMID:13970036

  11. Spins, phonons, and hardness

    SciTech Connect

    Gilman, J.J.

    1996-12-31

    In crystals (and/or glasses) with localized sp^3 or spd-bonding orbitals, dislocations have very low mobilities, making the crystals very hard. Classical Peierls-Nabarro theory does not account for the low mobility. The breaking of spin-pair bonds which creates internal free-radicals must be considered. Therefore, a theory based on quantum mechanics has been proposed (Science, 261, 1436 (1993)). It has been applied successfully to diamond, Si, Ge, SiC, and with a modification to TiC and WC. It has recently been extended to account for the temperature independence of the hardness of silicon at low temperatures together with strong softening at temperatures above the Debye temperature. It is quantitatively consistent with the behaviors of the Group 4 elements (C, Si, Ge, Sn) when their Debye temperatures are used as normalizing factors; and appears to be consistent with data for TiC if an Einstein temperature for carbon is used. Since the Debye temperature marks the approximate point at which phonons of atomic wavelengths become excited (as contrasted with collective acoustic waves), this confirms the idea that the process which limits dislocation mobility is localized to atomic dimensions (sharp kinks).

  12. Approximate resolution of hard numbering problems

    SciTech Connect

    Bailleux, O.; Chabrier, J.J.

    1996-12-31

    We present a new method for estimating the number of solutions of constraint satisfaction problems. We use a stochastic forward checking algorithm for drawing a sample of paths from a search tree. With this sample, we compute two values related to the number of solutions of a CSP instance: first, an unbiased estimate; second, a lower bound with an arbitrarily low error probability. We describe applications to the Boolean satisfiability problem and the Queens problem, and give some experimental results for these problems.
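
    The unbiased estimate can be illustrated with a Knuth-style random probe: walk one random root-to-leaf path, keeping only values that survive a forward-checking test, and multiply the branching factors met along the way; dead ends score zero, and the average over many probes is an unbiased estimate of the number of solutions. The sketch below applies the idea to n-queens as a stand-in CSP; it illustrates the principle rather than the authors' implementation.

        import random
        from statistics import mean

        def consistent(placed, col):
            """Forward check for n-queens: can column `col` extend `placed`?"""
            r = len(placed)
            return all(c != col and abs(c - col) != r - i for i, c in enumerate(placed))

        def one_probe(n, rng):
            """One random probe; the product of branching factors along the
            path is an unbiased estimator of the number of solutions."""
            placed, estimate = [], 1
            for _ in range(n):
                choices = [c for c in range(n) if consistent(placed, c)]
                if not choices:
                    return 0                      # dead end contributes zero
                estimate *= len(choices)
                placed.append(rng.choice(choices))
            return estimate

        rng = random.Random(0)
        print(mean(one_probe(8, rng) for _ in range(20000)))   # true count is 92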

  13. Total-variation regularization with bound constraints

    SciTech Connect

    Chartrand, Rick; Wohlberg, Brendt

    2009-01-01

    We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.

  14. A fast full constraints unmixing method

    NASA Astrophysics Data System (ADS)

    Ye, Zhang; Wei, Ran; Wang, Qing Yan

    2012-10-01

    Mixed pixels are inevitable due to the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model relating the spectrum of a mixed pixel to its individual components. Solving the LSMM, namely unmixing, is essentially a linear optimization problem with constraints, usually implemented as iterations along a descent direction together with a stopping criterion that terminates the algorithm. Such a criterion must be set properly in order to balance the accuracy and speed of the solution. However, the criterion in existing algorithms is too strict, which may slow convergence. In this paper, by broadening the constraints in unmixing, a new stopping rule is proposed which reduces the number of iterations needed for convergence. Experimental results on both runtime and iteration counts show that our method accelerates the convergence process at the cost of only a small decrease in the quality of the result.
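
    For reference, fully constrained unmixing of a single pixel under the LSMM is a least-squares problem with non-negativity and (commonly) sum-to-one constraints on the abundances. The sketch below solves that reference problem with scipy's SLSQP solver; it is not the accelerated stopping-rule algorithm of the record, and the endmember matrix is a made-up toy.

        import numpy as np
        from scipy.optimize import minimize

        def fcls_unmix(endmembers, pixel):
            """Fully constrained least-squares unmixing of one pixel:
            minimize ||E a - y||^2 subject to a >= 0 and sum(a) = 1."""
            p = endmembers.shape[1]
            result = minimize(lambda a: np.sum((endmembers @ a - pixel) ** 2),
                              x0=np.full(p, 1.0 / p),
                              bounds=[(0.0, 1.0)] * p,                  # non-negativity
                              constraints=[{"type": "eq",               # sum-to-one
                                            "fun": lambda a: np.sum(a) - 1.0}],
                              method="SLSQP")
            return result.x

        E = np.array([[1.0, 0.2], [0.3, 0.9], [0.5, 0.4]])   # two endmembers, three bands
        y = 0.7 * E[:, 0] + 0.3 * E[:, 1]
        print(fcls_unmix(E, y))                              # approximately [0.7, 0.3]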

  15. Level of constraint in revision knee arthroplasty.

    PubMed

    Indelli, Pier Francesco; Giori, Nick; Maloney, William

    2015-12-01

    Revision total knee arthroplasty (TKA) in the setting of major bone deficiency and/or soft tissue laxity might require increasing levels of constraint to restore knee stability. However, increasing the level of constraint does not always correlate with mid-to-long-term satisfactory results. Recently, modular components such as tantalum cones and titanium sleeves have been introduced to the market with the goal of obtaining better fixation where bone deficiency is an issue; theoretically, satisfactory meta-diaphyseal fixation can reduce the mechanical stress at the level of the joint line, reducing the need for high levels of constraint. This article reviews the recent literature on the surgical management of the unstable TKA with the goal of proposing a modern surgical algorithm for adult reconstruction surgeons. PMID:26373770

  16. Improving hard disk data security using a hardware encryptor

    NASA Astrophysics Data System (ADS)

    Walewski, Andrzej

    2008-01-01

    This paper describes the design path of a hard disk encryption device. It outlines the analysis of design requirements, trends in data security, presentation of the IDE transfer protocol and finally the way of choosing the method, algorithm and parameters of encryption.

  17. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ε, and a denominator bound N ∈ ℕ₊. One has to decide whether there exists an integer Q, called the denominator, with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ε. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ₊ such that the distances of Q·α_i to the nearest integer are bounded by ε is hard to approximate within a factor 2^n unless P = NP.

  18. Constraint Embedding Technique for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been a considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited only to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra approach to formulating the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus in essence, the new technique allows conversion of a system with
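
    The correction in steps (b) and (c) can be written compactly with a closure-constraint Jacobian J: solve (J M^-1 J^T) lambda = -(J a_free + Jdot*qdot) for the constraint forces and add M^-1 J^T lambda to the free accelerations. The dense numpy sketch below shows that generic computation on a toy two-degree-of-freedom system; it does not use the spatial operator algebra or the O(N) recursions described in the record.

        import numpy as np

        def corrected_accelerations(M, a_free, J, Jdot_qdot):
            """Correct tree-topology ('free') accelerations so that the
            closure constraints J*qddot + Jdot*qdot = 0 hold."""
            Minv_Jt = np.linalg.solve(M, J.T)
            lam = np.linalg.solve(J @ Minv_Jt, -(J @ a_free + Jdot_qdot))
            return a_free + Minv_Jt @ lam          # corrected accelerations

        # toy 2-dof example with one closure constraint qddot_0 - qddot_1 = 0
        M = np.diag([2.0, 1.0])
        a_free = np.array([1.0, -3.0])
        J = np.array([[1.0, -1.0]])
        print(corrected_accelerations(M, a_free, J, Jdot_qdot=np.zeros(1)))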

  19. HIGHER ORDER HARD EDGE END FIELD EFFECTS.

    SciTech Connect

    BERG,J.S.

    2004-09-14

    In most cases, nonlinearities from magnets must be properly included in tracking and analysis to properly compute quantities of interest, in particular chromatic properties and dynamic aperture. One source of nonlinearities in magnets that is often important and cannot be avoided is the nonlinearity arising at the end of a magnet due to the longitudinal variation of the field there. Part of this effect is independent of the longitudinal profile of the end. It is lowest order in the body field of the magnet, and is the result of taking a limit as the length over which the field at the end varies approaches zero. This is referred to as a "hard edge" end field. This effect has been computed previously to lowest order in the transverse variables. This paper describes a method to compute this effect to arbitrary order in the transverse variables, under certain constraints.

  20. Optimization of Blade Stiffened Composite Panel under Buckling and Strength Constraints

    NASA Astrophysics Data System (ADS)

    Todoroki, Akira; Sekishiro, Masato

    This paper deals with multiple constraints for dimension and stacking-sequence optimization of a blade-stiffened composite panel. A previous study targeted a multiple-objective genetic algorithm using a Kriging response surface with a buckling load constraint. The present study focuses on dimension and stacking-sequence optimization with both a buckling load constraint and a fracture constraint. Multiple constraints complicate the process of selecting sampling analyses to improve the Kriging response surface. The proposed method resolves this problem using the most-critical-constraint approach. The new approach is applied to a blade-stiffened composite panel and is shown to be efficient.

  1. I. Thermal evolution of Ganymede and implications for surface features. II. Magnetohydrodynamic constraints on deep zonal flow in the giant planets. III. A fast finite-element algorithm for two-dimensional photoclinometry

    SciTech Connect

    Kirk, R.L.

    1987-01-01

    Thermal evolution of Ganymede from a hot start is modeled. On cooling, ice I forms above the liquid H₂O and dense ices at higher entropy below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H₂ (Jupiter, Saturn) and H₂O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e. reconstruction of a surface from its image, is formulated in terms of finite elements and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.

  2. Optimizing selection with several constraints in poultry breeding.

    PubMed

    Chapuis, H; Pincent, C; Colleau, J J

    2016-02-01

    Poultry breeding schemes permanently face the need to control the evolution of coancestry and some critical traits, while selecting for a main breeding objective. The main aims of this article are first to present an efficient selection algorithm adapted to this situation and then to measure how the severity of constraints impacted the degree of loss for the main trait, compared to BLUP selection on the main trait without any constraint. Broiler dam and sire line schemes were mimicked by simulation over 10 generations and selection was carried out on the main trait under constraints on coancestry and on another trait antagonistic with the main trait. The selection algorithm was a special simulated annealing method, adaptive simulated annealing (ASA). It was found to be rapid and able to meet constraints very accurately. A constraint on the second trait was found to induce an impact similar to or even greater than the impact of the constraint on coancestry. The family structure of selected poultry populations made it easy to control the evolution of coancestry at a reasonable cost but was not as useful for reducing the cost of controlling the evolution of the antagonistic traits. Multiple constraints impacted almost additively on the genetic gain for the main trait. Adding constraints for several traits would therefore be justified in real-life breeding schemes, possibly after evaluating their impact through simulated annealing. PMID:26220593

  3. Conflict-Aware Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Wang, Yeou-Fang; Borden, Chester

    2006-01-01

    A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.

  4. Probabilistic Constraint Logic Programming. Formal Foundations of Quantitative and Statistical Inference in Constraint-Based Natural Language Processing

    NASA Astrophysics Data System (ADS)

    Riezler, Stefan

    2000-08-01

    In this thesis, we present two approaches to a rigorous mathematical and algorithmic foundation of quantitative and statistical inference in constraint-based natural language processing. The first approach, called quantitative constraint logic programming, is conceptualized in a clear logical framework, and presents a sound and complete system of quantitative inference for definite clauses annotated with subjective weights. This approach combines a rigorous formal semantics for quantitative inference based on subjective weights with efficient weight-based pruning for constraint-based systems. The second approach, called probabilistic constraint logic programming, introduces a log-linear probability distribution on the proof trees of a constraint logic program and an algorithm for statistical inference of the parameters and properties of such probability models from incomplete, i.e., unparsed data. The possibility of defining arbitrary properties of proof trees as properties of the log-linear probability model and efficiently estimating appropriate parameter values for them permits the probabilistic modeling of arbitrary context-dependencies in constraint logic programs. The usefulness of these ideas is evaluated empirically in a small-scale experiment on finding the correct parses of a constraint-based grammar. In addition, we address the problem of computational intractability of the calculation of expectations in the inference task and present various techniques to approximately solve this task. Moreover, we present an approximate heuristic technique for searching for the most probable analysis in probabilistic constraint logic programs.

  5. Adiabatic quantum programming: minor embedding with hard faults

    NASA Astrophysics Data System (ADS)

    Klymko, Christine; Sullivan, Blair D.; Humble, Travis S.

    2013-11-01

    Adiabatic quantum programming defines the time-dependent mapping of a quantum algorithm into an underlying hardware or logical fabric. An essential step is embedding problem-specific information into the quantum logical fabric. We present algorithms for embedding arbitrary instances of the adiabatic quantum optimization algorithm into a square lattice of specialized unit cells. These methods extend with fabric growth while scaling linearly in time and quadratically in footprint. We also provide methods for handling hard faults in the logical fabric without invoking approximations to the original problem and illustrate their versatility through numerical studies of embeddability versus fault rates in square lattices of complete bipartite unit cells. The studies show that these algorithms are more resilient to faulty fabrics than naive embedding approaches, a feature which should prove useful in benchmarking the adiabatic quantum optimization algorithm on existing faulty hardware.

  6. Measuring the Hardness of Minerals

    ERIC Educational Resources Information Center

    Bushby, Jessica

    2005-01-01

    The author discusses the Mohs hardness scale, a comparative scale for minerals, whereby the softest mineral (talc) is placed at 1 and the hardest mineral (diamond) is placed at 10, with all other minerals ordered in between, according to their hardness. The development history of the scale is outlined, as well as a description of how the scale is used…

  7. Exploiting sequential phonetic constraints in recognizing spoken words

    NASA Astrophysics Data System (ADS)

    Huttenlocher, D. P.

    1985-10-01

    Machine recognition of spoken language requires developing more robust recognition algorithms. A recent study by Shipman and Zue suggests using partial descriptions of speech sounds to eliminate all but a handful of word candidates from a large lexicon. The current paper extends their work by investigating the power of partial phonetic descriptions for developing recognition algorithms. First, we demonstrate that sequences of manner-of-articulation classes are more reliable and provide more constraint than certain other classes. Alone, these results are of limited utility, due to the high degree of variability in natural speech. This variability is not uniform, however, as most modifications and deletions occur in unstressed syllables. Comparing the relative constraint provided by sounds in stressed versus unstressed syllables, we discover that the stressed syllables provide substantially more constraint. This indicates that recognition algorithms can be made more robust by exploiting the manner-of-articulation information in stressed syllables.

  8. Cyclic strength of hard metals

    SciTech Connect

    Sereda, N.N.; Gerikhanov, A.K.; Koval'chenko, M.S.; Pedanov, L.G.; Tsyban', V.A.

    1986-02-01

    The authors study the strength of hard-metal specimens and structural elements under conditions of cyclic loading, since many elements of processing plants, equipment, and machines are made of hard metals. Fatigue tests were conducted on KTS-1N, KTSL-1, and KTNKh-70 materials, which are titanium carbide hard metals cemented with nickel-molybdenum, nickel-cobalt-chromium, and nickel-chromium alloys, respectively. As a basis of comparison, the standard VK-15 (WC+15% Co) alloy was used. Some key physicomechanical characteristics of the materials investigated are presented. On time bases not exceeding 10^6 cycles, titanium carbide hard metals are comparable in fatigue resistance to the standard tungsten-containing hard metals.

  9. MISTIC: Radiation hard ECRIS

    NASA Astrophysics Data System (ADS)

    Labrecque, F.; Lecesne, N.; Bricault, P.

    2008-10-01

    The ISAC RIB facility at TRIUMF utilizes up to 100 μA from the 500 MeV H- cyclotron to produce RIB using the isotopic separation on line (ISOL) method. At the moment, we are mainly using a hot surface ion source and a laser ion source to produce our RIB. A FEBIAD ion source has been recently tested at ISAC, but these ion sources are not suitable for gaseous elements like N, O, F, Ne, etc. A new type of ion source is therefore necessary. By combining a high frequency electromagnetic wave and magnetic confinement, the ECRIS (electron cyclotron resonance ion source) [R. Geller, Electron Cyclotron Resonance Ion Source and ECR Plasmas, Institute of Physics Publishing, Bristol, 1996], [1] can produce the high energy electrons essential for efficient ionization of those elements. To this end, a prototype ECRIS called MISTIC (monocharged ion source for TRIUMF and ISAC complex) has been built at TRIUMF using a design similar to the one developed at GANIL [GANIL (Grand Accélérateur National d'Ions Lourds), www.ganil.fr], [2]. The high level of radiation caused by the proximity to the target prevented us from using a conventional ECRIS. To achieve a radiation hard ion source, we used coils instead of permanent magnets to produce the magnetic confinement. Each coil is supplied by a 1000 A, 15 V power supply. The RF generator covers a frequency range from 2 to 8 GHz, giving us all the versatility we need to characterize the ionization of the following elements: He, Ne, Ar, Kr, Xe, C, O, N, F. Isotopes of these elements are involved in stellar thermonuclear cycles and are consequently very important for research in nuclear astrophysics. Measurements of efficiency, emittance, and ionization time will be performed for each of those elements. Preliminary tests show that MISTIC is very stable over a large range of frequency, magnetic field, and pressure.

  10. Constraints complicate centrifugal compressor depressurization

    SciTech Connect

    Key, B.; Colbert, F.L.

    1993-05-10

    Blowdown of a centrifugal compressor is complicated by process constraints that might require slowing the depressurization rate and by mechanical constraints for which a faster rate might be preferred. The paper describes design constraints such as gas leaks; thrust-bearing overload; system constraints; flare extinguishing; heat levels; and pressure drop.

  11. A new minimax algorithm

    NASA Technical Reports Server (NTRS)

    Vardi, A.

    1984-01-01

    The representation min t s.t. f_i(x) - t ≤ 0 for all i is examined. An active set strategy is designed with three sets of functions: active, semi-active, and non-active. This technique will help in preventing the zigzagging which often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. Also, a trust region strategy is used in which at each iteration there is a sphere around the current point in which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
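
    The slack reformulation quoted above, min t subject to f_i(x) - t ≤ 0, can be handed to any constrained NLP solver. The sketch below does so with scipy's SLSQP on a smooth two-function toy problem; it illustrates the reformulation only, not the active-set/trust-region algorithm of the record.

        import numpy as np
        from scipy.optimize import minimize

        def minimax(funcs, x0):
            """Solve min_x max_i f_i(x) via min t s.t. f_i(x) - t <= 0,
            stacking the variables as z = (x, t)."""
            z0 = np.append(x0, max(f(x0) for f in funcs))
            cons = [{"type": "ineq", "fun": (lambda z, f=f: z[-1] - f(z[:-1]))}
                    for f in funcs]                       # t - f_i(x) >= 0
            res = minimize(lambda z: z[-1], z0, constraints=cons, method="SLSQP")
            return res.x[:-1], res.x[-1]

        # toy example: min_x max((x-1)^2, (x+1)^2) has optimum x = 0, value 1
        fs = [lambda x: (x[0] - 1.0) ** 2, lambda x: (x[0] + 1.0) ** 2]
        print(minimax(fs, x0=np.array([3.0])))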

  12. Constraint algebra in bigravity

    SciTech Connect

    Soloviev, V. O.

    2015-07-15

    The number of degrees of freedom in bigravity theory is found for a potential of general form and also for the potential proposed by de Rham, Gabadadze, and Tolley (dRGT). This aim is pursued via constructing a Hamiltonian formalism and studying the Poisson algebra of constraints. A general potential leads to a theory featuring four first-class constraints generated by general covariance. The vanishing of the respective Hessian is a crucial property of the dRGT potential, and this leads to the appearance of two additional second-class constraints and, hence, to the exclusion of a superfluous degree of freedom, namely the Boulware-Deser ghost. The use of a method that permits avoiding an explicit expression for the dRGT potential is a distinctive feature of the present study.

  13. Cross-Modal Subspace Learning via Pairwise Constraints.

    PubMed

    He, Ran; Zhang, Man; Wang, Liang; Ji, Ye; Yin, Qiyue

    2015-12-01

    In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to address the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a multi-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ℓ21 regularization. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and they show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/matching accuracy. PMID:26259218

  14. Soft Constraints in Interactive Behavior: The Case of Ignoring Perfect Knowledge in-the-World for Imperfect Knowledge in-the-Head

    ERIC Educational Resources Information Center

    Gray, Wayne D.; Fu, Wai-Tat

    2004-01-01

    Constraints and dependencies among the elements of embodied cognition form patterns or microstrategies of interactive behavior. Hard constraints determine which microstrategies are possible. Soft constraints determine which of the possible microstrategies are most likely to be selected. When selection is non-deliberate or automatic the least…

  15. Generalized arc consistency for global cardinality constraint

    SciTech Connect

    Regin, J.C.

    1996-12-31

    A global cardinality constraint (gcc) is specified in terms of a set of variables X = (x_1,...,x_p) which take their values in a subset of V = (v_1,...,v_d). It constrains the number of times a value v_i ∈ V is assigned to a variable in X to lie in an interval [l_i, c_i]. Cardinality constraints have proved very useful in many real-life problems, such as scheduling, timetabling, or resource allocation. A gcc is more general than a constraint of difference, which requires each interval to be [0, 1]. In this paper, we present an efficient way of implementing generalized arc consistency for a gcc. The algorithm we propose is based on a new theorem of flow theory. Its space complexity is O(|X| × |V|) and its time complexity is O(|X|^2 × |V|). We also show how this algorithm can efficiently be combined with other filtering techniques.
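
    Régin's filtering algorithm is built on a flow network and is more involved than space allows here, but the flow view is easy to illustrate for the special case of upper bounds only (all l_i = 0): the gcc is satisfiable exactly when a bipartite flow saturates every variable. The sketch below checks that with networkx's maximum_flow; handling nonzero lower bounds would require the standard lower-bound transformation, which is omitted.

        import networkx as nx

        def gcc_upper_feasible(domains, upper):
            """Feasibility of a gcc with zero lower bounds: every variable
            must get a value from its domain, and value v may be used at
            most upper[v] times."""
            G = nx.DiGraph()
            for var, dom in domains.items():
                G.add_edge("s", ("x", var), capacity=1)
                for v in dom:
                    G.add_edge(("x", var), ("v", v), capacity=1)
            for v, cap in upper.items():
                G.add_edge(("v", v), "t", capacity=cap)
            flow, _ = nx.maximum_flow(G, "s", "t")
            return flow == len(domains)        # feasible iff every variable is matched

        domains = {"x1": ["a", "b"], "x2": ["a"], "x3": ["a", "c"]}
        print(gcc_upper_feasible(domains, upper={"a": 1, "b": 1, "c": 1}))   # True
        print(gcc_upper_feasible(domains, upper={"a": 1, "b": 1, "c": 0}))   # False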

  16. General heuristics algorithms for solving capacitated arc routing problem

    NASA Astrophysics Data System (ADS)

    Fadzli, Mohammad; Najwa, Nurul; Masran, Hafiz

    2015-05-01

    In this paper, we try to determine a near-optimum solution for the capacitated arc routing problem (CARP). In general, the NP-hard CARP is a special graph-theoretic problem that arises from street services such as residential waste collection and road maintenance. The purpose of the CARP model and its solution techniques is to find an optimum (or near-optimum) routing cost for the fleet of vehicles involved in the operation; in other words, finding a minimum-cost routing is essential in order to reduce the overall operating cost associated with the vehicles. In this article, we provide a combination of heuristic algorithms to solve a real case of CARP in waste collection as well as benchmark instances. These heuristics work as a central engine for finding initial or near-optimum solutions in the search space without violating the preset constraints. The results clearly show that these heuristic algorithms provide good initial solutions in both real-life and benchmark instances.
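
    A minimal sketch of the kind of constructive heuristic such a combination might start from: a greedy, path-scanning-style route builder that services the unserved required edge nearest to the vehicle's current position until the capacity constraint would be violated, then closes the route. The graph, demands, and capacity below are invented, and no improvement phase is included.

      import networkx as nx

      def greedy_carp_routes(G, depot, required, capacity):
          """G: weighted undirected graph (edge attribute 'weight' = traversal cost).
          required: dict {(u, v): demand} of edges that must be serviced.
          Returns a list of routes, each a list of serviced edges, built greedily."""
          unserved = dict(required)
          dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
          routes = []
          while unserved:
              pos, load, route = depot, 0, []
              while True:
                  # nearest unserved edge whose demand still fits in the vehicle
                  candidates = [(min(dist[pos][u], dist[pos][v]), (u, v))
                                for (u, v), d in unserved.items()
                                if load + d <= capacity]
                  if not candidates:
                      break
                  _, (u, v) = min(candidates)
                  load += unserved.pop((u, v))
                  route.append((u, v))
                  # after servicing, the vehicle ends at the endpoint farther from pos
                  pos = v if dist[pos][u] <= dist[pos][v] else u
              routes.append(route)
          return routes

      if __name__ == "__main__":
          G = nx.Graph()
          G.add_weighted_edges_from([("depot", "a", 2), ("a", "b", 1),
                                     ("b", "c", 2), ("c", "depot", 3), ("a", "c", 2)])
          required = {("a", "b"): 3, ("b", "c"): 4, ("c", "depot"): 2}
          print(greedy_carp_routes(G, "depot", required, capacity=6))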

  17. Beta Backscatter Measures the Hardness of Rubber

    NASA Technical Reports Server (NTRS)

    Morrissey, E. T.; Roje, F. N.

    1986-01-01

    Nondestructive testing method determines hardness, on Shore scale, of room-temperature-vulcanizing silicone rubber. Measures backscattered beta particles; backscattered radiation count directly proportional to Shore hardness. Test set calibrated with specimen, Shore hardness known from mechanical durometer test. Specimen of unknown hardness tested, and radiation count recorded. Count compared with known sample to find Shore hardness of unknown.
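
    The stated proportionality suggests a one-point calibration of the form H = H_ref · (N / N_ref). The sketch below only illustrates that arithmetic; the counts and reference hardness are invented and do not come from the cited test set.

      def shore_hardness(count, ref_count, ref_hardness):
          """Estimate Shore hardness from a backscattered-beta count, assuming the
          count is directly proportional to hardness and using one reference
          specimen of known hardness for calibration."""
          return ref_hardness * (count / ref_count)

      if __name__ == "__main__":
          # Hypothetical numbers: a reference specimen of Shore 40 gave 12,000 counts.
          print(shore_hardness(count=15000, ref_count=12000, ref_hardness=40))  # 50.0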

  18. Fault-Tolerant, Radiation-Hard DSP

    NASA Technical Reports Server (NTRS)

    Czajkowski, David

    2011-01-01

    Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to normal operation through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and, in some cases, to suffer significant speed reduction. The benefits of COTS high
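
    The core idea of TTMR, stripped of the VLIW scheduling details, is to run the same computation three times on one processor (separated in time) and majority-vote the results so that a single SEU-corrupted result is out-voted. The sketch below shows only that voting logic; the sequential triplication and the simulated upset are illustrative assumptions, not the product's implementation.

      from collections import Counter

      def ttmr_vote(results):
          """Majority vote over three temporally redundant results. Returns the
          value at least two runs agree on; raises if all three disagree
          (an uncorrectable, detect-only case)."""
          value, votes = Counter(results).most_common(1)[0]
          if votes < 2:
              raise RuntimeError("TTMR voter: no majority, uncorrectable upset")
          return value

      def run_triplicated(fn, *args):
          """Run fn three times and vote; in a real TTMR scheme the three copies are
          interleaved in the VLIW instruction stream rather than called in sequence."""
          return ttmr_vote([fn(*args) for _ in range(3)])

      if __name__ == "__main__":
          runs = [42, 42, 43]        # third run hit by a hypothetical SEU
          print(ttmr_vote(runs))     # 42: the corrupted result is out-voted
          print(run_triplicated(sum, [1, 2, 3]))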

  19. Multilevel algorithms for nonlinear optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.

  20. An MPEG-21-driven multimedia adaptation decision-taking engine based on constraint satisfaction problem

    NASA Astrophysics Data System (ADS)

    Feng, Xiao; Tang, Rui-chun; Zhai, Yi-li; Feng, Yu-qing; Hong, Bo-hai

    2013-07-01

    Multimedia adaptation decision-taking techniques based on context are considered, and a Constraint-Satisfaction-Problem-Based Content Adaptation Algorithm (CBCAA) is proposed. First, the algorithm obtains and classifies context information using MPEG-21; it then builds a constraint model according to the different types of context information, and a constraint satisfaction method is used to acquire the Media Description Decision Set (MDDS); finally, a bit-stream adaptation engine performs the multimedia transcoding. Simulation results show that the presented algorithm offers an efficient solution for personalized multimedia adaptation in heterogeneous environments.
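
    A toy illustration of posing adaptation decision-taking as a constraint satisfaction problem: enumerate candidate (resolution, bitrate, codec) descriptions, keep those satisfying terminal and network constraints, and pick one by a simple utility. The variable domains, constraints, and utility below are invented for illustration and are not the MDDS construction of the cited algorithm.

      from itertools import product

      # Hypothetical adaptation variables and their domains
      RESOLUTIONS = [(1920, 1080), (1280, 720), (640, 360)]
      BITRATES_KBPS = [4000, 2000, 800]
      CODECS = ["h264", "h265"]

      def adaptation_decisions(max_width, max_kbps, supported_codecs):
          """Return all media descriptions consistent with the context constraints
          (terminal display width, available bandwidth, decodable codecs)."""
          decisions = []
          for (w, h), br, codec in product(RESOLUTIONS, BITRATES_KBPS, CODECS):
              if w <= max_width and br <= max_kbps and codec in supported_codecs:
                  decisions.append({"resolution": (w, h), "bitrate": br, "codec": codec})
          return decisions

      if __name__ == "__main__":
          mdds = adaptation_decisions(max_width=1280, max_kbps=2500,
                                      supported_codecs={"h264"})
          # choose the highest-quality feasible description (bitrate, then resolution)
          best = max(mdds, key=lambda d: (d["bitrate"], d["resolution"][0]))
          print(len(mdds), "feasible descriptions; chosen:", best)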

  1. Local parallel models for integration of stereo matching constraints and intrinsic image combination

    NASA Technical Reports Server (NTRS)

    Stewart, Charles V.

    1989-01-01

    Parallel relaxation computations such as those of connectionist networks offer a useful model for constraint integration and intrinsic image combination in developing a general-purpose stereo matching algorithm. This paper describes such a stereo algorithm that incorporates hierarchical, surface-structure, and edge-appearance constraints that are redefined and integrated at the level of individual candidate matches. The algorithm produces a high percentage of correct decisions on a wide variety of stereo pairs. Its few errors arise when the correlation measures defined by the constraints are either weakened or ambiguous, as in the case of periodic patterns in the images. Two additional mechanisms are discussed for overcoming the remaining errors.

  2. Scheduling Jobs with Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ferrolho, António; Crisóstomo, Manuel

    Most scheduling problems are NP-hard: the time required to solve them optimally increases exponentially with the size of the problem. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GAs) have been used successfully to solve scheduling problems, as shown by a growing number of papers, and they are regarded as among the most efficient algorithms for this purpose. However, when a GA is applied to scheduling problems, various crossover and mutation operators are applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called the hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators through simulations of job scheduling problems.
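
    To make the notion of permutation-based crossover and mutation operators concrete, the sketch below implements order crossover (OX) and swap mutation on job sequences. These are standard textbook operators used purely as illustrations, not the specific operators examined by HybFlexGA.

      import random

      def order_crossover(p1, p2, rng=random):
          """Order crossover (OX) for permutation-encoded schedules."""
          n = len(p1)
          i, j = sorted(rng.sample(range(n), 2))
          child = [None] * n
          child[i:j] = p1[i:j]                       # keep a slice of parent 1
          fill = [g for g in p2 if g not in child]   # remaining jobs in parent-2 order
          it = iter(fill)
          for k in range(n):
              if child[k] is None:
                  child[k] = next(it)
          return child

      def swap_mutation(perm, rng=random):
          """Swap two randomly chosen jobs."""
          perm = list(perm)
          a, b = rng.sample(range(len(perm)), 2)
          perm[a], perm[b] = perm[b], perm[a]
          return perm

      if __name__ == "__main__":
          random.seed(1)
          p1, p2 = [0, 1, 2, 3, 4, 5], [3, 5, 1, 0, 4, 2]
          child = order_crossover(p1, p2)
          print(child, sorted(child) == p1)   # child is a valid permutation of the jobs
          print(swap_mutation(child))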

  3. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that HAFSA is a fast and efficient algorithm for winner determination. Compared with an ant colony optimization algorithm, it shows good performance and has broad application potential.

  4. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  5. Prediction of binary hard-sphere crystal structures.

    PubMed

    Filion, Laura; Dijkstra, Marjolein

    2009-04-01

    We present a method based on a combination of a genetic algorithm and Monte Carlo simulations to predict close-packed crystal structures in hard-core systems. We employ this method to predict the binary crystal structures in a mixture of large and small hard spheres with various stoichiometries and diameter ratios between 0.4 and 0.84. In addition to known binary hard-sphere crystal structures similar to NaCl and AlB2, we predict additional crystal structures with the symmetry of CrB, γCuTi, αIrV, HgBr2, AuTe2, Ag2Se, and various structures for which an atomic analog was not found. In order to determine the crystal structures at infinite pressures, we calculate the maximum packing density as a function of size ratio for the crystal structures predicted by our GA using a simulated annealing approach. PMID:19518387

  6. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.

  7. Hard-phase engineering in hard/soft nanocomposite magnets

    NASA Astrophysics Data System (ADS)

    Poudyal, Narayan; Rong, Chuanbing; Vuong Nguyen, Van; Liu, J. Ping

    2014-03-01

    Bulk SmCo/Fe(Co) based hard/soft nanocomposite magnets with different hard phases (1:5, 2:17, 2:7 and 1:3 types) were fabricated by high-energy ball-milling followed by a warm compaction process. Microstructural studies revealed a homogeneous distribution of bcc-Fe(Co) phase in the matrix of hard magnetic Sm-Co phase with grain size ⩽20 nm after severe plastic deformation and compaction. The small grain size leads to effective inter-phase exchange coupling as shown by the single-phase-like demagnetization behavior with enhanced remanence and energy product. Among the different hard phases investigated, it was found that the Sm2Co7-based nanocomposites can incorporate a higher soft phase content, and thus a larger reduction in rare-earth content compared with the 2:17, 1:5 and 1:3 phase-based nanocomposite with similar properties. (BH)max up to 17.6 MGOe was obtained for isotropic Sm2Co7/FeCo nanocomposite magnets with 40 wt% of the soft phase which is about 300% higher than the single-phase counterpart prepared under the same conditions. The results show that hard-phase engineering in nanocomposite magnets is an alternative approach to fabrication of high-strength nanocomposite magnets with reduced rare-earth content.

  8. A Framework for Dynamic Constraint Reasoning Using Procedural Constraints

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy D.

    1999-01-01

    Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.

  9. Identifying Regions Based on Flexible User Defined Constraints.

    PubMed

    Folch, David C; Spielman, Seth E

    2014-01-01

    The identification of regions is both a computational and conceptual challenge. Even with growing computational power, regionalization algorithms must rely on heuristic approaches in order to find solutions. Therefore, the constraints and evaluation criteria that define a region must be translated into an algorithm that can efficiently and effectively navigate the solution space to find the best solution. One limitation of many existing regionalization algorithms is a requirement that the number of regions be selected a priori. The max-p algorithm, introduced in Duque et al. (2012), does not have this requirement, and thus the number of regions is an output of, not an input to, the algorithm. In this paper we extend the max-p algorithm to allow for greater flexibility in the constraints available to define a feasible region, placing the focus squarely on the multidimensional characteristics of a region. We also modify technical aspects of the algorithm to provide greater flexibility in its ability to search the solution space. Using synthetic spatial and attribute data we are able to show the algorithm's broad ability to identify regions in maps of varying complexity. We also conduct a large-scale computational experiment to identify parameter settings that result in the greatest solution accuracy under various scenarios. The rules of thumb identified from the experiment produce maps that correctly assign areas to their "true" region with 94% average accuracy, with nearly 50% of the simulations reaching 100% accuracy. PMID:25018663
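
    A highly simplified sketch of the flavor of constraint-driven regionalization described above: grow contiguous regions on an adjacency graph until each satisfies a user-defined floor constraint (here, a minimum total "population"). It ignores attribute homogeneity, the max-p objective of maximizing the number of regions, and all of the paper's search refinements; the graph and threshold are invented.

      import networkx as nx

      def grow_regions(G, attr, floor):
          """Greedily partition the nodes of G into contiguous regions whose summed
          node attribute `attr` is at least `floor`. Leftover nodes that cannot
          reach the floor are merged into the last region. A toy stand-in for
          max-p-style regionalization with a flexible user-defined constraint."""
          unassigned = set(G.nodes)
          regions = []
          while unassigned:
              seed = next(iter(unassigned))
              region, total = {seed}, G.nodes[seed][attr]
              unassigned.remove(seed)
              while total < floor:
                  frontier = {n for r in region for n in G.neighbors(r)} & unassigned
                  if not frontier:
                      break
                  nxt = max(frontier, key=lambda n: G.nodes[n][attr])
                  region.add(nxt)
                  unassigned.remove(nxt)
                  total += G.nodes[nxt][attr]
              if total >= floor or not regions:
                  regions.append(region)
              else:
                  regions[-1] |= region   # could not satisfy the constraint on its own
          return regions

      if __name__ == "__main__":
          G = nx.grid_2d_graph(3, 3)
          nx.set_node_attributes(G, {n: 10 for n in G.nodes}, "pop")
          print(grow_regions(G, "pop", floor=30))   # regions of at least 3 grid cells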

  10. Hiding quiet solutions in random constraint satisfaction problems

    SciTech Connect

    Zdeborova, Lenka; Krzakala, Florent

    2008-01-01

    We study constraint satisfaction problems on the so-called planted random ensemble. We show that for a certain class of problems, e.g., graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions and the easy-hard-easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid-glass-solid phenomenology.

  11. Hiding quiet solutions in random constraint satisfaction problems.

    PubMed

    Krzakala, Florent; Zdeborová, Lenka

    2009-06-12

    We study constraint satisfaction problems on the so-called planted random ensemble. We show that for a certain class of problems, e.g., graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions and the easy-hard-easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid-glass-solid phenomenology. PMID:19658978

  12. Haplotyping algorithms

    SciTech Connect

    Sobel, E.; Lange, K.; O`Connell, J.R.

    1996-12-31

    Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

  13. Dual-Byte-Marker Algorithm for Detecting JFIF Header

    NASA Astrophysics Data System (ADS)

    Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat

    The use of an efficient algorithm to detect JPEG files is vital for reducing the time taken to analyze the ever-increasing data on hard drives or in physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm, called dual-byte-marker, is proposed. Based on experiments on images from a hard disk, physical memory, and the DFRWS 2006 Challenge data set, the results showed that the dual-byte-marker algorithm gives better performance, with better execution time for header detection, than the single-byte-marker algorithm.
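
    The idea of marker-based header detection can be illustrated with a small scanner that looks for the two-byte JPEG SOI marker 0xFFD8 immediately followed by the APP0 marker 0xFFE0 carrying the JFIF identifier. This is a generic reconstruction of the concept for illustration, not the published dual-byte-marker algorithm itself.

      def find_jfif_headers(data: bytes):
          """Return offsets in `data` where a JFIF header appears to start:
          SOI marker (FF D8) immediately followed by APP0 (FF E0) and, after the
          2-byte segment length, the identifier 'JFIF' plus a NUL byte."""
          offsets = []
          pos = 0
          while True:
              pos = data.find(b"\xff\xd8\xff\xe0", pos)
              if pos == -1:
                  return offsets
              # APP0 payload: 2-byte length, then the null-terminated identifier
              if data[pos + 6: pos + 11] == b"JFIF\x00":
                  offsets.append(pos)
              pos += 1

      if __name__ == "__main__":
          blob = b"garbage" + b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01" + b"more bytes"
          print(find_jfif_headers(blob))   # [7]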

  14. Structure Constraints in a Constraint-Based Planner

    NASA Technical Reports Server (NTRS)

    Pang, Wan-Lin; Golden, Keith

    2004-01-01

    In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.

  15. Practical engineering of hard spin-glass instances

    NASA Astrophysics Data System (ADS)

    Marshall, Jeffrey; Martin-Mayor, Victor; Hen, Itay

    2016-07-01

    Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since then become a coveted, albeit elusive goal. Recent studies have shown that the so far inconclusive results, regarding a quantum enhancement, may have been partly due to the benchmark problems used being unsuitable. In particular, these problems had inherently too simple a structure, allowing for both traditional resources and quantum annealers to solve them with no special efforts. The need therefore has arisen for the generation of harder benchmarks which would hopefully possess the discriminative power to separate classical scaling of performance with size from quantum. We introduce here a practical technique for the engineering of extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm that solves it. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
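
    A skeletal illustration of "treating instance generation as an optimization problem": score a candidate Ising instance with a hardness proxy, so that an outer loop could perturb the couplings and keep changes that raise the score. The proxy below (the fraction of greedy single-spin-flip descents that miss the best energy found) is a placeholder assumption; the paper's technique and its thermodynamic analysis are far more involved.

      import itertools
      import random

      def ising_energy(J, s):
          """Energy of spin configuration s (dict node -> ±1) for couplings J {(i, j): Jij}."""
          return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

      def hardness_proxy(J, nodes, trials=100, rng=random):
          """Placeholder hardness score: fraction of random single-spin-flip greedy
          descents that fail to reach the best energy seen across all trials."""
          energies = []
          for _ in range(trials):
              s = {n: rng.choice((-1, 1)) for n in nodes}
              improved = True
              while improved:
                  improved = False
                  for n in nodes:
                      # energy change from flipping spin n
                      delta = -2 * s[n] * sum(Jij * s[j if i == n else i]
                                              for (i, j), Jij in J.items() if n in (i, j))
                      if delta < 0:
                          s[n] = -s[n]
                          improved = True
              energies.append(ising_energy(J, s))
          best = min(energies)
          return sum(e > best + 1e-9 for e in energies) / trials

      if __name__ == "__main__":
          rng = random.Random(0)
          nodes = list(range(8))
          # random ±1 couplings on the complete graph (illustrative instance)
          J = {(i, j): rng.choice((-1, 1)) for i, j in itertools.combinations(nodes, 2)}
          print("hardness proxy:", hardness_proxy(J, nodes, rng=rng))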

  16. Ascent guidance algorithm using lidar wind measurements

    NASA Technical Reports Server (NTRS)

    Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.

    1990-01-01

    The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.

  17. Teaching Database Design with Constraint-Based Tutors

    ERIC Educational Resources Information Center

    Mitrovic, Antonija; Suraweera, Pramuditha

    2016-01-01

    Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…

  18. The TETRAD Project: Constraint Based Aids to Causal Model Specification.

    ERIC Educational Resources Information Center

    Scheines, Richard; Spirtes, Peter; Glymour, Clark; Meek, Christopher; Richardson, Thomas

    1998-01-01

    The TETRAD project on constraint-based aids to causal model specification, together with related work in computer science, aims to apply standards of rigor and precision to the problem of using data and background knowledge to make inferences about a model's specification. Several algorithms that are implemented in the TETRAD II program are presented. (SLD)

  19. A Novel Constraint for Thermodynamically Designing DNA Sequences

    PubMed Central

    Zhang, Qiang; Wang, Bin; Wei, Xiaopeng; Zhou, Changjun

    2013-01-01

    Biotechnological and biomolecular advances have introduced novel uses for DNA such as DNA computing, storage, and encryption. For these applications, DNA sequence design requires maximizing desired (and minimizing undesired) hybridizations, in which two single DNA strands combine to form a single new double-stranded product. Here, we propose a novel constraint for designing DNA sequences based on thermodynamic properties. Existing constraints for DNA design are based on the Hamming distance, a measure that does not address the thermodynamic properties of the DNA sequence. Using a unique, improved genetic algorithm, we designed DNA sequence sets that satisfy different distance constraints and employ a free energy gap based on the minimum free energy (MFE) to gauge DNA sequences by the thermodynamic properties of the set. When compared with the best Hamming-distance constraints, our method yielded better thermodynamic quality. We then used our improved genetic algorithm to obtain lower-bound DNA sequence sets. Here, we discuss the effects of the novel constraint parameters on the free energy gap. PMID:24015217
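
    For contrast with the thermodynamic constraint advocated above, the sketch below shows the classical Hamming-distance-style checks (pairwise distance and distance against reverse complements) that a candidate DNA code word set must satisfy. The sequences and the distance threshold are illustrative, and no free-energy computation is attempted.

      from itertools import combinations

      COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def reverse_complement(seq):
          return "".join(COMPLEMENT[b] for b in reversed(seq))

      def satisfies_distance_constraints(seqs, d_min):
          """Check that every pair of equal-length sequences differs in at least
          d_min positions, both directly and against each other's reverse
          complement (a combinatorial surrogate for avoiding undesired
          hybridizations), and that each sequence is far from its own
          reverse complement."""
          for a, b in combinations(seqs, 2):
              if hamming(a, b) < d_min or hamming(a, reverse_complement(b)) < d_min:
                  return False
          return all(hamming(s, reverse_complement(s)) >= d_min for s in seqs)

      if __name__ == "__main__":
          code = ["ATCGGCTA", "GGATACCT", "CTTAGGAC"]   # hypothetical 8-mers
          print(satisfies_distance_constraints(code, d_min=4))   # True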

  20. A novel constraint for thermodynamically designing DNA sequences.

    PubMed

    Zhang, Qiang; Wang, Bin; Wei, Xiaopeng; Zhou, Changjun

    2013-01-01

    Biotechnological and biomolecular advances have introduced novel uses for DNA such as DNA computing, storage, and encryption. For these applications, DNA sequence design requires maximizing desired (and minimizing undesired) hybridizations, in which two single DNA strands combine to form a single new double-stranded product. Here, we propose a novel constraint for designing DNA sequences based on thermodynamic properties. Existing constraints for DNA design are based on the Hamming distance, a measure that does not address the thermodynamic properties of the DNA sequence. Using a unique, improved genetic algorithm, we designed DNA sequence sets that satisfy different distance constraints and employ a free energy gap based on the minimum free energy (MFE) to gauge DNA sequences by the thermodynamic properties of the set. When compared with the best Hamming-distance constraints, our method yielded better thermodynamic quality. We then used our improved genetic algorithm to obtain lower-bound DNA sequence sets. Here, we discuss the effects of the novel constraint parameters on the free energy gap. PMID:24015217

  1. A Framework for Optimal Control Allocation with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc

    2010-01-01

    Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
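
    The flavor of optimal control allocation with additional limits can be sketched as a bounded least-squares problem: find surface deflections u minimizing ||Bu - m_des|| subject to position limits, with a structural-load term folded in as a penalized extra row. The effectiveness matrix, load-sensitivity row, and limits below are invented numbers, and scipy's lsq_linear is used here as a stand-in rather than the allocator of the paper.

      import numpy as np
      from scipy.optimize import lsq_linear

      # Hypothetical control-effectiveness matrix: 3 moments (roll, pitch, yaw)
      # produced by 5 redundant surfaces, plus a row of structural-load sensitivities.
      B = np.array([[ 1.0, -1.0, 0.2, 0.0,  0.1],
                    [ 0.3,  0.3, 1.0, 1.0, -0.2],
                    [ 0.1, -0.1, 0.0, 0.2,  1.0]])
      load_row = np.array([0.5, 0.5, 0.1, 0.1, 0.0])  # load per unit deflection (made up)

      def allocate(m_des, load_weight=5.0, u_limit=0.5):
          """Least-squares allocation of desired moments m_des to surface deflections,
          with position limits as hard bounds and the structural load discouraged by
          a penalty row (a true hard load limit would need a QP with inequality
          constraints rather than lsq_linear's simple variable bounds)."""
          A = np.vstack([B, load_weight * load_row])
          b = np.concatenate([m_des, [0.0]])
          res = lsq_linear(A, b, bounds=(-u_limit, u_limit))
          return res.x

      if __name__ == "__main__":
          u = allocate(np.array([0.2, 0.4, -0.1]))
          print("deflections:", np.round(u, 3))
          print("achieved moments:", np.round(B @ u, 3),
                "predicted load:", round(float(load_row @ u), 3))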

  2. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  3. Unraveling Quantum Annealers using Classical Hardness.

    PubMed

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize 'temperature chaos' as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  4. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    ERIC Educational Resources Information Center

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  5. A Constraint-Based Planner for Data Production

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Golden, Keith

    2005-01-01

    This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. The algorithm performs backtracking at two nested levels: the outer backtracking follows the structure of the planning graph to select planner subgoals and the actions that achieve them, while the inner backtracking searches inside the subproblem associated with a selected action to find action parameter values. We show that this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.

  6. Adaptive laser link reconfiguration using constraint propagation

    NASA Technical Reports Server (NTRS)

    Crone, M. S.; Julich, P. M.; Cook, L. M.

    1993-01-01

    This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space-based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BPs and to ground entry points; long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BPs based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-sight data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris-developed COnstraint Propagation Expert System (COPES) Shell, initially developed on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of constraint propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES, using Data Flow Diagrams, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. The generation of simulation data is described along with its interface via COPES to the Harris-developed View Net graphical tool for visual analysis of communications

  7. Constraints influencing sports wheelchair propulsion performance and injury risk

    PubMed Central

    2013-01-01

    The Paralympic Games are the pinnacle of sport for many athletes with a disability. A potential issue for many wheelchair athletes is how to train hard to maximise performance while also reducing the risk of injuries, particularly to the shoulder due to the accumulation of stress placed on this joint during activities of daily living, training and competition. The overall purpose of this narrative review was to use the constraints-led approach of dynamical systems theory to examine how various constraints acting upon the wheelchair-user interface may alter hand rim wheelchair performance during sporting activities, and to a lesser extent, their injury risk. As we found no studies involving Paralympic athletes that have directly utilised the dynamical systems approach to interpret their data, we have used this approach to select some potential constraints and discussed how they may alter wheelchair performance and/or injury risk. Organism constraints examined included player classifications, wheelchair setup, training and intrinsic injury risk factors. Task constraints examined the influence of velocity and types of locomotion (court sports vs racing) in wheelchair propulsion, while environmental constraints focused on forces that tend to oppose motion such as friction and surface inclination. Finally, the ecological validity of the research studies assessing wheelchair propulsion was critiqued prior to recommendations for practice and future research being given. PMID:23557065

  8. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
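
    For reference, a plain chronological backtracking solver over an explicit binary constraint network is sketched below; omega-CDBT augments this basic scheme with information from the constraint graph's structure, which is not reproduced here, and the coloring-style example problem is invented.

      def backtrack(domains, constraints, assignment=None):
          """Basic chronological backtracking for a binary CSP.
          domains: dict var -> list of values; constraints: dict (u, v) -> predicate(a, b).
          Returns a complete consistent assignment, or None if none exists."""
          if assignment is None:
              assignment = {}
          if len(assignment) == len(domains):
              return assignment
          var = next(v for v in domains if v not in assignment)
          for val in domains[var]:
              ok = True
              for (u, w), pred in constraints.items():
                  if var == u and w in assignment and not pred(val, assignment[w]):
                      ok = False
                  if var == w and u in assignment and not pred(assignment[u], val):
                      ok = False
              if ok:
                  assignment[var] = val
                  result = backtrack(domains, constraints, assignment)
                  if result is not None:
                      return result
                  del assignment[var]
          return None

      if __name__ == "__main__":
          # 3-coloring of a small constraint graph: adjacent variables must differ.
          domains = {v: ["r", "g", "b"] for v in "ABCD"}
          edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
          constraints = {e: (lambda a, b: a != b) for e in edges}
          print(backtrack(domains, constraints))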

  9. Hard Work and Hard Data: Getting Our Message Out.

    ERIC Educational Resources Information Center

    Glau, Gregory R.

    Unless questions about student performance and student retention can be answered and unless educators are proactive in finding and publicizing such information, basic writing programs cannot determine if what they are doing is working. Hard data, especially from underrepresented groups, is needed to support these programs. At Arizona State…

  10. Future hard disk drive systems

    NASA Astrophysics Data System (ADS)

    Wood, Roger

    2009-03-01

    This paper briefly reviews the evolution of today's hard disk drive with the additional intention of orienting the reader to the overall mechanical and electrical architecture. The modern hard disk drive is a miracle of storage capacity and function together with remarkable economy of design. This paper presents a personal view of future customer requirements and the anticipated design evolution of the components. There are critical decisions and great challenges ahead for the key technologies of heads, media, head-disk interface, mechanics, and electronics.