Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
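The idea of sizing the largest uncertainty hyper-sphere for which the design requirements hold can be sketched on a toy problem. The constraint function, the sampling-based worst-case estimate, and the bisection tolerances below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def worst_case(g, p0, r, n_samples=2000, seed=0):
    """Approximate the worst-case constraint value over the hyper-sphere of
    radius r centred at the nominal parameter p0 (sampling sketch; a real
    analysis would solve this inner maximisation rigorously)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_samples, len(p0)))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # directions on the unit sphere
    return max(g(p0 + r * ui) for ui in u)

def largest_feasible_radius(g, p0, r_hi=10.0, tol=1e-3):
    """Bisection for the largest r such that g(p) <= 0 for all ||p - p0|| <= r."""
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case(g, p0, mid) <= 0.0:
            lo = mid      # requirement holds on this sphere: grow it
        else:
            hi = mid      # requirement violated: shrink it
    return lo

# toy hard constraint g(p) = ||p||^2 - 4 <= 0 around the nominal p0 = 0;
# the largest feasible hyper-sphere then has radius 2
g = lambda p: float(p @ p) - 4.0
r_star = largest_feasible_radius(g, np.zeros(2))
```

Comparing `r_star` against the radius of the actual uncertainty model then gives the robustness verdict described in the abstract.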
Gaining Algorithmic Insight through Simplifying Constraints.
ERIC Educational Resources Information Center
Ginat, David
2002-01-01
Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic of simplifying constraints, which involves simplifying a given problem to one in which constraints are imposed on the input data. Presents three examples involving…
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., they must be satisfied for all parameter realizations in the uncertainty model, or in the soft sense, i.e., they can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows the designer (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Easy and hard testbeds for real-time search algorithms
Koenig, S.; Simmons, R.G.
1996-12-31
Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a superset of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable in general are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.
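A minimal sketch of the heuristic-learning loop behind real-time search algorithms such as LRTA*, on a tiny undirected (hence Eulerian) graph; the unit edge costs, zero-initialised heuristic, and toy graph are assumptions for illustration:

```python
def lrta_star(neighbors, start, goal, max_steps=100):
    """Learning Real-Time A* sketch: act with one-step lookahead and raise
    the heuristic of the current state after each move (unit edge costs,
    heuristic initialised to zero)."""
    h, s, steps = {}, start, 0
    while s != goal and steps < max_steps:
        # pick the successor minimising edge cost plus learned heuristic
        best = min(neighbors[s], key=lambda t: 1 + h.get(t, 0))
        h[s] = max(h.get(s, 0), 1 + h.get(best, 0))   # learning update
        s, steps = best, steps + 1
    return steps

# undirected (hence Eulerian-easy) 4-cycle: 0-1-2-3-0
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
steps_taken = lrta_star(graph, start=0, goal=2)
```

On hard non-Eulerian domains of the kind the paper constructs, the number of such steps (and heuristic updates) can blow up, which is precisely what the testbeds are designed to expose.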
2014-01-01
The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results. PMID:24991645
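The basic firefly sweep that the paper modifies can be sketched as below; the sphere objective, the parameter values, and the geometric cooling of the randomisation weight are illustrative assumptions, not the authors' modification:

```python
import numpy as np

def firefly_step(X, f, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One sweep of the basic firefly algorithm: each firefly moves toward
    every brighter (better) one with distance-decaying attractiveness,
    plus a small random walk."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = X.copy()
    n, d = X.shape
    fitness = np.array([f(x) for x in X])   # brightness, evaluated once per sweep
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:     # j is brighter (minimisation)
                r2 = float(np.sum((X[j] - X[i]) ** 2))
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                X[i] = X[i] + beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, d)
    return X

# minimise the sphere function with a small swarm (illustrative setup)
f = lambda x: float(x @ x)
rng = np.random.default_rng(2)
X = rng.uniform(-2.0, 2.0, size=(15, 2))
best0 = min(f(x) for x in X)
for t in range(100):
    X = firefly_step(X, f, alpha=0.2 * 0.95 ** t, rng=rng)   # cool the noise over time
best = min(f(x) for x in X)
```

Because the brightest firefly never moves during a sweep, the swarm's best objective value is non-increasing; the paper's modification targets the exploration behaviour of exactly this kind of loop in early iterations.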
Time-reversible molecular dynamics algorithms with bond constraints
NASA Astrophysics Data System (ADS)
Toxvaerd, Søren; Heilmann, Ole J.; Ingebrigtsen, Trond; Schrøder, Thomas B.; Dyre, Jeppe C.
2009-08-01
Time-reversible molecular dynamics algorithms with bond constraints are derived. The algorithms are stable with and without a thermostat, in double precision as well as in single-precision arithmetic. Time reversibility is achieved by applying a central-difference expression for the velocities in the expression for Gauss' principle of least constraint. The imposed time symmetry results in a quadratic expression for the Lagrange multiplier. For a system of complex molecules with connected constraints, the corresponding set of coupled quadratic equations is easily solved by a consecutive iteration scheme. The algorithms were tested on two models: one is a dumbbell model of toluene; the other consists of molecules with four connected constraints forming a triangle and a branch point of constraints. The equilibrium particle distributions and the mean-square particle displacements for the dumbbell model were compared to the corresponding functions obtained by GROMACS. The agreement is perfect within statistical error.
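The appearance of a quadratic equation for the Lagrange multiplier can be illustrated on a single rigid bond. This is a SHAKE-style position-Verlet sketch, not the authors' central-difference time-reversible scheme; the time step, masses, and force-free setup are assumptions:

```python
import numpy as np

def verlet_bond_step(r1, r2, r1_prev, r2_prev, d, dt=0.01, m=1.0, f1=None, f2=None):
    """One position-Verlet step for two equal-mass particles joined by a rigid
    bond of length d; the constraint is enforced by solving a quadratic
    equation for the Lagrange multiplier lam (illustrative sketch)."""
    f1 = np.zeros(3) if f1 is None else f1
    f2 = np.zeros(3) if f2 is None else f2
    # unconstrained Verlet positions
    r1_new = 2 * r1 - r1_prev + dt * dt * f1 / m
    r2_new = 2 * r2 - r2_prev + dt * dt * f2 / m
    s = r1_new - r2_new      # unconstrained bond vector
    r = r1 - r2              # old bond vector (direction of the correction)
    # enforce |s - 2*lam*r|^2 = d^2, a quadratic a*lam^2 + b*lam + c = 0
    a = 4.0 * (r @ r)
    b = -4.0 * (s @ r)
    c = (s @ s) - d * d
    lam = (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # small root
    return r1_new - lam * r, r2_new + lam * r

# force-free step starting from a slightly stretched configuration
r1, r2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
r1n, r2n = verlet_bond_step(r1 + [0.001, 0.0, 0.0], r2, r1, r2, d=1.0)
bond = np.linalg.norm(r1n - r2n)
```

The smaller root of the quadratic is the physically meaningful correction; for connected constraints the abstract's consecutive iteration scheme cycles over such coupled quadratics.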
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints found in real applications are less studied, especially when one task is involved in several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
A synthetic dataset for evaluating soft and hard fusion algorithms
NASA Astrophysics Data System (ADS)
Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey
2011-06-01
There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.
Constraint identification and algorithm stabilization for degenerate nonlinear programs.
Wright, S. J.; Mathematics and Computer Science
2003-01-01
In the vicinity of a solution of a nonlinear programming problem at which both strict complementarity and linear independence of the active constraints may fail to hold, we describe a technique for distinguishing weakly active from strongly active constraints. We show that this information can be used to modify the sequential quadratic programming algorithm so that it exhibits superlinear convergence to the solution under assumptions weaker than those made in previous analyses.
An active set algorithm for nonlinear optimization with polyhedral constraints
NASA Astrophysics Data System (ADS)
Hager, William W.; Zhang, Hongchao
2016-08-01
A polyhedral active set algorithm PASA is developed for solving a nonlinear optimization problem whose feasible set is a polyhedron. Phase one of the algorithm is the gradient projection method, while phase two is any algorithm for solving a linearly constrained optimization problem. Rules are provided for branching between the two phases. Global convergence to a stationary point is established, while asymptotically PASA performs only phase two when either a nondegeneracy assumption holds, or the active constraints are linearly independent and a strong second-order sufficient optimality condition holds.
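Phase one's gradient projection step can be sketched on a toy box-constrained problem; the quadratic objective, fixed step size, and box feasible set are illustrative assumptions (PASA's phase two and its branching rules are not modeled):

```python
import numpy as np

def projected_gradient(grad, project, x0, step=0.1, iters=500):
    """Gradient projection sketch: take a gradient step, then project back
    onto the feasible polyhedron."""
    x = x0.astype(float)
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# minimise f(x) = ||x - c||^2 over the box [0, 1]^2 with c outside the box;
# the minimiser is the projection of c onto the box, here (1, 0)
c = np.array([2.0, -0.5])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)   # closed-form projection onto the box
x_star = projected_gradient(grad, project, np.array([0.5, 0.5]))
```

For a general polyhedron the projection is itself a small quadratic program rather than a componentwise clip; the box case is chosen here only so the projection has a closed form.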
An iterative hard thresholding algorithm for CS MRI
NASA Astrophysics Data System (ADS)
Rajani, S. R.; Reddy, M. Ramasubba
2012-02-01
The recently proposed compressed sensing theory equips us with methods to recover, exactly or approximately, high resolution images from very few encoded measurements of the scene. The traditionally ill-posed problem of MRI image recovery from heavily under-sampled k-space data can thus be solved using CS theory. Differing from the soft thresholding methods that have been used earlier in CS MRI, we suggest a simple iterative hard thresholding algorithm which efficiently recovers diagnostic quality MRI images from highly incomplete k-space measurements. The new multi-scale redundant systems, curvelets and contourlets, having high directionality and anisotropy and thus best suited for curved-edge representation, are used in this iterative hard thresholding framework for CS MRI reconstruction and their performance is compared. k-space under-sampling schemes such as variable density sampling and the more conventional radial sampling are tested at the same sampling rate, and the effect of the encoding scheme on iterative hard thresholding compressed sensing reconstruction is studied.
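The plain iterative hard thresholding recursion (without the curvelet/contourlet transforms used in the paper) can be sketched as follows; the random row-orthonormal sensing matrix and the sparse test signal are illustrative assumptions:

```python
import numpy as np

def iht(A, y, s, iters=200):
    """Iterative hard thresholding sketch: a gradient step on ||y - Ax||^2
    followed by keeping only the s largest-magnitude coefficients."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x)             # Landweber / gradient step
        small = np.argsort(np.abs(x))[:-s]    # all but the s largest entries
        x[small] = 0.0                        # hard threshold
    return x

rng = np.random.default_rng(1)
# row-orthonormal sensing matrix scaled so that ||A|| < 1, as IHT assumes
A = 0.99 * np.linalg.qr(rng.normal(size=(100, 40)))[0].T
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_rec = iht(A, y, s=3)
```

With the operator norm below one, the data-fit objective is non-increasing across iterations, and the iterate is s-sparse by construction; in the paper's setting the thresholding acts on curvelet or contourlet coefficients rather than directly on pixels.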
Heinstein, M.W.
1997-10-01
A contact enforcement algorithm has been developed for matrix-free quasistatic finite element techniques. Matrix-free (iterative) solution algorithms such as nonlinear Conjugate Gradients (CG) and Dynamic Relaxation (DR) are distinctive in that the number of iterations required for convergence is typically of the same order as the number of degrees of freedom of the model. From iteration to iteration, the contact normal and tangential forces vary significantly, making contact constraint satisfaction tenuous. Furthermore, global determination and enforcement of the contact constraints every iteration could be questioned on the grounds of efficiency. This work addresses this situation by introducing an intermediate iteration for treating the active gap constraint and at the same time exactly (kinematically) enforcing the linearized gap rate constraint for both frictionless and frictional response.
Leaf Sequencing Algorithm Based on MLC Shape Constraint
NASA Astrophysics Data System (ADS)
Jing, Jia; Pei, Xi; Wang, Dong; Cao, Ruifen; Lin, Hui
2012-06-01
Intensity modulated radiation therapy (IMRT) requires the determination of the appropriate multileaf collimator settings to deliver an intensity map. The purpose of this work was to attempt to regulate the shape between adjacent multileaf collimator apertures by a leaf sequencing algorithm. To qualify and validate this algorithm, the integral test for the segment of the multileaf collimator of ARTS was performed with clinical intensity map experiments. By comparisons and analyses of the total number of monitor units and number of segments with benchmark results, the proposed algorithm performed well while the segment shape constraint produced segments with more compact shapes when delivering the planned intensity maps, which may help to reduce the multileaf collimator's specific effects.
Evolutionary algorithm based structure search for hard ruthenium carbides
NASA Astrophysics Data System (ADS)
Harikrishnan, G.; Ajith, K. M.; Chandra, Sharat; Valsakumar, M. C.
2015-12-01
An exhaustive structure search employing an evolutionary algorithm and density functional theory has been carried out for ruthenium carbides in the three stoichiometries Ru1C1, Ru2C1, and Ru3C1, yielding the five lowest-energy structures. These include the structures from the two reported syntheses of ruthenium carbides; their emergence in the present structure search, in stoichiometries unlike the previously reported ones, is plausible in light of the high temperature required for their synthesis. The mechanical stability and ductile character of all these systems are established by their elastic constants, and the dynamical stability of three of them by the phonon data. The rhombohedral structure (R-3m) is found to be energetically the most stable one in the Ru1C1 stoichiometry, and the hexagonal structure (P-6m2) the most stable in the Ru3C1 stoichiometry. The RuC zinc-blende system is a semiconductor with a band gap of 0.618 eV, while the other two stable systems are metallic. Employing a semi-empirical model based on bond strength, the hardness of RuC zinc-blende is found to be a significantly large value of ~37 GPa, while a fairly large value of ~21 GPa is obtained for the RuC rhombohedral system. The positive formation energies of these systems show that high temperature, and possibly high pressure, are necessary for their synthesis.
Emissivity range constraints algorithm for multi-wavelength pyrometer (MWP).
Xing, Jian; Rana, R S; Gu, Weihong
2016-08-22
In order to realize rapid and true temperature measurement of high-temperature targets by a multi-wavelength pyrometer (MWP), a data processing algorithm based on emissivity range constraints, which is free of assumptions about emissivity, has been developed. By exploring the relation between emissivity deviation and true temperature through fitting a large number of data from different emissivity-distribution target models, the effective search range of emissivity for each iteration is obtained, so data processing time is greatly reduced. Simulation and experimental results indicate that the calculation time is below 0.2 seconds with 25 K absolute error at 1800 K true temperature, and the efficiency is improved by more than 90% compared with the previous algorithm. The method has the advantages of simplicity, rapidity, and suitability for in-line high temperature measurement. PMID:27557198
A multiagent evolutionary algorithm for constraint satisfaction problems.
Liu, Jing; Zhong, Weicai; Jiao, Licheng
2006-02-01
With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queen problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs with n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queen problems, MAEA-CSPs finds solutions within only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained. PMID:16468566
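The n-queen experiments rest on conflict-minimising moves; a generic min-conflicts local search for n-queens (not the MAEA-CSPs agent scheme itself, and at a far smaller scale) can be sketched as:

```python
import random

def min_conflicts_queens(n, max_steps=1000, seed=0):
    """Min-conflicts local search for n-queens: queens[c] is the row of the
    queen in column c; repeatedly re-place a conflicted queen in a row that
    minimises its conflicts, breaking ties at random."""
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # queens attack along rows and diagonals (columns are distinct by encoding)
        return sum(1 for c in range(n) if c != col and
                   (queens[c] == row or abs(queens[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, queens[c]) > 0]
        if not conflicted:
            return queens
        col = rng.choice(conflicted)
        scores = [conflicts(col, r) for r in range(n)]
        best = min(scores)
        queens[col] = rng.choice([r for r in range(n) if scores[r] == best])
    return None

def solve_queens(n, restarts=30):
    """Random restarts guard against the occasional plateau."""
    for seed in range(restarts):
        sol = min_conflicts_queens(n, seed=seed)
        if sol is not None:
            return sol
    return None

solution = solve_queens(30)
```

Encoding one queen per column makes the column constraint implicit, which is the same spirit as the permutation encodings the abstract distinguishes from general encodings.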
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed to process a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform a chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm has shown a 25.6% reduction in tardiness, equal to 43.5 hours.
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or
NASA Astrophysics Data System (ADS)
Virrueta, A.; Gaines, J.; O'Hern, C. S.; Regan, L.
2015-03-01
Current research in the O'Hern and Regan laboratories focuses on the development of hard-sphere models with stereochemical constraints for protein structure prediction as an alternative to molecular dynamics methods that utilize knowledge-based corrections in their force-fields. Beginning with simple hydrophobic dipeptides like valine, leucine, and isoleucine, we have shown that our model is able to reproduce the side-chain dihedral angle distributions derived from sets of high-resolution protein crystal structures. However, methionine remains an exception - our model yields a chi-3 side-chain dihedral angle distribution that is relatively uniform from 60 to 300 degrees, while the observed distribution displays peaks at 60, 180, and 300 degrees. Our goal is to resolve this discrepancy by considering clashes with neighboring residues, and averaging the reduced distribution of allowable methionine structures taken from a set of crystallized proteins. We will also re-evaluate the electron density maps from which these protein structures are derived to ensure that the methionines and their local environments are correctly modeled. This work will ultimately serve as a tool for computing side-chain entropy and protein stability. A. V. is supported by an NSF Graduate Research Fellowship and a Ford Foundation Fellowship. J. G. is supported by NIH training Grant NIH-5T15LM007056-28.
Approximation algorithms for NEXPTIME-hard periodically specified problems and domino problems
Marathe, M.V.; Hunt, H.B., III; Stearns, R.E.; Rosenkrantz, D.J.
1996-02-01
We study the efficient approximability of two general classes of problems: (1) optimization versions of the domino problems studied in [Ha85, Ha86, vEB83, SB84] and (2) graph and satisfiability problems specified using various kinds of periodic specifications. Both easiness and hardness results are obtained. Our efficient approximation algorithms and schemes are based on extensions of these ideas. Two properties of the results obtained here are: (1) for the first time, efficient approximation algorithms and schemes have been developed for natural NEXPTIME-complete problems; (2) our results are the first polynomial time approximation algorithms with good performance guarantees for `hard` problems specified using the various kinds of periodic specifications considered in this paper. Our results significantly extend the results in [HW94, Wa93, MH+94].
Model predictive driving simulator motion cueing algorithm with actuator-based constraints
NASA Astrophysics Data System (ADS)
Garrett, Nikhil J. I.; Best, Matthew C.
2013-08-01
The simulator motion cueing problem has been considered extensively in the literature; approaches based on linear filtering and optimal control have been presented and shown to perform reasonably well. More recently, model predictive control (MPC) has been considered as a variant of the optimal control approach; MPC is perhaps an obvious candidate for motion cueing due to its ability to deal with constraints, in this case the platform workspace boundary. This paper presents an MPC-based cueing algorithm that, unlike other algorithms, uses the actuator positions and velocities as the constraints. The result is a cueing algorithm that can make better use of the platform workspace whilst ensuring that its bounds are never exceeded. The algorithm is shown to perform well against the classical cueing algorithm and an algorithm previously proposed by the authors, both in simulation and in tests with human drivers.
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for the kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out: the DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Parallelized event chain algorithm for dense hard sphere and polymer systems
Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan
2015-01-15
We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.
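The hand-off of a displacement budget from sphere to sphere at each contact, the heart of the event chain move, can be illustrated in one dimension; the hard-rod system on a ring, the rod length, and the chain length are illustrative assumptions (the paper's parallel 2D hard-disk algorithm is not modeled):

```python
import random

def event_chain_move(x, sigma, L, ell, rng):
    """One event-chain move for 1D hard rods of length sigma on a ring of
    circumference L: a total displacement budget ell is carried rightwards
    and handed from rod to rod at each contact."""
    x = sorted(x)
    n = len(x)
    i = rng.randrange(n)                  # starting rod of the chain
    remaining = ell
    while remaining > 1e-12:
        j = (i + 1) % n
        gap = (x[j] - x[i] - sigma) % L   # free space up to the next rod
        step = min(remaining, gap)
        x[i] = (x[i] + step) % L
        remaining -= step
        i = j                             # contact reached: pass the chain on
    return sorted(x)

rng = random.Random(0)
L, sigma, n = 20.0, 1.0, 10               # half-filled ring
x = [2.0 * k for k in range(n)]           # evenly spaced initial rods
for _ in range(200):
    x = event_chain_move(x, sigma, L, 1.5, rng)
```

Because rods cannot pass each other, the cyclic neighbour order is preserved throughout a move, and the hard-core (no overlap) constraint is maintained exactly; in 2D the "next contact" search along the chain direction plays the role of the gap computation here.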
A fast multigrid algorithm for energy minimization under planar density constraints.
Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science
2010-09-07
The two-dimensional layout optimization problem, reinforced by the demand for efficient space utilization, has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear-time multigrid algorithm for solving a correction to this problem. The method is demonstrated on various graph drawing (visualization) instances.
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
On-line reentry guidance algorithm with both path and no-fly zone constraints
NASA Astrophysics Data System (ADS)
Zhang, Da; Liu, Lei; Wang, Yongji
2015-12-01
This study proposes an on-line predictor-corrector reentry guidance algorithm that satisfies path and no-fly zone constraints for hypersonic vehicles with a high lift-to-drag ratio. The proposed guidance algorithm can generate a feasible trajectory at each guidance cycle during the entry flight. In the longitudinal profile, numerical predictor-corrector approaches are used to predict the flight capability from current flight states to expected terminal states and to generate an on-line reference drag acceleration profile. The path constraints on heat rate, aerodynamic load, and dynamic pressure are implemented as a part of the predictor-corrector algorithm. A tracking control law is then designed to track the reference drag acceleration profile. In the lateral profile, a novel guidance algorithm is presented. The velocity azimuth angle error threshold and artificial potential field method are used to reduce heading error and to avoid the no-fly zone. Simulated results for nominal and dispersed cases show that the proposed guidance algorithm not only can avoid the no-fly zone but can also steer a typical entry vehicle along a feasible 3D trajectory that satisfies both terminal and path constraints.
NEW CONSTRAINTS ON THE BLACK HOLE LOW/HARD STATE INNER ACCRETION FLOW WITH NuSTAR
Miller, J. M.; King, A. L.; Tomsick, J. A.; Boggs, S. E.; Bachetti, M.; Wilkins, D.; Christensen, F. E.; Craig, W. W.; Fabian, A. C.; Kara, E.; Grefenstette, B. W.; Harrison, F. A.; Hailey, C. J.; Stern, D. K.; Zhang, W. W.
2015-01-20
We report on an observation of the Galactic black hole candidate GRS 1739–278 during its 2014 outburst, obtained with NuSTAR. The source was captured at the peak of a rising "low/hard" state, at a flux of ∼0.3 Crab. A broad, skewed iron line and disk reflection spectrum are revealed. Fits to the sensitive NuSTAR spectra with a number of relativistically blurred disk reflection models yield strong geometrical constraints on the disk and hard X-ray "corona". Two models that explicitly assume a "lamp post" corona find its base to have a vertical height above the black hole of h = 5^{+7}_{-2} GM/c^2 and h = 18 ± 4 GM/c^2 (90% confidence errors); models that do not assume a "lamp post" return emissivity profiles that are broadly consistent with coronae of this size. Given that X-ray microlensing studies of quasars and reverberation lags in Seyferts find similarly compact coronae, observations may now signal that compact coronae are fundamental across the black hole mass scale. All of the models fit to GRS 1739–278 find that the accretion disk extends very close to the black hole: the least stringent constraint is r_in = 5^{+3}_{-4} GM/c^2. Only two of the models deliver meaningful spin constraints, but a = 0.8 ± 0.2 is consistent with all of the fits. Overall, the data provide especially compelling evidence of an association between compact hard X-ray coronae and the base of relativistic radio jets in black holes.
Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas
NASA Technical Reports Server (NTRS)
Smith, Barbara M.; Bennett, Sean
1992-01-01
A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
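A minimal sketch of the forward checking algorithm the abstract refers to; the domains and constraints here are illustrative, not the rota problem's.

```python
def forward_check_solve(domains, constraints):
    """Forward checking for a binary CSP: after each assignment, prune
    inconsistent values from every unassigned variable's domain and
    backtrack as soon as any domain empties. `constraints[(x, y)]` is
    a predicate on the pair of values (vx, vy)."""
    variables = list(domains)

    def search(assign, doms):
        if len(assign) == len(variables):
            return dict(assign)
        var = next(v for v in variables if v not in assign)
        for val in doms[var]:
            pruned = {v: list(d) for v, d in doms.items()}
            pruned[var] = [val]
            ok = True
            for other in variables:
                if other in assign or other == var:
                    continue
                if (var, other) in constraints:
                    pred = constraints[(var, other)]
                    pruned[other] = [w for w in pruned[other] if pred(val, w)]
                if (other, var) in constraints:
                    pred = constraints[(other, var)]
                    pruned[other] = [w for w in pruned[other] if pred(w, val)]
                if not pruned[other]:
                    ok = False  # empty domain: this value cannot work
                    break
            if ok:
                sol = search({**assign, var: val}, pruned)
                if sol is not None:
                    return sol
        return None

    return search({}, {v: list(d) for v, d in domains.items()})
```

A feasible assignment returned this way could then be handed to a local-improvement pass, as the abstract describes.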
Zhang, Jinkai; Rivard, Benoit; Rogge, D.M.
2008-01-01
Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem, and it assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on the convex geometry and orthogonal projection common to other endmember-search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently, it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (the AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (like IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets have spatial continuity (e.g., bedrock lithologies) in imagery, and thus a spatial constraint would be beneficial in the endmember search. An additional product of the SPA is data describing the change of the simplex volume ratio between successive iterations during the endmember extraction. It illustrates the influence of a new endmember on the data structure, and provides information on the convergence of the algorithm. It can provide a general guideline to constrain the total number of endmembers in a search.
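The orthogonal-projection core of an SPA-style endmember search can be sketched as follows; the spatial-adjacency constraint that distinguishes the paper's SPA is omitted here.

```python
def successive_projections(X, n_endmembers):
    """Toy successive-projection endmember search: repeatedly pick the
    pixel (row) with the largest residual norm as the next endmember,
    then project every pixel onto the orthogonal complement of the
    chosen spectrum so its contribution is removed."""
    R = [list(map(float, row)) for row in X]
    chosen = []
    for _ in range(n_endmembers):
        norms = [sum(v * v for v in r) for r in R]
        i = norms.index(max(norms))
        chosen.append(i)
        nrm = norms[i] ** 0.5
        u = [v / nrm for v in R[i]]  # unit vector of the chosen spectrum
        new_R = []
        for r in R:
            dot = sum(a * b for a, b in zip(r, u))
            new_R.append([a - dot * b for a, b in zip(r, u)])
        R = new_R
    return chosen
```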
An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints
Sung, Jinmo; Jeong, Bongju
2014-01-01
The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158
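One feasibility-preserving construction in the spirit of such operators is a randomized topological ordering; this is a sketch assuming an acyclic precedence graph, not the authors' genetic operators.

```python
import random

def feasible_insertion_tour(n, precedences, seed=0):
    """Build a tour over cities 0..n-1 that respects precedence
    constraints (i must precede j): only cities whose predecessors are
    all already placed are eligible next. Sampling from this 'ready'
    set keeps every generated individual feasible by construction."""
    rng = random.Random(seed)
    preds = {c: set() for c in range(n)}
    for i, j in precedences:
        preds[j].add(i)
    tour, placed = [], set()
    while len(tour) < n:
        ready = [c for c in range(n) if c not in placed and preds[c] <= placed]
        c = rng.choice(ready)
        tour.append(c)
        placed.add(c)
    return tour
```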
Precise algorithm to generate random sequential addition of hard hyperspheres at saturation.
Zhang, G; Torquato, S
2013-11-01
The study of the packing of hard hyperspheres in d-dimensional Euclidean space R^{d} has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions, the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces packings that are guaranteed to contain no available space in a large simulation box using finite computational time with heretofore unattained precision and across the widest range of dimensions (2≤d≤8). We have also calculated the packing and covering densities, pair correlation function g_2(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed "decorrelation" principle, and the degree of "hyperuniformity" (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the second moment of inertia of the average
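The basic RSA process (without the saturation guarantee that is the paper's contribution) reduces to rejection sampling of non-overlapping centers; a 2D sketch:

```python
import random

def rsa_disks(box=1.0, radius=0.05, attempts=2000, seed=1):
    """Naive random sequential addition (RSA) of hard disks in a square
    box: propose uniform centers one at a time and reject any proposal
    that overlaps a previously placed disk. The paper's algorithm
    additionally tracks the remaining available space, so it can stop
    exactly at saturation instead of after a fixed attempt budget."""
    rng = random.Random(seed)
    centers = []
    for _ in range(attempts):
        x = rng.uniform(radius, box - radius)
        y = rng.uniform(radius, box - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centers):
            centers.append((x, y))
    return centers
```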
An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.
Zhang, Ye; Yu, Tenglong; Wang, Wenwu
2014-01-01
Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms. PMID:25126605
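One elementary way to impose an orthogonality constraint that rules out trivial (e.g., null) dictionaries is to orthonormalize the rows, for example by Gram-Schmidt; the paper instead builds the constraint into its optimization criterion, so this is only an illustration of the idea.

```python
def orthonormalize(rows):
    """Gram-Schmidt orthonormalization of dictionary rows. Each row is
    made orthogonal to all previously processed rows, then normalized;
    rows are assumed linearly independent."""
    out = []
    for r in rows:
        v = list(r)
        for q in out:
            dot = sum(a * b for a, b in zip(q, v))
            v = [a - dot * b for a, b in zip(v, q)]  # remove q-component
        norm = sum(a * a for a in v) ** 0.5
        out.append([a / norm for a in v])
    return out
```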
Martín H, José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object, which can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), although parametric. The only requirement is sufficient computational power, which is controlled by the parameter α∈N. Nevertheless, here it is proved that the probability of requiring a value of α>k to obtain a solution for a random graph decreases exponentially: P(α>k) ≤ 2^{-(k+1)}, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
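Independent of the paper's method, the NP-certificate side is easy to illustrate: a claimed 3-coloring is verifiable in polynomial time.

```python
def verify_3coloring(edges, coloring):
    """Polynomial-time check of an NP-certificate for graph 3-coloring:
    every vertex gets one of three colors and no edge joins two
    vertices of the same color."""
    return (all(c in (0, 1, 2) for c in coloring.values())
            and all(coloring[u] != coloring[v] for u, v in edges))
```

The asymmetry the abstract discusses is that the "absent" direction (no valid coloring exists) has no such obvious short certificate.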
Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji
2014-01-01
An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an innovative obstacle distance measure for spatial clustering under obstacle constraints. Firstly, we present a path-searching algorithm to approximate the obstacle distance between two points in the presence of obstacles and facilitators. Taking obstacle distance as the similarity metric, we subsequently propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on an artificial immune system is also applied to a public facility location problem in order to establish the practical applicability of our approach. By using the clone selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect. PMID:25435862
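A grid-based sketch of the idea of approximating obstacle distance by the shortest obstacle-avoiding path (BFS on a 4-connected grid; the paper's path-searching algorithm also handles facilitators, which are omitted here):

```python
from collections import deque

def obstacle_distance(grid, start, goal):
    """Approximate the obstacle distance between two cells as the
    length of a shortest 4-connected path avoiding obstacle cells
    (value 1). Returns None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    q = deque([(start, 0)])
    seen = {start}
    while q:
        (r, c), d = q.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return None
```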
NASA Astrophysics Data System (ADS)
Lahanas, Michael; Schreibmann, Eduard; Baltas, Dimos
2003-09-01
We consider the behaviour of the limited memory L-BFGS algorithm as a representative constraint-free gradient-based algorithm which is used for multiobjective (MO) dose optimization for intensity modulated radiotherapy (IMRT). Using a parameter transformation, the positivity constraint problem of negative beam fluences is entirely eliminated: a feature which to date has not been fully understood by all investigators. We analyse the global convergence properties of L-BFGS by searching for the existence and the influence of possible local minima. With a fast simulated annealing (FSA) algorithm we examine whether the L-BFGS solutions are globally Pareto optimal. The three examples used in our analysis are a brain tumour, a prostate tumour and a test case with a C-shaped PTV. In 1% of the optimizations global convergence is violated. A simple mechanism practically eliminates the influence of this failure and the obtained solutions are globally optimal. A single-objective dose optimization requires less than 4 s for 5400 parameters and 40 000 sampling points. The elimination of the problem of negative beam fluences and the high computational speed permit constraint-free gradient-based optimization algorithms to be used for MO dose optimization. In this situation, a representative spectrum of possible solutions is obtained which contains information such as the trade-off between the objectives and range of dose values. Using simple decision making tools the best of all the possible solutions can be chosen. We perform an MO dose optimization for the three examples and compare the spectra of solutions, firstly using recommended critical dose values for the organs at risk and secondly, setting these dose values to zero.
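The parameter transformation that removes the positivity constraint can be sketched with plain gradient descent standing in for L-BFGS: substitute x = u^2 and optimize u without constraints. The objective below is a hypothetical quadratic, not a dose objective.

```python
def minimize_positive(grad_f, u0, lr=0.1, steps=500):
    """Unconstrained gradient descent after the substitution x = u**2,
    which eliminates a positivity constraint x >= 0 in the same spirit
    as the paper's elimination of negative beam fluences. `grad_f`
    returns the gradient of the objective with respect to x."""
    u = list(u0)
    for _ in range(steps):
        x = [ui * ui for ui in u]
        g = grad_f(x)
        # chain rule: d/du f(u**2) = 2 * u * f'(x)
        u = [ui - lr * 2.0 * ui * gi for ui, gi in zip(u, g)]
    return [ui * ui for ui in u]
```

For f(x) = (x0 - 2)^2 + (x1 + 1)^2 the unconstrained minimum sits at (2, -1), but the transformed iteration settles on the constrained minimum (2, 0) with x1 pinned at the boundary.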
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
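The geometric (K-Means) half of the K&K idea can be sketched as grouping subdomain cells by spatial proximity so each node gets a compact region; load balancing and the Kernighan-Lin refinement are omitted.

```python
def kmeans_partition(points, k, iters=10):
    """Toy K-Means grouping of subdomain cell coordinates: assign each
    cell to the nearest center, then move each center to its group's
    mean. Compact groups keep inter-node communication low."""
    centers = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                  (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups
```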
Bowen, J.; Dozier, G.
1996-12-31
This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to realize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
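For contrast with the hybrid, plain hill climbing on CSPs is commonly demonstrated with min-conflicts on n-queens; this is an illustrative baseline, not the paper's Iterative Descent Method.

```python
import random

def min_conflicts_nqueens(n, max_steps=2000, restarts=20):
    """Min-conflicts hill climbing on n-queens (one queen per column):
    repeatedly move a conflicted queen to the row that minimizes its
    conflicts, restarting from a fresh random state when stuck."""
    for seed in range(restarts):
        rng = random.Random(seed)
        rows = [rng.randrange(n) for _ in range(n)]

        def conflicts(col, row):
            return sum(1 for c in range(n) if c != col and
                       (rows[c] == row or
                        abs(rows[c] - row) == abs(c - col)))

        for _ in range(max_steps):
            bad = [c for c in range(n) if conflicts(c, rows[c])]
            if not bad:
                return rows
            col = rng.choice(bad)
            rows[col] = min(range(n), key=lambda r: conflicts(col, r))
    return None
```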
A constraint-based search algorithm for parameter identification of environmental models
NASA Astrophysics Data System (ADS)
Gharari, S.; Shafiei, M.; Hrachowitz, M.; Kumar, R.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.
2014-12-01
Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available (e.g., when making predictions in ungauged basins). In this study, we provide an alternative approach for parameter identification using constraints based on two types of restrictions derived from prior (or expert) knowledge. The first, called parameter constraints, restricts the solution space based on realistic relationships that must hold between the different model parameters, while the second, called process constraints, requires that additional realism relationships between the fluxes and state variables be satisfied. Specifically, we propose a search algorithm for finding parameter sets that simultaneously satisfy such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.
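The idea of identifying parameters by constraints alone can be caricatured as rejection sampling; the parameter names and both constraints below are hypothetical stand-ins for the paper's parameter and process constraints, not taken from it.

```python
import random

def constrained_sample(n_samples, seed=0):
    """Rejection sampling of parameter sets against prior-knowledge
    constraints: a recession-rate ordering (k_fast > k_slow) standing
    in for a parameter constraint, and a storage bound
    (0 < s_max < 500) standing in for a process constraint."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n_samples:
        p = {"k_fast": rng.uniform(0.0, 1.0),
             "k_slow": rng.uniform(0.0, 1.0),
             "s_max": rng.uniform(0.0, 1000.0)}
        if p["k_fast"] > p["k_slow"] and 0.0 < p["s_max"] < 500.0:
            kept.append(p)
    return kept
```

The paper's stepwise sampling is more deliberate than blind rejection, but the accepted set plays the same role: parameter sets consistent with expert knowledge, usable as priors.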
Mathews, David H.; Disney, Matthew D.; Childs, Jessica L.; Schroeder, Susan J.; Zuker, Michael; Turner, Douglas H.
2004-01-01
A dynamic programming algorithm for prediction of RNA secondary structure has been revised to accommodate folding constraints determined by chemical modification and to include free energy increments for coaxial stacking of helices when they are either adjacent or separated by a single mismatch. Furthermore, free energy parameters are revised to account for recent experimental results for terminal mismatches and hairpin, bulge, internal, and multibranch loops. To demonstrate the applicability of this method, in vivo modification was performed on 5S rRNA in both Escherichia coli and Candida albicans with 1-cyclohexyl-3-(2-morpholinoethyl) carbodiimide metho-p-toluene sulfonate, dimethyl sulfate, and kethoxal. The percentage of known base pairs in the predicted structure increased from 26.3% to 86.8% for the E. coli sequence by using modification constraints. For C. albicans, the accuracy remained 87.5% both with and without modification data. On average, for these sequences and a set of 14 sequences with known secondary structure and chemical modification data taken from the literature, accuracy improves from 67% to 76%. This enhancement primarily reflects improvement for three sequences that are predicted with <40% accuracy on the basis of energetics alone. For these sequences, inclusion of chemical modification constraints improves the average accuracy from 28% to 78%. For the 11 sequences with <6% pseudoknotted base pairs, structures predicted with constraints from chemical modification contain on average 84% of known canonical base pairs. PMID:15123812
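A toy version of constraint-aware secondary-structure dynamic programming is a Nussinov-style base-pair maximizer (not the paper's free-energy model) in which chemically modified positions are forced to stay unpaired.

```python
def nussinov(seq, single_stranded=frozenset(), min_loop=3):
    """Base-pair-maximizing dynamic program over an RNA sequence.
    dp[i][j] = max pairs in seq[i..j]; either j is unpaired or j pairs
    with some k, splitting the interval. Indices in `single_stranded`
    (e.g., chemically modified bases) are barred from pairing."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]  # j left unpaired
            for k in range(i, j - min_loop):  # j paired with k
                if ((seq[k], seq[j]) in pairs
                        and k not in single_stranded
                        and j not in single_stranded):
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

Forcing one of the pairing positions single-stranded lowers the optimum, mirroring how modification data reshapes the predicted structure.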
Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.
Jaśkowski, Wojciech; Krawiec, Krzysztof
2011-01-01
Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has been recently shown that every test of such problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in-depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and come up with a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and for some problems converges to the intrinsic parameter of the problem--its a priori dimension. PMID:21815770
Suzaku Constraints on Soft and Hard Excess Emissions from Abell 2199
NASA Astrophysics Data System (ADS)
Kawaharada, Madoka; Makishima, Kazuo; Kitaguchi, Takao; Okuyama, Sho; Nakazawa, Kazuhiro; Fukazawa, Yasushi
2010-02-01
The nearby (z = 0.03015) cluster of galaxies Abell 2199 was observed by Suzaku in X-rays, with five pointings for ˜20 ks each. From the XIS data, the temperature and metal abundance profiles were derived out to ˜700 kpc (0.4 times the virial radius). Both of these quantities decrease gradually from the center to the peripheries by a factor of ˜2, while the oxygen abundance tends to be flat. The temperature within 12' (˜430 kpc) is ˜4 keV, and the 0.5-10 keV X-ray luminosity integrated up to 30' is (2.9±0.1) × 10^44 erg s^-1, in agreement with previous XMM-Newton measurements. Above this thermal emission, no significant excess was found either in the XIS range below ˜1 keV, or in the HXD-PIN range above ˜15 keV. The 90%-confidence upper limit on the emission measure of an assumed 0.2 keV warm gas is (3.7-7.5) × 10^62 cm^-3 arcmin^-2, which is 3.7-7.6 times tighter than the detection reported with XMM-Newton. The 90%-confidence upper limit on the 20-80 keV luminosity of any power-law component is 1.8 × 10^43 erg s^-1, assuming a photon index of 2.0. Although this upper limit does not reject the possible 2.1σ detection by the BeppoSAX PDS, it is a factor of 2.1 tighter than that of the PDS if both are considered upper limits. The non-detection of the hard excess can be reconciled with the upper limit on diffuse radio emission, without invoking very low magnetic fields (<0.073 μG) which were suggested previously.
Frutos, M; Méndez, M; Tohmé, F; Broz, D
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world Job-Shop contexts using three algorithms: NSGA-II, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool since it yields more solutions on the approximate Pareto frontier. PMID:24489502
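The non-dominated (Pareto) filtering that underlies comparing MOEA outputs can be sketched directly:

```python
def pareto_front(points):
    """Non-dominated filter for minimization objectives: keep exactly
    the points that no other point dominates. Indicators such as
    Hypervolume and R2 are then computed over this set."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```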
NASA Astrophysics Data System (ADS)
Sunaguchi, Naoki; Yuasa, Tetsuya; Ando, Masami
2013-09-01
We propose a reconstruction algorithm for analyzer-based phase-contrast computed tomography (CT) applicable to biological samples including hard tissue that may generate conspicuous artifacts with the conventional reconstruction method. The algorithm is an iterative procedure that goes back and forth between a tomogram and its sinogram through the Radon transform and CT reconstruction, while imposing a priori information in individual regions. We demonstrate the efficacy of the algorithm using synthetic data generated by computer simulation reflecting actual experimental conditions and actual data acquired from a rat foot by a dark field imaging system.
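A minimal analogue of iterating between a reconstruction and its projection data is the Kaczmarz/ART sweep on a tiny linear system; the paper's algorithm additionally imposes region-wise a priori information, which this sketch omits.

```python
def kaczmarz(A, b, iters=200):
    """Kaczmarz (ART) iteration for a consistent linear system:
    cyclically project the current estimate onto each measurement
    hyperplane a_i . x = b_i. In CT, each row would encode one ray
    through the image and b_i the measured sinogram value."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for a, bi in zip(A, b):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm2 = sum(ai * ai for ai in a)
            lam = (bi - dot) / norm2
            x = [xi + lam * ai for xi, ai in zip(x, a)]
    return x
```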
RNAiFOLD: a constraint programming algorithm for RNA inverse folding and molecular design.
Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan
2013-04-01
Synthetic biology is a rapidly emerging discipline with long-term ramifications that range from single-molecule detection within cells to the creation of synthetic genomes and novel life forms. Truly phenomenal results have been obtained by pioneering groups--for instance, the combinatorial synthesis of genetic networks, genome synthesis using BioBricks, and hybridization chain reaction (HCR), in which stable DNA monomers assemble only upon exposure to a target DNA fragment, biomolecular self-assembly pathways, etc. Such work strongly suggests that nanotechnology and synthetic biology together seem poised to constitute the most transformative development of the 21st century. In this paper, we present a Constraint Programming (CP) approach to solve the RNA inverse folding problem. Given a target RNA secondary structure, we determine an RNA sequence which folds into the target structure; i.e. whose minimum free energy structure is the target structure. Our approach represents a step forward in RNA design--we produce the first complete RNA inverse folding approach which allows for the specification of a wide range of design constraints. We also introduce a Large Neighborhood Search approach which allows us to tackle larger instances at the cost of losing completeness, while retaining the advantages of meeting design constraints (motif, GC-content, etc.). Results demonstrate that our software, RNAiFold, performs as well or better than all state-of-the-art approaches; nevertheless, our approach is unique in terms of completeness, flexibility, and the support of various design constraints. The algorithms presented in this paper are publicly available via the interactive webserver http://bioinformatics.bc.edu/clotelab/RNAiFold; additionally, the source code can be downloaded from that site. PMID:23600819
NASA Astrophysics Data System (ADS)
Khatibinia, Mohsen; Sadegh Naseralavi, Seyed
2014-12-01
Structural optimization of shape and sizing with frequency constraints is well known to be a highly nonlinear dynamic optimization problem with several local optima. Hence, efficient optimization algorithms should be utilized to solve this problem. In this study, the orthogonal multi-gravitational search algorithm (OMGSA), a meta-heuristic algorithm, is introduced to solve shape and sizing optimization of trusses with frequency constraints. The OMGSA is a hybrid approach combining a multi-gravitational search algorithm (multi-GSA) with an orthogonal crossover (OC). In multi-GSA, the population is split into several sub-populations, each of which is independently evaluated by an improved gravitational search algorithm (IGSA). The OC is then used in the proposed OMGSA in order to find and exploit the global solution in the search space. The capability of the OMGSA is demonstrated through six benchmark examples. Numerical results show that the proposed OMGSA outperforms the other optimization techniques.
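The gravitational search mechanics underlying multi-GSA can be illustrated with a minimal, self-contained sketch (a plain GSA on a toy objective; the agent count, decay constants, and test function are illustrative assumptions, not the paper's IGSA or orthogonal crossover):

```python
import math
import random

def gsa_minimize(f, dim=2, n_agents=8, iters=200, g0=100.0, alpha=20.0, seed=1):
    """Plain gravitational search algorithm (GSA) sketch: agents attract one
    another with 'gravity' proportional to fitness-derived masses, so better
    agents pull the population toward promising regions."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best, best_x = float("inf"), None
    for t in range(iters):
        fit = [f(x) for x in pos]
        if min(fit) < best:
            best = min(fit)
            best_x = list(pos[fit.index(best)])
        worst, bst = max(fit), min(fit)
        raw = [(worst - fi) / (worst - bst + 1e-12) + 1e-12 for fi in fit]
        mass = [m / sum(raw) for m in raw]     # normalized masses, best agent heaviest
        g = g0 * math.exp(-alpha * t / iters)  # gravitational 'constant' decays over time
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i != j:
                    dist = math.dist(pos[i], pos[j]) + 1e-12
                    for d in range(dim):
                        acc[d] += rng.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                pos[i][d] += vel[i][d]
    return best, best_x
```

A multi-GSA would run several such populations independently and exchange their best agents; the OC step is a further refinement on top.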
Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example
Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio
2014-01-01
Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized by extremely high dimensionality. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable
Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint
Hermant, Audrey
2010-02-15
This paper deals with optimal control problems with a regular second-order state constraint and a scalar control, satisfying the strengthened Legendre-Clebsch condition. We study the stability of the structure of stationary points. It is shown that, under a uniform strict complementarity assumption, boundary arcs are stable under sufficiently smooth perturbations of the data. On the contrary, nonreducible touch points are not stable under perturbations. We show that under some reasonable conditions, either a boundary arc or a second touch point may appear. These results allow us to design a homotopy algorithm which automatically detects the structure of the trajectory and initializes the shooting parameters associated with boundary arcs and touch points.
Line Matching Algorithm for Aerial Image Combining image and object space similarity constraints
NASA Astrophysics Data System (ADS)
Wang, Jingxue; Wang, Weixi; Li, Xiaoming; Cao, Zhenyu; Zhu, Hong; Li, Miao; He, Biao; Zhao, Zhigang
2016-06-01
A new straight-line matching method for aerial images is proposed in this paper. In contrast to previous work, the method employs similarity constraints that combine radiometric information in the image with geometric attributes in the object plane. First, initial candidate lines and the elevation values of the line projection plane are determined from corresponding points in the neighborhoods of the reference lines. Second, the reference line and candidate lines are back-projected onto this plane, and similarity measure constraints are enforced to reduce the number of candidates and to determine the final corresponding lines in a hierarchical way. Third, "one-to-many" and "many-to-one" matching results are transformed into "one-to-one" results by merging many lines into a new one, and errors are eliminated simultaneously. Finally, the endpoints of corresponding lines are detected by a line expansion process combined with an "image-object-image" mapping mode. Experimental results show that the proposed algorithm obtains reliable line matching results for aerial images.
Research on imaging ranging algorithm base on constraint matching of trinocular vision
NASA Astrophysics Data System (ADS)
Ye, Pan; Li, Li; Jin, Wei-Qi; Jiang, Yu-tong
2014-11-01
Binocular stereo vision is a common passive ranging method that directly mimics the human visual system and can flexibly measure stereo information under complex conditions. However, binocular vision ranging accuracy is limited, partly because of the low precision of stereo image pair matching. In this paper, based on a trinocular-vision imaging ranging algorithm with constraint matching, we use a trinocular ranging system composed of three parallel cameras to image a target and measure its distance. We use Zhang's method to calibrate the cameras: the three cameras are first calibrated individually, and the results are then used to obtain three binocular calibrations, from which the relative position of each camera is obtained. The information provided by the third camera reduces the ambiguity of corresponding-point matching in a binocular camera system, limits the search space through the epipolar constraint to improve matching speed, and allows filtering of the distance information to eliminate interference from feature points in the foreground and background, yielding a more accurate distance to the target. Experimental results show that this method can overcome the limitations of binocular vision ranging and effectively improves range accuracy.
An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction
ERIC Educational Resources Information Center
Bhasin, Harpreet
2011-01-01
Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…
A Greedy reassignment algorithm for the PBS minimum monitor unit constraint.
Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin
2016-06-21
Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning system and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low-weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest-weight spot in the entire field, reassigns its weight to its nearest neighbor(s), and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criterion was the γ-index pass rate, which compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1%, the planning metric has a standard deviation equal to 18% of the centroid value, showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd-order polynomial fit to the data, the Greedy reassignment method has a 1.8 times better planning metric at a 90% pass rate than the other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for choosing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5
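The core reassignment loop described above can be sketched in a few lines (a simplified, hypothetical 1-D version; real PBS spots live in two or three dimensions and the actual treatment-planning data structures differ):

```python
def greedy_reassign(spots, min_mu):
    """Greedy reassignment sketch: while any spot is below the deliverable
    threshold, delete the smallest-weight spot in the field and add its weight
    to its nearest surviving neighbor, so total weight is conserved.

    `spots` is a list of [position, weight] pairs (1-D positions for brevity).
    """
    spots = [list(s) for s in spots]
    while spots and min(w for _, w in spots) < min_mu:
        # Find and remove the smallest-weight spot in the entire field.
        i = min(range(len(spots)), key=lambda k: spots[k][1])
        pos, w = spots.pop(i)
        if not spots:
            break
        # Reassign its weight to the nearest remaining neighbor.
        j = min(range(len(spots)), key=lambda k: abs(spots[k][0] - pos))
        spots[j][1] += w
    return spots
```

On ties in distance, this sketch arbitrarily picks the first neighbor; the paper's "nearest neighbor(s)" wording suggests the real method may split weight among several.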
NASA Astrophysics Data System (ADS)
Zhao, Jingtao; Peng, Suping; Du, Wenfeng
2016-02-01
We consider a sparsity-constrained inversion method for detecting small-scale seismic discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these small-scale discontinuities are hard to identify with currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smoothness features, we propose an effective L2-L0 norm model for improving their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. In searching for its solution, an efficient and fast-converging penalty decomposition method is employed. The proposed method achieves a significant improvement in enhancing small-scale seismic discontinuities. A numerical experiment and a field-data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
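The paper's penalty decomposition solver is specialized, but the flavor of an L2-L0 (sparsity-constrained least-squares) inversion can be illustrated with iterative hard thresholding, a standard stand-in technique (the operator, data, and sparsity level below are illustrative):

```python
import numpy as np

def iht(A, b, k, iters=200, step=None):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= k.
    A gradient step is followed by keeping only the k largest-magnitude
    entries, which enforces the L0 (sparsity) constraint directly."""
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step from the spectral norm
    x = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (b - A @ x)        # gradient of the L2 data-fit term
        x = x + step * g
        idx = np.argsort(np.abs(x))[:-k]
        x[idx] = 0.0                 # hard threshold: zero all but top-k entries
    return x
```

This is not the authors' penalty decomposition method, only a compact illustration of how an L2 misfit and an L0 constraint interact.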
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain near-optimal solutions and overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment into an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experimental results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng
2015-10-01
Temperature measurement is one of the important factors for ensuring product quality, reducing production cost and ensuring experimental safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry. However, the SM method cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true-temperature measurement is proposed, and constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it follows from the relationship between brightness temperatures at different wavelengths that emissivity is an increasing function on an interval where the brightness temperature is increasing or constant, and that emissivity satisfies an inequality relating emissivity and wavelength on an interval where the brightness temperature is decreasing. With these emissivity-model constraint conditions, the construction of assumed emissivity values is reduced from many classes to one class, avoiding unnecessary emissivity constructions on the basis of the brightness temperature information. Simulation experiments and comparisons for two different temperature points are carried out based on five measured targets with five representative variation trends of real emissivity: decreasing monotonically, increasing monotonically, first decreasing with wavelength and then increasing, first increasing and then decreasing, and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results
Parvizi, A; Van den Broek, W; Koch, C T
2016-04-18
The transport of intensity equation (TIE) is widely applied for recovering wave fronts from an intensity measurement and a measurement of its variation along the direction of propagation. In order to get around the non-uniqueness and ill-conditioning of the solution of the TIE in the very common case of unspecified boundary conditions or noisy data, additional constraints on the solution are necessary. Although from a numerical optimization point of view a convex constraint, such as that imposed by total variation minimization, is preferable, we show that in many cases non-convex constraints are necessary to overcome the low-frequency artifacts so typical of convex constraints. We provide simulated and experimental examples that demonstrate the superiority of solutions to the TIE obtained by our recently introduced gradient flipping algorithm over a total variation constrained solution. PMID:27137272
Genetic algorithm to design Laue lenses with optimal performance for focusing hard X- and γ-rays
NASA Astrophysics Data System (ADS)
Camattari, Riccardo; Guidi, Vincenzo
2014-10-01
To focus hard X- and γ-rays it is possible to use a Laue lens as a concentrator. With this optics it is possible to improve the detection of radiation for several applications, from the observation of the most violent phenomena in the sky to nuclear medicine applications for diagnostic and therapeutic purposes. We implemented a code named LaueGen, which is based on a genetic algorithm and aims to design optimized Laue lenses. The genetic algorithm was selected because optimizing a Laue lens is a complex and discretized problem. The output of the code consists of the design of a Laue lens, which is composed of diffracting crystals that are selected and arranged in such a way as to maximize the lens performance. The code allows managing crystals of any material and crystallographic orientation. The program is structured in such a way that the user can control all the initial lens parameters. As a result, LaueGen is highly versatile and can be used to design very small lenses, for example, for nuclear medicine, or very large lenses, for example, for satellite-borne astrophysical missions.
NASA Astrophysics Data System (ADS)
Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong
2013-12-01
X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-dimensional (2D) chemical mapping has been successfully applied to study many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with the weighted sum of standard spectra of the individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute-force enumeration method, and (ii) a constrained least-squares minimization algorithm proposed by us. Next, since 2D spectra fitting can be conducted pixel by pixel, both methods can in principle be implemented in parallel. In order to demonstrate the feasibility of parallel computing for the chemical mapping problem and to investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the raw data.
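The brute-force enumeration approach (i) is easy to sketch for a two-component fit: scan a grid of weights that sum to one and keep the best least-squares match between the measured spectrum and the weighted standards (the spectra and grid step below are illustrative assumptions):

```python
import numpy as np

def brute_force_fit(spectrum, standards, step=0.01):
    """Brute-force enumeration fit for one pixel: try every weight w on a grid,
    model the spectrum as w*s1 + (1-w)*s2, and keep the smallest squared error.
    Restricted to two chemical components for brevity."""
    s1, s2 = standards
    best_w, best_err = None, float("inf")
    for w in np.arange(0.0, 1.0 + step, step):
        model = w * s1 + (1.0 - w) * s2
        err = float(np.sum((spectrum - model) ** 2))
        if err < best_err:
            best_w, best_err = float(w), err
    return best_w, 1.0 - best_w
```

With more components the grid grows combinatorially, which is exactly why the paper's constrained least-squares alternative and per-pixel parallelism matter.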
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
Williams, P.T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H^1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions with highly anisotropic resolution, so special algorithms must be developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint (TVIC) method, which extends the applicability of TV regularization to non-piecewise-constant samples, such as biological cells. The approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted to a 3D binary mask that is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while preserving the smooth internal structures of the object. A comparison between three different object illumination arrangements shows that the projection acquisition geometry has very little impact on image quality.
NASA Astrophysics Data System (ADS)
Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling
2015-04-01
In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ2 analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, ϕ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ2 analyses, it is observed that (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. These findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, ϕ_{A,H}^i [°]) = (2.87^{+0.66}_{-1.95}, -145^{+14}_{-21}) and (ρ_A^f, ϕ_A^f [°]) = (0.91^{+0.12}_{-0.13}, -37^{+10}_{-9}) at 68% C.L. With the best-fit values, most of the theoretical results are in good agreement with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α2 related to a large ρ_H result in the wrong sign for A_CP^{dir}(B^- → π^0 K^{*-}) compared with the most recent BABAR data, which presents a new obstacle to solving the "ππ" and "πK" puzzles through α2. A crosscheck with higher-precision measurements at Belle (or Belle II) and LHCb is urgently expected to confirm or refute this possible mismatch.
NASA Astrophysics Data System (ADS)
Virrueta, Alejandro; Zhou, Alice; O'Hern, Corey; Regan, Lynne
2014-03-01
Molecular dynamics methods have significantly advanced the understanding of protein folding and stability. However, current force-fields cannot accurately calculate and rank the stability of modified or de novo proteins. One possible reason is that current force-fields use knowledge-based corrections that improve dihedral angle sampling, but do not satisfy the stereochemical constraints for amino acids. I propose the use of simple hard-sphere models for amino acids with stereochemical constraints taken from high-resolution protein crystal structures. This model can enable a correct consideration of the entropy of side-chain rotations, and may be sufficient to predict the effects of single-residue mutations in the hydrophobic cores of staphylococcal nuclease and T4 lysozyme on stability changes. I will computationally count the total number of allowed side-chain conformations Ω and calculate the associated entropy, S = k_B ln(Ω), before and after each mutation. I will then rank the stability of the mutated cores based on my computed entropy changes, and compare my results with structural and thermodynamic data published by the Stites and Matthews groups. If successful, this project will provide a novel framework for the evaluation of entropic protein stabilities, and serve as a possible tool for computational protein design.
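The proposed counting procedure reduces to a hard-sphere clash test plus S = k_B ln(Ω). A toy sketch with an invented 2-D geometry (the arm length, neighbor position, and contact radius are hypothetical, not crystal-structure values):

```python
import math

def count_allowed(angles, clash):
    """Count side-chain conformations that survive a hard-sphere clash check."""
    return sum(1 for a in angles if not clash(a))

def conformational_entropy(n_allowed, kB=1.380649e-23):
    """S = kB * ln(Omega); defined only when at least one state is allowed."""
    return kB * math.log(n_allowed)

def clash(angle, r_sum=0.8):
    """Toy hard-sphere test: a side-chain atom swings on a unit arm around the
    origin; a fixed neighbor sphere at (1.2, 0) forbids any conformation whose
    center-to-center distance falls below the sum of radii r_sum."""
    x, y = math.cos(angle), math.sin(angle)
    return math.hypot(x - 1.2, y) < r_sum
```

Ranking mutants then amounts to comparing k_B ln(Ω_mutant) against k_B ln(Ω_wild-type) under the same angular grid.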
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions
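The penalty-function conversion described above can be sketched with a static quadratic penalty feeding a minimal GA (the penalty weight, operators, and test problem are illustrative assumptions, not COMETBOARDS code):

```python
import random

def penalized_fitness(f, constraints, x, r=1000.0):
    """Static penalty: for each constraint written as g(x) <= 0, add
    r * violation^2, turning the constrained problem into an unconstrained
    fitness the GA can minimize directly."""
    penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + r * penalty

def tiny_ga(f, constraints, bounds, pop=30, gens=120, seed=3):
    """Minimal elitist GA in 2-D: keep the best half, breed children by blend
    (arithmetic) crossover with occasional Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(pop)]
    score = lambda x: penalized_fitness(f, constraints, x)
    for _ in range(gens):
        P.sort(key=score)
        elite = P[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            if rng.random() < 0.3:
                child[rng.randrange(2)] += rng.gauss(0, 0.1)
            children.append(child)
        P = elite + children
    return min(P, key=score)
```

A static penalty weight is the simplest of the schemes surveyed in the literature; adaptive and death-penalty variants trade off feasibility pressure against search freedom.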
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used for solving the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time of solving all SAT instances from it. We suggest an approach, based on the Monte Carlo method, for estimating the time required to process an arbitrary partitioning. With each partitioning we associate a point in a special finite search space. The estimated effectiveness of a particular partitioning is the value of a predictive function at the corresponding point of this space. The problem of searching for an effective partitioning can thus be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving time were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving time agrees well with the estimations obtained by the proposed method. PMID:27190753
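The Monte Carlo estimation idea, stripped to its essentials: time a random sample of a partitioning's subproblems and scale the mean up to the full set (here `solve_time` is a stand-in for invoking a real SAT solver, and the sample size is an illustrative choice):

```python
import random

def estimate_partition_time(subproblems, solve_time, sample_size=50, seed=7):
    """Monte Carlo estimate of the total solving time of a partitioning:
    solve a random sample of its subproblems and extrapolate the mean time
    to the whole partitioning."""
    rng = random.Random(seed)
    sample = rng.sample(subproblems, min(sample_size, len(subproblems)))
    mean = sum(solve_time(p) for p in sample) / len(sample)
    return mean * len(subproblems)
```

The paper's predictive function plays this estimator's role inside a simulated annealing or tabu search loop over candidate partitionings.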
NASA Astrophysics Data System (ADS)
Li, Dongxing; Zhao, Yan; Dong, Xu
2008-03-01
In general image restoration, the point spread function (PSF) of the imaging system and the observation noise are assumed known a priori. The aero-optic effect arises when objects (e.g., missiles or aircraft) fly at high or supersonic speeds. In this situation, the PSF and the observation noise are not known a priori, and the identification and restoration of turbulence-degraded images is a challenging problem. An algorithm based on nonnegativity and support constraints recursive inverse filtering (NAS-RIF) is proposed in order to identify and restore turbulence-degraded images. The NAS-RIF technique applies to situations in which the scene consists of a finite-support object against a uniformly black, grey, or white background. The restoration procedure of NAS-RIF involves recursive filtering of the blurred image to minimize a convex cost function. In the algorithm proposed in this paper, the turbulence-degraded image is pre-filtered before it is passed to the recursive filter. A conjugate gradient minimization routine was used to minimize the NAS-RIF cost function. The algorithm based on NAS-RIF is used to identify and restore wind-tunnel test images. The experimental results show that restoration is noticeably improved.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
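The successive inexact minimization the abstract describes can be sketched with a coordinate pattern search whose stopping test uses only the pattern size, never derivatives, driven by an augmented Lagrangian outer loop. The penalty parameter, multiplier update, and the single-equality-constraint toy problem are illustrative simplifications of the Conn-Gould-Toint scheme, not the authors' method.

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Coordinate pattern search: poll +/- step along each coordinate, halve
    the step when no poll improves. The stopping test uses only the pattern
    size (step), never derivatives - the substitution the abstract describes."""
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x); y[i] += d
                if f(y) < f(x):
                    x = y; improved = True
        if not improved:
            step *= 0.5
    return x

def augmented_lagrangian(f, h, x, lam=0.0, mu=10.0, outer=20):
    """Successive inexact minimization of L(x) = f(x) + lam*h(x) + (mu/2)h(x)^2
    for one equality constraint h(x) = 0, with a first-order multiplier update."""
    for _ in range(outer):
        L = lambda z: f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
        x = pattern_search(L, x)          # inexact inner solve, derivative-free
        lam += mu * h(x)                  # multiplier update
    return x

# Toy problem: minimize x^2 + y^2 subject to x + y = 1  ->  (0.5, 0.5)
sol = augmented_lagrangian(lambda z: z[0] ** 2 + z[1] ** 2,
                           lambda z: z[0] + z[1] - 1.0, [0.0, 0.0])
print([round(v, 3) for v in sol])
```

The key point mirrored here is that each subproblem is solved only to the accuracy implied by the final pattern size, yet the outer iteration still converges.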
Shankar, T.J.; Sokhansanj, Shahabaddine
2010-02-01
Crossover and mutation are the main search operators of a genetic algorithm, and one of the most important features distinguishing it from other search algorithms such as simulated annealing. The present work aimed to examine the effect of the genetic algorithm operators crossover and mutation (Pc and Pm), population size (n), and number of iterations (I) on predicting the minimum hardness (N) of a biomaterial extrudate. The second-order polynomial regression equation developed for the extrudate property hardness in terms of the independent variables barrel temperature, screw speed, fish content of the feed, and feed moisture content was used as the objective function in the GA analysis. A simple genetic algorithm (SGA) with crossover and mutation operators was used in the present study. A program was developed in C for an SGA with a rank-based fitness selection method. The upper limits of population size and iterations were both fixed at 100. It was observed that increasing the population size and number of iterations drastically improved the prediction of the function minimum. Minimum predicted hardness values were achievable with a medium population of 50, 50 iterations, and crossover and mutation probabilities of 50% and 0.5%. Further, the Pareto charts indicated that the effect of Pc was more significant at a population of 50, while Pm played a major role at low population (10). A crossover probability of 50% and a mutation probability of 0.5% are the threshold values for convergence of the GA over the global search space. A minimum predicted hardness value of 3.82 (N) was observed for n = 60 and I = 100 with Pc and Pm of 85% and 0.5%.
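A minimal sketch of a simple GA with rank-based selection and tunable crossover/mutation probabilities (pc, pm), the parameters the study varies. It is written in Python rather than the study's C, and the averaging crossover on a one-variable toy objective is an illustrative assumption, not the study's encoding.

```python
import random

def rank_select(population, fitness, rng):
    """Rank-based selection: individuals are weighted by rank (best = highest
    weight) rather than by raw fitness values."""
    ranked = sorted(population, key=fitness, reverse=True)  # best last
    weights = range(1, len(ranked) + 1)                     # rank 1 .. n
    return rng.choices(ranked, weights=weights, k=1)[0]

def sga_minimize(f, bounds, n=50, iters=50, pc=0.5, pm=0.005, seed=1):
    """Simple GA: rank selection, averaging crossover with prob pc,
    uniform-reset mutation with prob pm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        new = []
        while len(new) < n:
            a, b = rank_select(pop, f, rng), rank_select(pop, f, rng)
            child = 0.5 * (a + b) if rng.random() < pc else a  # crossover
            if rng.random() < pm:                              # mutation
                child = rng.uniform(lo, hi)
            new.append(child)
        pop = new
    return min(pop, key=f)

best = sga_minimize(lambda x: (x - 3.0) ** 2, (0.0, 10.0))
print(round(best, 2))   # close to the minimizer x = 3
```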
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem, and it is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem as a static optimization over the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations verify the effectiveness and rapidity of the proposed algorithm.
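For a circular no-fly zone, the geometric intersection condition for a straight-line leg can be checked by comparing the zone radius with the point-to-segment distance. This is a standard formulation sketched under that assumption, not the paper's exact conditions:

```python
import math

def segment_intersects_circle(p, q, center, radius):
    """Return True if segment p-q passes through a circular no-fly zone:
    project the circle's center onto the segment, clamp to the endpoints,
    and compare the nearest-point distance with the zone radius."""
    px, py = p; qx, qy = q; cx, cy = center
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                     # degenerate segment
        t = 0.0
    else:                                   # clamped projection parameter
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nearest = (px + t * dx, py + t * dy)
    return math.hypot(nearest[0] - cx, nearest[1] - cy) < radius

# A leg that clips the zone versus one that skirts it.
print(segment_intersects_circle((0, 0), (10, 0), (5, 2), 3))   # True
print(segment_intersects_circle((0, 0), (10, 0), (5, 4), 3))   # False
```

Tests of this kind, evaluated per waypoint leg, become the programmable constraints handed to the SQP solver.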
NASA Astrophysics Data System (ADS)
Berger, Gilles; Million-Picallion, Lisa; Lefevre, Grégory; Delaunay, Sophie
2015-04-01
Introduction: The hydrothermal crystallization of silicate phases in the Si-Al-Fe system may lead to industrial constraints encountered in the nuclear industry in at least two contexts: the geological repository for nuclear wastes and the formation of hard sludges in the steam generators of PWR nuclear plants. In the first situation, the chemical reactions between the Fe canister and the surrounding clays have been extensively studied in laboratory [1-7] and pilot experiments [8]. These studies demonstrated that the high reactivity of metallic iron leads to the formation of berthierine-like Fe-silicates over a wide range of temperature. By contrast, the formation of deposits in the steam generators of PWR plants, called hard sludges, is a newer and less studied issue which can affect reactor performance. Experiments: We present here a preliminary set of experiments reproducing the formation of hard sludges under conditions representative of the steam generator of a PWR plant: 275°C, diluted solutions maintained at low potential by hydrazine addition and at alkaline pH by low concentrations of amines and ammonia. Magnetite, a corrosion by-product of the secondary circuit, is the source of iron, while aqueous Si and Al, the major impurities in this system, are supplied either as trace elements in the circulating solution or by addition of amorphous silica and alumina when considering confined zones. The fluid chemistry is monitored by sampling aliquots of the solution. Eh and pH are continuously measured by hydrothermal Cormet© electrodes implanted in a titanium hydrothermal reactor. The transformation, or not, of the solid fraction was examined post-mortem. These experiments evidenced the role of Al colloids as precursors of cements composed of kaolinite and boehmite, and the passivation of amorphous silica (rendering it unreactive), likely by sorption of aqueous iron. But no Fe-bearing phase was formed, in contrast to many published studies on the Fe
NASA Astrophysics Data System (ADS)
Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin
2014-03-01
Nowadays, every firm uses telecommunication networks in different amounts and ways in order to complete its daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which several network providers offer different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service needed to accomplish the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures capable of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested on several test instances to demonstrate the applicability of the methodology.
Lonchampt, J.; Fessart, K.
2013-07-01
The purpose of this paper is to describe the method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements, or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare part model: although components are independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description
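The NPV indicator described above, a sum of discounted cash-flow differences between the with- and without-investment situations, can be sketched as follows. The cash-flow numbers are invented for illustration and are not from the paper.

```python
def npv(cash_flow_deltas, rate):
    """Net Present Value: sum of discounted cash-flow differences between the
    situations with and without the investment (positive deltas = savings),
    indexed by year starting at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flow_deltas))

# Illustrative numbers: a spare-part purchase costing 100 now that avoids an
# expected forced-outage cost of 30 in each of years 1-5, discounted at 8%.
deltas = [-100.0] + [30.0] * 5
print(round(npv(deltas, 0.08), 2))
```

A positive result means the avoided outage costs outweigh the purchase; the paper's genetic algorithm then searches over portfolios of such investments under the dependency constraints.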
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
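As a concrete instance, here is the earliest-finish-time greedy for activity selection, with its dominance argument noted in a comment. This is the standard textbook algorithm, not code from the paper:

```python
def activity_selection(intervals):
    """Classic greedy: repeatedly take the activity with the earliest finish
    time among those compatible with the schedule so far. The dominance
    relation justifying it: of two compatible candidates, the one finishing
    earlier leaves at least as many later options open, so choosing it can
    never be worse."""
    schedule, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            schedule.append((start, finish))
            last_finish = finish
    return schedule

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(activity_selection(acts))   # [(1, 4), (5, 7), (8, 11)]
```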
NASA Astrophysics Data System (ADS)
Beghein, C.; Lebedev, S.; van der Hilst, R.
2005-12-01
Interstation dispersion curves can be used to obtain regional 1D profiles of the crust and upper mantle. Unlike phase velocity maps, dispersion curves can be determined with small errors and for a broad frequency band. We want to determine what features interstation surface wave dispersion curves can constrain. Using synthetic data and the Neighbourhood Algorithm, a direct search approach that provides a full statistical assessment of model uncertainties and trade-offs, we investigate how well crustal and upper mantle structure can be recovered with fundamental Love and Rayleigh waves. We also determine how strong the trade-offs between the different parameters are and what depth resolution we can expect to achieve with the current level of precision of this type of data. Synthetic dispersion curves between approximately 7 and 340 s were assigned realistic error bars, i.e., a relative uncertainty that increases with period but with an amplitude consistent with that achieved in ``real'' measurements. These dispersion curves were generated by two types of isotropic models differing only in their crustal structure. One represents an oceanic region (shallow Moho) and the other corresponds to an Archean continental area with a larger Moho depth. Preliminary results show that while the Moho depth, the shear-velocity structure in the transition zone between 200 and 410 km depth, and that between the base of the crust and 50 km depth are generally well recovered, crustal structure and Vs between 50 and 200 km depth are more difficult to constrain with Love waves or Rayleigh waves alone because of a trade-off between the two layers. When these two layers are combined, the resolution of Vs between 50 and 100 km depth appears to improve. Structure deeper than the transition zone is not constrained by the data because of a lack of sensitivity. We explore the possibility of differentiating between an upper and lower crust as well, and we investigate whether a joint
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also set a new benchmark on one graph instance. On another benchmark suite, called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite. PMID:18558530
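The kind of mapping the paper relies on, e.g. from maximum independent set to maximum clique via the complement graph, can be sketched as follows. The greedy clique heuristic here merely stands in for the paper's evolutionary search and is an illustrative assumption:

```python
def greedy_clique(adj):
    """Greedy max-clique heuristic on an adjacency-set graph: repeatedly add
    the highest-degree vertex still adjacent to every clique member. A simple
    stand-in for the paper's evolutionary search over cliques."""
    clique = []
    for v in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique

def complement(adj):
    """Independent sets of G are exactly the cliques of G's complement, which
    is how such problems are mapped onto maximum clique finding."""
    verts = set(adj)
    return {v: (verts - {v}) - adj[v] for v in adj}

# Path graph 0-1-2-3: a maximum independent set has size 2.
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
indep = greedy_clique(complement(g))
print(sorted(indep))   # [0, 3]
```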
NASA Astrophysics Data System (ADS)
de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein
2012-12-01
In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
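The method of separating axes mentioned above can be illustrated in its 2D form for convex polygons; the full polyhedron version additionally tests face normals and edge-pair cross products. This is a standard formulation, not the paper's implementation:

```python
def project(poly, axis):
    """Project polygon vertices onto an axis; return the (min, max) interval."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_overlap(a, b):
    """Method of separating axes for convex polygons: the shapes are disjoint
    iff the projections onto some edge normal do not overlap, so we test the
    normals of every edge of both polygons."""
    for poly in (a, b):
        n = len(poly)
        for i in range(n):
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            axis = (-ey, ex)                   # edge normal
            amin, amax = project(a, axis)
            bmin, bmax = project(b, axis)
            if amax < bmin or bmax < amin:     # separating axis found
                return False
    return True

sq = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(convex_overlap(sq, [(1, 1), (3, 1), (3, 3), (1, 3)]))   # True (overlap)
print(convex_overlap(sq, [(3, 3), (5, 3), (5, 5), (3, 5)]))   # False (disjoint)
```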
Statistical Physics of Hard Optimization Problems
NASA Astrophysics Data System (ADS)
Zdeborová, Lenka
2008-06-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this thesis is: how can one recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
Statistical physics of hard optimization problems
NASA Astrophysics Data System (ADS)
Zdeborová, Lenka
2009-06-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: how can one recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
NASA Astrophysics Data System (ADS)
Trunfio, Roberto
2015-06-01
In a recent article, Guo, Cheng and Wang proposed a randomized search algorithm, called modified generalized extremal optimization (MGEO), to solve the quay crane scheduling problem for container groups under the assumption that schedules are unidirectional. The authors claim that the proposed algorithm is capable of finding new best solutions with respect to a well-known set of benchmark instances taken from the literature. However, as shown in this note, there are some errors in their work that can be detected by analysing the Gantt charts of two solutions provided by MGEO. In addition, some comments on the method used to evaluate the schedule corresponding to a task-to-quay crane assignment and on the search scheme of the proposed algorithm are provided. Finally, to assess the effectiveness of the proposed algorithm, the computational experiments are repeated and additional computational experiments are provided.
Bloom, Joshua S.; Prochaska, J. X.; Pooley, D.; Blake, C. W.; Foley, R. J.; Jha, S.; Ramirez-Ruiz, E.; Granot, J.; Filippenko, A. V.; Sigurdsson, S.; Barth, A. J.; Chen, H.-W.; Cooper, M. C.; Falco, E. E.; Gal, R. R.; Gerke, B. F.; Gladders, M. D.; Greene, J. E.; Hennawi, J.; Ho, L. C.; Hurley, K.
2005-06-07
The localization of the short-duration, hard-spectrum gamma-ray burst GRB050509b by the Swift satellite was a watershed event. Never before had a member of this mysterious subclass of classic GRBs been rapidly and precisely positioned in a sky accessible to the bevy of ground-based follow-up facilities. Thanks to the nearly immediate relay of the GRB position by Swift, we began imaging the GRB field 8 minutes after the burst and have continued during the 8 days since. Though the Swift X-ray Telescope (XRT) discovered an X-ray afterglow of GRB050509b, the first ever of a short-hard burst, thus far no convincing optical/infrared candidate afterglow or supernova has been found for the object. We present a re-analysis of the XRT afterglow and find an absolute position of R.A. = 12h36m13.59s, Decl. = +28°59'04.9'' (J2000), with a 1σ uncertainty of 3.68'' in R.A. and 3.52'' in Decl.; this is about 4'' to the west of the XRT position reported previously. Close to this position is a bright elliptical galaxy with redshift z = 0.2248 ± 0.0002, about 1' from the center of a rich cluster of galaxies. This cluster has detectable diffuse emission, with a temperature of kT = 5.25 (+3.36/-1.68) keV. We also find several (~11) much fainter galaxies consistent with the XRT position from deep Keck imaging and have obtained Gemini spectra of several of these sources. Nevertheless we argue, based on positional coincidences, that the GRB and the bright elliptical are likely to be physically related. We thus have discovered reasonable evidence that at least some short-duration, hard-spectrum GRBs are at cosmological distances. We also explore the connection of the properties of the burst and the afterglow, finding that GRB050509b was underluminous in both relative to long-duration GRBs. However, we also demonstrate that the ratio of the blast-wave energy to the γ-ray energy is consistent with that of long-duration GRBs. We thus find plausible
Boosting Set Constraint Propagation for Network Design
NASA Astrophysics Data System (ADS)
Yip, Justin; van Hentenryck, Pascal; Gervet, Carmen
This paper reconsiders the deployment of synchronous optical networks (SONET), an optimization problem naturally expressed in terms of set variables. Earlier approaches, using either MIP or CP technologies, focused on symmetry breaking, including the use of SBDS, and the design of effective branching strategies. This paper advocates an orthogonal approach and argues that the thrashing behavior experienced in earlier attempts is primarily due to a lack of pruning. It studies how to improve domain filtering by taking a more global view of the application and imposing redundant global constraints. The technical results include novel hardness results, propagation algorithms for global constraints, and inference rules. The paper also evaluates the contributions experimentally by presenting a novel model with static symmetry-breaking constraints and a static variable ordering which is many orders of magnitude faster than existing approaches.
Temporal Constraint Reasoning With Preferences
NASA Technical Reports Server (NTRS)
Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca
2001-01-01
A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
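For the underlying (preference-free) temporal constraint network, the shortest-path connection mentioned above can be sketched: each binary difference constraint becomes a pair of edges in a distance graph, and Bellman-Ford both detects inconsistency and yields a consistent schedule. The interval numbers below are illustrative, not from the paper.

```python
def bellman_ford(n, edges, source=0):
    """Single-source shortest paths. On the distance graph of a simple
    temporal network, dist[v] is the latest consistent time of event v
    relative to event 0 (and assigning each event the time dist[v] is itself
    a consistent schedule); a negative cycle means the constraints are
    inconsistent."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    if any(dist[u] + w < dist[v] for u, v, w in edges):
        return None                            # negative cycle: inconsistent
    return dist

# Events 0 (origin), 1, 2; constraint "v - u in [lo, hi]" becomes edges
# (u, v, hi) and (v, u, -lo) in the distance graph.
edges = [(0, 1, 10), (1, 0, -5),               # event 1 between times 5 and 10
         (1, 2, 4), (2, 1, -1),                # delay from 1 to 2 in [1, 4]
         (0, 2, 12), (2, 0, -6)]               # event 2 between times 6 and 12
print(bellman_ford(3, edges))                  # [0.0, 10.0, 12.0]
```

The paper's generalization replaces the fixed interval bounds with preference functions over the same difference variables while keeping this polynomial-time shortest-path core.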
NGC 5548: LACK OF A BROAD Fe Kα LINE AND CONSTRAINTS ON THE LOCATION OF THE HARD X-RAY SOURCE
Brenneman, L. W.; Elvis, M.; Krongold, Y.; Liu, Y.; Mathur, S.
2012-01-01
We present an analysis of the co-added and individual 0.7-40 keV spectra from seven Suzaku observations of the Sy 1.5 galaxy NGC 5548 taken over a period of eight weeks. We conclude that the source has a moderately ionized, three-zone warm absorber, a power-law continuum, and exhibits contributions from cold, distant reflection. Relativistic reflection signatures are not significantly detected in the co-added data, and we place an upper limit on the equivalent width of a relativistically broad Fe Kα line at EW ≤ 26 eV at 90% confidence. Thus NGC 5548 can be labeled as a 'weak' type 1 active galactic nucleus (AGN) in terms of its observed inner disk reflection signatures, in contrast to sources with very broad, strong iron lines such as MCG-6-30-15, which are likely much fewer in number. We compare physical properties of NGC 5548 and MCG-6-30-15 that might explain this difference in their reflection properties. Though there is some evidence that NGC 5548 may harbor a truncated inner accretion disk, this evidence is inconclusive, so we also consider light bending of the hard X-ray continuum emission in order to explain the lack of relativistic reflection in our observation. If the absence of a broad Fe Kα line is interpreted in the light-bending context, we conclude that the source of the hard X-ray continuum lies at radii r_s ≳ 100 r_g. We note, however, that light-bending models must be expanded to include a broader range of physical parameter space in order to adequately explain the spectral and timing properties of average AGNs, rather than just those with strong, broad iron lines.
NASA Astrophysics Data System (ADS)
Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie
2013-03-01
Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. A previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox virus induced disease before inoculation and after disease progression. The automated estimates are in close agreement with manual segmentation.
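Hard C-means (crisp-assignment clustering, essentially k-means) alternates nearest-centroid assignment with centroid recomputation. A minimal sketch on 2D points standing in for image-intensity features; the deterministic initialization from the first c points and the toy data are illustrative assumptions:

```python
def hard_c_means(points, c, iters=20):
    """Hard C-means: every point belongs crisply to its nearest centroid,
    and centroids are recomputed as cluster means. Initialized from the
    first c points for determinism in this sketch."""
    centroids = list(points[:c])
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for p in points:
            j = min(range(c), key=lambda i: sum((a - b) ** 2
                    for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids

# Two well-separated blobs, standing in for lesion vs background intensities.
pts = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers = sorted(hard_c_means(pts, 2))
print([tuple(round(v, 2) for v in ctr) for ctr in centers])
```

In the paper's setting, the "points" are voxel features and the crisp clusters separate lesion candidates from background before registration-based comparison.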
NASA Astrophysics Data System (ADS)
Wang, Ke; Huang, Zhi; Zhong, Zhihua
2014-11-01
Due to the large variations of the environment, with ever-changing backgrounds and vehicles of different shapes, colors, and appearances, implementing a real-time on-board vehicle recognition system with high adaptability, efficiency, and robustness in complicated environments remains challenging. This paper introduces a simultaneous detection and tracking framework for robust on-board vehicle recognition based on monocular vision technology. The framework utilizes a novel layered machine learning approach and a particle filter to build a multi-vehicle detection and tracking system. In the vehicle detection stage, a layered machine learning method is presented, which combines coarse search and fine search to obtain the target using an AdaBoost-based training algorithm. A pavement segmentation method based on characteristic similarity is proposed to estimate the most likely pavement area. Efficiency and accuracy are enhanced by restricting vehicle detection to the downsized pavement area. In the vehicle tracking stage, a multi-objective tracking algorithm based on target state management and particle filtering is proposed. The proposed system is evaluated on roadway video captured in a variety of traffic, illumination, and weather conditions. The evaluation results show that, under conditions of proper illumination and clear vehicle appearance, the proposed system achieves a 91.2% detection rate and a 2.6% false detection rate. Experiments compared to typical algorithms show that the presented algorithm reduces the false detection rate by nearly half at the cost of a 2.7%-8.6% decrease in detection rate. This paper proposes a multi-vehicle detection and tracking system which is promising for implementation in an on-board vehicle recognition system with high precision, strong robustness and low computational cost.
NASA Astrophysics Data System (ADS)
Nie, Chu; Geng, Jun; Marlow, William H.
2016-04-01
In order to improve the sampling of restricted microstates in our previous work [C. Nie, J. Geng, and W. H. Marlow, J. Chem. Phys. 127, 154505 (2007); 128, 234310 (2008)] and quantitatively predict thermal properties of supersaturated vapors, an extension is made to the Corti and Debenedetti subcell constraint algorithm [D. S. Corti and P. Debenedetti, Chem. Eng. Sci. 49, 2717 (1994)], which restricts the maximum allowed local density at any point in a simulation box. The maximum allowed local density at a point in a simulation box is defined by the maximum number of particles Nm allowed to appear inside a sphere of radius R, with this point as the center of the sphere. Both Nm and R serve as extra thermodynamic variables for maintaining a certain degree of spatial homogeneity in a supersaturated system. In a restricted canonical ensemble, at a given temperature and overall density, a series of local minima on the Helmholtz free energy surface F(Nm, R) are found subject to different (Nm, R) pairs. The true equilibrium metastable state is identified through the analysis of the formation free energies of Stillinger clusters of various sizes obtained from these restricted states. The simulation results of a supersaturated Lennard-Jones vapor at reduced temperature 0.7, including the vapor pressure isotherm, formation free energies of critical nuclei, and chemical potential differences, are presented and analyzed. In addition, with slight modifications, the current algorithm can be applied to computing thermal properties of superheated liquids.
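The (Nm, R) restriction can be enforced by rejecting any Monte Carlo configuration that violates the local density bound. A minimal sketch under periodic boundary conditions; for simplicity it checks the constraint only at particle centres, a practical surrogate for "any point in the box" (illustrative, not the authors' code):

```python
import numpy as np

def local_density_ok(positions, box, Nm, R):
    """Check the (Nm, R) constraint: no particle may have more than Nm
    neighbours within a sphere of radius R centred on it."""
    n = len(positions)
    for i in range(n):
        # Minimum-image displacements under periodic boundary conditions.
        d = positions - positions[i]
        d -= box * np.round(d / box)
        r = np.linalg.norm(d, axis=1)
        # Count neighbours inside the sphere, excluding particle i itself.
        if np.count_nonzero(r < R) - 1 > Nm:
            return False
    return True

box = np.array([10.0, 10.0, 10.0])
sparse = np.array([[1, 1, 1], [5, 5, 5], [9, 1, 5]], float)
clustered = np.vstack([sparse, [[5.2, 5.0, 5.1], [4.9, 5.1, 5.0], [5.1, 4.8, 5.0]]])
print(local_density_ok(sparse, box, Nm=2, R=1.5))     # True  (homogeneous)
print(local_density_ok(clustered, box, Nm=2, R=1.5))  # False (local cluster)
```

A rejected move simply restores the previous configuration, so (Nm, R) act as the extra thermodynamic variables described above.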
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV usually cannot form the radiation beam towards the target user precisely and are not good enough to reduce interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in MATLAB. PMID:25147859
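The LCMV weights referred to above have the closed form w = R⁻¹C(CᴴR⁻¹C)⁻¹f; the AI methods in the paper search for better weights around this baseline. A minimal NumPy sketch for a uniform linear array (the array geometry, angles, and interference power are illustrative assumptions):

```python
import numpy as np

def lcmv_weights(R, C, f):
    """LCMV beamformer: minimise w^H R w subject to C^H w = f.
    Closed form: w = R^{-1} C (C^H R^{-1} C)^{-1} f."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

def steering(n, theta, d=0.5):
    """Steering vector of an n-element uniform linear array
    (element spacing d in wavelengths, angle theta in radians)."""
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

n = 8
a_sig = steering(n, np.deg2rad(0))    # desired user direction
a_int = steering(n, np.deg2rad(40))   # interferer direction
# Covariance: strong interferer plus unit-power noise.
R = 100 * np.outer(a_int, a_int.conj()) + np.eye(n)
# Constraints: unit gain on the user, a null on the interferer.
C = np.column_stack([a_sig, a_int])
f = np.array([1.0, 0.0])
w = lcmv_weights(R, C, f)
print(abs(w.conj() @ a_sig))  # ≈ 1 (distortionless response)
print(abs(w.conj() @ a_int))  # ≈ 0 (interference null)
```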
Soldati, Nicola; Calhoun, Vince D.; Bruzzone, Lorenzo; Jovicich, Jorge
2013-01-01
Independent component analysis (ICA) techniques offer a data-driven possibility to analyze brain functional MRI data in real-time. Typical ICA methods used in functional magnetic resonance imaging (fMRI), however, have been until now mostly developed and optimized for the off-line case in which all data is available. Real-time experiments are ill-posed for ICA in that several constraints are added: limited data, limited analysis time and dynamic changes in the data and computational speed. Previous studies have shown that particular choices of ICA parameters can be used to monitor real-time fMRI (rt-fMRI) brain activation, but it is unknown how other choices would perform. In this rt-fMRI simulation study we investigate and compare the performance of 14 different publicly available ICA algorithms systematically sampling different growing window lengths (WLs), model order (MO) as well as a priori conditions (none, spatial or temporal). Performance is evaluated by computing the spatial and temporal correlation to a target component as well as computation time. Four algorithms are identified as best performing (constrained ICA, fastICA, amuse, and evd), with their corresponding parameter choices. Both spatial and temporal priors are found to provide equal or improved performances in similarity to the target compared with their off-line counterpart, with greatly reduced computation costs. This study suggests parameter choices that can be further investigated in a sliding-window approach for a rt-fMRI experiment. PMID:23378835
Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
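For a single active linear inequality, the hard-constraint QP has the closed-form projection x̃ = x̂ − PDᵀ(DPDᵀ)⁻¹(Dx̂ − d). A minimal sketch of that projection step (the numbers are illustrative, not the turbofan model):

```python
import numpy as np

def project_estimate(x_hat, P, D, d):
    """Project the Kalman estimate onto {x : D x <= d}, minimising the
    P^{-1}-weighted distance. Each violated row is activated in turn;
    this is exact for a single constraint (a QP solver covers the
    general case with multiple simultaneously active constraints)."""
    x = x_hat.astype(float).copy()
    for Di, di in zip(D, d):
        viol = Di @ x - di
        if viol > 0:
            PDt = P @ Di
            x = x - PDt * (viol / (Di @ PDt))
    return x

# Health parameters cannot be negative: enforce x[0] >= 0, i.e. -x[0] <= 0.
P = np.diag([0.04, 0.04])              # estimation-error covariance
D = np.array([[-1.0, 0.0]])
d = np.array([0.0])
x_hat = np.array([-0.2, 0.5])          # unconstrained Kalman estimate
print(project_estimate(x_hat, P, D, d))  # → [0.  0.5]
```

Applied after every measurement update, this keeps the state estimates inside the physically admissible region while leaving unconstrained components untouched.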
Kawanishi, Takeshi; Shiraishi, Takuya; Okano, Yukari; Sugawara, Kyoko; Hashimoto, Masayoshi; Maejima, Kensaku; Komatsu, Ken; Kakizawa, Shigeyuki; Yamaji, Yasuyuki; Hamamoto, Hiroshi; Oshima, Kenro; Namba, Shigetou
2011-01-01
Culturing is an indispensable technique in microbiological research, and culturing with selective media has played a crucial role in the detection of pathogenic microorganisms and the isolation of commercially useful microorganisms from environmental samples. Although numerous selective media have been developed in empirical studies, unintended microorganisms often grow on such media probably due to the enormous numbers of microorganisms in the environment. Here, we present a novel strategy for designing highly selective media based on two selective agents, a carbon source and antimicrobials. We named our strategy SMART for highly Selective Medium-design Algorithm Restricted by Two constraints. To test whether the SMART method is applicable to a wide range of microorganisms, we developed selective media for Burkholderia glumae, Acidovorax avenae, Pectobacterium carotovorum, Ralstonia solanacearum, and Xanthomonas campestris. The series of media developed by SMART specifically allowed growth of the targeted bacteria. Because these selective media exhibited high specificity for growth of the target bacteria compared to established selective media, we applied three notable detection technologies: paper-based, flow cytometry-based, and color change-based detection systems for target bacteria species. SMART facilitates not only the development of novel techniques for detecting specific bacteria, but also our understanding of the ecology and epidemiology of the targeted bacteria. PMID:21304596
Robust H∞ stabilization of a hard disk drive system with a single-stage actuator
NASA Astrophysics Data System (ADS)
Harno, Hendra G.; Kiin Woon, Raymond Song
2015-04-01
This paper considers a robust H∞ control problem for a hard disk drive system with a single stage actuator. The hard disk drive system is modeled as a linear time-invariant uncertain system where its uncertain parameters and high-order dynamics are considered as uncertainties satisfying integral quadratic constraints. The robust H∞ control problem is transformed into a nonlinear optimization problem with a pair of parameterized algebraic Riccati equations as nonconvex constraints. The nonlinear optimization problem is then solved using a differential evolution algorithm to find stabilizing solutions to the Riccati equations. These solutions are used for synthesizing an output feedback robust H∞ controller to stabilize the hard disk drive system with a specified disturbance attenuation level.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20,000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise, and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm, a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite-difference smoothing constraints. A set of linear equations, functions of the deformation velocities, is formulated for each spatial point.
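The core of such an inversion is a linear system relating per-interval velocities to interferogram phases, augmented with finite-difference smoothing rows and solved via the SVD. A single-pixel sketch under simplifying assumptions (no DEM-error term, synthetic unwrapped phases):

```python
import numpy as np

def sbas_invert(pairs, times, phases, gamma=0.1):
    """Invert a stack of interferogram phases for a deformation time series
    at one pixel. pairs: (master_idx, slave_idx) per interferogram;
    times: acquisition times; phases: unwrapped phase per pair.
    Unknowns are mean velocities on the intervals between acquisitions;
    gamma weights finite-difference smoothing constraints."""
    N, M = len(times), len(pairs)
    dt = np.diff(times)                       # N-1 interval lengths
    A = np.zeros((M, N - 1))
    for k, (i, j) in enumerate(pairs):
        lo, hi = min(i, j), max(i, j)
        A[k, lo:hi] = dt[lo:hi] * np.sign(j - i)
    # Smoothing rows penalise jumps between consecutive interval velocities.
    S = gamma * (np.eye(N - 2, N - 1, 1) - np.eye(N - 2, N - 1))
    G = np.vstack([A, S])
    d = np.concatenate([phases, np.zeros(N - 2)])
    v = np.linalg.pinv(G) @ d                 # SVD-based least squares
    # Integrate interval velocities to displacement relative to epoch 0.
    return np.concatenate([[0.0], np.cumsum(v * dt)])

times = np.array([0.0, 1.0, 2.0, 3.0])
pairs = [(0, 1), (1, 2), (2, 3), (0, 2)]
phases = np.array([2.0, 2.0, 2.0, 4.0])      # constant-velocity subsidence
print(sbas_invert(pairs, times, phases))     # ≈ [0, 2, 4, 6]
```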
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Róbert; Toroczkai, Zoltán; Ercsey-Ravasz, Mária
2016-05-01
Transient chaos is a ubiquitous phenomenon characterizing the dynamics of phase-space trajectories evolving towards a steady-state attractor in physical systems as diverse as fluids, chemical reactions, and condensed matter systems. Here we show that transient chaos also appears in the dynamics of certain efficient algorithms searching for solutions of constraint satisfaction problems that include scheduling, circuit design, routing, database problems, and even Sudoku. In particular, we present a study of the emergence of hardness in Boolean satisfiability (k-SAT), a canonical class of constraint satisfaction problems, by using an analog deterministic algorithm based on a system of ordinary differential equations. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos of the dynamical system corresponding to the analog algorithm, and it expresses the rate at which the trajectory approaches a solution. We show that for a given density of constraints and fixed number of Boolean variables N, the hardness of formulas in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_χ in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic. A similar behavior is found in 4-SAT as well; however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter, κ ∼ N^{B|α − α_χ|^{1−γ}} with 0 < γ < 1. We demonstrate that the transition is generated by the appearance of metastable basins in the solution space as the density of constraints α is increased.
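The escape rate κ can be estimated from the exponential tail of the survival fraction of unsolved trajectories, p(t) ∼ e^(−κt). A sketch of that estimation step, with synthetic solve times standing in for actual analog-solver trajectories (the true rate of 0.5 is an assumption of the demo):

```python
import numpy as np

def escape_rate(solve_times, t_grid):
    """Estimate the escape rate kappa from the survival fraction
    p(t) = fraction of trajectories not yet escaped (solved) at time t,
    assuming the transient-chaos decay p(t) ~ exp(-kappa * t)."""
    p = np.array([(solve_times > t).mean() for t in t_grid])
    mask = p > 0
    # Linear fit of log p(t) against t; the slope is -kappa.
    slope, _ = np.polyfit(t_grid[mask], np.log(p[mask]), 1)
    return -slope

rng = np.random.default_rng(1)
# Synthetic solve times with exponential survival, true kappa = 0.5.
times = rng.exponential(scale=1 / 0.5, size=200_000)
kappa = escape_rate(times, np.linspace(0.0, 8.0, 40))
print(kappa)  # ≈ 0.5
```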
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Robert; Ercsey-Ravasz, Maria; Toroczkai, Zoltan
Transient chaos is a phenomenon characterizing the dynamics of phase space trajectories evolving towards an attractor in physical systems. We show that transient chaos also appears in the dynamics of certain algorithms searching for solutions of constraint satisfaction problems (e.g., Sudoku). We present a study of the emergence of hardness in Boolean satisfiability (k-SAT) using an analog deterministic algorithm. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos, and it expresses the rate at which the trajectory approaches a solution. We show that the hardness in random k-SAT ensembles has a wide variation approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at αc in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic, however, such transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter. We demonstrate that the transition is generated by the appearance of non-solution basins in the solution space as the density of constraints is increased.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time, hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Compact location problems with budget and communication constraints
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-05-01
We consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first distance function, under diameter- or sum-constraints with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal function value, violating the constraint with respect to the second distance function by a factor of at most β. We observe that in general, obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. We present efficient approximation algorithms for the case when both edge-weight functions obey the triangle inequality. For the problem of minimizing the diameter under a diameter constraint with respect to the second weight function, we provide a (2,2)-approximation algorithm. We also show that no polynomial-time algorithm can provide an (α, 2 − ε)- or (2 − ε, β)-approximation for any fixed ε > 0 and α, β ≥ 1, unless P = NP. This result is proved to remain true even if one fixes ε′ > 0 and allows the algorithm to place only 2p·|V|^{1/(6 − ε′)} facilities. Our techniques can be extended to the case when either the objective or the constraint is of sum-type, and also to handle additional weights on the nodes of the graph.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
Constraints in Genetic Programming
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
1996-01-01
Genetic programming refers to a class of genetic algorithms utilizing generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements -- one reduces only the number of program instances, the other reduces both the space of program structures as well as their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are not less efficient than the standard genetic programming operators if one preprocesses the constraints - the necessary mechanisms are identified.
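A constraint specification of the first kind can be as simple as a table of which symbols may appear beneath each function; tree generation then draws only from the allowed set, so every individual is valid by construction. A toy sketch (the function set and constraints are hypothetical, not the paper's specification language):

```python
import random

# Hypothetical function/terminal sets and constraints: ALLOWED[f] lists
# the symbols permitted as children of f, so invalid program structures
# are never generated in the first place.
FUNCS = {"+": 2, "*": 2, "neg": 1}
TERMS = ["x", "1"]
ALLOWED = {None: ["+", "*", "neg", "x", "1"],   # the root may be anything
           "+": ["+", "*", "neg", "x", "1"],
           "*": ["+", "*", "x", "1"],           # e.g. forbid neg under *
           "neg": ["x", "1"]}                   # neg applies to terminals only

def gen(parent=None, depth=3, rng=None):
    """Generate a random program tree that satisfies ALLOWED everywhere.
    At depth 0 only terminals are drawn, bounding the tree size."""
    rng = rng or random.Random(0)
    choices = [s for s in ALLOWED[parent] if depth > 0 or s in TERMS]
    sym = rng.choice(choices)
    if sym in TERMS:
        return sym
    return [sym] + [gen(sym, depth - 1, rng) for _ in range(FUNCS[sym])]

print(gen())  # a nested-list program tree, valid by construction
```

Crossover and mutation operators that respect the same table preserve validity, which is the offspring-validity guarantee described above.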
ERIC Educational Resources Information Center
Kolata, Gina
1985-01-01
To determine how hard it is for computers to solve problems, researchers have classified groups of problems (polynomial hierarchy) according to how much time they seem to require for their solutions. A difficult and complex proof is offered which shows that a combinatorial approach (using Boolean circuits) may resolve the problem. (JN)
Rigorous location of phase transitions in hard optimization problems.
Achlioptas, Dimitris; Naor, Assaf; Peres, Yuval
2005-06-01
It is widely believed that for many optimization problems, no algorithm is substantially more efficient than exhaustive search. This means that finding optimal solutions for many practical problems is completely beyond any current or projected computational capacity. To understand the origin of this extreme 'hardness', computer scientists, mathematicians and physicists have been investigating for two decades a connection between computational complexity and phase transitions in random instances of constraint satisfaction problems. Here we present a mathematically rigorous method for locating such phase transitions. Our method works by analysing the distribution of distances between pairs of solutions as constraints are added. By identifying critical behaviour in the evolution of this distribution, we can pinpoint the threshold location for a number of problems, including the two most-studied ones: random k-SAT and random graph colouring. Our results prove that the heuristic predictions of statistical physics in this context are essentially correct. Moreover, we establish that random instances of constraint satisfaction problems have solutions well beyond the reach of any analysed algorithm. PMID:15944693
Agyepong, Irene Akua
2015-03-01
A major constraint to the application of any form of knowledge and principles is the awareness, understanding, and acceptance of that knowledge and those principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be awareness and understanding of ST and how to apply it. This is a fundamental constraint: given the growing desire to apply ST concepts in health systems in LMICs and to understand and evaluate their effects, an essential first step is enabling a widespread as well as deeper understanding of ST and how to apply it. PMID:25774378
NASA Technical Reports Server (NTRS)
Knox, C. E.
1983-01-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
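At its simplest, such linearised descent planning is arithmetic: the time needed to lose the altitude at a planned descent rate, times ground speed (with wind folded into ground speed). A minimal sketch with illustrative numbers, not the T-39A performance schedule:

```python
def descent_distance(alt_cruise_ft, alt_fix_ft, gs_kt, rod_fpm):
    """Distance (nautical miles) needed to descend from cruise altitude
    to a metering-fix altitude at a constant rate of descent rod_fpm
    (ft/min) and ground speed gs_kt (knots, wind already folded in).
    A linearised sketch of the kind of planning the flight-management
    descent algorithm performs; all numbers are illustrative."""
    minutes = (alt_cruise_ft - alt_fix_ft) / rod_fpm
    return gs_kt * minutes / 60.0

# 35,000 ft to a 10,000 ft metering fix at 2,000 ft/min and 420 kt:
print(descent_distance(35_000, 10_000, 420, 2_000))  # → 87.5
```

Subtracting the distance (and time) from the metering fix backwards along the route gives the top-of-descent point, which is the quantity the cockpit calculator presents to the pilot.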
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm, the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR), are changed dynamically, and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710
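A minimal sketch of the DHSPM idea, ramping HMCR and PAR over the run and replacing the fixed pitch-adjustment bandwidth with polynomial mutation; the ramp schedules, mutation index, and test function are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def dhspm(f, bounds, hms=10, iters=2000, seed=0):
    """Harmony search with dynamic HMCR/PAR and polynomial mutation,
    minimising f over box bounds (lo, hi)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    hm = rng.uniform(lo, hi, (hms, dim))      # harmony memory
    cost = np.array([f(x) for x in hm])
    for t in range(iters):
        hmcr = 0.7 + 0.25 * t / iters         # dynamic memory-considering rate
        par = 0.1 + 0.4 * t / iters           # dynamic pitch-adjusting rate
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:
                new[j] = hm[rng.integers(hms), j]   # memory consideration
                if rng.random() < par:
                    # Polynomial mutation in place of a fixed bandwidth.
                    eta, u = 20.0, rng.random()
                    delta = ((2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5
                             else 1 - (2 * (1 - u)) ** (1 / (eta + 1)))
                    new[j] += delta * (hi[j] - lo[j])
            else:
                new[j] = rng.uniform(lo[j], hi[j])  # random consideration
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = cost.argmax()
        if c < cost[worst]:                   # greedy replacement of worst
            hm[worst], cost[worst] = new, c
    best = cost.argmin()
    return hm[best], float(cost[best])

sphere = lambda x: float(np.sum(x ** 2))     # stand-in for the ELD cost
lo, hi = np.full(3, -5.0), np.full(3, 5.0)
x_best, f_best = dhspm(sphere, (lo, hi))
print(f_best)  # small residual near the optimum at 0
```

The real ELD objective adds valve-point terms and power-balance constraints, typically handled with a penalty added to f.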
The Probabilistic Admissible Region with Additional Constraints
NASA Astrophysics Data System (ADS)
Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.
The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea
Improvements to the stand and hit algorithm
Boneh, A.; Boneh, S.; Caron, R.; Jibrin, S.
1994-12-31
The stand and hit algorithm is a probabilistic algorithm for detecting necessary constraints. The algorithm stands at a point in the feasible region and hits constraints by moving toward the boundary along randomly generated directions. In this talk we discuss methods for choosing the standing point, and we present the "undetected first" rule for determining the hit constraints.
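The stand-and-hit idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the fixed standing point, and the Gaussian direction sampling are assumptions made for the example. Along each random ray from the interior point, the first face hit (smallest positive step) is a necessary constraint.

```python
import random

def stand_and_hit(A, b, x0, trials=1000, seed=0):
    """Detect necessary (non-redundant) constraints of {x : A x <= b}.

    Stand at the interior point x0 and shoot random directions; the
    first constraint hit along each ray is necessary.
    """
    rng = random.Random(seed)
    n = len(x0)
    necessary = set()
    for _ in range(trials):
        # random direction on the unit sphere (Gaussian trick)
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        best_t, hit = None, None
        for i, (a, bi) in enumerate(zip(A, b)):
            ad = sum(aj * dj for aj, dj in zip(a, d))
            if ad > 1e-12:                      # ray moves toward this face
                t = (bi - sum(aj * xj for aj, xj in zip(a, x0))) / ad
                if best_t is None or t < best_t:
                    best_t, hit = t, i
        if hit is not None:
            necessary.add(hit)
    return necessary

# Square [-1, 1]^2 plus a redundant constraint x + y <= 3: rays from the
# interior point (0.5, 0.5) can only hit the four faces of the square.
A = [[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]]
b = [1, 1, 1, 1, 3]
print(sorted(stand_and_hit(A, b, [0.5, 0.5])))  # → [0, 1, 2, 3]
```

The redundant face is never reported, so repeated rays reveal exactly the necessary constraints.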
NASA Astrophysics Data System (ADS)
Yukita, Mihoko; Ptak, Andrew; Maccarone, Thomas J.; Hornschemeier, Ann E.; Wik, Daniel R.; Pottschmidt, Katja; Antoniou, Vallia; Baganoff, Frederick K.; Lehmer, Bret; Zezas, Andreas; Boyd, Patricia T.; Kennea, Jamie; Page, Kim L.
2016-04-01
Thanks to its better sensitivity and spatial resolution, NuSTAR allows us to investigate the E>10 keV properties of nearby galaxies. We now know that starburst galaxies, containing very young stellar populations, have X-ray spectra which drop quickly above 10 keV. We extend our investigation of hard X-ray properties to an older stellar population system, the bulge of M31. The NuSTAR and Swift simultaneous observations reveal a bright hard source dominating the M31 bulge above 20 keV, which is likely to be a counterpart of Swift J0042.6+4112 previously detected (but not classified) in the Swift BAT All-sky Hard X-ray Survey. This source had been classified as an XRB candidate in various Chandra and XMM-Newton studies; however, since it was not clear that it is the counterpart to the strong Swift J0042.6+4112 source at higher energies, the previous E < 10 keV observations did not generate much attention. The NuSTAR and Swift spectra of this source drop quickly at harder energies as observed in sources in starburst galaxies. The X-ray spectral properties of this source are very similar to those of an accreting pulsar; yet, we do not find a pulsation in the NuSTAR data. The existing deep HST images indicate no high mass donors at the location of this source, further suggesting that this source has an intermediate or low mass companion. The most likely scenario for the nature of this source is an X-ray pulsar with an intermediate/low mass companion similar to the Galactic Her X-1 system. We will also discuss other possibilities in more detail.
Improving Steiner trees of a network under multiple constraints
Krumke, S.O.; Noltemeier, H.; Marathe, M.V.; Ravi, R.; Ravi, S.S.
1996-07-01
The authors consider the problem of decreasing the edge weights of a given network so that the modified network has a Steiner tree in which two performance measures are simultaneously optimized. They formulate these problems, referred to as bicriteria network improvement problems, by specifying a budget on the total modification cost, a constraint on one of the performance measures and using the other performance measure as a minimization objective. Network improvement problems are known to be NP-hard even when only one performance measure is considered. The authors present the first polynomial time approximation algorithms for bicriteria network improvement problems. The approximation algorithms are for two pairs of performance measures, namely (diameter, total cost) and (degree, total cost). These algorithms produce solutions which are within a logarithmic factor of the optimum value of the minimization objective while violating the constraints only by a logarithmic factor. The techniques also yield approximation schemes when the given network has bounded treewidth. Many of the approximation results can be extended to more general network design problems.
Constraint Handling in Transmission Network Expansion Planning
NASA Astrophysics Data System (ADS)
Mallipeddi, R.; Verma, Ashu; Suganthan, P. N.; Panigrahi, B. K.; Bijwe, P. R.
Transmission network expansion planning (TNEP) is an important and complex problem in power systems. Metaheuristic techniques are increasingly used to solve TNEP because they handle inequality constraints and discrete variables more effectively than conventional gradient-based methods. Evolutionary algorithms (EAs) generally perform an unconstrained search and require an additional mechanism to handle constraints. Various constraint handling techniques have been proposed in the EA literature; however, TNEP is commonly solved with the penalty function approach, leaving the other methods untested. In this paper, we evaluate the performance of several constraint handling methods on TNEP: Superiority of Feasible Solutions (SF), Self-adaptive Penalty (SP), ε-Constraint (EC), Stochastic Ranking (SR), and the Ensemble of Constraint Handling Techniques (ECHT). The potential of the individual methods and their ensemble is evaluated using an IEEE 24-bus system, with and without security constraints.
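Of the methods listed, the Superiority of Feasible Solutions (SF) rule is simple enough to sketch: a feasible solution always beats an infeasible one; two feasible solutions are compared by objective value; two infeasible solutions by total constraint violation. The comparator below is a generic illustration of that rule, not the paper's TNEP-specific implementation; the function names are invented for the example.

```python
def violation(g_values):
    """Total violation for inequality constraints written as g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def sf_better(f1, g1, f2, g2):
    """SF rule: does solution 1 (objective f1, constraints g1) beat solution 2?"""
    v1, v2 = violation(g1), violation(g2)
    if v1 == 0 and v2 == 0:
        return f1 < f2          # both feasible: lower objective wins
    if v1 == 0 or v2 == 0:
        return v1 == 0          # feasible beats infeasible
    return v1 < v2              # both infeasible: smaller violation wins

# Minimizing f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0:
print(sf_better(1.0, [0.0], 4.0, [0.0]))    # both feasible → True
print(sf_better(4.0, [-1.0], 0.25, [0.5]))  # feasible beats infeasible → True
```

In an EA, this comparator replaces the plain objective comparison during selection, which is why no penalty weights need tuning.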
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, a key decision-support problem is how to arrange the supplier-to-customer assignment for many customers and suppliers and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) address this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be interpreted as delivery cost or time consumption. The MDVSP is NP-hard and cannot be solved to optimality within polynomially bounded computational time. Many approaches have been developed to tackle it, including exact algorithms (EA), the one-stage approach (OSA), two-phase heuristic methods (TPHM), tabu search (TSA), genetic algorithms (GA), and hierarchical multiplex structure (HIMS). Most of these methods are time consuming and run a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that it is an effective and efficient method for solving the MDVSP.
NASA Technical Reports Server (NTRS)
Zweben, Monte
1991-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
FATIGUE OF BIOMATERIALS: HARD TISSUES
Arola, D.; Bajaj, D.; Ivancik, J.; Majd, H.; Zhang, D.
2009-01-01
The fatigue and fracture behavior of hard tissues are topics of considerable interest today. This special group of organic materials comprises the highly mineralized and load-bearing tissues of the human body, and includes bone, cementum, dentin and enamel. An understanding of their fatigue behavior and the influence of loading conditions and physiological factors (e.g. aging and disease) on the mechanisms of degradation are essential for achieving lifelong health. But there is much more to this topic than the immediate medical issues. There are many challenges to characterizing the fatigue behavior of hard tissues, much of which is attributed to size constraints and the complexity of their microstructure. The relative importance of the constituents on the type and distribution of defects, rate of coalescence, and their contributions to the initiation and growth of cracks, are formidable topics that have not reached maturity. Hard tissues also provide a medium for learning and a source of inspiration in the design of new microstructures for engineering materials. This article briefly reviews fatigue of hard tissues with shared emphasis on current understanding, the challenges and the unanswered questions. PMID:20563239
On Constraints in Assembly Planning
Calton, T.L.; Jones, R.E.; Wilson, R.H.
1998-12-17
Constraints on assembly plans vary depending on product, assembly facility, assembly volume, and many other factors. Assembly costs and other measures to optimize vary just as widely. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their products and production environments. We begin this article by surveying the types of user criteria, both constraints and quality measures, that have been accepted by assembly planning systems to date. The survey is organized along several dimensions, including strategic vs. tactical criteria; manufacturing requirements vs. requirements of the automated planning process itself; and the information needed to assess compliance with each criterion. The latter strongly influences the efficiency of planning. We then focus on constraints. We describe a framework to support a wide variety of user constraints for intuitive and efficient assembly planning. Our framework expresses all constraints on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. Constraints are implemented as simple procedures that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to a number of complex assemblies, including one with 472 parts.
Reformulating Constraints for Compilability and Efficiency
NASA Technical Reports Server (NTRS)
Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin
1992-01-01
KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues arise out of the choice of a schema-based approach: (1) Compilability: can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? (2) Efficiency: if the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints that compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing to partially address these issues.
NASA Astrophysics Data System (ADS)
Li, Yuzhong
Using a genetic algorithm (GA) to solve the winner determination problem (WDP) with many bids and items, run under different distributions, is difficult: the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which degrade the efficiency and solution quality of the algorithm. This paper presents an improved Monkey-King Genetic Algorithm (MKGA) with three operators, preprocessing, bid insertion, and exchange recombination, together with a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA outperforms a simple GA (SGA) in required population size and computation. Problems that the traditional branch-and-bound algorithm finds hard to solve, the improved MKGA can solve with better results.
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerance requirements. Our approach incorporates both the redundancy-and-masking technique and the imprecise computation model. Since tasks in hard real-time systems have stringent timing constraints, redundancy and masking are more appropriate than rollback techniques, which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task against the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Wear of hard materials by hard particles
Hawk, Jeffrey A.
2003-10-01
Hard materials, such as WC-Co, boron carbide, titanium diboride and composite carbide made up of Mo2C and WC, have been tested in abrasion and erosion conditions. These hard materials showed negligible wear in abrasion against SiC particles and erosion using Al2O3 particles. The WC-Co materials have the highest wear rate of these hard materials and a very different material removal mechanism. Wear mechanisms for these materials were different for each material with the overall wear rate controlled by binder composition and content and material grain size.
Data assimilation with inequality constraints
NASA Astrophysics Data System (ADS)
Thacker, W. C.
If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
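The core step, treating an active inequality constraint as an error-free datum and letting background-error correlations imply corrections to the other variables, can be illustrated for the two-variable case. This is a sketch under the truncated-normal assumption discussed above; the helper name and the numbers are invented for the example. Setting the violating variable to its bound and conditioning on that exact value gives the other variable the usual Gaussian conditional-mean update.

```python
def enforce_bound(xb, B, index=0, bound=0.0):
    """Treat an active bound as an error-free datum in a two-variable
    analysis: pin x[index] to the bound and let the background-error
    covariance B (2x2) correct the other variable via correlation."""
    x = list(xb)
    i, j = index, 1 - index
    innovation = bound - xb[i]
    x[i] = bound
    x[j] = xb[j] + (B[i][j] / B[i][i]) * innovation   # correlated correction
    return x

# Background violates x0 >= 0; positive correlation drags x1 up with it.
xb = [-0.4, 1.0]
B = [[1.0, 0.5], [0.5, 1.0]]
print(enforce_bound(xb, B))   # → [0.0, 1.2]
```

With uncorrelated errors (B[0][1] = 0) the second variable would be untouched, which is exactly the "correct violations without worrying about correlated errors" shortcut the abstract contrasts against.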
Foundations of support constraint machines.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-02-01
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses. PMID:25380338
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Network interdiction with budget constraints
Santhi, Nankakishore; Pan, Feng
2009-01-01
Several scenarios exist in the modern interconnected world which call for efficient network interdiction algorithms. Applications are varied, including computer network security, prevention of spreading of Internet worms, policing international smuggling networks, controlling spread of diseases and optimizing the operation of large public energy grids. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs. Many of these questions turn out to be computationally hard to tackle. We present a particularly interesting practical form of the interdiction question which we show to be computationally tractable. A polynomial time algorithm is then presented for this problem.
Simulation results for the Viterbi decoding algorithm
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.
1972-01-01
Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
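As a concrete companion to the abstract, here is a minimal hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 code with the common (7, 5) octal generators. This is an illustrative sketch, not the simulation code summarized above: it uses Hamming branch metrics (hard-decision bit detection) and decodes a short terminated message with one injected channel error.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5) octal."""
    st = 0                                  # state = two most recent input bits
    out = []
    for u in bits:
        b1, b0 = st >> 1, st & 1
        out += [u ^ b1 ^ b0, u ^ b0]        # g1 = 111, g2 = 101 (binary)
        st = ((u << 1) | b1) & 3
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoder for the (7, 5) code above."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]            # encoder starts in the all-zero state
    history = []
    for k in range(0, len(received), 2):
        r1, r2 = received[k], received[k + 1]
        new, back = [INF] * 4, [None] * 4
        for st in range(4):
            if metrics[st] == INF:
                continue
            b1, b0 = st >> 1, st & 1
            for u in (0, 1):
                o1, o2 = u ^ b1 ^ b0, u ^ b0
                m = metrics[st] + (o1 != r1) + (o2 != r2)   # Hamming branch metric
                nxt = ((u << 1) | b1) & 3
                if m < new[nxt]:
                    new[nxt], back[nxt] = m, (st, u)
        metrics = new
        history.append(back)
    st = metrics.index(min(metrics))        # best surviving path
    decoded = []
    for back in reversed(history):
        st, u = back[st]
        decoded.append(u)
    return decoded[::-1]

msg = [1, 0, 1, 1, 0, 0]                    # message ending in two flushing zeros
coded = conv_encode(msg)
coded[3] ^= 1                               # inject a single hard-decision error
print(viterbi_decode(coded) == msg)         # → True
```

The free distance of this code is 5, so a single channel error is always corrected; soft-decision detectors replace the Hamming metric with a correlation metric, which is the roughly 2 dB gain the parametric studies quantify.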
Parallel-Batch Scheduling and Transportation Coordination with Waiting Time Constraint
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing, subject to a waiting time constraint. The orders, located at different suppliers, are transported by vehicles to a manufacturing facility for further processing. One vehicle can load only one order per shipment. Each order arriving at the facility must be processed within a limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule that minimizes the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a pseudopolynomial-time dynamic programming algorithm is provided to establish its ordinary NP-hardness. A polynomial-time optimal algorithm is presented to solve a special case with equal processing times and equal transportation times for each order. PMID:24883385
Ordering of hard particles between hard walls
NASA Astrophysics Data System (ADS)
Chrzanowska, A.; Teixeira, P. I. C.; Ehrentraut, H.; Cleaver, D. J.
2001-05-01
The structure of a fluid of hard Gaussian overlap particles of elongation κ = 5, confined between two hard walls, has been calculated from density-functional theory and Monte Carlo simulations. By using the exact expression for the excluded volume kernel (Velasco E and Mederos L 1998 J. Chem. Phys. 109 2361) and solving the appropriate Euler-Lagrange equation entirely numerically, we have been able to extend our theoretical predictions into the nematic phase, which had up till now remained relatively unexplored due to the high computational cost. Simulation reveals a rich adsorption behaviour with increasing bulk density, which is described semi-quantitatively by the theory without any adjustable parameters.
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, G K
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront-reducing orderings have a wide range of applications, including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill-reducing orderings are generally limited to, but an inextricable part of, sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems; (2) introduce new and improved algorithms that address deficiencies found in previous heuristics; (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency; and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation, the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research, we realized that the object-oriented implementation enabled augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
Constraint-based interactive assembly planning
Jones, R.E.; Wilson, R.H.; Calton, T.L.
1997-03-01
The constraints on assembly plans vary depending on the product, assembly facility, assembly volume, and many other factors. This paper describes the principles and implementation of a framework that supports a wide variety of user-specified constraints for interactive assembly planning. Constraints from many sources can be expressed on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. All constraints are implemented as filters that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Replanning is fast enough to enable a natural plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to several complex assemblies. 12 refs., 2 figs., 3 tabs.
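The filter idea, constraints as procedures that accept or reject operations proposed by the planner, can be sketched generically. The greedy planner below is a toy stand-in, not the paper's planner, and the part names and helper functions are invented for the example.

```python
def ordering_constraint(before, after):
    """Build a filter requiring part `before` to be mated before `after`."""
    def accept(operation, assembled):
        # reject mating `after` while `before` is still unassembled
        return not (operation == after and before not in assembled)
    return accept

def plan(parts, filters):
    """Toy greedy planner: propose each remaining part and commit to the
    first operation that every filter accepts."""
    assembled, sequence = set(), []
    while len(assembled) < len(parts):
        for part in sorted(parts - assembled):
            if all(f(part, assembled) for f in filters):
                assembled.add(part)
                sequence.append(part)
                break
        else:
            return None                  # constraints are unsatisfiable
    return sequence

parts = {"base", "gear", "cover"}
filters = [ordering_constraint("base", "gear"),
           ordering_constraint("gear", "cover")]
print(plan(parts, filters))             # → ['base', 'gear', 'cover']
```

Because each constraint is just a predicate over (operation, state), new constraint types compose without touching the planner, which is the extensibility the framework is after.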
Constraint Embedding for Multibody System Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan
2009-01-01
This paper describes a constraint embedding approach for the handling of local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert the system into a tree-topology system. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics, avoiding the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms and the extensions for handling embedded constraints, and concludes with some examples of such constraints.
Extensions of output variance constrained controllers to hard constraints
NASA Technical Reports Server (NTRS)
Skelton, R.; Zhu, G.
1989-01-01
Covariance controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upper bounds on the H-infinity, L-infinity, and L2 norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples. These results are illustrated for continuous- and discrete-time systems.
Using constraints to model disjunctions in rule-based reasoning
Liu, Bing; Jaffar, Joxan
1996-12-31
Rule-based systems have long been widely used for building expert systems to perform practical knowledge-intensive tasks. One important issue that has not been addressed satisfactorily is disjunction, and this significantly limits their problem-solving power. In this paper, we show that some important types of disjunction can be modeled with Constraint Satisfaction Problem (CSP) techniques, employing their simple representation schemes and efficient algorithms. A key idea is that disjunctions are represented as constraint variables, relations among disjunctions are represented as constraints, and rule chaining is integrated with constraint solving. In this integration, a constraint variable or a constraint is regarded as a special fact, and rules can be written with constraints and with information about constraints. Chaining of rules may trigger constraint propagation, and constraint propagation may cause firing of rules. A prototype system (called CFR) based on this idea has been implemented.
Session: Hard Rock Penetration
Tennyson, George P. Jr.; Dunn, James C.; Drumheller, Douglas S.; Glowka, David A.; Lysne, Peter
1992-01-01
This session at the Geothermal Energy Program Review X: Geothermal Energy and the Utility Market consisted of five presentations: ''Hard Rock Penetration - Summary'' by George P. Tennyson, Jr.; ''Overview - Hard Rock Penetration'' by James C. Dunn; ''An Overview of Acoustic Telemetry'' by Douglas S. Drumheller; ''Lost Circulation Technology Development Status'' by David A. Glowka; ''Downhole Memory-Logging Tools'' by Peter Lysne.
NASA Technical Reports Server (NTRS)
Hauser, D. L.; Buras, D. F.; Corbin, J. M.
1987-01-01
Rubber-hardness tester modified for use on rigid polyurethane foam. Provides objective basis for evaluation of improvements in foam manufacturing and inspection. Typical acceptance criterion requires minimum hardness reading of 80 on modified tester. With adequate correlation tests, modified tester used to measure indirectly tensile and compressive strengths of foam.
Adiabatic Quantum Programming: Minor Embedding With Hard Faults
Klymko, Christine F; Sullivan, Blair D; Humble, Travis S
2013-01-01
Adiabatic quantum programming defines the time-dependent mapping of a quantum algorithm into the hardware or logical fabric. An essential programming step is the embedding of problem-specific information into the logical fabric to define the quantum computational transformation. We present algorithms for embedding arbitrary instances of the adiabatic quantum optimization algorithm into a square lattice of specialized unit cells. Our methods are shown to be extensible in fabric growth, linear in time, and quadratic in logical footprint. In addition, we provide methods for accommodating hard faults in the logical fabric without invoking approximations to the original problem. These hard fault-tolerant embedding algorithms are expected to prove useful for benchmarking the adiabatic quantum optimization algorithm on existing quantum logical hardware. We illustrate this versatility through numerical studies of embeddability versus hard fault rates in square lattices of complete bipartite unit cells.
Cugell, D W
1992-06-01
Hard metal is a mixture of tungsten carbide and cobalt, to which small amounts of other metals may be added. It is widely used for industrial purposes whenever extreme hardness and high temperature resistance are needed, such as for cutting tools, oil well drilling bits, and jet engine exhaust ports. Cobalt is the component of hard metal that can be a health hazard. Respiratory diseases occur in workers exposed to cobalt--either in the production of hard metal, from machining hard metal parts, or from other sources. Adverse pulmonary reactions include asthma, hypersensitivity pneumonitis, and interstitial fibrosis. A peculiar, almost unique form of lung fibrosis, giant cell interstitial pneumonia, is closely linked with cobalt exposure. PMID:1511554
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Nurse scheduling is a long-standing problem aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave, and undesirable work schedules are partly responsible for that working condition. Essentially, there is a mismatch between the head nurse's responsibilities and the nurses' needs. In particular, because nurse preferences weigh heavily, the central challenge of nurse scheduling is to foster tolerance between both parties during shift assignment in real working scenarios. Flexibility in shift assignment is hard to achieve when diverse nurse requests must be satisfied while imperative ward coverage is upheld. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in a nurse scheduling problem (NSP). The restrictions of the EA are discussed, and enhancements of the EA operators are suggested so that the EA exhibits a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which the EA handles with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation as well as flexibility in the search, corresponding to the employment of exploration and exploitation principles.
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
Boolean constraint satisfaction problems for reaction networks
NASA Astrophysics Data System (ADS)
Seganti, A.; De Martino, A.; Ricci-Tersenghi, F.
2013-09-01
We define and study a class of (random) Boolean constraint satisfaction problems representing minimal feasibility constraints for networks of chemical reactions. The constraints we consider encode, respectively, for hard mass-balance conditions (where the consumption and production fluxes of each chemical species are matched) and for soft mass-balance conditions (where a net production of compounds is in principle allowed). We solve these constraint satisfaction problems under the Bethe approximation and derive the corresponding belief propagation equations, which involve eight different messages. The statistical properties of ensembles of random problems are studied via the population dynamics method. By varying a chemical potential attached to the activity of reactions, we find first-order transitions and strong hysteresis, suggesting a non-trivial structure in the space of feasible solutions.
ERIC Educational Resources Information Center
Stocker, H. Robert; Hilton, Thomas S. E.
1991-01-01
Suggests strategies that make hard disk organization easy and efficient, such as making, changing, and removing directories; grouping files by subject; naming files effectively; backing up efficiently; and using PATH. (JOW)
A Space-Bounded Anytime Algorithm for the Multiple Longest Common Subsequence Problem
Yang, Jiaoyun; Xu, Yun; Shang, Yi; Chen, Guoliang
2014-01-01
The multiple longest common subsequence (MLCS) problem, related to the identification of sequence similarity, is an important problem in many fields. As an NP-hard problem, its exact algorithms have difficulty in handling large-scale data and time- and space-efficient algorithms are required in real-world applications. To deal with time constraints, anytime algorithms have been proposed to generate good solutions with a reasonable time. However, there exists little work on space-efficient MLCS algorithms. In this paper, we formulate the MLCS problem into a graph search problem and present two space-efficient anytime MLCS algorithms, SA-MLCS and SLA-MLCS. SA-MLCS uses an iterative beam widening search strategy to reduce space usage during the iterative process of finding better solutions. Based on SA-MLCS, SLA-MLCS, a space-bounded algorithm, is developed to avoid space usage from exceeding available memory. SLA-MLCS uses a replacing strategy when SA-MLCS reaches a given space bound. Experimental results show SA-MLCS and SLA-MLCS use an order of magnitude less space and time than the state-of-the-art approximate algorithm MLCS-APP while finding better solutions. Compared to the state-of-the-art anytime algorithm Pro-MLCS, SA-MLCS and SLA-MLCS can solve an order of magnitude larger size instances. Furthermore, SLA-MLCS can find much better solutions than SA-MLCS on large size instances. PMID:25400485
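The iterative beam-widening idea can be illustrated with a minimal sketch (this is not the SA-MLCS implementation; the function name, the fixed width schedule, and the index-sum pruning heuristic are assumptions for illustration). States are tuples of per-sequence indices, a successor extends the common subsequence by one character occurring after the current index in every sequence, and each pass repeats the search with a wider beam:

```python
def mlcs_beam(seqs, widths=(1, 2, 4, 8)):
    """Anytime MLCS sketch via iterative beam widening."""
    # Only characters present in every sequence can extend a solution.
    alphabet = set(seqs[0]).intersection(*(set(s) for s in seqs[1:]))
    best = ""
    for width in widths:              # anytime behavior: widen the beam each pass
        frontier = [((-1,) * len(seqs), "")]
        while frontier:
            successors = []
            for idxs, sub in frontier:
                for ch in alphabet:
                    nxt = []
                    for s, i in zip(seqs, idxs):
                        j = s.find(ch, i + 1)   # next occurrence after index i
                        if j < 0:
                            break
                        nxt.append(j)
                    else:
                        successors.append((tuple(nxt), sub + ch))
            if not successors:
                break
            # Pruning heuristic (an assumption of this sketch): prefer states
            # that consume as little of every sequence as possible.
            successors.sort(key=lambda st: sum(st[0]))
            frontier = successors[:width]
            if len(frontier[0][1]) > len(best):
                best = frontier[0][1]
    return best
```

Each pass is cheap when the beam is narrow, so a good solution is available early; later, wider passes refine it, which mirrors the anytime behavior the abstract describes.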
Genetic algorithm-based neural fuzzy decision tree for mixed scheduling in ATM networks.
Lin, Chin-Teng; Chung, I-Fang; Pu, Her-Chang; Lee, Tsern-Huei; Chang, Jyh-Yeong
2002-01-01
Future broadband integrated services networks based on asynchronous transfer mode (ATM) technology are expected to support multiple types of multimedia information with diverse statistical characteristics and quality of service (QoS) requirements. To meet these requirements, efficient scheduling methods are important for traffic control in ATM networks. Among general scheduling schemes, the rate monotonic algorithm is simple enough to be used in high-speed networks, but does not attain the high system utilization of the deadline driven algorithm. However, the deadline driven scheme is computationally complex and hard to implement in hardware. The mixed scheduling algorithm is a combination of the rate monotonic algorithm and the deadline driven algorithm; thus it can provide most of the benefits of these two algorithms. In this paper, we use the mixed scheduling algorithm to achieve high system utilization under the hardware constraint. Because there is no analytic method for schedulability testing of mixed scheduling, we propose a genetic algorithm-based neural fuzzy decision tree (GANFDT) to realize it in a real-time environment. The GANFDT combines a GA and a neural fuzzy network into a binary classification tree. This approach also exploits the power of the classification tree. Simulation results show that the GANFDT provides an efficient way of carrying out mixed scheduling in ATM networks. PMID:18244889
How 'hard' are hard-rock deformations?
NASA Astrophysics Data System (ADS)
van Loon, A. J.
2003-04-01
The study of soft-rock deformations has received increasing attention during the past two decades, and much progress has been made in the understanding of their genesis. It is also recognized now that soft-rock deformations—which show a wide variety in size and shape—occur frequently in sediments deposited in almost all types of environments. In spite of this, deformations occurring in lithified rocks are still relatively rarely attributed to sedimentary or early-diagenetic processes. Particularly faults in hard rocks are still commonly ascribed to tectonics, commonly without a discussion about a possible non-tectonic origin at a stage that the sediments were still unlithified. Misinterpretations of both the sedimentary and the structural history of hard-rock successions may result from the negligence of a possible soft-sediment origin of specific deformations. It is therefore suggested that a re-evaluation of these histories, keeping the present-day knowledge about soft-sediment deformations in mind, may give new insights into the geological history of numerous sedimentary successions in which the deformations have not been studied from both a sedimentological and a structural point of view.
Linear-time algorithms for scheduling on parallel processors
Monma, C.L.
1982-01-01
Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints. This improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints, the number-of-late-tasks problem without precedence constraints, and the one machine maximum lateness problem with general precedence constraints. 5 references.
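The scheduling model above (unit-length tasks, m identical processors, precedence constraints) can be illustrated with a simple greedy step scheduler; this is a baseline sketch, not one of the paper's linear-time algorithms, and the function name is an assumption:

```python
def list_schedule(tasks, preds, m):
    """Greedy step scheduler: each unit time step runs up to m tasks
    whose predecessors have all completed in earlier steps."""
    done, start, t = set(), {}, 0
    while len(done) < len(tasks):
        ready = [x for x in tasks
                 if x not in done
                 and all(p in done for p in preds.get(x, ()))]
        if not ready:
            raise ValueError("precedence graph has a cycle")
        batch = ready[:m]
        for x in batch:
            start[x] = t
        done.update(batch)   # unit-length tasks finish within the step
        t += 1
    return start, t          # t is the makespan in time steps
```

On four tasks a, b, c, d with c depending on a and b, and d on c, two processors finish in three steps: a and b in parallel, then c, then d.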
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints
Xu, You; Chen, Yixin
2008-06-28
We present a novel parallel framework for solving large-scale continuous nonlinear optimization problems based on constraint partitioning. The framework distributes constraints and variables to parallel processors and uses an existing solver to handle the partitioned subproblems. In contrast to most previous decomposition methods that require either separability or convexity of constraints, our approach is based on a new constraint partitioning theory and can handle nonconvex problems with inseparable global constraints. We also propose a hypergraph partitioning method to recognize the problem structure. Experimental results show that the proposed parallel algorithm can efficiently solve some difficult test cases.
Gimbel, C B
2000-10-01
A more conservative, less invasive treatment of the carious lesion has intrigued researchers and clinicians for decades. With over 170 million restorations placed worldwide each year, many of which could be treated using a laser, there is an increasing need for understanding hard tissue laser procedures. A historical review of past scientific and clinical hard tissue research, biophysics, and histology is presented. A complete review of present applications and procedures, along with their capabilities and limitations, gives the clinician a better understanding. Clinical case studies, together with guidelines for tooth preparation, hard tissue laser applications, and technological advances for diagnosis and treatment, offer the clinician a look into the future. PMID:11048281
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-01-01
Orthogonal frequency division multiple access (OFDMA), which is widely used in the wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on the rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is a NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm ACO-SPA is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. For the first one, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, and the algorithm is based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. To support it, plenty of simulation results are presented. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
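The stated principle, that the sum of a subchannel's power and the reciprocal of its channel-to-noise ratio is equal for each user across subchannels, is the classical water-filling condition. The following is a hedged sketch of that condition alone, not the paper's ACO-SPA algorithm; the function name and the drop-and-recompute loop are assumptions:

```python
def waterfill(total_power, cnrs):
    """Find a water level L so that p_k = L - 1/cnr_k >= 0 on active
    subchannels and the powers sum to total_power."""
    active = list(range(len(cnrs)))
    while active:
        # Water level implied by the equal-sum condition on active channels.
        level = (total_power + sum(1.0 / cnrs[k] for k in active)) / len(active)
        powers = {k: level - 1.0 / cnrs[k] for k in active}
        if min(powers.values()) >= 0.0:
            out = [0.0] * len(cnrs)
            for k, p in powers.items():
                out[k] = p
            return out
        # Channels whose computed power is negative get no power;
        # drop them and recompute the level for the rest.
        active = [k for k in active if powers[k] > 0.0]
    return [0.0] * len(cnrs)
```

With channel-to-noise ratios 1.0 and 0.5 and unit total power, the level is 2, so the stronger subchannel receives all the power and both satisfy p_k + 1/cnr_k = 2.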
ERIC Educational Resources Information Center
Sturgeon, Julie
2008-01-01
Acting on information from students who reported seeing a classmate looking at inappropriate material on a school computer, school officials used forensics software to plunge the depths of the PC's hard drive, searching for evidence of improper activity. Images were found in a deleted Internet Explorer cache as well as deleted file space.…
ERIC Educational Resources Information Center
Berry, John N., III
2009-01-01
Roberta Stevens and Kent Oliver are campaigning hard for the presidency of the American Library Association (ALA). Stevens is outreach projects and partnerships officer at the Library of Congress. Oliver is executive director of the Stark County District Library in Canton, Ohio. They have debated, discussed, and posted web sites, Facebook pages,…
ERIC Educational Resources Information Center
Parrino, Frank M.
2003-01-01
Interviews with school board members and administrators produced a list of suggestions for balancing a budget in hard times. Among these are changing calendars and schedules to reduce heating and cooling costs; sharing personnel; rescheduling some extracurricular activities; and forming cooperative agreements with other districts. (MLF)
Berger, E.L.; Collins, J.C.; Soper, D.E.; Sterman, G.
1986-03-01
I discuss events in high energy hadron collisions that contain a hard scattering, in the sense that very heavy quarks or high-p_T jets are produced, yet are diffractive, in the sense that one of the incident hadrons is scattered with only a small energy loss. 8 refs.
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search
Rajalakshmi, S.; Maguteeswaran, R.
2015-01-01
A hybrid computational intelligence algorithm, formed by integrating the salient features of two different heuristic techniques, is proposed to solve the multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is a difficult problem: an optimum route must satisfy a variety of necessary constraints, and the problem is NP-hard due to the constantly varying topology of MANETs. A solution technique that addresses these challenges is therefore needed. This paper proposes a hybrid algorithm that modifies the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism. This mechanism is derived from the differential evolution (DE) algorithm, in which candidates learn from diversified search regions. The CSA thus acts as the main search procedure guided by the DE-derived updating mechanism; the combination is called tuned CSA (TCSA). Numerical simulations on MANETs demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. Comparisons with existing techniques in the literature establish the superiority of the proposed method. PMID:26495429
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows that the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
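The randomized task-ordering step can be sketched as a randomized topological sort (a sketch only; the function name and data layout are assumptions, and the heuristic mapping from order to schedule is omitted). The next task is drawn uniformly from the set of tasks whose predecessors are all already placed, so every precedence-feasible order is reachable:

```python
import random

def random_task_order(tasks, preds, seed=None):
    """Randomized topological sort over a precedence DAG."""
    rng = random.Random(seed)
    # Remaining unplaced predecessors for each task.
    waiting = {t: set(preds.get(t, ())) for t in tasks}
    order = []
    while waiting:
        ready = [t for t, ps in waiting.items() if not ps]
        if not ready:
            raise ValueError("precedence graph has a cycle")
        t = rng.choice(ready)        # uniform choice among ready tasks
        order.append(t)
        del waiting[t]
        for ps in waiting.values():
            ps.discard(t)
    return order
```

Repeated calls with different seeds yield different feasible orders, which is what the succession of randomized orderings in RS relies on.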
ERIC Educational Resources Information Center
McNeil, Michele
2008-01-01
Hard-to-grasp dollar amounts are forcing real cuts in K-12 education at a time when the cost of fueling buses and providing school lunches is increasing and the demands of the federal No Child Left Behind Act still loom larger over states and districts. "One of the real challenges is to continue progress in light of the economy," said Gale Gaines,…
ERIC Educational Resources Information Center
Mathews, Jay
2009-01-01
In 1994, fresh from a two-year stint with Teach for America, Mike Feinberg and Dave Levin inaugurated the Knowledge Is Power Program (KIPP) in Houston with an enrollment of 49 5th graders. By this Fall, 75 KIPP schools will be up and running, setting children from poor and minority families on a path to college through a combination of hard work,…
Mansur, Louis K; Bhattacharya, R; Blau, Peter Julian; Clemons, Art; Eberle, Cliff; Evans, H B; Janke, Christopher James; Jolly, Brian C; Lee, E H; Leonard, Keith J; Trejo, Rosa M; Rivard, John D
2010-01-01
High energy ion beam surface treatments were applied to a selected group of polymers. Of the six materials in the present study, four were thermoplastics (polycarbonate, polyethylene, polyethylene terephthalate, and polystyrene) and two were thermosets (epoxy and polyimide). The particular epoxy evaluated in this work is one of the resins used in formulating fiber reinforced composites for military helicopter blades. Measures of mechanical properties of the near surface regions were obtained by nanoindentation hardness and pin on disk wear. Attempts were also made to measure erosion resistance by particle impact. All materials were hardness tested. Pristine materials were very soft, having values in the range of approximately 0.1 to 0.5 GPa. Ion beam treatment increased hardness by up to 50 times compared to untreated materials. For reference, all materials were hardened to values higher than those typical of stainless steels. Wear tests were carried out on three of the materials, PET, PI and epoxy. On the ion beam treated epoxy no wear could be detected, whereas the untreated material showed significant wear.
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly, using their explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single-level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present work, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single-level solutions and in multilevel studies where the equality constraints have been handled indirectly.
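The indirect approach described above, using a coupling equality constraint's explicit functional relation to eliminate a variable and optimize in the reduced design space, can be sketched on a toy problem. The example problem, the golden-section solver, and the function names are assumptions for illustration, not the paper's GRG-based method:

```python
def eliminate_and_minimize(f, g, lo, hi, tol=1e-9):
    """Minimize f(x, y) subject to the coupling equality y = g(x) by
    substituting y out, then solving the reduced one-variable problem
    min_x f(x, g(x)) with golden-section search on [lo, hi]
    (assumes the reduced objective is unimodal on the interval)."""
    phi = (5 ** 0.5 - 1) / 2          # golden ratio conjugate
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c, g(c)) < f(d, g(d)):
            b = d                     # minimum lies in [a, d]
        else:
            a = c                     # minimum lies in [c, b]
    x = (a + b) / 2
    return x, g(x)

# Toy coupled problem: minimize x^2 + y^2 subject to y = 1 - x.
x, y = eliminate_and_minimize(lambda x, y: x * x + y * y,
                              lambda x: 1.0 - x, -2.0, 2.0)
```

After elimination the reduced objective is x^2 + (1 - x)^2, whose minimum is at x = y = 0.5; the search converges there without ever forming the equality constraint explicitly.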
Highly irregular quantum constraints
NASA Astrophysics Data System (ADS)
Klauder, John R.; Little, J. Scott
2006-05-01
Motivated by a recent paper of Louko and Molgado, we consider a simple system with a single classical constraint R(q) = 0. If q_l denotes a generic solution to R(q) = 0, our examples include cases where R'(q_l) ≠ 0 (regular constraint) and R'(q_l) = 0 (irregular constraint) of varying order as well as the case where R(q) = 0 for an interval, such as a <= q <= b. Quantization of irregular constraints is normally not considered; however, using the projection operator formalism we provide a satisfactory quantization which reduces to the constrained classical system when ℏ → 0. It is noteworthy that irregular constraints change the observable aspects of a theory as compared to strictly regular constraints.
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc, a priori choice of the network topology. The constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., pocket algorithm, thermal perceptron, and barycentric correction procedure). Several constructive algorithms including tower, pyramid, tiling, upstart, and perceptron cascade have been proposed for 2-category pattern classification. These algorithms differ in terms of their topological and connectivity constraints as well as the training strategies used for individual neurons.
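One of the stable perceptron variants cited above, the pocket algorithm, can be sketched minimally (the deterministic cycling order, epoch count, and function names are assumptions for illustration):

```python
def predict(w, x):
    """Threshold logic unit: sign of w.x plus bias (last weight entry)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1 if s >= 0 else -1

def pocket_train(X, y, epochs=25):
    """Perceptron updates, but the best weights seen so far (by number
    of correctly classified examples) are kept 'in the pocket', so
    non-separable data still yields a usable classifier."""
    w = [0.0] * (len(X[0]) + 1)
    pocket, best = list(w), -1
    for _ in range(epochs):
        for x, t in zip(X, y):          # cycle over examples in order
            if predict(w, x) != t:
                for j in range(len(x)):
                    w[j] += t * x[j]    # standard perceptron update
                w[-1] += t
                correct = sum(predict(w, xi) == ti for xi, ti in zip(X, y))
                if correct > best:
                    best, pocket = correct, list(w)
    return pocket
```

On the linearly separable AND data below, the updates converge within a few epochs and the pocket holds weights that classify every example correctly.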
On the Complexity of Constraint-Based Theory Extraction
NASA Astrophysics Data System (ADS)
Boley, Mario; Gärtner, Thomas
In this paper we rule out output polynomial listing algorithms for the general problem of discovering theories for a conjunction of monotone and anti-monotone constraints as well as for the particular subproblem in which all constraints are frequency-based. For the general problem we prove a concrete exponential lower time bound that holds for any correct algorithm and even in cases in which the size of the theory as well as the only previous bound are constant. For the case of frequency-based constraints our result holds unless P = NP. These findings motivate further research to identify tractable subproblems and justify approaches with exponential worst case complexity.
Ultrasonic characterization of materials hardness
Badidi Bouda A; Benchaala; Alem
2000-03-01
In this paper, an experimental technique has been developed to measure the velocities and attenuation of ultrasonic waves through steel of variable hardness. A correlation between ultrasonic measurements and steel hardness was investigated. PMID:10829663
Quiet planting in the locked constraints satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2009-01-01
We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and first and second moment considerations; in particular the connection with the reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region instances have with high probability a single satisfying assignment.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of fault-reasoning cost as the target function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix constrains the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can reason about and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and
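The pseudo-random-proportional rule mentioned above originates in ant colony system variants; a generic sketch is below. The candidate list is assumed to have already been filtered by the reachability matrix; the parameter names (tau for pheromone, eta for heuristic desirability, q0 for the exploitation probability) are conventional, not taken from the paper.

```python
import random

def choose_next(candidates, tau, eta, alpha=1.0, beta=2.0, q0=0.9, rng=random):
    """Pseudo-random-proportional rule: with probability q0 exploit the
    best-scoring node; otherwise sample a node with probability
    proportional to tau[j]**alpha * eta[j]**beta."""
    scores = {j: (tau[j] ** alpha) * (eta[j] ** beta) for j in candidates}
    if rng.random() < q0:
        return max(scores, key=scores.get)       # exploitation
    total = sum(scores.values())                 # biased exploration
    r, acc = rng.random() * total, 0.0
    for j in candidates:
        acc += scores[j]
        if acc >= r:
            return j
    return candidates[-1]                        # guard against rounding
```

Tuning q0 trades off exploitation of the current pheromone trails against exploration, which is one lever for balancing conflicting objectives.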
Monte Carlo algorithm for least dependent non-negative mixture decomposition.
Astakhov, Sergey A; Stögbauer, Harald; Kraskov, Alexander; Grassberger, Peter
2006-03-01
We propose a simulated annealing algorithm (stochastic non-negative independent component analysis, SNICA) for blind decomposition of linear mixtures of non-negative sources with non-negative coefficients. The demixing is based on a Metropolis-type Monte Carlo search for least dependent components, with the mutual information between recovered components as a cost function and their non-negativity as a hard constraint. Elementary moves are shears in two-dimensional subspaces and rotations in three-dimensional subspaces. The algorithm is geared at decomposing signals whose probability densities peak at zero, the case typical in analytical spectroscopy and multivariate curve resolution. The decomposition performance on large samples of synthetic mixtures and experimental data is much better than that of traditional blind source separation methods based on principal component analysis (MILCA, FastICA, RADICAL) and chemometrics techniques (SIMPLISMA, ALS, BTEM). PMID:16503615
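The role of a hard constraint inside a Metropolis-type search can be illustrated with a one-dimensional toy sketch (not the SNICA demixing itself, whose elementary moves are shears and rotations of a demixing matrix): candidate moves that leave the feasible set are rejected outright, so the chain never visits infeasible states. The cooling schedule and all names below are our assumptions.

```python
import math, random

def anneal(cost, x0, feasible, steps=5000, t0=1.0, seed=1):
    """Metropolis annealing with a *hard* constraint: infeasible
    proposals are rejected before the cost is even evaluated."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = t0 / (1 + k)                  # simple 1/k cooling schedule
        cand = x + rng.gauss(0.0, 0.5)
        if not feasible(cand):
            continue                      # hard constraint: reject outright
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy problem: unconstrained minimum at x = -1, but the feasible set is
# x >= 0, so the constrained optimum is at the boundary x = 0.
best, fbest = anneal(lambda x: (x + 1.0) ** 2, 2.0, lambda x: x >= 0.0)
```

In SNICA the analogous check is non-negativity of the recovered components, with the mutual information between components as the cost.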
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Technology Transfer Automated Retrieval System (TEKTRAN)
Hard pans, hard layers, or compacted horizons, either surface or subsurface, are universal problems that limit crop production. Hard layers can be caused by traffic or soil genetic properties that result in horizons with high density or cemented soil particles; these horizons have elevated penetrati...
How Do You Like Your Equilibrium Selection Problems? Hard, or Very Hard?
NASA Astrophysics Data System (ADS)
Goldberg, Paul W.
The PPAD-completeness of Nash equilibrium computation is taken as evidence that the problem is computationally hard in the worst case. This evidence is necessarily rather weak, in the sense that PPAD is only known to lie "between P and NP", and there is not a strong prospect of showing it to be as hard as NP. Of course, the problem of finding an equilibrium that has certain sought-after properties should be at least as hard as finding an unrestricted one; thus we have, for example, the NP-hardness of finding equilibria that are socially optimal (or indeed that have various efficiently checkable properties), the results of Gilboa and Zemel [6], and Conitzer and Sandholm [3]. In the talk I will give an overview of this topic, and a summary of recent progress showing that the equilibria that are found by the Lemke-Howson algorithm, as well as related homotopy methods, are PSPACE-complete to compute. Thus we show that there are no short cuts to the Lemke-Howson solutions, subject only to the hardness of PSPACE. I mention some open problems.
Sheinberg, Haskell
1986-01-01
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 weight percent boron carbide and the remainder a metal mixture comprising from 70 to 90 percent tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 to 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Sheinberg, H.
1983-07-26
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 wt % boron carbide and the remainder a metal mixture comprising from 70 to 90% tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 and 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
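The relaxation described above can be summarized as follows, in our own notation (Γ a slack variable, n̂ the sensor pointing direction, θ the pointing half-angle); this is a sketch of the published convexification idea, not the flight software's formulation.

```latex
% Non-convex originals: thrust lower bound and pointing cone
%   0 < \rho_1 \le \lVert T(t)\rVert \le \rho_2, \qquad
%   \hat{n}^{\mathsf T} T(t) \ge \lVert T(t)\rVert \cos\theta
% Relaxation via a slack variable \Gamma(t):
\lVert T(t)\rVert \le \Gamma(t), \qquad
\rho_1 \le \Gamma(t) \le \rho_2, \qquad
\hat{n}^{\mathsf T} T(t) \ge \Gamma(t)\cos\theta
```

Each relaxed constraint is convex in (T, Γ); "lossless" means an optimal solution of the relaxed problem satisfies ‖T(t)‖ = Γ(t), so it is also feasible and optimal for the original non-convex problem.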
Creating Positive Task Constraints
ERIC Educational Resources Information Center
Mally, Kristi K.
2006-01-01
Constraints are characteristics of the individual, the task, or the environment that mold and shape movement choices and performances. Constraints can be positive (encouraging proficient movements) or negative (discouraging movement or promoting ineffective movements). Physical educators must analyze, evaluate, and determine the effect various…
Constraint Reasoning Over Strings
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Golden, Keith; Pang, Wanlin
2003-01-01
This paper discusses an approach to representing and reasoning about constraints over strings. We discuss how many string domains can often be concisely represented using regular languages, and how constraints over strings, and domain operations on sets of strings, can be carried out using this representation.
Credit Constraints in Education
ERIC Educational Resources Information Center
Lochner, Lance; Monge-Naranjo, Alexander
2012-01-01
We review studies of the impact of credit constraints on the accumulation of human capital. Evidence suggests that credit constraints have recently become important for schooling and other aspects of households' behavior. We highlight the importance of early childhood investments, as their response largely determines the impact of credit…
Timeline-Based Space Operations Scheduling with External Constraints
NASA Technical Reports Server (NTRS)
Chien, Steve; Tran, Daniel; Rabideau, Gregg; Schaffer, Steve; Mandl, Daniel; Frye, Stuart
2010-01-01
We describe a timeline-based scheduling algorithm developed for mission operations of the EO-1 Earth observing satellite. We first describe the range of operational constraints, focusing on maneuver and thermal constraints that cannot be modeled in typical planner/schedulers. We then describe a greedy heuristic scheduling algorithm and compare its performance to the prior scheduling algorithm, documenting an over 50% increase in scenes scheduled, with an estimated value of millions of US dollars. We also compare to a relaxed optimal scheduler, showing that the greedy scheduler produces schedules with scene counts within 15% of an upper bound on optimal schedules.
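The skeleton of a greedy, priority-ordered timeline scheduler can be sketched as follows. This is our own toy model: requests are (priority, start, end) tuples and the only constraint checked is non-overlap on a single timeline, standing in for the maneuver and thermal constraints of the real system.

```python
def greedy_schedule(requests):
    """Greedy heuristic: consider requests in descending priority and
    keep each one only if its interval does not overlap anything
    already placed on the timeline."""
    timeline = []
    for prio, start, end in sorted(requests, reverse=True):
        if all(end <= s or start >= e for _, s, e in timeline):
            timeline.append((prio, start, end))
    return sorted(timeline, key=lambda r: r[1])   # order by start time
```

A relaxed optimal scheduler (e.g. an integer program that drops the hard-to-model constraints) then gives an upper bound against which such a greedy schedule's scene count can be compared.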
Evolutionary Algorithm for Calculating Available Transfer Capability
NASA Astrophysics Data System (ADS)
Šošić, Darko; Škokljev, Ivan
2013-09-01
The paper presents an evolutionary algorithm for calculating available transfer capability (ATC). ATC is a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses. In this paper, MATLAB software is used to determine the ATC between any two buses in a deregulated power system without violating system constraints such as thermal, voltage, and stability constraints. The algorithm is applied to the IEEE 5-bus system and the IEEE 30-bus system.
Constraints in Quantum Geometrodynamics
NASA Astrophysics Data System (ADS)
Gentle, Adrian P.; George, Nathan D.; Miller, Warner A.; Kheyfets, Arkady
We compare different treatments of the constraints in canonical quantum gravity. The standard approach on the superspace of 3-geometries treats the constraints as the sole carriers of the dynamic content of the theory, thus rendering the traditional dynamical equations obsolete. Quantization of the constraints in both the Dirac and ADM square root Hamiltonian approaches leads to the well known problems of time evolution. These problems of time are of both an interpretational and technical nature. In contrast, the geometrodynamic quantization procedure on the superspace of the true dynamical variables separates the issues of quantization from the enforcement of the constraints. The resulting theory takes into account states that are off-shell with respect to the constraints, and thus avoids the problems of time. We develop, for the first time, the geometrodynamic quantization formalism in a general setting and show that it retains all essential features previously illustrated in the context of homogeneous cosmologies.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Bech, A. O.; Kipling, M. D.; Heather, J. C.
1962-01-01
In Great Britain there have been no published reports of respiratory disease occurring amongst workers in the hard metal (tungsten carbide) industry. In this paper the clinical and radiological findings in six cases and the pathological findings in one are described. In two cases physiological studies indicated mild alveolar diffusion defects. Histological examination in a fatal case revealed diffuse pulmonary interstitial fibrosis with marked peribronchial and perivascular fibrosis and bronchial epithelial hyperplasia and metaplasia. Radiological surveys revealed the sporadic occurrence and low incidence of the disease. The alterations in respiratory mechanics which occurred in two workers following a day's exposure to dust are described. Airborne dust concentrations are given. The industrial process is outlined and the literature is reviewed. The toxicity of the metals is discussed, and our findings are compared with those reported from Europe and the United States. PMID:13970036
Gilman, J.J.
1996-12-31
In crystals (and/or glasses) with localized sp^3 or spd bonding orbitals, dislocations have very low mobilities, making the crystals very hard. Classical Peierls-Nabarro theory does not account for the low mobility. The breaking of spin-pair bonds, which creates internal free radicals, must be considered. Therefore, a theory based on quantum mechanics has been proposed (Science, 261, 1436 (1993)). It has been applied successfully to diamond, Si, Ge, SiC, and, with a modification, to TiC and WC. It has recently been extended to account for the temperature independence of the hardness of silicon at low temperatures together with strong softening at temperatures above the Debye temperature. It is quantitatively consistent with the behaviors of the Group 4 elements (C, Si, Ge, Sn) when their Debye temperatures are used as normalizing factors, and appears to be consistent with data for TiC if an Einstein temperature for carbon is used. Since the Debye temperature marks the approximate point at which phonons of atomic wavelengths become excited (as contrasted with collective acoustic waves), this confirms the idea that the process which limits dislocation mobility is localized to atomic dimensions (sharp kinks).
Approximate resolution of hard numbering problems
Bailleux, O.; Chabrier, J.J.
1996-12-31
We present a new method for estimating the number of solutions of constraint satisfaction problems. We use a stochastic forward checking algorithm to draw a sample of paths from a search tree. With this sample, we compute two values related to the number of solutions of a CSP instance: first, an unbiased estimate; second, a lower bound with an arbitrarily low error probability. We describe applications to the Boolean satisfiability problem and the Queens problem, and give some experimental results for these problems.
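The unbiased-estimate idea can be sketched for the n-queens problem, under our assumptions: each probe walks one random root-to-leaf path of the search tree, extending only to consistent placements (stochastic forward checking), and scores the product of branching factors along the path; the average over probes is a Knuth-style unbiased estimate of the solution count. (The paper's lower bound with controlled error probability is a separate construction not shown here.)

```python
import random

def estimate_solutions(n, probes=2000, seed=0):
    """Unbiased estimate of the number of n-queens solutions from
    random consistent paths: E[product of branching factors] equals
    the number of complete solutions."""
    rng = random.Random(seed)

    def consistent(cols, c):
        r = len(cols)
        return all(c != cc and abs(c - cc) != r - rr
                   for rr, cc in enumerate(cols))

    total = 0.0
    for _ in range(probes):
        cols, weight = [], 1
        for _ in range(n):
            children = [c for c in range(n) if consistent(cols, c)]
            if not children:              # dead end: this probe scores 0
                weight = 0
                break
            weight *= len(children)       # branching factor at this level
            cols.append(rng.choice(children))
        total += weight
    return total / probes
```

For n = 4 (which has exactly 2 solutions) the estimate concentrates near 2 as the number of probes grows.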
Total-variation regularization with bound constraints
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
A fast full constraints unmixing method
NASA Astrophysics Data System (ADS)
Ye, Zhang; Wei, Ran; Wang, Qing Yan
2012-10-01
Mixed pixels are inevitable due to the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model relating the spectrum of a mixed pixel to its individual components. Solving the LSMM, namely unmixing, is essentially a constrained linear optimization problem, usually implemented as iterations along a descent direction together with a stopping criterion that terminates the algorithm. Such a criterion must be set properly to balance the accuracy and speed of the solution. However, the criteria in existing algorithms are too strict, which may slow convergence. In this paper, by broadening the constraints in unmixing, a new stopping rule is proposed that accelerates convergence. Experimental results, in both runtime and iteration counts, show that our method speeds up convergence at the cost of only a slight decrease in result quality.
Level of constraint in revision knee arthroplasty.
Indelli, Pier Francesco; Giori, Nick; Maloney, William
2015-12-01
Revision total knee arthroplasty (TKA) in the setting of major bone deficiency and/or soft tissue laxity might require increasing levels of constraint to restore knee stability. However, increasing the level of constraint does not always correlate with mid-to-long-term satisfactory results. Recently, modular components such as tantalum cones and titanium sleeves have been introduced to the market with the goal of obtaining better fixation where bone deficiency is an issue; theoretically, satisfactory meta-diaphyseal fixation can reduce the mechanical stress at the level of the joint line, reducing the need for high levels of constraint. This article reviews the recent literature on the surgical management of the unstable TKA with the goal of proposing a modern surgical algorithm for adult reconstruction surgeons. PMID:26373770
Improving hard disk data security using a hardware encryptor
NASA Astrophysics Data System (ADS)
Walewski, Andrzej
2008-01-01
This paper describes the design path of a hard disk encryption device. It outlines the analysis of design requirements, trends in data security, presentation of the IDE transfer protocol and finally the way of choosing the method, algorithm and parameters of encryption.
New Hardness Results for Diophantine Approximation
NASA Astrophysics Data System (ADS)
Eisenbrand, Friedrich; Rothvoß, Thomas
We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ, and a denominator bound N ∈ ℕ+. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^{c/log log n} for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ+ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.
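For intuition, the optimization version can be solved by brute force on tiny instances; the exhaustive search below uses exact rational arithmetic and is, consistent with the hardness results above, exponential in the input size. The function names are ours.

```python
from fractions import Fraction

def dist_to_int(x):
    """Distance from a rational x to the nearest integer."""
    f = x % 1
    return min(f, 1 - f)

def best_denominator(alphas, eps, N):
    """Smallest denominator Q with 1 <= Q <= N such that every Q*alpha_i
    is within eps of an integer, or None if no such Q exists."""
    for Q in range(1, N + 1):
        if all(dist_to_int(Q * a) <= eps for a in alphas):
            return Q
    return None
```

For example, with alphas = [1/3, 1/4] and eps = 1/10, the smallest working denominator is Q = 12, which makes both products exact integers.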
Constraint Embedding Technique for Multibody System Dynamics
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited only to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra approach to formulating the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus in essence, the new technique allows conversion of a system with
HIGHER ORDER HARD EDGE END FIELD EFFECTS.
BERG,J.S.
2004-09-14
In most cases, nonlinearities from magnets must be properly included in tracking and analysis to properly compute quantities of interest, in particular chromatic properties and dynamic aperture. One source of nonlinearities in magnets that is often important and cannot be avoided is the nonlinearity arising at the end of a magnet due to the longitudinal variation of the field at the end of the magnet. Part of this effect is independent of the longitudinal extent of the end. It is lowest order in the body field of the magnet, and is the result of taking a limit as the length over which the field at the end varies approaches zero. This is referred to as a "hard edge" end field. This effect has been computed previously to lowest order in the transverse variables. This paper describes a method to compute this effect to arbitrary order in the transverse variables, under certain constraints.
Optimization of Blade Stiffened Composite Panel under Buckling and Strength Constraints
NASA Astrophysics Data System (ADS)
Todoroki, Akira; Sekishiro, Masato
This paper deals with multiple constraints for dimension and stacking-sequence optimization of a blade-stiffened composite panel. A previous study targeted a multi-objective genetic algorithm using a Kriging response surface with a buckling load constraint. The present study focuses on dimension and stacking-sequence optimization with both a buckling load constraint and a fracture constraint. Multiple constraints complicate the process of selecting sampling analyses to improve the Kriging response surface. The proposed method resolves this problem using a most-critical-constraint approach. The new approach is applied to a blade-stiffened composite panel and is shown to be efficient.
Kirk, R.L.
1987-01-01
Thermal evolution of Ganymede from a hot start is modeled. On cooling, ice I forms above the liquid H2O and dense ices form at higher entropy below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H2 (Jupiter, Saturn) and H2O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e., reconstruction of a surface from its image, is formulated in terms of finite elements, and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.
Optimizing selection with several constraints in poultry breeding.
Chapuis, H; Pincent, C; Colleau, J J
2016-02-01
Poultry breeding schemes permanently face the need to control the evolution of coancestry and some critical traits while selecting for a main breeding objective. The main aims of this article are first to present an efficient selection algorithm adapted to this situation and then to measure how the severity of constraints impacted the degree of loss for the main trait, compared to BLUP selection on the main trait without any constraint. Broiler dam and sire line schemes were mimicked by simulation over 10 generations, and selection was carried out on the main trait under constraints for coancestry and for another trait antagonistic to the main trait. The selection algorithm was a special simulated annealing (adaptive simulated annealing, ASA). It was found to be rapid and able to meet constraints very accurately. A constraint on the second trait was found to induce an impact similar to or even greater than the impact of the constraint on coancestry. The family structure of selected poultry populations made it easy to control the evolution of coancestry at a reasonable cost but was not as useful for reducing the cost of controlling the evolution of the antagonistic traits. Multiple constraints impacted almost additively on the genetic gain for the main trait. Adding constraints for several traits would therefore be justified in real-life breeding schemes, possibly after evaluating their impact through simulated annealing. PMID:26220593
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. These contrast with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
NASA Astrophysics Data System (ADS)
Riezler, Stefan
2000-08-01
In this thesis, we present two approaches to a rigorous mathematical and algorithmic foundation of quantitative and statistical inference in constraint-based natural language processing. The first approach, called quantitative constraint logic programming, is conceptualized in a clear logical framework, and presents a sound and complete system of quantitative inference for definite clauses annotated with subjective weights. This approach combines a rigorous formal semantics for quantitative inference based on subjective weights with efficient weight-based pruning for constraint-based systems. The second approach, called probabilistic constraint logic programming, introduces a log-linear probability distribution on the proof trees of a constraint logic program and an algorithm for statistical inference of the parameters and properties of such probability models from incomplete, i.e., unparsed data. The possibility of defining arbitrary properties of proof trees as properties of the log-linear probability model and efficiently estimating appropriate parameter values for them permits the probabilistic modeling of arbitrary context-dependencies in constraint logic programs. The usefulness of these ideas is evaluated empirically in a small-scale experiment on finding the correct parses of a constraint-based grammar. In addition, we address the problem of computational intractability of the calculation of expectations in the inference task and present various techniques to approximately solve this task. Moreover, we present an approximate heuristic technique for searching for the most probable analysis in probabilistic constraint logic programs.
Adiabatic quantum programming: minor embedding with hard faults
NASA Astrophysics Data System (ADS)
Klymko, Christine; Sullivan, Blair D.; Humble, Travis S.
2013-11-01
Adiabatic quantum programming defines the time-dependent mapping of a quantum algorithm into an underlying hardware or logical fabric. An essential step is embedding problem-specific information into the quantum logical fabric. We present algorithms for embedding arbitrary instances of the adiabatic quantum optimization algorithm into a square lattice of specialized unit cells. These methods extend with fabric growth while scaling linearly in time and quadratically in footprint. We also provide methods for handling hard faults in the logical fabric without invoking approximations to the original problem and illustrate their versatility through numerical studies of embeddability versus fault rates in square lattices of complete bipartite unit cells. The studies show that these algorithms are more resilient to faulty fabrics than naive embedding approaches, a feature which should prove useful in benchmarking the adiabatic quantum optimization algorithm on existing faulty hardware.
Measuring the Hardness of Minerals
ERIC Educational Resources Information Center
Bushby, Jessica
2005-01-01
The author discusses the Mohs hardness scale, a comparative scale for minerals, whereby the softest mineral (talc) is placed at 1 and the hardest mineral (diamond) is placed at 10, with all other minerals ordered in between according to their hardness. The development history of the scale is outlined, as well as a description of how the scale is used…
Exploiting sequential phonetic constraints in recognizing spoken words
NASA Astrophysics Data System (ADS)
Huttenlocher, D. P.
1985-10-01
Machine recognition of spoken language requires developing more robust recognition algorithms. A recent study by Shipman and Zue suggests using partial descriptions of speech sounds to eliminate all but a handful of word candidates from a large lexicon. The current paper extends their work by investigating the power of partial phonetic descriptions for developing recognition algorithms. First, we demonstrate that sequences of manner-of-articulation classes are more reliable and provide more constraint than certain other classes. Alone, these results are of limited utility due to the high degree of variability in natural speech. This variability is not uniform, however, as most modifications and deletions occur in unstressed syllables. Comparing the relative constraint provided by sounds in stressed versus unstressed syllables, we discover that the stressed syllables provide substantially more constraint. This indicates that recognition algorithms can be made more robust by exploiting the manner-of-articulation information in stressed syllables.
Cyclic strength of hard metals
Sereda, N.N.; Gerikhanov, A.K.; Koval'chenko, M.S.; Pedanov, L.G.; Tsyban', V.A.
1986-02-01
The authors study the strength of hard-metal specimens and structural elements under conditions of cyclic loading, since many elements of processing plants, equipment, and machines are made of hard metals. Fatigue tests were conducted on KTS-1N, KTSL-1, and KTNKh-70 materials, which are titanium carbide hard metals cemented with nickel-molybdenum, nickel-cobalt-chromium, and nickel-chromium alloys, respectively. As a basis of comparison, the standard VK-15 (WC + 15% Co) alloy was used. Some key physicomechanical characteristics of the materials investigated are presented. On time bases not exceeding 10^6 cycles, titanium carbide hard metals are comparable in fatigue resistance to the standard tungsten-containing hard metals.
NASA Astrophysics Data System (ADS)
Labrecque, F.; Lecesne, N.; Bricault, P.
2008-10-01
The ISAC RIB facility at TRIUMF uses up to 100 μA from the 500 MeV H- cyclotron to produce radioactive ion beams (RIB) by the isotope separation on-line (ISOL) method. At the moment, we mainly use a hot-surface ion source and a laser ion source to produce our RIB. A FEBIAD ion source has recently been tested at ISAC, but these ion sources are not suitable for gaseous elements like N, O, F, Ne, …, so a new type of ion source is necessary. By combining a high-frequency electromagnetic wave with magnetic confinement, an ECRIS (electron cyclotron resonance ion source) [R. Geller, Electron Cyclotron Resonance Ion Source and ECR Plasmas, Institute of Physics Publishing, Bristol, 1996] can produce the high-energy electrons essential for efficient ionization of those elements. To this end, a prototype ECRIS called MISTIC (monocharged ion source for TRIUMF and ISAC complex) has been built at TRIUMF using a design similar to the one developed at GANIL (Grand Accélérateur National d'Ions Lourds, www.ganil.fr). The high radiation level caused by the proximity to the target prevented us from using a conventional ECRIS; to achieve a radiation-hard ion source, we used coils instead of permanent magnets to produce the magnetic confinement. Each coil is supplied by a 1000 A, 15 V power supply. The RF generator covers a frequency range from 2 to 8 GHz, giving us all the versatility we need to characterize the ionization of the following elements: He, Ne, Ar, Kr, Xe, C, O, N, F. Isotopes of these elements are involved in stellar thermonuclear cycles and are consequently very important for research in nuclear astrophysics. Measurements of efficiency, emittance, and ionization time will be performed for each of these elements. Preliminary tests show that MISTIC is very stable over a large range of frequency, magnetic field, and pressure.
Constraints complicate centrifugal compressor depressurization
Key, B.; Colbert, F.L.
1993-05-10
Blowdown of a centrifugal compressor is complicated by process constraints that might require slowing the depressurization rate and by mechanical constraints for which a faster rate might be preferred. The paper describes design constraints such as gas leaks, thrust-bearing overload, system constraints, flare extinguishing, heat levels, and pressure drop.
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The representation min t s.t. f_i(x) - t ≤ 0 for all i is examined. An active set strategy is designed that partitions the functions into three sets: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a successful computer program. Numerical results are provided.
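The minimax reformulation described in this abstract can be illustrated concretely. The sketch below is a hypothetical brute-force illustration (not Vardi's trust-region algorithm): it shows why introducing the extra variable t with constraints f_i(x) - t ≤ 0 reproduces minimization of max_i f_i(x), since at each x the smallest feasible t is exactly max_i f_i(x).

```python
# Minimax reformulation: minimizing max_i f_i(x) is equivalent to
# minimizing t subject to f_i(x) - t <= 0 for all i.
# Illustrative 1-D grid search only; the paper's method is a
# trust-region active-set algorithm, not reproduced here.

def minimax_via_slack(funcs, xs):
    """Return (x, t) minimizing t s.t. f_i(x) <= t, over grid xs."""
    best = None
    for x in xs:
        t = max(f(x) for f in funcs)  # smallest feasible t at this x
        if best is None or t < best[1]:
            best = (x, t)
    return best

funcs = [lambda x: (x - 1) ** 2, lambda x: (x + 1) ** 2]
xs = [i / 100 for i in range(-300, 301)]
x_star, t_star = minimax_via_slack(funcs, xs)
# symmetric pair of parabolas: optimum at x = 0 with t = 1
```

For this symmetric pair the constraint on t is active for both functions at the optimum, which is precisely the situation the active-set bookkeeping in the paper is designed to handle.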
Constraint algebra in bigravity
Soloviev, V. O.
2015-07-15
The number of degrees of freedom in bigravity theory is found for a potential of general form and also for the potential proposed by de Rham, Gabadadze, and Tolley (dRGT). This aim is pursued via constructing a Hamiltonian formalism and studying the Poisson algebra of constraints. A general potential leads to a theory featuring four first-class constraints generated by general covariance. The vanishing of the respective Hessian is a crucial property of the dRGT potential, and this leads to the appearance of two additional second-class constraints and, hence, to the exclusion of a superfluous degree of freedom, that is, the Boulware-Deser ghost. The use of a method that permits avoiding an explicit expression for the dRGT potential is a distinctive feature of the present study.
Cross-Modal Subspace Learning via Pairwise Constraints.
He, Ran; Zhang, Man; Wang, Liang; Ji, Ye; Yin, Qiyue
2015-12-01
In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint and aims to find the common structure hidden in different modalities. We first propose a compound regularization framework to address the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a multi-modal subspace clustering method to learn a common structure for different modalities. For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ℓ21 regularization. Extensive experiments demonstrate the benefits of joint text and image modeling with semantically induced pairwise constraints, and they show that the proposed cross-modal methods can further reduce the semantic gap between different modalities and improve the clustering/matching accuracy. PMID:26259218
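For reference on the compound ℓ21 regularization mentioned above: the ℓ2,1 norm of a matrix is commonly defined as the sum of the Euclidean norms of its rows, and penalizing it drives entire rows to zero (row sparsity), which is how outlying pairwise constraints can be suppressed. A minimal sketch of the norm itself, not the authors' full matching objective:

```python
import math

def l21_norm(W):
    """l2,1 norm: sum over rows of each row's Euclidean (l2) norm.
    Used as a regularizer, it zeroes out whole rows of W."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in W)

W = [[3.0, 4.0], [0.0, 0.0], [0.0, 5.0]]
# row norms are 5, 0, 5, so the l2,1 norm is 10
value = l21_norm(W)
```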
ERIC Educational Resources Information Center
Gray, Wayne D.; Fu, Wai-Tat
2004-01-01
Constraints and dependencies among the elements of embodied cognition form patterns or microstrategies of interactive behavior. Hard constraints determine which microstrategies are possible. Soft constraints determine which of the possible microstrategies are most likely to be selected. When selection is non-deliberate or automatic, the least…
Generalized arc consistency for global cardinality constraint
Regin, J.C.
1996-12-31
A global cardinality constraint (gcc) is specified in terms of a set of variables X = {x_1, ..., x_p} which take their values in a subset of V = {v_1, ..., v_d}. It constrains the number of times a value v_i ∈ V is assigned to a variable in X to lie in an interval [l_i, c_i]. Cardinality constraints have proved very useful in many real-life problems, such as scheduling, timetabling, or resource allocation. A gcc is more general than a constraint of difference, which requires each interval to be [0, 1]. In this paper, we present an efficient way of implementing generalized arc consistency for a gcc. The algorithm we propose is based on a new theorem of flow theory. Its space complexity is O(|X| × |V|) and its time complexity is O(|X|² × |V|). We also show how this algorithm can efficiently be combined with other filtering techniques.
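Régin's flow-based arc-consistency propagator is beyond a short sketch, but the constraint itself is easy to state in code. The checker below is an illustration of what a gcc requires of a complete assignment, not the filtering algorithm; note that the constraint of difference (alldifferent) is the special case where every interval is [0, 1].

```python
from collections import Counter

def satisfies_gcc(assignment, bounds):
    """Check a complete assignment against a global cardinality
    constraint: each value v must occur between bounds[v][0] and
    bounds[v][1] times (inclusive) in the assignment."""
    counts = Counter(assignment)
    return all(lo <= counts.get(v, 0) <= hi
               for v, (lo, hi) in bounds.items())

# Hypothetical example: three tasks assigned to machines,
# each machine required to run between 1 and 2 tasks.
bounds = {"m1": (1, 2), "m2": (1, 2)}
ok = satisfies_gcc(["m1", "m1", "m2"], bounds)    # satisfied
bad = satisfies_gcc(["m1", "m1", "m1"], bounds)   # m2 unused, m1 over
```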
General heuristics algorithms for solving capacitated arc routing problem
NASA Astrophysics Data System (ADS)
Fadzli, Mohammad; Najwa, Nurul; Masran, Hafiz
2015-05-01
In this paper, we try to determine a near-optimum solution for the capacitated arc routing problem (CARP). In general, the NP-hard CARP is a graph-theoretic problem that arises specifically from street services such as residential waste collection and road maintenance. The purpose of the CARP model and its solution techniques is to find the optimum (or near-optimum) routing cost for a fleet of vehicles involved in an operation; in other words, finding minimum-cost routes is necessary in order to reduce the overall vehicle-related operating cost. In this article, we provide a combination of heuristic algorithms to solve a real case of CARP in waste collection as well as benchmark instances. These heuristics work as a central engine for finding initial or near-optimum solutions in the search space without violating the preset constraints. The results clearly show that these heuristic algorithms provide good initial solutions in both real-life and benchmark instances.
Beta Backscatter Measures the Hardness of Rubber
NASA Technical Reports Server (NTRS)
Morrissey, E. T.; Roje, F. N.
1986-01-01
Nondestructive testing method determines hardness, on Shore scale, of room-temperature-vulcanizing silicone rubber. Measures backscattered beta particles; backscattered radiation count directly proportional to Shore hardness. Test set calibrated with specimen, Shore hardness known from mechanical durometer test. Specimen of unknown hardness tested, and radiation count recorded. Count compared with known sample to find Shore hardness of unknown.
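Since the backscattered radiation count is stated to be directly proportional to Shore hardness, a single calibration specimen of known hardness fixes the proportionality constant. A minimal sketch with made-up numbers (the counts and hardness values below are illustrative, not from the test set described):

```python
def shore_hardness(count_unknown, count_ref, hardness_ref):
    """Under direct proportionality, H = H_ref * (count / count_ref):
    one specimen of known Shore hardness calibrates the scale."""
    return hardness_ref * count_unknown / count_ref

# Hypothetical calibration: a known Shore 40 specimen gave 8000 counts.
h = shore_hardness(9000, 8000, 40.0)   # unknown specimen reads 9000
```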
Fault-Tolerant, Radiation-Hard DSP
NASA Technical Reports Server (NTRS)
Czajkowski, David
2011-01-01
Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor back to normal operation through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and in some cases incur significant speed reduction. The benefits of COTS high
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
NASA Astrophysics Data System (ADS)
Feng, Xiao; Tang, Rui-chun; Zhai, Yi-li; Feng, Yu-qing; Hong, Bo-hai
2013-07-01
Multimedia adaptation decision-taking techniques based on context are considered. A Constraint-Satisfaction-Problem-Based Content Adaptation Algorithm (CBCAA) is proposed. First, the algorithm obtains and classifies context information using MPEG-21; it then builds a constraint model according to the different types of context information, and a constraint satisfaction method is used to acquire the Media Description Decision Set (MDDS); finally, a bit-stream adaptation engine performs the multimedia transcoding. Simulation results show that the presented algorithm offers an efficient solution for personalized multimedia adaptation in heterogeneous environments.
Local parallel models for integration of stereo matching constraints and intrinsic image combination
NASA Technical Reports Server (NTRS)
Stewart, Charles V.
1989-01-01
Parallel relaxation computations such as those of connectionist networks offer a useful model for constraint integration and intrinsic image combination in developing a general-purpose stereo matching algorithm. This paper describes such a stereo algorithm that incorporates hierarchical, surface-structure, and edge-appearance constraints that are redefined and integrated at the level of individual candidate matches. The algorithm produces a high percentage of correct decisions on a wide variety of stereo pairs. Its few errors arise when the correlation measures defined by the constraints are either weakened or ambiguous, as in the case of periodic patterns in the images. Two additional mechanisms are discussed for overcoming the remaining errors.
Scheduling Jobs with Genetic Algorithms
NASA Astrophysics Data System (ADS)
Ferrolho, António; Crisóstomo, Manuel
Most scheduling problems are NP-hard: the time required to solve the problem optimally increases exponentially with the size of the problem. Scheduling problems have important applications, and a number of heuristic algorithms have been proposed to determine relatively good solutions in polynomial time. Recently, genetic algorithms (GA) have been used successfully to solve scheduling problems, as shown by a growing number of papers. GA are known as one of the most efficient algorithms for solving scheduling problems. But when a GA is applied to scheduling problems, various crossover and mutation operators are applicable. This paper presents and examines a new concept of genetic operators for scheduling problems. A software tool called the hybrid and flexible genetic algorithm (HybFlexGA) was developed to examine the performance of various crossover and mutation operators by computing simulations of job scheduling problems.
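Two classical genetic operators for permutation-encoded schedules, order crossover (OX) and swap mutation, can be sketched briefly. These are standard textbook operators shown for orientation, not the new HybFlexGA operators introduced in the paper; the key property both must preserve is that the child remains a valid permutation of jobs.

```python
import random

def order_crossover(p1, p2, i, j):
    """OX: copy the slice p1[i:j] into the child, then fill the
    remaining slots with p2's jobs in order, skipping jobs already
    copied. The child is always a valid permutation."""
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def swap_mutation(perm, rng):
    """Exchange two randomly chosen positions; validity is preserved."""
    a, b = rng.sample(range(len(perm)), 2)
    perm = list(perm)
    perm[a], perm[b] = perm[b], perm[a]
    return perm

child = order_crossover([1, 2, 3, 4, 5], [5, 4, 3, 2, 1], 1, 3)
# jobs 2, 3 kept in place from parent 1; rest filled from parent 2
mutant = swap_mutation([1, 2, 3, 4, 5], random.Random(0))
```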
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, based on the theory of AFSA. Experimental results show that HAFSA is a rapid and efficient algorithm for winner determination. Compared with an ant colony optimization algorithm, it shows good performance and has broad, promising applications.
Prediction of binary hard-sphere crystal structures.
Filion, Laura; Dijkstra, Marjolein
2009-04-01
We present a method based on a combination of a genetic algorithm and Monte Carlo simulations to predict close-packed crystal structures in hard-core systems. We employ this method to predict the binary crystal structures in a mixture of large and small hard spheres with various stoichiometries and diameter ratios between 0.4 and 0.84. In addition to known binary hard-sphere crystal structures similar to NaCl and AlB2, we predict additional crystal structures with the symmetry of CrB, γCuTi, αIrV, HgBr2, AuTe2, Ag2Se, and various structures for which an atomic analog was not found. In order to determine the crystal structures at infinite pressures, we calculate the maximum packing density as a function of size ratio for the crystal structures predicted by our genetic algorithm using a simulated annealing approach. PMID:19518387
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations
NASA Astrophysics Data System (ADS)
Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi
We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
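The Steiner-tree weighting used by the WST and CWST algorithms is not reproduced here, but the Dijkstra-style baseline they are compared against can be sketched: build the multicast tree as the union of shortest paths from the source to each destination. The graph encoding and node names below are illustrative assumptions.

```python
import heapq

def dijkstra_paths(graph, src):
    """Shortest-path distances and predecessors from src.
    graph: {node: {neighbor: link_weight}}."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def multicast_tree(graph, src, dests):
    """Union of shortest paths src -> each destination: the
    Dijkstra-based construction, without the Steiner weighting."""
    _, prev = dijkstra_paths(graph, src)
    edges = set()
    for d in dests:
        while d != src:
            edges.add((prev[d], d))
            d = prev[d]
    return edges

g = {"s": {"a": 1, "b": 4}, "a": {"s": 1, "b": 1}, "b": {"a": 1, "s": 4}}
tree = multicast_tree(g, "s", ["a", "b"])
# the cheaper route to b runs through a, so its edge is shared
```

The sharing of the ("s", "a") edge by both destinations is exactly the link-reuse effect that the Weighted Steiner Tree algorithms amplify by re-weighting already-used links.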
Hard-phase engineering in hard/soft nanocomposite magnets
NASA Astrophysics Data System (ADS)
Poudyal, Narayan; Rong, Chuanbing; Vuong Nguyen, Van; Liu, J. Ping
2014-03-01
Bulk SmCo/Fe(Co) based hard/soft nanocomposite magnets with different hard phases (1:5, 2:17, 2:7 and 1:3 types) were fabricated by high-energy ball-milling followed by a warm compaction process. Microstructural studies revealed a homogeneous distribution of bcc-Fe(Co) phase in the matrix of hard magnetic Sm-Co phase with grain size ⩽20 nm after severe plastic deformation and compaction. The small grain size leads to effective inter-phase exchange coupling as shown by the single-phase-like demagnetization behavior with enhanced remanence and energy product. Among the different hard phases investigated, it was found that the Sm2Co7-based nanocomposites can incorporate a higher soft phase content, and thus a larger reduction in rare-earth content compared with the 2:17, 1:5 and 1:3 phase-based nanocomposite with similar properties. (BH)max up to 17.6 MGOe was obtained for isotropic Sm2Co7/FeCo nanocomposite magnets with 40 wt% of the soft phase which is about 300% higher than the single-phase counterpart prepared under the same conditions. The results show that hard-phase engineering in nanocomposite magnets is an alternative approach to fabrication of high-strength nanocomposite magnets with reduced rare-earth content.
A Framework for Dynamic Constraint Reasoning Using Procedural Constraints
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy D.
1999-01-01
Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
Identifying Regions Based on Flexible User Defined Constraints.
Folch, David C; Spielman, Seth E
2014-01-01
The identification of regions is both a computational and conceptual challenge. Even with growing computational power, regionalization algorithms must rely on heuristic approaches in order to find solutions. Therefore, the constraints and evaluation criteria that define a region must be translated into an algorithm that can efficiently and effectively navigate the solution space to find the best solution. One limitation of many existing regionalization algorithms is a requirement that the number of regions be selected a priori. The max-p algorithm, introduced in Duque et al. (2012), does not have this requirement, and thus the number of regions is an output of, not an input to, the algorithm. In this paper we extend the max-p algorithm to allow for greater flexibility in the constraints available to define a feasible region, placing the focus squarely on the multidimensional characteristics of regions. We also modify technical aspects of the algorithm to provide greater flexibility in its ability to search the solution space. Using synthetic spatial and attribute data, we are able to show the algorithm's broad ability to identify regions in maps of varying complexity. We also conduct a large-scale computational experiment to identify parameter settings that result in the greatest solution accuracy under various scenarios. The rules of thumb identified from the experiment produce maps that correctly assign areas to their "true" region with 94% average accuracy, with nearly 50% of the simulations reaching 100% accuracy. PMID:25018663
Hiding quiet solutions in random constraint satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2008-01-01
We study constraint satisfaction problems on the so-called planted random ensemble. We show that for a certain class of problems, e.g., graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions and the easy-hard-easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid-glass-solid phenomenology.
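The planted random ensemble for graph coloring can be generated in a few lines: fix a random coloring first, then accept only edges whose endpoints receive different colors, so the planted ("quiet") solution satisfies every constraint by construction. A minimal sketch, not the authors' code:

```python
import random

def planted_coloring_instance(n, n_edges, q, rng):
    """Planted ensemble for q-coloring: draw a hidden coloring of
    n vertices, then sample edges uniformly but keep only those
    whose endpoints differ in color, until n_edges are collected."""
    colors = [rng.randrange(q) for _ in range(n)]
    edges = set()
    while len(edges) < n_edges:
        u, v = rng.sample(range(n), 2)
        if colors[u] != colors[v]:
            edges.add((min(u, v), max(u, v)))
    return colors, edges

rng = random.Random(42)
colors, edges = planted_coloring_instance(20, 30, 3, rng)
# every sampled edge is properly colored under the planted assignment
```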
Hiding quiet solutions in random constraint satisfaction problems.
Krzakala, Florent; Zdeborová, Lenka
2009-06-12
We study constraint satisfaction problems on the so-called planted random ensemble. We show that for a certain class of problems, e.g., graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions and the easy-hard-easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid-glass-solid phenomenology. PMID:19658978
Sobel, E.; Lange, K.; O'Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Structure Constraints in a Constraint-Based Planner
NASA Technical Reports Server (NTRS)
Pang, Wan-Lin; Golden, Keith
2004-01-01
In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.
Dual-Byte-Marker Algorithm for Detecting JFIF Header
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat
The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken to analyze the ever-increasing data on hard drives or in physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header-detection algorithm, called dual-byte-marker, is proposed. Based on experiments done on images from hard disks, physical memory, and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with better execution time for header detection as compared to the single-byte-marker.
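For context, a JFIF file begins with the SOI marker FF D8 immediately followed by the APP0 marker FF E0, whose segment carries the identifier "JFIF" terminated by a null byte. The scan below is a hypothetical illustration of marker-based header carving; it is not the authors' dual-byte-marker algorithm, whose byte-pair reading strategy is what yields the reported speedup.

```python
def find_jfif_headers(data):
    """Scan a byte buffer for JFIF headers: SOI (FF D8) followed by
    APP0 (FF E0) whose segment carries the 'JFIF\\x00' identifier.
    Simple byte-at-a-time carving sketch."""
    hits = []
    i = 0
    while i + 10 < len(data):
        if data[i:i + 4] == b"\xff\xd8\xff\xe0" and \
           data[i + 6:i + 11] == b"JFIF\x00":
            hits.append(i)   # i + 4 and i + 5 hold the APP0 length
        i += 1
    return hits

buf = b"\x00" * 8 + b"\xff\xd8\xff\xe0\x00\x10JFIF\x00" + b"\x00" * 8
offsets = find_jfif_headers(buf)   # one header, at offset 8
```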
Practical engineering of hard spin-glass instances
NASA Astrophysics Data System (ADS)
Marshall, Jeffrey; Martin-Mayor, Victor; Hen, Itay
2016-07-01
Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since become a coveted, albeit elusive, goal. Recent studies have shown that the so far inconclusive results regarding a quantum enhancement may have been partly due to the benchmark problems used being unsuitable. In particular, these problems had an inherently too simple structure, allowing both traditional resources and quantum annealers to solve them with no special effort. The need has therefore arisen to generate harder benchmarks, which would hopefully possess the discriminative power to separate the classical scaling of performance with size from the quantum one. We introduce here a practical technique for the engineering of extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm that solves it. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
Teaching Database Design with Constraint-Based Tutors
ERIC Educational Resources Information Center
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
The TETRAD Project: Constraint Based Aids to Causal Model Specification.
ERIC Educational Resources Information Center
Scheines, Richard; Spirtes, Peter; Glymour, Clark; Meek, Christopher; Richardson, Thomas
1998-01-01
The TETRAD for constraint-based aids to causal model specification project and related work in computer science aims to apply standards of rigor and precision to the problem of using data and background knowledge to make inferences about a model's specifications. Several algorithms that are implemented in the TETRAD II program are presented. (SLD)
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
A Novel Constraint for Thermodynamically Designing DNA Sequences
Zhang, Qiang; Wang, Bin; Wei, Xiaopeng; Zhou, Changjun
2013-01-01
Biotechnological and biomolecular advances have introduced novel uses for DNA such as DNA computing, storage, and encryption. For these applications, DNA sequence design requires maximal desired (and minimal undesired) hybridizations, which are the product of a single new DNA strand from 2 single DNA strands. Here, we propose a novel constraint to design DNA sequences based on thermodynamic properties. Existing constraints for DNA design are based on the Hamming distance, a constraint that does not address the thermodynamic properties of the DNA sequence. Using a unique, improved genetic algorithm, we designed DNA sequence sets which satisfy different distance constraints and employ a free energy gap based on a minimum free energy (MFE) to gauge DNA sequences based on set thermodynamic properties. When compared to the best constraints of the Hamming distance, our method yielded better thermodynamic qualities. We then used our improved genetic algorithm to obtain lower-bound DNA sequence sets. Here, we discuss the effects of novel constraint parameters on the free energy gap. PMID:24015217
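The Hamming-distance constraint that the authors compare against can be stated compactly. The helper below is a hypothetical sketch of that baseline constraint check, not the paper's code; the paper's contribution is to replace it with a free-energy-gap criterion.

```python
def hamming(a: str, b: str) -> int:
    """Hamming distance between two equal-length DNA strands."""
    return sum(x != y for x, y in zip(a, b))

def satisfies_distance_constraint(seqs, d_min: int) -> bool:
    """Classic Hamming-distance design constraint: every pair of sequences
    in the set must differ in at least d_min positions. This says nothing
    about thermodynamics, which is the gap the paper addresses."""
    return all(hamming(a, b) >= d_min
               for i, a in enumerate(seqs) for b in seqs[i + 1:])
```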
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
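As a rough illustration of optimal control allocation under position limits, the following projected-gradient sketch minimizes the moment-tracking error ||B u - m_cmd||^2 subject to per-effector bounds. The function and parameters are hypothetical; the paper's framework additionally feeds back measured structural loads to tighten the constraints, which is omitted here.

```python
def allocate(B, m_cmd, lo, hi, iters=500, lr=0.05):
    """Projected-gradient sketch of control allocation.

    B: effectiveness matrix as a list of rows (moments x effectors).
    m_cmd: commanded moments. lo/hi: per-effector position limits.
    Each step moves against the gradient of the squared residual,
    then clips each command back into its allowed range.
    """
    n = len(B[0])
    u = [0.0] * n
    for _ in range(iters):
        # residual r = B u - m_cmd
        r = [sum(B[k][j] * u[j] for j in range(n)) - m_cmd[k]
             for k in range(len(B))]
        # gradient of ||r||^2 w.r.t. u_j is 2 * (B^T r)_j
        for j in range(n):
            g = 2.0 * sum(B[k][j] * r[k] for k in range(len(B)))
            u[j] = min(hi[j], max(lo[j], u[j] - lr * g))
    return u
```

When the commanded moment is unreachable, the commands saturate at their bounds, which is exactly the situation where load limits (rather than position limits alone) matter.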
Unraveling Quantum Annealers using Classical Hardness.
Martin-Mayor, Victor; Hen, Itay
2015-01-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize 'temperature chaos' as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257
ERIC Educational Resources Information Center
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
A Constraint-Based Planner for Data Production
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Golden, Keith
2005-01-01
This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. The algorithm performs backtracking at two nested levels: the outer level backtracks over the structure of the planning graph to select planner subgoals and actions that achieve them, and the inner level backtracks inside the subproblem associated with a selected action to find action-parameter values. We show that this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.
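A minimal chronological-backtracking solver conveys the basic mechanism that the paper nests at two levels (subgoal and action selection outside, action-parameter search inside). This single-level sketch is illustrative only, not the paper's algorithm.

```python
def backtrack(domains, constraints, assignment=None):
    """Minimal chronological backtracking for a CSP.

    domains: {var: list of candidate values}
    constraints: list of (var_tuple, predicate) pairs; a constraint is
    checked only once all of its variables are assigned.
    Returns a complete consistent assignment, or None if none exists.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        assignment[var] = val
        consistent = all(pred(*[assignment[v] for v in vs])
                         for vs, pred in constraints
                         if all(v in assignment for v in vs))
        if consistent:
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # undo and try the next value
    return None
```

In the paper's setting, each "value" chosen at the outer level opens an inner CSP of its own, and failure there triggers backtracking at the outer level.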
Adaptive laser link reconfiguration using constraint propagation
NASA Technical Reports Server (NTRS)
Crone, M. S.; Julich, P. M.; Cook, L. M.
1993-01-01
This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space-based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BPs and to ground entry points. Long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BPs based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-sight data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris-developed COnstraint Propagation Expert System (COPES) Shell, developed initially on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) Algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of constraint propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES. This is described using a Data Flow Diagram, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. The generation of simulation data is described along with its interface via COPES to the Harris-developed View Net graphical tool for visual analysis of communications
Constraints influencing sports wheelchair propulsion performance and injury risk
2013-01-01
The Paralympic Games are the pinnacle of sport for many athletes with a disability. A potential issue for many wheelchair athletes is how to train hard to maximise performance while also reducing the risk of injuries, particularly to the shoulder due to the accumulation of stress placed on this joint during activities of daily living, training and competition. The overall purpose of this narrative review was to use the constraints-led approach of dynamical systems theory to examine how various constraints acting upon the wheelchair-user interface may alter hand rim wheelchair performance during sporting activities, and to a lesser extent, their injury risk. As we found no studies involving Paralympic athletes that have directly utilised the dynamical systems approach to interpret their data, we have used this approach to select some potential constraints and discussed how they may alter wheelchair performance and/or injury risk. Organism constraints examined included player classifications, wheelchair setup, training and intrinsic injury risk factors. Task constraints examined the influence of velocity and types of locomotion (court sports vs racing) in wheelchair propulsion, while environmental constraints focused on forces that tend to oppose motion such as friction and surface inclination. Finally, the ecological validity of the research studies assessing wheelchair propulsion was critiqued prior to recommendations for practice and future research being given. PMID:23557065
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP-solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits and overcomes the weaknesses of both the decomposition and search approaches.
Hard Work and Hard Data: Getting Our Message Out.
ERIC Educational Resources Information Center
Glau, Gregory R.
Unless questions about student performance and student retention can be answered and unless educators are proactive in finding and publicizing such information, basic writing programs cannot determine if what they are doing is working. Hard data, especially from underrepresented groups, is needed to support these programs. At Arizona State…
Future hard disk drive systems
NASA Astrophysics Data System (ADS)
Wood, Roger
2009-03-01
This paper briefly reviews the evolution of today's hard disk drive with the additional intention of orienting the reader to the overall mechanical and electrical architecture. The modern hard disk drive is a miracle of storage capacity and function together with remarkable economy of design. This paper presents a personal view of future customer requirements and the anticipated design evolution of the components. There are critical decisions and great challenges ahead for the key technologies of heads, media, head-disk interface, mechanics, and electronics.
NASA Astrophysics Data System (ADS)
Overgaard Rasmussen, Christine
2016-07-01
We present an overview of the options for diffraction implemented in the general-purpose event generator Pythia 8 [1]. We review the existing model for soft diffraction and present a new model for hard diffraction. Both models use the Pomeron approach pioneered by Ingelman and Schlein, factorising the diffractive cross section into a Pomeron flux and a Pomeron PDF, with several choices for both implemented in Pythia 8. The model of hard diffraction is implemented as a part of the multiparton interactions (MPI) framework, thus introducing a dynamical gap survival probability that explicitly breaks factorisation.
Magnetic levitation for hard superconductors
Kordyuk, A.A.
1998-01-01
An approach for calculating the interaction between a hard superconductor and a permanent magnet in the field-cooled case is proposed. Exact solutions were obtained for a point magnetic dipole over a flat, ideally hard superconductor. We have shown that such an approach is adaptable to a wide practical range of melt-textured high-temperature superconductors' systems with magnetic levitation. In this case, the energy losses can be calculated from the alternating magnetic field distribution on the superconducting sample surface. © 1998 American Institute of Physics.
Asteroseismic constraints for Gaia
NASA Astrophysics Data System (ADS)
Creevey, O. L.; Thévenin, F.
2012-12-01
Distances from the Gaia mission will no doubt improve our understanding of stellar physics by providing an excellent constraint on the luminosity of the star. However, it is also clear that high precision stellar properties from, for example, asteroseismology, will also provide a needed input constraint in order to calibrate the methods that Gaia will use, e.g. stellar models or GSP_Phot. For solar-like stars (F, G, K IV/V), asteroseismic data delivers at the least two very important quantities: (1) the average large frequency separation <Δν> and (2) the frequency corresponding to the maximum of the modulated-amplitude spectrum ν_max. Both of these quantities are related directly to stellar parameters (radius and mass) and in particular their combination (gravity and density). We show how the precision in <Δν>, ν_max, and the atmospheric parameters T_eff and [Fe/H] affect the determination of gravity (log g) for a sample of well-known stars. We find that log g can be determined to within less than 0.02 dex accuracy for our sample while considering precisions in the data expected for V ~ 12 stars from Kepler data. We also derive masses and radii which are accurate to within 1σ of the accepted values. This study validates the subsequent use of all of the available asteroseismic data on solar-like stars from the Kepler field (>500 IV/V stars) in order to provide a very important constraint for Gaia calibration of GSP_Phot through the use of log g. We note that while we concentrate on IV/V stars, both the CoRoT and Kepler fields contain asteroseismic data on thousands of giant stars which will also provide useful calibration measures.
Conversion cascading constraint-aware adaptive routing for WDM optical networks
NASA Astrophysics Data System (ADS)
Gao, Xingbo; Bassiouni, Mostafa A.; Li, Guifang
2007-03-01
We examine the negative impact of wavelength conversion cascading on the performance of all-optical routing. When data in a circuit-switched connection are routed all optically from source to destination, each wavelength conversion performed along the lightpath of the connection causes some signal-to-noise deterioration. If the distortion of the signal quality becomes significant enough, the receiver will not be able to recover the original data. There is therefore an upper bound (threshold) on the number of wavelength conversions that a signal can go through when it is switched optically from its source to its destination. This constraint, which we refer to as the conversion cascading constraint, has largely been ignored by previous performance evaluation studies on all-optical routing. We proceed to show that existing static and dynamic routing and wavelength-assignment algorithms largely fail in the presence of the conversion cascading constraints. We then propose two constraint-aware dynamic algorithms: the first, the greedy constraint-aware routing algorithm, minimizes the number of wavelength conversions when establishing each connection, and the second, the weighted adaptive constraint-aware routing (W-ACAR) algorithm, jointly considers the distribution of free wavelengths, the length of each route, and the conversion cascading constraints. The results conclusively demonstrate that the proposed algorithms, especially W-ACAR, can achieve much better blocking performance in the environment of full and sparse wavelength conversion.
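The cascading constraint itself is easy to state: count the wavelength changes along a lightpath and reject any path exceeding the conversion threshold. A hypothetical sketch (not the paper's code):

```python
def conversions_along(path_wavelengths):
    """Number of wavelength conversions along a lightpath, given the
    wavelength used on each successive hop."""
    return sum(1 for a, b in zip(path_wavelengths, path_wavelengths[1:])
               if a != b)

def admissible(path_wavelengths, cascade_limit):
    """Conversion-cascading constraint: the signal may be converted at
    most cascade_limit times before accumulated noise makes it
    unrecoverable at the receiver."""
    return conversions_along(path_wavelengths) <= cascade_limit
```

A constraint-aware router would apply a check like `admissible` as a feasibility filter while scoring candidate routes.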
Self-organization and clustering algorithms
NASA Technical Reports Server (NTRS)
Bezdek, James C.
1991-01-01
Kohonen's feature-map approach to clustering is often likened to the k-means or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-means (HCM/FCM), or ISODATA, algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
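The hard/fuzzy distinction can be made concrete with the membership rules of HCM and FCM. The one-dimensional sketch below uses the standard FCM membership formula u_ik = 1 / Σ_j (d_ik / d_jk)^(2/(m-1)); it is illustrative, not the author's code.

```python
def hard_memberships(points, centers):
    """Hard c-means assignment: each point belongs entirely (membership 1)
    to its nearest center and not at all (0) to the others."""
    out = []
    for x in points:
        d = [abs(x - c) for c in centers]
        k = d.index(min(d))
        out.append([1.0 if i == k else 0.0 for i in range(len(centers))])
    return out

def fuzzy_memberships(points, centers, m=2.0):
    """Fuzzy c-means memberships: graded values in (0, 1) that sum to 1
    across centers, computed from relative distances."""
    out = []
    for x in points:
        d = [abs(x - c) or 1e-12 for c in centers]  # guard zero distance
        out.append([1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in d)
                    for di in d])
    return out
```

Both rules would alternate with a center-update step in the full iterative algorithms; only the membership step is shown here.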
Practical Cleanroom Operations Constraints
NASA Technical Reports Server (NTRS)
Hughes, David; Ginyard, Amani
2007-01-01
This viewgraph presentation reviews the GSFC cleanroom facility, i.e., the Spacecraft Systems Development and Integration Facility (SSDIF), with particular interest in its use during the development of the Wide Field Camera 3 (WFC3). The SSDIF is described and a diagram of it is shown. A constraint table was created for consistency within the Contamination Control Team; this table is shown, along with another table listing the activities allowed during integration under a given WFC3 condition and activity location. Three decision trees are shown for different phases of the work: (1) Hardware Relocation, (2) Hardware Work, and (3) Contamination Control Operations.
Superresolution via sparsity constraints
NASA Technical Reports Server (NTRS)
Donoho, David L.
1992-01-01
The problem of recovering a measure mu supported on a lattice of span Delta is considered under the condition that measurements are only available concerning the Fourier Transform at frequencies of Omega or less. If Omega is much smaller than the Nyquist frequency pi/Delta and the measurements are noisy, then stable recovery of mu is generally impossible. It is shown here that if, in addition, it is known that mu satisfies certain sparsity constraints, then stable recovery is possible. This finding validates practical efforts in spectroscopy, seismic prospecting, and astronomy to provide superresolution by imposing support limitations in reconstruction.
TRACON Aircraft Arrival Planning and Optimization Through Spatial Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Bergh, Christopher P.; Krzeczowski, Kenneth J.; Davis, Thomas J.; Denery, Dallas G. (Technical Monitor)
1995-01-01
A new aircraft arrival planning and optimization algorithm has been incorporated into the Final Approach Spacing Tool (FAST) in the Center-TRACON Automation System (CTAS) developed at NASA-Ames Research Center. FAST simulations have been conducted over three years involving full-proficiency, level five air traffic controllers from around the United States. From these simulations an algorithm, called Spatial Constraint Satisfaction, has been designed, coded, and tested, and soon will begin field evaluation at the Dallas-Fort Worth and Denver International airport facilities. The purpose of this new design is to show that the generation of efficient and conflict-free aircraft arrival plans at the runway does not guarantee an operationally acceptable arrival plan upstream from the runway; information encompassing the entire arrival airspace must be used in order to create an acceptable aircraft arrival plan. This new design includes functions available previously but additionally includes necessary representations of controller preferences and workload, operationally required amounts of extra separation, and integrated aircraft conflict resolution. As a result, the Spatial Constraint Satisfaction algorithm produces an optimized aircraft arrival plan that is more acceptable in terms of arrival procedures and air traffic controller workload. This paper discusses the current Air Traffic Control arrival planning procedures, previous work in this field, the design of the Spatial Constraint Satisfaction algorithm, and the results of recent evaluations of the algorithm.
Evaluation of Open-Source Hard Real Time Software Packages
NASA Technical Reports Server (NTRS)
Mattei, Nicholas S.
2004-01-01
Reliable software is, at times, hard to find. No piece of software can be guaranteed to work in every situation that may arise during its use here at Glenn Research Center or in space. The job of the Software Assurance (SA) group in the Risk Management Office is to rigorously test the software in an effort to ensure it matches the contract specifications. In some cases the SA team also researches new alternatives for selected software packages. This testing and research is an integral part of the department of Safety and Mission Assurance. Real-time operation, in reference to a computer system, is a particular style of handling the timing and manner in which inputs and outputs are processed. A real-time system executes these commands and the appropriate processing within a defined timing constraint. Within this definition there are two further classifications of real-time systems: hard and soft. A soft real-time system is one in which missing the particular timing constraints produces no critical results. A hard real-time system, on the other hand, is one in which missing the timing constraints could be catastrophic. An example of a soft real-time system is a DVD decoder: if a particular piece of input data is not decoded and displayed on the screen at exactly the correct moment, nothing critical will come of it, and the user may not even notice. However, a hard real-time system is needed to control the timing of fuel injection or steering on the Space Shuttle; a delay of even a fraction of a second could be catastrophic in such a complex system. The current real-time system employed by most NASA projects is Wind River's VxWorks operating system. This is a proprietary operating system that can be configured to work with many of NASA's needs, and it provides very accurate and reliable hard real-time performance. The downside is that, being proprietary, it is also costly to implement. The prospect of
Measurement kernel design for compressive imaging under device constraints
NASA Astrophysics Data System (ADS)
Shilling, Richard; Muise, Robert
2013-05-01
We look at the design of projective measurements for compressive imaging based upon image priors and device constraints. If one assumes that image patches from natural imagery can be modeled as a low rank manifold, we develop an optimality criterion for a measurement matrix based upon separating the canonical elements of the manifold prior. We then describe a stochastic search algorithm for finding the optimal measurements under device constraints based upon a subspace mismatch algorithm. The algorithm is then tested on a prototype compressive imaging device designed to collect an 8x4 array of projective measurements simultaneously. This work is based upon work supported by DARPA and the SPAWAR System Center Pacific under Contract No. N66001-11-C-4092. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
Symbolic Constraint Maintenance Grid
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Version 3.1 of Symbolic Constraint Maintenance Grid (SCMG) is a software system that provides a general conceptual framework for utilizing pre-existing programming techniques to perform symbolic transformations of data. SCMG also provides a language (and an associated communication method and protocol) for representing constraints on the original non-symbolic data. SCMG provides a facility for exchanging information between numeric and symbolic components without knowing the details of the components themselves. In essence, it integrates symbolic software tools (for diagnosis, prognosis, and planning) with non-artificial-intelligence software. SCMG executes a process of symbolic summarization and monitoring of continuous time series data that are being abstractly represented as symbolic templates of information exchange. This summarization process enables such symbolic- reasoning computing systems as artificial- intelligence planning systems to evaluate the significance and effects of channels of data more efficiently than would otherwise be possible. As a result of the increased efficiency in representation, reasoning software can monitor more channels and is thus able to perform monitoring and control functions more effectively.
Hard sphere packings within cylinders.
Fu, Lin; Steinhardt, William; Zhao, Hao; Socolar, Joshua E S; Charbonneau, Patrick
2016-02-23
Arrangements of identical hard spheres confined to a cylinder with hard walls have been used to model experimental systems, such as fullerenes in nanotubes and colloidal wire assembly. Finding the densest configurations, called close packings, of hard spheres of diameter σ in a cylinder of diameter D is a purely geometric problem that grows increasingly complex as D/σ increases, and little is thus known about the regime for D > 2.873σ. In this work, we extend the identification of close packings up to D = 4.00σ by adapting Torquato-Jiao's adaptive-shrinking-cell formulation and sequential-linear-programming (SLP) technique. We identify 17 new structures, almost all of them chiral. Beyond D ≈ 2.85σ, most of the structures consist of an outer shell and an inner core that compete for being close packed. In some cases, the shell adopts its own maximum density configuration, and the stacking of core spheres within it is quasiperiodic. In other cases, an interplay between the two components is observed, which may result in simple periodic structures. In yet other cases, the very distinction between the core and shell vanishes, resulting in more exotic packing geometries, including some that are three-dimensional extensions of structures obtained from packing hard disks in a circle. PMID:26843132
Metrics for Hard Goods Merchandising.
ERIC Educational Resources Information Center
Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.
Designed to meet the job-related metric measurement needs of students interested in hard goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…
Approximation Schemes for Scheduling with Availability Constraints
NASA Astrophysics Data System (ADS)
Fu, Bin; Huo, Yumei; Zhao, Hairong
We investigate the problems of scheduling n weighted jobs to m identical machines with availability constraints. We consider two different models of availability constraints: the preventive model, where the unavailability is due to preventive machine maintenance, and the fixed job model, where the unavailability is due to a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling or overlay computing. In both models, the objective is to minimize the total weighted completion time. We assume that m is a constant and the jobs are non-resumable. For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals, even when w_i = p_i for all jobs. In this paper, we assume there is one machine permanently available and the processing time of each job is equal to its weight for all jobs. We develop the first PTAS when there are a constant number of unavailable intervals. One main feature of our algorithm is that the classification of large and small jobs is with respect to each individual interval, and thus not fixed. This classification allows us (1) to enumerate the assignments of large jobs efficiently, and (2) to move small jobs around without increasing the objective value too much, and thus to derive our PTAS. We then show that there is no FPTAS in this case unless P = NP.
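The objective being minimized, total weighted completion time with non-resumable jobs and an unavailable interval, can be evaluated for a fixed job sequence on one machine as follows. This is a hypothetical helper for a single downtime interval, not the paper's PTAS; the abstract's special case additionally sets w_i = p_i.

```python
def total_weighted_completion(jobs, unavailable):
    """Total weighted completion time, sum of w_i * C_i, for jobs run in
    the given order on one machine with one unavailable interval.

    jobs: list of (processing_time, weight) pairs.
    unavailable: (start, end) of the downtime. Jobs are non-resumable,
    so a job that would overlap the downtime must wait until it ends.
    """
    s, e = unavailable
    t = 0.0      # current time
    total = 0.0  # accumulated weighted completion time
    for p, w in jobs:
        if t < e and t + p > s:  # job cannot fit before the downtime
            t = e
        t += p
        total += w * t
    return total
```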
Hard processes in hadronic interactions
Satz, H.; Wang, X.N.
1995-07-01
Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason - and in fact its original trigger - is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour it leads to at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time and work to it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley.
A quantitative model for interpreting nanometer scale hardness measurements of thin films
Poisl, W.H.; Fabes, B.D.; Oliver, W.C.
1993-09-01
A model was developed to determine the hardness of thin films from hardness versus depth curves, given the film thickness and substrate hardness. The model is developed by dividing the measured hardness into film and substrate contributions based on the projected areas of both the film and the substrate under the indenter. The model incorporates constraints on the deformation of the film by the surrounding material in the film, the substrate, and friction at the indenter/film and film/substrate interfaces. These constraints increase the pressure that the film can withstand and account for the increase in measured hardness as the indenter approaches the substrate. The model is evaluated by fitting the predicted hardness versus depth curves to those measured for titanium and Ta2O5 films of varying thicknesses on sapphire substrates. The model is also able to describe experimental data for Ta2O5 films on sapphire with a carbon layer between the film and the substrate by a reduction in the interfacial strength from that obtained for a film without an interfacial carbon layer.
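The area-weighting idea can be illustrated with a bare rule-of-mixtures sketch for a conical indenter. This omits the constraint terms the published model adds; the function name and the default Berkovich-equivalent tip angle are assumptions for illustration.

```python
import math

def measured_hardness(h, t, Hf, Hs, half_angle_deg=70.3):
    """Area-weighted composite hardness under a conical indenter.
    h: indent depth, t: film thickness, Hf/Hs: film/substrate hardness."""
    tan_a = math.tan(math.radians(half_angle_deg))
    A_total = math.pi * (h * tan_a) ** 2        # projected contact area
    if h <= t:
        return Hf                               # indenter still within the film
    A_sub = math.pi * ((h - t) * tan_a) ** 2    # area carried by the substrate
    A_film = A_total - A_sub
    return (Hf * A_film + Hs * A_sub) / A_total
```

For a soft film on a hard substrate (Hs > Hf), the result rises smoothly from Hf toward Hs as the indenter approaches and passes the interface, mirroring the trend the abstract describes.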
Naeem, Muhammad; Pareek, Udit; Lee, Daniel C; Anpalagan, Alagan
2013-01-01
Due to the rapid increase in the usage and demand of wireless sensor networks (WSN), the limited frequency spectrum available for WSN applications will be extremely crowded in the near future. More sensor devices also mean more recharging/replacement of batteries, which will have a significant impact on the global carbon footprint. In this paper, we propose a relay-assisted cognitive radio sensor network (CRSN) that allocates communication resources in an environmentally friendly manner. We use shared-band amplify-and-forward relaying for cooperative communication in the proposed CRSN. We present a multi-objective optimization architecture for resource allocation in a green cooperative cognitive radio sensor network (GC-CRSN). The proposed multi-objective framework jointly performs relay assignment and power allocation in GC-CRSN while optimizing two conflicting objectives. The first objective is to maximize the total throughput, and the second is to minimize the total transmission power of the CRSN. The proposed relay assignment and power allocation problem is a non-convex mixed-integer non-linear program (NC-MINLP), which is in general NP-hard. We introduce a hybrid heuristic algorithm for this problem. The hybrid heuristic includes an estimation-of-distribution algorithm (EDA) for performing power allocation and iterative greedy schemes for constraint satisfaction and relay assignment. We analyze the throughput and power consumption tradeoff in GC-CRSN. A detailed analysis of the performance of the proposed algorithm is presented with the simulation results. PMID:23584119
Wang, Xiang; Huang, Zhitao; Zhou, Yiyu
2012-01-01
Signal of interest (SOI) extraction is a vital issue in communication signal processing. In this paper, we propose two novel iterative algorithms for extracting SOIs from instantaneous mixtures, which incorporate the spatial constraints corresponding to the Directions of Arrival (DOAs) of the SOIs as a priori information within the constrained Independent Component Analysis (cICA) framework. The first algorithm uses the spatial constraints to form a new constrained optimization problem under the previous cICA framework, which requires various user parameters, i.e., a Lagrange parameter and a threshold measuring the accuracy of the spatial constraint, while the second algorithm uses the spatial constraints to select a specific initialization of the extracting vectors. The major difference between the two algorithms is that the former incorporates the prior information into the learning process of the iterative algorithm, whereas the latter uses the prior information to select the specific initialization vector. In the latter case no extra parameters are necessary in the learning process, which makes the algorithm simpler and more reliable and helps to improve the speed of extraction. Meanwhile, the convergence condition for the spatial constraints is analyzed. Compared with conventional techniques, i.e., MVDR, numerical simulation results demonstrate the effectiveness, robustness and higher performance of the proposed algorithms. PMID:23012531
A global approach to kinematic path planning to robots with holonomic and nonholonomic constraints
NASA Technical Reports Server (NTRS)
Divelbiss, Adam; Seereeram, Sanjeev; Wen, John T.
1993-01-01
Robots in applications may be subject to holonomic or nonholonomic constraints. Examples of holonomic constraints include a manipulator constrained through contact with the environment, e.g., inserting a part, turning a crank, etc., and multiple manipulators constrained through a common payload. Examples of nonholonomic constraints include no-slip constraints on mobile robot wheels, local normal rotation constraints for soft-finger and rolling contacts in grasping, and conservation of angular momentum of in-orbit space robots. The above examples all involve equality constraints; in applications, there are usually additional inequality constraints such as robot joint limits, self-collision and environment collision avoidance constraints, steering angle constraints in mobile robots, etc. The problem of finding a kinematically feasible path that satisfies a given set of holonomic and nonholonomic constraints, of both equality and inequality types, is addressed. The path planning problem is first posed as a finite-time nonlinear control problem. This problem is subsequently transformed into a static root-finding problem in an augmented space, which can then be solved iteratively. The algorithm has shown promising results in planning feasible paths for redundant arms satisfying Cartesian path-following and goal endpoint specifications, and for mobile vehicles with multiple trailers. In contrast to local approaches, this algorithm is less prone to problems such as singularities and local minima.
Relative constraints and evolution
NASA Astrophysics Data System (ADS)
Ochoa, Juan G. Diaz
2014-03-01
Several mathematical models of evolving systems assume that changes in the micro-states are constrained to the search of an optimal value in a local or global objective function. However, the concept of evolution requires a continuous change in the environment and species, making difficult the definition of absolute optimal values in objective functions. In this paper, we define constraints that are not absolute but relative to local micro-states, introducing a rupture in the invariance of the phase space of the system. This conceptual basis is useful to define alternative mathematical models for biological (or in general complex) evolving systems. We illustrate this concept with a modified Ising model, which can be useful to understand and model problems like the somatic evolution of cancer.
ϑ-SHAKE: An extension to SHAKE for the explicit treatment of angular constraints
NASA Astrophysics Data System (ADS)
Gonnet, Pedro; Walther, Jens H.; Koumoutsakos, Petros
2009-03-01
This paper presents ϑ-SHAKE, an extension to SHAKE, an algorithm for the resolution of holonomic constraints in molecular dynamics simulations, which allows for the explicit treatment of angular constraints. We show that this treatment is more efficient than the use of fictitious bonds, significantly reducing the overlap between the individual constraints and thus accelerating convergence. The new algorithm is compared with SHAKE, M-SHAKE, the matrix-based approach described by Ciccotti and Ryckaert and P-SHAKE for rigid water and octane.
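The baseline SHAKE idea that ϑ-SHAKE extends can be sketched for a single bond-length constraint: after an unconstrained step, positions are corrected iteratively until the constraint holds to a tolerance. This simplified variant corrects along the current bond vector (classical SHAKE uses the reference vector from the previous time step), and all names are illustrative.

```python
def shake_bond(r1, r2, d0, inv_m1=1.0, inv_m2=1.0, tol=1e-10, max_iter=100):
    """Project two 2D points back onto the constraint |r1 - r2| = d0,
    weighting the corrections by inverse masses."""
    r1, r2 = list(r1), list(r2)
    for _ in range(max_iter):
        dx = [r1[k] - r2[k] for k in range(2)]
        diff = dx[0] ** 2 + dx[1] ** 2 - d0 ** 2   # constraint violation
        if abs(diff) < tol:
            break
        # Lagrange-multiplier-style correction along the bond vector
        g = diff / (2.0 * (inv_m1 + inv_m2) * (dx[0] ** 2 + dx[1] ** 2))
        for k in range(2):
            r1[k] -= g * inv_m1 * dx[k]
            r2[k] += g * inv_m2 * dx[k]
    return r1, r2
```

For equal masses the center of mass is untouched and the bond length converges quadratically to d0; an angular constraint in full SHAKE is usually emulated with a fictitious third bond, which is exactly the overlap ϑ-SHAKE removes.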
Neural constraints on learning
Sadtler, Patrick T.; Quick, Kristin M.; Golub, Matthew D.; Chase, Steven M.; Ryu, Stephen I.; Tyler-Kabara, Elizabeth C.; Yu, Byron M.; Batista, Aaron P.
2014-01-01
Motor, sensory, and cognitive learning require networks of neurons to generate new activity patterns. Because some behaviors are easier to learn than others [1,2], we wondered if some neural activity patterns are easier to generate than others. We asked whether the existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define the constraint. We employed a closed-loop intracortical brain-computer interface (BCI) learning paradigm in which Rhesus monkeys controlled a computer cursor by modulating neural activity patterns in primary motor cortex. Using the BCI paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. These patterns comprise a low-dimensional space (termed the intrinsic manifold, or IM) within the high-dimensional neural firing rate space. They presumably reflect constraints imposed by the underlying neural circuitry. We found that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the IM. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the IM. This result suggests that the existing structure of a network can shape learning. On the timescale of hours, it appears to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess [3,4]. PMID:25164754
Neural constraints on learning.
Sadtler, Patrick T; Quick, Kristin M; Golub, Matthew D; Chase, Steven M; Ryu, Stephen I; Tyler-Kabara, Elizabeth C; Yu, Byron M; Batista, Aaron P
2014-08-28
Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Nanomechanics of hard films on compliant substrates.
Reedy, Earl David, Jr.; Emerson, John Allen; Bahr, David F.; Moody, Neville Reid; Zhou, Xiao Wang; Hales, Lucas; Adams, David Price; Yeager, John; Nguyen, Thao D.; Corona, Edmundo; Kennedy, Marian S.; Cordill, Megan J.
2009-09-01
Development of flexible thin film systems for biomedical, homeland security and environmental sensing applications has increased dramatically in recent years [1,2,3,4]. These systems typically combine traditional semiconductor technology with new flexible substrates, allowing for both the high electron mobility of semiconductors and the flexibility of polymers. The devices have the ability to be easily integrated into components and show promise for advanced design concepts, ranging from innovative microelectronics to MEMS and NEMS devices. These devices often contain layers of thin polymer, ceramic and metallic films where differing properties can lead to large residual stresses [5]. As long as the films remain substrate-bonded, they may deform far beyond their freestanding counterpart. Once debonded, substrate constraint disappears leading to film failure where compressive stresses can lead to wrinkling, delamination, and buckling [6,7,8] while tensile stresses can lead to film fracture and decohesion [9,10,11]. In all cases, performance depends on film adhesion. Experimentally it is difficult to measure adhesion. It is often studied using tape [12], pull off [13,14,15], and peel tests [16,17]. More recent techniques for measuring adhesion include scratch testing [18,19,20,21], four point bending [22,23,24], indentation [25,26,27], spontaneous blisters [28,29] and stressed overlayers [7,26,30,31,32,33]. Nevertheless, sample design and test techniques must be tailored for each system. There is a large body of elastic thin film fracture and elastic contact mechanics solutions for elastic films on rigid substrates in the published literature [5,7,34,35,36]. More recent work has extended these solutions to films on compliant substrates and show that increasing compliance markedly changes fracture energies compared with rigid elastic solution results [37,38]. However, the introduction of inelastic substrate response significantly complicates the problem [10,39,40]. As
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-08-30
We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
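The core primitive shared by these event-driven algorithms is pairwise collision-time prediction: solving for the first time two spheres come into contact. A minimal sketch, assuming equal-size spheres of diameter sigma (the function name is illustrative):

```python
import math

def collision_time(r1, v1, r2, v2, sigma):
    """Time until two spheres of diameter sigma touch, or None if they
    never do. Positions r and velocities v are 3-tuples."""
    dr = [a - b for a, b in zip(r1, r2)]
    dv = [a - b for a, b in zip(v1, v2)]
    b = sum(x * y for x, y in zip(dr, dv))
    if b >= 0.0:                       # centers not approaching: no collision
        return None
    dv2 = sum(x * x for x in dv)
    dr2 = sum(x * x for x in dr)
    disc = b * b - dv2 * (dr2 - sigma * sigma)
    if disc < 0.0:                     # closest approach still misses
        return None
    return (-b - math.sqrt(disc)) / dv2   # earlier root of the quadratic
```

An event-driven loop keeps such predicted times in a priority queue and advances the system from one event to the next, rather than by fixed time steps.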
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-02-28
We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Ultrasonic material hardness depth measurement
Good, M.S.; Schuster, G.J.; Skorpik, J.R.
1997-07-08
The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part. 12 figs.
Hard Photodisintegration of 3He
NASA Astrophysics Data System (ADS)
Granados, Carlos
2011-02-01
Large angle photodisintegration of two nucleons from the 3He nucleus is studied within the framework of the hard rescattering model (HRM). In the HRM the incoming photon is absorbed by one nucleon's valence quark, which then undergoes a hard rescattering reaction with a valence quark from the second nucleon, producing two nucleons emerging at large transverse momentum. Parameter-free cross sections for pp and pn breakup channels are calculated using experimental cross sections for pp and pn elastic scattering as input. The calculated cross section for pp breakup and its predicted energy dependence are in good agreement with recent experimental data. Predictions for spectator momentum distributions and helicity transfer are also presented.
Weld cladding of hard surfaces
NASA Astrophysics Data System (ADS)
Habrekke, T.
1993-02-01
A literature study of clad welding of hard surfaces onto steel was performed. The purpose was to see what kinds of methods are mainly used, with particular attention paid to clad welding of rolls. The main impression from this study is that several methods are in use. Some of these, such as laser build-up welding, must be considered too exotic for the aim of the program. However, clad welding of hard surfaces onto rolls is widely used around the world, and there is no need for particularly advanced welding methods to perform the work. The welding consumables and the way the welding is carried out are of greater importance. The report comments on this and gives a short review of the current technology in this field.
Sahoo, Pradyumna Kumar; Mandal, Palash Kumar; Ghosh, Saradindu
2014-01-01
Schwannomas are benign encapsulated perineural tumors. The head and neck region is the most common site. Intraoral origin is seen in only 1% of cases, tongue being the most common site; its location in the palate is rare. We report a case of hard-palate schwannoma with bony erosion which was immunohistochemically confirmed. The tumor was excised completely intraorally. After two months of follow-up, the defect was found to be completely covered with palatal mucosa. PMID:25298716
Microwave assisted hard rock cutting
Lindroth, David P.; Morrell, Roger J.; Blair, James R.
1991-01-01
An apparatus for the sequential fracturing and cutting of a subsurface volume of hard rock (102) in the strata (101) of a mining environment (100) by subjecting the volume of rock to a beam (25) of microwave energy to fracture the subsurface volume of rock by differential expansion, and then bringing the cutting edge (52) of a piece of conventional mining machinery (50) into contact with the fractured rock (102).
Low dose hard x-ray contact microscopy assisted by a photoelectric conversion layer
Gomella, Andrew; Martin, Eric W.; Lynch, Susanna K.; Wen, Han; Morgan, Nicole Y.
2013-04-15
Hard x-ray contact microscopy provides images of dense samples at resolutions of tens of nanometers. However, the required beam intensity can only be delivered by synchrotron sources. We report on the use of a gold photoelectric conversion layer to lower the exposure dose by a factor of 40 to 50, allowing hard x-ray contact microscopy to be performed with a compact x-ray tube. We demonstrate the method in imaging the transmission pattern of a type of hard x-ray grating that cannot be fitted into conventional x-ray microscopes due to its size and shape. Generally the method is easy to implement and can record images of samples in the hard x-ray region over a large area in a single exposure, without some of the geometric constraints associated with x-ray microscopes based on zone-plate or other magnifying optics.
An information-based neural approach to constraint satisfaction.
Jönsson, H; Söderberg, B
2001-08-01
A novel artificial neural network approach to constraint satisfaction problems is presented. Based on information-theoretical considerations, it differs from a conventional mean-field approach in the form of the resulting free energy. The method, implemented as an annealing algorithm, is numerically explored on a testbed of K-SAT problems. The performance shows a dramatic improvement over that of a conventional mean-field approach and is comparable to that of a state-of-the-art dedicated heuristic (GSAT+walk). The real strength of the method, however, lies in its generality. With minor modifications, it is applicable to arbitrary types of discrete constraint satisfaction problems. PMID:11506672
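For reference, the GSAT+walk family the paper benchmarks against can be sketched as a WalkSAT-style local search. This is a generic textbook variant, not the paper's neural method; parameter choices and names are illustrative. Clauses are lists of signed integers: literal 3 means x3 true, -3 means x3 false.

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p_noise=0.5, seed=0):
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]  # index 0 unused

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    def n_satisfied():
        return sum(1 for c in clauses if any(sat(l) for l in c))

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]              # satisfying assignment found
        clause = rng.choice(unsat)
        vars_in_clause = sorted({abs(l) for l in clause})
        if rng.random() < p_noise:         # random-walk move
            var = rng.choice(vars_in_clause)
        else:                              # greedy move: best trial flip
            def score(v):
                assign[v] = not assign[v]
                s = n_satisfied()
                assign[v] = not assign[v]  # undo the trial flip
                return s
            var = max(vars_in_clause, key=score)
        assign[var] = not assign[var]
    return None                            # gave up
```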
NASA Astrophysics Data System (ADS)
Perera, Yibran; Gottmann, Jens; Husmann, Andreas; Klotzbuecher, Thomas; Kreutz, Ernst-Wolfgang; Poprawe, Reinhart
2001-06-01
The deposition of hard ceramic coatings such as Al2O3, ZrO2, c-BN and DLC thin films by pulsed laser deposition (PLD) has been of increasing interest as an alternative process to the latest CVD and PVD techniques. In pulsed laser deposition, the properties of the resulting thin films are influenced by the composition, ionization state, density, and kinetic and excitation energies of the particles of the vapor/plasma. In order to deposit hard ceramics with different properties and applications, various substrates such as Pt/Ti/Si multilayers, glass (fused silica), steel, polymethylmethacrylate (PMMA), polycarbonate (PC), Si(100) and Si(111) are used. The thin films are deposited either by excimer laser radiation (λ = 248 nm) or by CO2 laser radiation (λ = 10.6 μm). To characterize the structural, optical and mechanical properties of the hard ceramic thin films, techniques such as Raman spectroscopy, ellipsometry, FTIR spectroscopy and nanoindentation are used.
NASA Astrophysics Data System (ADS)
Berkowitz, Max; Parr, Robert G.
1988-02-01
Hardness and softness kernels η(r,r') and s(r,r') are defined for the ground state of an atomic or molecular electronic system, and the previously defined local hardness and softness η(r) and s(r) and global hardness and softness η and S are obtained from them. The physical meaning of s(r), as a charge capacitance, is discussed (following Huheey and Politzer), and two alternative "hardness" indices are identified and briefly discussed.
Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids
Donev, A; Alder, B J; Garcia, A L
2008-02-26
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
Solution of the NP-hard total tardiness minimization problem in scheduling theory
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2007-06-01
The classical NP-hard (in the ordinary sense) problem of scheduling jobs in order to minimize the total tardiness for a single machine, 1||ΣT_j, is considered. An NP-hard instance of the problem is completely analyzed. A procedure for partitioning the initial set of jobs into subsets is proposed. Algorithms are constructed for finding an optimal schedule depending on the number of subsets. The complexity of the algorithms is O(n² Σ p_j), where n is the number of jobs and p_j is the processing time of the j-th job (j = 1, 2, …, n).
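The objective 1||ΣT_j is easy to state in code. A minimal sketch: total tardiness of a job sequence, plus a brute-force optimum for small instances (the paper's algorithms avoid this factorial search via the partitioning procedure; all names here are illustrative).

```python
from itertools import permutations

def total_tardiness(seq, p, d):
    """Sum of T_j = max(0, C_j - d_j) over jobs scheduled in order seq."""
    t, total = 0, 0
    for j in seq:
        t += p[j]                      # completion time C_j
        total += max(0, t - d[j])      # tardiness of job j
    return total

def brute_force_optimum(p, d):
    n = len(p)
    return min(total_tardiness(s, p, d) for s in permutations(range(n)))

# Example: three jobs with processing times p and due dates d.
p, d = [3, 2, 4], [2, 4, 9]
print(brute_force_optimum(p, d))  # → 2
```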
Velocity and energy distributions in microcanonical ensembles of hard spheres.
Scalas, Enrico; Gabriel, Adrian T; Martin, Edgar; Germano, Guido
2015-08-01
In a microcanonical ensemble (constant NVE, hard reflecting walls) and in a molecular dynamics ensemble (constant NVEPG, periodic boundary conditions) with a number N of smooth elastic hard spheres in a d-dimensional volume V having a total energy E, a total momentum P, and an overall center of mass position G, the individual velocity components, velocity moduli, and energies have transformed beta distributions with different arguments and shape parameters depending on d, N, E, the boundary conditions, and possible symmetries in the initial conditions. This can be shown by marginalizing the joint distribution of individual energies, which is a symmetric Dirichlet distribution. In the thermodynamic limit the beta distributions converge to gamma distributions with different arguments and shape or scale parameters, corresponding respectively to the Gaussian, i.e., Maxwell-Boltzmann, Maxwell, and Boltzmann or Boltzmann-Gibbs distribution. These analytical results agree with molecular dynamics and Monte Carlo simulations with different numbers of hard disks or spheres and hard reflecting walls or periodic boundary conditions. The agreement is perfect with our Monte Carlo algorithm, which acts only on velocities independently of positions with the collision versor sampled uniformly on a unit half sphere in d dimensions, while slight deviations appear with our molecular dynamics simulations for the smallest values of N. PMID:26382376
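The velocity-only Monte Carlo move described in the abstract can be sketched as follows, assuming equal masses in three dimensions: pick a random pair, sample a collision versor uniformly on the unit sphere, and exchange the relative-velocity component along it. Total energy and momentum are conserved exactly up to round-off; the function name is illustrative.

```python
import math, random

def mc_collision_step(vels, rng):
    """One velocity-only MC collision for equal-mass particles in 3D.
    vels: mutable list of 3-component velocities; rng: random.Random."""
    i, j = rng.sample(range(len(vels)), 2)
    # Uniform random unit vector (collision versor) in 3D.
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    n = (s * math.cos(phi), s * math.sin(phi), z)
    # Elastic hard-sphere exchange of the relative velocity along n.
    dv = [vels[i][k] - vels[j][k] for k in range(3)]
    proj = sum(dv[k] * n[k] for k in range(3))
    vels[i] = [vels[i][k] - proj * n[k] for k in range(3)]
    vels[j] = [vels[j][k] + proj * n[k] for k in range(3)]
```

Iterating this move relaxes the velocities toward the ensemble distributions discussed above without ever touching positions.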
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
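As an illustration of the underlying risk measure (shown here in its crisp, scenario-based form rather than the fuzzy possibilistic one, and without the cardinality, quantity, and transaction-cost terms of the full model), the semiabsolute deviation of a portfolio can be computed as:

```python
def semiabsolute_deviation(returns, weights):
    """Downside semiabsolute deviation: the mean shortfall of the portfolio
    return below its own mean. 'returns' is a list of scenarios, each a list
    of per-asset returns; 'weights' are the portfolio weights."""
    port = [sum(w * r for w, r in zip(weights, scen)) for scen in returns]
    mean = sum(port) / len(port)
    return sum(max(0.0, mean - x) for x in port) / len(port)
```

With the integer cardinality constraints added, minimizing this measure becomes the mixed integer nonlinear program that motivates the MABC heuristic.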
Seismological Constraints on Geodynamics
NASA Astrophysics Data System (ADS)
Lomnitz, C.
2004-12-01
Earth is an open thermodynamic system radiating heat energy into space. A transition from geostatic earth models such as PREM to geodynamical models is needed. We discuss possible thermodynamic constraints on the variables that govern the distribution of forces and flows in the deep Earth. In this paper we assume that the temperature distribution is time-invariant, so that all flows vanish at steady state except for the heat flow Jq per unit area (Kuiken, 1994). Superscript 0 will refer to the steady state while x denotes the excited state of the system. We may write σ^0 = (Jq^0 · Xq^0)/T, where Xq is the conjugate force corresponding to Jq, and σ is the rate of entropy production per unit volume. Consider now what happens after the occurrence of an earthquake at time t=0 and location (0,0,0). The earthquake introduces a stress drop ΔP(x,y,z) at all points of the system. Response flows are directed along the gradients toward the epicentral area, and the entropy production will increase with time as (Prigogine, 1947) σ^x(t) = σ^0 + α_1/(t+β) + α_2/(t+β)^2 + … A seismological constraint on the parameters may be obtained from Omori's empirical relation N(t) = p/(t+q), where N(t) is the number of aftershocks at time t following the main shock. It may be assumed that p/q ~ α_1/β times a constant. Another useful constraint is the Mexican-hat geometry of the seismic transient as obtained e.g. from InSAR radar interferometry. For strike-slip events such as Landers the distribution of ΔP is quadrantal, and an oval-shaped seismicity gap develops about the epicenter. A weak outer triggering maximum is found at a distance of about 17 fault lengths. Such patterns may be extracted from earthquake catalogs by statistical analysis (Lomnitz, 1996). Finally, the energy of the perturbation must be at least equal to the recovery energy. The total energy expended in an aftershock sequence can be found approximately by integrating the local contribution over
Credit Constraints for Higher Education
ERIC Educational Resources Information Center
Solis, Alex
2012-01-01
This paper exploits a natural experiment that produces exogenous variation in credit access to determine the effect on college enrollment. The paper assesses how important credit constraints are in explaining the gap in college enrollment by family income, and what the gap would be if credit constraints were eliminated. Progress in college and dropout…
Fixed Costs and Hours Constraints
ERIC Educational Resources Information Center
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
NP-hardness of decoding quantum error-correction codes
Hsieh, Min-Hsiu; Le Gall, Francois
2011-05-15
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as for their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
A Constraint Integer Programming Approach for Resource-Constrained Project Scheduling
NASA Astrophysics Data System (ADS)
Berthold, Timo; Heinz, Stefan; Lübbecke, Marco E.; Möhring, Rolf H.; Schulz, Jens
We propose a hybrid approach for solving the resource-constrained project scheduling problem, an extremely hard-to-solve combinatorial optimization problem of practical relevance. Jobs have to be scheduled on (renewable) resources subject to precedence constraints such that the resource capacities are never exceeded and the latest completion time of all jobs is minimized.
An SMP soft classification algorithm for remote sensing
NASA Astrophysics Data System (ADS)
Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.
2014-07-01
This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification, at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.
Nonlinear Global Optimization Using Curdling Algorithm
Energy Science and Technology Software Center (ESTSC)
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
A timeline algorithm for astronomy missions
NASA Technical Reports Server (NTRS)
Moore, J. E.; Guffin, O. T.
1975-01-01
An algorithm is presented for generating viewing timelines for orbital astronomy missions of the pointing (nonsurvey/scan) type. The algorithm establishes a target sequence from a list of candidate targets in a way which maximizes total viewing time. Two special cases are treated. One concerns dim targets which, due to lighting constraints, are scheduled only during the antipolar portion of each orbit. They normally require long observation times extending over several revolutions. A minimum slew heuristic is employed to select the sequence of dim targets. The other case deals with bright, or short duration, targets, which have less restrictive lighting constraints and are scheduled during the portion of each orbit when dim targets cannot be viewed. Since this process moves much more rapidly than the dim path, an enumeration algorithm is used to select the sequence that maximizes total viewing time.
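The minimum slew heuristic for dim targets can be sketched as a greedy nearest-target rule (a simplified Python illustration; the coordinates are made-up examples and the orbital lighting-window bookkeeping is omitted):

```python
import math

def angular_separation(a, b):
    """Great-circle angle (radians) between targets given as (ra, dec) in radians."""
    (ra1, dec1), (ra2, dec2) = a, b
    c = (math.sin(dec1) * math.sin(dec2)
         + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp guards against rounding

def min_slew_sequence(start, targets):
    """Greedy minimum-slew heuristic: always slew to the nearest unvisited target."""
    seq, current, remaining = [], start, list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: angular_separation(current, t))
        seq.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return seq
```

The bright-target case in the abstract instead enumerates sequences outright, which is feasible because far fewer targets fit in each short viewing window.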
Calculating the free energy of nearly jammed hard-particle packings using molecular dynamics
NASA Astrophysics Data System (ADS)
Donev, Aleksandar; Stillinger, Frank H.; Torquato, Salvatore
2007-07-01
We present a new event-driven molecular dynamics (MD) algorithm for measuring the free energy of nearly jammed packings of spherical and non-spherical hard particles. This Bounding Cell Molecular Dynamics (BCMD) algorithm exactly calculates the free-energy of a single-occupancy cell (SOC) model in which each particle is restricted to a neighborhood of its initial position using a hard-wall bounding cell. Our MD algorithm generalizes previous ones in the literature by enabling us to study non-spherical particles as well as to measure the free-energy change during continuous irreversible transformations. Moreover, we make connections to the well-studied problem of computing the volume of convex bodies in high dimensions using random walks. We test and verify the numerical accuracy of the method by comparing against rigorous asymptotic results for the free energy of jammed and isostatic disordered packings of both hard spheres and ellipsoids, for which the free energy can be calculated directly as the volume of a high-dimensional simplex. We also compare our results to previously published Monte Carlo results for hard-sphere crystals near melting and jamming and find excellent agreement. We have successfully used the BCMD algorithm to determine the configurational and free-volume contributions to the free energy of glassy states of binary hard disks [A. Donev, F.H. Stillinger, S. Torquato, Do binary hard disks exhibit an ideal glass transition? Phys. Rev. Lett. 96 (22) (2006) 225502]. The algorithm can also be used to determine phases with locally- or globally-minimal free energy, to calculate the free-energy cost of point and extended crystal defects, or to calculate the elastic moduli of glassy or crystalline solids, among other potential applications.
The Hard Problem of Cooperation
Eriksson, Kimmo; Strimling, Pontus
2012-01-01
Based on individual variation in cooperative inclinations, we define the “hard problem of cooperation” as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior. PMID:22792282
Making Nozzles From Hard Materials
NASA Technical Reports Server (NTRS)
Wells, Dennis L.
1989-01-01
Proposed method of electrical-discharge machining (EDM) cuts hard materials like silicon carbide into smoothly contoured parts. Concept developed for fabrication of interior and exterior surfaces and internal cooling channels of convergent/divergent nozzles. EDM wire at skew angle theta creates hyperboloidal cavity in tube. Wire offset from axis of tube and from axis of rotation by distance equal to throat radius. Maintaining same skew angle as that used to cut hyperboloidal inner surface but using larger offset, cooling channel cut in material near inner hyperboloidal surface.
Radiation Hardness Assurance (RHA) Guideline
NASA Technical Reports Server (NTRS)
Campola, Michael J.
2016-01-01
Radiation Hardness Assurance (RHA) consists of all activities undertaken to ensure that the electronics and materials of a space system perform to their design specifications after exposure to the mission space environment. The subset of interest for NEPP and the REAG is EEE parts. It is important to note that all of these undertakings form a feedback loop and require constant iteration and updating throughout the mission life. More detail can be found in the reference materials on applicable test data for usage on parts.
Evolutionary constraints or opportunities?
Sharov, Alexei A.
2014-01-01
Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term “constraint” has negative connotations, I use the term “regulated variation” to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch “on” or “off” preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). PMID:24769155
Infrared Kuiper Belt Constraints
Teplitz, V.L.; Stern, S.A.; Anderson, J.D.; Rosenbaum, D.; Scalise, R.J.; Wentzler, P.
1999-05-01
We compute the temperature and IR signal of particles of radius a and albedo α at heliocentric distance R, taking into account the emissivity effect, and give an interpolating formula for the result. We compare with analyses of COBE DIRBE data by others (including recent detection of the cosmic IR background) for various values of heliocentric distance R, particle radius a, and particle albedo α. We then apply these results to a recently developed picture of the Kuiper belt as a two-sector disk with a nearby, low-density sector (40 < R < 50–90 AU) and a more distant sector with a higher density. We consider the case in which passage through a molecular cloud essentially cleans the solar system of dust. We apply a simple model of dust production by comet collisions and removal by the Poynting-Robertson effect to find limits on total and dust masses in the near and far sectors as a function of time since such a passage. Finally, we compare Kuiper belt IR spectra for various parameter values. Results of this work include: (1) numerical limits on Kuiper belt dust as a function of (R, a, α) on the basis of four alternative sets of constraints, including those following from the recent discovery of the cosmic IR background by Hauser et al.; (2) application to the two-sector Kuiper belt model, finding mass limits and spectrum shape for different values of relevant parameters, including dependence on time elapsed since last passage through a molecular cloud cleared the outer solar system of dust; and (3) potential use of spectral information to determine time since last passage of the Sun through a giant molecular cloud. © 1999 The American Astronomical Society
A constrained optimization algorithm based on the simplex search method
NASA Astrophysics Data System (ADS)
Mehta, Vivek Kumar; Dasgupta, Bhaskar
2012-05-01
In this article, a robust method is presented for handling constraints with the Nelder and Mead simplex search method, which is a direct search algorithm for multidimensional unconstrained optimization. The proposed method is free from the limitations of previous attempts that demand the initial simplex to be feasible or a projection of infeasible points to the nonlinear constraint boundaries. The method is tested on several benchmark problems and the results are compared with various evolutionary algorithms available in the literature. The proposed method is found to be competitive with respect to the existing algorithms in terms of effectiveness and efficiency.
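The paper's own constraint-handling scheme is not reproduced here, but the setting can be illustrated with a minimal Nelder-Mead implementation combined with an exterior quadratic penalty, a common and cruder alternative to the proposed method; the test problem is an assumption chosen for illustration:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=2000):
    """Minimal Nelder-Mead simplex search: reflection, expansion,
    inside contraction, and shrink (outside contraction omitted)."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(best):
            exp = [c + 2 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(second):
            simplex[-1] = refl
        else:
            contr = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink every vertex halfway toward the best one
                simplex = [best] + [[b + 0.5 * (q - b) for q, b in zip(p, best)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

def penalized(z, mu=1e4):
    """(x-2)^2 + (y-1)^2 subject to x + y <= 2, via exterior quadratic penalty."""
    x, y = z
    g = x + y - 2.0
    return (x - 2) ** 2 + (y - 1) ** 2 + mu * max(0.0, g) ** 2

sol = nelder_mead(penalized, [0.0, 0.0])  # constrained optimum is (1.5, 0.5)
```

Penalty weights need tuning and slightly bias the solution off the boundary; avoiding such drawbacks, along with the feasible-initial-simplex and projection requirements mentioned above, is what motivates the article's method.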
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wan-Lin
2003-01-01
This paper introduces JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint reasoner with a run-time software environment. Attachments in JNET are constraints over arbitrary Java objects, which are defined using Java code, at runtime, with no changes to the JNET source code.
An improved image deconvolution approach using local constraint
NASA Astrophysics Data System (ADS)
Zhao, Jufeng; Feng, Huajun; Xu, Zhihai; Li, Qi
2012-03-01
Conventional deblurring approaches such as the Richardson-Lucy (RL) algorithm introduce strong noise and ringing artifacts even when the point spread function (PSF) is known. Since it is difficult to estimate an accurate PSF in a real imaging system, the results of those algorithms degrade further. A spatial weight matrix (SWM) is adopted as a local constraint, which is incorporated into an image statistical prior to improve the RL approach. Experiments show that our approach strikes a good balance between preserving image details and suppressing ringing artifacts and noise.
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
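As a concrete instance of the AM-GM separation (an illustrative example, not one from the paper): to minimize the posynomial f(x1, x2) = x1*x2 + 2/x1 + 2/x2 over x > 0, the geometric-arithmetic mean inequality majorizes the coupled term x1*x2 at the current iterate (y1, y2) by (y1*y2/2)*((x1/y1)^2 + (x2/y2)^2), which is tight at (y1, y2). The surrogate then separates, and each one-dimensional minimization has a closed form:

```python
def mm_minimize(y1=3.0, y2=0.5, iters=60):
    """MM iteration for f(x1, x2) = x1*x2 + 2/x1 + 2/x2, x1, x2 > 0.
    Majorizing x1*x2 by (y1*y2/2)*((x1/y1)**2 + (x2/y2)**2) separates the
    surrogate; setting each 1-D derivative to zero gives the updates below.
    The true minimizer is x1 = x2 = 2**(1/3)."""
    for _ in range(iters):
        # Simultaneous update: the right-hand side uses the old (y1, y2).
        y1, y2 = (2 * y1 / y2) ** (1 / 3), (2 * y2 / y1) ** (1 / 3)
    return y1, y2
```

Each sweep decreases f monotonically (the MM descent property), and the iterates converge to 2^(1/3) ≈ 1.26 in both coordinates.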
Russian Doll Search for solving Constraint Optimization problems
Verfaillie, G.; Lemaitre, M.
1996-12-31
Although the Constraint Satisfaction framework has been extended to deal with Constraint Optimization problems, optimization appears far more complex than satisfaction. One of the causes of the inefficiency of complete tree search methods, like Depth First Branch and Bound, lies in the poor quality of the lower bound on the global valuation of a partial assignment, even when using Forward Checking techniques. In this paper, we introduce the Russian Doll Search algorithm, which replaces one search by n successive searches on nested subproblems (n being the number of problem variables), records the results of each search, and uses them later, when solving larger subproblems, in order to improve the lower bound on the global valuation of any partial assignment. On small random problems and on large real scheduling problems, this algorithm yields surprisingly good results, which improve greatly as the problems get more constrained and the bandwidth of the used variable ordering diminishes.
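The nested-search idea can be sketched on a toy weighted CSP (Python; the pairwise cost interface is an illustrative assumption, and the forward-checking machinery of the full algorithm is omitted):

```python
def russian_doll_search(domains, cost):
    """Russian Doll Search for a toy weighted CSP: minimize the sum of
    pairwise costs cost(i, vi, j, vj) over complete assignments.
    Subproblems on the variable suffixes i..n-1 are solved from the last
    variable backwards; each recorded optimum rds[i] then serves as a lower
    bound on the remaining cost while solving the next larger subproblem."""
    n = len(domains)
    rds = [0] * (n + 1)  # rds[i] = optimal cost of the subproblem on vars i..n-1
    for first in range(n - 1, -1, -1):
        best = float("inf")

        def branch(i, assigned, acc):
            nonlocal best
            if acc + rds[i] >= best:  # bound: cost so far + suffix optimum
                return
            if i == n:
                best = acc
                return
            for v in domains[i]:
                extra = sum(cost(j, w, i, v) for j, w in assigned)
                branch(i + 1, assigned + [(i, v)], acc + extra)

        branch(first, [], 0)
        rds[first] = best
    return rds[0]
```

For example, three mutually conflicting binary variables (a unit cost whenever two take the same value) cannot avoid at least one conflict, so the optimum is 1; the recorded suffix optima prune assignments that cannot beat the incumbent.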
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art. PMID:26394426
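A much-simplified flavor of the detection step is greedy one-dimensional clustering of, say, the left edges of the bounding boxes (the paper instead detects constraints inside a quadratic programming formulation; the tolerance and snapping rule here are illustrative assumptions):

```python
def detect_alignment_groups(values, tol=2.0):
    """Greedy detection of alignment constraints: cluster 1-D coordinates
    (e.g. left edges of element bounding boxes) whose successive gaps are
    within 'tol', and return (group, snapped_value) pairs for regularization."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups, current = [], [order[0]]
    for i in order[1:]:
        if values[i] - values[current[-1]] <= tol:
            current.append(i)  # close enough to the previous edge: same group
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    # Snap each detected group to its mean coordinate.
    return [(g, sum(values[i] for i in g) / len(g)) for g in groups]
```

Note the greedy chaining can merge elements whose extreme members are farther apart than `tol`; the optimization-based detection in the paper avoids such artifacts.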
Landscape analysis of constraint satisfaction problems.
Krzakala, Florent; Kurchan, Jorge
2007-08-01
We discuss an analysis of constraint satisfaction problems, such as sphere packing, K-SAT, and graph coloring, in terms of an effective energy landscape. Several intriguing geometrical properties of the solution space become, in this light, familiar in terms of the well-studied ones of rugged (glassy) energy landscapes. A benchmark algorithm naturally suggested by this construction finds solutions in polynomial time up to a point beyond the clustering and in some cases even the thermodynamic transitions. This point has a simple geometric meaning and can in principle be determined with standard statistical mechanical methods, thus pushing the analytic bound up to which problems are guaranteed to be easy. We illustrate this for the graph 3- and 4-coloring problem. For packing problems, the present discussion allows one to better characterize the J-point, proposed as a systematic definition of random close packing, and to place it in the context of other theories of glasses. PMID:17930021
Packing Boxes into Multiple Containers Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Menghani, Deepak; Guha, Anirban
2016-07-01
Container loading problems have been studied extensively in the literature and various analytical, heuristic and metaheuristic methods have been proposed. This paper presents two different variants of a genetic algorithm framework for the three-dimensional container loading problem for optimally loading boxes into multiple containers with constraints. The algorithms are designed so that it is easy to incorporate various constraints found in real life problems. The algorithms are tested on data of standard test cases from literature and are found to compare well with the benchmark algorithms in terms of utilization of containers. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real life scenarios.
Hardness correlation for uranium and its alloys
Humphreys, D L; Romig, Jr, A D
1983-03-01
The hardness of 16 different uranium-titanium (U-Ti) alloys was measured on six (6) different hardness scales (R_A, R_B, R_C, R_D, Knoop, and Vickers). The alloys contained between 0.75 and 2.0 wt % Ti. All of the alloys were solutionized (850 °C, 1 h) and ice-water quenched to produce a supersaturated martensitic phase. A range of hardnesses was obtained by aging the samples for various times and temperatures. The correlation of the various hardness scales was shown to be virtually identical to the hardness-scale correlation for steels. For more-accurate conversion from one hardness scale to another, least-squares curve fits were determined for the various hardness-scale correlations. 34 figures, 5 tables.
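The idea of a least-squares conversion fit can be illustrated as follows (Python; the (Vickers, Rockwell C) pairs below are hypothetical round numbers for illustration, not the measured uranium-alloy data, and a straight line stands in for the report's fitted curves):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a*x + b for converting between scales."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical (Vickers HV, Rockwell C HRC) calibration pairs, for
# illustration only -- NOT the measured uranium-alloy data from the report.
hv = [240, 280, 320, 360, 400]
hrc = [20.3, 27.1, 32.2, 36.6, 40.8]
a, b = linear_fit(hv, hrc)  # convert with HRC ~ a*HV + b
```

In practice the scale correlations are curved, so the report fits higher-order least-squares curves; the same normal-equations machinery applies.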
Hard and Soft Safety Verifications
NASA Technical Reports Server (NTRS)
Wetherholt, Jon; Anderson, Brenda
2012-01-01
The purpose of this paper is to examine the differences between and the effects of hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is datum which demonstrates how a safety control is enacted. An example of this is relief valve testing. A soft safety verification is something which is usually described as nice to have but it is not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings from Shuttle flight, STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. Generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examination for erosion or wear of the casings and nozzle. Loss of the SRBs and associated data did not delay the launch of the next Shuttle flight.
Applying Motion Constraints Based on Test Data
NASA Technical Reports Server (NTRS)
Burlone, Michael
2014-01-01
MSC ADAMS is simulation software used to analyze multibody dynamics. Using user subroutines, it is possible to apply motion constraints to the rigid bodies so that they match the motion profile collected from test data. This presentation describes the process of taking test data and passing it to ADAMS using user subroutines, and uses the Morpheus free-flight 4 test as an example of motion data used for this purpose. Morpheus is the name of a prototype lander vehicle built by NASA that serves as a test bed for various experimental technologies (see backup slides for details). MSC ADAMS is used to play back telemetry data (vehicle orientation and position) from each test as the inputs to a 6-DoF general motion constraint (details in backup slides). The MSC ADAMS playback simulations allow engineers to examine and analyze flight trajectory as well as observe vehicle motion from any angle and at any playback speed. This facilitates the development of robust and stable control algorithms, increasing reliability and reducing development costs of this developmental engine. The simulation also incorporates a 3D model of the artificial hazard field, allowing engineers to visualize and measure performance of the developmental autonomous landing and hazard avoidance technology. ADAMS is a multibody dynamics solver. It uses forces, constraints, and mass properties to numerically integrate equations of motion. The ADAMS solver will ask the motion subroutine for position, velocity, and acceleration values at various time steps. Those values must be continuous over the whole time domain. Each degree of freedom in the telemetry data can be examined separately; however, linear interpolation of the telemetry data is invalid, since it would introduce discontinuities in velocity and acceleration.
Self-accelerating massive gravity: Hidden constraints and characteristics
NASA Astrophysics Data System (ADS)
Motloch, Pavel; Hu, Wayne; Motohashi, Hayato
2016-05-01
Self-accelerating backgrounds in massive gravity provide an arena to explore the Cauchy problem for derivatively coupled fields that obey complex constraints which reduce the phase space degrees of freedom. We present here an algorithm based on the Kronecker form of a matrix pencil that finds all hidden constraints, for example those associated with derivatives of the equations of motion, and characteristic curves for any 1+1 dimensional system of linear partial differential equations. With the Regge-Wheeler-Zerilli decomposition of metric perturbations into angular momentum and parity states, this technique applies to fully 3+1 dimensional perturbations of massive gravity around any spherically symmetric self-accelerating background. Five spin modes of the massive graviton propagate once the constraints are imposed: two spin-2 modes with luminal characteristics present in the massless theory as well as two spin-1 modes and one spin-0 mode. Although the new modes all possess the same (typically spacelike) characteristic curves, the spin-1 modes are parabolic while the spin-0 modes are hyperbolic. The joint system, which remains coupled by nonderivative terms, cannot be solved as a simple Cauchy problem from a single noncharacteristic surface. We also illustrate the generality of the algorithm with other cases where derivative constraints reduce the number of propagating degrees of freedom or order of the equations.
Unique sodium phosphosilicate glasses designed through extended topological constraint theory.
Zeng, Huidan; Jiang, Qi; Liu, Zhao; Li, Xiang; Ren, Jing; Chen, Guorong; Liu, Fude; Peng, Shou
2014-05-15
Sodium phosphosilicate glasses exhibit unique properties with mixed network formers and have various potential applications. However, a proper understanding of the network structures and a property-oriented methodology based on compositional changes are lacking. In this study, we have developed an extended topological constraint theory and applied it successfully to analyze the composition dependence of the glass transition temperature (Tg) and hardness of sodium phosphosilicate glasses. It was found that the hardness and Tg of the glasses do not always increase with SiO2 content; instead, both reach maxima at a certain SiO2 content. In particular, a unique glass (20Na2O-17SiO2-63P2O5) exhibits a low glass transition temperature (589 K) while retaining relatively high hardness (4.42 GPa), mainly due to the high fraction of the highly coordinated network former Si(6). Because of their convenient forming and manufacturing, such phosphosilicate glasses have many valuable applications in optical fibers, optical amplifiers, biomaterials, and fuel cells. The methodology can also be applied to other types of phosphosilicate glasses with similar structures. PMID:24779999
A very high speed hard decision sequential decoder.
NASA Technical Reports Server (NTRS)
Gilhousen, K. S.; Lumb, D. R.
1972-01-01
Description of a 40-megabit-per-second hard-decision sequential decoder which employs the fastest commercially available digital integrated circuits, MECL III. With this decoder an internal computational rate of 70,000,000 computations per second has been achieved. The computational efficiency of the decoding algorithm has been improved by incorporating two modifications to the Fano algorithm, namely 'double quick threshold loosening' and 'diagonal steps.' On the basis of preliminary results, an output error rate of 0.00001 can be achieved with Eb/N0 less than 5.4 dB at data rates up to 40 megabits per second. The very high internal operating speed of the decoder represents a factor-of-five increase in speed over any previous sequential decoder.
Integrated Science--Reasons & Constraints.
ERIC Educational Resources Information Center
Fox, M.; Oliver, P. M.
1978-01-01
Describes the philosophy and development of an integrated science program in a British secondary school. Discusses constraints on the program, including laboratory facilities, money, and fewer laboratory technicians. (MA)
Constraint-based stereo matching
NASA Technical Reports Server (NTRS)
Kuan, D. T.
1987-01-01
The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
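The epipolar and attribute filters described above can be illustrated as a candidate-generation step. This is a simplified stand-in for the paper's method, assuming a rectified image pair; the edge-tuple layout `(row, col, orientation)` and the tolerances are invented for the example:

```python
def candidate_matches(left_edges, right_edges,
                      max_disparity=16, row_tol=1, orient_tol=0.2):
    """Generate initial left/right edge correspondences.

    Each edge is (row, col, orientation). Candidates must lie on (nearly)
    the same scanline (epipolar constraint for a rectified pair), have a
    disparity within bounds, and have similar edge orientation."""
    matches = []
    for i, (rl, cl, ol) in enumerate(left_edges):
        for j, (rr, cr, orr) in enumerate(right_edges):
            if abs(rl - rr) > row_tol:          # epipolar constraint
                continue
            d = cl - cr                         # disparity (left minus right)
            if not (0 <= d <= max_disparity):   # disparity bound
                continue
            if abs(ol - orr) > orient_tol:      # edge attribute similarity
                continue
            matches.append((i, j, d))
    return matches
```

In the paper's framework, these locally filtered candidates would then be pruned further by continuity, junction, and neighborhood constraints to reach a globally consistent match set.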
Spacecraft Attitude Maneuver Planning Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2004-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
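The key idea, folding the constraints into the cost function searched by a genetic algorithm, can be sketched in a few lines. This is a generic toy GA, not the paper's implementation; the population size, penalty weight, and operators are arbitrary choices for illustration:

```python
import random

def ga_minimize(cost, penalty, dim, pop=30, gens=60, seed=1):
    """Tiny real-coded GA; constraint violations are folded into the
    fitness through a penalty term, as in the cost-function approach."""
    rng = random.Random(seed)
    fitness = lambda x: cost(x) + 100.0 * penalty(x)  # penalty weight is arbitrary
    P = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[:pop // 2]                 # keep the best half between iterations
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)      # directed random search: recombine elites
            children.append([(ai + bi) / 2.0 + rng.gauss(0.0, 0.05)
                             for ai, bi in zip(a, b)])
        P = elite + children
    return min(P, key=fitness)

# Toy use: drive a single "attitude" parameter toward 0 while a hypothetical
# keep-out constraint demands x[0] >= 0.2; the constrained optimum is 0.2.
best = ga_minimize(cost=lambda x: x[0] ** 2,
                   penalty=lambda x: max(0.0, 0.2 - x[0]),
                   dim=1)
```

As the abstract notes, a solution found this way may be used as-is or handed to a deterministic optimizer for refinement.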
Constrained Multiobjective Biogeography Optimization Algorithm
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved to feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591
Fluid convection, constraint and causation
Bishop, Robert C.
2012-01-01
Complexity—nonlinear dynamics for my purposes in this essay—is rich with metaphysical and epistemological implications but is receiving sustained philosophical analysis only recently. I will explore some of the subtleties of causation and constraint in Rayleigh–Bénard convection as an example of a complex phenomenon, and extract some lessons for further philosophical reflection on top-down constraint and causation particularly with respect to causal foundationalism. PMID:23386955
NASA Astrophysics Data System (ADS)
Tsai, Chi-Yi; Song, Kai-Tai
2006-02-01
A novel heterogeneity-projection hard-decision adaptive interpolation (HPHD-AI) algorithm is proposed in this paper for color reproduction from Bayer mosaic images. The proposed algorithm aims to estimate the optimal interpolation direction and perform hard-decision interpolation, in which the decision is made before interpolation. To do so, a new heterogeneity-projection scheme based on spectral-spatial correlation is proposed to decide the best interpolation direction from the original mosaic image directly. Exploiting the proposed heterogeneity-projection scheme, a hard-decision rule can be designed easily to perform the interpolation. We have compared this technique with three recently proposed demosaicing techniques: Lu's, Gunturk's and Li's methods, by utilizing twenty-five natural images from Kodak PhotoCD. The experimental results show that HPHD-AI outperforms all of them in both PSNR values and S-CIELab ΔE*ab measures.
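The essence of a hard interpolation decision, committing to a direction from local measurements before interpolating, can be sketched as follows. This is a generic single-channel toy, not the HPHD-AI heterogeneity projection itself; the function names are invented:

```python
def hard_decision_direction(img, r, c):
    """Hard decision: commit to an interpolation direction *before*
    interpolating, by comparing local horizontal and vertical gradients."""
    dh = abs(img[r][c - 1] - img[r][c + 1])  # horizontal heterogeneity
    dv = abs(img[r - 1][c] - img[r + 1][c])  # vertical heterogeneity
    return 'horizontal' if dh <= dv else 'vertical'

def interpolate(img, r, c):
    """Interpolate the missing sample along the chosen direction only,
    so edges are never averaged across."""
    if hard_decision_direction(img, r, c) == 'horizontal':
        return (img[r][c - 1] + img[r][c + 1]) / 2.0
    return (img[r - 1][c] + img[r + 1][c]) / 2.0
```

Near a vertical edge, the vertical gradient is smaller, so the decision avoids averaging across the edge, which is the source of demosaicing artifacts that soft, after-the-fact decisions try to repair.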
NASA Technical Reports Server (NTRS)
Rothschild, R. E.
1981-01-01
Past hard X-ray and lower energy satellite instruments are reviewed, and it is shown that observation above 20 keV and up to hundreds of keV can provide much valuable information on the astrophysics of cosmic sources. To calculate possible sensitivities of future arrays, the efficiencies of a one-atmosphere inch gas counter (the HEAO-1 A-2 xenon-filled HED3) and a 3 mm phoswich scintillator (the HEAO-1 A-4 NaI LED1) were compared. Above 15 keV, the scintillator was more efficient. In a similar comparison, the sensitivity of germanium detectors did not differ much from that of the scintillators, except at high energies, where the sensitivity would remain flat rather than rise with loss of efficiency. Questions to be addressed concerning the physics of active galaxies and the diffuse radiation background, black holes, radio pulsars, X-ray pulsars, and galactic clusters are examined.
Development of radiation hard scintillators
Markley, F.; Woods, D.; Pla-Dalmau, A.; Foster, G.; Blackburn, R.
1992-05-01
Substantial improvements have been made in the radiation hardness of plastic scintillators. Cylinders of scintillating materials 2.2 cm in diameter and 1 cm thick have been exposed to 10 Mrads of gamma rays at a dose rate of 1 Mrad/h in a nitrogen atmosphere. One of the formulations tested showed an immediate decrease in pulse height of only 4% and has remained stable for 12 days while annealing in air. By comparison a commercial PVT scintillator showed an immediate decrease of 58% and after 43 days of annealing in air it improved to a 14% loss. The formulated sample consisted of 70 parts by weight of Dow polystyrene, 30 pbw of pentaphenyltrimethyltrisiloxane (Dow Corning DC 705 oil), 2 pbw of p-terphenyl, 0.2 pbw of tetraphenylbutadiene, and 0.5 pbw of UVASIL299LM from Ferro.
NASA Astrophysics Data System (ADS)
Radac, Mircea-Bogdan; Precup, Radu-Emil
2016-05-01
This paper presents the design and experimental validation of a new model-free data-driven iterative reference input tuning (IRIT) algorithm that solves a reference trajectory tracking problem as an optimization problem with control signal saturation constraints and control signal rate constraints. The IRIT algorithm design employs an experiment-based stochastic search algorithm to exploit the advantages of iterative learning control. The experimental results validate the IRIT algorithm on a non-linear aerodynamic position control system. They show that the IRIT algorithm delivers significant control system performance improvement within a few iterations and experiments conducted on the real-world process, with model-free parameter tuning.
Hard X-ray outbursts in LMXBs: the case of 4U 1705-44
NASA Astrophysics Data System (ADS)
D'Ai, Antonino
2011-10-01
We propose a 60 ks XMM-Newton observation of the atoll source 4U 1705-44 as a Target of Opportunity when the source is in the hard state. This observation will provide the still-missing constraints on the shape of the reflection component in this spectral state. The XMM observation will be coupled with weeks-long coverage, through periodic visits, made with the Swift satellite.
Fuzzy and hard clustering analysis for thyroid disease.
Azar, Ahmad Taher; El-Said, Shaimaa Ahmed; Hassanien, Aboul Ella
2013-07-01
Thyroid hormones produced by the thyroid gland help regulate the body's metabolism. A variety of methods have been proposed in the literature for thyroid disease classification; as far as we know, however, clustering techniques have not previously been applied to this data set. This paper compares hard and fuzzy clustering algorithms on a thyroid disease data set in order to find the optimal number of clusters. Different scalar validity measures are used to compare the performances of the proposed clustering systems. To demonstrate the performance of each algorithm, the feature values that represent thyroid disease are used as input to the system. Several runs are carried out and recorded, with a different number of clusters specified for each run (between 2 and 11), so as to establish the optimum number of clusters. To find the optimal number of clusters, the so-called elbow criterion is applied. The experimental results revealed that for all algorithms, the elbow was located at c=3. The clustering results for all algorithms are then visualized by the Sammon mapping method to find a low-dimensional (normally 2D or 3D) representation of a set of points distributed in a high-dimensional pattern space. At the end of this study, some recommendations are formulated to improve determination of the actual number of clusters present in the data set. PMID:23357404
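The elbow criterion mentioned above can be illustrated with a minimal hard-clustering example: run k-means for increasing k, record the within-cluster error, and pick the k where the error curve bends. This toy uses 1-D data and a largest-relative-drop rule as one simple elbow heuristic; it is not the paper's procedure, and all names are invented:

```python
def kmeans_sse(data, k, iters=50):
    """Lloyd's k-means on 1-D data; returns the within-cluster sum of
    squared errors (SSE). Centers start at evenly spaced quantiles."""
    pts = sorted(data)
    centers = [pts[int(i * (len(pts) - 1) / max(k - 1, 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in pts:  # assign each point to its nearest center
            groups[min(range(k), key=lambda j: (x - centers[j]) ** 2)].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sum(min((x - c) ** 2 for c in centers) for x in pts)

def elbow(data, kmax=5):
    """One simple elbow rule: pick the k after the largest relative SSE drop."""
    sse = [kmeans_sse(data, k) for k in range(1, kmax + 1)]
    ratios = [sse[i - 1] / (sse[i] + 1e-12) for i in range(1, len(sse))]
    return ratios.index(max(ratios)) + 2  # ratios[0] is the k=1 -> k=2 drop
```

On data with three well-separated groups, the SSE collapses at k=3 and flattens afterward, so the heuristic recovers the true cluster count.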
Laser-induced autofluorescence of oral cavity hard tissues
NASA Astrophysics Data System (ADS)
Borisova, E. G.; Uzunov, Tz. T.; Avramov, L. A.
2007-03-01
In the current study, the autofluorescence of oral cavity hard tissues was investigated to obtain a more complete picture of their optical properties. As an excitation source, a nitrogen laser (ILGI-503, Russia; 337.1 nm, 14 μJ, 10 Hz) was used. In vitro spectra from enamel, dentine, cartilage, and the spongiosa and cortical parts of the periodontal bones were registered using a fiber-optic microspectrometer (PC2000, "Ocean Optics" Inc., USA). Gingival fluorescence was also obtained for comparison of its spectral properties with those of hard oral tissues. The samples show significant differences in fluorescence properties from one another. Signal from different collagen types and collagen cross-links is clearly observed, with maxima at 385, 430 and 480-490 nm. In dentine, only two maxima are observed, at 440 and 480 nm, also related to collagen structures. In gingival and spongiosa samples, traces of hemoglobin were observed through its re-absorption at 545 and 575 nm, which distorts the fluorescence spectra detected from these anatomic sites. The results obtained in this study are intended to be used for the development of algorithms for diagnosis and differentiation of teeth lesions and other problems of oral cavity hard tissues such as periodontitis and gingivitis.
Aerocapture Guidance Algorithm Comparison Campaign
NASA Technical Reports Server (NTRS)
Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric
2002-01-01
Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on insertion by aerocapture, and a CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To conclude this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared with the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.
JPIC-Rad-Hard JPEG2000 Image Compression ASIC
NASA Astrophysics Data System (ADS)
Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov
2010-08-01
JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces, the JPEG2K-E IP core from Alma implements the compression algorithm [2], and Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.
A modified multilevel scheme for internal and external constraints in virtual environments.
Arikatla, Venkata S; De, Suvranu
2013-01-01
Multigrid algorithms are gaining popularity in virtual reality simulations as they have a theoretically optimal performance that scales linearly with the number of degrees of freedom of the simulation system. We propose a multilevel approach that combines the efficiency of the multigrid algorithms with the ability to resolve multi-body constraints during interactive simulations. First, we develop a single level modified block Gauss-Seidel (MBGS) smoother that can incorporate constraints. This is subsequently incorporated in a standard multigrid V-cycle with corrections for constraints to form the modified multigrid V-cycle (MMgV). Numerical results show that the solver can resolve constraints while achieving the theoretical performance of multigrid schemes. PMID:23400125
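The flavor of a constraint-aware smoother can be conveyed with a projected Gauss-Seidel sweep, where each unknown is clamped to its bound immediately after its update. This is a generic illustration of the idea, not the paper's MBGS smoother or its multigrid V-cycle; the names are invented:

```python
def constrained_gauss_seidel(A, b, bounds, sweeps=50):
    """Gauss-Seidel sweeps for A x = b with each unknown projected onto
    its bounds right after its update (a simple constraint-aware smoother)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            lo, hi = bounds[i]
            x[i] = min(hi, max(lo, (b[i] - s) / A[i][i]))  # update, then project
    return x
```

When the bounds are inactive this reduces to plain Gauss-Seidel; an active bound simply pins the corresponding unknown, and the remaining unknowns relax around it.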
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found quickly and are sufficiently accurate for the purpose. In this paper we have performed an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
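A permutation-encoded genetic algorithm for a single-vehicle instance (a TSP-like special case of the VRP) can be sketched as below. This is a deliberately minimal illustration, not the paper's algorithm; the selection and mutation choices are arbitrary:

```python
import random

def route_cost(route, dist):
    """Cost of one vehicle's tour over all customers, depot = node 0."""
    tour = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def mutate(route, rng):
    """Swap mutation on the permutation encoding."""
    i, j = rng.sample(range(len(route)), 2)
    r = route[:]
    r[i], r[j] = r[j], r[i]
    return r

def ga_vrp(dist, pop=20, gens=100, seed=0):
    """Evolve customer permutations; elitism plus swap mutation."""
    rng = random.Random(seed)
    n = len(dist) - 1  # customers are nodes 1..n
    P = [rng.sample(range(1, n + 1), n) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda r: route_cost(r, dist))
        elite = P[:pop // 2]
        P = elite + [mutate(rng.choice(elite), rng)
                     for _ in range(pop - len(elite))]
    return min(P, key=lambda r: route_cost(r, dist))
```

On a tiny instance with customers on a line, the optimum is any monotone sweep out and back, and the GA finds it immediately; realistic instances would add crossover, capacity constraints, and multiple routes.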
Better Polynomial Algorithms on Graphs of Bounded Rank-Width
NASA Astrophysics Data System (ADS)
Ganian, Robert; Hliněný, Petr
Although many polynomial algorithms exist for NP-hard problems running on a bounded clique-width expression of the input graph, there is only little comparable work on such algorithms for rank-width. We believe that one reason for this is the somewhat obscure and hard-to-grasp nature of rank-decompositions. Nevertheless, strong arguments for using the rank-width parameter have been given by recent formalisms independently developed by Courcelle and Kanté, by the authors, and by Bui-Xuan et al. This article focuses on designing formally clean and understandable "pseudopolynomial" (XP) algorithms solving "hard" problems (non-FPT) on graphs of bounded rank-width. These include computing the chromatic number and chromatic polynomial, and testing the Hamiltonicity of a graph, and are extendable to many other problems.
Strangeness conservation constraints in hadron gas models
Tiwari, V.K.; Singh, S.K.; Uddin, S.; Singh, C.P.
1996-05-01
We examine the implications of the constraints arising from strangeness conservation on strangeness production in various existing thermal hadron-gas models. The dependence of the strangeness chemical potential μ_S on the baryon chemical potential μ_B and temperature T is investigated. The incorporation of finite-size, hard-core, repulsive interactions in the thermodynamically consistent description of a hot and dense hadron gas alters the results obtained for pointlike particles. We compare results in two extreme alternative cases: (1) K and K* mesons are treated as point particles that can penetrate all volumes occupied by baryons and antibaryons, and (2) the volume occupied by the baryons and antibaryons is not accessible to them. We find that the results indeed depend on the assumptions made. Moreover, the anomalous results obtained for the ratios anti-Ξ/Ξ and anti-Λ/Λ rule out the second possibility. (c) 1996 The American Physical Society.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
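The core interaction rule, two functions compose to produce a new function that re-enters a fixed-size ensemble, can be sketched directly. This toy operates on integer maps rather than lambda-calculus expressions, and all names are invented for illustration:

```python
import random

def turing_gas_step(gas, rng):
    """One collision: compose two randomly chosen functions; the product
    overwrites a random member, keeping the ensemble size fixed."""
    f, g = rng.choice(gas), rng.choice(gas)
    product = lambda x, f=f, g=g: f(g(x))   # interaction = function composition
    gas[rng.randrange(len(gas))] = product
    return gas

# Seed ensemble: simple integer maps (mod 97 keeps values bounded) standing
# in for lambda-calculus expressions.
gas = [lambda x: (x + 1) % 97, lambda x: (2 * x) % 97, lambda x: (x * x) % 97]
rng = random.Random(0)
for _ in range(100):
    turing_gas_step(gas, rng)
```

The ensemble size never changes, only its composition, which mirrors the iterated map on function sets described in the abstract; in the real model the richer lambda-calculus semantics is what allows self-replicators and cooperative organizations to emerge.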
Diverless hard-pipe connection systems for subsea pipelines and flowlines
Reddy, S.K.; Paull, B.M.; Hals, B.E.
1996-12-31
Hard-pipe tie-in jumpers, for diverless subsea connections between production manifolds and export pipelines (or satellite wells), are an economical alternative to traditional diverless connection methods like deflection and pull-in, and also to flexible pipe jumpers. A systems-level approach to the design of the jumpers, which takes into consideration performance requirements, measurement methods, fabrication and installation constraints, as well as code requirements, is essential to making these connections economical and reliable. A dependable, ROV-friendly measurement system is key to making these connections possible. The parameters affecting the design of hard-pipe jumpers, and the relationships between them, are discussed in the context of minimizing cost while maintaining reliability. The applicability of pipeline codes to the design of hard-pipe jumpers is examined. The design, construction and installation of the Amoco Liuhua 11-1 pipeline tie-in jumpers are presented as a case study for applying these concepts.
Influence of transparent coating hardness on laser-generated ultrasonic waves
NASA Astrophysics Data System (ADS)
Guo, Yuning; Yang, Dexing; Feng, Wen; Chang, Ying
2013-01-01
Numerical models are established to investigate the influence of transparent coating hardness on the laser-generated thermoelastic force source and ultrasonic waves in coating-substrate systems using the finite element method. With increasing coating hardness, the longitudinal wave in the substrate becomes more pronounced due to the gradual increase of the reactive force produced by the coating constraint; the directivity patterns of the longitudinal wave show that the energy concentration area transfers gradually from the bilateral areas to the axial direction. Therefore, the directivity pattern can be regulated to obtain better ultrasonic signals by coating materials of different hardness. This is significant for the further development of experiments in composite evaluation and under extreme conditions.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm a single path, or trajectory, is taken in parameter space from the starting point to the globally optimal solution, while in the RBSA algorithm many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
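The trust-region feature, bounds enforced naturally because candidates are drawn from a shrinking region and clipped, can be sketched with a plain simulated-annealing loop. This generic single-trajectory sketch omits the branching structure; the names and schedules are invented:

```python
import math
import random

def sa_trust_region(cost, lo, hi, iters=2000, seed=0):
    """Simulated annealing with a shrinking trust region; candidates are
    clipped to [lo, hi], so the bound constraints are enforced naturally."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best = x
    for k in range(iters):
        T = 1.0 * (0.995 ** k)                    # cooling schedule (arbitrary)
        radius = 0.5 * (hi - lo) * (0.999 ** k)   # shrinking trust region
        cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
        d = cost(cand) - cost(x)
        if d < 0 or rng.random() < math.exp(-d / max(T, 1e-12)):
            x = cand                              # Metropolis acceptance
        if cost(x) < cost(best):
            best = x
    return best
```

The branching idea then corresponds to running several such trajectories from different seeds and keeping the best, e.g. `min((sa_trust_region(cost, 0.0, 1.0, seed=s) for s in range(8)), key=cost)`.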
Developmental constraints on behavioural flexibility
Holekamp, Kay E.; Swanson, Eli M.; Van Meter, Page E.
2013-01-01
We suggest that variation in mammalian behavioural flexibility not accounted for by current socioecological models may be explained in part by developmental constraints. From our own work, we provide examples of constraints affecting variation in behavioural flexibility, not only among individuals, but also among species and higher taxonomic units. We first implicate organizational maternal effects of androgens in shaping individual differences in aggressive behaviour emitted by female spotted hyaenas throughout the lifespan. We then compare carnivores and primates with respect to their locomotor and craniofacial adaptations. We inquire whether antagonistic selection pressures on the skull might impose differential functional constraints on evolvability of skulls and brains in these two orders, thus ultimately affecting behavioural flexibility in each group. We suggest that, even when carnivores and primates would theoretically benefit from the same adaptations with respect to behavioural flexibility, carnivores may nevertheless exhibit less behavioural flexibility than primates because of constraints imposed by past adaptations in the morphology of the limbs and skull. Phylogenetic analysis consistent with this idea suggests greater evolutionary lability in relative brain size within families of primates than carnivores. Thus, consideration of developmental constraints may help elucidate variation in mammalian behavioural flexibility. PMID:23569298
A new algorithm for constrained nonlinear least-squares problems, part 1
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, F. T.
1983-01-01
A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.
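For a single unknown, the Gauss-Newton step with a box trust region reduces to a few lines. The sketch below is a generic illustration, not the paper's solver; the model is linear in the parameter so the least-squares step is exact, and all names are invented:

```python
def gauss_newton_1d(residuals, jacobian, a0, trust=0.5, iters=20):
    """Gauss-Newton for one unknown with a box trust region:
    every step is clipped to [-trust, +trust] around the current point."""
    a = a0
    for _ in range(iters):
        r = residuals(a)
        J = jacobian(a)
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        step = -Jtr / JtJ                     # unconstrained GN step
        a += max(-trust, min(trust, step))    # clip to the trust-region box
    return a

# Toy problem: fit y = a * t to data generated with a = 2
ts = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
a_hat = gauss_newton_1d(lambda a: [a * t - y for t, y in zip(ts, ys)],
                        lambda a: ts,  # d r_i / d a = t_i
                        a0=0.0)
```

Here the full GN step from a0=0 would be +2.0, but the trust box limits each iteration to +0.5, so the solver takes four clipped steps and then stalls at the minimum, illustrating how the box bounds the update without any extra constraint machinery.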
An Augmentation of G-Guidance Algorithms
NASA Technical Reports Server (NTRS)
Carson, John M. III; Acikmese, Behcet
2011-01-01
The original G-Guidance algorithm provided an autonomous guidance and control policy for small-body proximity operations that took into account uncertainty and dynamics disturbances. However, it lacked robustness with regard to object proximity while in autonomous mode. The modified G-Guidance algorithm was augmented with a second operational mode that allows switching into a safety hover mode, causing the spacecraft to hover in place until a mission-planning algorithm can compute a safe new trajectory; no state or control constraints are violated. When a new, feasible state trajectory is calculated, the spacecraft returns to standard mode and maneuvers toward the target. The main goal of this augmentation is to protect the spacecraft in the event that a landing surface or obstacle is closer or farther than anticipated. The algorithm can also mitigate any unexpected trajectory or state changes that occur during standard-mode operations.
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
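As a toy illustration of one of the techniques compared above, simulated annealing can schedule a handful of conflicting observation requests. The priorities and conflict matrix below are invented for the example; the paper's actual problems and Java software are far richer:

```python
import math
import random

def anneal_schedule(priority, conflict, steps=20000, t0=2.0, seed=1):
    """Toy observation scheduler: pick a subset of requests maximizing
    total priority, never selecting two that conflict (e.g. overlapping
    view windows).  State: a bit vector; a move flips one request."""
    rng = random.Random(seed)
    n = len(priority)

    def value(sel):
        # hard constraint: conflicting pairs are forbidden outright
        for i in range(n):
            for j in range(i + 1, n):
                if sel[i] and sel[j] and conflict[i][j]:
                    return float("-inf")
        return sum(p for p, s in zip(priority, sel) if s)

    cur = [0] * n
    cur_val = value(cur)
    best, best_val = cur[:], cur_val
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9            # linear cooling
        cand = cur[:]
        cand[rng.randrange(n)] ^= 1
        cand_val = value(cand)
        # Metropolis acceptance: always take improvements, sometimes worse
        if cand_val >= cur_val or rng.random() < math.exp((cand_val - cur_val) / temp):
            cur, cur_val = cand, cand_val
            if cur_val > best_val:
                best, best_val = cur[:], cur_val
    return best, best_val

# four requests; request 0 conflicts with 3, and 1 with 2
priority = [5, 4, 3, 6]
conflict = [[0, 0, 0, 1],
            [0, 0, 1, 0],
            [0, 1, 0, 0],
            [1, 0, 0, 0]]
best, best_val = anneal_schedule(priority, conflict)
```

The optimum here is requests {1, 3} with total priority 10; the annealer reaches it because early high temperatures let it back out of the locally attractive but blocking choice of request 0.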
Genetic algorithms and supernovae type Ia analysis
Bogdanos, Charalampos; Nesseris, Savvas E-mail: nesseris@nbi.dk
2009-05-15
We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model.
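A minimal real-coded genetic algorithm of the general kind described, here recovering a toy w(z) = w0 + wa·z parametrization from noise-free mock data. The operators, bounds, and data are illustrative assumptions, not the authors' setup:

```python
import random

def genetic_fit(fitness, bounds, pop_size=60, gens=120, seed=2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation clamped to the per-parameter bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        new = []
        while len(new) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)       # tournament of 3
            p2 = min(rng.sample(pop, 3), key=fitness)
            w = rng.random()                                # blend crossover
            child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            if rng.random() < 0.3:                          # mutate one gene
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            new.append(child)
        pop = new
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best

# mock equation-of-state data: w(z) = w0 + wa*z with w0 = -1.0, wa = 0.3
zs = [0.1 * i for i in range(15)]
data = [-1.0 + 0.3 * z for z in zs]
chi2 = lambda p: sum((p[0] + p[1] * z - d) ** 2 for z, d in zip(zs, data))
w0, wa = genetic_fit(chi2, bounds=[(-2.0, 0.0), (-1.0, 1.0)])
```

The appeal noted in the abstract is that nothing in `genetic_fit` depends on the functional form being fit, so the same driver works for non-parametric reconstructions.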
Genetic map construction with constraints
Clark, D.A.; Rawlings, C.J.; Soursenot, S.
1994-12-31
A pilot program, CME, is described for generating a physical genetic map from hybridization fingerprinting data. CME is implemented in the parallel constraint logic programming language ElipSys. The features of constraint logic programming are used to enable the integration of preexisting mapping information (partial probe orders from cytogenetic maps and local physical maps) into the global map generation process, while parallelism enables the search space to be traversed more efficiently. CME was tested using data from chromosome 2 of Schizosaccharomyces pombe and was found able to generate maps as well as (and sometimes better than) a more traditional method. This paper illustrates the practical benefits of using a symbolic logic programming language and shows that the features of constraint handling and parallel execution bring the development of practical systems based on AI programming technologies nearer to being a reality.
The development of hard x-ray optics at MSFC
NASA Astrophysics Data System (ADS)
Ramsey, Brian D.; Elsner, Ron F.; Engelhaupt, Darell; Gubarev, Mikhail V.; Kolodziejczak, Jeffery J.; O'Dell, Stephen L.; Speegle, Chet O.; Weisskopf, Martin C.
2004-02-01
We have developed the electroformed-nickel replication process to enable us to fabricate light-weight, high-quality mirrors for the hard-x-ray region. Two projects currently utilizing this technology are the production of 240 mirror shells, of diameters ranging from 50 to 94 mm, for our HERO balloon payload, and 150- and 230-mm-diameter shells for a prototype Constellation-X hard-x-ray telescope module. The challenge for the former is to fabricate, mount, align and fly a large number of high-resolution mirrors within the constraints of a modest budget. For the latter, the challenge is to maintain high angular resolution despite weight-budget-driven mirror shell thicknesses (100 μm) which make the shells extremely sensitive to fabrication and handling stresses, and to ensure that the replication process does not degrade the ultra-smooth surface finish (~3 Å) required for eventual multilayer coatings. We present a progress report on these two programs.
Moving magnetic nanoparticles through soft-hard magnetic composite system
NASA Astrophysics Data System (ADS)
Subramanian, Hemachander; Han, Jong
2007-03-01
An important requirement during the design of a nano-electromechanical system is the ability to move a nanoparticle from one point to another in a predictable way. Through simulations, we demonstrate that soft-hard magnetic structures can help us move nanoparticles predictably. We simulated a 2-D system, in which the exchange-coupled soft-magnetic magnetization is frustrated with the boundary condition set by a hard magnetic array and rotating external field. We consider a geometry with three-fold degenerate magnetic local minima and show that the hysteretic transitions are manipulated by an external field. Due to the reduced interfacial energy from weak demagnetization energy in the composite magnets and magnetic hysteresis, the energy landscape can be manipulated in a well-defined and predictable manner. We apply this idea to control the movement of a magnetic particle placed on a non-magnetic layer on top of the structure. We are interested in extending this simple, preliminary study to include complex geometries. We expect that complex geometrical constraints would lead to interesting orbits of nanoparticles in these systems.
Magnetotail dynamics under isobaric constraints
NASA Technical Reports Server (NTRS)
Birn, Joachim; Schindler, Karl; Janicke, Lutz; Hesse, Michael
1994-01-01
Using linear theory and nonlinear MHD simulations, we investigate the resistive and ideal MHD stability of two-dimensional plasma configurations under the isobaric constraint dP/dt = 0, which in ideal MHD is equivalent to conserving the pressure function P = P(A), where A denotes the magnetic flux. This constraint is satisfied for incompressible modes, such as Alfven waves, and for systems undergoing energy losses. The linear stability analysis leads to a Schroedinger equation, which can be investigated by standard quantum mechanics procedures. We present an application to a typical stretched magnetotail configuration. For a one-dimensional sheet equilibrium characteristic properties of tearing instability are rediscovered. However, the maximum growth rate scales with the 1/7 power of the resistivity, which implies much faster growth than for the standard tearing mode (assuming that the resistivity is small). The same basic eigen-mode is found also for weakly two-dimensional equilibria, even in the ideal MHD limit. In this case the growth rate scales with the 1/4 power of the normal magnetic field. The results of the linear stability analysis are confirmed qualitatively by nonlinear dynamic MHD simulations. These results suggest the interesting possibility that substorm onset, or the thinning in the late growth phase, is caused by the release of a thermodynamic constraint without the (immediate) necessity of releasing the ideal MHD constraint. In the nonlinear regime the resistive and ideal developments differ in that the ideal mode does not lead to neutral line formation without the further release of the ideal MHD constraint; instead a thin current sheet forms. The isobaric constraint is critically discussed. Under perhaps more realistic adiabatic conditions the ideal mode appears to be stable but could be driven by external perturbations and thus generate the thin current sheet in the late growth phase, before a nonideal instability sets in.
Tan, Q; Huang, G H; Cai, Y P
2010-09-01
The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. PMID:20580864
NASA Astrophysics Data System (ADS)
Nättilä, J.; Steiner, A. W.; Kajava, J. J. E.; Suleimanov, V. F.; Poutanen, J.
2016-06-01
The cooling phase of thermonuclear (type-I) X-ray bursts can be used to constrain neutron star (NS) compactness by comparing the observed cooling tracks of bursts to accurate theoretical atmosphere model calculations. By applying the so-called cooling tail method, where the information from the whole cooling track is used, we constrain the mass, radius, and distance for three different NSs in low-mass X-ray binaries 4U 1702-429, 4U 1724-307, and SAX J1810.8-260. Care is taken to use only the hard state bursts where it is thought that the NS surface alone is emitting. We then use a Markov chain Monte Carlo algorithm within a Bayesian framework to obtain a parameterized equation of state (EoS) of cold dense matter from our initial mass and radius constraints. This allows us to set limits on various nuclear parameters and to constrain an empirical pressure-density relationship for the dense matter. Our predicted EoS results in a NS radius between 10.5 and 12.8 km (95% confidence limits) for a mass of 1.4 M⊙, depending slightly on the assumed composition. Because of systematic errors and uncertainty in the composition, these results should be interpreted as lower limits for the radius.
Genetic Algorithms for Digital Quantum Simulations
NASA Astrophysics Data System (ADS)
Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.
2016-06-01
We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., if any is present, for any seed required to be labeled as to the percentage of germination, and the percentage of hard seed shall not be included as part of the germination percentage. ... 7 Agriculture 3 2012-01-01 2012-01-01 false Hard seed. 201.21 Section 201.21...
Code of Federal Regulations, 2013 CFR
2013-01-01
... any is present, for any seed required to be labeled as to the percentage of germination, and the percentage of hard seed shall not be included as part of the germination percentage. ... 7 Agriculture 3 2013-01-01 2013-01-01 false Hard seed. 201.30 Section 201.30...
Retraction of Hard, Lozano, and Tversky (2006)
ERIC Educational Resources Information Center
Hard, B. M.; Lozano, S. C.; Tversky, B.
2008-01-01
Reports a retraction of "Hierarchical encoding of behavior: Translating perception into action" by Bridgette Martin Hard, Sandra C. Lozano and Barbara Tversky (Journal of Experimental Psychology: General, 2006[Nov], Vol 135[4], 588-608). All authors retract this article. Co-author Tversky and co-author Hard believe that the research results cannot…
Scaling, dimensional analysis, and hardness measurements
NASA Astrophysics Data System (ADS)
Cheng, Yang-Tse; Cheng, Che-Min; Li, Zhiyong
2000-03-01
Hardness is one of the most frequently used concepts in tribology. For nearly one hundred years, indentation experiments have been performed to obtain the hardness of materials. Recent years have seen significant improvements in indentation equipment and a growing need to measure the mechanical properties of materials on small scales. However, questions remain, including what properties can be measured using instrumented indentation techniques, and what hardness actually is. We discuss these basic questions using dimensional analysis together with finite element calculations. We derive scaling relationships for the loading and unloading curves, initial unloading slope, contact depth, and hardness. Hardness is shown to depend on the elastic as well as the plastic properties of materials. The conditions for "piling-up" and "sinking-in" of surface profiles in indentation are obtained. The methods for estimating contact area are examined. The work done during indentation is also studied. A relationship between hardness, elastic modulus, and the work of indentation is revealed; this relationship offers a new method for obtaining hardness and elastic modulus. In addition, we demonstrate that stress-strain relationships may not be uniquely determined from loading/unloading curves alone using a conical or pyramidal indenter. The dependence of hardness on indenter geometry is also studied. Finally, a scaling theory for indentation in power-law creep solids using self-similar indenters is developed. A connection between creep and the "indentation size effect" is established.
"Hard Science" for Gifted 1st Graders
ERIC Educational Resources Information Center
DeGennaro, April
2006-01-01
"Hard Science" is designed to teach 1st grade gifted students accurate and high level science concepts. It is based upon their experience of the world and attempts to build a foundation for continued love and enjoyment of science. "Hard Science" provides field experiences and opportunities for hands-on discovery working beside experts in the field…
HARD SPRING WHEAT TECHNICAL COMMITTEE 2007 CROP
Technology Transfer Automated Retrieval System (TEKTRAN)
Twelve experimental lines of hard spring wheat were grown at up to five locations in 2007 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Samples of wheat were submitted through the Wheat Quality Council and processed and milled at the USDA Hard Red Spri...
Hard Spring Wheat Technical Committee 2009 Crop
Technology Transfer Automated Retrieval System (TEKTRAN)
Thirteen hard spring wheat lines that were developed by breeders throughout the spring wheat region of the U. S. were grown at up to five locations in 2009 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Samples of wheat were milled at the USDA Hard Red ...
Hard Spring Wheat Technical Committee, 2008 Crop.
Technology Transfer Automated Retrieval System (TEKTRAN)
Eleven hard spring wheat lines that were developed by breeders throughout the spring wheat region of the U. S. were grown at up to five locations in 2008 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Samples of wheat were milled at the USDA Hard Red Sp...
Integrated optimization of nonlinear R/C frames with reliability constraints
NASA Technical Reports Server (NTRS)
Soeiro, Alfredo; Hoit, Marc
1989-01-01
A structural optimization algorithm was developed that includes global displacements as decision variables. The algorithm was applied to planar reinforced concrete frames with nonlinear material behavior subjected to static loading. The flexural performance of the elements was evaluated as a function of the actual stress-strain diagrams of the materials. Formation of rotational hinges with strain hardening was allowed, and the equilibrium constraints were updated accordingly. The adequacy of the frames was guaranteed by imposing as constraints required reliability indices for the members, maximum global displacements for the structure, and a maximum system probability of failure.
Hardness methods for testing maize kernels.
Fox, Glen; Manley, Marena
2009-07-01
Maize is a highly important crop in many countries around the world, both through the sale of the maize crop to domestic processors for the production of maize products and as a staple food on subsistence farms in developing countries. In many countries, there have been long-term research efforts to develop a suitable hardness method that could assist the maize industry in improving efficiency in processing as well as possibly providing a quality specification for maize growers, which could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally, as well as important biochemical aspects of maize that contribute to maize hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis of hardness and measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step, using multiple sieve sizes. This would allow the range in hardness within a sample as well as average particle size and/or coarse/fine ratio to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins). Both the content and composition of the zein fractions affect
Hardness Evolution of Gamma-Irradiated Polyoxymethylene
NASA Astrophysics Data System (ADS)
Hung, Chuan-Hao; Harmon, Julie P.; Lee, Sanboh
2016-04-01
This study focuses on analyzing hardness evolution in gamma-irradiated polyoxymethylene (POM) exposed to elevated temperatures after irradiation. Hardness increases with increasing annealing temperature and time, but decreases with increasing gamma ray dose. Hardness changes are attributed to defects generated in the microstructure and molecular structure. Gamma irradiation causes a decrease in the glass transition temperature, melting point, and extent of crystallinity. The kinetics of defects resulting in hardness changes follow a first-order structure relaxation. The rate constant adheres to an Arrhenius equation, and the corresponding activation energy decreases with increasing dose due to chain scission during gamma irradiation. The structure relaxation of POM has a lower energy barrier in crystalline regions than in amorphous ones. The hardness evolution in POM is an endothermic process due to the semi-crystalline nature of this polymer.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines the particle swarm optimization (PSO), genetic algorithm (GA), and tabu search (TS) algorithms, together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and the tabu search algorithm is improved by adding a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters; this is the theoretical rationale for the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard benchmark sequences: Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms single algorithms on the accuracy of the calculated protein sequence energy value, proving it an effective way to predict the structure of proteins. PMID:25069136
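Of the three combined components, the PSO ingredient with an added disturbance term is the easiest to sketch. This is a generic reading of the "stochastic disturbance" improvement; the coefficients and the sphere test function are assumptions, not the authors' exact operators:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, seed=3):
    """Bare-bones particle swarm optimizer with a small Gaussian
    disturbance added to each velocity update to help escape
    local minima (coefficients are arbitrary choices)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal bests
    g = min(P, key=f)[:]                      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]                       # inertia
                           + 1.5 * r1 * (P[i][d] - X[i][d])    # cognitive
                           + 1.5 * r2 * (g[d] - X[i][d])       # social
                           + 0.01 * rng.gauss(0.0, 1.0))       # disturbance
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, dim=3, bounds=(-5.0, 5.0))
```

In the paper's hybrid, the swarm's output would seed the GA population, with tabu search refining the final candidates.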
A prescription of Winograd's discrete Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Zohar, S.
1979-01-01
A detailed and complete description of Winograd's discrete Fourier transform (DFT) algorithm is presented, omitting all proofs and derivations. The algorithm begins by transferring data from the input vector array to the working array where the actual transformation takes place; this transfer, and the corresponding transfer of results back out, are known as input scrambling and output unscrambling. A third array holds constants required in the transformation stage, which are evaluated in the precomputation stage. The algorithm is made up of several FORTRAN subroutines, which should not be confused with a practical software implementation since they are designed for clarity rather than speed.
NASA Astrophysics Data System (ADS)
Yedidia, Jonathan S.
2011-11-01
Message-passing algorithms can solve a wide variety of optimization, inference, and constraint satisfaction problems. The algorithms operate on factor graphs that visually represent and specify the structure of the problems. After describing some of their applications, I survey the family of belief propagation (BP) algorithms, beginning with a detailed description of the min-sum algorithm and its exactness on tree factor graphs, and then turning to a variety of more sophisticated BP algorithms, including free-energy based BP algorithms, "splitting" BP algorithms that generalize "tree-reweighted" BP, and the various BP algorithms that have been proposed to deal with problems with continuous variables. The Divide and Concur (DC) algorithm is a projection-based constraint satisfaction algorithm that deals naturally with continuous variables, and converges to exact answers for problems where the solution sets of the constraints are convex. I show how it exploits the "difference-map" dynamics to avoid traps that cause more naive alternating projection algorithms to fail for non-convex problems, and explain that it is a message-passing algorithm that can also be applied to optimization problems. The BP and DC algorithms are compared, both in terms of their fundamental justifications and their strengths and weaknesses.
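The min-sum algorithm's exactness on tree factor graphs, mentioned above, is easy to check on a small chain, where the backward messages reduce to a dynamic program. A minimal sketch with invented costs, verified against brute-force enumeration:

```python
import itertools

def min_sum_chain(unary, pairwise):
    """Min-sum belief propagation on a chain (a tree), where it is exact.
    unary[i][s]: cost of node i in state s; pairwise[i][s][t]: cost of
    edge (i, i+1) in states (s, t).  Returns a minimizing assignment."""
    n, k = len(unary), len(unary[0])
    # backward messages m[i][s]: min cost of nodes i+1..n-1 given node i = s
    m = [[0.0] * k for _ in range(n)]
    for i in range(n - 2, -1, -1):
        for s in range(k):
            m[i][s] = min(pairwise[i][s][t] + unary[i + 1][t] + m[i + 1][t]
                          for t in range(k))
    # forward pass: trace back the minimizing states one node at a time
    assign = [min(range(k), key=lambda s: unary[0][s] + m[0][s])]
    for i in range(1, n):
        prev = assign[-1]
        assign.append(min(range(k),
                          key=lambda t: pairwise[i - 1][prev][t] + unary[i][t] + m[i][t]))
    return assign

# small Ising-like chain: agreement across edges is cheap
unary = [[0.0, 1.0], [0.5, 0.0], [0.0, 0.8], [1.2, 0.0]]
pair = [[[0.0, 1.0], [1.0, 0.0]]] * 3
bp = min_sum_chain(unary, pair)
cost = lambda a: (sum(unary[i][a[i]] for i in range(4))
                  + sum(pair[i][a[i]][a[i + 1]] for i in range(3)))
brute = min(itertools.product(range(2), repeat=4), key=cost)
```

On graphs with cycles, the same message updates are only approximate, which is what motivates the more sophisticated BP variants the talk surveys.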
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning
Multiple-constraints neural network solution for edge-pixel-based stereo correspondence problem
NASA Astrophysics Data System (ADS)
Hu, Joe-E.; Siy, Pepe
1993-03-01
This paper describes a fast and robust artificial neural network algorithm for solving the stereo correspondence problem in binocular vision. In this algorithm, the stereo correspondence problem is modelled as a cost minimization problem where the cost is the value of the matching function between the edge pixels along the same epipolar line. A multiple-constraint energy minimization neural network is implemented for this matching process. This algorithm differs from previous works in that it integrates ordering and geometry constraints in addition to uniqueness, continuity, and epipolar line constraint into a neural network implementation. The processing procedures are similar to that of the human vision processes. The edge pixels are divided into different clusters according to their orientation and contrast polarity. The matching is performed only between the edge pixels in the same clusters and at the same epipolar line. By following the epipolar line, the ordering constraint (the left-right relation between pixels) can be specified easily without building extra relational graphs as in the earlier works. The algorithm thus assigns artificial neurons which follow the same order of the pixels along an epipolar line to represent the matching candidate pairs. The algorithm is discussed in detail and experimental results using real images are presented.
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
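For reference, the Levinson algorithm used for comparison solves a symmetric positive definite Toeplitz system in O(n²) rather than O(n³). A compact sketch of the standard textbook recursion (not the paper's implementation), exploiting the fact that for a symmetric Toeplitz matrix the backward vector is the reversed forward vector:

```python
def levinson(t, y):
    """Solve T x = y where T is symmetric positive definite Toeplitz
    with first column t.  Maintains the forward vector f (T_k f = e_1);
    by symmetry the backward vector is f reversed."""
    n = len(t)
    f = [1.0 / t[0]]
    x = [y[0] / t[0]]
    for k in range(1, n):
        # prediction errors of the zero-extended forward vector and solution
        eps = sum(t[k - i] * f[i] for i in range(k))
        eta = sum(t[k - i] * x[i] for i in range(k))
        scale = 1.0 / (1.0 - eps * eps)
        f = [scale * ((f[i] if i < k else 0.0) - eps * (f[k - i] if i > 0 else 0.0))
             for i in range(k + 1)]
        b = f[::-1]                      # backward vector of the enlarged system
        x = [(x[i] if i < k else 0.0) + (y[k] - eta) * b[i] for i in range(k + 1)]
    return x

# 3x3 diagonally dominant (hence SPD) Toeplitz test system
t_col = [4.0, 1.0, 0.5]
y_rhs = [1.0, 2.0, 3.0]
x_sol = levinson(t_col, y_rhs)
```

The Bareiss algorithm reaches the same solution by a different elimination scheme, and the paper's point is that its rounding-error behavior is better.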
Contextual Constraints on Adolescents' Leisure.
ERIC Educational Resources Information Center
Silbereisen, Rainer K.
2003-01-01
Interlinks crucial cultural themes emerging from preceding chapters, highlighting the contextual constraints in adolescents' use of free time. Draws parallels across the nations discussed on issues related to how school molds leisure time, the balance of passive versus active leisure, timing of leisure pursuits, and the cumulative effect of…
Constraint elimination in dynamical systems
NASA Technical Reports Server (NTRS)
Singh, R. P.; Likins, P. W.
1989-01-01
Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
Constraints on galaxy formation theories
NASA Technical Reports Server (NTRS)
Szalay, A. S.
1986-01-01
The present theories of galaxy formation are reviewed. The relation between peculiar velocities, temperature fluctuations of the microwave background, and the correlation function of galaxies points to the possibility that galaxies do not form uniformly everywhere. The velocity data provide strong constraints on the theories even in the case when light does not trace the mass of the universe.
Perceptual Constraints in Phonotactic Learning
ERIC Educational Resources Information Center
Endress, Ansgar D.; Mehler, Jacques
2010-01-01
Structural regularities in language have often been attributed to symbolic or statistical general purpose computations, whereas perceptual factors influencing such generalizations have received less interest. Here, we use phonotactic-like constraints as a case study to ask whether the structural properties of specific perceptual and memory…
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology and is essential for material abundance analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation. These are difficult to program and especially hard to realize in hardware, and the computational cost of the algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector for each endmember spectrum via the Gram-Schmidt process. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance is obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is demonstrated through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity is shown to be the lowest of the three. Finally, experimental results on a synthetic image and a real image are provided, giving further evidence of the effectiveness of the method. PMID:26964231
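A rough sketch of the projection idea described above, assuming the standard OSP-style abundance formula in which each endmember is orthogonalized against the others purely by repeated vector operations (the authors' actual implementation may differ; all names are illustrative):

```python
import numpy as np

def _residual(target, others):
    """Component of `target` orthogonal to span(others), built by
    Gram-Schmidt vector operations only (no matrix inversion)."""
    basis = []
    for v in others:
        w = np.array(v, dtype=float)
        for u in basis:
            w -= (w @ u) * u          # remove components already spanned
        nrm = np.linalg.norm(w)
        if nrm > 1e-12:
            basis.append(w / nrm)
    q = np.array(target, dtype=float)
    for u in basis:
        q -= (q @ u) * u
    return q

def ovp_abundances(M, x):
    """Unconstrained abundances by orthogonal vector projection.
    M is (bands, endmembers); x is one pixel spectrum. The abundance of
    endmember i is the ratio of the pixel's projection onto the i-th
    orthogonal vector to that endmember's own projection onto it."""
    p = M.shape[1]
    a = np.zeros(p)
    for i in range(p):
        q = _residual(M[:, i], [M[:, j] for j in range(p) if j != i])
        a[i] = (q @ x) / (q @ M[:, i])
    return a
```

Because each orthogonal vector annihilates every other endmember, a noise-free mixed pixel x = M a is unmixed exactly.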
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
Loop Closing Detection in RGB-D SLAM Combining Appearance and Geometric Constraints.
Zhang, Heng; Liu, Yanli; Tan, Jindong
2015-01-01
A multi-feature-point matching algorithm fusing local geometric constraints is proposed for fast loop closing detection in RGB-D Simultaneous Localization and Mapping (SLAM). The visual feature is encoded with BRAND (binary robust appearance and normals descriptor), which efficiently combines appearance and geometric shape information from RGB-D images. Furthermore, the feature descriptors are stored using the Locality-Sensitive-Hashing (LSH) technique, and hierarchical clustering trees are used to search for these binary features. Finally, an algorithm for matching multiple feature points under local geometric constraints is provided, which can effectively reject possible false closure hypotheses. We demonstrate the efficiency of our algorithms by real-time RGB-D SLAM with loop closing detection on indoor image sequences taken with a handheld Kinect camera, and by comparative experiments against other algorithms in RTAB-Map on a benchmark dataset. PMID:26102492
System engineering approach to GPM retrieval algorithms
Rose, C. R.; Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength SRT-based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the N0 and D0
Constraints on cosmic distance duality relation from cosmological observations
NASA Astrophysics Data System (ADS)
Lv, Meng-Zhen; Xia, Jun-Qing
2016-09-01
In this paper, we use a model-dependent method to revisit the constraints on the well-known cosmic distance duality relation (CDDR). Using the latest SNIa samples, such as Union2.1, JLA and SNLS, we find that the SNIa data alone cannot constrain the cosmic opacity parameter ε, which denotes the deviation from the CDDR, dL = dA(1 + z)^(2 + ε), very well. The constraining power on ε from the luminosity distance indicator provided by SNIa and GRB can hardly be improved at present. When we include other cosmological observations, such as measurements of the Hubble parameter, the baryon acoustic oscillations and the distance information from the cosmic microwave background, we obtain the tightest constraint on the cosmic opacity parameter ε, namely the 68% C.L. limit ε = 0.023 ± 0.018. Furthermore, we also consider the evolution of ε as a function of z using two methods, parametrization and principal component analysis, and find no evidence for a deviation from zero. Finally, we simulate future SNIa and Hubble measurements and find that the mock data could give a very tight constraint on the cosmic opacity ε and verify the CDDR at high significance.
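Under the parametrized relation dL = dA(1 + z)^(2 + ε), a single pair of distance measurements at the same redshift determines ε directly; a trivial sketch of that inversion (illustrative only, not the paper's likelihood analysis over full data sets):

```python
import math

def opacity_parameter(d_L, d_A, z):
    """Infer the cosmic opacity parameter epsilon from a luminosity
    distance d_L and an angular diameter distance d_A at redshift z,
    assuming d_L = d_A * (1 + z)**(2 + epsilon)."""
    return math.log(d_L / d_A) / math.log(1.0 + z) - 2.0
```

When the standard duality relation holds exactly, d_L = d_A(1 + z)^2 and the function returns zero.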
Constraint-Led Changes in Internal Variability in Running
Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich
2012-01-01
We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of the two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36% to 74%) were observed during tube running, whereas running without tubes after the tube running block showed no significant differences. The results show that elastic tubes affect variability at the muscular level despite constant environmental conditions, and they underline the nervous system's ability to adapt to somewhat unpredictable constraints, since stride duration was unaltered. Key points: The elastic constraints led to an increase in iEMG variability but left stride duration variability unaltered. Runners adapted to the elastic cords, evident in a decrease of iEMG variability over time towards normal running. Hardly any after-effects were observed in the iEMG analyses when comparing normal running after the constrained running block to normal running. PMID:24149117
NASA Astrophysics Data System (ADS)
Afzalirad, Mojtaba; Rezaeian, Javad
2016-04-01
This study involves an unrelated parallel machine scheduling problem in which sequence-dependent set-up times, different release dates, machine eligibility and precedence constraints are considered to minimize total late works. A new mixed-integer programming model is presented and two efficient hybrid meta-heuristics, genetic algorithm and ant colony optimization, combined with the acceptance strategy of the simulated annealing algorithm (Metropolis acceptance rule), are proposed to solve this problem. Manifestly, the precedence constraints greatly increase the complexity of the scheduling problem to generate feasible solutions, especially in a parallel machine environment. In this research, a new corrective algorithm is proposed to obtain the feasibility in all stages of the algorithms. The performance of the proposed algorithms is evaluated in numerical examples. The results indicate that the suggested hybrid ant colony optimization statistically outperformed the proposed hybrid genetic algorithm in solving large-size test problems.
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in applying Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, since the unconstrained Kalman filter is already theoretically optimal, the incorporation of inequality constraints poses some risk to the estimation accuracy. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of the measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
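The general idea of constraining a Kalman estimate can be sketched as a standard predict/update cycle followed by projecting the state onto box limits (plain clipping here; the paper's contribution, the residual-based tuning of how strongly to enforce the constraints, is omitted, and all names are illustrative):

```python
import numpy as np

def constrained_kf_step(x, P, z, A, H, Q, R, lo, hi):
    """One Kalman predict/update cycle, then projection of the state
    estimate onto box inequality constraints lo <= x <= hi."""
    # predict
    x = A @ x
    P = A @ P @ A.T + Q
    # measurement update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    # enforce the state inequality constraints (simplest projection)
    return np.clip(x, lo, hi), P
```

In the tuned scheme described by the paper, this hard projection would be softened or skipped whenever the measurement residuals agree with their theoretical statistics.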
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
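The report poses the allocation as semidefinite programs; as a much simpler stand-in for the same box-constrained least-squares objective, projected gradient descent can be sketched (names, step size, and iteration count are illustrative, not from the report):

```python
import numpy as np

def allocate(B, c, lo, hi, iters=500):
    """Minimize ||B u - c||^2 subject to lo <= u <= hi by projected
    gradient descent. B maps actuator commands u to total force/torque;
    c is the commanded total; lo/hi are actuator limits."""
    u = np.zeros(B.shape[1])
    step = 1.0 / np.linalg.norm(B, 2) ** 2   # 1/L for the quadratic's gradient
    for _ in range(iters):
        grad = B.T @ (B @ u - c)
        u = np.clip(u - step * grad, lo, hi)  # gradient step, then project
    return u
```

For saturating commands the solution simply pins the offending actuators at their limits, which is the qualitative behavior the report's lower/upper force and torque constraints enforce.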
Srinivasan, Gautham; Srinivas, Chakravarthi Rangachari; Mathew, Anil C; Duraiswami, Divakar
2013-01-01
Background: Hardness of water is determined by the amount of salts (calcium carbonate [CaCO3] and magnesium sulphate [MgSO4]) present in water. The hardness of the water used for washing hair may cause fragility of hair. Objective: The objective of the following study is to compare the tensile strength and elasticity of hair treated in hard water and hair treated in distilled water. Materials and Methods: 10-15 strands of hair of length 15-20 cm, lost during combing were obtained from 15 volunteers. Each sample was cut in the middle to obtain 2 sets of hair per volunteer. One set of 15 samples was immersed in hard water and the other set in distilled water for 10 min on alternate days. Procedure was repeated for 30 days. The tensile strength and elasticity of the hair treated in hard water and distilled water was determined using INSTRON universal strength tester. Results: The CaCO3 and MgSO4 content of hard water and distilled water were determined as 212.5 ppm of CaCO3 and 10 ppm of CaCO3 respectively. The tensile strength and elasticity in each sample was determined and the mean values were compared using t-test. The mean (SD) of tensile strength of hair treated in hard water was 105.28 (27.59) and in distilled water was 103.66 (20.92). No statistical significance was observed in the tensile strength, t = 0.181, P = 0.858. The mean (SD) of elasticity of hair treated in hard water was 37.06 (2.24) and in distilled water was 36.84 (4.8). No statistical significance was observed in the elasticity, t = 0.161, P = 0.874. Conclusion: The hardness of water does not interfere with the tensile strength and elasticity of hair. PMID:24574692
Temporal and spectral characteristics of solar flare hard X-ray emission
NASA Technical Reports Server (NTRS)
Dennis, B. R.; Kiplinger, A. L.; Orwig, L. E.; Frost, K. J.
1985-01-01
Solar Maximum Mission observations of three flares that impose stringent constraints on physical models of hard X-ray production during the impulsive phase are presented. Hard X-ray imaging observations of the flare on 1980 November 5 at 22:33 UT show two patches in the 16 to 30 keV images that are separated by 70,000 km and that brighten simultaneously to within 5 s. Observations of O V from one of the footpoints show simultaneity of the brightening in this transition zone line and in the total hard X-ray flux to within a second or two. These results suggest, but do not require, the existence of electron beams in this flare. The rapid fluctuations of the hard X-ray flux within some flares on time scales of 1 s also provide evidence for electron beams and limits on the time scale of the energy release mechanism. Observations of a flare on 1980 June 6 at 22:34 UT show variations in the 28 keV X-ray counting rate from one 20 ms interval to the next over a period of 10 s. The hard X-ray spectral variations measured with 128 ms time resolution for one 0.5 s spike during this flare are consistent with the predictions of the thick-target non-thermal beam model.
Gap Detection for Genome-Scale Constraint-Based Models
Brooks, J. Paul; Burns, William P.; Fong, Stephen S.; Gowen, Chris M.; Roberts, Seth B.
2012-01-01
Constraint-based metabolic models are currently the most comprehensive system-wide models of cellular metabolism. Several challenges arise when building an in silico constraint-based model of an organism that need to be addressed before flux balance analysis (FBA) can be applied for simulations. An algorithm called FBA-Gap is presented here that aids the construction of a working model based on plausible modifications to a given list of reactions that are known to occur in the organism. When applied to a working model, the algorithm gives a hypothesis concerning a minimal medium for sustaining the cell in culture. The utility of the algorithm is demonstrated in creating a new model organism and is applied to four existing working models for generating hypotheses about culture media. In modifying a partial metabolic reconstruction so that biomass may be produced using FBA, the proposed method is more efficient than a previously proposed method in that fewer new reactions are added to complete the model. The proposed method is also more accurate than other approaches in that only biologically plausible reactions and exchange reactions are used. PMID:22997515
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem. The challenge of solving it grows as the complexity of preferences, the presence of real-world constraints, and the problem size increase. This study focuses on solving a chambering student-case assignment problem, which is classified as a project assignment problem, using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach has the advantage of returning a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. Law graduates must complete chambering before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. This study employed a minimum-cost greedy heuristic to construct a feasible initial solution. The search then proceeds with a simulated annealing algorithm for further improvement of the solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improved the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem with metaheuristic techniques.
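The two-stage scheme described above (greedy construction followed by simulated annealing with Metropolis acceptance) can be sketched as follows, using the maximum student workload as a stand-in objective since the paper's exact cost function is not given; all names and parameters are illustrative:

```python
import math
import random

def sa_assign(times, n_students, T0=10.0, cooling=0.95, iters=2000, seed=0):
    """Assign cases to students: greedy initial solution, then simulated
    annealing with single-case reassignment moves and geometric cooling."""
    rng = random.Random(seed)
    n = len(times)

    def cost(a):
        loads = [0.0] * n_students
        for c, s in enumerate(a):
            loads[s] += times[c]
        return max(loads)

    # greedy construction: longest case to the least-loaded student
    assign = [0] * n
    loads = [0.0] * n_students
    for c in sorted(range(n), key=lambda c: -times[c]):
        s = loads.index(min(loads))
        assign[c] = s
        loads[s] += times[c]

    cur, cur_cost = assign[:], cost(assign)
    best, best_cost = cur[:], cur_cost
    T = T0
    for _ in range(iters):
        cand = cur[:]
        cand[rng.randrange(n)] = rng.randrange(n_students)  # move one case
        d = cost(cand) - cur_cost
        if d <= 0 or rng.random() < math.exp(-d / T):       # Metropolis rule
            cur, cur_cost = cand, cur_cost + d
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        T *= cooling
    return best, best_cost
```

Worsening moves are accepted with probability exp(-d/T), so early in the run the search can escape the greedy solution's basin, while late in the run it behaves like local descent.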
An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data
NASA Technical Reports Server (NTRS)
Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.
2003-01-01
An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
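The hysteresis constraint can be sketched as a small state machine that declares a transition only after k consecutive samples of the new in/out state (k and the function name are illustrative; the actual algorithm operates on fused cloud-physics and radar measurements):

```python
def detect_edges(in_cloud, k=3):
    """Return indices where the filtered in/out-of-cloud state changes.
    A transition is declared only after k consecutive samples of the new
    state, so isolated transient puffs or data dropouts are ignored."""
    state = in_cloud[0]
    run = 0                    # length of the current run of the opposite state
    edges = []
    for i, s in enumerate(in_cloud):
        if s != state:
            run += 1
            if run >= k:
                state = s
                edges.append(i - k + 1)   # the transition began k samples ago
                run = 0
        else:
            run = 0
    return edges
```

A single-sample excursion never reaches the k-sample threshold, which is exactly the behavior the abstract attributes to the hysteresis constraint.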
Analysis of Hard Thin Film Coating
NASA Technical Reports Server (NTRS)
Shen, Dashen
1998-01-01
MSFC is interested in developing hard thin film coatings for bearings. Bearing wear is an important problem for space flight engines. A hard thin film coating can drastically improve the surface of the bearing and improve its wear endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach is to use electron cyclotron resonance chemical vapor deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.
Prescription drug laws: justified hard paternalism.
Rainbolt, George W
1989-01-01
Prescription drug laws are justified as examples of permissible hard paternalism and not as soft paternalism, which is morally legitimated by the defective cognitive or affective state of the individual on whose behalf the action is performed. Other examples of hard paternalism are considered, along with two strategies for determining the limits of paternalism. It is concluded that instances of permissible hard paternalism exist and that the only acceptable strategy is to balance harm and benefit on a case-by-case basis. PMID:11650113
Theory of hard diffraction and rapidity gaps
Del Duca, V.
1996-02-01
In this talk we review the models describing the hard diffractive production of jets, or more generally of high-mass states, in the presence of rapidity gaps in hadron-hadron and lepton-hadron collisions. By rapidity gaps we mean regions on the lego plot in (pseudo)rapidity and azimuthal angle where no hadrons are produced, between the jet(s) and an elastically scattered hadron (single hard diffraction) or between two jets (double hard diffraction). © 1996 American Institute of Physics.
Stress constraints in optimality criteria design
NASA Technical Reports Server (NTRS)
Levy, R.
1982-01-01
Procedures described emphasize the processing of stress constraints within optimality criteria designs for low structural weight with stress and compliance constraints. Prescreening criteria are used to partition stress constraints into either potentially active primary sets or passive secondary sets that require minimal processing. Side constraint boundaries for passive constraints are derived by projections from design histories to modify conventional stress-ratio boundaries. Other procedures described apply partial structural modification reanalysis to design variable groups to correct stress constraint violations of unfeasible designs. Sample problem results show effective design convergence and, in particular, advantages for reanalysis in obtaining lower feasible design weights.
Exploring stochasticity and imprecise knowledge based on linear inequality constraints.
Subbey, Sam; Planque, Benjamin; Lindstrøm, Ulf
2016-09-01
This paper explores the stochastic dynamics of a simple foodweb system using a network model that mimics interacting species in a biosystem. It is shown that the system can be described by a set of ordinary differential equations with real-valued uncertain parameters, which satisfy a set of linear inequality constraints. The constraints restrict the solution space to a bounded convex polytope. We present results from numerical experiments to show how the stochasticity and uncertainty characterizing the system can be captured by sampling the interior of the polytope with a prescribed probability rule, using the Hit-and-Run algorithm. The examples illustrate a parsimonious approach to modeling complex biosystems under vague knowledge. PMID:26746217
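The Hit-and-Run sampler referenced above can be sketched in a few lines for a polytope {x : Ax <= b}: from the current interior point, pick a random direction, intersect the line with every constraint to obtain the feasible chord, then jump to a uniform point on that chord (a generic sketch, not the authors' code):

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, seed=0):
    """Sample the interior of the polytope {x : A x <= b} by Hit-and-Run,
    starting from a strictly interior point x0."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    out = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)            # uniform random direction
        # feasibility of x + t d: per row, t * (A d) <= b - A x
        ad = A @ d
        slack = b - A @ x
        t_lo, t_hi = -np.inf, np.inf
        for adi, si in zip(ad, slack):
            if adi > 1e-12:
                t_hi = min(t_hi, si / adi)
            elif adi < -1e-12:
                t_lo = max(t_lo, si / adi)
        x = x + rng.uniform(t_lo, t_hi) * d   # uniform point on the chord
        out.append(x.copy())
    return np.array(out)
```

Because each move stays on a chord of the polytope, every sample automatically satisfies all the linear inequality constraints, which is what makes the method attractive for the bounded convex parameter sets described in the paper.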
GA-Based Image Restoration by Isophote Constraint Optimization
NASA Astrophysics Data System (ADS)
Kim, Jong Bae; Kim, Hang Joon
2003-12-01
We propose an efficient technique for image restoration based on a genetic algorithm (GA) with an isophote constraint. In our technique, the image restoration problem is modeled as an optimization problem: a cost function with an isophote constraint is minimized using a GA. We consider an image to be decomposed into isophotes based on connected components of constant intensity. The technique creates an optimal connection of all pairs of isophotes disconnected by a caption in the frame. For connecting the disconnected isophotes, we estimate the smoothness value given by the best chromosomes of the GA and project this value in the isophote direction. Experimental results show great potential for automatic restoration of regions in advertisement scenes.
Finite element solution of optimal control problems with inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1990-01-01
A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
A robust Feasible Directions algorithm for design synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1983-01-01
A nonlinear optimization algorithm is developed which combines the best features of the Method of Feasible Directions and the Generalized Reduced Gradient Method. This algorithm utilizes the direction-finding sub-problem from the Method of Feasible Directions to find a search direction which is equivalent to that of the Generalized Reduced Gradient Method, but does not require the addition of a large number of slack variables associated with inequality constraints. This method provides a core-efficient algorithm for the solution of optimization problems with a large number of inequality constraints. Further optimization efficiency is derived by introducing the concept of infrequent gradient calculations. In addition, it is found that the sensitivity of the optimum design to changes in the problem parameters can be obtained using this method without the need for second derivatives or Lagrange multipliers. A numerical example is given in order to demonstrate the efficiency of the algorithm and the sensitivity analysis.
NASA Astrophysics Data System (ADS)
Zhuang, Fang-Fang; Wang, Qi
2014-06-01
An approach is proposed for the modeling and analysis of rigid multibody systems with frictional translational joints and driving constraints. The geometric constraints of translational joints with small clearance are treated as bilateral constraints by neglecting the impacts between sliders and guides. First, the normal forces acting on the sliders, the driving constraint forces (or moments) and the constraint forces of smooth revolute joints are all described by complementarity conditions, and the frictional contacts are characterized by a set-valued force law of Coulomb's dry friction. Combined with the theory of the horizontal linear complementarity problem (HLCP), an event-driven scheme is used to detect transitions of the contact situation between sliders and guides, and stick-slip transitions of the sliders; all constraint forces in the system can then be computed easily. Second, the dynamic equations of the multibody system are written at the acceleration-force level using the Lagrange multiplier technique, and the Baumgarte stabilization method is used to reduce constraint drift. Finally, a numerical example is given to show some non-smooth dynamical behaviors of the studied system. The obtained results validate the feasibility of the algorithm and the effect of constraint stabilization.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.
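The core task the abstract describes, assigning crews to jobs while honoring precedence and resource limits, can be sketched without any GA machinery. The following is a minimal illustration, not the Boeing system: a greedy list scheduler that gives each job the earliest start compatible with its predecessors and with the availability of a fixed pool of identical crews. All job names and durations are invented.

```python
# Greedy precedence-respecting scheduler (illustrative sketch only).
import heapq

def schedule(jobs, precedes, crews):
    """jobs: {name: duration}; precedes: list of (before, after) pairs;
    crews: number of identical crews. Returns {name: (start_time, crew_id)}."""
    preds = {j: set() for j in jobs}
    succs = {j: set() for j in jobs}
    for a, b in precedes:
        preds[b].add(a)
        succs[a].add(b)
    finish = {}                                  # job -> finish time
    out = {}
    free = [(0.0, c) for c in range(crews)]      # (time crew is free, crew id)
    heapq.heapify(free)
    ready = [j for j in jobs if not preds[j]]
    done = set()
    while ready:
        # pick the ready job whose predecessors finish earliest (greedy rule)
        j = min(ready, key=lambda x: max((finish[p] for p in preds[x]), default=0.0))
        ready.remove(j)
        t_free, crew = heapq.heappop(free)
        start = max(t_free, max((finish[p] for p in preds[j]), default=0.0))
        finish[j] = start + jobs[j]
        out[j] = (start, crew)
        heapq.heappush(free, (finish[j], crew))
        done.add(j)
        for s in succs[j]:                       # release jobs whose preds are done
            if preds[s] <= done and s not in ready and s not in out:
                ready.append(s)
    return out
```

A GA, as in the abstract, would instead search over job orderings and use a decoder like this to evaluate each ordering's fitness (e.g., makespan).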
Novel hard compositions and methods of preparation
Sheinberg, H.
1981-02-03
Novel very hard compositions of matter are prepared by using in all embodiments only a minor amount of a particular carbide (or materials which can form the carbide in situ when subjected to heat and pressure); and no strategic cobalt is needed. Under a particular range of conditions, densified compositions of matter of the invention are prepared having hardnesses on the Rockwell A test substantially equal to the hardness of pure tungsten carbide and to two of the hardest commercial cobalt-bonded tungsten carbides. Alternately, other compositions of the invention which have slightly lower hardnesses than those described above in one embodiment also possess the advantage of requiring no tungsten and in another embodiment possess the advantage of having a good fracture toughness value.
Automated radiation hard ASIC design tool
NASA Technical Reports Server (NTRS)
White, Mike; Bartholet, Bill; Baze, Mark
1993-01-01
A commercial based, foundry independent, compiler design tool (ChipCrafter) with custom radiation hardened library cells is described. A unique analysis approach allows low hardness risk for Application Specific IC's (ASIC's). Accomplishments, radiation test results, and applications are described.
Gray, C
1998-10-20
Federal Minister of Health Allan Rock appears committed to improved funding for the health care system, but this may be a hard sell in cabinet. He outlined his views during the CMA's recent annual meeting in Whitehorse. PMID:9834729
Financial Incentives for Staffing Hard Places.
ERIC Educational Resources Information Center
Prince, Cynthia D.
2002-01-01
Describes examples of financial incentives used to recruit teachers for low-achieving and hard-to-staff schools. Includes targeted salary increases, housing incentives, tuition assistance, and tax credits. (PKP)
21 CFR 133.150 - Hard cheeses.
Code of Federal Regulations, 2014 CFR
2014-04-01
... rennet, rennet paste, extract of rennet paste, or other safe and suitable milk-clotting enzyme that... minutes, or for a time and at a temperature equivalent thereto in phosphatase destruction. A hard...
21 CFR 133.150 - Hard cheeses.
Code of Federal Regulations, 2011 CFR
2011-04-01
... rennet, rennet paste, extract of rennet paste, or other safe and suitable milk-clotting enzyme that... minutes, or for a time and at a temperature equivalent thereto in phosphatase destruction. A hard...
21 CFR 133.150 - Hard cheeses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... rennet, rennet paste, extract of rennet paste, or other safe and suitable milk-clotting enzyme that... minutes, or for a time and at a temperature equivalent thereto in phosphatase destruction. A hard...
Macroindentation hardness measurement-Modernization and applications.
Patel, Sarsvat; Sun, Changquan Calvin
2016-06-15
In this study, we first developed a modernized indentation technique for measuring tablet hardness (H). This technique features rapid digital image capture, using a calibrated light microscope, and precise area determination. We then systematically studied the effects of key experimental parameters, including indentation force, speed, and holding time, on the measured hardness of a very soft material, hydroxypropyl cellulose, and a very hard material, dibasic calcium phosphate, to cover a wide range of material properties. Based on the results, a holding period of 3 min at the peak indentation load is recommended to minimize the effect of testing speed on H. Using this method, we show that an exponential decay function well describes the relationship between tablet hardness and porosity for the seven commonly used pharmaceutical powders investigated in this work. We propose that H and its extrapolated value at zero porosity may be used to quantify tablet deformability and powder plasticity, respectively. PMID:27130365
Electronic Teaching: Hard Disks and Networks.
ERIC Educational Resources Information Center
Howe, Samuel F.
1984-01-01
Describes floppy-disk and hard-disk based networks, electronic systems linking microcomputers together for the purpose of sharing peripheral devices, and presents points to remember when shopping for a network. (MBR)
Hard X-ray imaging from Explorer
NASA Technical Reports Server (NTRS)
Grindlay, J. E.; Murray, S. S.
1981-01-01
Coded aperture X-ray detectors were applied to obtain large increases in sensitivity as well as angular resolution. A hard X-ray coded aperture detector concept is described which enables very high sensitivity studies of persistent hard X-ray sources and gamma-ray bursts. Coded aperture imaging is employed so that approx. 2 arcmin source locations can be derived within a 3 deg field of view. Gamma-ray bursts were located initially to within approx. 2 deg, and X-ray/hard X-ray spectra and timing, as well as precise locations, were derived for possible burst afterglow emission. It is suggested that hard X-ray imaging should be conducted from an Explorer mission where long exposure times are possible.
Unitarity constraints on trimaximal mixing
Kumar, Sanjeev
2010-07-01
When the neutrino mass eigenstate {nu}{sub 2} is trimaximally mixed, the mixing matrix is called trimaximal. The middle column of the trimaximal mixing matrix is identical to tribimaximal mixing and the other two columns are subject to unitarity constraints. This corresponds to a mixing matrix with four independent parameters in the most general case. Apart from the two Majorana phases, the mixing matrix has only one free parameter in the CP conserving limit. Trimaximality results in interesting interplay between mixing angles and CP violation. A notion of maximal CP violation naturally emerges here: CP violation is maximal for maximal 2-3 mixing. Similarly, there is a natural constraint on the deviation from maximal 2-3 mixing which takes its maximal value in the CP conserving limit.
Algorithms for Multiple Fault Diagnosis With Unreliable Tests
NASA Technical Reports Server (NTRS)
Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann
1997-01-01
In this paper, we consider the problem of constructing optimal and near-optimal multiple fault diagnosis (MFD) in bipartite systems with unreliable (imperfect) tests. It is known that exact computation of conditional probabilities for multiple fault diagnosis is NP-hard. The novel feature of our diagnostic algorithms is the use of Lagrangian relaxation and subgradient optimization methods to provide: (1) near optimal solutions for the MFD problem, and (2) upper bounds for an optimal branch-and-bound algorithm. The proposed method is illustrated using several examples. Computational results indicate that: (1) our algorithm has superior computational performance to the existing algorithms (approximately three orders of magnitude improvement), (2) the near optimal algorithm generates the most likely candidates with a very high accuracy, and (3) our algorithm can find the most likely candidates in systems with as many as 1000 faults.
Library of Continuation Algorithms
Energy Science and Technology Software Center (ESTSC)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Efficient Controls for Finitely Convergent Sequential Algorithms
Chen, Wei; Herman, Gabor T.
2010-01-01
Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
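The cyclic control the abstract refers to can be illustrated with the classic projection scheme for the linear feasibility problem (find x with a_i . x <= b_i for all i): sweep through the constraints in a fixed order and orthogonally project onto each violated half-space. This is a generic ART-style sketch, not the paper's ART3 variant or its automatic transformation.

```python
# Cyclic-projection sketch for linear feasibility (illustrative, not ART3).
def feasible_point(A, b, x, sweeps=100, tol=1e-9):
    """A: list of constraint rows a_i; b: right-hand sides; x: starting point.
    Returns a point satisfying a_i . x <= b_i for all i, or None."""
    for _ in range(sweeps):
        ok = True
        for a, bi in zip(A, b):                  # cyclic control: fixed order
            r = sum(ai * xi for ai, xi in zip(a, x)) - bi
            if r > tol:                          # constraint violated:
                ok = False                       # project onto its half-space
                nrm2 = sum(ai * ai for ai in a)
                x = [xi - r * ai / nrm2 for xi, ai in zip(x, a)]
        if ok:                                   # a full sweep with no violation
            return x
    return None
```

The paper's contribution is precisely to relax this rigid cyclic order while keeping finite convergence.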
Managing Restaurant Tables using Constraints
NASA Astrophysics Data System (ADS)
Vidotto, Alfio; Brown, Kenneth N.; Beck, J. Christopher
Restaurant table management can have significant impact on both profitability and the customer experience. The core of the issue is a complex dynamic combinatorial problem. We show how to model the problem as constraint satisfaction, with extensions which generate flexible seating plans and which maintain stability when changes occur. We describe an implemented system which provides advice to users in real time. The system is currently being evaluated in a restaurant environment.
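A toy version of the underlying constraint satisfaction problem can be written as a backtracking search: assign each party a distinct table with enough seats. The party and table data are invented for illustration; the deployed system described above is far richer (time windows, flexibility, plan stability).

```python
# Toy CSP backtracking sketch for table assignment (illustrative only).
def assign_tables(parties, tables):
    """parties: {name: size}; tables: {name: capacity}.
    Returns {party: table} or None if no feasible assignment exists."""
    names = sorted(parties, key=lambda p: -parties[p])   # largest parties first
    assignment = {}

    def backtrack(i):
        if i == len(names):
            return True                                  # all parties seated
        p = names[i]
        for t, cap in tables.items():
            if cap >= parties[p] and t not in assignment.values():
                assignment[p] = t
                if backtrack(i + 1):
                    return True
                del assignment[p]                        # undo and try next table
        return False

    return assignment if backtrack(0) else None
```

Seating the largest parties first is a standard variable-ordering heuristic: it fails fast when capacity is tight.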
Macroscopic constraints on string unification
Taylor, T.R.
1989-03-01
The comparison of string theory with experiment requires a huge extrapolation from microscopic distances, of the order of the Planck length, up to macroscopic laboratory distances. Quantum effects give rise to large corrections to the macroscopic predictions of string unification. I discuss the model-independent constraints on the gravitational sector of string theory due to the inevitable existence of universal Fradkin-Tseytlin dilatons. 9 refs.
Breakdown of QCD factorization in hard diffraction
NASA Astrophysics Data System (ADS)
Kopeliovich, B. Z.
2016-07-01
Factorization of short- and long-distance interactions is severely broken in hard diffractive hadronic collisions. Interaction with the spectator partons leads to an interplay between soft and hard scales, which results in a leading-twist behavior of the cross section, contrary to the higher-twist behavior predicted by factorization. This feature is explicitly demonstrated for diffractive radiation of abelian (Drell-Yan, gauge bosons, Higgs) and non-abelian (heavy flavors) particles.
A Novel Approach to Hardness Testing
NASA Technical Reports Server (NTRS)
Spiegel, F. Xavier; West, Harvey A.
1996-01-01
This paper gives a description of the application of a simple rebound time measuring device and relates the determination of relative hardness of a variety of common engineering metals. A relation between rebound time and hardness will be sought. The effect of geometry and surface condition will also be discussed in order to acquaint the student with the problems associated with this type of method.
Laser Ablation of Dental Hard Tissue
Seka, W.; Rechmann, P.; Featherstone, J.D.B.; Fried, D.
2007-07-31
This paper discusses ablation of dental hard tissue using pulsed lasers. It focuses particularly on the relevant tissue and laser parameters and some of the basic ablation processes that are likely to occur. The importance of interstitial water and its phase transitions is discussed in some detail along with the ablation processes that may or may not directly involve water. The interplay between tissue parameters and laser parameters in the outcome of the removal of dental hard tissue is discussed in detail.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
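A minimal real-coded GA shows the machinery the abstract relies on. Note the hedge: the paper converts the constrained necessary conditions themselves into an unconstrained minimization; this sketch uses the simpler and more common exterior-penalty conversion instead, purely to illustrate the GA loop. The test problem (minimize f(x) = x^2 subject to x >= 1) is invented.

```python
# Real-coded GA minimizing a penalized objective (illustrative sketch).
import random

def ga_minimize(fitness, lo, hi, pop_size=40, gens=120, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]             # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b)                # blend crossover
            child += rng.gauss(0, 0.1)           # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=fitness)

def penalized(x, mu=100.0):
    g = 1.0 - x                                  # constraint g(x) <= 0, i.e. x >= 1
    return x * x + mu * max(0.0, g) ** 2         # exterior quadratic penalty

best = ga_minimize(penalized, -5.0, 5.0)
```

With a finite penalty weight mu the unconstrained minimizer sits slightly inside the infeasible side (here near x = 0.99); driving mu up recovers the constrained optimum, which is one reason the paper's necessary-conditions formulation is attractive.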
Optimal reactive planning with security constraints
Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.; Thorp, J.D.; Dunnett, R.M.; Schaff, G.
1995-12-31
The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.
Infrared Constraint on Ultraviolet Theories
Tsai, Yuhsin
2012-08-01
While our current paradigm of particle physics, the Standard Model (SM), has been extremely successful at explaining experiments, it is theoretically incomplete and must be embedded into a larger framework. In this thesis, we review the main motivations for theories beyond the SM (BSM) and the ways such theories can be constrained using low energy physics. The hierarchy problem, neutrino mass, and the existence of dark matter (DM) are the main reasons why the SM is incomplete. Two of the most plausible theories that may solve the hierarchy problem are the Randall-Sundrum (RS) models and supersymmetry (SUSY). RS models usually suffer from strong flavor constraints, while SUSY models produce extra degrees of freedom that need to be hidden from current experiments. To show the importance of infrared (IR) physics constraints, we discuss the flavor bounds on the anarchic RS model in both the lepton and quark sectors. For SUSY models, we discuss the difficulties in obtaining a phenomenologically allowed gaugino mass, its relation to R-symmetry breaking, and how to build a model that avoids this problem. For the neutrino mass problem, we discuss the idea of generating small neutrino masses using compositeness. By requiring successful leptogenesis and the existence of warm dark matter (WDM), we can set various constraints on the hidden composite sector. Finally, to give an example of model independent bounds from collider experiments, we show how to constrain the DM–SM particle interactions using collider results with an effective coupling description.
Analysis of Space Tourism Constraints
NASA Astrophysics Data System (ADS)
Bonnal, Christophe
2002-01-01
Space tourism appears today as a new Eldorado in a relatively near future. Private operators are already proposing services for leisure trips in Low Earth Orbit, and some happy few have even tested them. But are these exceptional events really marking the dawn of a new space age? The constraints associated with space tourism are severe:
- the economical balance of space tourism is tricky; development costs of large manned
- the technical definition of such large vehicles is challenging, mainly when considering
- the physiological aptitude of passengers will have a major impact on the mission
- the orbital environment will also lead to mission constraints on aspects such as radiation,
However, these constraints never appear as show-stoppers and have to be dealt with pragmatically:
- what are the recommendations one can make for future research in the field of space
- which typical roadmap shall one consider to develop realistically this new market?
- what are the synergies with the conventional missions and with the existing infrastructure,
- how can a phased development start soon?
The paper proposes hints aiming at improving the credibility of space tourism and describes the orientations to follow in order to solve the major hurdles found in such an exciting development.
Isocurvature constraints on portal couplings
NASA Astrophysics Data System (ADS)
Kainulainen, Kimmo; Nurmi, Sami; Tenkanen, Tommi; Tuominen, Kimmo; Vaskonen, Ville
2016-06-01
We consider portal models which are ultraweakly coupled with the Standard Model, and confront them with observational constraints on dark matter abundance and isocurvature perturbations. We assume the hidden sector to contain a real singlet scalar s and a sterile neutrino ψ coupled to s via a pseudoscalar Yukawa term. During inflation, a primordial condensate consisting of the singlet scalar s is generated, and its contribution to the isocurvature perturbations is imprinted onto the dark matter abundance. We compute the total dark matter abundance including the contributions from condensate decay and nonthermal production from the Standard Model sector. We then use the Planck limit on isocurvature perturbations to derive a novel constraint connecting the dark matter mass and the singlet self-coupling with the scale of inflation: m_DM/GeV ≲ 0.2 λ_s^{3/8} (H_*/10^{11} GeV)^{-3/2}. This constraint is relevant in most portal models ultraweakly coupled with the Standard Model and containing light singlet scalar fields.
Constraint Based Modeling Going Multicellular
Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas
2016-01-01
Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches. PMID:26904548
Geist, G.A.; Howell, G.W.; Watkins, D.S.
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
Origin of the computational hardness for learning with binary synapses
NASA Astrophysics Data System (ADS)
Huang, Haiping; Kabashima, Yoshiyuki
2014-11-01
Through supervised learning in a binary perceptron one is able to classify an extensive number of random patterns by a proper assignment of binary synaptic weights. However, to find such assignments in practice is quite a nontrivial task. The relation between the weight space structure and the algorithmic hardness has not yet been fully understood. To this end, we analytically derive the Franz-Parisi potential for the binary perceptron problem by starting from an equilibrium solution of weights and exploring the weight space structure around it. Our result reveals the geometrical organization of the weight space; the weight space is composed of isolated solutions, rather than clusters of exponentially many close-by solutions. The pointlike clusters far apart from each other in the weight space explain the previously observed glassy behavior of stochastic local search heuristics.
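The search problem analyzed above is easy to state concretely: find ±1 synaptic weights that classify ±1 patterns, using only single-weight-flip local search. The sketch below is a plain hill climber with random restarts on an invented toy instance; the paper's point is precisely that such stochastic local heuristics behave glassily as the number of patterns grows, because zero-error solutions are isolated.

```python
# Binary-perceptron learning by single-flip local search (toy sketch).
import random

def errors(w, patterns, labels):
    """Count patterns misclassified by sign(w . x)."""
    return sum(1 for x, y in zip(patterns, labels)
               if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0)

def learn(patterns, labels, n, restarts=20, steps=2000, seed=1):
    rng = random.Random(seed)
    for _ in range(restarts):
        w = [rng.choice((-1, 1)) for _ in range(n)]      # random binary weights
        for _ in range(steps):
            e = errors(w, patterns, labels)
            if e == 0:
                return w                                 # zero-error solution found
            i = rng.randrange(n)
            w[i] = -w[i]                                 # propose a single flip
            if errors(w, patterns, labels) > e:
                w[i] = -w[i]                             # reject if strictly worse
        # otherwise restart from a fresh random weight vector
    return None
```

Accepting equal-error moves lets the walk drift across plateaus; at high load this is exactly where the glassy behavior the paper explains sets in.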
Transport coefficients for dense hard-disk systems.
García-Rojo, Ramón; Luding, Stefan; Brey, J Javier
2006-12-01
A study of the transport coefficients of a system of elastic hard disks based on the use of Helfand-Einstein expressions is reported. The self-diffusion, the viscosity, and the heat conductivity are examined with averaging techniques especially appropriate for event-driven molecular dynamics algorithms with periodic boundary conditions. The density and size dependence of the results are analyzed, and comparison with the predictions from Enskog's theory is carried out. In particular, the behavior of the transport coefficients in the vicinity of the fluid-solid transition is investigated and a striking power law divergence of the viscosity with density is obtained in this region, while all other examined transport coefficients show a drop in that density range in relation to the Enskog's prediction. Finally, the deviations are related to shear band instabilities and the concept of dilatancy. PMID:17280060
Classical lattice gauge fields with hard thermal loops
NASA Astrophysics Data System (ADS)
Hu, Chaoran
We design, implement, and test a novel lattice program which is aimed at the study of long-range physics in either an electroweak or a quark-gluon plasma at high temperatures. Our approach starts from a separation of short-range (hard) and long-range (soft) modes. Hard modes are represented as particles, while soft modes are represented as lattice fields. Such a treatment is motivated by the dual classical limits of quantum fields as waves and particles in the infrared and ultraviolet limits, respectively. By including these charged particles, we are able to simulate their influence, known as 'hard thermal loops' (HTL), on the soft modes. Our investigations are based on two sets of coupled differential equations: the Wong equation and the Yang-Mills equation. The former describes the evolution of charged particles in the background of a mean field; the latter is the equation of motion of the mean field. The numerical implementation uses a modified leap-frog algorithm with time-centered evaluations. The validity of our approach is verified by evidence from both analytical calculations and numerical measurements. Extensive tests have been done using the U(1) plasma as a test ground. These include measurements of plasma frequencies, damping rates, dispersion relations, and linear responses. Similar investigations are also performed in the SU(2) case. The results agree very well with those from perturbative calculations. An application where the method developed here has proved successful is the study of Chern-Simons number diffusion, which bears on the baryon number violation responsible for the observed matter-antimatter asymmetry in the Universe. We have measured the diffusion rate and verified a newly proposed scaling law. Other applications, such as the study of energy loss and color diffusion in a quark-gluon plasma, await further development.
LATENT DEMOGRAPHIC PROFILE ESTIMATION IN HARD-TO-REACH GROUPS
McCormick, Tyler H.; Zheng, Tian
2015-01-01
The sampling frame in most social science surveys excludes members of certain groups, known as hard-to-reach groups. These groups, or sub-populations, may be difficult to access (the homeless, e.g.), camouflaged by stigma (individuals with HIV/AIDS), or both (commercial sex workers). Even basic demographic information about these groups is typically unknown, especially in many developing nations. We present statistical models which leverage social network structure to estimate demographic characteristics of these subpopulations using Aggregated relational data (ARD), or questions of the form “How many X’s do you know?” Unlike other network-based techniques for reaching these groups, ARD require no special sampling strategy and are easily incorporated into standard surveys. ARD also do not require respondents to reveal their own group membership. We propose a Bayesian hierarchical model for estimating the demographic characteristics of hard-to-reach groups, or latent demographic profiles, using ARD. We propose two estimation techniques. First, we propose a Markov-chain Monte Carlo algorithm for existing data or cases where the full posterior distribution is of interest. For cases when new data can be collected, we propose guidelines and, based on these guidelines, propose a simple estimate motivated by a missing data approach. Using data from McCarty et al. [Human Organization 60 (2001) 28–39], we estimate the age and gender profiles of six hard-to-reach groups, such as individuals who have HIV, women who were raped, and homeless persons. We also evaluate our simple estimates using simulation studies. PMID:26966475
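The simple precursor of the Bayesian ARD model described above is the network scale-up estimator: counts for groups of known size calibrate each respondent's personal network degree, which then scales up the counts reported for the hidden group. The sketch below implements that back-of-envelope version, not the paper's hierarchical model; all numbers in the test are invented.

```python
# Basic network scale-up estimator from aggregated relational data (sketch).
def scale_up(known_counts, known_sizes, hidden_counts, population):
    """known_counts[i][k]: respondent i's answer to "how many members of
    known group k do you know?"; known_sizes[k]: true size of group k;
    hidden_counts[i]: respondent i's count for the hidden group.
    Returns an estimate of the hidden group's size."""
    frac_known = sum(known_sizes) / population
    # degree estimate: total known-group ties scaled by the fraction of the
    # population those groups cover
    degrees = [sum(row) / frac_known for row in known_counts]
    # hidden-group size: population times (reported hidden ties / total degree)
    return population * sum(hidden_counts) / sum(degrees)
```

The hierarchical model in the paper generalizes this by letting degree and group-specific tie propensities vary across respondents, which is what makes latent demographic profiles recoverable.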
Geomagnetic field models incorporating physical constraints on the secular variation
NASA Technical Reports Server (NTRS)
Constable, Catherine; Parker, Robert L.
1993-01-01
This proposal has been concerned with methods for constructing geomagnetic field models that incorporate physical constraints on the secular variation. The principle goal that has been accomplished is the development of flexible algorithms designed to test whether the frozen flux approximation is adequate to describe the available geomagnetic data and their secular variation throughout this century. These have been applied to geomagnetic data from both the early and middle part of this century and convincingly demonstrate that there is no need to invoke violations of the frozen flux hypothesis in order to satisfy the available geomagnetic data.
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wanlin
2004-01-01
In this paper, we introduce JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint system with a runtime software environment and improving its applicability. We describe how JNET is applied to a real-world problem - NASA's Earth-science data processing domain, and demonstrate how JNET can be extended, without any knowledge of how it is implemented, to meet the growing demands of real-world applications.
Density equalizing map projections: A new algorithm
Merrill, D.W.; Selvin, S.; Mohr, M.S.
1992-02-01
In the study of geographic disease clusters, an alternative to traditional methods based on rates is to analyze case locations on a transformed map in which population density is everywhere equal. Although the analyst's task is thereby simplified, the specification of the density equalizing map projection (DEMP) itself is not simple and continues to be the subject of considerable research. Here a new DEMP algorithm is described, which avoids some of the difficulties of earlier approaches. The new algorithm (a) avoids illegal overlapping of transformed polygons; (b) finds the unique solution that minimizes map distortion; (c) provides constant magnification over each map polygon; (d) defines a continuous transformation over the entire map domain; (e) defines an inverse transformation; (f) can accept optional constraints such as fixed boundaries; and (g) can use commercially supported minimization software. Work is continuing to improve computing efficiency and improve the algorithm.
Genetic Algorithm Approaches for Actuator Placement
NASA Technical Reports Server (NTRS)
Crossley, William A.
2000-01-01
This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft Morphing program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work conducted for this research used a geometrically simple wing model; however, an increasing number of potential actuator placement locations were incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.
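The discrete placement search described in this abstract lends itself to a bitstring encoding, where each bit switches a candidate actuator site on or off. Below is a minimal, hedged sketch of that idea; the fitness function, site weights, budget penalty, and all parameter values are hypothetical stand-ins, not the wing model or objective used in the report.

```python
import random

def genetic_placement(weights, budget, pop=30, gens=60, seed=0):
    """Toy binary GA for discrete actuator placement: each bit switches a
    candidate site on or off; a penalty enforces an actuator-count budget."""
    rng = random.Random(seed)
    n = len(weights)

    def fitness(bits):
        authority = sum(w for w, b in zip(weights, bits) if b)
        excess = max(0, sum(bits) - budget)
        return authority - 10.0 * excess      # hypothetical budget penalty

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        nxt = [max(popn, key=fitness)]        # elitism: keep the best member
        while len(nxt) < pop:
            p1 = max(rng.sample(popn, 2), key=fitness)   # tournament selection
            p2 = max(rng.sample(popn, 2), key=fitness)
            cut = rng.randrange(1, n)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                       # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        popn = nxt
    return max(popn, key=fitness)

# Hypothetical "control authority" weight per candidate site, budget of 3
weights = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4]
best = genetic_placement(weights, budget=3)
```

The penalty weight is chosen large enough that no over-budget layout can beat a feasible one, a common way of handling the placement-count constraint in binary GAs.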
An algorithm for constrained one-step inversion of spectral CT data
NASA Astrophysics Data System (ADS)
Foygel Barber, Rina; Sidky, Emil Y.; Gilat Schmidt, Taly; Pan, Xiaochuan
2016-05-01
We develop a primal-dual algorithm that allows for one-step inversion of spectral CT transmission photon counts data to a basis map decomposition. The algorithm allows for image constraints to be enforced on the basis maps during the inversion. The derivation of the algorithm makes use of a local upper bounding quadratic approximation to generate descent steps for non-convex spectral CT data discrepancy terms, combined with a new convex-concave optimization algorithm. Convergence of the algorithm is demonstrated on simulated spectral CT data. Simulations with noise and anthropomorphic phantoms show examples of how to employ the constrained one-step algorithm for spectral CT data.
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
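The Kreisselmeier-Steinhauser (KS) function is the smooth envelope KS(f) = (1/ρ) ln Σⱼ exp(ρ fⱼ). A minimal sketch of the root-locating idea follows, assuming the envelope is formed over ±fᵢ so that it approximates maxᵢ |fᵢ| and dips toward zero at simultaneous roots; the brute-force scan minimizer, the sample functions, and ρ are illustrative only, not the nonlinear programming embedding the paper describes.

```python
import math

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth approximation of max(values)."""
    m = max(values)                      # shift for numerical stability
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def ks_root_objective(x, funcs, rho=50.0):
    """Smooth surrogate for max_i |f_i(x)|; it descends toward a minimum
    near zero wherever every f_i vanishes simultaneously."""
    vals = []
    for f in funcs:
        fx = f(x)
        vals.extend([fx, -fx])           # envelope over +/- f_i bounds |f_i|
    return ks(vals, rho)

# Locate the common root of f1 = x^2 - 4 and f2 = x - 2 (both vanish at x = 2)
funcs = [lambda x: x ** 2 - 4.0, lambda x: x - 2.0]
best_val, best_x = min((ks_root_objective(0.001 * i, funcs), 0.001 * i)
                       for i in range(4000))
```

In practice the surrogate would be handed to a gradient-based minimizer rather than scanned; the scan merely shows that the minimum sits at the simultaneous root.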
Cultural and Social Constraints on Portability.
ERIC Educational Resources Information Center
Murray-Lasso, Marco
1990-01-01
Describes 12 constraints imposed by culture on educational software portability. Nielsen's seven-level virtual protocol model of human-computer interaction is discussed as a framework for considering the constraints, a hypothetical example of adapting software for Mexico is included, and suggestions for overcoming constraints and making software…
Organizational Constraints on Corporate Public Relations Practitioners.
ERIC Educational Resources Information Center
Ryan, Michael
1987-01-01
Catalogs various internal constraints under which many public relations practitioners work, including constraints on (1) access to management; (2) information collection; (3) dissemination of timely, accurate information; and (4) the public relations mission. Reports that most practitioners see organizational constraints as more of a problem for…
Identification Constraints and Inference in Factor Models
ERIC Educational Resources Information Center
Loken, Eric
2005-01-01
The choice of constraints used to identify a simple factor model can affect the shape of the likelihood. Specifically, under some nonzero constraints, standard errors may be inestimable even at the maximum likelihood estimate (MLE). For a broader class of nonzero constraints, symmetric normal approximations to the modal region may not be…
Learning and Parallelization Boost Constraint Search
ERIC Educational Resources Information Center
Yun, Xi
2013-01-01
Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…
NASA Astrophysics Data System (ADS)
Rao, R. V.; Savsani, V. J.; Balic, J.
2012-12-01
An efficient optimization algorithm called teaching-learning-based optimization (TLBO) is proposed in this article to solve continuous unconstrained and constrained optimization problems. The proposed method is based on the effect of the influence of a teacher on the output of learners in a class. The basic philosophy of the method is explained in detail. The algorithm is tested on 25 different unconstrained benchmark functions and 35 constrained benchmark functions with different characteristics. For the constrained benchmark functions, TLBO is tested with different constraint handling techniques such as superiority of feasible solutions, self-adaptive penalty, ɛ-constraint, stochastic ranking and ensemble of constraints. The performance of the TLBO algorithm is compared with that of other optimization algorithms and the results show the better performance of the proposed algorithm.
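The two phases described can be sketched for the unconstrained case: a teacher phase that moves each learner toward the best solution and away from the scaled class mean, and a learner phase of pairwise interactions, both with greedy acceptance. Population size, iteration count, bounds, and the sphere test function below are illustrative, not those of the benchmark study.

```python
import random

def tlbo(f, dim, pop_size=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal unconstrained teaching-learning-based optimization sketch."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        # Teacher phase: pull learners toward the teacher, away from TF * mean
        teacher = pop[cost.index(min(cost))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rng.choice([1, 2])                      # teaching factor
            cand = [pop[i][d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            c = f(cand)
            if c < cost[i]:                              # greedy acceptance
                pop[i], cost[i] = cand, c
        # Learner phase: each learner interacts with a random classmate
        for i in range(pop_size):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if cost[j] < cost[i] else -1.0    # toward better, away from worse
            cand = [pop[i][d] + sign * rng.random() * (pop[j][d] - pop[i][d])
                    for d in range(dim)]
            c = f(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    best = cost.index(min(cost))
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = tlbo(sphere, dim=3)
```

Note that TLBO has no algorithm-specific tuning parameters beyond population size and iteration count, which the article cites as one of its practical attractions.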
An algorithm for the solution of dynamic linear programs
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1989-01-01
The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code. At the same time numerical stability is ensured. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the saving due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation
NASA Astrophysics Data System (ADS)
Lu, Shen; Kim, Harrison M.
2014-12-01
This article presents a multi-scenario decomposition with complementarity constraints approach to wind farm layout design to maximize wind energy production under region boundary and inter-turbine distance constraints. A complementarity formulation technique is introduced such that the wind farm layout design can be described with a continuously differentiable optimization model, and a multi-scenario decomposition approach is proposed to ensure efficient solution with local optimality. To combine global exploration and local optimization, a hybrid solution algorithm is presented, which combines the multi-scenario approach with a bi-objective genetic algorithm that maximizes energy production and minimizes constraint violations simultaneously. A numerical case study demonstrates the effectiveness of the proposed approach.
Equilibrium Sampling of Hard Spheres up to the Jamming Density and Beyond
NASA Astrophysics Data System (ADS)
Berthier, Ludovic; Coslovich, Daniele; Ninarello, Andrea; Ozawa, Misaki
2016-06-01
We implement and optimize a particle-swap Monte Carlo algorithm that allows us to thermalize a polydisperse system of hard spheres up to unprecedentedly large volume fractions, where previous algorithms and experiments fail to equilibrate. We show that no glass singularity intervenes before the jamming density, which we independently determine through two distinct nonequilibrium protocols. We demonstrate that equilibrium fluid and nonequilibrium jammed states can have the same density, showing that the jamming transition cannot be the end point of the fluid branch.
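The particle-swap move at the heart of this algorithm is easy to illustrate for hard disks: since the Boltzmann factor of a hard-core system is 0 or 1, a proposed diameter exchange is accepted if and only if it creates no overlap. The minimal-image convention, box size, and three-disk configuration below are illustrative only; the paper's optimized polydisperse sphere algorithm is considerably more elaborate.

```python
def overlaps(pos, dia, box):
    """Is any pair closer than the sum of its radii (minimal-image convention)?"""
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            dx = (pos[i][0] - pos[j][0] + box / 2) % box - box / 2
            dy = (pos[i][1] - pos[j][1] + box / 2) % box - box / 2
            if dx * dx + dy * dy < ((dia[i] + dia[j]) / 2) ** 2:
                return True
    return False

def swap_move(pos, dia, box, i, j):
    """Particle-swap move: exchange the diameters of particles i and j.
    With hard interactions the Boltzmann factor is 0 or 1, so the move
    is accepted iff the swapped configuration is overlap-free."""
    dia[i], dia[j] = dia[j], dia[i]
    if overlaps(pos, dia, box):
        dia[i], dia[j] = dia[j], dia[i]   # reject: restore the old diameters
        return False
    return True

# Three disks: swapping the largest disk next to a close neighbour fails,
# while swapping the two similar-sized disks succeeds.
pos = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
dia = [1.0, 0.8, 1.4]
```

In a full simulation such swaps are interleaved with ordinary displacement moves; it is the swaps that let a polydisperse system relax at volume fractions where displacements alone are hopelessly slow.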
Disambiguating Multi–Modal Scene Representations Using Perceptual Grouping Constraints
Pugeault, Nicolas; Wörgötter, Florentin; Krüger, Norbert
2010-01-01
In its early stages, the visual system suffers from a lot of ambiguity and noise that severely limits the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that — in addition to commonly used geometric information — makes use of a novel multi–modal measure between local edge/line features. The grouping information is then used to: 1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and 2) correct the reconstruction error due to the image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to reduce considerably ambiguity and noise without the need for global constraints. PMID:20544006
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence is proved, and the results of sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
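The bound-and-prune loop common to branch and bound methods of this kind can be sketched on a one-dimensional toy problem, with interval arithmetic standing in for the paper's linear relaxation: each box gets a lower bound from the relaxation and an upper bound from an incumbent evaluation, and boxes that cannot beat the incumbent are discarded. The objective (x-1)(x-3) and tolerance are illustrative only.

```python
def interval_mul(a, b):
    """Interval product [a] * [b]: min/max over the four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(p), max(p)

def branch_and_bound(lo=0.0, hi=4.0, tol=1e-6):
    """Minimize f(x) = (x - 1)(x - 3) on [lo, hi] by interval branch and bound.

    Lower bound per box: interval arithmetic on the two linear factors
    (the relaxation). Upper bound: f at the box midpoint (the incumbent)."""
    f = lambda x: (x - 1.0) * (x - 3.0)
    best = f((lo + hi) / 2.0)            # incumbent upper bound
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        lower = interval_mul((a - 1.0, b - 1.0), (a - 3.0, b - 3.0))[0]
        if lower > best - tol:
            continue                     # prune: box cannot improve incumbent
        m = (a + b) / 2.0
        best = min(best, f(m))           # tighten the incumbent
        boxes += [(a, m), (m, b)]        # branch: split the box in half
    return best

minimum = branch_and_bound()             # global minimum f(2) = -1
```

The lower and upper bounds converge onto each other as the boxes shrink, which is exactly the global-convergence mechanism the abstract refers to.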
Hydro-thermal Commitment Scheduling by Tabu Search Method with Cooling-Banking Constraints
NASA Astrophysics Data System (ADS)
Nayak, Nimain Charan; Rajan, C. Christober Asir
This paper presents a new approach for developing an algorithm for solving the Unit Commitment Problem (UCP) in a hydro-thermal power system. Unit Commitment is a nonlinear optimization problem that determines the minimum-cost turn on/off schedule of the generating units in a power system while satisfying both the forecasted load demand and various operating constraints of the generating units. The effectiveness of the proposed hybrid algorithm is demonstrated by numerical results comparing the generation cost and computation time obtained with the Tabu Search algorithm against those of other methods, such as Evolutionary Programming and Dynamic Programming, in reaching a proper unit commitment.
Persistence-length renormalization of polymers in a crowded environment of hard disks.
Schöbl, S; Sturm, S; Janke, W; Kroy, K
2014-12-01
The most conspicuous property of a semiflexible polymer is its persistence length, defined as the decay length of tangent correlations along its contour. Using an efficient stochastic growth algorithm to sample polymers embedded in a quenched hard-disk fluid, we find apparent wormlike chain statistics with a renormalized persistence length. We identify a universal form of the disorder renormalization that suggests itself as a quantitative measure of molecular crowding. PMID:25526167
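The definition used here, the persistence length as the decay length of tangent correlations ⟨t(s)·t(0)⟩ = exp(-s/ℓp), can be checked numerically. The sketch below samples a discrete 2D wormlike chain whose bond angle random-walks with Gaussian step σ (so that ℓp = 2b/σ² in units of the bond length b) and recovers ℓp from the correlation decay; it illustrates the definition only, not the paper's stochastic growth algorithm or the crowded-disk environment.

```python
import math
import random

def sample_chain_angles(n_bonds, sigma, rng):
    """Discrete 2D wormlike chain: the bond angle performs a Gaussian walk."""
    theta, angles = 0.0, [0.0]
    for _ in range(n_bonds - 1):
        theta += rng.gauss(0.0, sigma)
        angles.append(theta)
    return angles

def persistence_length(angles, bond=1.0, lag=5):
    """Estimate l_p from <cos(theta(s + lag) - theta(s))> = exp(-lag * bond / l_p)."""
    corr = [math.cos(angles[i + lag] - angles[i])
            for i in range(len(angles) - lag)]
    c = sum(corr) / len(corr)
    return -lag * bond / math.log(c)

sigma = 0.2                      # input stiffness: exact l_p = 2 / sigma**2 = 50
angles = sample_chain_angles(100_000, sigma, random.Random(0))
lp = persistence_length(angles)
```

Fitting the same estimator to chains grown inside a quenched disk fluid would yield the renormalized, apparent persistence length the paper discusses.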
Immune allied genetic algorithm for Bayesian network structure learning
NASA Astrophysics Data System (ADS)
Song, Qin; Lin, Feng; Sun, Wei; Chang, KC
2012-06-01
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach to enhance the efficiency of BN structure learning. To avoid the premature convergence of the traditional single-group genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which multiple-population and allied strategies are introduced. Moreover, in the algorithm, we apply prior knowledge by injecting an immune operator into individuals, which effectively prevents degeneration. To illustrate the effectiveness of the proposed technique, we present some experimental results.
Scaling of the running time of the quantum adiabatic algorithm for propositional satisfiability
Znidaric, Marko
2005-06-15
We numerically study the quantum adiabatic algorithm for propositional satisfiability. A new class of previously unknown hard instances is identified among random problems. We numerically find that the running time for such instances grows exponentially with their size. The worst case complexity of the quantum adiabatic algorithm therefore seems to be exponential.
The BQP-hardness of approximating the Jones polynomial
NASA Astrophysics Data System (ADS)
Aharonov, Dorit; Arad, Itai
2011-03-01
A celebrated important result due to Freedman et al (2002 Commun. Math. Phys. 227 605-22) states that providing additive approximations of the Jones polynomial at the kth root of unity, for constant k=5 and k>=7, is BQP-hard. Together with the algorithmic results of Aharonov et al (2005) and Freedman et al (2002 Commun. Math. Phys. 227 587-603), this gives perhaps the most natural BQP-complete problem known today and motivates further study of the topic. In this paper, we focus on the universality proof; we extend the result of Freedman et al (2002) to ks that grow polynomially with the number of strands and crossings in the link, thus extending the BQP-hardness of Jones polynomial approximations to all values to which the AJL algorithm applies (Aharonov et al 2005), proving that for all those values, the problems are BQP-complete. As a side benefit, we derive a fairly elementary proof of the Freedman et al density result, without referring to advanced results from Lie algebra representation theory, making this important result accessible to a wider audience in the computer science research community. We make use of two general lemmas we prove, the bridge lemma and the decoupling lemma, which provide tools for establishing the density of subgroups in SU(n). Those tools seem to be of independent interest in more general contexts of proving the quantum universality. Our result also implies a completely classical statement, that the multiplicative approximations of the Jones polynomial, at exactly the same values, are #P-hard, via a recent result due to Kuperberg (2009 arXiv:0908.0512). Since the first publication of those results in their preliminary form (Aharonov and Arad 2006 arXiv:quant-ph/0605181), the methods we present here have been used in several other contexts (Aharonov and Arad 2007 arXiv:quant-ph/0702008; Peter and Stephen 2008 Quantum Inf. Comput. 8 681). The present paper is an improved and extended version of the results presented by Aharonov and Arad
Deducing Electron Properties from Hard X-Ray Observations
NASA Technical Reports Server (NTRS)
Kontar, E. P.; Brown, J. C.; Emslie, A. G.; Hajdas, W.; Holman, G. D.; Hurford, G. J.; Kasparova, J.; Mallik, P. C. V.; Massone, A. M.; McConnell, M. L.; Piana, M.; Prato, M.; Schmahl, E. J.; Suarez-Garcia, E.
2011-01-01
X-radiation from energetic electrons is the prime diagnostic of flare-accelerated electrons. The observed X-ray flux (and polarization state) is fundamentally a convolution of the cross-section for the hard X-ray emission process(es) in question with the electron distribution function, which is in turn a function of energy, direction, spatial location and time. To address the problems of particle propagation and acceleration one needs to infer as much information as possible on this electron distribution function, through a deconvolution of this fundamental relationship. This review presents recent progress toward this goal using spectroscopic, imaging and polarization measurements, primarily from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). Previous conclusions regarding the energy, angular (pitch angle) and spatial distributions of energetic electrons in solar flares are critically reviewed. We discuss the role and the observational evidence of several radiation processes: free-free electron-ion, free-free electron-electron, free-bound electron-ion, photoelectric absorption and Compton backscatter (albedo), using both spectroscopic and imaging techniques. This unprecedented quality of data allows for the first time inference of the angular distributions of the X-ray-emitting electrons and improved model-independent inference of electron energy spectra and emission measures of thermal plasma. Moreover, imaging spectroscopy has revealed hitherto unknown details of solar flare morphology and detailed spectroscopy of coronal, footpoint and extended sources in flaring regions. Additional attempts to measure hard X-ray polarization were not sufficient to put constraints on the degree of anisotropy of electrons, but point to the importance of obtaining good quality polarization data in the future.
"Short, Hard Gamma-Ray Bursts - Mystery Solved?????"
NASA Technical Reports Server (NTRS)
Parsons, A.
2006-01-01
After over a decade of speculation about the nature of short-duration hard-spectrum gamma-ray bursts (GRBs), the recent detection of afterglow emission from a small number of short bursts has provided the first physical constraints on possible progenitor models. While the discovery of afterglow emission from long GRBs was a real breakthrough linking their origin to star-forming galaxies, and hence the death of massive stars, the progenitors, energetics, and environments of short gamma-ray burst events remain elusive despite a few recent localizations. Thus far, the nature of the host galaxies measured indicates that short GRBs arise from an old (> 1 Gyr) stellar population, strengthening earlier suggestions and providing support for coalescing compact object binaries as the progenitors. On the other hand, some of the short burst afterglow observations cannot be easily explained in the coalescence scenario. These observations raise the possibility that short GRBs may have different or multiple progenitor systems. The study of the short-hard GRB afterglows has been made possible by the Swift Gamma-ray Burst Explorer, launched in November 2004. Swift is equipped with a coded-aperture gamma-ray telescope that can observe up to 2 steradians of the sky and can compute the position of a gamma-ray burst to within 2-3 arcmin in less than 10 seconds. The Swift spacecraft can slew to this burst position without human intervention, allowing its on-board X-ray and optical telescopes to study the afterglow within 2 minutes of the original GRB trigger. More Swift short burst detections and afterglow measurements are needed before we can declare that the mystery of short gamma-ray bursts is solved.
Aggregation-based fuzzy dual-mode control for nonlinear systems with mixed constraints
NASA Astrophysics Data System (ADS)
Wen, Jiwei; Liu, Fei
2012-05-01
A new receding horizon dual-mode control method is proposed for a class of discrete-time nonlinear systems represented by Takagi-Sugeno (T-S) fuzzy models subject to mixed constraints, including a hard input constraint and a soft state constraint. On the one hand, our receding horizon scheme is based upon an online optimisation that utilises an optimised control sequence plus local linear feedback. On the other hand, to limit the computational burden, an amplitude decaying aggregation strategy is introduced to reduce the number of optimisation variables. The proposed controller is obtained via semi-definite programming, which can be easily solved by means of linear matrix inequalities. A numerical example is given to verify the feasibility and efficiency of the proposed method.
Numerical Optimization Algorithms and Software for Systems Biology
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
Scheduling hydro power systems with restricted operating zones and discharge ramping constraints
Guan, X.; Svoboda, Al; Li, C.
1999-02-01
An optimization-based algorithm is presented for scheduling hydro power systems with restricted operating zones and discharge ramping constraints. Hydro watershed scheduling problems are difficult to solve because many constraints, continuous and discrete, including hydraulic coupling of cascaded reservoirs have to be considered. Restricted or forbidden operating zones as well as minimum generation limits of hydro units result in discontinuous preferred operating regions, and hinder direct applications of efficient continuous optimization methods such as network flow algorithms. Discharge ramping constraints due to navigational, environmental and recreational requirements in a hydro system add another dimension of difficulty since they couple generation or water discharge across time horizon. The key idea of this paper is to use additional sets of multipliers to relax discontinuous operating region and discharge ramping constraints on individual hydro units so that a two-level optimization structure is formed. The low level consists of a continuous discharge scheduling subproblem determining the generation levels of all units in the entire watershed, and a number of pure integer scheduling subproblems determining the hydro operating states, one for each unit. The discharge subproblem is solved by a network flow algorithm, and the integer scheduling problems are solved by dynamic programming with a small number of states and well-structured transitions. The two sets of subproblems are coordinated through multipliers updated at the high level by using a modified subgradient algorithm. After the dual problem converges, a feasible hydro schedule is obtained by using the same network flow algorithm with the operating states obtained, and operating ranges modified to guarantee satisfaction of ramping constraints.
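The two-level multiplier coordination described here can be sketched on a toy problem: relax the coupling constraint with a multiplier, solve the now-decoupled low-level subproblems (in closed form below), and update the multiplier at the high level with a projected subgradient step. The quadratic objective, the constraint x + y ≥ 4, and the step size are illustrative stand-ins for the hydro scheduling model, not part of the paper.

```python
def lagrangian_relaxation(alpha=0.5, iters=200):
    """Toy two-level scheme for: min x^2 + y^2  s.t.  x + y >= 4.

    Low level: for fixed multiplier lam, the relaxed problem decouples into
    min_x x^2 - lam*x and min_y y^2 - lam*y, each solved in closed form.
    High level: projected subgradient step on the dual; the subgradient is
    the constraint violation 4 - x - y."""
    lam = 0.0
    x = y = 0.0
    for _ in range(iters):
        x = lam / 2.0                    # low-level closed-form solutions
        y = lam / 2.0
        g = 4.0 - x - y                  # dual subgradient
        lam = max(0.0, lam + alpha * g)  # projected multiplier update
    return lam, x, y

lam, x, y = lagrangian_relaxation()      # converges to lam = 4, x = y = 2
```

In the paper the low level is far richer (a network flow discharge subproblem plus per-unit dynamic programs), but the coordination logic is the same: subproblems respond to the multipliers, and the multipliers move along the dual subgradient.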
NASA Technical Reports Server (NTRS)
Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.
1993-01-01
Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
Killing symmetries as Hamiltonian constraints
NASA Astrophysics Data System (ADS)
Lusanna, Luca
2016-02-01
The existence of a Killing symmetry in a gauge theory is equivalent to the addition of extra Hamiltonian constraints in its phase space formulation, which imply restrictions both on the Dirac observables (the gauge invariant physical degrees of freedom) and on the gauge freedom. When there is a time-like Killing vector field only pure gauge electromagnetic fields survive in Maxwell theory in Minkowski space-time, while in ADM canonical gravity in asymptotically Minkowskian space-times only inertial effects without gravitational waves survive.
QPO Constraints on Neutron Stars
NASA Technical Reports Server (NTRS)
Miller, M. Coleman
2005-01-01
The kilohertz frequencies of QPOs from accreting neutron star systems imply that they are generated in regions of strong gravity, close to the star. This suggests that observations of the QPOs can be used to constrain the properties of neutron stars themselves, and in particular to inform us about the properties of cold matter beyond nuclear densities. Here we discuss some relatively model-insensitive constraints that emerge from the kilohertz QPOs, as well as recent developments that may hint at phenomena related to unstable circular orbits outside neutron stars.
Trajectory constraints in qualitative simulation
Brajnik, G.; Clancy, D.J.
1996-12-31
We present a method for specifying temporal constraints on trajectories of dynamical systems and enforcing them during qualitative simulation. This capability can be used to focus a simulation, simulate non-autonomous and piecewise-continuous systems, reason about boundary condition problems and incorporate observations into the simulation. The method has been implemented in TeQSIM, a qualitative simulator that combines the expressive power of qualitative differential equations with temporal logic. It interleaves temporal logic model checking with the simulation to constrain and refine the resulting predicted behaviors and to inject discontinuous changes into the simulation.
Novel hard compositions and methods of preparation
Sheinberg, Haskell
1983-08-23
Novel very hard compositions of matter are prepared by using in all embodiments only a minor amount of a particular carbide (or materials which can form the carbide in situ when subjected to heat and pressure); and no strategic cobalt is needed. Under a particular range of conditions, densified compositions of matter of the invention are prepared having hardnesses on the Rockwell A test substantially equal to the hardness of pure tungsten carbide and to two of the hardest commercial cobalt-bonded tungsten carbides. Alternately, other compositions of the invention which have slightly lower hardnesses than those described above in one embodiment also possess the advantage of requiring no tungsten and in another embodiment possess the advantage of having a good fracture toughness value. Photomicrographs show that the shapes of the grains of the alloy mixture with which the minor amount of carbide (or carbide-formers) is mixed are radically altered from large, rounded to small, very angular by the addition of the carbide. Superiority of one of these hard compositions of matter over cobalt-bonded tungsten carbide for ultra-high pressure anvil applications was demonstrated.
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
A practical scheduling algorithm for Shuttle-based astronomy missions
NASA Technical Reports Server (NTRS)
Guffin, O. T.; Roberts, B. H.; Williamson, P. L.
1985-01-01
In the Astro mission series (initial flight planned for March, 1986), the Shuttle will be used as a dedicated stellar astronomy observatory. A modified Spacelab pallet is to be used for the Astro payload, which will consist of three ultraviolet (UV) telescopes and a wide field camera mounted together on a single gimbal mount called the Inertial Pointing System (IPS). Three flights of 7-10 days duration are to be made with the same payload at intervals of 8-9 months. Previous experience has shown that changes in design requirements are inevitable, and the evolution of operational concepts will effect changes in scheduling algorithm software. For these reasons, the design goals of the Astron algorithm and its family of auxiliary software modules have been related to functional modularity, constraint flexibility, user friendliness, and 'light' input requirements. Attention is given to hardware characteristics, environmental constraints, the basic criteria function, 'Cinderella' logic, counters and constraints, and scheduling trends.
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems, but are also non-deterministic polynomial-time-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying the system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
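The job-selection step of machine loading can be illustrated with a minimal genetic-algorithm sketch. The paper's actual method hybridizes a GA with particle swarm optimization; this toy keeps only the GA part, and the function name, fitness weights, and parameters are illustrative assumptions, not the authors' formulation:

```python
import random

def machine_loading_ga(jobs, capacity, pop_size=30, generations=60, seed=0):
    """Toy GA for the job-selection step of machine loading.

    jobs: list of processing times, one candidate job per entry.
    capacity: available machining time on a single machine.
    A chromosome is one bit per job; fitness rewards throughput (number
    of jobs selected) and penalizes system unbalance (unused capacity),
    with a heavy penalty for infeasible overloads.
    """
    rng = random.Random(seed)
    n = len(jobs)

    def fitness(bits):
        load = sum(t for b, t in zip(bits, jobs) if b)
        throughput = sum(bits)
        unbalance = abs(capacity - load)
        penalty = 10 * max(0, load - capacity)  # overloading is infeasible
        return throughput - 0.1 * unbalance - penalty

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # bit-flip mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, fitness(best)
```

Because selection is elitist, a feasible high-throughput loading, once found, survives to the final generation.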
A Sensitive Secondary Users Selection Algorithm for Cognitive Radio Ad Hoc Networks
Li, Aohan; Han, Guangjie; Wan, Liangtian; Shu, Lei
2016-01-01
Secondary Users (SUs) are allowed to use temporarily unused licensed spectrum without disturbing Primary Users (PUs) in Cognitive Radio Ad Hoc Networks (CRAHNs). Existing architectures for CRAHNs impose energy-consuming Cognitive Radios (CRs) on SUs. However, advanced CRs increase the energy cost of their cognitive functionalities, which is undesirable for battery-powered devices. A new architecture referred to as spectral Requirement-based CRAHN (RCRAHN) is proposed in this paper to enhance the energy efficiency of CRAHNs. In RCRAHNs, only a subset of SUs is equipped with CRs. SUs equipped with CRs are referred to as Cognitive Radio Users (CRUs). To further enhance the energy efficiency of CRAHNs, we aim to select the minimum number of CRUs needed to sense the available spectrum. A non-linear programming problem is mathematically formulated under energy-efficiency and real-time constraints. Considering the NP-hardness of the problem, a heuristic algorithm referred to as Sensitive Secondary Users Selection (SSUS) was designed to compute near-optimal solutions. The simulation results demonstrate that SSUS not only improves energy efficiency, but also achieves satisfactory performance in end-to-end delay and communication reliability. PMID:27023562
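Selecting a minimum set of CRUs so that every channel is sensed is a set-cover-style problem, which is why it is NP-hard and motivates heuristics. The greedy cover below illustrates the flavor of the problem only; it is not the SSUS algorithm itself, and `select_crus` and its input format are invented for this sketch:

```python
def select_crus(coverage):
    """Greedy selection of Secondary Users to equip with Cognitive
    Radios so that, together, they can sense every channel.

    coverage: dict mapping SU id -> set of channel ids it can sense.
    Returns a list of chosen SU ids: a near-optimal cover, since exact
    minimum set cover is NP-hard.
    """
    needed = set().union(*coverage.values())
    chosen = []
    while needed:
        # pick the SU covering the most still-unsensed channels
        best = max(coverage, key=lambda su: len(coverage[su] & needed))
        if not coverage[best] & needed:
            break  # remaining channels cannot be covered
        chosen.append(best)
        needed -= coverage[best]
    return chosen
```

The classical greedy bound guarantees the result is within a logarithmic factor of the optimal cover size.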
NASA Technical Reports Server (NTRS)
Sarkar, Nilanjan; Yun, Xiaoping; Kumar, Vijay
1994-01-01
There are many examples of mechanical systems that require rolling contacts between two or more rigid bodies. Rolling contacts engender nonholonomic constraints in an otherwise holonomic system. In this article, we develop a unified approach to the control of mechanical systems subject to both holonomic and nonholonomic constraints. We first present a state space realization of a constrained system. We then discuss the input-output linearization and zero dynamics of the system. This approach is applied to the dynamic control of mobile robots. Two types of control algorithms for mobile robots are investigated: trajectory tracking and path following. In each case, a smooth nonlinear feedback is obtained to achieve asymptotic input-output stability and Lagrange stability of the overall system. Simulation results are presented to demonstrate the effectiveness of the control algorithms and to compare the performance of trajectory-tracking and path-following algorithms.
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we can obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint on the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
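The segment-constrained MAP recursion described above can be sketched as a Viterbi-style dynamic program whose state also tracks the number of segments (maximal runs of identical states) used so far. This is a minimal illustration consistent with the abstract's description, not the authors' code; the function name and table layout are assumptions:

```python
import math

def k_segment_viterbi(log_pi, log_A, log_B, obs, k):
    """MAP hidden-state sequence constrained to exactly k segments.

    A "segment" is a maximal run of identical states.  The DP cell is
    (hidden state, segments used so far): staying in the same state
    keeps the count, switching states opens a new segment.
    log_pi, log_A, log_B: log initial, transition, emission tables.
    Assumes k is feasible for the observation length.
    """
    S, T = len(log_pi), len(obs)
    NEG = -math.inf
    # dp[s][j] = best log prob at the current time, state s, j segments
    dp = [[NEG] * (k + 1) for _ in range(S)]
    back = []
    for s in range(S):
        dp[s][1] = log_pi[s] + log_B[s][obs[0]]   # time 0: one segment
    for t in range(1, T):
        ndp = [[NEG] * (k + 1) for _ in range(S)]
        ptr = [[None] * (k + 1) for _ in range(S)]
        for s in range(S):
            for j in range(1, k + 1):
                for p in range(S):
                    jp = j if p == s else j - 1   # switching opens a segment
                    if jp < 1:
                        continue
                    cand = dp[p][jp] + log_A[p][s] + log_B[s][obs[t]]
                    if cand > ndp[s][j]:
                        ndp[s][j] = cand
                        ptr[s][j] = (p, jp)
        dp = ndp
        back.append(ptr)
    # best terminal cell with exactly k segments, then trace back
    s = max(range(S), key=lambda q: dp[q][k])
    best_val = dp[s][k]
    path, j = [s], k
    for ptr in reversed(back):
        s, j = ptr[s][j]
        path.append(s)
    return path[::-1], best_val
```

The table is S x (k+1) per time step, so the recursion stays linear in the sequence length, matching the linear-time claim in the abstract.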
NASA Astrophysics Data System (ADS)
Ohno, Akiyoshi; Nishi, Tatsushi; Inuiguchi, Masahiro; Takahashi, Satoru; Ueda, Kenji
In this paper, we propose a column generation approach for the train-set scheduling problem with regular maintenance constraints. The problem is to allocate the minimum number of train-sets to the train operations required to operate a given train timetable. In the proposed method, a tight lower bound can be obtained from the continuous relaxation of the Dantzig-Wolfe reformulation by column generation. The subproblem for the column generation is an elementary shortest path problem with resource constraints. A labeling algorithm is applied to solve the subproblem. In order to reduce the computational effort of solving the subproblems, a new state space relaxation of the subproblem is developed in the labeling algorithm. An upper bound is computed by a heuristic algorithm. Computational results demonstrate the effectiveness of the proposed method.
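An elementary labeling scheme for a shortest path with one resource constraint can be sketched as follows. This is a simplified single-resource variant with a basic dominance test, not the paper's state-space-relaxed algorithm; the names and input format are assumptions:

```python
import heapq

def rcsp_label(graph, source, target, max_resource):
    """Labeling algorithm for the shortest path with one resource
    constraint: each arc carries (cost, resource), and a path is
    feasible only if its total resource use stays within max_resource.

    graph: dict node -> list of (neighbor, cost, resource); every node
    must appear as a key.  Labels (cost, resource) are extended along
    arcs, and a new label at a node is discarded if an existing label
    there dominates it (no worse in both cost and resource).  With
    non-negative costs, the first label popped at the target is the
    cheapest feasible path.
    """
    labels = {n: [] for n in graph}
    heap = [(0, 0, source, (source,))]
    best = None
    while heap:
        cost, res, node, path = heapq.heappop(heap)
        # dominance check: skip if an existing label is at least as good
        if any(c <= cost and r <= res for c, r in labels[node]):
            continue
        labels[node].append((cost, res))
        if node == target:
            best = (cost, res, list(path))
            break
        for nxt, c, r in graph[node]:
            if res + r <= max_resource:   # prune resource-infeasible extensions
                heapq.heappush(heap, (cost + c, res + r, nxt, path + (nxt,)))
    return best
```

Dominance pruning is what keeps the number of labels per node manageable; the paper's state space relaxation further shrinks the label space for the elementary-path case.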
Solano-Altamirano, J M; Goldman, Saul
2015-12-01
We determined the total system elastic Helmholtz free energy, under the constraints of constant temperature and volume, for systems comprised of one or more perfectly bonded hard spherical inclusions (i.e. "hard spheres") embedded in a finite spherical elastic solid. Dirichlet boundary conditions were applied both at the surface(s) of the hard spheres, and at the outer surface of the elastic solid. The boundary conditions at the surface of the spheres were used to describe the rigid displacements of the spheres, relative to their initial location(s) in the unstressed initial state. These displacements, together with the initial positions, provided the final shape of the strained elastic solid. The boundary conditions at the outer surface of the elastic medium were used to ensure constancy of the system volume. We determined the strain and stress tensors numerically, using a method that combines the Neuber-Papkovich spherical harmonic decomposition, the Schwarz alternating method, and least-squares fitting for determining the spherical harmonic expansion coefficients. The total system elastic Helmholtz free energy was determined by numerically integrating the elastic Helmholtz free energy density over the volume of the elastic solid, either by a quadrature, or a Monte Carlo method, or both. Depending on the initial position of the hard sphere(s) (or equivalently, the shape of the un-deformed stress-free elastic solid), and the displacements, either stationary or non-stationary Helmholtz free energy minima were found. The non-stationary minima, which involved the hard spheres nearly in contact with one another, corresponded to lower Helmholtz free energies than did the stationary minima, for which the hard spheres were further away from one another. PMID:26701708
Optimisation of nonlinear motion cueing algorithm based on genetic algorithm
NASA Astrophysics Data System (ADS)
Asadi, Houshyar; Mohamed, Shady; Rahim Zadeh, Delpak; Nahavandi, Saeid
2015-04-01
Motion cueing algorithms (MCAs) play a significant role in driving simulators, aiming to deliver to simulator drivers the most accurate human sensation compared with a real vehicle driver, without exceeding the physical limitations of the simulator. This paper presents the optimisation design of an MCA for a vehicle simulator, in order to find the most suitable washout algorithm parameters while respecting all motion platform physical limitations and minimising the human perception error between the real and simulator driver. One of the main limitations of classical washout filters is that they are tuned for the worst-case scenario. This tuning is based on trial and error and is affected by the driver's and programmer's experience, making it the most significant obstacle to full motion platform utilisation. It leads to inflexibility of the structure, produces false cues and makes the resulting simulator fail to suit all circumstances. In addition, the classical method does not take minimisation of human perception error and physical constraints into account. For this reason, the production of motion cues and the impact of the different parameters of classical washout filters on motion cues remain inaccessible to designers. The aim of this paper is to provide an optimisation method for tuning the MCA parameters, based on nonlinear filtering and genetic algorithms. This is done by taking into account the vestibular sensation error between the real and simulated cases, as well as the main dynamic limitations, tilt coordination and correlation coefficient. Three additional compensatory linear blocks are integrated into the MCA and tuned in order to modify the performance of the filters successfully. The proposed optimised MCA is implemented in the MATLAB/Simulink software package. The results generated using the proposed method show increased performance in terms of human sensation, reference shape tracking and exploiting the platform more efficiently, without reaching the platform's physical limitations.
"We Can Get Everything We Want if We Try Hard": Young People, Celebrity, Hard Work
ERIC Educational Resources Information Center
Mendick, Heather; Allen, Kim; Harvey, Laura
2015-01-01
Drawing on 24 group interviews on celebrity with 148 students aged 14-17 across six schools, we show that "hard work" is valued by young people in England. We argue that we should not simply celebrate this investment in hard work. While it opens up successful subjectivities to previously excluded groups, it reproduces neoliberal…
Hard Water and Soft Soap: Dependence of Soap Performance on Water Hardness
ERIC Educational Resources Information Center
Osorio, Viktoria K. L.; de Oliveira, Wanda; El Seoud, Omar A.; Cotton, Wyatt; Easdon, Jerry
2005-01-01
The demonstration of the performance of soap in different aqueous solutions, which depends on water hardness and soap formulation, is described. The demonstrations use safe, inexpensive reagents and simple glassware and equipment, introduce important everyday topics, and stimulate the students to consider the wider consequences of water hardness and…
Research in the Hard Sciences, and in Very Hard "Softer" Domains
ERIC Educational Resources Information Center
Phillips, D. C.
2014-01-01
The author of this commentary argues that physical scientists are attempting to advance knowledge in the so-called hard sciences, whereas education researchers are laboring to increase knowledge and understanding in an "extremely hard" but softer domain. Drawing on the work of Popper and Dewey, this commentary highlights the relative…
Computational search for rare-earth free hard-magnetic materials
NASA Astrophysics Data System (ADS)
Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team
2015-03-01
It is difficult to overstate the importance of hard magnets for human life in modern times; they enter every walk of our life, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances (from household use to computers). All the known hard magnets in use today contain rare-earth elements, the extraction of which is expensive and environmentally harmful. Rare-earths are also instrumental in tipping the balance of the world economy, as most of them are mined in a few specific parts of the world. Hence it would be ideal to have materials with the characteristics of a hard magnet but without, or at least with a reduced amount of, rare-earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities that define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms), which are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some potential newly discovered rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the ``ExMaMa'' Project (329386).
Genetic algorithm for chromaticity correction in diffraction limited storage rings
NASA Astrophysics Data System (ADS)
Ehrlichman, M. P.
2016-04-01
A multiobjective genetic algorithm is developed for optimizing nonlinearities in diffraction limited storage rings. This algorithm determines sextupole and octupole strengths for chromaticity correction that deliver optimized dynamic aperture and beam lifetime. The algorithm makes use of dominance constraints to breed desirable properties into the early generations. The momentum aperture is optimized indirectly by constraining the chromatic tune footprint and optimizing the off-energy dynamic aperture. The result is an effective and computationally efficient technique for correcting chromaticity in a storage ring while maintaining optimal dynamic aperture and beam lifetime.
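The dominance relation at the heart of such a multiobjective GA can be sketched in a few lines. This is a generic Pareto-dominance filter for minimization, not the paper's specific constrained-dominance scheme, and the objective vectors in the example are hypothetical:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    e.g. (negated dynamic aperture, negated beam lifetime) pairs, so that
    maximizing both targets becomes a minimization problem."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a GA such as the one described, the non-dominated front at each generation seeds the next one, so desirable trade-offs between dynamic aperture and lifetime are bred in early.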
Erosion testing of hard materials and coatings
Hawk, Jeffrey A.
2005-04-29
Erosion is the process by which unconstrained particles, usually hard, impact a surface, creating damage that leads to material removal and component failure. These particles are usually very small and entrained in fluid of some type, typically air. The damage that occurs as a result of erosion depends on the size of the particles, their physical characteristics, the velocity of the particle/fluid stream, and their angle of impact on the surface of interest. This talk will discuss the basics of jet erosion testing of hard materials, composites and coatings. The standard test methods will be discussed as well as alternative approaches to determining the erosion rate of materials. The damage that occurs will be characterized in general terms, and examples will be presented for the erosion behavior of hard materials and coatings (both thick and thin).
Solar flare hard X-ray observations
NASA Technical Reports Server (NTRS)
Dennis, Brian R.
1988-01-01
Recent hard X-ray observations of solar flares are reviewed, with emphasis on results obtained with instruments on the Solar Maximum Mission satellite. Flares with three sets of characteristics, designated Type A, Type B, and Type C, are discussed, and hard X-ray temporal, spatial, spectral, and polarization measurements are reviewed in this framework. Coincident observations at other wavelengths, including the UV, microwaves, and soft X-rays, are reviewed, with discussions of their interpretations. In conclusion, a brief outline is presented of the potential of future hard X-ray observations with sub-second time resolution, arcsecond spatial resolution, keV energy resolution, and polarization measurements at the few-percent level up to 100 keV.
Potential Health Impacts of Hard Water
Sengupta, Pallav
2013-01-01
In the past five decades or so, evidence has been accumulating about an environmental factor that appears to influence mortality, in particular cardiovascular mortality: the hardness of the drinking water. Several epidemiological investigations have demonstrated the relation between the risk of cardiovascular disease, growth retardation, reproductive failure, and other health problems and the hardness of drinking water or its content of magnesium and calcium. In addition, the acidity of the water influences the reabsorption of calcium and magnesium in the renal tubule. Not only calcium and magnesium but other constituents as well affect different health aspects. Thus, the present review attempts to explore the health effects of hard water and its constituents. PMID:24049611