Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
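The abstract's notion of sizing the largest uncertainty set can be made concrete for a single linear constraint. The sketch below (Python, with a hypothetical function name; not the paper's code) computes the radius of the largest hyper-sphere around a nominal parameter on which a linear inequality constraint holds everywhere:

```python
import numpy as np

def sphere_robustness_radius(a, b, p_nominal):
    # Largest r such that a @ p <= b for every p with ||p - p_nominal|| <= r.
    # The worst case over the sphere is a @ p_nominal + r * ||a||, so the
    # radius is simply the constraint slack divided by the gradient norm.
    slack = b - a @ p_nominal
    return slack / np.linalg.norm(a)
```

Comparing this radius against the radius of the actual uncertainty model gives the kind of analytically verifiable robustness assessment described above; nonlinear constraints would require an optimization over the sphere instead of this closed form.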
Gaining Algorithmic Insight through Simplifying Constraints.
ERIC Educational Resources Information Center
Ginat, David
2002-01-01
Discusses algorithmic problem solving in computer science education, particularly algorithmic insight, and focuses on the relevance and effectiveness of the heuristic of simplifying constraints, which involves simplifying a given problem into one in which constraints are imposed on the input data. Presents three examples involving…
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., they must be satisfied for all parameter realizations in the uncertainty model, or in the soft sense, i.e., they can be violated by some realizations of the uncertain parameter. With regard to hard constraints, the methodology makes it possible (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. With regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Algorithms for reactions of nonholonomic constraints and servo-constraints
NASA Astrophysics Data System (ADS)
Slawianowski, J. J.
Various procedures for deriving equations of motion of constrained mechanical systems are discussed and compared. A geometric interpretation of the procedures is given, stressing both linear and nonlinear nonholonomic constraints. Certain qualitative differences are analyzed between models of nonholonomic dynamics based on different procedures. Two algorithms of particular interest are: (1) the d'Alembert principle and its Appell-Tshetajev generalization, and (2) the variational Hamiltonian principle with subsidiary conditions. It is argued that the Hamiltonian principle, although not accepted in traditional technical applications, is more promising in generalizations concerning systems with higher differential constraints, or the more general functional constraints appearing in feedback and control systems.
A constraint consensus memetic algorithm for solving constrained optimization problems
NASA Astrophysics Data System (ADS)
Hamza, Noha M.; Sarker, Ruhul A.; Essam, Daryl L.; Deb, Kalyanmoy; Elsayed, Saber M.
2014-11-01
Constraint handling is an important aspect of evolutionary constrained optimization. Currently, the mechanism used for constraint handling with evolutionary algorithms mainly assists the selection process, but not the actual search process. In this article, first a genetic algorithm is combined with a class of search methods, known as constraint consensus methods, that assist infeasible individuals to move towards the feasible region. This approach is also integrated with a memetic algorithm. The proposed algorithm is tested and analysed by solving two sets of standard benchmark problems, and the results are compared with other state-of-the-art algorithms. The comparisons show that the proposed algorithm outperforms other similar algorithms. The algorithm has also been applied to solve a practical economic load dispatch problem, where it also shows superior performance over other algorithms.
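For linear constraints the basic constraint consensus step has a closed form: each violated constraint proposes its own projection move, and the consensus vector is their average. A minimal sketch (hypothetical names; not the authors' code):

```python
import numpy as np

def constraint_consensus(x, A, b, tol=1e-9, max_iter=100):
    # Move an infeasible point x toward the region {x : A @ x <= b}.
    # Each violated constraint i proposes the projection step
    # v_i = -((a_i @ x - b_i) / ||a_i||^2) * a_i; the update is their mean.
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        viol = A @ x - b
        idx = np.where(viol > tol)[0]
        if idx.size == 0:          # feasible (to tolerance): done
            return x
        moves = [-(viol[i] / (A[i] @ A[i])) * A[i] for i in idx]
        x += np.mean(moves, axis=0)
    return x
```

In the article this kind of step seeds the genetic and memetic search with near-feasible individuals, assisting the search process itself rather than only the selection step.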
Measuring Constraint-Set Utility for Partitional Clustering Algorithms
NASA Technical Reports Server (NTRS)
Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato
2006-01-01
Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
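Informativeness, as described, measures how much a constraint set tells the algorithm beyond what its default bias already gets right; one common operationalization is the fraction of constraints violated by the unconstrained baseline clustering. A sketch under that reading (hypothetical signature, not the paper's exact definition):

```python
def informativeness(labels, must_link, cannot_link):
    # labels[i] is the cluster assigned to point i by the *unconstrained*
    # baseline algorithm. A must-link (i, j) is violated when the pair is
    # split; a cannot-link (i, j) is violated when the pair is merged.
    violated = sum(labels[i] != labels[j] for i, j in must_link)
    violated += sum(labels[i] == labels[j] for i, j in cannot_link)
    total = len(must_link) + len(cannot_link)
    return violated / total if total else 0.0
```

A constraint set with informativeness near zero mostly restates what the algorithm would do anyway, which is one way a "useful-looking" constraint set can fail to help.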
Easy and hard testbeds for real-time search algorithms
Koenig, S.; Simmons, R.G.
1996-12-31
Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a superset of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.
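As a reference point for the real-time search algorithms discussed, here is a minimal LRTA*-style agent (a standard textbook formulation, not this paper's code): it updates the heuristic of the current state from its neighbors and then moves greedily.

```python
def lrta_star(neighbors, cost, h, start, goal, max_steps=1000):
    # neighbors: dict state -> list of successor states
    # cost: dict (s, t) -> edge cost; h: dict state -> heuristic (mutated)
    path, s = [start], start
    for _ in range(max_steps):
        if s == goal:
            return path
        best = min(neighbors[s], key=lambda t: cost[(s, t)] + h[t])
        h[s] = max(h[s], cost[(s, best)] + h[best])  # learning step
        s = best
        path.append(s)
    return path
```

On Eulerian state spaces such agents cannot get trapped revisiting states for long, which is the intuition behind the paper's easiness result.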
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the newest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on the standard portfolio benchmark data sets used in the literature. Our modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.
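The core attraction step of the standard firefly algorithm (the base that the paper modifies) can be sketched as follows; names and default parameters are illustrative:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.0, rng=random):
    # xi moves toward a brighter firefly xj with attractiveness
    # beta0 * exp(-gamma * r^2), plus an alpha-scaled random-walk term.
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(xi, xj)]
```

The paper's modification targets exploration in early iterations; a common way to do that is to start with a larger alpha and decay it over time, though the exact mechanism is specific to the paper.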
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints that arise in real applications are less studied, especially when one task is involved in several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). To bridge the gap between the discrete nature of TALB-MC and the continuous nature of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among the constraints. The positional constraint is allowed to be violated to some extent during decoding and is penalized in the cost function. Finally, with TLBO seeking the global optimum, variable neighborhood search (VNS) is hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing (LAHC) algorithm for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research thus provides an effective and efficient algorithm for the TALB-MC problem by hybridizing TLBO and VNS.
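The random-keys idea mentioned above is simple to state: a continuous vector (which TLBO can manipulate directly) is decoded into a task permutation by ranking its components. A minimal sketch:

```python
def random_keys_decode(keys):
    # Sort task indices by their continuous key values; the resulting
    # order is the task permutation fed to the decoding/scheduling step.
    return sorted(range(len(keys)), key=lambda i: keys[i])
```

Any continuous-space move the optimizer makes on the keys always decodes to a valid permutation, which is the point of the representation.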
Delocalization of Two-Dimensional Random Surfaces with Hard-Core Constraints
NASA Astrophysics Data System (ADS)
Miłoś, Piotr; Peled, Ron
2015-11-01
We study the fluctuations of random surfaces on a two-dimensional discrete torus. The random surfaces we consider are defined via a nearest-neighbor pair potential, which we require to be twice continuously differentiable on a (possibly infinite) interval and infinity outside of this interval. No convexity assumption is made and we include the case of the so-called hammock potential, when the random surface is uniformly chosen from the set of all surfaces satisfying a Lipschitz constraint. Our main result is that these surfaces delocalize, having fluctuations whose variance is at least of order log n, where n is the side length of the torus. We also show that the expected maximum of such surfaces is of order at least log n. The main tool in our analysis is an adaptation to the lattice setting of an algorithm of Richthammer, who developed a variant of a Mermin-Wagner-type argument applicable to hard-core constraints. We rely also on the reflection positivity of the random surface model. The result answers a question mentioned by Brascamp et al. on the hammock potential and a question of Velenik.
Constraint identification and algorithm stabilization for degenerate nonlinear programs.
Wright, S. J.; Mathematics and Computer Science
2003-01-01
In the vicinity of a solution of a nonlinear programming problem at which both strict complementarity and linear independence of the active constraints may fail to hold, we describe a technique for distinguishing weakly active from strongly active constraints. We show that this information can be used to modify the sequential quadratic programming algorithm so that it exhibits superlinear convergence to the solution under assumptions weaker than those made in previous analyses.
Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.
Friedrich, Tobias; Neumann, Frank
2015-01-01
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called the (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a 1/(k + δ)-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
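A minimal (1 + 1) EA of the kind analyzed above, shown here on a toy monotone submodular (coverage) function with a cardinality constraint; rejecting infeasible offspring is one simple constraint-handling choice, not necessarily the paper's exact setup:

```python
import random

def one_plus_one_ea(f, n, k, iters=2000, seed=0):
    # Standard bit mutation: flip each bit independently with prob 1/n.
    # The offspring replaces the parent if it stays feasible (|S| <= k)
    # and does not decrease f.
    rng = random.Random(seed)
    x = [0] * n
    fx = f(x)
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) <= k:
            fy = f(y)
            if fy >= fx:
                x, fx = y, fy
    return x, fx

# Toy coverage function: f(x) = size of the union of the selected sets.
sets = [{1, 2}, {2, 3}, {4}, {1, 4}]
cover = lambda x: len(set().union(*[s for s, b in zip(sets, x) if b]))
```

Coverage functions are monotone and submodular, so this toy instance falls under the cardinality-constraint case analyzed in the paper.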
Heinstein, M.W.
1997-10-01
A contact enforcement algorithm has been developed for matrix-free quasistatic finite element techniques. Matrix-free (iterative) solution algorithms such as nonlinear Conjugate Gradients (CG) and Dynamic Relaxation (DR) are distinctive in that the number of iterations required for convergence is typically of the same order as the number of degrees of freedom of the model. From iteration to iteration the contact normal and tangential forces vary significantly, making contact constraint satisfaction tenuous. Furthermore, global determination and enforcement of the contact constraints at every iteration could be questioned on the grounds of efficiency. This work addresses this situation by introducing an intermediate iteration for treating the active gap constraint while at the same time exactly (kinematically) enforcing the linearized gap rate constraint for both frictionless and frictional response.
Leaf Sequencing Algorithm Based on MLC Shape Constraint
NASA Astrophysics Data System (ADS)
Jing, Jia; Pei, Xi; Wang, Dong; Cao, Ruifen; Lin, Hui
2012-06-01
Intensity modulated radiation therapy (IMRT) requires the determination of the appropriate multileaf collimator settings to deliver an intensity map. The purpose of this work was to attempt to regulate the shape between adjacent multileaf collimator apertures by a leaf sequencing algorithm. To qualify and validate this algorithm, the integral test for the segment of the multileaf collimator of ARTS was performed with clinical intensity map experiments. By comparisons and analyses of the total number of monitor units and number of segments with benchmark results, the proposed algorithm performed well while the segment shape constraint produced segments with more compact shapes when delivering the planned intensity maps, which may help to reduce the multileaf collimator's specific effects.
Emissivity range constraints algorithm for multi-wavelength pyrometer (MWP).
Xing, Jian; Rana, R S; Gu, Weihong
2016-08-22
In order to realize rapid, true-temperature measurement of high-temperature targets by a multi-wavelength pyrometer (MWP), an emissivity-range-constrained data processing algorithm, unaffected by the unknown emissivity, has been developed. By exploring the relation between emissivity deviation and true temperature through fitting a large number of data from target models with different emissivity distributions, an effective search range of emissivity is obtained for each iteration, so the data processing time is greatly reduced. Simulation and experimental results indicate that the calculation time is less than 0.2 seconds with a 25 K absolute error at a true temperature of 1800 K, and the efficiency is improved by more than 90% compared with the previous algorithm. The method has the advantages of simplicity, rapidity, and suitability for in-line high-temperature measurement.
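The idea of constraining the emissivity search range can be illustrated with a toy inversion under the Wien approximation (this sketch is not the authors' algorithm; the grid search and flatness criterion are illustrative): for each candidate temperature the implied emissivities are signal/blackbody, candidates whose emissivities leave the allowed range are discarded, and the flattest emissivity spectrum wins.

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def planck_wien(lam, T):
    # Blackbody spectral radiance up to a constant factor (Wien limit).
    return lam ** -5 * math.exp(-C2 / (lam * T))

def fit_temperature(lams, signals, T_grid, eps_lo=0.05, eps_hi=1.0):
    # For each candidate T, the implied emissivities are signal/blackbody;
    # keep T only if they all fall inside [eps_lo, eps_hi], and return
    # the candidate whose emissivity spectrum has the smallest spread.
    best_T, best_spread = None, float("inf")
    for T in T_grid:
        eps = [s / planck_wien(l, T) for l, s in zip(lams, signals)]
        if all(eps_lo <= e <= eps_hi for e in eps):
            spread = max(eps) - min(eps)
            if spread < best_spread:
                best_T, best_spread = T, spread
    return best_T
```

Restricting the admissible emissivity range shrinks the set of candidate temperatures that must be evaluated, which is the source of the speedup the abstract reports.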
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed to process a job on the production floor. However, most classical scheduling problems ignore the constraint imposed by the availability of workers and consider only machines as a limited resource. In addition, with the development of production technology, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from machines capable of performing more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is an NP-hard problem that requires long computational times. A metaheuristic approach based on a genetic algorithm is used due to its practical applicability in industry. The developed genetic algorithm uses an indirect chromosome representation and a procedure to transform a chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used, with tardiness minimization as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.
Graph-based multi-surface segmentation of OCT data using trained hard and soft constraints.
Dufour, Pascal A; Ceklic, Lala; Abdillahi, Hannan; Schröder, Simon; De Dzanet, Sandro; Wolf-Schnurrbusch, Ute; Kowal, Jens
2013-03-01
Optical coherence tomography (OCT) is a well-established imaging modality in ophthalmology and is used daily in the clinic. Automatic evaluation of such datasets requires an accurate segmentation of the retinal cell layers. However, due to the naturally low signal-to-noise ratio and the resulting poor image quality, this task remains challenging. We propose an automatic graph-based multi-surface segmentation algorithm that internally uses soft constraints to add prior information from a learned model. This improves the accuracy of the segmentation and increases the robustness to noise. Furthermore, we show that the graph size can be greatly reduced by applying a smart segmentation scheme. This allows the segmentation to be computed in seconds instead of minutes, without deteriorating the segmentation accuracy, making it ideal for a clinical setup. An extensive evaluation on 20 OCT datasets of healthy eyes showed a mean unsigned segmentation error of 3.05 ± 0.54 μm over all datasets when compared to the average observer, which is lower than the inter-observer variability. Similar performance was measured for the task of drusen segmentation, demonstrating the usefulness of soft constraints as a tool for dealing with pathologies.
NASA Astrophysics Data System (ADS)
Virrueta, A.; Gaines, J.; O'Hern, C. S.; Regan, L.
2015-03-01
Current research in the O'Hern and Regan laboratories focuses on the development of hard-sphere models with stereochemical constraints for protein structure prediction as an alternative to molecular dynamics methods that utilize knowledge-based corrections in their force-fields. Beginning with simple hydrophobic dipeptides like valine, leucine, and isoleucine, we have shown that our model is able to reproduce the side-chain dihedral angle distributions derived from sets of high-resolution protein crystal structures. However, methionine remains an exception - our model yields a chi-3 side-chain dihedral angle distribution that is relatively uniform from 60 to 300 degrees, while the observed distribution displays peaks at 60, 180, and 300 degrees. Our goal is to resolve this discrepancy by considering clashes with neighboring residues, and averaging the reduced distribution of allowable methionine structures taken from a set of crystallized proteins. We will also re-evaluate the electron density maps from which these protein structures are derived to ensure that the methionines and their local environments are correctly modeled. This work will ultimately serve as a tool for computing side-chain entropy and protein stability. A. V. is supported by an NSF Graduate Research Fellowship and a Ford Foundation Fellowship. J. G. is supported by NIH training Grant NIH-5T15LM007056-28.
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or
Approximation algorithms for NEXPTIME-hard periodically specified problems and domino problems
Marathe, M.V.; Hunt, H.B., III; Stearns, R.E.; Rosenkrantz, D.J.
1996-02-01
We study the efficient approximability of two general classes of problems: (1) optimization versions of the domino problems studied in [Ha85, Ha86, vEB83, SB84] and (2) graph and satisfiability problems specified using various kinds of periodic specifications. Both easiness and hardness results are obtained. Our efficient approximation algorithms and schemes are based on extensions of ideas from earlier work. Two notable properties of the results obtained here are: (1) for the first time, efficient approximation algorithms and schemes have been developed for natural NEXPTIME-complete problems; and (2) our results are the first polynomial-time approximation algorithms with good performance guarantees for 'hard' problems specified using the various kinds of periodic specifications considered in this paper. Our results significantly extend the results in [HW94, Wa93, MH+94].
A pegging algorithm for separable continuous nonlinear knapsack problems with box constraints
NASA Astrophysics Data System (ADS)
Kim, Gitae; Wu, Chih-Hang
2012-10-01
This article proposes an efficient pegging algorithm for solving separable continuous nonlinear knapsack problems with box constraints. A well-known pegging algorithm for this problem is the Bitran-Hax algorithm, a preferred choice for large-scale problems. However, at each iteration it must calculate an optimal dual variable and update all free primal variables, which is time-consuming. The proposed algorithm checks the box constraints implicitly, using bounds on the Lagrange multiplier, without explicitly calculating the primal variables at each iteration, and updates the dual solution in a more efficient manner. Computational experiments show that the proposed algorithm consistently outperforms the Bitran-Hax algorithm in all baseline tests and in two real-time application models. The proposed algorithm shows significant potential for many other mathematical models in real-world applications, with straightforward extensions.
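The multiplier-bounding idea can be illustrated on the separable quadratic case: with objective sum(0.5*a_i*x_i^2 - c_i*x_i), a resource constraint sum(x) = b, and box constraints, the optimal x_i is a clipped affine function of the multiplier, which can be bracketed and bisected. A sketch (bisection stands in for the paper's pegging updates; names are hypothetical):

```python
import numpy as np

def knapsack_box(a, c, b, lo, hi, tol=1e-10):
    # minimize sum(0.5*a*x**2 - c*x)  s.t.  sum(x) == b,  lo <= x <= hi.
    # KKT gives x_i(lam) = clip((c_i - lam) / a_i, lo_i, hi_i), which is
    # nonincreasing in lam, so the resource equation has a unique root.
    x = lambda lam: np.clip((c - lam) / a, lo, hi)
    lam_lo = np.min(c - a * hi)   # here x(lam_lo) == hi (max resource use)
    lam_hi = np.max(c - a * lo)   # here x(lam_hi) == lo (min resource use)
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        if x(lam).sum() > b:
            lam_lo = lam
        else:
            lam_hi = lam
    return x(0.5 * (lam_lo + lam_hi))
```

The box constraints are handled implicitly through the clip, which is the same structural observation the abstract's Lagrange-multiplier bounds exploit.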
Reduced sensitivity algorithm for optical processors using constraints and ridge regression.
Casasent, D; Ghosh, A
1988-04-15
Optical linear algebra processors that involve solutions of linear algebraic equations have significant potential in adaptive and inference machines. We present an algorithm that includes constraints on the accuracy of the processor and improves the accuracy of the results obtained from such analog processors. The constraint algorithm matches the problem to the accuracy of the processor. Calculation of the adaptive weights in a phased-array radar is used as a case study. Simulation results confirm the advertised benefits. The desensitization of the calculated weights to computational errors in the processor is quantified. Ridge regression is used to determine the parameter needed in the algorithm.
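Ridge regression as used here replaces an ill-conditioned normal-equations solve with a regularized one; a minimal sketch (the regularization weight lam is the parameter the abstract says the algorithm determines):

```python
import numpy as np

def ridge_weights(A, d, lam=0.1):
    # Solve (A^T A + lam * I) w = A^T d. The lam * I term bounds the
    # condition number, desensitizing w to analog computation errors.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)
```

Larger lam trades a small bias in the weights for lower sensitivity to processor error, which is the desensitization quantified in the paper.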
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for the kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained with an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of these solution procedures yields a computationally more accurate solution. To speed up the computations, parallel implementation of the constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which the Schur complement form was derived. By fully exploiting sparse-matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Parallelized event chain algorithm for dense hard sphere and polymer systems
Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan
2015-01-15
We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach with simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.
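The event chain move itself is easiest to see in one dimension. The sketch below performs a single chain for hard rods on a ring (illustrative only; the paper treats hard disks in 2D): the pushed rod absorbs as much displacement as the free gap ahead allows, and the remainder 'lifts' to the next rod.

```python
def event_chain_step(pos, sigma, box, start, ell):
    # pos: rod positions on a ring of length `box`; sigma: rod length;
    # ell: total chain displacement budget, always applied to the right.
    pos = sorted(pos)
    n = len(pos)
    i = start
    while ell > 0:
        j = (i + 1) % n
        gap = (pos[j] - pos[i] - sigma) % box  # free space ahead of rod i
        step = min(ell, gap)
        pos[i] = (pos[i] + step) % box
        ell -= step                            # leftover lifts to rod j
        i = j
    return sorted(pos)
```

Because the whole displacement budget is spent regardless of how many rods participate, each chain is a rejection-free cluster move, which is what makes the method efficient at high density.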
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also offers the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint satisfaction problem (CSP) area could also easily be plugged into this algorithm for further gains in efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adapted for many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
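As a minimal illustration of constraint processing with relational operators, the sketch below solves a CSP by repeated natural joins of constraint relations, each given as a set of tuples over a variable scope. The function names and the naive nested-loop join are illustrative only, not Van Beek's or the authors' algorithm.

```python
def nat_join(r1, vars1, r2, vars2):
    """Natural join of two relations (sets of tuples) with variable scopes."""
    common = [v for v in vars1 if v in vars2]
    out_vars = vars1 + [v for v in vars2 if v not in vars1]
    i1 = [vars1.index(v) for v in common]
    i2 = [vars2.index(v) for v in common]
    rest2 = [k for k, v in enumerate(vars2) if v not in vars1]
    out = set()
    for t1 in r1:
        for t2 in r2:
            if all(t1[a] == t2[b] for a, b in zip(i1, i2)):
                out.add(t1 + tuple(t2[k] for k in rest2))
    return out, out_vars


def solve_csp(constraints):
    """Join all constraint relations into the relation of consistent assignments.

    `constraints` is a list of (scope, relation) pairs; the result covers
    the union of all scopes.
    """
    (vars_, rel), *rest = [(list(s), set(r)) for s, r in constraints]
    for s, r in rest:
        rel, vars_ = nat_join(rel, vars_, r, s)
    return vars_, rel
```

In a real system the joins would be delegated to an RDBMS, which is precisely the practical appeal noted in the abstract.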
NEW CONSTRAINTS ON THE BLACK HOLE LOW/HARD STATE INNER ACCRETION FLOW WITH NuSTAR
Miller, J. M.; King, A. L.; Tomsick, J. A.; Boggs, S. E.; Bachetti, M.; Wilkins, D.; Christensen, F. E.; Craig, W. W.; Fabian, A. C.; Kara, E.; Grefenstette, B. W.; Harrison, F. A.; Hailey, C. J.; Stern, D. K.; Zhang, W. W.
2015-01-20
We report on an observation of the Galactic black hole candidate GRS 1739–278 during its 2014 outburst, obtained with NuSTAR. The source was captured at the peak of a rising "low/hard" state, at a flux of ∼0.3 Crab. A broad, skewed iron line and disk reflection spectrum are revealed. Fits to the sensitive NuSTAR spectra with a number of relativistically blurred disk reflection models yield strong geometrical constraints on the disk and hard X-ray "corona". Two models that explicitly assume a "lamp post" corona find its base to have a vertical height above the black hole of h = 5_{-2}^{+7} GM/c^2 and h = 18 ± 4 GM/c^2 (90% confidence errors); models that do not assume a "lamp post" return emissivity profiles that are broadly consistent with coronae of this size. Given that X-ray microlensing studies of quasars and reverberation lags in Seyferts find similarly compact coronae, observations may now signal that compact coronae are fundamental across the black hole mass scale. All of the models fit to GRS 1739–278 find that the accretion disk extends very close to the black hole—the least stringent constraint is r_in = 5_{-4}^{+3} GM/c^2. Only two of the models deliver meaningful spin constraints, but a = 0.8 ± 0.2 is consistent with all of the fits. Overall, the data provide especially compelling evidence of an association between compact hard X-ray coronae and the base of relativistic radio jets in black holes.
A fast multigrid algorithm for energy minimization under planar density constraints.
Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science
2010-09-07
The two-dimensional layout optimization problem reinforced by the efficient space utilization demand has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear time multigrid algorithm for solving a correction to this problem. The method is demonstrated in various graph drawing (visualization) instances.
Dogrusoz, Yesim Serinagaoglu; Gavgani, Alireza Mazloumi
2013-04-01
In inverse electrocardiography, the goal is to estimate cardiac electrical sources from potential measurements on the body surface. It is by nature an ill-posed problem, and regularization must be employed to obtain reliable solutions. This paper employs the multiple constraint solution approach proposed in Brooks et al. (IEEE Trans Biomed Eng 46(1):3-18, 1999) and extends its practical applicability to include more than two constraints by finding appropriate values for the multiple regularization parameters. Here, we propose the use of real-valued genetic algorithms for the estimation of multiple regularization parameters. Theoretically, it is possible to include as many constraints as necessary and find the corresponding regularization parameters using this approach. We have shown the feasibility of our method using two and three constraints. The results indicate that GA could be a good approach for the estimation of multiple regularization parameters.
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
On-line reentry guidance algorithm with both path and no-fly zone constraints
NASA Astrophysics Data System (ADS)
Zhang, Da; Liu, Lei; Wang, Yongji
2015-12-01
This study proposes an on-line predictor-corrector reentry guidance algorithm that satisfies path and no-fly zone constraints for hypersonic vehicles with a high lift-to-drag ratio. The proposed guidance algorithm can generate a feasible trajectory at each guidance cycle during the entry flight. In the longitudinal profile, numerical predictor-corrector approaches are used to predict the flight capability from current flight states to expected terminal states and to generate an on-line reference drag acceleration profile. The path constraints on heat rate, aerodynamic load, and dynamic pressure are implemented as a part of the predictor-corrector algorithm. A tracking control law is then designed to track the reference drag acceleration profile. In the lateral profile, a novel guidance algorithm is presented. The velocity azimuth angle error threshold and artificial potential field method are used to reduce heading error and to avoid the no-fly zone. Simulated results for nominal and dispersed cases show that the proposed guidance algorithm not only can avoid the no-fly zone but can also steer a typical entry vehicle along a feasible 3D trajectory that satisfies both terminal and path constraints.
Yurtkuran, Alkın; Emel, Erdal
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints associated with the corresponding time windows of customers are introduced and combined with a penalty approach to eliminate infeasibilities arising from time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics on several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times than the test algorithms.
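The penalty side of such a constraint handling scheme can be illustrated as follows: a tour's travel cost is augmented by a penalty proportional to total time-window lateness, with waiting allowed when a vehicle arrives early. This is a generic penalty sketch, not the paper's EMA or its variable bounding strategy.

```python
def penalized_cost(tour, travel, windows, penalty=1000.0):
    """Travel cost of a TSPTW tour plus a penalty for time-window violations.

    `travel[i][j]` is the travel time from i to j; `windows[i] = (ready, due)`.
    Arrival before `ready` waits until the window opens; arrival after `due`
    accrues lateness that is penalized in the objective.
    """
    t, cost, lateness = 0.0, 0.0, 0.0
    for a, b in zip(tour, tour[1:]):
        cost += travel[a][b]
        t += travel[a][b]
        ready, due = windows[b]
        if t < ready:
            t = ready                # wait until the window opens
        elif t > due:
            lateness += t - due      # record the violation
    return cost + penalty * lateness
```

A metaheuristic can then minimize this single scalar, with feasible tours recovering the pure travel cost.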
Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas
NASA Technical Reports Server (NTRS)
Smith, Barbara M.; Bennett, Sean
1992-01-01
A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
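The "improve a feasible solution locally while rechecking the constraints" idea can be sketched as below. Here `feasible` stands in for the full CSP constraint check, and the move set (direct placement of unassigned tasks only) is far simpler than the rota system's local improvement algorithm.

```python
def local_improve(assign, unassigned, staff, feasible):
    """Greedy local improvement on a feasible rota.

    Repeatedly tries to place each unassigned task with some anaesthetist,
    accepting a placement only if the whole assignment stays feasible
    under the CSP constraints. Returns the improved assignment and the
    tasks that still could not be placed.
    """
    improved = True
    while improved:
        improved = False
        for task in list(unassigned):
            for p in staff:
                trial = dict(assign)
                trial[task] = p
                if feasible(trial):          # feasibility is maintained
                    assign = trial
                    unassigned.remove(task)
                    improved = True
                    break
    return assign, unassigned
```

Because every accepted move is rechecked against the constraints, the result is always a feasible rota, mirroring the paper's argument for combining CSP formulations with local improvement.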
An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints
Rao, Yunqing; Qi, Dezhong; Li, Jinling
2013-01-01
An improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed for the first time in the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony—hierarchical genetic algorithm) is developed for better solutions, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence and resolve local convergence issues, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparisons show that the presented approach is quite effective for the considered problem. PMID:24489491
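One common form of adaptive crossover and mutation probabilities (after Srinivas and Patnaik) lowers the rates for above-average individuals, to preserve good genes, and keeps the maximum rates for below-average ones; the paper's exact scheme may differ, so treat this as a representative sketch.

```python
def adaptive_rates(f, f_avg, f_max, pc_max=0.9, pm_max=0.1):
    """Adaptive crossover (pc) and mutation (pm) probabilities.

    Individuals with fitness above the population average receive rates
    scaled down toward zero as they approach the best fitness; individuals
    at or below average receive the maximum rates.
    """
    if f_max == f_avg:                   # degenerate population: use max rates
        return pc_max, pm_max
    if f >= f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return pc_max * scale, pm_max * scale
    return pc_max, pm_max
```

This keeps exploration pressure on weak individuals while protecting elite ones, which is the usual motivation for adaptive probabilities in hierarchical GAs.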
Zhang, Jinkai; Rivard, Benoit; Rogge, D.M.
2008-01-01
Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem; it assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on the convex geometry and orthogonal projection common to other endmember search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (the AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (like IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets have spatial continuity in imagery (e.g., bedrock lithologies) and thus a spatial constraint would be beneficial in the endmember search. An additional product of the SPA is data describing the change of the simplex volume ratio between successive iterations during the endmember extraction. This ratio illustrates the influence of a new endmember on the data structure and provides information on the convergence of the algorithm; it can serve as a general guideline to constrain the total number of endmembers in a search.
Precise algorithm to generate random sequential addition of hard hyperspheres at saturation.
Zhang, G; Torquato, S
2013-11-01
The study of the packing of hard hyperspheres in d-dimensional Euclidean space R^{d} has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions, the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces packings that are guaranteed to contain no available space in a large simulation box using finite computational time with heretofore unattained precision and across the widest range of dimensions (2≤d≤8). We have also calculated the packing and covering densities, pair correlation function g(2)(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed "decorrelation" principle, and the degree of "hyperuniformity" (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the second moment of inertia of the average
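A bare-bones RSA process for hard disks in two dimensions is easy to state. Unlike the refined algorithm described above, the sketch below simply stops after a fixed number of trial insertions and is therefore not guaranteed to reach saturation.

```python
import random


def rsa_disks(box, radius, attempts, seed=0):
    """Random sequential addition of hard disks in a square box.

    Trial centers are drawn uniformly (kept away from the walls for
    simplicity); a trial is accepted iff the new disk overlaps no
    previously placed disk, and rejected trials are simply discarded.
    """
    rng = random.Random(seed)
    centers = []
    d2 = (2 * radius) ** 2                      # squared contact distance
    for _ in range(attempts):
        x = rng.uniform(radius, box - radius)
        y = rng.uniform(radius, box - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= d2 for cx, cy in centers):
            centers.append((x, y))              # no overlap: accept
    return centers
```

Because acceptance becomes rarer as the packing fills, naive trial counting converges slowly, which is exactly why the paper tracks the available space explicitly to certify saturation.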
An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.
Zhang, Ye; Yu, Tenglong; Wang, Wenwu
2014-01-01
Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.
Expanding Metabolic Engineering Algorithms Using Feasible Space and Shadow Price Constraint Modules
Tervo, Christopher J.; Reed, Jennifer L.
2014-01-01
While numerous computational methods have been developed that use genome-scale models to propose mutants for the purpose of metabolic engineering, they generally compare mutants based on a single criterion (e.g., production rate at a mutant’s maximum growth rate). As such, these approaches remain limited in their ability to include multiple complex engineering constraints. To address this shortcoming, we have developed feasible space and shadow price constraint (FaceCon and ShadowCon) modules that can be added to existing mixed integer linear adaptive evolution metabolic engineering algorithms, such as OptKnock and OptORF. These modules allow strain designs to be identified amongst a set of multiple metabolic engineering algorithm solutions that are capable of high chemical production while also satisfying additional design criteria. We describe the various module implementations and their potential applications to the field of metabolic engineering. We then incorporated these modules into the OptORF metabolic engineering algorithm. Using an Escherichia coli genome-scale model (iJO1366), we generated different strain designs for the anaerobic production of ethanol from glucose, thus demonstrating the tractability and potential utility of these modules in metabolic engineering algorithms. PMID:25478320
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about a certain property of an object that can be present if and only if it adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to “efficiently” solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), albeit parametric. The only requirement is sufficient computational power, which is controlled by the parameter . Nevertheless, here it is proved that the probability of requiring a value of to obtain a solution for a random graph decreases exponentially: , making tractable almost all problem instances. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
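For contrast with the paper's parametric method, plain backtracking already yields certificates of both kinds: a proper 3-coloring when one exists (an NP-certificate), and a proof of non-3-colorability by exhaustion of the search tree, albeit at exponential cost rather than the efficiency the paper claims.

```python
def three_color(n, edges):
    """Backtracking search for a proper 3-coloring of an n-vertex graph.

    Returns a list of colors (0/1/2) per vertex if a proper coloring
    exists, else None after exhausting the search tree. Exponential-time
    backtracking, shown only to illustrate the two certificate cases.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colors = [-1] * n

    def extend(v):
        if v == n:
            return True                           # all vertices colored
        for c in range(3):
            if all(colors[u] != c for u in adj[v]):
                colors[v] = c
                if extend(v + 1):
                    return True
                colors[v] = -1                    # undo and try next color
        return False

    return colors if extend(0) else None
```

Verifying a returned coloring takes linear time in the number of edges, which is the asymmetry between finding and checking that the abstract turns on.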
Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji
2014-01-01
An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an obstacle distance measure for spatial clustering under obstacle constraints. Firstly, we present a path searching algorithm to approximate the obstacle distance between two points while dealing with obstacles and facilitators. Taking obstacle distance as the similarity metric, we subsequently propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on the artificial immune system is also applied to a public facility location problem in order to establish the practical applicability of our approach. By using the clone selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect.
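In the simplest grid-based setting, the obstacle distance between two points can be approximated by a breadth-first path search that treats obstacle cells as blocked. This stand-in ignores facilitators and is not the paper's path searching algorithm.

```python
from collections import deque


def obstacle_distance(grid, start, goal):
    """Shortest 4-neighbor path length on a grid with obstacles (BFS).

    `grid[r][c] == 1` marks an obstacle cell; `start` and `goal` are
    (row, col) tuples. Returns the number of steps on a shortest
    obstacle-avoiding path, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None
```

Substituting such a distance for the Euclidean one is what lets a clustering algorithm respect walls, rivers, and other obstacle entities.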
Bowen, J.; Dozier, G.
1996-12-31
This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to realize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
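A minimal hill-climbing baseline of the kind such hybrids are compared against is the min-conflicts heuristic: repeatedly pick a conflicted variable and move it to the value that minimizes its conflicts. The sketch below is generic and contains none of the hybrid's arc or path revision.

```python
import random


def min_conflicts(variables, domains, conflicts, steps=1000, seed=0):
    """Min-conflicts hill climbing for a CSP.

    `conflicts(v, val, assign)` counts constraint violations variable v
    would have with value `val` given the rest of `assign`. Returns a
    conflict-free assignment, or None if none is found within `steps`.
    """
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(steps):
        bad = [v for v in variables if conflicts(v, assign[v], assign) > 0]
        if not bad:
            return assign                         # solution found
        v = rng.choice(bad)                       # repair a conflicted variable
        assign[v] = min(domains[v], key=lambda x: conflicts(v, x, assign))
    return None                                   # step budget exhausted
```

Such local search cannot detect an inconsistent constraint network, which is exactly the gap the paper's interleaved arc and path revision is meant to close.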
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
Mathews, David H.; Disney, Matthew D.; Childs, Jessica L.; Schroeder, Susan J.; Zuker, Michael; Turner, Douglas H.
2004-01-01
A dynamic programming algorithm for prediction of RNA secondary structure has been revised to accommodate folding constraints determined by chemical modification and to include free energy increments for coaxial stacking of helices when they are either adjacent or separated by a single mismatch. Furthermore, free energy parameters are revised to account for recent experimental results for terminal mismatches and hairpin, bulge, internal, and multibranch loops. To demonstrate the applicability of this method, in vivo modification was performed on 5S rRNA in both Escherichia coli and Candida albicans with 1-cyclohexyl-3-(2-morpholinoethyl) carbodiimide metho-p-toluene sulfonate, dimethyl sulfate, and kethoxal. The percentage of known base pairs in the predicted structure increased from 26.3% to 86.8% for the E. coli sequence by using modification constraints. For C. albicans, the accuracy remained 87.5% both with and without modification data. On average, for these sequences and a set of 14 sequences with known secondary structure and chemical modification data taken from the literature, accuracy improves from 67% to 76%. This enhancement primarily reflects improvement for three sequences that are predicted with <40% accuracy on the basis of energetics alone. For these sequences, inclusion of chemical modification constraints improves the average accuracy from 28% to 78%. For the 11 sequences with <6% pseudoknotted base pairs, structures predicted with constraints from chemical modification contain on average 84% of known canonical base pairs. PMID:15123812
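As a toy illustration of how chemical-modification constraints can enter a folding recursion — using the simple Nussinov base-pair-maximization dynamic program rather than the paper's free-energy algorithm — positions flagged as modified (and therefore presumed single-stranded) can simply be excluded from pairing:

```python
def nussinov_constrained(seq, unpaired=frozenset(), min_loop=3):
    """Maximize canonical base pairs; positions in `unpaired` (e.g.
    chemically modified, hence single-stranded) may not pair."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]  # case: j left unpaired
            for k in range(i, j - min_loop):  # case: j pairs with k
                if (seq[k], seq[j]) in pairs and k not in unpaired and j not in unpaired:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

For "GGGAAACCC" the unconstrained maximum is three G-C pairs; forbidding position 2 from pairing (as a modification constraint would) drops it to two. The thermodynamic algorithm of the text applies the same idea inside a free-energy recursion instead of a pair count.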
A constraint-based search algorithm for parameter identification of environmental models
NASA Astrophysics Data System (ADS)
Gharari, S.; Shafiei, M.; Hrachowitz, M.; Kumar, R.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.
2014-12-01
Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available (e.g., when making predictions in ungauged basins). In this study, we provide an alternative approach for parameter identification using constraints based on two types of restrictions derived from prior (or expert) knowledge. The first, called parameter constraints, restricts the solution space based on realistic relationships that must hold between the different model parameters, while the second, called process constraints, requires that additional realistic relationships between the fluxes and state variables be satisfied. Specifically, we propose a search algorithm for finding parameter sets that simultaneously satisfy such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.
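A minimal sketch of the core idea — drawing parameter sets and keeping only those that satisfy all prior-knowledge constraints — might look as follows. The paper's stepwise sampling is replaced here by plain rejection sampling, and the parameter names and the example constraint are illustrative, not the authors':

```python
import random

def constrained_sample(n_sets, bounds, constraints, max_tries=100000, seed=0):
    """Draw parameter sets uniformly within `bounds`; keep a set only if
    every constraint function returns True for it."""
    rng = random.Random(seed)
    kept = []
    for _ in range(max_tries):
        if len(kept) == n_sets:
            break
        p = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        if all(c(p) for c in constraints):
            kept.append(p)
    return kept

# hypothetical example: two recession constants where the "slow" store
# must drain more slowly (smaller constant) than the "fast" one
bounds = {"k_fast": (0.0, 1.0), "k_slow": (0.0, 1.0)}
constraints = [lambda p: p["k_slow"] < p["k_fast"]]
sets = constrained_sample(10, bounds, constraints)
```

Rejection sampling becomes inefficient when the feasible region is small, which is precisely why the authors propose a stepwise search through the parameter space instead.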
NASA Astrophysics Data System (ADS)
Sohrabi, Foad; Davidson, Timothy N.
2016-06-01
We consider the problem of power allocation for the single-cell multi-user (MU) multiple-input single-output (MISO) downlink with quality-of-service (QoS) constraints. The base station acquires an estimate of the channels and, for a given beamforming structure, designs the power allocation so as to minimize the total transmission power required to ensure that target signal-to-interference-and-noise ratios at the receivers are met, subject to a specified outage probability. We consider scenarios in which the errors in the base station's channel estimates can be modelled as being zero-mean and Gaussian. Such a model is particularly suitable for time division duplex (TDD) systems with quasi-static channels, in which the base station estimates the channel during the uplink phase. Under that model, we employ a precise deterministic characterization of the outage probability to transform the chance-constrained formulation to a deterministic one. Although that deterministic formulation is not convex, we develop a coordinate descent algorithm that can be shown to converge to a globally optimal solution when the starting point is feasible. Insight into the structure of the deterministic formulation yields approximations that result in coordinate update algorithms with good performance and significantly lower computational cost. The proposed algorithms provide better performance than existing robust power loading algorithms that are based on tractable conservative approximations, and can even provide better performance than robust precoding algorithms based on such approximations.
Solution algorithm of a quasi-Lambert's problem with fixed flight-direction angle constraint
NASA Astrophysics Data System (ADS)
Luo, Qinqin; Meng, Zhanfeng; Han, Chao
2011-04-01
A two-point boundary value problem of the Kepler orbit similar to Lambert's problem is proposed. The problem is to find a Kepler orbit that will travel through the initial and final points in a specified flight time given the radial distances of the two points and the flight-direction angle at the initial point. The Kepler orbits that meet the geometric constraints are parameterized via the universal variable z introduced by Bate. The formula for flight time of the orbits is derived. The admissible interval of the universal variable and the variation pattern of the flight time are explored intensively. A numerical iteration algorithm based on the analytical results is presented to solve the problem. A large number of randomly generated examples are used to test the reliability and efficiency of the algorithm.
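Given the paper's result that the flight time varies in a known pattern with the universal variable over the admissible interval, the final numerical iteration can be sketched as a simple bracketing root search. The toy flight-time function below is a stand-in for the actual universal-variable formula, which is not reproduced here:

```python
import math

def solve_flight_time(T_of_z, T_target, z_lo, z_hi, tol=1e-10, max_iter=200):
    """Bisection for T_of_z(z) = T_target on an interval where the
    flight-time function is known to be monotone and to bracket the target."""
    f_lo = T_of_z(z_lo) - T_target
    for _ in range(max_iter):
        z_mid = 0.5 * (z_lo + z_hi)
        f_mid = T_of_z(z_mid) - T_target
        if abs(f_mid) < tol or (z_hi - z_lo) < tol:
            return z_mid
        if (f_lo < 0) == (f_mid < 0):
            z_lo, f_lo = z_mid, f_mid  # root lies in the upper half
        else:
            z_hi = z_mid               # root lies in the lower half
    return 0.5 * (z_lo + z_hi)

# toy monotone "flight time" T(z) = exp(z) on [0, 2]; solve T(z) = 3
z = solve_flight_time(math.exp, 3.0, 0.0, 2.0)
```

The analytical results in the paper (admissible interval, monotonicity pattern) are exactly what justifies using a guarded bracketing iteration like this rather than an unguarded Newton step.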
Frutos, M.; Méndez, M.; Tohmé, F.; Broz, D.
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGA-II, SPEA2, and IBEA. Using two performance indicators, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool, since it yields more solutions on the approximate Pareto frontier. PMID:24489502
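The hypervolume indicator used above to compare the algorithms has a simple closed form for two objectives: sort the nondominated front by the first objective and sum the rectangles it dominates with respect to a reference point. A minimal sketch for a minimization front (assuming the usual definition) is:

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D minimization front w.r.t. reference point
    `ref`. Points not strictly better than `ref` (or dominated by an
    earlier point) contribute nothing."""
    pts = sorted(front)          # ascending f1; on a proper front f2 descends
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f1 < ref[0] and f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)  # new rectangle slab
            prev_f2 = f2
    return hv
```

For the front {(1,3), (2,2), (3,1)} with reference point (4,4), the dominated region decomposes into slabs of area 3, 2 and 1. Higher-dimensional hypervolume, as needed for many-objective studies, requires considerably more involved algorithms.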
A new algorithm for the evaluation of the global hardness of polyatomic molecules
NASA Astrophysics Data System (ADS)
Islam, Nazmul; Ghosh, Dulal Chandra
2011-03-01
Relying upon the commonality of the basic philosophy of the origin and development of electronegativity and hardness, we have attempted to explore whether a hardness equalization principle can be conceived for polyatomic molecules, analogous to the electronegativity equalization principle. Starting from the new radial-dependent electrostatic definition of the hardness of atoms suggested by the present authors, and assuming that the hardness equalization principle is operative and valid, we have derived a formula for evaluating the hardness of a polyatomic molecule, ?, where n is the number of ligands, ri is the atomic radius of the ith atom and C is a constant. The formula has been used to calculate the hardness values of 380 polyatomic molecules with widely divergent physico-chemical properties. The computed hardness data of a set of representative molecules are in good agreement with the corresponding hardness data evaluated quantum mechanically. The hardness data of the present work are found to be quite efficacious in explaining the known reaction surfaces of some well-known hard-soft acid-base exchange reactions in the real world. However, the hardness data evaluated through the ansatz and the operational and approximate formula of Parr and Pearson correlate poorly with the same reaction surfaces. This study reveals that the new definition of hardness and the assumed model of hardness equalization are scientifically acceptable, valid propositions.
Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example.
Bacardit, Jaume; Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio
2014-09-01
Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized by extremely high dimensionalities. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable new
Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint
Hermant, Audrey
2010-02-15
This paper deals with optimal control problems with a regular second-order state constraint and a scalar control satisfying the strengthened Legendre-Clebsch condition. We study the structural stability of stationary points. It is shown that under a uniform strict complementarity assumption, boundary arcs are stable under sufficiently smooth perturbations of the data. On the contrary, nonreducible touch points are not stable under perturbations. We show that under some reasonable conditions, either a boundary arc or a second touch point may appear. These results allow us to design a homotopy algorithm which automatically detects the structure of the trajectory and initializes the shooting parameters associated with boundary arcs and touch points.
Line Matching Algorithm for Aerial Image Combining image and object space similarity constraints
NASA Astrophysics Data System (ADS)
Wang, Jingxue; Wang, Weixi; Li, Xiaoming; Cao, Zhenyu; Zhu, Hong; Li, Miao; He, Biao; Zhao, Zhigang
2016-06-01
A new straight-line matching method for aerial images is proposed in this paper. In contrast to previous work, this method employs similarity constraints that combine radiometric information in the image with geometric attributes in the object plane. First, initial candidate lines and the elevation values of the lines' projection plane are determined from corresponding points in the neighborhoods of the reference lines. Second, the reference line and candidate lines are back-projected onto the plane, and similarity-measure constraints are enforced to reduce the number of candidates and determine the final corresponding lines in a hierarchical way. Third, "one-to-many" and "many-to-one" matching results are transformed into "one-to-one" results by merging multiple lines into a new one, eliminating errors simultaneously. Finally, the endpoints of corresponding lines are detected by a line-expansion process combined with an "image-object-image" mapping mode. Experimental results show that the proposed algorithm obtains reliable line matching results for aerial images.
An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction
ERIC Educational Resources Information Center
Bhasin, Harpreet
2011-01-01
Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…
A Greedy reassignment algorithm for the PBS minimum monitor unit constraint
NASA Astrophysics Data System (ADS)
Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin
2016-06-01
Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning systems, and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low-weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest-weight spot in the entire field, reassigns its weight to its nearest neighbor(s), and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criterion was the γ-index pass rate that compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1% the planning metric has a standard deviation equal to 18% of the centroid value, showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy reassignment method has a 1.8 times better planning metric at 90% pass rate compared to the other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for choosing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5
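The core loop as described — delete the smallest below-threshold spot and give its weight to the nearest surviving neighbour — can be sketched as follows. The data layout (a list of position/weight pairs) and the single-nearest-neighbour rule are illustrative simplifications; note that total weight is preserved by construction.

```python
def greedy_reassign(spots, min_mu):
    """Greedy reassignment sketch: while any spot weight is below `min_mu`,
    remove the smallest such spot and add its weight to the nearest
    remaining spot. `spots` is a list of ((x, y), weight) pairs."""
    spots = [list(s) for s in spots]  # mutable copies: [(x, y), w]
    while True:
        below = [s for s in spots if s[1] < min_mu]
        if not below or len(spots) < 2:
            break
        smallest = min(below, key=lambda s: s[1])
        spots.remove(smallest)
        (sx, sy), w = smallest
        # nearest neighbour by squared Euclidean distance in the field
        nearest = min(spots, key=lambda s: (s[0][0] - sx) ** 2 + (s[0][1] - sy) ** 2)
        nearest[1] += w
    return [(tuple(p), w) for p, w in spots]

# the 0.5-MU spot merges into its nearest neighbour at (0, 0)
cleaned = greedy_reassign([((0, 0), 5.0), ((1, 0), 0.5), ((3, 0), 2.0)], min_mu=1.0)
```

A clinical implementation would additionally have to respect energy layers and re-evaluate the dose distribution (the γ-index comparison in the text) rather than stop at the weight bookkeeping shown here.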
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain near-optimal solutions and overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized search method that generates neighbouring task-to-QC assignments from an incumbent task-to-QC assignment is developed for executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment into an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). The experimental results show that, compared with other well-known existing approaches, the proposed modified GEO is capable of obtaining optimal or near-optimal solutions in a reasonable time, especially for large-sized problems.
NASA Astrophysics Data System (ADS)
Zhao, Jingtao; Peng, Suping; Du, Wenfeng
2016-02-01
We consider a sparsity-constrained inversion method for detecting small-scale seismic discontinuities, such as edges, faults and cavities, which provide rich information about petroleum reservoirs. However, where there is karstification and interference caused by macro-scale fault systems, these small-scale discontinuities are hard to identify with currently available discontinuity-detection methods. In the subsurface, these small-scale discontinuities are separately and sparsely distributed, and their seismic responses occupy a very small part of the seismic image. Considering these sparsity and non-smoothness features, we propose an effective L2-L0 norm model to improve their resolution. First, we apply a low-order plane-wave destruction method to eliminate macro-scale smooth events. Then, based on the residual data, we use a nonlinear structure-enhancing filter to build an L2-L0 norm model. To search for its solution, an efficient and rapidly convergent penalty decomposition method is employed. The proposed method achieves a significant improvement in enhancing small-scale seismic discontinuities. A numerical experiment and a field data application demonstrate the effectiveness and feasibility of the proposed method in studying the relevant geology of these reservoirs.
Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng
2015-10-01
Temperature measurement is one of the important factors for ensuring product quality, reducing production cost, and ensuring experimental safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry; however, the SM method cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true-temperature measurement is proposed, and constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, the relationship between brightness temperatures at different wavelengths implies that emissivity is an increasing function on an interval if the brightness temperature is increasing or constant over that range, and that emissivity satisfies an inequality relating emissivity and wavelength on an interval if the brightness temperature is decreasing there. With these emissivity-model constraint conditions, built on brightness temperature information, the construction of assumed emissivity values is reduced from multiple classes to one class, avoiding unnecessary emissivity constructions. Simulation experiments and comparisons for two different temperature points are carried out on five measured targets with five representative variation trends of real emissivity: decreasing monotonically, increasing monotonically, first decreasing and then increasing with wavelength, first increasing and then decreasing, and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results.
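The qualitative rule described — brightness temperature rising or flat over a wavelength interval implies increasing emissivity there, while a falling brightness temperature implies an emissivity-wavelength inequality — can be sketched as a simple classifier over wavelength intervals. Function and label names are illustrative, not the paper's notation:

```python
def emissivity_trend_constraints(wavelengths, brightness_T):
    """For each wavelength interval, derive the qualitative emissivity
    constraint from the brightness-temperature trend:
      non-decreasing T_b  -> emissivity must increase on the interval,
      decreasing T_b      -> an emissivity/wavelength inequality applies."""
    rules = []
    for i in range(len(wavelengths) - 1):
        if brightness_T[i + 1] >= brightness_T[i]:
            rules.append("increasing")
        else:
            rules.append("inequality")
    return rules
```

Such per-interval rules are what let the method discard whole classes of assumed emissivity profiles before any search begins, which is where the reported 19-43% speedup over the SM method comes from.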
Parvizi, A; Van den Broek, W; Koch, C T
2016-04-18
The transport of intensity equation (TIE) is widely applied for recovering wave fronts from an intensity measurement and a measurement of its variation along the direction of propagation. To get around the non-uniqueness and ill-conditioning of the solution of the TIE in the very common case of unspecified boundary conditions or noisy data, additional constraints on the solution are necessary. Although, from a numerical optimization point of view, convex constraints such as those imposed by total variation minimization are preferable, we show that in many cases non-convex constraints are necessary to overcome the low-frequency artifacts so typical of convex constraints. We provide simulated and experimental examples that demonstrate the superiority of solutions to the TIE obtained by our recently introduced gradient flipping algorithm over a total-variation-constrained solution. PMID:27137272
NASA Astrophysics Data System (ADS)
Chernyaev, Yu. A.
2016-03-01
A numerical algorithm for minimizing a convex function on a smooth surface is proposed. The algorithm is based on reducing the original problem to a sequence of convex programming problems. Necessary extremum conditions are examined, and the convergence of the algorithm is analyzed.
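For the special case of the unit sphere, the flavour of such a method can be illustrated with a projected-gradient sketch: take an unconstrained gradient step, then project back onto the surface by renormalizing. This is a simplification for illustration, not the paper's sequence of convex programming subproblems:

```python
import math

def minimize_on_sphere(grad, x0, step=0.1, iters=500):
    """Projected-gradient sketch: minimize a convex f on the unit sphere
    by stepping along -grad(f) and renormalizing (the exact Euclidean
    projection back onto the sphere)."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        norm = math.sqrt(sum(xi * xi for xi in x))
        x = [xi / norm for xi in x]
    return x

# minimize the convex f(x) = c . x on the unit sphere: optimum is -c/|c|
c = [3.0, 4.0]
x_opt = minimize_on_sphere(lambda x: c, [1.0, 0.0])
```

For a general smooth surface the projection itself is a nontrivial subproblem, which is one reason the paper reduces the task to a sequence of convex programs with its own convergence analysis.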
Nakanishi, Takashi
2010-05-28
Dimensionally controlled and hierarchically assembled supramolecular architectures in nano/micro/bulk length scales are formed by self-organization of alkyl-conjugated fullerenes. The simple molecular design of covalently attaching hydrophobic long alkyl chains to fullerene (C(60)) is different from the conventional (hydrophobic-hydrophilic) amphiphilic molecular designs. The two different units of the alkyl-conjugated C(60) are incompatible but both are soluble in organic solvents. The van der Waals intermolecular forces among long hydrocarbon chains and the pi-pi interaction between C(60) moieties govern the self-organization of the alkyl-conjugated C(60) derivatives. A delicate balance between the pi-pi and van der Waals forces in the assemblies leads to a wide variety of supramolecular architectures and paves the way for developing supramolecular soft materials possessing various morphologies and functions. For instance, superhydrophobic films, electron-transporting thermotropic liquid crystals and room-temperature liquids have been demonstrated. Furthermore, the unique morphologies of the assemblies can be utilised as a template for the fabrication of nanostructured metallic surfaces in a highly reproducible and sustainable way. The resulting metallic surfaces can serve as excellent active substrates for surface-enhanced Raman scattering (SERS) owing to their plasmon enhancing characteristics. The use of self-assembling supramolecular objects as a structural template to fabricate innovative well-defined metal nanomaterials links soft matter chemistry to hard matter sciences.
NASA Astrophysics Data System (ADS)
Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling
2015-04-01
In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ2 analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, ϕ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ2 analyses, it is observed that (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. These findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, ϕ_{A,H}^i [°]) = (2.87 +0.66/−1.95, −145 +14/−21) and (ρ_A^f, ϕ_A^f [°]) = (0.91 +0.12/−0.13, −37 +10/−9) at 68% C.L. With the best-fit values, most of the theoretical results agree with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α2 related to a large ρ_H result in the wrong sign for A_CP^dir(B^− → π^0 K^{*−}) compared with the most recent BABAR data, which presents a new obstacle to solving the "ππ" and "πK" puzzles through α2. A crosscheck with higher-precision measurements at Belle (or Belle II) and LHCb is urgently expected to confirm or refute this possible mismatch.
Genetic algorithm to design Laue lenses with optimal performance for focusing hard X- and γ-rays
NASA Astrophysics Data System (ADS)
Camattari, Riccardo; Guidi, Vincenzo
2014-10-01
Hard X- and γ-rays can be focused using a Laue lens as a concentrator. Such optics can improve radiation detection in several applications, from observing the most violent phenomena in the sky to diagnostic and therapeutic nuclear medicine. We implemented a code named LaueGen, based on a genetic algorithm, that aims to design optimized Laue lenses. A genetic algorithm was selected because optimizing a Laue lens is a complex and discretized problem. The output of the code is the design of a Laue lens composed of diffracting crystals that are selected and arranged so as to maximize the lens performance. The code can manage crystals of any material and crystallographic orientation, and it is structured so that the user can control all the initial lens parameters. As a result, LaueGen is highly versatile and can be used to design very small lenses, for example for nuclear medicine, or very large lenses, for example for satellite-borne astrophysical missions.
NASA Astrophysics Data System (ADS)
Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong
2013-12-01
X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectrum fitting). Two-dimensional (2D) chemical mapping has been successfully applied to study many functional materials, determining the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) is fitted with a weighted sum of standard spectra of the individual chemical components, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute-force enumeration method, and (ii) a constrained least-squares minimization algorithm that we propose. Since 2D spectrum fitting can be conducted pixel by pixel, both methods can in principle be parallelized. To demonstrate the feasibility of parallel computing for the chemical mapping problem and to quantify the achievable efficiency improvement, we implemented a parallel version of the second approach for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the raw data.
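The brute-force enumeration approach (i) can be sketched for a single pixel in the two-component case. This is a hedged illustration: the function name, the grid step, and the restriction to two reference spectra are assumptions made for clarity, not details taken from the paper.

```python
import numpy as np

def fit_two_fractions(spectrum, ref_a, ref_b, step=0.001):
    """Brute-force enumeration over mixtures w*ref_a + (1-w)*ref_b.

    Returns the weight w in [0, 1] minimizing the squared residual
    against the measured attenuation spectrum.
    """
    best_w, best_err = 0.0, float("inf")
    for w in np.arange(0.0, 1.0 + step, step):
        err = float(np.sum((w * ref_a + (1.0 - w) * ref_b - spectrum) ** 2))
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

Because each pixel is fitted independently, this inner loop is exactly the unit of work that can be farmed out across cluster cores.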
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography produce reconstructions with highly anisotropic resolution, so special algorithms have been developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which extends the applicability of TV regularization to non-piecewise-constant samples, such as biological cells. The approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted to a 3D binary mask; this mask is then applied iteratively in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic the basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any tomographic reconstruction algorithm that supports arbitrary geometries of plane-wave projection acquisition, including optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers while preserving the smooth internal structures of the object. A comparison between three different object-illumination arrangements shows very little impact of the projection acquisition geometry on image quality.
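A minimal 1D sketch of the TV-regularization idea that TVIC builds on, using plain subgradient descent on the ROF-style objective 0.5·||x − y||² + λ·TV(x). This is an illustrative toy under assumed parameter values, not the authors' 3D solver:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, step=0.02, iters=3000):
    """Subgradient descent on 0.5*||x - y||^2 + lam * TV(x),
    where TV(x) = sum_i |x[i+1] - x[i]|."""
    x = y.astype(float).copy()
    for _ in range(iters):
        d = np.zeros_like(x)
        s = np.sign(np.diff(x))   # sign of each forward difference
        d[:-1] -= s               # subgradient of TV w.r.t. x[i]
        d[1:] += s
        x -= step * ((x - y) + lam * d)
    return x
```

Strong TV weighting sharpens edges while flattening noise, which is what makes the result usable as a binary mask in the iterative constraint step.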
Williams, P.T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
Efficient Haplotype Block Partitioning and Tag SNP Selection Algorithms under Various Constraints
Chen, Wen-Pei; Lin, Yaw-Ling
2013-01-01
Patterns of linkage disequilibrium play a central role in genome-wide association studies aimed at identifying genetic variation responsible for common human diseases. These patterns in human chromosomes show a block-like structure, and regions of high linkage disequilibrium are called haplotype blocks. A small subset of SNPs, called tag SNPs, is sufficient to capture the haplotype patterns in each haplotype block. Previously developed algorithms completely partition a haplotype sample into blocks while attempting to minimize the number of tag SNPs. However, when resource limitations prevent genotyping all the tag SNPs, it is desirable to restrict their number. We propose two dynamic programming algorithms, incorporating many diversity evaluation functions, for haplotype block partitioning using a limited number of tag SNPs. We use the proposed algorithms to partition the chromosome 21 haplotype data. When the sample is fully partitioned into blocks, our algorithms identify 2,266 blocks and 3,260 tag SNPs, fewer than reported in previous studies. We also demonstrate that our algorithms find the optimal solution by exploiting the nonmonotonic property of a common haplotype-evaluation function. PMID:24319694
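The interval-partitioning dynamic program that such block-partitioning methods build on can be sketched generically: given any block-cost (diversity) function, dp[i] holds the best total cost of partitioning the first i SNPs. The interface below is a simplified assumption; the paper's algorithms additionally handle tag-SNP budgets.

```python
def partition_blocks(n, block_cost):
    """Partition positions [0, n) into contiguous blocks minimizing total
    cost, where block_cost(j, i) prices the candidate block [j, i)."""
    INF = float("inf")
    dp = [0.0] + [INF] * n    # dp[i]: best cost for the first i positions
    cut = [0] * (n + 1)       # cut[i]: start of the last block ending at i
    for i in range(1, n + 1):
        for j in range(i):
            c = dp[j] + block_cost(j, i)
            if c < dp[i]:
                dp[i], cut[i] = c, j
    blocks, i = [], n         # backtrack the optimal cut points
    while i > 0:
        blocks.append((cut[i], i))
        i = cut[i]
    return dp[n], blocks[::-1]
```

Different diversity functions plug in as `block_cost` without changing the recurrence, which is what lets one DP skeleton serve many evaluation criteria.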
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used for solving the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time of solving all SAT instances from it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space; the estimated effectiveness of a particular partitioning is the value of a predictive function at the corresponding point of this space. The problem of searching for an effective partitioning can thus be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding problems of inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving time were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving time agrees well with the estimations obtained by the proposed method. PMID:27190753
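The Monte Carlo estimate of a partitioning's total solving time can be sketched as follows; the uniform sampling, the function names, and the simple scaling by partition size are simplifying assumptions of this toy, not the authors' exact predictive function.

```python
import random

def estimate_partition_time(solve_time, partition, samples=500, seed=0):
    """Monte Carlo estimate of the total solving time of a partitioning:
    mean solving time over a random sample of sub-instances, scaled by
    the number of sub-instances in the partitioning."""
    rng = random.Random(seed)
    picked = [rng.choice(partition) for _ in range(samples)]
    mean = sum(solve_time(p) for p in picked) / samples
    return mean * len(partition)
```

An outer metaheuristic (simulated annealing, tabu search) would then minimize this estimate over candidate partitionings instead of solving each one exactly.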
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms, among them the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation, and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be applied effectively to non-convex, highly nonlinear, complex problems. A genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One way to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions
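The penalty-function conversion described above can be sketched as a static quadratic penalty; the helper name and the penalty coefficient are illustrative assumptions, and the literature surveyed in the abstract contains many more elaborate (dynamic, adaptive) variants.

```python
def make_penalized(f, inequality_constraints, r=1e3):
    """Convert a constrained problem min f(x) s.t. g(x) <= 0 into an
    unconstrained one by adding a static quadratic penalty for each
    violated inequality constraint."""
    def penalized(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return f(x) + r * violation
    return penalized
```

The GA then minimizes `penalized` directly: feasible candidates are scored by the raw objective, while infeasible ones are pushed away in proportion to the squared violation.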
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
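The key idea above, stopping the inexact subproblem minimization by pattern (step) size rather than by a derivative-based criterion, can be illustrated with a bare-bones compass search on a single unconstrained problem. This toy omits the augmented Lagrangian outer loop and the bound constraints; the function name and parameters are assumptions.

```python
def compass_search(f, x0, step=1.0, tol=1e-4):
    """Derivative-free compass (pattern) search: poll the +/- coordinate
    directions and halve the step when no poll point improves.  The
    step length itself serves as the stopping criterion, since no
    derivative information is available."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x
```

In the paper's setting, the outer algorithm would stop each such inner search once `step` falls below a tolerance tied to the Lagrangian update, preserving the convergence theory without ever evaluating a gradient.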
Shankar, T.J.; Sokhansanj, Shahabaddine
2010-02-01
Crossover and mutation are the main search operators of a genetic algorithm and are among the features that most distinguish it from other search algorithms such as simulated annealing. The present work aimed to examine the effect of the genetic algorithm operators crossover and mutation (Pc and Pm), population size (n), and number of iterations (I) on predicting the minimum hardness (N) of the biomaterial extrudate. The second-order polynomial regression equation developed for the extrudate hardness in terms of the independent variables barrel temperature, screw speed, fish content of the feed, and feed moisture content was used as the objective function in the GA analysis. A simple genetic algorithm (SGA) with crossover and mutation operators was used, and a program was developed in C for the SGA with a rank-based fitness selection method. The upper limits of population and iterations were fixed at 100. It was observed that with increasing population and iterations the prediction of the function minimum improved drastically. Minimum predicted hardness values were achievable with a medium population of 50, 50 iterations, and crossover and mutation probabilities of 50% and 0.5%, respectively. Further, the Pareto charts indicated that the effect of Pc was more significant at a population of 50, while Pm played a major role at low population (10). A crossover probability of 50% and a mutation probability of 0.5% are the threshold values for the GA to converge over the global search space. A minimum predicted hardness value of 3.82 (N) was observed for n = 60, I = 100, and Pc and Pm of 85% and 0.5%.
NASA Astrophysics Data System (ADS)
Berger, Gilles; Million-Picallion, Lisa; Lefevre, Grégory; Delaunay, Sophie
2015-04-01
Introduction: The hydrothermal crystallization of silicate phases in the Si-Al-Fe system may lead to industrial constraints encountered in the nuclear industry in at least two contexts: the geological repository for nuclear wastes and the formation of hard sludges in the steam generators of PWR nuclear plants. In the first situation, the chemical reactions between the Fe canister and the surrounding clays have been extensively studied in laboratory [1-7] and pilot experiments [8]. These studies demonstrated that the high reactivity of metallic iron leads to the formation of berthierine-like Fe-silicates over a wide range of temperature. By contrast, the formation of deposits in the steam generators of PWR plants, called hard sludges, is a newer and less studied issue which can affect reactor performance. Experiments: We present here a preliminary set of experiments reproducing the formation of hard sludges under conditions representative of the steam generator of a PWR power plant: 275°C, diluted solutions maintained at low potential by hydrazine addition and at alkaline pH by low concentrations of amines and ammoniac. Magnetite, a corrosion by-product of the secondary circuit, is the source of iron, while aqueous Si and Al, the major impurities in this system, are supplied either as trace elements in the circulating solution or by addition of amorphous silica and alumina when considering confined zones. The fluid chemistry is monitored by sampling aliquots of the solution. Eh and pH are continuously measured by hydrothermal Cormet© electrodes implanted in a titanium hydrothermal reactor. The transformation, or not, of the solid fraction was examined post-mortem. These experiments evidenced the role of Al colloids as precursors of cements composed of kaolinite and boehmite, and the passivation of amorphous silica (becoming unreactive), likely by sorption of aqueous iron. However, no Fe-bearing phase was formed, in contrast to many published studies on the Fe
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem, and it is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem as a static optimization over the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations verify the effectiveness and rapidity of the proposed algorithm.
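For circular no-fly zones, the geometric feasibility test for a straight path leg reduces to checking whether the segment passes within the zone's radius. A sketch under assumed conventions (2D coordinates, closed disk), not the paper's exact formulation:

```python
def segment_intersects_circle(p1, p2, center, radius):
    """True if the segment p1-p2 passes within `radius` of `center`:
    project the center onto the segment, clamp to the endpoints, and
    compare the squared distance to the squared radius."""
    (x1, y1), (x2, y2), (cx, cy) = p1, p2, center
    dx, dy = x2 - x1, y2 - y1
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                 # degenerate segment: a single point
        t = 0.0
    else:                               # clamped projection parameter in [0, 1]
        t = max(0.0, min(1.0, ((cx - x1) * dx + (cy - y1) * dy) / seg_len2))
    nx, ny = x1 + t * dx, y1 + t * dy   # nearest point on the segment
    return (nx - cx) ** 2 + (ny - cy) ** 2 <= radius * radius
```

Such closed-form tests are cheap enough to evaluate inside every iteration of the SQP solver as waypoints move.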
Algorithms for magnetic tomography—on the role of a priori knowledge and constraints
NASA Astrophysics Data System (ADS)
Hauer, Karl-Heinz; Potthast, Roland; Wannert, Martin
2008-08-01
Magnetic tomography investigates the reconstruction of currents from their magnetic fields. Here, we study a number of projection methods, in combination with Tikhonov regularization for stabilization, for the solution of the Biot-Savart integral equation Wj = H with the Biot-Savart integral operator W: (L²(Ω))³ → (L²(∂G))³, where Ω̄ ⊂ G. In particular, we study the role of a priori knowledge when incorporated into the choice of the projection spaces X_n ⊂ (L²(Ω))³, n ∈ ℕ, for example the condition div j = 0 or the use of the full boundary value problem div(σ grad φ_E) = 0 in Ω, ν · σ grad φ_E = g on ∂Ω with some known function g, where j = σ grad φ_E and σ is an anisotropic matrix-valued conductivity. We discuss and compare these schemes, investigating the ill-posedness of each algorithm in terms of the behaviour of the singular values of the corresponding operators, both when a priori knowledge is incorporated and when the geometrical setting is modified. Finally, we numerically evaluate the stability constants in the practical setup of magnetic tomography for fuel cells and thus calculate usable error bounds for this important application area.
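Tikhonov regularization of a discretized ill-posed system, as used here to stabilize the projection methods, can be sketched via SVD filter factors; the finite-dimensional matrix form and the function name are generic assumptions, not the paper's operator setup.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha^2 ||x||^2 using the SVD of A.
    Each singular value s is replaced by the damped inverse
    s / (s^2 + alpha^2), which suppresses small singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + alpha ** 2)
    return Vt.T @ (filt * (U.T @ b))
```

The decay pattern of `s` is exactly the singular-value behaviour the paper inspects to compare the ill-posedness of the different projection schemes.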
NASA Astrophysics Data System (ADS)
Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin
2014-03-01
Nowadays, every firm uses telecommunication networks in different amounts and ways in order to complete their daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which there exist several network providers offering different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service, which is needed for accomplishing the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to the special case of a well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures that have the capability of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested in several test instances to demonstrate the applicability of the methodology.
Lonchampt, J.; Fessart, K.
2013-07-01
The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements, or logistic investments such as spare-parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesized in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare-parts inventories. The component model has been widely discussed over the years, but the spare-part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare-part model: although components are independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical, or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
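For the activity-selection example mentioned above, the classic earliest-finish greedy rule yields an optimal schedule; this textbook sketch illustrates the kind of algorithm the synthesis approach targets, not the synthesis process itself.

```python
def select_activities(intervals):
    """Greedy activity selection: repeatedly take the compatible activity
    with the earliest finish time.  Intervals are (start, end) pairs."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda ab: ab[1]):
        if start >= last_end:           # compatible with everything chosen
            chosen.append((start, end))
            last_end = end
    return chosen
```

The dominance relation here is that among compatible candidates, an activity finishing earlier dominates one finishing later, since it leaves at least as much room for the rest; pruning dominated choices is what collapses the search to a single greedy pass.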
Hu, Jiaqi; Li, Qi; Cui, Shanshan
2014-10-20
In terahertz inline digital holography, zero-order diffraction light and conjugate images can blur the reconstructed image. In this paper, three phase retrieval algorithms are applied to conduct reconstruction under the same near-field diffraction propagation conditions and image-plane constraints, and the impact of different object-plane constraints on CW terahertz inline digital holographic reconstruction is studied. The results show that, in the phase retrieval algorithm, it is not suitable to impose restrictions on the phase when the object is not isolated in transmission-type CW terahertz inline digital holography. In addition, the effects of zero-padding expansion, boundary-replication expansion, and apodization on the reconstructed images are studied. The results indicate that the conjugate image can be eliminated, and a better reconstructed image obtained, by adopting an appropriate phase retrieval algorithm after extending the normalized hologram to the minimum area that meets the applicable range of the angular-spectrum reconstruction algorithm by means of boundary replication.
NASA Astrophysics Data System (ADS)
de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein
2012-12-01
In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
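The method of separating axes mentioned above can be sketched in 2D for convex polygons (the paper treats 3D polyhedra); the vertex-ordering convention and the strict-inequality overlap test are assumptions of this toy.

```python
import numpy as np

def convex_overlap_sat(poly_a, poly_b):
    """Separating-axis test for two 2D convex polygons given as ordered
    vertex lists: the polygons are disjoint iff some edge normal of
    either polygon separates their projections."""
    A, B = np.asarray(poly_a, float), np.asarray(poly_b, float)
    for poly in (A, B):
        edges = np.roll(poly, -1, axis=0) - poly
        normals = np.stack([-edges[:, 1], edges[:, 0]], axis=1)
        for axis in normals:
            pa, pb = A @ axis, B @ axis          # project both polygons
            if pa.max() < pb.min() or pb.max() < pa.min():
                return False                      # separating axis found
    return True
```

In 3D the candidate axes grow to face normals plus edge-edge cross products, which is why the paper also considers a triangular-tessellation alternative for strongly anisotropic shapes.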
Technology Transfer Automated Retrieval System (TEKTRAN)
This research was initiated to investigate the association between flour breadmaking traits and mixing characteristics and empirical dough rheological properties under thermal stress. Flour samples from 30 hard spring wheats were analyzed by a Mixolab standard procedure at optimum water absorption. Mi...
Statistical Physics of Hard Optimization Problems
NASA Astrophysics Data System (ADS)
Zdeborová, Lenka
2008-06-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the NP-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this thesis is: how can we recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than canonical satisfiability.
Statistical physics of hard optimization problems
NASA Astrophysics Data System (ADS)
Zdeborová, Lenka
2009-06-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult; it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
Bloom, Joshua S.; Prochaska, J.X.; Pooley, D.; Blake, C.W.; Foley, R.J.; Jha, S.; Ramirez-Ruiz, E.; Granot, J.; Filippenko, A.V.; Sigurdsson, S.; Barth, A.J.; Chen, H.-W.; Cooper, M.C.; Falco, E.E.; Gal, R.R.; Gerke, B.F.; Gladders, M.D.; Greene, J.E.; Hennawi, J.; Ho, L.C.; Hurley, K.; /UC, Berkeley, Astron. Dept. /Lick Observ. /Harvard-Smithsonian Ctr. Astrophys. /Princeton, Inst. Advanced Study /KIPAC, Menlo Park /Penn State U., Astron. Astrophys. /UC, Irvine /MIT, MKI /UC, Davis /UC, Berkeley /Carnegie Inst. Observ. /UC, Berkeley, Space Sci. Dept. /Michigan U. /LBL, Berkeley /Spitzer Space Telescope
2005-06-07
The localization of the short-duration, hard-spectrum gamma-ray burst GRB050509b by the Swift satellite was a watershed event. Never before had a member of this mysterious subclass of classic GRBs been rapidly and precisely positioned in a sky accessible to the bevy of ground-based follow-up facilities. Thanks to the nearly immediate relay of the GRB position by Swift, we began imaging the GRB field 8 minutes after the burst and have continued during the 8 days since. Though the Swift X-ray Telescope (XRT) discovered an X-ray afterglow of GRB050509b, the first ever of a short-hard burst, thus far no convincing optical/infrared candidate afterglow or supernova has been found for the object. We present a re-analysis of the XRT afterglow and find an absolute position of R.A. = 12h36m13.59s, Decl. = +28°59'04.9'' (J2000), with a 1σ uncertainty of 3.68'' in R.A., 3.52'' in Decl.; this is about 4'' to the west of the XRT position reported previously. Close to this position is a bright elliptical galaxy with redshift z = 0.2248 ± 0.0002, about 1' from the center of a rich cluster of galaxies. This cluster has detectable diffuse emission, with a temperature of kT = 5.25 (+3.36/-1.68) keV. We also find several (~11) much fainter galaxies consistent with the XRT position from deep Keck imaging and have obtained Gemini spectra of several of these sources. Nevertheless we argue, based on positional coincidences, that the GRB and the bright elliptical are likely to be physically related. We thus have discovered reasonable evidence that at least some short-duration, hard-spectrum GRBs are at cosmological distances. We also explore the connection of the properties of the burst and the afterglow, finding that GRB050509b was underluminous in both of these relative to long-duration GRBs. However, we also demonstrate that the ratio of the blast-wave energy to the γ-ray energy is consistent with that of long-duration GRBs. We thus find plausible
NASA Astrophysics Data System (ADS)
Trunfio, Roberto
2015-06-01
In a recent article, Guo, Cheng and Wang proposed a randomized search algorithm, called modified generalized extremal optimization (MGEO), to solve the quay crane scheduling problem for container groups under the assumption that schedules are unidirectional. The authors claim that the proposed algorithm is capable of finding new best solutions with respect to a well-known set of benchmark instances taken from the literature. However, as shown in this note, there are some errors in their work that can be detected by analysing the Gantt charts of two solutions provided by MGEO. In addition, some comments on the method used to evaluate the schedule corresponding to a task-to-quay crane assignment and on the search scheme of the proposed algorithm are provided. Finally, to assess the effectiveness of the proposed algorithm, the computational experiments are repeated and additional computational experiments are provided.
Temporal Constraint Reasoning With Preferences
NASA Technical Reports Server (NTRS)
Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca
2001-01-01
A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
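The shortest-path machinery underlying this framework can be sketched for plain Simple Temporal Networks, i.e., without the preference generalization the paper develops; function names here are illustrative, not from the paper. Each bound lo ≤ t_j − t_i ≤ hi becomes two edges in a distance graph, and a negative cycle under all-pairs shortest paths signals inconsistency.

```python
import itertools

def stn_consistent(n, constraints):
    """Consistency check for a Simple Temporal Network.

    `constraints` maps (i, j) -> (lo, hi), meaning lo <= t_j - t_i <= hi.
    Encode edges d[i][j] = hi and d[j][i] = -lo, run Floyd-Warshall;
    a negative cycle (d[i][i] < 0) means the constraints are inconsistent.
    """
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), (lo, hi) in constraints.items():
        d[i][j] = min(d[i][j], hi)
        d[j][i] = min(d[j][i], -lo)
    for k, i, j in itertools.product(range(n), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# Events 0,1,2: 1 occurs 5-10 after 0; 2 occurs 1-2 after 1; 2 within 8 (resp. 3) of 0.
ok = stn_consistent(3, {(0, 1): (5, 10), (1, 2): (1, 2), (0, 2): (0, 8)})
bad = stn_consistent(3, {(0, 1): (5, 10), (1, 2): (1, 2), (0, 2): (0, 3)})
print(ok, bad)  # True False
```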
NGC 5548: LACK OF A BROAD Fe Kα LINE AND CONSTRAINTS ON THE LOCATION OF THE HARD X-RAY SOURCE
Brenneman, L. W.; Elvis, M.; Krongold, Y.; Liu, Y.; Mathur, S.
2012-01-01
We present an analysis of the co-added and individual 0.7-40 keV spectra from seven Suzaku observations of the Sy 1.5 galaxy NGC 5548 taken over a period of eight weeks. We conclude that the source has a moderately ionized, three-zone warm absorber, a power-law continuum, and exhibits contributions from cold, distant reflection. Relativistic reflection signatures are not significantly detected in the co-added data, and we place an upper limit on the equivalent width of a relativistically broad Fe Kα line at EW ≤ 26 eV at 90% confidence. Thus NGC 5548 can be labeled as a 'weak' type 1 active galactic nucleus (AGN) in terms of its observed inner disk reflection signatures, in contrast to sources with very broad, strong iron lines such as MCG-6-30-15, which are likely much fewer in number. We compare physical properties of NGC 5548 and MCG-6-30-15 that might explain this difference in their reflection properties. Though there is some evidence that NGC 5548 may harbor a truncated inner accretion disk, this evidence is inconclusive, so we also consider light bending of the hard X-ray continuum emission in order to explain the lack of relativistic reflection in our observation. If the absence of a broad Fe Kα line is interpreted in the light-bending context, we conclude that the source of the hard X-ray continuum lies at radii r_s ≳ 100 r_g. We note, however, that light-bending models must be expanded to include a broader range of physical parameter space in order to adequately explain the spectral and timing properties of average AGNs, rather than just those with strong, broad iron lines.
NASA Astrophysics Data System (ADS)
Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie
2013-03-01
Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. Previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox virus induced disease before inoculation and after disease progression. Automated estimation is in close agreement with manual segmentation.
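Hard C-means is the classical k-means procedure with crisp, all-or-nothing memberships (as opposed to fuzzy C-means). A minimal sketch, not the authors' imaging pipeline, with an assumed farthest-point initialization for determinism:

```python
import numpy as np

def hard_c_means(X, c, iters=100):
    """Hard C-means (classical k-means): crisp cluster memberships."""
    # Farthest-point initialization keeps this sketch deterministic.
    centers = [X[0]]
    for _ in range(1, c):
        dists = np.min([((X - m) ** 2).sum(axis=1) for m in centers], axis=0)
        centers.append(X[int(np.argmax(dists))])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign every point to its nearest center (hard membership) ...
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # ... then move each center to the mean of its members.
        new = np.array([X[labels == k].mean(axis=0) if (labels == k).any()
                        else centers[k] for k in range(c)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Two well-separated blobs; the recovered centers should sit near 0 and 10.
rng = np.random.default_rng(1)
X = np.vstack([np.zeros((20, 2)), 10 + np.zeros((20, 2))]) + rng.normal(0, 0.5, (40, 2))
centers, labels = hard_c_means(X, 2)
print(np.sort(centers[:, 0]).round())
```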
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. It can be applied to computational problems that involve path-based constraints, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery as well as many practical optimization problems, and with further derivation it can be extended to broader classes of shortest-path problems.
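For the baseline shortest-path formulation the abstract alludes to, a standard Dijkstra sketch (not the smell-agent algorithm itself) looks like this:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path in a graph with nonnegative edge weights.

    `graph` maps node -> list of (neighbor, weight) pairs.
    Returns (cost, path), or (inf, []) if dst is unreachable.
    """
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
print(dijkstra(g, "a", "d"))  # (3, ['a', 'b', 'c', 'd'])
```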
Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
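The first (hard-constraint) method above pairs a standard Kalman update with a quadratic program that projects the estimate onto the constraint set. A minimal sketch under simplifying assumptions of our own (scalar state and box constraints, for which the projection reduces to clipping):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard (unconstrained) Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def project_estimate(x, lo, hi):
    """Hard-constraint step: project the estimate onto lo <= x <= hi.

    In general this is a quadratic program in the metric of the inverse
    covariance; for box constraints with diagonal covariance it reduces
    to elementwise clipping.
    """
    return np.clip(x, lo, hi)

x = np.array([0.0])    # prior mean of a health parameter known to be >= 0
P = np.array([[1.0]])  # prior covariance
z = np.array([-0.8])   # noisy measurement pulling the estimate negative
H = np.array([[1.0]])
R = np.array([[0.5]])
x, P = kf_update(x, P, z, H, R)
x = project_estimate(x, lo=0.0, hi=np.inf)
print(x)  # [0.]
```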
Designing a fuzzy scheduler for hard real-time systems
NASA Technical Reports Server (NTRS)
Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami
1992-01-01
In hard real-time systems, tasks have to be performed not only correctly, but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the uncertainty inherent in dynamic hard real-time systems compounds the difficulty of scheduling. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate searching for a feasible schedule. A set of fuzzy rules is proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found and, therefore, certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.
Nie, Chu; Geng, Jun; Marlow, William H
2016-04-14
In order to improve the sampling of restricted microstates in our previous work [C. Nie, J. Geng, and W. H. Marlow, J. Chem. Phys. 127, 154505 (2007); 128, 234310 (2008)] and quantitatively predict thermal properties of supersaturated vapors, an extension is made to the Corti and Debenedetti subcell constraint algorithm [D. S. Corti and P. Debenedetti, Chem. Eng. Sci. 49, 2717 (1994)], which restricts the maximum allowed local density at any point in a simulation box. The maximum allowed local density at a point in a simulation box is defined by the maximum number of particles Nm allowed to appear inside a sphere of radius R, with this point as the center of the sphere. Both Nm and R serve as extra thermodynamic variables for maintaining a certain degree of spatial homogeneity in a supersaturated system. In a restricted canonical ensemble, at a given temperature and an overall density, a series of local minima on the Helmholtz free energy surface F(Nm, R) is found subject to different (Nm, R) pairs. The true equilibrium metastable state is identified through the analysis of the formation free energies of Stillinger clusters of various sizes obtained from these restricted states. The simulation results of a supersaturated Lennard-Jones vapor at reduced temperature 0.7 including the vapor pressure isotherm, formation free energies of critical nuclei, and chemical potential differences are presented and analyzed. In addition, with slight modifications, the current algorithm can be applied to computing thermal properties of superheated liquids. PMID:27083734
NASA Astrophysics Data System (ADS)
Nie, Chu; Geng, Jun; Marlow, William H.
2016-04-01
In order to improve the sampling of restricted microstates in our previous work [C. Nie, J. Geng, and W. H. Marlow, J. Chem. Phys. 127, 154505 (2007); 128, 234310 (2008)] and quantitatively predict thermal properties of supersaturated vapors, an extension is made to the Corti and Debenedetti subcell constraint algorithm [D. S. Corti and P. Debenedetti, Chem. Eng. Sci. 49, 2717 (1994)], which restricts the maximum allowed local density at any point in a simulation box. The maximum allowed local density at a point in a simulation box is defined by the maximum number of particles Nm allowed to appear inside a sphere of radius R, with this point as the center of the sphere. Both Nm and R serve as extra thermodynamic variables for maintaining a certain degree of spatial homogeneity in a supersaturated system. In a restricted canonical ensemble, at a given temperature and an overall density, a series of local minima on the Helmholtz free energy surface F(Nm, R) is found subject to different (Nm, R) pairs. The true equilibrium metastable state is identified through the analysis of the formation free energies of Stillinger clusters of various sizes obtained from these restricted states. The simulation results of a supersaturated Lennard-Jones vapor at reduced temperature 0.7 including the vapor pressure isotherm, formation free energies of critical nuclei, and chemical potential differences are presented and analyzed. In addition, with slight modifications, the current algorithm can be applied to computing thermal properties of superheated liquids.
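The (Nm, R) restriction can be enforced in a simulation by rejecting configurations that exceed the local density cap. A hedged sketch, testing spheres centered on the particles themselves as a practical proxy (not the authors' exact implementation):

```python
import numpy as np

def violates_density_cap(positions, n_max, radius):
    """Check an (Nm, R)-style local density constraint on particle positions.

    If any sphere of the given radius centered on a particle contains more
    than n_max particles (including its center), report a violation.
    """
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    counts = (d2 <= radius ** 2).sum(axis=1)  # each particle counts itself
    return bool((counts > n_max).any())

grid = 2.0 * np.argwhere(np.ones((3, 3, 3)))  # 27 points spaced 2 apart: dilute
pile = np.zeros((10, 3))                      # 10 coincident particles: dense
print(violates_density_cap(grid, n_max=5, radius=1.0))  # False
print(violates_density_cap(pile, n_max=5, radius=1.0))  # True
```

In a Metropolis loop, a trial move that makes this check return True would simply be rejected, which is how the restricted ensemble stays within its constraint.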
Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam to the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually not able to form the radiation beam towards the target user precisely, nor are they good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in MATLAB. PMID:25147859
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and to steer or produce a strong beam to the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually not able to form the radiation beam towards the target user precisely, nor are they good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal-to-interference-plus-noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in MATLAB.
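For reference, the closed-form LCMV weights that the metaheuristics above seek to improve upon are w = R⁻¹C (Cᴴ R⁻¹ C)⁻¹ f. A minimal sketch for a uniform linear array, under an assumed white-noise covariance:

```python
import numpy as np

def steering(n, theta_deg, spacing=0.5):
    """Steering vector of an n-element uniform linear array (spacing in wavelengths)."""
    phase = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * phase * np.arange(n))

def lcmv_weights(R, C, f):
    """Closed-form LCMV weights: w = R^-1 C (C^H R^-1 C)^-1 f."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

n = 8
R = np.eye(n, dtype=complex)  # white-noise covariance, assumed for this sketch
C = np.column_stack([steering(n, 0), steering(n, 40)])  # constraint directions
f = np.array([1.0, 0.0])      # unit gain at 0 deg, a null at 40 deg
w = lcmv_weights(R, C, f)
print(abs(w.conj() @ steering(n, 0)))   # ~1.0: desired user passed
print(abs(w.conj() @ steering(n, 40)))  # ~0.0: interferer nulled
```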
Compact location problems with budget and communication constraints
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-07-01
The authors consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first weight function subject to a diameter or sum-constraint with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal value with respect to the first weight function, violating the constraint with respect to the second weight function by a factor of at most β. They show that in general obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. They also present efficient approximation algorithms for several of the problems studied, when both edge-weight functions obey the triangle inequality.
Robust H∞ stabilization of a hard disk drive system with a single-stage actuator
NASA Astrophysics Data System (ADS)
Harno, Hendra G.; Kiin Woon, Raymond Song
2015-04-01
This paper considers a robust H∞ control problem for a hard disk drive system with a single-stage actuator. The hard disk drive system is modeled as a linear time-invariant uncertain system where its uncertain parameters and high-order dynamics are considered as uncertainties satisfying integral quadratic constraints. The robust H∞ control problem is transformed into a nonlinear optimization problem with a pair of parameterized algebraic Riccati equations as nonconvex constraints. The nonlinear optimization problem is then solved using a differential evolution algorithm to find stabilizing solutions to the Riccati equations. These solutions are used for synthesizing an output feedback robust H∞ controller to stabilize the hard disk drive system with a specified disturbance attenuation level.
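Differential evolution itself is a simple population-based search. A generic DE/rand/1/bin sketch on a toy objective (not the paper's Riccati-constrained problem; the hyperparameters here are our own assumptions):

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer (a sketch, not a tuned implementation)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Mutate: combine three distinct other members, then clip to bounds.
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant coordinate.
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection
                X[i], fit[i] = trial, ft
    return X[np.argmin(fit)], fit.min()

# Sphere function: global minimum 0 at the origin.
x_best, f_best = differential_evolution(lambda x: float((x ** 2).sum()),
                                        bounds=[(-5, 5)] * 3)
print(f_best)
```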
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Róbert; Toroczkai, Zoltán; Ercsey-Ravasz, Mária
2016-05-01
Transient chaos is a ubiquitous phenomenon characterizing the dynamics of phase-space trajectories evolving towards a steady-state attractor in physical systems as diverse as fluids, chemical reactions, and condensed matter systems. Here we show that transient chaos also appears in the dynamics of certain efficient algorithms searching for solutions of constraint satisfaction problems that include scheduling, circuit design, routing, database problems, and even Sudoku. In particular, we present a study of the emergence of hardness in Boolean satisfiability (k-SAT), a canonical class of constraint satisfaction problems, by using an analog deterministic algorithm based on a system of ordinary differential equations. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos of the dynamical system corresponding to the analog algorithm, and it expresses the rate at which the trajectory approaches a solution. We show that for a given density of constraints and fixed number of Boolean variables N, the hardness of formulas in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_χ in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic. A similar behavior is found in 4-SAT as well; however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter, κ ~ N^{B|α-α_χ|^{1-γ}} with 0 < γ < 1. We demonstrate that the transition is generated by the appearance of metastable basins in the solution space as the density of constraints α is increased.
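The analog deterministic dynamics can be sketched with forward-Euler integration. The sketch below follows the published continuous-time SAT solver in spirit only; the step size, the weight cap, the early-exit check, and the tiny test instance are all our own assumptions.

```python
import numpy as np

def satisfies(clauses, assign):
    """True if boolean vector `assign` satisfies every clause."""
    return all(any(assign[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)

def analog_sat(clauses, n, dt=0.05, steps=20000, seed=0):
    """Euler-integrated analog SAT dynamics (a sketch in the spirit of the
    paper's continuous-time solver). Returns a satisfying assignment or None."""
    rng = np.random.default_rng(seed)
    C = np.zeros((len(clauses), n))          # clause/literal signs c_mi
    for m, cl in enumerate(clauses):
        for lit in cl:
            C[m, abs(lit) - 1] = np.sign(lit)
    k = max(len(cl) for cl in clauses)
    s = rng.uniform(-0.1, 0.1, n)            # analog spins in [-1, 1]
    a = np.ones(len(clauses))                # auxiliary clause weights
    for _ in range(steps):
        assign = s > 0
        if satisfies(clauses, assign):       # rounding already solves it
            return assign
        terms = np.where(C != 0, 1.0 - C * s, 1.0)   # (1 - c_mi s_i) factors
        K = (2.0 ** -k) * terms.prod(axis=1)         # clause functions K_m
        safe = np.where(terms == 0.0, 1.0, terms)
        # ds_i/dt = sum_m 2 a_m c_mi K_m^2 / (1 - c_mi s_i)
        grad = 2.0 * (a[:, None] * C *
                      np.where(terms != 0.0, (K ** 2)[:, None] / safe, 0.0)).sum(axis=0)
        s = np.clip(s + dt * grad, -1.0, 1.0)
        a = np.minimum(a * np.exp(dt * K), 1e12)     # cap to avoid overflow
    return None

# A small satisfiable 3-SAT instance (literal i means x_i, -i means NOT x_i).
clauses = [(1, -2, -3), (2, -3, -4), (3, -4, -5), (4, -5, -1), (5, -1, -2), (1, 2, 3)]
assign = analog_sat(clauses, n=5)
print(assign is not None and satisfies(clauses, assign))  # True
```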
Order-to-chaos transition in the hardness of random Boolean satisfiability problems.
Varga, Melinda; Sumi, Róbert; Toroczkai, Zoltán; Ercsey-Ravasz, Mária
2016-05-01
Transient chaos is a ubiquitous phenomenon characterizing the dynamics of phase-space trajectories evolving towards a steady-state attractor in physical systems as diverse as fluids, chemical reactions, and condensed matter systems. Here we show that transient chaos also appears in the dynamics of certain efficient algorithms searching for solutions of constraint satisfaction problems that include scheduling, circuit design, routing, database problems, and even Sudoku. In particular, we present a study of the emergence of hardness in Boolean satisfiability (k-SAT), a canonical class of constraint satisfaction problems, by using an analog deterministic algorithm based on a system of ordinary differential equations. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos of the dynamical system corresponding to the analog algorithm, and it expresses the rate at which the trajectory approaches a solution. We show that for a given density of constraints and fixed number of Boolean variables N, the hardness of formulas in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_{χ} in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic. A similar behavior is found in 4-SAT as well, however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter κ∼N^{B|α-α_{χ}|^{1-γ}} with 0<γ<1. We demonstrate that the transition is generated by the appearance of metastable basins in the solution space as the density of constraints α is increased. PMID:27300884
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Robert; Ercsey-Ravasz, Maria; Toroczkai, Zoltan
Transient chaos is a phenomenon characterizing the dynamics of phase space trajectories evolving towards an attractor in physical systems. We show that transient chaos also appears in the dynamics of certain algorithms searching for solutions of constraint satisfaction problems (e.g., Sudoku). We present a study of the emergence of hardness in Boolean satisfiability (k-SAT) using an analog deterministic algorithm. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos, and it expresses the rate at which the trajectory approaches a solution. We show that the hardness in random k-SAT ensembles has a wide variation approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_c in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic; however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter. We demonstrate that the transition is generated by the appearance of non-solution basins in the solution space as the density of constraints is increased.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2 year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for development and test of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise, and phase unwrapping errors. Particularly challenging is separation of non-linear deformation from variations in troposphere delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations are formulated for each spatial point that are functions of the deformation velocities
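The core least-squares inversion step can be sketched as follows, on a toy network of interferogram pairs with hypothetical function names (the operational algorithm additionally estimates DEM height error and works per spatial point):

```python
import numpy as np

def invert_time_series(pairs, times, dphi, smooth=0.0):
    """SBAS-style inversion: recover per-interval deformation velocities from
    a stack of interferogram phases by least squares, optionally with
    finite-difference smoothing constraints on the velocities."""
    n = len(times) - 1                  # number of time intervals
    dt = np.diff(times)
    A = np.zeros((len(pairs), n))
    for r, (i, j) in enumerate(pairs):  # pair (i, j) spans intervals i .. j-1
        A[r, i:j] = dt[i:j]
    if smooth > 0.0:                    # penalize jumps between adjacent velocities
        D = smooth * (np.eye(n - 1, n, 1) - np.eye(n - 1, n))
        A = np.vstack([A, D])
        dphi = np.concatenate([dphi, np.zeros(n - 1)])
    v, *_ = np.linalg.lstsq(A, dphi, rcond=None)
    return v

times = np.array([0.0, 24.0, 48.0, 72.0])         # acquisition epochs (days)
true_v = np.array([0.1, 0.2, 0.1])                # per-interval velocities
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]  # interferogram pairs
dphi = np.array([float(np.sum(true_v[i:j] * np.diff(times)[i:j])) for i, j in pairs])
v = invert_time_series(pairs, times, dphi)
print(np.round(v, 3))  # [0.1 0.2 0.1]
```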
Generalizing Atoms in Constraint Logic
NASA Technical Reports Server (NTRS)
Page, C. David, Jr.; Frisch, Alan M.
1991-01-01
This paper studies the generalization of atomic formulas, or atoms, that are augmented with constraints on or among their terms. The atoms may also be viewed as definite clauses whose antecedents express the constraints. Atoms are generalized relative to a body of background information about the constraints. This paper first examines generalization of atoms with only monadic constraints. The paper develops an algorithm for the generalization task and discusses algorithm complexity. It then extends the algorithm to apply to atoms with constraints of arbitrary arity. The paper also presents semantic properties of the generalizations computed by the algorithms, making the algorithms applicable to such problems as abduction, induction, and knowledge base verification. The paper emphasizes the application to induction and presents a PAC-learning result for constrained atoms.
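A classical building block for such generalization is Plotkin's least general generalization of two atoms. A minimal sketch that ignores the constraint (background) handling the paper describes; the tuple representation and variable naming are our own conventions:

```python
def lgg(atom1, atom2, subst=None):
    """Least general generalization of two atoms (Plotkin-style sketch).

    Atoms are nested tuples such as ('p', ('f', 'a'), 'b'). Each position
    where the atoms differ is generalized to a variable, reusing the same
    variable for repeated mismatched pairs of subterms.
    """
    if subst is None:
        subst = {}
    if atom1 == atom2:
        return atom1
    if (isinstance(atom1, tuple) and isinstance(atom2, tuple)
            and len(atom1) == len(atom2) and atom1[0] == atom2[0]):
        # Same functor and arity: generalize argument by argument.
        return tuple([atom1[0]] + [lgg(s, t, subst) for s, t in zip(atom1[1:], atom2[1:])])
    if (atom1, atom2) not in subst:  # same mismatch -> same variable
        subst[(atom1, atom2)] = f"X{len(subst)}"
    return subst[(atom1, atom2)]

a1 = ("parent", "ann", ("child_of", "ann"))
a2 = ("parent", "bob", ("child_of", "bob"))
print(lgg(a1, a2))  # ('parent', 'X0', ('child_of', 'X0'))
```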
Multiprocessor scheduling problem with machine constraints
NASA Astrophysics Data System (ADS)
He, Yong; Tan, Zhiyi
2001-09-01
This paper investigates multiprocessor scheduling with machine constraints, which has many applications in flexible manufacturing systems and in VLSI chip design. Machines have different starting times, and each machine can schedule at most k jobs in a period. The objective is to minimize the makespan. For this strongly NP-hard problem, it is important to design near-optimal approximation algorithms. It is known that the Modified LPT algorithm has a worst-case ratio of 3/2 - 1/(2m) for k = 2, where m is the number of machines. For k > 2, no algorithm with a good worst-case ratio has been obtained in the literature. In this paper, we prove that the worst-case ratio of Modified LPT is less than 2. We further present an approximation algorithm, Matching, and show that it has a worst-case ratio of 2 - 1/m for every k > 2. By introducing parameters, we obtain two better worst-case ratios, which show that the Matching algorithm is near optimal in two special cases.
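The flavor of an LPT-style heuristic under these machine constraints can be sketched as follows. This is an illustrative simplification, not the authors' exact Modified LPT algorithm: jobs are sorted longest-first, and each is assigned to the machine that currently finishes earliest among those with fewer than k jobs.

```python
def modified_lpt(jobs, start_times, k):
    """Illustrative LPT-style heuristic with machine starting times and a
    cap of k jobs per machine; returns the makespan and the assignment."""
    m = len(start_times)
    loads = list(start_times)            # machines become available at different times
    counts = [0] * m
    assignment = [[] for _ in range(m)]
    for job in sorted(jobs, reverse=True):
        # feasible machine (fewer than k jobs) with minimum current completion time
        i = min((i for i in range(m) if counts[i] < k), key=lambda i: loads[i])
        loads[i] += job
        counts[i] += 1
        assignment[i].append(job)
    return max(loads), assignment
```

For example, four jobs of lengths 4, 3, 2, 1 on two machines with k = 2 balance to a makespan of 5.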
Compact location problems with budget and communication constraints
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-05-01
We consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first distance function, under diameter or sum constraints with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal function value, violating the constraint with respect to the second distance function by a factor of at most β. We observe that, in general, obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. We present efficient approximation algorithms for the case when both edge-weight functions obey the triangle inequality. For the problem of minimizing the diameter under a diameter constraint with respect to the second weight function, we provide a (2, 2)-approximation algorithm. We also show that no polynomial-time algorithm can provide an (α, 2 − ε)- or (2 − ε, β)-approximation for any fixed ε > 0 and α, β ≥ 1, unless P = NP. This result is proved to remain true even if one fixes ε′ > 0 and allows the algorithm to place only 2p|V|^(6 − ε′) facilities. Our techniques can be extended to the case when either the objective or the constraint is of sum type, and also to handle additional weights on the nodes of the graph.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity-norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. The hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid-prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data, with the ability to augment/modify the data stream (e.g., to inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Constraints in Genetic Programming
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
1996-01-01
Genetic programming refers to a class of genetic algorithms utilizing a generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements: one reduces only the number of program instances; the other reduces both the space of program structures and the space of their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are no less efficient than the standard genetic programming operators, provided the constraints are preprocessed; the necessary mechanisms are identified.
Rigorous location of phase transitions in hard optimization problems.
Achlioptas, Dimitris; Naor, Assaf; Peres, Yuval
2005-06-01
It is widely believed that for many optimization problems, no algorithm is substantially more efficient than exhaustive search. This means that finding optimal solutions for many practical problems is completely beyond any current or projected computational capacity. To understand the origin of this extreme 'hardness', computer scientists, mathematicians and physicists have been investigating for two decades a connection between computational complexity and phase transitions in random instances of constraint satisfaction problems. Here we present a mathematically rigorous method for locating such phase transitions. Our method works by analysing the distribution of distances between pairs of solutions as constraints are added. By identifying critical behaviour in the evolution of this distribution, we can pinpoint the threshold location for a number of problems, including the two most-studied ones: random k-SAT and random graph colouring. Our results prove that the heuristic predictions of statistical physics in this context are essentially correct. Moreover, we establish that random instances of constraint satisfaction problems have solutions well beyond the reach of any analysed algorithm. PMID:15944693
Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints
NASA Astrophysics Data System (ADS)
Cassandras, Christos G.; Zhuang, Shixin
2005-11-01
Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard-to-replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide the desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic, and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
Agyepong, Irene Akua
2015-03-01
A major constraint to the application of any form of knowledge and principles is the awareness, understanding and acceptance of that knowledge and those principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be awareness and understanding of ST and how to apply it. This is a fundamental constraint; given the increasing desire to enable the application of ST concepts in health systems in LMICs and to understand and evaluate the effects, an essential first step will be enabling a widespread as well as deeper understanding of ST and how to apply it. PMID:25774378
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch
Karthikeyan, M.; Sree Ranga Raja, T.
2015-01-01
Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints that make it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm, the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective at finding better solutions than other computational-intelligence-based methods. PMID:26491710
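The core loop of a harmony search with dynamically varying HMCR and PAR can be sketched as follows. This is a minimal illustration of the general idea, not the authors' DHSPM: the schedules for HMCR and PAR are made up for the example, and polynomial mutation is approximated by a simple bounded perturbation.

```python
import random

def dynamic_harmony_search(obj, bounds, hms=10, iters=500, seed=1):
    """Minimal harmony search sketch: improvise a new harmony from memory
    (rate HMCR), occasionally pitch-adjust it (rate PAR), and replace the
    worst stored harmony if the new one is better. HMCR and PAR vary with
    the iteration counter instead of being fixed in advance."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [obj(h) for h in memory]
    for t in range(iters):
        hmcr = 0.7 + 0.25 * t / iters      # memory-considering rate grows
        par = 0.45 - 0.35 * t / iters      # pitch-adjustment rate shrinks
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:     # stand-in for polynomial mutation
                    x += rng.uniform(-1, 1) * 0.1 * (hi - lo)
            else:
                x = rng.uniform(lo, hi)    # random re-initialization
            new.append(min(hi, max(lo, x)))
        fn = obj(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if fn < fit[worst]:
            memory[worst], fit[worst] = new, fn
    best = min(range(hms), key=lambda i: fit[i])
    return memory[best], fit[best]
```

On a toy sphere function the loop steadily drives the best stored harmony toward the minimum while keeping every candidate inside its bounds.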
NASA Technical Reports Server (NTRS)
Knox, C. E.
1983-01-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
The photogrammetric inner constraints
NASA Astrophysics Data System (ADS)
Dermanis, Athanasios
A derivation of the complete inner constraints, which are required for obtaining "free network" solutions in close-range photogrammetry, is presented. The inner constraints are derived analytically for the bundle method, by exploiting the fact that the rows of their coefficient matrix form a basis for the null subspace of the design matrix used in the linearized observation equations. The derivation is independent of any particular choice of rotational parameters, and examples are given for three types of rotation angles used in photogrammetry, as well as for the Rodrigues elements. A convenient algorithm based on the use of the S-transformation is presented for the computation of free solutions with either inner or partial inner constraints. This approach is finally compared with alternative approaches to free network solutions.
NASA Astrophysics Data System (ADS)
Yukita, Mihoko; Ptak, Andrew; Maccarone, Thomas J.; Hornschemeier, Ann E.; Wik, Daniel R.; Pottschmidt, Katja; Antoniou, Vallia; Baganoff, Frederick K.; Lehmer, Bret; Zezas, Andreas; Boyd, Patricia T.; Kennea, Jamie; Page, Kim L.
2016-04-01
Thanks to its better sensitivity and spatial resolution, NuSTAR allows us to investigate the E>10 keV properties of nearby galaxies. We now know that starburst galaxies, containing very young stellar populations, have X-ray spectra which drop quickly above 10 keV. We extend our investigation of hard X-ray properties to an older stellar population system, the bulge of M31. The NuSTAR and Swift simultaneous observations reveal a bright hard source dominating the M31 bulge above 20 keV, which is likely to be a counterpart of Swift J0042.6+4112 previously detected (but not classified) in the Swift BAT All-sky Hard X-ray Survey. This source had been classified as an XRB candidate in various Chandra and XMM-Newton studies; however, since it was not clear that it is the counterpart to the strong Swift J0042.6+4112 source at higher energies, the previous E < 10 keV observations did not generate much attention. The NuSTAR and Swift spectra of this source drop quickly at harder energies as observed in sources in starburst galaxies. The X-ray spectral properties of this source are very similar to those of an accreting pulsar; yet, we do not find a pulsation in the NuSTAR data. The existing deep HST images indicate no high mass donors at the location of this source, further suggesting that this source has an intermediate or low mass companion. The most likely scenario for the nature of this source is an X-ray pulsar with an intermediate/low mass companion similar to the Galactic Her X-1 system. We will also discuss other possibilities in more detail.
NASA Astrophysics Data System (ADS)
Liu, Jingfa; Jiang, Yucong; Li, Gang; Xue, Yu; Liu, Zhaoxia; Zhang, Zhen
2015-08-01
The optimal layout problem of a circle group in a circular container with performance constraints of equilibrium belongs to a class of NP-hard problems. The key obstacle to solving this problem is the lack of an effective global optimization method. We convert the circular packing problem with performance constraints of equilibrium into an unconstrained optimization problem by using a quasi-physical strategy and the penalty function method. By putting forward a new updating mechanism for the histogram function in the energy landscape paving (ELP) method and incorporating heuristic conformation update strategies into the ELP method, we obtain an improved ELP (IELP) method. Subsequently, by combining the IELP method and a local search (LS) procedure, we put forward a hybrid algorithm, denoted IELP-LS, for the circular packing problem with performance constraints of equilibrium. We test three sets of benchmarks consisting of 21 representative instances from the current literature. The proposed algorithm breaks the records of all 10 instances in the first set, and achieves the same or even better results than other methods in the literature for 10 out of 11 instances in the second and third sets. The computational results show that the proposed algorithm is an effective method for solving the circular packing problem with performance constraints of equilibrium.
FATIGUE OF BIOMATERIALS: HARD TISSUES.
Arola, D; Bajaj, D; Ivancik, J; Majd, H; Zhang, D
2010-09-01
The fatigue and fracture behavior of hard tissues are topics of considerable interest today. This special group of organic materials comprises the highly mineralized and load-bearing tissues of the human body, and includes bone, cementum, dentin and enamel. An understanding of their fatigue behavior and the influence of loading conditions and physiological factors (e.g. aging and disease) on the mechanisms of degradation are essential for achieving lifelong health. But there is much more to this topic than the immediate medical issues. There are many challenges to characterizing the fatigue behavior of hard tissues, much of which is attributed to size constraints and the complexity of their microstructure. The relative importance of the constituents on the type and distribution of defects, rate of coalescence, and their contributions to the initiation and growth of cracks, are formidable topics that have not reached maturity. Hard tissues also provide a medium for learning and a source of inspiration in the design of new microstructures for engineering materials. This article briefly reviews fatigue of hard tissues with shared emphasis on current understanding, the challenges and the unanswered questions.
NASA Technical Reports Server (NTRS)
Zweben, Monte
1991-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocations for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its applications to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
NASA Technical Reports Server (NTRS)
Zweben, Monte
1993-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
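The constraint-based iterative repair loop described above can be sketched generically as follows. This is a toy skeleton of the technique, not GERRY's implementation; the constraint and repair functions in the example are illustrative.

```python
def iterative_repair(schedule, constraints, repair, max_iters=100):
    """Constraint-based iterative repair: repeatedly find a violated
    constraint and apply a local repair, stopping when every constraint
    is satisfied or the iteration budget is exhausted."""
    for _ in range(max_iters):
        violated = [c for c in constraints if not c(schedule)]
        if not violated:
            return schedule, True
        schedule = repair(schedule, violated[0])
    return schedule, False

# Toy example: task "b" must start at least 2 time units after task "a";
# the repair shifts "b" to the earliest feasible start.
after_a = lambda s: s["b"] >= s["a"] + 2
shift_b = lambda s, c: {**s, "b": s["a"] + 2}
result, ok = iterative_repair({"a": 5, "b": 0}, [after_a], shift_b)
```

In a full scheduler the repair step would weigh both hard rules and preference criteria when choosing how to move tasks, rather than applying a single fixed shift.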
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, one of the key problems for a decision support system is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help solve this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA), and hierarchical multiplex structure (HIMS). Most of these methods are time consuming and run a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.
On Constraints in Assembly Planning
Calton, T.L.; Jones, R.E.; Wilson, R.H.
1998-12-17
Constraints on assembly plans vary depending on product, assembly facility, assembly volume, and many other factors. Assembly costs and other measures to optimize vary just as widely. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their products and production environments. We begin this article by surveying the types of user criteria, both constraints and quality measures, that have been accepted by assembly planning systems to date. The survey is organized along several dimensions, including strategic vs. tactical criteria; manufacturing requirements vs. requirements of the automated planning process itself; and the information needed to assess compliance with each criterion. The latter strongly influences the efficiency of planning. We then focus on constraints. We describe a framework to support a wide variety of user constraints for intuitive and efficient assembly planning. Our framework expresses all constraints at the sequencing level, specifying orders and conditions on part mating operations in a number of ways. Constraints are implemented as simple procedures that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to a number of complex assemblies, including one with 472 parts.
NASA Astrophysics Data System (ADS)
Li, Yuzhong
When a GA is used to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, the large search space, complex constraints, and the ease of producing infeasible solutions can degrade the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators: preprocessing, bid insertion, and exchange recombination, and uses a monkey-king elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in population size and computation. Problems that the traditional branch-and-bound algorithm can hardly solve, the improved MKGA can solve with better results.
Data assimilation with inequality constraints
NASA Astrophysics Data System (ADS)
Thacker, W. C.
If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
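The expedient the paper suggests, enforcing violated bounds as error-free data and letting background-error correlations correct the remaining variables, can be sketched for a lower-bound constraint. This is an illustrative sketch under stated assumptions: `B` is the background-error covariance, `xa` the unconstrained analysis, and the names are not from the paper.

```python
import numpy as np

def constrained_analysis(xa, B, lower):
    """Enforce lower-bound violations in the unconstrained analysis xa as
    error-free data; the free variables are then updated through their
    correlations with the enforced ones (conditional mean of a truncated
    multi-normal). Repeats if the update creates new violations."""
    x = xa.copy()
    active = x < lower
    while active.any():
        a, f = np.where(active)[0], np.where(~active)[0]
        x[a] = lower[a]                       # active constraints held exactly
        # conditional update of free variables given the fixed active ones
        x[f] = xa[f] + B[np.ix_(f, a)] @ np.linalg.solve(
            B[np.ix_(a, a)], x[a] - xa[a])
        newly = (x < lower) & ~active
        if not newly.any():
            return x
        active |= newly
    return x
```

With two positively correlated variables, pulling the violating one up to its bound also pulls its correlated neighbor upward, exactly the correlated correction discussed in the abstract.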
Cluster and constraint analysis in tetrahedron packings.
Jin, Weiwei; Lu, Peng; Liu, Lufeng; Li, Shuixiang
2015-04-01
The disordered packings of tetrahedra often show no obvious macroscopic orientational or positional order for a wide range of packing densities, and it has been found that the local order in particle clusters is the main order form of tetrahedron packings. Therefore, a cluster analysis is carried out to investigate the local structures and properties of tetrahedron packings in this work. We obtain a cluster distribution of differently sized clusters, and peaks are observed at two special clusters, i.e., dimer and wagon wheel. We then calculate the amounts of dimers and wagon wheels, which are observed to have linear or approximate linear correlations with packing density. Following our previous work, the amount of particles participating in dimers is used as an order metric to evaluate the order degree of the hierarchical packing structure of tetrahedra, and an order map is consequently depicted. Furthermore, a constraint analysis is performed to determine the isostatic or hyperstatic region in the order map. We employ a Monte Carlo algorithm to test jamming and then suggest a new maximally random jammed packing of hard tetrahedra from the order map with a packing density of 0.6337.
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive input from users during the feature selection optimization process. This study investigates the question of fixing a few user-input features in the finally selected feature subset, and formulates these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs comparably to or much better than the existing feature selection algorithms, even with constraints drawn from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
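The idea of forcing user-input features into the selected subset can be sketched with a generic greedy wrapper. This is an illustration of the constraint mechanism only; fsCoP's actual constrained-programming formulation is more involved, and the score function here is a stand-in.

```python
def greedy_select(score, n_features, k, fixed=()):
    """Constrained forward selection sketch: user-input features (`fixed`)
    are forced into the subset, then the remaining slots are filled
    greedily by whichever candidate maximizes the score function."""
    selected = list(fixed)
    while len(selected) < k:
        best = max((f for f in range(n_features) if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected
```

For example, with a score that counts overlap with a target set {0, 2, 4}, fixing feature 1 guarantees it appears in the result even though it contributes nothing to the score.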
A Path Algorithm for Constrained Estimation.
Zhou, Hua; Lange, Kenneth
2013-01-01
Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online.
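The contrast between classical quadratic penalties and exact absolute-value penalties can be seen on a toy one-dimensional problem: minimize (x + 2)^2 subject to x >= 0, whose solution is x = 0. This is a hedged sketch, not the article's sweep-operator algorithm; the problem instance is hypothetical.

```python
def minimize_1d(f, lo=-10.0, hi=10.0, iters=200):
    """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

# Constrained problem: minimize (x + 2)^2 subject to x >= 0 (solution x = 0).
obj = lambda x: (x + 2) ** 2
# Quadratic penalty: violation squared; feasible only in the limit rho -> inf.
quad = lambda rho: minimize_1d(lambda x: obj(x) + rho * min(x, 0.0) ** 2)
# Exact penalty: absolute-value violation; exact for finite rho (here rho >= 4).
exact = lambda rho: minimize_1d(lambda x: obj(x) + rho * max(-x, 0.0))

print(quad(100.0))   # about -2/101: close to feasible, but never exactly 0
print(exact(10.0))   # about 0.0: the constrained solution at finite rho
```

The quadratic-penalty minimizer is -2/(1+rho), so feasibility is only approached as rho grows; the absolute-value penalty pins the solution exactly at the constraint boundary once rho passes a finite threshold, which is the property the path-following algorithm exploits.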
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Use of Justified Constraints in Coherent Diffractive Imaging
Kim, S.; McNulty, I.; Chen, Y. K.; Putkunz, C. T.; Dunand, D. C.
2011-09-09
We demonstrate the use of physically justified object constraints in x-ray Fresnel coherent diffractive imaging on a sample of nanoporous gold prepared by dealloying. Use of these constraints in the reconstruction algorithm enabled highly reliable imaging of the sample's shape and quantification of the 23- to 52-nm pore structure within it without use of a tight object support constraint.
Network interdiction with budget constraints
Santhi, Nankakishore; Pan, Feng
2009-01-01
Several scenarios exist in the modern interconnected world which call for efficient network interdiction algorithms. Applications are varied, including computer network security, prevention of spreading of Internet worms, policing international smuggling networks, controlling spread of diseases and optimizing the operation of large public energy grids. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs. Many of these questions turn out to be computationally hard to tackle. We present a particularly interesting practical form of the interdiction question which we show to be computationally tractable. A polynomial time algorithm is then presented for this problem.
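One natural budget-constrained interdiction variant can be sketched by brute force on a tiny hypothetical graph: remove at most `budget` edges so as to maximize the s-t shortest path. This illustrates the problem shape only; it is not the paper's polynomial-time algorithm, and the network is invented.

```python
from itertools import combinations
import heapq

def shortest_path(edges, s, t):
    """Dijkstra over a directed weighted edge list; inf if t is unreachable."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    dist, pq = {s: 0}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist.get(t, float("inf"))

def interdict(edges, s, t, budget):
    """Remove <= budget edges to maximize the s-t shortest path (brute force)."""
    best = shortest_path(edges, s, t)
    for k in range(1, budget + 1):
        for cut in combinations(range(len(edges)), k):
            rest = [e for i, e in enumerate(edges) if i not in cut]
            best = max(best, shortest_path(rest, s, t))
    return best

# Hypothetical network: two s-t routes, one short and one long.
edges = [("s", "a", 1), ("a", "t", 1), ("s", "b", 3), ("b", "t", 3)]
print(interdict(edges, "s", "t", budget=1))  # cutting a short edge forces cost 6
```

The exponential subset enumeration is exactly why the general question is computationally hard, motivating the tractable special form identified in the paper.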
Parallel-batch scheduling and transportation coordination with waiting time constraint.
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing, subject to a waiting time constraint. The orders, located at different suppliers, are transported by vehicles to a manufacturing facility for further processing. One vehicle can load only one order per shipment. Each order arriving at the facility must be processed within the limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule that minimizes the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a pseudopolynomial-time dynamic programming algorithm is provided, establishing that this case is NP-hard only in the ordinary sense. An optimal polynomial-time algorithm is presented for a special case with equal processing times and equal transportation times for each order.
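The batching trade-off at the heart of the objective can be sketched as a toy evaluation of the flow-time term only; transportation, the waiting-time limit, and the production-cost term from the paper are omitted, and the order data are hypothetical.

```python
def total_flow_time(batches):
    """Sum of order completion times on one parallel-batch machine.

    Each batch is a list of order processing times: the batch runs for the
    max over its orders, and every order in it completes when the batch
    does. A toy sketch of the flow-time term only.
    """
    t, total = 0, 0
    for batch in batches:
        t += max(batch)          # batch processing time = longest order
        total += t * len(batch)  # all orders in the batch finish at time t
    return total

# Two ways to batch the same four orders (hypothetical processing times):
print(total_flow_time([[2, 2], [5, 5]]))  # 2*2 + 7*2 = 18: group similar orders
print(total_flow_time([[2, 5], [2, 5]]))  # 5*2 + 10*2 = 30: mixing is worse
```

Grouping orders of similar length keeps short orders from waiting on long ones, which is the combinatorial choice that makes the full problem hard once arrival and waiting-time constraints are added.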
Constraint-based interactive assembly planning
Jones, R.E.; Wilson, R.H.; Calton, T.L.
1997-03-01
The constraints on assembly plans vary depending on the product, assembly facility, assembly volume, and many other factors. This paper describes the principles and implementation of a framework that supports a wide variety of user-specified constraints for interactive assembly planning. Constraints from many sources can be expressed on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. All constraints are implemented as filters that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Replanning is fast enough to enable a natural plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to several complex assemblies. 12 refs., 2 figs., 3 tabs.
Constraint Embedding for Multibody System Dynamics
NASA Technical Reports Server (NTRS)
Jain, Abhinandan
2009-01-01
This paper describes a constraint embedding approach for handling local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert the system into a tree-topology system. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics, avoiding the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms and the extensions for handling embedded constraints, and concludes with some examples of such constraints.
Session: Hard Rock Penetration
Tennyson, George P. Jr.; Dunn, James C.; Drumheller, Douglas S.; Glowka, David A.; Lysne, Peter
1992-01-01
This session at the Geothermal Energy Program Review X: Geothermal Energy and the Utility Market consisted of five presentations: ''Hard Rock Penetration - Summary'' by George P. Tennyson, Jr.; ''Overview - Hard Rock Penetration'' by James C. Dunn; ''An Overview of Acoustic Telemetry'' by Douglas S. Drumheller; ''Lost Circulation Technology Development Status'' by David A. Glowka; ''Downhole Memory-Logging Tools'' by Peter Lysne.
NASA Technical Reports Server (NTRS)
Hauser, D. L.; Buras, D. F.; Corbin, J. M.
1987-01-01
Rubber-hardness tester modified for use on rigid polyurethane foam. Provides objective basis for evaluation of improvements in foam manufacturing and inspection. Typical acceptance criterion requires minimum hardness reading of 80 on modified tester. With adequate correlation tests, modified tester used to measure indirectly tensile and compressive strengths of foam.
Cugell, D.W.
1992-06-01
Hard metal is a mixture of tungsten carbide and cobalt, to which small amounts of other metals may be added. It is widely used for industrial purposes whenever extreme hardness and high temperature resistance are needed, such as for cutting tools, oil well drilling bits, and jet engine exhaust ports. Cobalt is the component of hard metal that can be a health hazard. Respiratory diseases occur in workers exposed to cobalt--either in the production of hard metal, from machining hard metal parts, or from other sources. Adverse pulmonary reactions include asthma, hypersensitivity pneumonitis, and interstitial fibrosis. A peculiar, almost unique form of lung fibrosis, giant cell interstitial pneumonia, is closely linked with cobalt exposure. 66 references.
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, G K
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront-reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill-reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation, the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
Adiabatic Quantum Programming: Minor Embedding With Hard Faults
Klymko, Christine F; Sullivan, Blair D; Humble, Travis S
2013-01-01
Adiabatic quantum programming defines the time-dependent mapping of a quantum algorithm into the hardware or logical fabric. An essential programming step is the embedding of problem-specific information into the logical fabric to define the quantum computational transformation. We present algorithms for embedding arbitrary instances of the adiabatic quantum optimization algorithm into a square lattice of specialized unit cells. Our methods are shown to be extensible in fabric growth, linear in time, and quadratic in logical footprint. In addition, we provide methods for accommodating hard faults in the logical fabric without invoking approximations to the original problem. These hard fault-tolerant embedding algorithms are expected to prove useful for benchmarking the adiabatic quantum optimization algorithm on existing quantum logical hardware. We illustrate this versatility through numerical studies of embeddability versus hard fault rates in square lattices of complete bipartite unit cells.
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Over the years, nurse scheduling has been a persistent problem, aggravated by the global nurse turnover crisis. The more dissatisfied nurses are with their working environment, the more likely they are to leave. The current undesirable work schedules are partly responsible for that working condition. In particular, there is a lack of complementarity between the head nurse's responsibilities and the nurses' needs. Given strong nurse preferences, the central challenge in nurse scheduling is the failure to encourage tolerant behavior between both parties during shift assignment in real working scenarios. Flexibility in shift assignment is hard to achieve while satisfying diverse nurse requests and upholding the imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The limitations of the EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard, and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling and fitness computation, as well as flexibility in the search, through the employment of exploration and exploitation principles.
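The three constraint classes might be combined in a penalty-based fitness function along these lines. This is a minimal sketch; the particular constraints, weights, and shift encoding are hypothetical, not the paper's.

```python
# Hypothetical penalty weights: hard violations dominate semi-hard, which
# dominate soft, so the EA is steered toward feasibility first.
HARD, SEMI_HARD, SOFT = 10_000, 100, 1

def fitness(schedule, min_cover=2):
    """Penalty score for a week of shifts (lower is better).

    schedule: dict nurse -> list of 7 shifts in {"D", "N", "O"} (day/night/off).
    Hard: every day needs at least min_cover nurses on duty.
    Semi-hard: no night shift immediately followed by a day shift.
    Soft: each nurse gets at least one day off.
    """
    penalty = 0
    for day in range(7):
        on_duty = sum(schedule[n][day] != "O" for n in schedule)
        if on_duty < min_cover:
            penalty += HARD * (min_cover - on_duty)
    for shifts in schedule.values():
        penalty += SEMI_HARD * sum(
            a == "N" and b == "D" for a, b in zip(shifts, shifts[1:]))
        if "O" not in shifts:
            penalty += SOFT
    return penalty

roster = {"ann": list("DDNODDO"), "bob": list("DNDDODD"), "eve": list("ONDDDND")}
print(fitness(roster))  # 300: coverage holds, but three night-to-day turnarounds
```

An EA would minimize this score; because hard violations cost orders of magnitude more than soft ones, any feasible roster outranks any infeasible one while soft preferences still break ties.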
ERIC Educational Resources Information Center
Stocker, H. Robert; Hilton, Thomas S. E.
1991-01-01
Suggests strategies that make hard disk organization easy and efficient, such as making, changing, and removing directories; grouping files by subject; naming files effectively; backing up efficiently; and using PATH. (JOW)
Constraint neighborhood projections for semi-supervised clustering.
Wang, Hongjun; Li, Tao; Li, Tianrui; Yang, Yan
2014-05-01
Semi-supervised clustering aims to incorporate known prior knowledge into the clustering algorithm. Pairwise constraints and constraint projections are two popular techniques in semi-supervised clustering. However, both of them consider only the given constraints and ignore the neighbors around the constrained data points. This paper presents a new technique, denoted constraint neighborhood projections, that utilizes the constrained pairwise data points together with their neighbors; it requires fewer labeled data points (constraints) and can naturally deal with constraint conflicts. It includes two steps: 1) the constraint neighbors are chosen according to the pairwise constraints and a given radius, so that the pairwise constraint relationships can be extended to their neighbors; and 2) the original data points are projected into a new low-dimensional space learned from the pairwise constraints and their neighbors. A CNP-Kmeans algorithm is developed based on the constraint neighborhood projections. Extensive experiments on University of California Irvine (UCI) datasets demonstrate the effectiveness of the proposed method. Our study also shows that constraint neighborhood projections (CNP) have some favorable features compared with previous techniques.
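Step 1, extending pairwise constraints to neighbors within a given radius, can be sketched as follows. This is a simplified illustration of the neighborhood-extension idea only; the projection step and conflict handling are omitted, and the data points are hypothetical.

```python
def extend_constraints(points, must_link, radius):
    """Propagate each must-link pair to all points near its endpoints.

    Any point within `radius` of a constrained point inherits the
    constraint, so a few labeled pairs cover whole neighborhoods.
    """
    def neighbors(i):
        xi = points[i]
        return {j for j, xj in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(xi, xj)) <= radius ** 2}

    extended = set()
    for i, j in must_link:
        for a in neighbors(i):
            for b in neighbors(j):
                if a != b:
                    extended.add((min(a, b), max(a, b)))
    return extended

# Two tight groups; one labeled pair (0, 2) links the groups' regions.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(sorted(extend_constraints(points, [(0, 2)], radius=0.5)))
```

A single must-link pair here yields four extended pairs, which is the sense in which the technique "requires fewer labeled data points" than using the raw constraints alone.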
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
A Space-Bounded Anytime Algorithm for the Multiple Longest Common Subsequence Problem
Yang, Jiaoyun; Xu, Yun; Shang, Yi; Chen, Guoliang
2014-01-01
The multiple longest common subsequence (MLCS) problem, related to the identification of sequence similarity, is an important problem in many fields. As an NP-hard problem, its exact algorithms have difficulty in handling large-scale data and time- and space-efficient algorithms are required in real-world applications. To deal with time constraints, anytime algorithms have been proposed to generate good solutions with a reasonable time. However, there exists little work on space-efficient MLCS algorithms. In this paper, we formulate the MLCS problem into a graph search problem and present two space-efficient anytime MLCS algorithms, SA-MLCS and SLA-MLCS. SA-MLCS uses an iterative beam widening search strategy to reduce space usage during the iterative process of finding better solutions. Based on SA-MLCS, SLA-MLCS, a space-bounded algorithm, is developed to avoid space usage from exceeding available memory. SLA-MLCS uses a replacing strategy when SA-MLCS reaches a given space bound. Experimental results show SA-MLCS and SLA-MLCS use an order of magnitude less space and time than the state-of-the-art approximate algorithm MLCS-APP while finding better solutions. Compared to the state-of-the-art anytime algorithm Pro-MLCS, SA-MLCS and SLA-MLCS can solve an order of magnitude larger size instances. Furthermore, SLA-MLCS can find much better solutions than SA-MLCS on large size instances.
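The tractable two-sequence base case of the MLCS problem is the classic dynamic program below. The paper's graph-search anytime algorithms generalize this to many sequences, where the DP table grows exponentially and bounded-memory search becomes necessary; the sequences here are arbitrary examples.

```python
def lcs(a, b):
    """Classic O(mn) dynamic program for the two-sequence LCS.

    L[i][j] holds the LCS length of a[:i] and b[:j]; a backtrack over the
    table recovers one optimal subsequence.
    """
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            L[i + 1][j + 1] = (L[i][j] + 1 if a[i] == b[j]
                               else max(L[i][j + 1], L[i + 1][j]))
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("GATTACA", "GCATGCU"))  # "GATC", one LCS of length 4
```

With k sequences the table has a cell per k-tuple of positions, which is why exact MLCS is NP-hard and the paper instead searches a match-point graph under beam and space bounds.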
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
NASA Technical Reports Server (NTRS)
Chan, Hak-Wai; Yan, Tsun-Yee
1989-01-01
Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.
A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints
Xu, You; Chen, Yixin
2008-06-28
We present a novel parallel framework for solving large-scale continuous nonlinear optimization problems based on constraint partitioning. The framework distributes constraints and variables to parallel processors and uses an existing solver to handle the partitioned subproblems. In contrast to most previous decomposition methods that require either separability or convexity of constraints, our approach is based on a new constraint partitioning theory and can handle nonconvex problems with inseparable global constraints. We also propose a hypergraph partitioning method to recognize the problem structure. Experimental results show that the proposed parallel algorithm can efficiently solve some difficult test cases.
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-01-01
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness; this algorithm is based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Plenty of simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems.
ERIC Educational Resources Information Center
Parrino, Frank M.
2003-01-01
Interviews with school board members and administrators produced a list of suggestions for balancing a budget in hard times. Among these are changing calendars and schedules to reduce heating and cooling costs; sharing personnel; rescheduling some extracurricular activities; and forming cooperative agreements with other districts. (MLF)
ERIC Educational Resources Information Center
Kennedy, Mike
1999-01-01
Provides guidelines to help schools maintain hard floors and carpets, including special areas in schools and colleges that need attention and the elements needed to have a successful carpet-maintenance program. The importance of using heavy equipment to lessen time and effort is explained as are the steps maintenance workers can take to make the…
ERIC Educational Resources Information Center
Sturgeon, Julie
2008-01-01
Acting on information from students who reported seeing a classmate looking at inappropriate material on a school computer, school officials used forensics software to plunge the depths of the PC's hard drive, searching for evidence of improper activity. Images were found in a deleted Internet Explorer cache as well as deleted file space.…
ERIC Educational Resources Information Center
Berry, John N., III
2009-01-01
Roberta Stevens and Kent Oliver are campaigning hard for the presidency of the American Library Association (ALA). Stevens is outreach projects and partnerships officer at the Library of Congress. Oliver is executive director of the Stark County District Library in Canton, Ohio. They have debated, discussed, and posted web sites, Facebook pages,…
NASA Astrophysics Data System (ADS)
Cocco, S.; Monasson, R.
2001-08-01
The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated using statistical physics concepts and techniques related to phase transitions, growth processes and (real-space) renormalization flows. 3-SAT is a representative example of hard computational tasks; it consists in deciding whether a set of αN randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, such as the Davis-Putnam-Logemann-Loveland (DPLL) algorithm, perform a systematic search for a solution through a sequence of trials and errors represented by a search tree. The size of the search tree accounts for the computational complexity, i.e. the amount of computational effort required to achieve resolution. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio α of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1 - p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of α and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between satisfiable and unsatisfiable phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions. Our picture for the origin of complexity can be applied to other computational problems solved by branch and bound algorithms.
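The DPLL procedure analyzed above, unit propagation plus two-way branching whose recursive calls form the search tree, can be sketched minimally as follows. Pure-literal elimination and branching heuristics are omitted, and the example formulas are invented.

```python
def dpll(clauses):
    """Minimal DPLL satisfiability check for CNF over integer literals.

    clauses: list of tuples of nonzero ints; (1, -2) means x1 OR NOT x2.
    The search tree studied in the text is exactly the tree of these
    recursive calls.
    """
    def assign(cls, lit):
        """Simplify clauses under lit = True; None signals a contradiction."""
        out = []
        for c in cls:
            if lit in c:
                continue                      # clause already satisfied
            reduced = tuple(l for l in c if l != -lit)
            if not reduced:
                return None                   # empty clause derived
            out.append(reduced)
        return out

    while True:                               # unit propagation
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses = assign(clauses, unit)
        if clauses is None:
            return False
    if not clauses:
        return True
    var = abs(clauses[0][0])                  # branch on the first variable
    for lit in (var, -var):
        nxt = assign(clauses, lit)
        if nxt is not None and dpll(nxt):
            return True
    return False

print(dpll([(1, 2), (-1, 3), (-2, -3)]))  # True  (e.g. x1 = x3 = True, x2 = False)
print(dpll([(1,), (-1, 2), (-2,)]))       # False (unit propagation hits a conflict)
```

Note how the `reduced` tuple realizes the 3-SAT-to-2+p-SAT dynamics described in the text: each assignment shortens three-literal clauses into two-literal ones, moving the instance through the (α, p) phase diagram.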
ERIC Educational Resources Information Center
McNeil, Michele
2008-01-01
Hard-to-grasp dollar amounts are forcing real cuts in K-12 education at a time when the cost of fueling buses and providing school lunches is increasing and the demands of the federal No Child Left Behind Act still loom larger over states and districts. "One of the real challenges is to continue progress in light of the economy," said Gale Gaines,…
Melese, P.; CDF Collaboration
1997-06-01
We present results on diffractive production of hard processes in p̄p collisions at √s = 1.8 TeV at the Tevatron using the CDF detector. The signatures used to identify diffractive events are the forward rapidity gap and/or the detection of a recoil antiproton with high forward momentum. We have observed diffractive W-boson, dijet, and heavy quark production. We also present results on double-pomeron production of dijets.
ERIC Educational Resources Information Center
Mathews, Jay
2009-01-01
In 1994, fresh from a two-year stint with Teach for America, Mike Feinberg and Dave Levin inaugurated the Knowledge Is Power Program (KIPP) in Houston with an enrollment of 49 5th graders. By this Fall, 75 KIPP schools will be up and running, setting children from poor and minority families on a path to college through a combination of hard work,…
Mansur, Louis K; Bhattacharya, R; Blau, Peter Julian; Clemons, Art; Eberle, Cliff; Evans, H B; Janke, Christopher James; Jolly, Brian C; Lee, E H; Leonard, Keith J; Trejo, Rosa M; Rivard, John D
2010-01-01
High energy ion beam surface treatments were applied to a selected group of polymers. Of the six materials in the present study, four were thermoplastics (polycarbonate, polyethylene, polyethylene terephthalate, and polystyrene) and two were thermosets (epoxy and polyimide). The particular epoxy evaluated in this work is one of the resins used in formulating fiber reinforced composites for military helicopter blades. Measures of mechanical properties of the near surface regions were obtained by nanoindentation hardness and pin on disk wear. Attempts were also made to measure erosion resistance by particle impact. All materials were hardness tested. Pristine materials were very soft, having values in the range of approximately 0.1 to 0.5 GPa. Ion beam treatment increased hardness by up to 50 times compared to untreated materials. For reference, all materials were hardened to values higher than those typical of stainless steels. Wear tests were carried out on three of the materials, PET, PI and epoxy. On the ion beam treated epoxy no wear could be detected, whereas the untreated material showed significant wear.
COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS
H. B. HUNT; M. V. MARATHE; R. E. STEARNS
2001-06-01
Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_C(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85,LMP99,CF+93,CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97]. Keywords: NP-hardness; Approximation Algorithms; PSPACE-hardness; Quantified and Stochastic Constraint Satisfaction Problems.
Ultrasonic characterization of materials hardness
Badidi Bouda A; Benchaala; Alem
2000-03-01
In this paper, an experimental technique has been developed to measure velocities and attenuation of ultrasonic waves through a steel with a variable hardness. A correlation between ultrasonic measurements and steel hardness was investigated.
Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search.
Rajalakshmi, S; Maguteeswaran, R
2015-01-01
A hybrid computational intelligence algorithm, constructed by integrating the salient features of two different heuristic techniques, is proposed to solve the multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR, the task of determining an optimum route that satisfies a variety of necessary constraints, is a difficult problem in a MANET; it is NP-hard owing to the constant topology variation of MANETs. Thus a solution technique that addresses the challenges of the QoSR problem is needed. This paper proposes a hybrid algorithm that modifies the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, in which candidates learn from diversified search regions. Thus the CSA acts as the main search procedure, guided by the updating mechanism derived from DE; the combination is called tuned CSA (TCSA). Numerical simulations on MANETs demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with some existing techniques in the literature, establishing the superiority of the proposed method. PMID:26495429
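A minimal sketch of the DE-derived position update described above, applied to a toy continuous objective rather than the paper's QoS routing cost (the Lévy-flight component of CSA is omitted, and all names and parameter values are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # toy continuous objective standing in for the multiconstrained route cost
    return float(np.sum(x * x))

def de_update(pop, F=0.5, CR=0.9):
    """DE/rand/1-style position update: each candidate learns from three
    others drawn from diverse regions of the population; a trial position
    replaces the candidate only if its cost improves."""
    n, d = pop.shape
    new = pop.copy()
    for i in range(n):
        others = [j for j in range(n) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = a + F * (b - c)                              # differential mutation
        trial = np.where(rng.random(d) < CR, mutant, pop[i])  # crossover
        if cost(trial) < cost(new[i]):
            new[i] = trial
    return new

pop = rng.uniform(-5.0, 5.0, size=(20, 4))
init_best = min(cost(x) for x in pop)
for _ in range(100):
    pop = de_update(pop)
best = min(cost(x) for x in pop)
```

Because replacement is greedy, the best cost in the population is non-increasing from generation to generation, which is the property the hybrid relies on when the DE update steers the cuckoo search.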
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time is dependent on the machine to which it is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
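The randomized task ordering at the heart of RS, a topological sort whose outcome may be any precedence-respecting order, can be sketched as follows (the DAG and function names are illustrative, not from the paper):

```python
import random

def random_topological_order(tasks, preds, rng=random.Random(42)):
    """Randomized topological sort: at each step pick, uniformly at
    random, any task whose predecessors have all been scheduled.
    Assumes the precedence graph is acyclic."""
    done, order = set(), []
    remaining = set(tasks)
    while remaining:
        ready = [t for t in sorted(remaining) if preds.get(t, set()) <= done]
        t = rng.choice(ready)
        order.append(t)
        done.add(t)
        remaining.remove(t)
    return order

# DAG: A -> B, A -> C, B -> D, C -> D
preds = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
print(random_topological_order(["A", "B", "C", "D"], preds))
```

Every precedence-feasible order has positive probability of being produced, which is exactly the diversity RS exploits before its heuristic order-to-schedule mapping.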
Quiet planting in the locked constraints satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2009-01-01
We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and with first and second moment considerations; in particular, the connection with reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region, instances have, with high probability, a single satisfying assignment.
Sheinberg, H.
1983-07-26
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 wt % boron carbide and the remainder a metal mixture comprising from 70 to 90% tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 and 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Sheinberg, Haskell
1986-01-01
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 weight percent boron carbide and the remainder a metal mixture comprising from 70 to 90 percent tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 and 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Hard Exclusive Pion Leptoproduction
NASA Astrophysics Data System (ADS)
Kroll, Peter
2016-08-01
This talk reports on an analysis of hard exclusive leptoproduction of pions within the handbag approach. It is argued that recent measurements of this process performed by HERMES and CLAS clearly indicate the occurrence of strong contributions from transversely polarized photons. Within the handbag approach such γ*_T → π transitions are described by the transversity GPDs accompanied by twist-3 pion wave functions. It is shown that the handbag approach leads to results on cross sections and single-spin asymmetries in fair agreement with experiment. Predictions for other pseudoscalar meson channels are also briefly discussed.
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Arching in tapped deposits of hard disks.
Pugnaloni, Luis A; Valluzzi, Marcos G; Valluzzi, Lucas G
2006-05-01
We simulate the tapping of a bed of hard disks in a rectangular box by using a pseudodynamic algorithm. In these simulations, arches are unambiguously defined and we can analyze their properties as a function of the tapping amplitude. We find that an order-disorder transition occurs within a narrow range of tapping amplitudes as has been seen by others. Arches are always present in the system although they exhibit regular shapes in the ordered regime. Interestingly, an increase in the number of arches does not always correspond to a reduction in the packing fraction. This is in contrast with what is found in three-dimensional systems.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
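For concreteness, the thrust-bound relaxation reported in the literature on this algorithm can be summarized as follows (our notation, and a sketch rather than the paper's full statement): with ρ1 and ρ2 the lower and upper thrust bounds, n̂ the required pointing direction, and θ the pointing half-angle, the non-convex bound on the thrust T(t) is replaced using a slack variable Γ(t):

```latex
\rho_1 \le \|T(t)\| \le \rho_2
\;\;\longrightarrow\;\;
\|T(t)\| \le \Gamma(t), \qquad
\rho_1 \le \Gamma(t) \le \rho_2, \qquad
\hat{n}^{\mathsf{T}} T(t) \ge \Gamma(t)\cos\theta .
```

The relaxed feasible set is convex in (T, Γ): a norm bound plus linear inequalities. The lossless-convexification results cited in the abstract show that an optimal solution of the relaxed problem attains ‖T(t)‖ = Γ(t) almost everywhere, so it is also feasible, and optimal, for the original non-convex problem.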
Creating Positive Task Constraints
ERIC Educational Resources Information Center
Mally, Kristi K.
2006-01-01
Constraints are characteristics of the individual, the task, or the environment that mold and shape movement choices and performances. Constraints can be positive--encouraging proficient movements or negative--discouraging movement or promoting ineffective movements. Physical educators must analyze, evaluate, and determine the effect various…
Constraint Reasoning Over Strings
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Golden, Keith; Pang, Wanlin
2003-01-01
This paper discusses an approach to representing and reasoning about constraints over strings. We discuss how many string domains can often be concisely represented using regular languages, and how constraints over strings, and domain operations on sets of strings, can be carried out using this representation.
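One way to make the regular-language representation concrete (our sketch, not the paper's implementation): encode each string domain as a DFA and test whether two domains are consistent, i.e. share a string, via the product construction.

```python
from collections import deque

def intersect_nonempty(d1, d2):
    """Do two DFAs accept a common string?  Each DFA is given as
    (transitions, start state, accepting set), with transitions a
    dict state -> {char -> state}.  BFS over the product automaton."""
    (t1, s1, a1), (t2, s2, a2) = d1, d2
    seen = {(s1, s2)}
    q = deque([(s1, s2)])
    while q:
        p1, p2 = q.popleft()
        if p1 in a1 and p2 in a2:
            return True               # a common accepted string exists
        for ch in set(t1.get(p1, {})) & set(t2.get(p2, {})):
            nxt = (t1[p1][ch], t2[p2][ch])
            if nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return False

# Domain 1: strings over {a, b} ending in 'a'.  Domain 2: even length.
ends_a = ({0: {"a": 1, "b": 0}, 1: {"a": 1, "b": 0}}, 0, {1})
even   = ({0: {"a": 1, "b": 1}, 1: {"a": 0, "b": 0}}, 0, {0})
print(intersect_nonempty(ends_a, even))  # True: e.g. "ba" is in both
```

The same product construction yields the intersection automaton itself, which is how a regular-language domain can be narrowed by a constraint without enumerating strings.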
Credit Constraints in Education
ERIC Educational Resources Information Center
Lochner, Lance; Monge-Naranjo, Alexander
2012-01-01
We review studies of the impact of credit constraints on the accumulation of human capital. Evidence suggests that credit constraints have recently become important for schooling and other aspects of households' behavior. We highlight the importance of early childhood investments, as their response largely determines the impact of credit…
Coherent diffractive imaging: a new statistically regularized amplitude constraint
NASA Astrophysics Data System (ADS)
Dilanian, R. A.; Williams, G. J.; Whitehead, L. W.; Vine, D. J.; Peele, A. G.; Balaur, E.; McNulty, I.; Quiney, H. M.; Nugent, K. A.
2010-09-01
Statistical information about measurement errors is incorporated in an algorithm that reconstructs the image of an object from x-ray diffraction data. The distribution function of measurement errors is included directly into reconstruction processes using a statistically based amplitude constraint. The algorithm is tested using simulated and experimental data and is shown to yield high-quality reconstructions in the presence of noise. This approach can be generalized to incorporate experimentally determined measurement error functions into image reconstruction algorithms.
Approximate resolution of hard numbering problems
Bailleux, O.; Chabrier, J.J.
1996-12-31
We present a new method for estimating the number of solutions of constraint satisfaction problems. We use a stochastic forward checking algorithm for drawing a sample of paths from a search tree. With this sample, we compute two values related to the number of solutions of a CSP instance: first, an unbiased estimate; second, a lower bound that holds with arbitrarily low error probability. We describe applications to the Boolean satisfiability problem and the Queens problem, and give experimental results for these problems.
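The path-sampling estimator described here is in the spirit of Knuth's classic tree-size estimator; the following minimal version for counting n-queens solutions is our illustration, not the authors' code:

```python
import random

def estimate_queens(n, trials=2000, rng=random.Random(1)):
    """Unbiased estimate of the number of n-queens solutions: walk one
    random root-to-leaf path per trial, forward-checking the column and
    diagonal constraints and multiplying the branching factors as you
    go.  A dead end contributes 0; a completed placement contributes
    the product of branching factors."""
    total = 0.0
    for _ in range(trials):
        cols, est = [], 1
        for row in range(n):
            ok = [c for c in range(n)
                  if all(c != p and abs(c - p) != row - r
                         for r, p in enumerate(cols))]
            if not ok:
                est = 0
                break
            est *= len(ok)
            cols.append(rng.choice(ok))
        total += est
    return total / trials

print(estimate_queens(4))  # ≈ 2, the true 4-queens count
```

Each completed path of probability 1/(product of branching factors) is weighted by that product, so the expectation over trials equals the number of solutions exactly; the variance, as the paper's lower-bound machinery acknowledges, is the hard part.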
Total-variation regularization with bound constraints
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
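A toy 1-D version of the splitting idea, with a plain subgradient step standing in for the "existing TV solver" and clipping enforcing the box constraints (the paper's solvers, noise models, and reweighting are considerably more sophisticated; all parameter values here are illustrative):

```python
import numpy as np

def tv_subgrad(x):
    """Subgradient of the 1-D total variation sum_i |x[i+1] - x[i]|."""
    g = np.zeros_like(x)
    d = np.sign(np.diff(x))
    g[:-1] -= d
    g[1:] += d
    return g

def tv_denoise_box(y, lam=1.0, lo=0.0, hi=1.0, step=0.02, iters=500):
    """Splitting sketch: alternate a (sub)gradient step on the
    TV-regularized data fit with projection onto the box [lo, hi]."""
    x = np.clip(y, lo, hi)
    for _ in range(iters):
        x = x - step * ((x - y) + lam * tv_subgrad(x))  # TV solver step
        x = np.clip(x, lo, hi)                          # enforce the bounds
    return x

rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.3 * rng.standard_normal(40)
x = tv_denoise_box(y)
```

Decoupling the two steps is what lets an off-the-shelf TV routine be reused unchanged, which is the flexibility the abstract emphasizes.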
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
A fast full constraints unmixing method
NASA Astrophysics Data System (ADS)
Ye, Zhang; Wei, Ran; Wang, Qing Yan
2012-10-01
Mixed pixels are inevitable owing to the low spatial resolution of hyperspectral images (HSI). The linear spectral mixture model (LSMM) is a classical mathematical model relating the spectrum of a mixed pixel to the spectra of its individual components. Solving the LSMM, namely unmixing, is essentially a linearly constrained optimization problem, usually implemented as an iteration along a descent direction together with a stopping criterion that terminates the algorithm. This criterion must be set properly to balance the accuracy and speed of the solution. However, the criterion in existing algorithms is too strict, which may reduce the convergence rate. In this paper, by broadening the constraints in unmixing, a new stopping rule is proposed that accelerates convergence. Experiments measuring both runtime and iteration counts show that our method accelerates convergence at the cost of only a slight decrease in the quality of the result.
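The full constraints on LSMM abundances are non-negativity and sum-to-one, i.e. the abundance vector lives on the probability simplex. A standard way to impose both is projected gradient with a simplex projection; the sketch below is ours (a generic baseline, not the paper's accelerated stopping rule), with a made-up endmember matrix E:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def unmix(y, E, iters=500, step=None):
    """Fully constrained unmixing of y ≈ E @ a by projected gradient."""
    m = E.shape[1]
    if step is None:
        step = 1.0 / np.linalg.norm(E.T @ E, 2)  # 1 / Lipschitz constant
    a = np.full(m, 1.0 / m)
    for _ in range(iters):
        a = project_simplex(a - step * (E.T @ (E @ a - y)))
    return a

E = np.array([[0.9, 0.1, 0.3],
              [0.2, 0.8, 0.3],
              [0.1, 0.6, 0.9],
              [0.7, 0.2, 0.5]])   # hypothetical endmember spectra (4 bands, 3 materials)
a_true = np.array([0.5, 0.3, 0.2])
a_hat = unmix(E @ a_true, E)
```

Every iterate satisfies both constraints exactly, so a stopping rule only has to judge progress of the data fit, which is where the paper's broadened criterion intervenes.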
Exclusive, Hard Diffraction in QCD
NASA Astrophysics Data System (ADS)
Freund, Andreas
1999-03-01
In the first chapter we give an introduction to hard diffractive scattering in QCD to introduce basic concepts and terminology. In the second chapter we make predictions for the evolution of skewed parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA and solve the skewed GLAP evolution equations with a modified version of the CTEQ-package. In the third chapter, we discuss the algorithms used in the LO evolution program for skewed parton distributions in the DGLAP region, discuss the stability of the code and reproduce the LO diagonal evolution within less than 0.5% of the original CTEQ-code. In chapter 4, we show that factorization holds for the deeply virtual Compton scattering amplitude in QCD, up to power suppressed terms, to all orders in perturbation theory. In chapter 5, we demonstrate that perturbative QCD allows one to calculate the absolute cross section of diffractive, exclusive production of photons (DVCS) at large Q^2 at HERA, while the aligned jet model allows one to estimate the cross section for intermediate Q^2 ˜ 2 GeV^2. We find a significant DVCS counting rate for the current generation of experiments at HERA and a large azimuthal angle asymmetry for HERA kinematics. In the last chapter, we propose a new methodology of gaining shape fits to skewed parton distributions and, for the first time, to determine the ratio of the real to imaginary part of the DIS amplitude. We do this by using several recent fits to F_2(x,Q^2) to compute the asymmetry A for the combined DVCS and Bethe-Heitler cross section. In the appendix, we give an application of distributional methods as discussed abstractly in chapter 4.
Constraint Embedding Technique for Multibody System Dynamics
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations of the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra approach to formulating the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus, in essence, the new technique allows conversion of a system with closed-chain topology into an equivalent tree-topology system.
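The three-step correction (a)-(c) can be illustrated with dense matrices; the actual technique uses O(N) spatial-operator factorizations rather than explicit solves, and the mass matrix M, closure Jacobian J, and numbers below are our own toy example:

```python
import numpy as np

def closed_chain_accel(M, tau, J, c):
    """Closed-chain forward dynamics sketch:
    (a) free accelerations of the unconstrained tree-topology system,
    (b) correction (constraint) forces J^T lam enforcing J a + c = 0,
    (c) corrected accelerations."""
    a_free = np.linalg.solve(M, tau)                        # (a)
    Minv_Jt = np.linalg.solve(M, J.T)
    lam = np.linalg.solve(J @ Minv_Jt, -(J @ a_free + c))   # (b)
    return a_free + Minv_Jt @ lam                           # (c)

M = np.diag([2.0, 1.0, 3.0])       # toy mass matrix
tau = np.array([1.0, -2.0, 0.5])   # applied generalized forces
J = np.array([[1.0, -1.0, 0.0]])   # loop closure: a1 - a2 + c = 0
c = np.array([0.0])
a = closed_chain_accel(M, tau, J, c)
```

Steps (a) and (c) involve only tree-topology solves, so the only new cost is the small dense system in (b), whose size is the number of closure constraints.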
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Overview: Hard Rock Penetration
Dunn, J.C.
1992-01-01
The Hard Rock Penetration program is developing technology to reduce the costs of drilling and completing geothermal wells. Current projects include: lost circulation control, rock penetration mechanics, instrumentation, and industry/DOE cost shared projects of the Geothermal Drilling organization. Last year, a number of accomplishments were achieved in each of these areas. A new flow meter being developed to accurately measure drilling fluid outflow was tested extensively during Long Valley drilling. Results show that this meter is rugged, reliable, and can provide useful measurements of small differences in fluid inflow and outflow rates. By providing early indications of fluid gain or loss, improved control of blow-out and lost circulation problems during geothermal drilling can be expected. In the area of downhole tools for lost circulation control, the concept of a downhole injector for injecting a two-component, fast-setting cementitious mud was developed. DOE filed a patent application for this concept during FY 91. The design criteria for a high-temperature potassium, uranium, thorium logging tool featuring a downhole data storage computer were established, and a request for proposals was submitted to tool development companies. The fundamental theory of acoustic telemetry in drill strings was significantly advanced through field experimentation and analysis. A new understanding of energy loss mechanisms was developed.
General adaptive guidance using nonlinear programming constraint solving methods (FAST)
NASA Astrophysics Data System (ADS)
Skalecki, Lisa; Martin, Marc
An adaptive, general purpose, constraint solving guidance algorithm called FAST (Flight Algorithm to Solve Trajectories) has been developed by the authors in response to the requirements for the Advanced Launch System (ALS). The FAST algorithm can be used for all mission phases for a wide range of Space Transportation Vehicles without code modification because of the general formulation of the nonlinear programming (NLP) problem and the general trajectory simulation used to predict constraint values. The approach allows on-board re-targeting for severe weather and changes in payload or mission parameters, increasing flight reliability and dependability while reducing the amount of pre-flight analysis that must be performed. The algorithm is described in general in this paper. Three-degree-of-freedom simulation results are presented for application of the algorithm to ascent and reentry phases of an ALS mission, and to Mars aerobraking. Flight processor CPU requirement data are also shown.
Measuring the Hardness of Minerals
ERIC Educational Resources Information Center
Bushby, Jessica
2005-01-01
The author discusses Moh's hardness scale, a comparative scale for minerals, whereby the softest mineral (talc) is placed at 1 and the hardest mineral (diamond) is placed at 10, with all other minerals ordered in between, according to their hardness. Development history of the scale is outlined, as well as a description of how the scale is used…
Kirk, R.L.
1987-01-01
Thermal evolution of Ganymede from a hot start is modeled. On cooling, ice I forms above the liquid H2O and dense ices at higher entropy below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H2 (Jupiter, Saturn) and H2O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e., reconstruction of a surface from its image, is formulated in terms of finite elements, and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.
Constraint algebra in bigravity
Soloviev, V. O.
2015-07-15
The number of degrees of freedom in bigravity theory is found for a potential of general form and also for the potential proposed by de Rham, Gabadadze, and Tolley (dRGT). This aim is pursued via constructing a Hamiltonian formalism and studying the Poisson algebra of constraints. A general potential leads to a theory featuring four first-class constraints generated by general covariance. The vanishing of the respective Hessian is a crucial property of the dRGT potential, and this leads to the appearance of two additional second-class constraints and, hence, to the exclusion of a superfluous degree of freedom, that is, the Boulware-Deser ghost. The use of a method that permits avoiding an explicit expression for the dRGT potential is a distinctive feature of the present study.
Beta Backscatter Measures the Hardness of Rubber
NASA Technical Reports Server (NTRS)
Morrissey, E. T.; Roje, F. N.
1986-01-01
Nondestructive testing method determines hardness, on Shore scale, of room-temperature-vulcanizing silicone rubber. Measures backscattered beta particles; backscattered radiation count directly proportional to Shore hardness. Test set calibrated with specimen, Shore hardness known from mechanical durometer test. Specimen of unknown hardness tested, and radiation count recorded. Count compared with known sample to find Shore hardness of unknown.
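The proportional calibration this record describes can be sketched in a few lines; all numbers here are illustrative assumptions, not values from the actual test set.

```python
# Sketch of the calibration arithmetic (illustrative, assumed numbers):
# with the backscatter count directly proportional to Shore hardness,
# one specimen of known hardness fixes the proportionality constant.
known_hardness = 40.0      # Shore hardness from a durometer test (assumed)
known_count = 12000.0      # backscatter count for that specimen (assumed)
unknown_count = 15000.0    # count measured on the unknown specimen (assumed)

k = known_hardness / known_count       # counts-to-hardness factor
unknown_hardness = k * unknown_count   # hardness of the unknown, Shore scale
```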
Constraints as a destriping tool for Hires images
NASA Technical Reports Server (NTRS)
Cao, Yu; Prince, Thomas A.
1994-01-01
Images produced by the Maximum Correlation Method (MCM) sometimes suffer from visible striping artifacts, especially in areas of extended sources. Possible causes are different baseline levels and calibration errors in the detectors. We incorporated these factors into the MCM algorithm and tested the effects of different constraints on the output image. The result shows significant visual improvement over the standard MCM method. In some areas the new images show intelligible structures that are otherwise corrupted by striping artifacts, and the removal of these artifacts could enhance the performance of object classification algorithms. The constraints were also tested on low-surface-brightness areas and were found to be effective in reducing the noise level.
Fault-Tolerant, Radiation-Hard DSP
NASA Technical Reports Server (NTRS)
Czajkowski, David
2011-01-01
Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to performance through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables the use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and, in some cases, suffer significant speed reductions. The benefits of COTS high
Image restoration by a novel method of parallel projection onto constraint sets.
Kotzer, T; Cohen, N; Shamir, J
1995-05-15
Image restoration from degraded observations and from properties that the image is supposed to satisfy has been approached by the method of projections onto convex constraint sets. Previous attempts have incorporated only partially the knowledge that we possess about the image to be restored because of difficulties in the implementation of some of the projections. In the parallel-projection algorithm presented here the a priori knowledge can be fully exploited. Moreover, the algorithm operates well even if the constraints are nonconvex and/or if the constraints have an empty intersection, without a limitation on the (finite) number of constraint sets.
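The parallel-projection idea can be illustrated with a minimal sketch; the two convex constraint sets here (a box constraint and a known-mean constraint) are our own toy choices, not the image-restoration constraints of the paper. Each iterate moves to the average of its projections onto all sets.

```python
import numpy as np

# Toy parallel-projection iteration: replace the iterate by the average
# of its projections onto every constraint set simultaneously.

def project_box(x, lo=0.0, hi=1.0):
    # Projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def project_mean(x, target=0.5):
    # Projection onto the affine set {x : mean(x) = target}
    return x + (target - x.mean())

def parallel_projection(x, n_iter=200):
    for _ in range(n_iter):
        x = 0.5 * (project_box(x) + project_mean(x))
    return x

x0 = np.array([2.0, -1.0, 0.7, 0.3])
x = parallel_projection(x0)   # lands in the intersection of both sets
```

When the convex sets have a nonempty intersection, the averaged iterates settle on a common point; for empty intersections they tend toward a minimizer of the summed squared distances, which is one reason the parallel form tolerates inconsistent constraint sets.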
Generalized arc consistency for global cardinality constraint
Regin, J.C.
1996-12-31
A global cardinality constraint (gcc) is specified in terms of a set of variables X = {x1, ..., xp} which take their values in a subset of V = {v1, ..., vd}. It constrains the number of times a value vi ∈ V is assigned to a variable in X to be in an interval [li, ci]. Cardinality constraints have proved very useful in many real-life problems, such as scheduling, timetabling, or resource allocation. A gcc is more general than a constraint of difference, which requires each interval to be [0, 1]. In this paper, we present an efficient way of implementing generalized arc consistency for a gcc. The algorithm we propose is based on a new theorem of flow theory. Its space complexity is O(|X| × |V|) and its time complexity is O(|X|² × |V|). We also show how this algorithm can efficiently be combined with other filtering techniques.
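The semantics of generalized arc consistency for a gcc can be illustrated with a brute-force filter. This is our own toy code for tiny instances only: it enumerates all assignments and is exponential, whereas the flow-based algorithm of the paper achieves the stated polynomial complexities.

```python
from itertools import product

def satisfies_gcc(assignment, bounds):
    # bounds: {value: (lo, hi)} occurrence intervals for each value
    for v, (lo, hi) in bounds.items():
        c = sum(1 for a in assignment if a == v)
        if not (lo <= c <= hi):
            return False
    return True

def gac_filter(domains, bounds):
    # Keep exactly the (variable, value) pairs supported by at least one
    # satisfying complete assignment: the definition of generalized
    # arc consistency, computed here by exhaustive enumeration.
    new_domains = [set() for _ in domains]
    for assignment in product(*domains):
        if satisfies_gcc(assignment, bounds):
            for i, a in enumerate(assignment):
                new_domains[i].add(a)
    return new_domains

domains = [{'a', 'b'}, {'a', 'b'}, {'a'}]
bounds = {'a': (1, 2), 'b': (1, 2)}
filtered = gac_filter(domains, bounds)
```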
Melting of polydisperse hard disks.
Pronk, Sander; Frenkel, Daan
2004-06-01
The melting of a polydisperse hard-disk system is investigated by Monte Carlo simulations in the semigrand canonical ensemble. This is done in the context of possible continuous melting by a dislocation-unbinding mechanism, as an extension of the two-dimensional hard-disk melting problem. We find that while there is pronounced fractionation in polydispersity, the apparent density-polydispersity gap does not increase in width, contrary to 3D polydisperse hard spheres. The point where the Young's modulus is low enough for the dislocation unbinding to occur moves with the apparent melting point, but stays within the density gap, just like for the monodisperse hard-disk system. Additionally, we find that throughout the accessible polydispersity range, the bound dislocation-pair concentration is high enough to affect the dislocation-unbinding melting as predicted by Kosterlitz, Thouless, Halperin, Nelson, and Young.
NASA Technical Reports Server (NTRS)
Vardi, A.
1984-01-01
The minimax representation min t s.t. f_i(x) - t ≤ 0 for all i is examined. An active set strategy is designed that partitions the functions into three classes: active, semi-active, and non-active. This technique helps prevent the zigzagging that often occurs when an active set strategy is used. Some of the inequality constraints are handled with slack variables. A trust region strategy is also used, in which at each iteration there is a sphere around the current point within which the local approximation of the function is trusted. The algorithm is implemented in a computer program, and numerical results are provided.
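The slack-variable view of the minimax problem can be sketched on a toy instance of our own (solved by ternary search rather than the paper's trust-region method): the smallest feasible t at a given x is g(x) = max_i f_i(x), so minimizing t subject to f_i(x) - t ≤ 0 is minimizing g.

```python
# Toy minimax instance: f_1(x) = x, f_2(x) = 1 - x (our own choice).
fs = [lambda x: x, lambda x: 1.0 - x]

def g(x):
    # Smallest t satisfying every constraint f_i(x) - t <= 0 at this x
    return max(f(x) for f in fs)

def ternary_search(lo, hi, n_iter=200):
    # Minimize the convex one-dimensional function g on [lo, hi]
    for _ in range(n_iter):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x_star = ternary_search(-5.0, 5.0)   # optimum at x = 0.5
t_star = g(x_star)                   # minimax value 0.5
```

At the optimum both functions attain the value t_star, i.e. both constraints are active, which is the situation the abstract's active/semi-active/non-active classification is designed to track.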
Prediction of binary hard-sphere crystal structures.
Filion, Laura; Dijkstra, Marjolein
2009-04-01
We present a method based on a combination of a genetic algorithm and Monte Carlo simulations to predict close-packed crystal structures in hard-core systems. We employ this method to predict the binary crystal structures in a mixture of large and small hard spheres with various stoichiometries and diameter ratios between 0.4 and 0.84. In addition to known binary hard-sphere crystal structures similar to NaCl and AlB2, we predict additional crystal structures with the symmetry of CrB, γCuTi, αIrV, HgBr2, AuTe2, Ag2Se, and various structures for which an atomic analog was not found. In order to determine the crystal structures at infinite pressures, we calculate the maximum packing density as a function of size ratio for the crystal structures predicted by our GA using a simulated annealing approach. PMID:19518387
General heuristics algorithms for solving capacitated arc routing problem
NASA Astrophysics Data System (ADS)
Fadzli, Mohammad; Najwa, Nurul; Masran, Hafiz
2015-05-01
In this paper, we determine a near-optimum solution for the capacitated arc routing problem (CARP). The NP-hard CARP is a special graph-theoretic problem that arises from street services such as residential waste collection and road maintenance. The purpose of the CARP model and its solution techniques is to find the optimum (or near-optimum) routing cost for a fleet of vehicles involved in the operation; finding minimum-cost routes is essential to reducing the overall vehicle-related operating cost. In this article, we provide a combination of heuristic algorithms to solve a real case of CARP in waste collection as well as benchmark instances. These heuristics work as a central engine for finding initial or near-optimum solutions in the search space without violating the preset constraints. The results clearly show that these heuristic algorithms provide good initial solutions in both real-life and benchmark instances.
Baryon Spectrum Analysis using Covariant Constraint Dynamics
NASA Astrophysics Data System (ADS)
Whitney, Joshua; Crater, Horace
2012-03-01
The energy spectrum of the baryons is determined by treating each of them as a three-body system with the interacting forces coming from a set of two-body potentials that depend on both the distance between the quarks and the spin and orbital angular momentum coupling terms. The Two Body Dirac equations of constraint dynamics derived by Crater and Van Alstine, matched with the quasipotential formalism of Todorov as the underlying two-body formalism, are used, as well as the three-body constraint formalism of Sazdjian, to integrate the three two-body equations into a single relativistically covariant three-body equation for the bound state energies. The results are analyzed and compared to experiment using a best-fit method and several different algorithms, including a gradient approach and a Monte Carlo method. Results for all well-known baryons are presented and compared to experiment, with good accuracy.
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Dynamical Constraints on Exoplanets
NASA Astrophysics Data System (ADS)
Horner, Jonti; Wittenmyer, Robert A.; Tinney, Chris; Hinse, Tobias C.; Marshall, Jonathan P.
2014-01-01
Dynamical studies of new exoplanet systems are a critical component of the discovery and characterisation process. Such studies can provide firmer constraints on the parameters of the newly discovered planets, and may even reveal that the proposed planets do not stand up to dynamical scrutiny. Here, we demonstrate how dynamical studies can assist the characterisation of such systems through two examples: QS Virginis and HD 73526.
Hiding quiet solutions in random constraint satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2008-01-01
We study constraint satisfaction problems on the so-called planted random ensemble. We show that for a certain class of problems, e.g., graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions and the easy-hard-easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid-glass-solid phenomenology.
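The planted ensemble for graph coloring can be sketched as follows (illustrative code with our own naming): fix a random coloring first, then add only edges whose endpoints received different colors, so the planted coloring is a quiet solution by construction.

```python
import random

def planted_coloring_instance(n, m, q, seed=0):
    # Plant a random q-coloring of n vertices, then sample m edges that
    # are all consistent with it, so the planted coloring satisfies
    # every constraint of the generated instance.
    rng = random.Random(seed)
    colors = [rng.randrange(q) for _ in range(n)]
    edges = set()
    while len(edges) < m:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and colors[u] != colors[v]:
            edges.add((min(u, v), max(u, v)))
    return colors, sorted(edges)

colors, edges = planted_coloring_instance(n=20, m=40, q=3)
```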
Practical engineering of hard spin-glass instances
NASA Astrophysics Data System (ADS)
Marshall, Jeffrey; Martin-Mayor, Victor; Hen, Itay
2016-07-01
Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since then become a coveted, albeit elusive, goal. Recent studies have shown that the so far inconclusive results, regarding a quantum enhancement, may have been partly due to the benchmark problems used being unsuitable. In particular, these problems had inherently too simple a structure, allowing for both traditional resources and quantum annealers to solve them with no special efforts. The need therefore has arisen for the generation of harder benchmarks which would hopefully possess the discriminative power to separate classical scaling of performance with problem size from quantum scaling. We introduce here a practical technique for the engineering of extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm that solves it. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
The enigma of nonholonomic constraints
NASA Astrophysics Data System (ADS)
Flannery, M. R.
2005-03-01
The problems associated with the modification of Hamilton's principle to cover nonholonomic constraints by the application of the multiplier theorem of variational calculus are discussed. The reason for the problems is subtle and is discussed, together with the reason why the proper account of nonholonomic constraints is outside the scope of Hamilton's variational principle. However, linear velocity constraints remain within the scope of D'Alembert's principle. A careful and comprehensive analysis facilitates the resolution of the puzzling features of nonholonomic constraints.
Unraveling Quantum Annealers using Classical Hardness.
Martin-Mayor, Victor; Hen, Itay
2015-01-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize 'temperature chaos' as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257
NASA Technical Reports Server (NTRS)
Tielking, John T.
1989-01-01
Two algorithms for obtaining static contact solutions are described in this presentation. Although they were derived for contact problems involving specific structures (a tire and a solid rubber cylinder), they are sufficiently general to be applied to other shell-of-revolution and solid-body contact problems. The shell-of-revolution contact algorithm is a method of obtaining a point load influence coefficient matrix for the portion of shell surface that is expected to carry a contact load. If the shell is sufficiently linear with respect to contact loading, a single influence coefficient matrix can be used to obtain a good approximation of the contact pressure distribution. Otherwise, the matrix will be updated to reflect nonlinear load-deflection behavior. The solid-body contact algorithm utilizes a Lagrange multiplier to include the contact constraint in a potential energy functional. The solution is found by applying the principle of minimum potential energy. The Lagrange multiplier is identified as the contact load resultant for a specific deflection. At present, only frictionless contact solutions have been obtained with these algorithms. A sliding tread element has been developed to calculate friction shear force in the contact region of the rolling shell-of-revolution tire model.
Structure Constraints in a Constraint-Based Planner
NASA Technical Reports Server (NTRS)
Pang, Wan-Lin; Golden, Keith
2004-01-01
In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.
Exclusive, hard diffraction in QCD
NASA Astrophysics Data System (ADS)
Freund, Andreas
In the first chapter we give an introduction to hard diffractive scattering in QCD to introduce basic concepts and terminology, thus setting the stage for the following chapters. In the second chapter we make predictions for nondiagonal parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA, solve the nondiagonal GLAP evolution equations with a modified version of the CTEQ-package and comment on the range of applicability of the LLA in the asymmetric regime. We show that the nondiagonal gluon distribution g(x₁, x₂, t, μ²) can be well approximated at small x by the conventional gluon density xG(x, μ²). In the third chapter, we discuss the algorithms used in the LO evolution program for nondiagonal parton distributions in the DGLAP region and discuss the stability of the code. Furthermore, we demonstrate that we can reproduce the case of the LO diagonal evolution within less than 0.5% of the original code as developed by the CTEQ-collaboration. In chapter 4, we show that factorization holds for the deeply virtual Compton scattering amplitude in QCD, up to power suppressed terms, to all orders in perturbation theory. Furthermore, we show that the virtuality of the produced photon does not influence the general theorem. In chapter 5, we demonstrate that perturbative QCD allows one to calculate the absolute cross section of diffractive exclusive production of photons at large Q² at HERA, while the aligned jet model allows one to estimate the cross section for intermediate Q² ~ 2 GeV². Furthermore, we find that the imaginary part of the amplitude for the production of real photons is larger than the imaginary part of the corresponding DIS amplitude, leading to predictions of a significant counting rate for the current generation of experiments at HERA. We also find a large azimuthal angle asymmetry in ep scattering for HERA kinematics which allows one to directly measure the real part of the DVCS amplitude and hence the
Teaching Database Design with Constraint-Based Tutors
ERIC Educational Resources Information Center
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
Numerical methods for portfolio selection with bounded constraints
NASA Astrophysics Data System (ADS)
Yin, G.; Jin, Hanqing; Jin, Zhuo
2009-11-01
This work develops an approximation procedure for portfolio selection with bounded constraints. Based on the Markov chain approximation techniques, numerical procedures are constructed for the utility optimization task. Under simple conditions, the convergence of the approximation sequences to the wealth process and the optimal utility function is established. Numerical examples are provided to illustrate the performance of the algorithms.
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem after analyzing it on the basis of AFSA theory. Experiment results show that the HAFSA is a rapid and efficient algorithm for winner determination. Compared with the Ant Colony Optimization algorithm, it shows good performance and has broad application prospects.
ERIC Educational Resources Information Center
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
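A minimal sketch of the allocation step (not the authors' framework; the effectiveness matrix and limits are assumed numbers): a least-squares solve maps commanded moments to effector deflections, with naive clipping standing in for position constraints.

```python
import numpy as np

# Assumed control effectiveness matrix B: 2 moments, 3 redundant effectors.
B = np.array([[1.0, 0.5, -0.5],
              [0.0, 1.0,  1.0]])
m_des = np.array([0.8, 0.4])        # commanded moments (assumed)
u_min, u_max = -0.5, 0.5            # effector position limits (assumed)

# Minimum-norm least-squares allocation, then clip to position limits.
u, *_ = np.linalg.lstsq(B, m_des, rcond=None)
u = np.clip(u, u_min, u_max)
m_achieved = B @ u                  # moments actually produced
```

Naive clipping degrades the achieved moment (here the first component falls short of the commanded 0.8), which is precisely why optimal control allocation folds position, rate, and load constraints into the optimization itself rather than applying them afterwards.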
WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations
NASA Astrophysics Data System (ADS)
Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi
We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
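The Dijkstra-based baseline that the Steiner-tree algorithms are compared against can be sketched as a union of shortest paths (our own simplified code, ignoring wavelength assignment and delay constraints).

```python
import heapq

def dijkstra(graph, src):
    # Standard Dijkstra over an adjacency list {node: [(neighbor, weight)]}
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def multicast_tree(graph, src, dests):
    # Multicast tree = union of shortest paths from src to each destination
    _, prev = dijkstra(graph, src)
    edges = set()
    for t in dests:
        while t != src:
            edges.add((prev[t], t))
            t = prev[t]
    return edges

graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 1), ('C', 5)],
    'B': [('C', 1)],
    'C': [],
}
tree = multicast_tree(graph, 'S', ['B', 'C'])
```

Because each destination gets its individual shortest path, this baseline keeps per-destination delay low, but it shares links less aggressively than a Steiner-style tree, which is the trade-off the WST and CWST algorithms are designed to balance.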
NASA Astrophysics Data System (ADS)
Overgaard Rasmussen, Christine
2016-07-01
We present an overview of the options for diffraction implemented in the general-purpose event generator Pythia 8 [1]. We review the existing model for soft diffraction and present a new model for hard diffraction. Both models use the Pomeron approach pioneered by Ingelman and Schlein, factorising the diffractive cross section into a Pomeron flux and a Pomeron PDF, with several choices for both implemented in Pythia 8. The model of hard diffraction is implemented as a part of the multiparton interactions (MPI) framework, thus introducing a dynamical gap survival probability that explicitly breaks factorisation.
Hardness of ion implanted ceramics
Oliver, W.C.; McHargue, C.J.; Farlow, G.C.; White, C.W.
1985-01-01
It has been established that the wear behavior of ceramic materials can be modified through ion implantation. Studies have been done to characterize the effect of implantation on the structure and composition of ceramic surfaces. To understand how these changes affect the wear properties of the ceramic, other mechanical properties must be measured. To accomplish this, a commercially available ultra low load hardness tester has been used to characterize Al2O3 with different implanted species and doses. The hardness of the base material is compared with the highly damaged crystalline state as well as the amorphous material.
A Constraint-Based Planner for Data Production
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Golden, Keith
2005-01-01
This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. This algorithm performs backtracking at two nested levels: the outer backtracking follows the structure of the planning graph to select planner subgoals and actions to achieve them, and the inner backtracking operates inside a subproblem associated with a selected action to find action parameter values. We show that this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.
Constraints influencing sports wheelchair propulsion performance and injury risk
2013-01-01
The Paralympic Games are the pinnacle of sport for many athletes with a disability. A potential issue for many wheelchair athletes is how to train hard to maximise performance while also reducing the risk of injuries, particularly to the shoulder due to the accumulation of stress placed on this joint during activities of daily living, training and competition. The overall purpose of this narrative review was to use the constraints-led approach of dynamical systems theory to examine how various constraints acting upon the wheelchair-user interface may alter hand rim wheelchair performance during sporting activities, and to a lesser extent, their injury risk. As we found no studies involving Paralympic athletes that have directly utilised the dynamical systems approach to interpret their data, we have used this approach to select some potential constraints and discussed how they may alter wheelchair performance and/or injury risk. Organism constraints examined included player classifications, wheelchair setup, training and intrinsic injury risk factors. Task constraints examined the influence of velocity and types of locomotion (court sports vs racing) in wheelchair propulsion, while environmental constraints focused on forces that tend to oppose motion such as friction and surface inclination. Finally, the ecological validity of the research studies assessing wheelchair propulsion was critiqued prior to recommendations for practice and future research being given. PMID:23557065
Adaptive laser link reconfiguration using constraint propagation
NASA Technical Reports Server (NTRS)
Crone, M. S.; Julich, P. M.; Cook, L. M.
1993-01-01
This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BP's and to ground entry points. Long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BP's based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-sight data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris developed COnstraint Propagation Expert System (COPES) Shell, developed initially on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) Algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of Constraint Propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES. This is described using Data Flow Diagrams, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. The generation of simulation data is described along with its interface via COPES to the Harris developed View Net graphical tool for visual analysis of communications
Evaluation of Open-Source Hard Real Time Software Packages
NASA Technical Reports Server (NTRS)
Mattei, Nicholas S.
2004-01-01
Reliable software is, at times, hard to find. No piece of software can be guaranteed to work in every situation that may arise during its use here at Glenn Research Center or in space. The job of the Software Assurance (SA) group in the Risk Management Office is to rigorously test the software in an effort to ensure it matches the contract specifications. In some cases the SA team also researches new alternatives for selected software packages. This testing and research is an integral part of the department of Safety and Mission Assurance. Real time operation, in reference to a computer system, is a particular style of handling the timing and manner in which inputs and outputs are processed. A real time system executes these commands and the appropriate processing within a defined timing constraint. Within this definition there are two further classifications of real time systems: hard and soft. A soft real time system is one in which, if the particular timing constraints are not rigidly met, there will be no critical results. On the other hand, a hard real time system is one in which, if the timing constraints are not met, the results could be catastrophic. An example of a soft real time system is a DVD decoder: if a particular piece of data from the input is not decoded and displayed on the screen at exactly the correct moment, nothing critical will come of it; the user may not even notice. However, a hard real time system is needed to control the timing of fuel injection or steering on the Space Shuttle; a delay of even a fraction of a second could be catastrophic in such a complex system. The current real time system employed by most NASA projects is Wind River's VxWorks operating system. This is a proprietary operating system that can be configured to work with many of NASA's needs and it provides very accurate and reliable hard real time performance. The downside is that since it is a proprietary operating system it is also costly to implement. The prospect of
Structure of hard particle fluids near a hard wall. II. yw(z) for hard spheres
NASA Astrophysics Data System (ADS)
Labik, S.; Smith, William R.; Speedy, Robin J.
1988-02-01
Predictions of the wall-cavity correlation function yw(z) for hard spheres against a hard wall are tested using the treatment that Smith and Speedy developed and examined for the case of hard disks in part I of this series, as well as an extension of this approach using an alternative procedure. yw(z) in the range 0≤z≤1 may be accurately predicted using only the thermodynamic properties of the bulk fluid, for which precise expressions are available. These predictions are tested by determining yw(z) and the cavity concentration profile nwo(z) in a computer simulation study. We also derive a new integral equation relating yw(z) near the wall to its values just outside the wall and illustrate this in examining the consistency of our computer simulation results.
Asteroseismic constraints for Gaia
NASA Astrophysics Data System (ADS)
Creevey, O. L.; Thévenin, F.
2012-12-01
Distances from the Gaia mission will no doubt improve our understanding of stellar physics by providing an excellent constraint on the luminosity of the star. However, it is also clear that high precision stellar properties from, for example, asteroseismology, will provide a needed input constraint in order to calibrate the methods that Gaia will use, e.g. stellar models or GSP_Phot. For solar-like stars (F, G, K IV/V), asteroseismic data delivers at least two very important quantities: (1) the average large frequency separation < Δ ν > and (2) the frequency corresponding to the maximum of the modulated-amplitude spectrum ν_{max}. Both of these quantities are related directly to stellar parameters (radius and mass) and in particular to their combinations (gravity and density). We show how the precision in < Δ ν >, ν_{max}, and the atmospheric parameters T_{eff} and [Fe/H] affects the determination of gravity (log g) for a sample of well-known stars. We find that log g can be determined to within less than 0.02 dex accuracy for our sample while considering precisions in the data expected for V ~ 12 stars from Kepler data. We also derive masses and radii which are accurate to within 1σ of the accepted values. This study validates the subsequent use of all of the available asteroseismic data on solar-like stars from the Kepler field (>500 IV/V stars) in order to provide a very important constraint for Gaia calibration of GSP_Phot through the use of log g. We note that while we concentrate on IV/V stars, both the CoRoT and Kepler fields contain asteroseismic data on thousands of giant stars which will also provide useful calibration measures.
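The gravity determination described in this abstract rests on the standard ν_max scaling relation, ν_max ∝ g/√T_eff. A minimal sketch of that relation follows; the solar reference values are conventional assumptions for illustration, not values quoted in the abstract:

```python
import math

# Assumed solar reference values (illustrative, not from the abstract)
NU_MAX_SUN = 3090.0   # muHz
TEFF_SUN = 5777.0     # K
LOGG_SUN = 4.44       # dex (cgs)

def logg_from_numax(nu_max, teff):
    """Surface gravity from the nu_max scaling relation:
    nu_max / nu_max_sun = (g / g_sun) * (Teff / Teff_sun)**(-1/2)."""
    return (LOGG_SUN
            + math.log10(nu_max / NU_MAX_SUN)
            + 0.5 * math.log10(teff / TEFF_SUN))

# Solar inputs recover the solar gravity
print(round(logg_from_numax(3090.0, 5777.0), 2))  # 4.44
```

Because log g depends only logarithmically on ν_max and T_eff, modest observational errors propagate weakly, which is consistent with the sub-0.02 dex precision quoted above.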
Superresolution via sparsity constraints
NASA Technical Reports Server (NTRS)
Donoho, David L.
1992-01-01
The problem of recovering a measure mu supported on a lattice of span Delta is considered under the condition that measurements are only available concerning the Fourier Transform at frequencies of Omega or less. If Omega is much smaller than the Nyquist frequency pi/Delta and the measurements are noisy, then stable recovery of mu is generally impossible. It is shown here that if, in addition, it is known that mu satisfies certain sparsity constraints, then stable recovery is possible. This finding validates practical efforts in spectroscopy, seismic prospecting, and astronomy to provide superresolution by imposing support limitations in reconstruction.
Performance constraints in decathletes.
Van Damme, Raoul; Wilson, Robbie S; Vanhooydonck, Bieke; Aerts, Peter
2002-02-14
Physical performance by vertebrates is thought to be constrained by trade-offs between antagonistic pairs of ecologically relevant traits and between conflicting specialist and generalist phenotypes, but there is surprisingly little evidence to support this reasoning. Here we analyse the performance of world-class athletes in standardized decathlon events and find that it is subject to both types of trade-off, after correction has been made for differences between athletes in general ability across all 10 events. These trade-offs may have imposed important constraints on the evolution of physical performance in humans and other vertebrates. PMID:11845199
Dual-Byte-Marker Algorithm for Detecting JFIF Header
NASA Astrophysics Data System (ADS)
Mohamad, Kamaruddin Malik; Herawan, Tutut; Deris, Mustafa Mat
The use of an efficient algorithm to detect JPEG files is vital to reduce the time taken for analyzing the ever increasing data on hard drives or in physical memory. In a previous paper, a single-byte-marker algorithm was proposed for header detection. In this paper, another novel header detection algorithm called dual-byte-marker is proposed. Based on experiments done on images from hard disks, physical memory and the data set from the DFRWS 2006 Challenge, results showed that the dual-byte-marker algorithm gives better performance, with better execution time for header detection as compared to the single-byte-marker.
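The abstract does not spell out the marker logic itself, but a JFIF file begins with a fixed byte signature: the SOI marker (FF D8), the APP0 marker (FF E0), and the "JFIF" identifier. A hypothetical sketch of scanning raw media for that signature (not the paper's dual-byte-marker algorithm):

```python
def is_jfif_header(buf: bytes) -> bool:
    # SOI marker (FF D8) followed by APP0 (FF E0) and the "JFIF\0" identifier,
    # which sits after the 2-byte APP0 segment length
    return (len(buf) >= 11
            and buf[0:2] == b"\xff\xd8"
            and buf[2:4] == b"\xff\xe0"
            and buf[6:11] == b"JFIF\x00")

def find_jfif_offsets(data: bytes):
    """Scan a raw byte stream (e.g. a disk or memory image) for JFIF headers."""
    return [i for i in range(len(data) - 10)
            if is_jfif_header(data[i:i + 11])]

sample = b"\x00" * 5 + b"\xff\xd8\xff\xe0\x00\x10JFIF\x00" + b"\x00" * 5
print(find_jfif_offsets(sample))  # [5]
```

A marker-based scan like this touches each byte only a constant number of times, which is why marker ordering and grouping (single- vs dual-byte) dominates execution time on large images.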
Cosmological constraints on coupled dark energy
NASA Astrophysics Data System (ADS)
Yang, Weiqiang; Li, Hang; Wu, Yabo; Lu, Jianbo
2016-10-01
The coupled dark energy model provides a possible approach to mitigating the coincidence problem of the cosmological standard model. Here, the coupling term is assumed to be Q̄ = 3Hξ_x ρ̄_x, which is related to the interaction rate and the energy density of dark energy. We derive the background and perturbation evolution equations for several coupled models. Then, we test these models against currently available cosmic observations, which include cosmic microwave background radiation from Planck 2015, baryon acoustic oscillations, type Ia supernovae, fσ8(z) data points from redshift-space distortions, and weak gravitational lensing. The constraint results tell us there is no evidence of interaction at the 2σ level, and it is very hard to distinguish the different coupled models from one another.
TRACON Aircraft Arrival Planning and Optimization Through Spatial Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Bergh, Christopher P.; Krzeczowski, Kenneth J.; Davis, Thomas J.; Denery, Dallas G. (Technical Monitor)
1995-01-01
A new aircraft arrival planning and optimization algorithm has been incorporated into the Final Approach Spacing Tool (FAST) in the Center-TRACON Automation System (CTAS) developed at NASA-Ames Research Center. FAST simulations have been conducted over three years involving full-proficiency, level five air traffic controllers from around the United States. From these simulations an algorithm, called Spatial Constraint Satisfaction, has been designed, coded, and tested, and will soon begin field evaluation at the Dallas-Fort Worth and Denver International airport facilities. The purpose of this new design is to show that the generation of efficient and conflict-free aircraft arrival plans at the runway does not guarantee an operationally acceptable arrival plan upstream from the runway; information encompassing the entire arrival airspace must be used in order to create an acceptable aircraft arrival plan. This new design includes functions available previously, but additionally includes necessary representations of controller preferences and workload and operationally required amounts of extra separation, and integrates aircraft conflict resolution. As a result, the Spatial Constraint Satisfaction algorithm produces an optimized aircraft arrival plan that is more acceptable in terms of arrival procedures and air traffic controller workload. This paper discusses current Air Traffic Control arrival planning procedures, previous work in this field, the design of the Spatial Constraint Satisfaction algorithm, and the results of recent evaluations of the algorithm.
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
Hard processes in hadronic interactions
Satz, H.; Wang, X.N.
1995-07-01
Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason - and in fact its original trigger - is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes to be considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour it leads to at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time and work to it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley.
On heterotic model constraints
NASA Astrophysics Data System (ADS)
Bouchard, Vincent; Donagi, Ron
2008-08-01
The constraints imposed on heterotic compactifications by global consistency and phenomenology seem to be very finely balanced. We show that weakening these constraints, as was proposed in some recent works, is likely to lead to frivolous results. In particular, we construct an infinite set of such frivolous models having precisely the massless spectrum of the MSSM and other quasi-realistic features. Only one model in this infinite collection (the one constructed in [8]) is globally consistent and supersymmetric. The others might be interpreted as being anomalous, or as non-supersymmetric models, or as local models that cannot be embedded in a global one. We also show that the strongly coupled model of [8] can be modified to a perturbative solution with stable SU(4) or SU(5) bundles in the hidden sector. We finally propose a detailed exploration of heterotic vacua involving bundles on Calabi-Yau threefolds with Z_6 Wilson lines; we obtain many more frivolous solutions, but none that are globally consistent and supersymmetric at the string scale.
Symbolic Constraint Maintenance Grid
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Version 3.1 of Symbolic Constraint Maintenance Grid (SCMG) is a software system that provides a general conceptual framework for utilizing pre-existing programming techniques to perform symbolic transformations of data. SCMG also provides a language (and an associated communication method and protocol) for representing constraints on the original non-symbolic data. SCMG provides a facility for exchanging information between numeric and symbolic components without knowing the details of the components themselves. In essence, it integrates symbolic software tools (for diagnosis, prognosis, and planning) with non-artificial-intelligence software. SCMG executes a process of symbolic summarization and monitoring of continuous time series data that are being abstractly represented as symbolic templates of information exchange. This summarization process enables such symbolic-reasoning computing systems as artificial-intelligence planning systems to evaluate the significance and effects of channels of data more efficiently than would otherwise be possible. As a result of the increased efficiency in representation, reasoning software can monitor more channels and is thus able to perform monitoring and control functions more effectively.
A quantitative model for interpreting nanometer scale hardness measurements of thin films
Poisl, W.H.; Fabes, B.D.; Oliver, W.C.
1993-09-01
A model was developed to determine the hardness of thin films from hardness versus depth curves, given film thickness and substrate hardness. The model is developed by dividing the measured hardness into film and substrate contributions based on the projected areas of both the film and substrate under the indenter. The model incorporates constraints on the deformation of the film by the surrounding material in the film, the substrate, and friction at the indenter/film and film/substrate interfaces. These constraints increase the pressure that the film can withstand and account for the increase in measured hardness as the indenter approaches the substrate. The model is evaluated by fitting the predicted hardness versus depth curves to data obtained from titanium and Ta2O5 films of varying thicknesses on sapphire substrates. The model is also able to describe experimental data for Ta2O5 films on sapphire with a carbon layer between the film and the substrate by a reduction in the interfacial strength from that obtained for a film without an interfacial carbon layer.
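The projected-area partition described above can be illustrated with a toy rule-of-mixtures calculation. This is only a sketch of the area-weighting idea, not the paper's model; the quadratic film-area fraction below is an assumed geometric scaling, and the numbers are made up:

```python
def composite_hardness(h_film, h_sub, depth, t_film):
    """Illustrative area-weighted hardness mix:
    H = Hf * (Af/A) + Hs * (As/A),
    where the film's share of the projected contact area is taken to
    fall off as (t_film/depth)**2 once the indent passes the film."""
    if depth <= t_film:
        frac_film = 1.0                      # indenter still entirely in the film
    else:
        frac_film = (t_film / depth) ** 2    # assumed geometric scaling
    return frac_film * h_film + (1.0 - frac_film) * h_sub

# Soft film (5 GPa) on a hard substrate (20 GPa), indented to twice the
# film thickness: measured hardness rises toward the substrate value.
print(composite_hardness(h_film=5.0, h_sub=20.0, depth=2.0, t_film=1.0))  # 16.25
```

The qualitative behaviour matches the abstract: as the indenter approaches and penetrates the substrate, the substrate term dominates and the apparent hardness shifts away from the intrinsic film value.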
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called ω-CDBT, which shares the merits, and overcomes the weaknesses, of both decomposition and search approaches.
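The ω-CDBT algorithm itself is not reproduced in the abstract, but the baseline it improves on, chronological backtracking over a binary constraint graph, can be sketched as follows (variable names and the graph-colouring example are illustrative):

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Plain chronological backtracking for a binary CSP.
    constraints: dict mapping (x, y) -> predicate(value_x, value_y)."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # check every constraint arc between var and already-assigned variables
        ok = all(pred(value, assignment[other])
                 for (x, other), pred in constraints.items()
                 if x == var and other in assignment)
        ok = ok and all(pred(assignment[other], value)
                        for (other, y), pred in constraints.items()
                        if y == var and other in assignment)
        if ok:
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]   # undo and try the next value
    return None

# Tiny graph-colouring instance: adjacent nodes must get different colours
neq = lambda a, b: a != b
sol = backtrack(["A", "B", "C"],
                {v: ["red", "green"] for v in "ABC"},
                {("A", "B"): neq, ("B", "C"): neq})
print(sol)  # {'A': 'red', 'B': 'green', 'C': 'red'}
```

Decomposition methods exploit the constraint graph's structure (here, a simple chain) to bound the search; plain backtracking like this ignores it, which is the gap a hybrid such as ω-CDBT targets.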
Wang, Xiang; Huang, Zhitao; Zhou, Yiyu
2012-01-01
Signal of interest (SOI) extraction is a vital issue in communication signal processing. In this paper, we propose two novel iterative algorithms for extracting SOIs from instantaneous mixtures, which explores the spatial constraint corresponding to the Directions of Arrival (DOAs) of the SOIs as a priori information into the constrained Independent Component Analysis (cICA) framework. The first algorithm utilizes the spatial constraint to form a new constrained optimization problem under the previous cICA framework which requires various user parameters, i.e., Lagrange parameter and threshold measuring the accuracy degree of the spatial constraint, while the second algorithm incorporates the spatial constraints to select specific initialization of extracting vectors. The major difference between the two novel algorithms is that the former incorporates the prior information into the learning process of the iterative algorithm and the latter utilizes the prior information to select the specific initialization vector. Therefore, no extra parameters are necessary in the learning process, which makes the algorithm simpler and more reliable and helps to improve the speed of extraction. Meanwhile, the convergence condition for the spatial constraints is analyzed. Compared with the conventional techniques, i.e., MVDR, numerical simulation results demonstrate the effectiveness, robustness and higher performance of the proposed algorithms. PMID:23012531
Self-organization and clustering algorithms
NASA Technical Reports Server (NTRS)
Bezdek, James C.
1991-01-01
Kohonen's feature maps approach to clustering is often likened to the k-means or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
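For reference, hard c-means (HCM) in its simplest form alternates nearest-centre assignment with centre recomputation. A minimal one-dimensional sketch follows; this is the generic alternating scheme, not Bezdek's full HCM/FCM formulation:

```python
import random

def hard_c_means(points, c, iters=20, seed=0):
    """Hard c-means (k-means) on 1-D data: alternate nearest-centre
    assignment and centre recomputation for a fixed number of sweeps."""
    rng = random.Random(seed)
    centres = rng.sample(points, c)          # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for p in points:
            j = min(range(c), key=lambda k: (p - centres[k]) ** 2)
            clusters[j].append(p)            # hard (0/1) membership
        centres = [sum(cl) / len(cl) if cl else centres[k]
                   for k, cl in enumerate(clusters)]
    return sorted(centres)

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
print(hard_c_means(data, 2))  # ~[1.0, 5.0]
```

The fuzzy variant (FCM) replaces the hard 0/1 membership with graded memberships, which is the axis along which the paper compares these algorithms to Kohonen's self-organizing maps.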
Nanomechanics of hard films on compliant substrates.
Reedy, Earl David, Jr.; Emerson, John Allen; Bahr, David F.; Moody, Neville Reid; Zhou, Xiao Wang; Hales, Lucas; Adams, David Price; Yeager, John; Nguyen, Thao D.; Corona, Edmundo; Kennedy, Marian S.; Cordill, Megan J.
2009-09-01
Development of flexible thin film systems for biomedical, homeland security and environmental sensing applications has increased dramatically in recent years [1,2,3,4]. These systems typically combine traditional semiconductor technology with new flexible substrates, allowing for both the high electron mobility of semiconductors and the flexibility of polymers. The devices have the ability to be easily integrated into components and show promise for advanced design concepts, ranging from innovative microelectronics to MEMS and NEMS devices. These devices often contain layers of thin polymer, ceramic and metallic films where differing properties can lead to large residual stresses [5]. As long as the films remain substrate-bonded, they may deform far beyond their freestanding counterparts. Once debonded, substrate constraint disappears, leading to film failure, where compressive stresses can lead to wrinkling, delamination, and buckling [6,7,8] while tensile stresses can lead to film fracture and decohesion [9,10,11]. In all cases, performance depends on film adhesion. Experimentally it is difficult to measure adhesion. It is often studied using tape [12], pull off [13,14,15], and peel tests [16,17]. More recent techniques for measuring adhesion include scratch testing [18,19,20,21], four point bending [22,23,24], indentation [25,26,27], spontaneous blisters [28,29] and stressed overlayers [7,26,30,31,32,33]. Nevertheless, sample design and test techniques must be tailored for each system. There is a large body of elastic thin film fracture and elastic contact mechanics solutions for elastic films on rigid substrates in the published literature [5,7,34,35,36]. More recent work has extended these solutions to films on compliant substrates and shows that increasing compliance markedly changes fracture energies compared with rigid elastic solution results [37,38]. However, the introduction of inelastic substrate response significantly complicates the problem [10,39,40]. As
A global approach to kinematic path planning to robots with holonomic and nonholonomic constraints
NASA Technical Reports Server (NTRS)
Divelbiss, Adam; Seereeram, Sanjeev; Wen, John T.
1993-01-01
Robots in applications may be subject to holonomic or nonholonomic constraints. Examples of holonomic constraints include a manipulator constrained through the contact with the environment, e.g., inserting a part, turning a crank, etc., and multiple manipulators constrained through a common payload. Examples of nonholonomic constraints include no-slip constraints on mobile robot wheels, local normal rotation constraints for soft finger and rolling contacts in grasping, and conservation of angular momentum of in-orbit space robots. The above examples all involve equality constraints; in applications, there are usually additional inequality constraints such as robot joint limits, self collision and environment collision avoidance constraints, steering angle constraints in mobile robots, etc. The problem of finding a kinematically feasible path that satisfies a given set of holonomic and nonholonomic constraints, of both equality and inequality types is addressed. The path planning problem is first posed as a finite time nonlinear control problem. This problem is subsequently transformed to a static root finding problem in an augmented space which can then be iteratively solved. The algorithm has shown promising results in planning feasible paths for redundant arms satisfying Cartesian path following and goal endpoint specifications, and mobile vehicles with multiple trailers. In contrast to local approaches, this algorithm is less prone to problems such as singularities and local minima.
Relative constraints and evolution
NASA Astrophysics Data System (ADS)
Ochoa, Juan G. Diaz
2014-03-01
Several mathematical models of evolving systems assume that changes in the micro-states are constrained to the search of an optimal value in a local or global objective function. However, the concept of evolution requires a continuous change in the environment and species, making it difficult to define absolute optimal values in objective functions. In this paper, we define constraints that are not absolute but relative to local micro-states, introducing a rupture in the invariance of the phase space of the system. This conceptual basis is useful to define alternative mathematical models for biological (or in general complex) evolving systems. We illustrate this concept with a modified Ising model, which can be useful to understand and model problems like the somatic evolution of cancer.
Ultrasonic material hardness depth measurement
Good, Morris S.; Schuster, George J.; Skorpik, James R.
1997-01-01
The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part.
Overview-hard rock penetration
Dunn, J.C.
1993-01-01
The Hard Rock Penetration program is developing technology to reduce the costs of drilling and completing geothermal wells. Current projects include lost circulation control, borehole instrumentation, acoustic telemetry, slimhole drilling, and geothermal heat pumps. A new project to improve synthetic diamond drill bits for hard rock drilling was initiated during the year. Accomplishments during the year include completion of important acoustic telemetry tests in the Long Valley Well. These tests produced the first set of reliable, repeatable data in a drill hole. The results indicate the promise of acoustic transmission through drill pipe for great distances without repeaters. The rolling float meter for measuring drilling fluid outflow was duplicated and sent to six different companies for evaluation in the field. A new slimhole spectral gamma tool for operation at temperatures up to 300 C was fabricated and evaluated in the laboratory. Slimhole drilling for exploration and reservoir characterization was begun with several projects jointly completed with industry.
Ultrasonic material hardness depth measurement
Good, M.S.; Schuster, G.J.; Skorpik, J.R.
1997-07-08
The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part. 12 figs.
Evolutionary constraints or opportunities?
Sharov, Alexei A
2014-04-22
Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term "constraint" has negative connotations, I use the term "regulated variation" to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch "on" or "off" preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection).
Neural constraints on learning
Sadtler, Patrick T.; Quick, Kristin M.; Golub, Matthew D.; Chase, Steven M.; Ryu, Stephen I.; Tyler-Kabara, Elizabeth C.; Yu, Byron M.; Batista, Aaron P.
2014-01-01
Motor, sensory, and cognitive learning require networks of neurons to generate new activity patterns. Because some behaviors are easier to learn than others [1,2], we wondered if some neural activity patterns are easier to generate than others. We asked whether the existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define the constraint. We employed a closed-loop intracortical brain-computer interface (BCI) learning paradigm in which Rhesus monkeys controlled a computer cursor by modulating neural activity patterns in primary motor cortex. Using the BCI paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. These patterns comprise a low-dimensional space (termed the intrinsic manifold, or IM) within the high-dimensional neural firing rate space. They presumably reflect constraints imposed by the underlying neural circuitry. We found that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the IM. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the IM. This result suggests that the existing structure of a network can shape learning. On the timescale of hours, it appears to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess [3,4]. PMID:25164754
ϑ-SHAKE: An extension to SHAKE for the explicit treatment of angular constraints
NASA Astrophysics Data System (ADS)
Gonnet, Pedro; Walther, Jens H.; Koumoutsakos, Petros
2009-03-01
This paper presents ϑ-SHAKE, an extension to SHAKE, an algorithm for the resolution of holonomic constraints in molecular dynamics simulations, which allows for the explicit treatment of angular constraints. We show that this treatment is more efficient than the use of fictitious bonds, significantly reducing the overlap between the individual constraints and thus accelerating convergence. The new algorithm is compared with SHAKE, M-SHAKE, the matrix-based approach described by Ciccotti and Ryckaert and P-SHAKE for rigid water and octane.
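The abstract above builds on the classic SHAKE iteration for resolving holonomic constraints. A minimal, self-contained sketch of plain SHAKE for bond-length constraints (not the ϑ-SHAKE angular extension; unit masses and the function signature are our illustrative choices) might look like:

```python
import numpy as np

def shake(pos, pos_prev, bonds, lengths, tol=1e-10, max_iter=500):
    """Iteratively project positions onto bond-length constraints (SHAKE).

    pos      : (N, 3) unconstrained positions after a time step
    pos_prev : (N, 3) positions at the previous step (already satisfy constraints)
    bonds    : list of (i, j) particle-index pairs
    lengths  : target bond length for each pair
    Equal unit masses are assumed for simplicity.
    """
    pos = pos.copy()
    for _ in range(max_iter):
        converged = True
        for (i, j), d0 in zip(bonds, lengths):
            r = pos[i] - pos[j]
            diff = r @ r - d0 * d0          # constraint violation sigma = r^2 - d0^2
            if abs(diff) > tol:
                converged = False
                # Lagrange-multiplier correction directed along the old bond vector
                r_prev = pos_prev[i] - pos_prev[j]
                g = diff / (4.0 * (r @ r_prev))
                pos[i] -= g * r_prev
                pos[j] += g * r_prev
        if converged:
            return pos
    raise RuntimeError("SHAKE did not converge")
```

Each pass sweeps over the constraints one at a time, which is why strongly overlapping constraints (the situation ϑ-SHAKE targets with explicit angle terms) slow convergence.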
Microwave assisted hard rock cutting
Lindroth, David P.; Morrell, Roger J.; Blair, James R.
1991-01-01
An apparatus for the sequential fracturing and cutting of a subsurface volume of hard rock (102) in the strata (101) of a mining environment (100) by subjecting the volume of rock to a beam (25) of microwave energy to fracture the subsurface volume of rock by differential expansion, and then bringing the cutting edge (52) of a piece of conventional mining machinery (50) into contact with the fractured rock (102).
Results on hard diffractive production
Goulianos, K.
1995-07-01
The results of experiments at hadron colliders probing the structure of the pomeron through hard diffraction are reviewed. Some results on deep inelastic diffractive scattering obtained at HERA are also discussed and placed in perspective. By using a properly normalized pomeron flux factor in single diffraction dissociation, as dictated by unitarity, the pomeron emerges as a combination of valence quark and gluon color singlets in a ratio suggested by asymptopia.
Sahoo, Pradyumna Kumar; Mandal, Palash Kumar; Ghosh, Saradindu
2014-01-01
Schwannomas are benign encapsulated perineural tumors. The head and neck region is the most common site. Intraoral origin is seen in only 1% of cases, tongue being the most common site; its location in the palate is rare. We report a case of hard-palate schwannoma with bony erosion which was immunohistochemically confirmed. The tumor was excised completely intraorally. After two months of follow-up, the defect was found to be completely covered with palatal mucosa. PMID:25298716
Naeem, Muhammad; Pareek, Udit; Lee, Daniel C; Anpalagan, Alagan
2013-01-01
Due to the rapid increase in the usage and demand of wireless sensor networks (WSN), the limited frequency spectrum available for WSN applications will be extremely crowded in the near future. More sensor devices also mean more recharging/replacement of batteries, which will cause significant impact on the global carbon footprint. In this paper, we propose a relay-assisted cognitive radio sensor network (CRSN) that allocates communication resources in an environmentally friendly manner. We use shared band amplify and forward relaying for cooperative communication in the proposed CRSN. We present a multi-objective optimization architecture for resource allocation in a green cooperative cognitive radio sensor network (GC-CRSN). The proposed multi-objective framework jointly performs relay assignment and power allocation in GC-CRSN, while optimizing two conflicting objectives. The first objective is to maximize the total throughput, and the second objective is to minimize the total transmission power of CRSN. The proposed relay assignment and power allocation problem is a non-convex mixed-integer non-linear optimization problem (NC-MINLP), which is generally non-deterministic polynomial-time (NP)-hard. We introduce a hybrid heuristic algorithm for this problem. The hybrid heuristic includes an estimation-of-distribution algorithm (EDA) for performing power allocation and iterative greedy schemes for constraint satisfaction and relay assignment. We analyze the throughput and power consumption tradeoff in GC-CRSN. A detailed analysis of the performance of the proposed algorithm is presented with the simulation results. PMID:23584119
Low dose hard x-ray contact microscopy assisted by a photoelectric conversion layer
Gomella, Andrew; Martin, Eric W.; Lynch, Susanna K.; Wen, Han; Morgan, Nicole Y.
2013-04-15
Hard x-ray contact microscopy provides images of dense samples at resolutions of tens of nanometers. However, the required beam intensity can only be delivered by synchrotron sources. We report on the use of a gold photoelectric conversion layer to lower the exposure dose by a factor of 40 to 50, allowing hard x-ray contact microscopy to be performed with a compact x-ray tube. We demonstrate the method in imaging the transmission pattern of a type of hard x-ray grating that cannot be fitted into conventional x-ray microscopes due to its size and shape. Generally the method is easy to implement and can record images of samples in the hard x-ray region over a large area in a single exposure, without some of the geometric constraints associated with x-ray microscopes based on zone-plate or other magnifying optics.
Velocity and energy distributions in microcanonical ensembles of hard spheres
NASA Astrophysics Data System (ADS)
Scalas, Enrico; Gabriel, Adrian T.; Martin, Edgar; Germano, Guido
2015-08-01
In a microcanonical ensemble (constant NVE, hard reflecting walls) and in a molecular dynamics ensemble (constant NVEPG, periodic boundary conditions) with a number N of smooth elastic hard spheres in a d-dimensional volume V having a total energy E, a total momentum P, and an overall center of mass position G, the individual velocity components, velocity moduli, and energies have transformed beta distributions with different arguments and shape parameters depending on d, N, E, the boundary conditions, and possible symmetries in the initial conditions. This can be shown by marginalizing the joint distribution of individual energies, which is a symmetric Dirichlet distribution. In the thermodynamic limit the beta distributions converge to gamma distributions with different arguments and shape or scale parameters, corresponding respectively to the Gaussian, i.e., Maxwell-Boltzmann, Maxwell, and Boltzmann or Boltzmann-Gibbs distribution. These analytical results agree with molecular dynamics and Monte Carlo simulations with different numbers of hard disks or spheres and hard reflecting walls or periodic boundary conditions. The agreement is perfect with our Monte Carlo algorithm, which acts only on velocities independently of positions with the collision versor sampled uniformly on a unit half sphere in d dimensions, while slight deviations appear with our molecular dynamics simulations for the smallest values of N.
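The marginalization argument in this abstract can be illustrated numerically: if the individual energy fractions are jointly symmetric-Dirichlet distributed, each marginal is a Beta distribution. The particle number and shape parameter below (alpha = 1.5, i.e., d/2 for d = 3) are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 8, 1.5  # N particles; shape d/2 with d = 3 (illustrative)

# Sample joint energy fractions e_i / E from a symmetric Dirichlet and keep
# one coordinate; its marginal should be Beta(alpha, (N - 1) * alpha).
frac = rng.dirichlet(np.full(N, alpha), size=200_000)[:, 0]

# Compare empirical moments with the Beta(a, b) mean and variance.
a, b = alpha, (N - 1) * alpha
mean_th = a / (a + b)
var_th = a * b / ((a + b) ** 2 * (a + b + 1))
print(frac.mean(), mean_th, frac.var(), var_th)
```

With 200,000 samples the empirical mean and variance match the Beta values to within sampling error, consistent with the Dirichlet-to-Beta marginalization the abstract relies on.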
Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids
Donev, A; Alder, B J; Garcia, A L
2008-02-26
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
An information-based neural approach to constraint satisfaction.
Jönsson, H; Söderberg, B
2001-08-01
A novel artificial neural network approach to constraint satisfaction problems is presented. Based on information-theoretical considerations, it differs from a conventional mean-field approach in the form of the resulting free energy. The method, implemented as an annealing algorithm, is numerically explored on a testbed of K-SAT problems. The performance shows a dramatic improvement over that of a conventional mean-field approach and is comparable to that of a state-of-the-art dedicated heuristic (GSAT+walk). The real strength of the method, however, lies in its generality. With minor modifications, it is applicable to arbitrary types of discrete constraint satisfaction problems. PMID:11506672
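The dedicated heuristic the abstract uses for comparison, GSAT+walk, is a simple stochastic local search. A minimal WalkSAT-style sketch (the function name, parameters, and DIMACS-style literal convention are our illustrative choices, not the paper's) is:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10_000, seed=0):
    """Minimal GSAT+walk-style local search for SAT.

    clauses: list of clauses, each a list of nonzero ints; literal k means
    variable abs(k), negated if k < 0 (DIMACS convention).
    Returns a satisfying assignment dict, or None if max_flips is exhausted.
    """
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:
            # random-walk move: flip a random variable of the unsat clause
            var = abs(rng.choice(clause))
        else:
            # greedy move: flip the variable that leaves fewest broken clauses
            def broken(v):
                assign[v] = not assign[v]
                n = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return n
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None
```

The annealed mean-field method of the paper replaces these discrete flips with continuous, information-theoretically motivated updates, but both operate on the same clause-satisfaction objective.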
NASA Astrophysics Data System (ADS)
Evertz, Hans Gerd
1998-03-01
Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond-mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.
Improving the Held and Karp Approach with Constraint Programming
NASA Astrophysics Data System (ADS)
Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan
Held and Karp have proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explored two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
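The Held-Karp relaxation referenced here bounds the TSP from below with a minimum 1-tree: a minimum spanning tree over all nodes but one (computable with Prim's algorithm, as in the paper's filtering) plus the two cheapest edges at the remaining node. A plain sketch of that bound, without the Lagrangian node-weight updates or the CP filtering the paper adds, is:

```python
def one_tree_bound(dist):
    """Held-Karp 1-tree lower bound for a symmetric TSP.

    dist: full symmetric distance matrix (list of lists).
    Builds a minimum spanning tree over nodes 1..n-1 with Prim's algorithm,
    then closes the 1-tree with the two cheapest edges incident to node 0.
    """
    n = len(dist)
    # Prim's algorithm over nodes 1..n-1
    in_tree = {1}
    best = {v: dist[1][v] for v in range(2, n)}
    mst = 0.0
    while len(in_tree) < n - 1:
        v = min(best, key=best.get)       # cheapest frontier node
        mst += best.pop(v)
        in_tree.add(v)
        for u in list(best):              # relax remaining connection costs
            if dist[v][u] < best[u]:
                best[u] = dist[v][u]
    # every tour is a 1-tree, so this total is a valid lower bound
    e = sorted(dist[0][v] for v in range(1, n))
    return mst + e[0] + e[1]
```

For the four corners of a unit square the bound equals 4, which is also the optimal tour length, illustrating how tight the relaxation can be on well-structured instances.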
NASA Astrophysics Data System (ADS)
Santos, A.; Yuste, S. B.; López de Haro, M.
The composition-independent virial coefficients of a d-dimensional binary mixture of (additive) hard hyperspheres following from a recent proposal for the equation of state of the mixture (SANTOS, A., YUSTE, S. B., and LÓPEZ DE HARO, M., 1999, Molec. Phys., 96, 1) are examined. Good agreement between theoretical estimates and available exact or numerical results is found for d = 2, 3, 4 and 5, except for mixtures whose components are very disparate in size. A slight modification that remedies this deficiency is introduced and the resummation of the associated virial series is carried out, leading to a new proposal for the equation of state. The case of binary hard sphere mixtures (d = 3) is analysed in some detail.
Applicability of Dynamic Facilitation Theory to Binary Hard Disk Systems
NASA Astrophysics Data System (ADS)
Isobe, Masaharu; Keys, Aaron S.; Chandler, David; Garrahan, Juan P.
2016-09-01
We numerically investigate the applicability of dynamic facilitation (DF) theory for glass-forming binary hard disk systems where supercompression is controlled by pressure. By using novel efficient algorithms for hard disks, we are able to generate equilibrium supercompressed states in an additive nonequimolar binary mixture, where microcrystallization and size segregation do not emerge at high average packing fractions. Above an onset pressure where collective heterogeneous relaxation sets in, we find that relaxation times are well described by a "parabolic law" with pressure. We identify excitations, or soft spots, that give rise to structural relaxation and find that they are spatially localized, their average concentration decays exponentially with pressure, and their associated energy scale is logarithmic in the excitation size. These observations are consistent with the predictions of DF generalized to systems controlled by pressure rather than temperature.
NP-hardness of decoding quantum error-correction codes
Hsieh, Min-Hsiu; Le Gall, Francois
2011-05-15
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2014 CFR
2014-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2013 CFR
2013-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2012 CFR
2012-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Hard seed. 201.30 Section 201.30 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Vegetable Seeds § 201.30 Hard seed. The label shall show the percentage of hard seed,...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Hard seed. 201.30 Section 201.30 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Vegetable Seeds § 201.30 Hard seed. The label shall show the percentage of hard seed,...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Hard seed. 201.21 Section 201.21 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Agricultural Seeds § 201.21 Hard seed. The label shall show the percentage of hard...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Hard seed. 201.21 Section 201.21 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Agricultural Seeds § 201.21 Hard seed. The label shall show the percentage of hard...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Hard seed. 201.30 Section 201.30 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Vegetable Seeds § 201.30 Hard seed. The label shall show the percentage of hard seed,...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Hard seed. 201.21 Section 201.21 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Agricultural Seeds § 201.21 Hard seed. The label shall show the percentage of hard...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Hard seed. 201.30 Section 201.30 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Vegetable Seeds § 201.30 Hard seed. The label shall show the percentage of hard seed,...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Hard seed. 201.21 Section 201.21 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Agricultural Seeds § 201.21 Hard seed. The label shall show the percentage of hard...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Hard seed. 201.21 Section 201.21 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Agricultural Seeds § 201.21 Hard seed. The label shall show the percentage of hard...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Hard seed. 201.30 Section 201.30 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Vegetable Seeds § 201.30 Hard seed. The label shall show the percentage of hard seed,...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hard hats. 56.15002 Section 56.15002 Mineral... HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Personal Protection § 56.15002 Hard hats. All persons shall wear suitable hard hats when in or around a mine or plant where falling...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hard hats. 57.15002 Section 57.15002 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Underground § 57.15002 Hard hats. All persons shall wear suitable hard hats when in or around a mine or...
Credit Constraints for Higher Education
ERIC Educational Resources Information Center
Solis, Alex
2012-01-01
This paper exploits a natural experiment that produces exogenous variation in credit access to determine the effect on college enrollment. The paper assesses how important credit constraints are in explaining the gap in college enrollment by family income, and what the gap would be if credit constraints were eliminated. Progress in college and dropout…
Fixed Costs and Hours Constraints
ERIC Educational Resources Information Center
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
The Hard Problem of Cooperation
Eriksson, Kimmo; Strimling, Pontus
2012-01-01
Based on individual variation in cooperative inclinations, we define the “hard problem of cooperation” as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior. PMID:22792282
Making Nozzles From Hard Materials
NASA Technical Reports Server (NTRS)
Wells, Dennis L.
1989-01-01
Proposed method of electrical-discharge machining (EDM) cuts hard materials like silicon carbide into smoothly contoured parts. Concept developed for fabrication of interior and exterior surfaces and internal cooling channels of convergent/divergent nozzles. EDM wire at skew angle theta creates hyperboloidal cavity in tube. Wire offset from axis of tube and from axis of rotation by distance equal to throat radius. Maintaining the same skew angle as that used to cut the hyperboloidal inner surface, but using a larger offset, a cooling channel is cut in material near the inner hyperboloidal surface.
Radiation Hardness Assurance (RHA) Guideline
NASA Technical Reports Server (NTRS)
Campola, Michael J.
2016-01-01
Radiation Hardness Assurance (RHA) consists of all activities undertaken to ensure that the electronics and materials of a space system perform to their design specifications after exposure to the mission space environment. The subset of interest for NEPP and the REAG is EEE parts. It is important to note that all of these undertakings form a feedback loop and require constant iteration and updating throughout the mission life. More detail can be found in the reference materials on applicable test data for usage on parts.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-02-28
We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-08-30
We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
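The core of event-driven hard-particle MD, the first algorithm surveyed above, is predicting the next collision event analytically instead of stepping time uniformly. A minimal sketch of the standard pairwise collision-time computation for smooth hard spheres (a textbook kinematic formula, not code from this report) is:

```python
import math

def collision_time(r1, v1, r2, v2, sigma):
    """Time until two hard spheres with contact distance sigma collide,
    or math.inf if they never do (particles move ballistically between events)."""
    dr = [a - b for a, b in zip(r1, r2)]
    dv = [a - b for a, b in zip(v1, v2)]
    b = sum(x * y for x, y in zip(dr, dv))   # dr . dv
    if b >= 0:
        return math.inf                      # receding: no collision
    dv2 = sum(x * x for x in dv)
    dr2 = sum(x * x for x in dr)
    disc = b * b - dv2 * (dr2 - sigma * sigma)
    if disc < 0:
        return math.inf                      # trajectories miss each other
    # smaller root of |dr + t*dv|^2 = sigma^2: first moment of contact
    return (-b - math.sqrt(disc)) / dv2
```

An event-driven scheduler keeps these predicted times in a priority queue and advances the system from event to event, which is what makes asynchronous handling of many particles possible.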
A Constraint Integer Programming Approach for Resource-Constrained Project Scheduling
NASA Astrophysics Data System (ADS)
Berthold, Timo; Heinz, Stefan; Lübbecke, Marco E.; Möhring, Rolf H.; Schulz, Jens
We propose a hybrid approach for solving the resource-constrained project scheduling problem, an extremely hard-to-solve combinatorial optimization problem of practical relevance. Jobs have to be scheduled on (renewable) resources subject to precedence constraints such that the resource capacities are never exceeded and the latest completion time of all jobs is minimized.
Boonvisut, Pasu; Cavusoglu, M. Cenk
2014-01-01
Robotic motion planning algorithms for manipulation of deformable objects, such as in medical robotics applications, rely on accurate estimations of object deformations that occur during manipulation. An estimation of the tissue response (for off-line planning or real-time on-line re-planning), in turn, requires knowledge of both object constitutive parameters and boundary constraints. In this paper, a novel algorithm for estimating boundary constraints of deformable objects from robotic manipulation data is presented. The proposed algorithm uses tissue deformation data collected with a vision system, and employs a multi-stage hill climbing procedure to estimate the boundary constraints of the object. An active exploration technique, which uses an information maximization approach, is also proposed to extend the identification algorithm. The effects of uncertainties on the proposed methods are analyzed in simulation. The results of experimental evaluation of the methods are also presented. PMID:25684836
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Unveiling the magnetic nature of new hard X-ray emitting CVs
NASA Astrophysics Data System (ADS)
de Martino, Domitilla
2007-10-01
An unexpectedly large fraction of Cataclysmic Variables (CVs) has been identified as optical counterparts of hard X-ray sources in the INTEGRAL and Swift surveys. Most of them belong to the magnetic class of the Intermediate Polars (IPs), suggesting a potentially important role in the study of galactic populations of X-ray sources. To date many new CVs still need to be properly classified. Here we propose to observe 6 new hard X-ray CV systems to detect X-ray pulsations at the white dwarf rotational period which, together with their spectral properties, will provide firm constraints on their suspected magnetic nature.
Isolated hard photon radiation in multijet production at LEP
NASA Astrophysics Data System (ADS)
Glover, E. W. N.; Stirling, W. J.
1992-11-01
We present a detailed, quantitative analysis of isolated hard photon radiation in multiparton final states at LEP energies. Since a perfectly isolated photon is not an infrared-safe quantity, different definitions of an "isolated photon" can influence the relative production rates for photon-plus-jets events. We argue that there is no obvious discrepancy between recent experimental measurements and the theoretical predictions, and compute the next-to-leading-order corrections to photon + 1 jet and photon + 2 jet production, using a clustering algorithm more closely matched to the experimental procedure.
Fifth to eleventh virial coefficients of hard spheres
NASA Astrophysics Data System (ADS)
Schultz, Andrew J.; Kofke, David A.
2014-08-01
Virial coefficients Bn of three-dimensional hard spheres are reported for n=5 to 11, with precision exceeding that presently available in the literature. Calculations are performed using the recursive method due to Wheatley, and a binning approach is proposed to allow more flexibility in where computational effort is directed in the calculations. We highlight the difficulty as a general measure that quantifies performance of an algorithm that computes a stochastic average and show how it can be used as the basis for optimizing such calculations.
Hard and Soft Safety Verifications
NASA Technical Reports Server (NTRS)
Wetherholt, Jon; Anderson, Brenda
2012-01-01
The purpose of this paper is to examine the differences between, and the effects of, hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is a datum which demonstrates how a safety control is enacted. An example of this is relief valve testing. A soft safety verification is something which is usually described as nice to have but is not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings from Shuttle flight STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. Generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examination for erosion or wear of the casings and nozzle. Loss of the SRBs and associated data did not delay the launch of the next Shuttle flight.
Hardness correlation for uranium and its alloys
Humphreys, D L; Romig, Jr, A D
1983-03-01
The hardness of 16 different uranium-titanium (U-Ti) alloys was measured on six (6) different hardness scales (RA, RB, RC, RD, Knoop, and Vickers). The alloys contained between 0.75 and 2.0 wt % Ti. All of the alloys were solutionized (850 °C, 1 h) and ice-water quenched to produce a supersaturated martensitic phase. A range of hardnesses was obtained by aging the samples for various times and temperatures. The correlation of the various hardness scales was shown to be virtually identical to the hardness-scale correlation for steels. For more accurate conversion from one hardness scale to another, least-squares curve fits were determined for the various hardness-scale correlations. 34 figures, 5 tables.
Evolutionary constraints or opportunities?
Sharov, Alexei A.
2014-01-01
Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term “constraint” has negative connotations, I use the term “regulated variation” to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch “on” or “off” preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). PMID:24769155
Russian Doll Search for solving Constraint Optimization problems
Verfaillie, G.; Lemaitre, M.
1996-12-31
Although the Constraint Satisfaction framework has been extended to deal with Constraint Optimization problems, optimization appears far more complex than satisfaction. One of the causes of the inefficiency of complete tree search methods, like Depth First Branch and Bound, lies in the poor quality of the lower bound on the global valuation of a partial assignment, even when using Forward Checking techniques. In this paper, we introduce the Russian Doll Search algorithm, which replaces one search by n successive searches on nested subproblems (n being the number of problem variables), records the results of each search, and uses them later, when solving larger subproblems, to improve the lower bound on the global valuation of any partial assignment. On small random problems and on large real scheduling problems, this algorithm yields surprisingly good results, which greatly improve as the problems get more constrained and the bandwidth of the variable ordering used diminishes.
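The nested-subproblem idea can be sketched on a toy weighted binary CSP. Everything below (the function name, the encoding of costs as unary/binary penalty tables, the assumption of nonnegative costs) is my own illustrative framing, not code from the paper:

```python
def rds_solve(domains, unary, binary):
    """Russian Doll Search sketch for a weighted binary CSP with
    nonnegative costs.  unary[i][v] is the cost of x_i = v; binary[(i, j)]
    maps value pairs to costs (i < j).  The suffix subproblem over
    variables i..n-1 is solved for i = n-1 down to 0, and each smaller
    suffix's optimum serves as a lower bound when branching on a larger one."""
    n = len(domains)
    opt = [0] * (n + 1)   # opt[k] = optimal cost of the suffix starting at k

    def branch(i, k, assignment, acc, best):
        if k == n:
            return min(best, acc)
        if acc + opt[k] >= best:      # Russian Doll lower bound: prune
            return best
        for v in domains[k]:
            assignment[k] = v
            step = unary[k][v]        # cost of x_k = v plus constraints to
            for j in range(i, k):     # the already-assigned suffix variables
                step += binary.get((j, k), {}).get((assignment[j], v), 0)
            best = branch(i, k + 1, assignment, acc + step, best)
        return best

    for i in range(n - 1, -1, -1):
        opt[i] = branch(i, i, {}, 0, float('inf'))
    return opt[0]

# toy instance: three 0/1 variables
domains = [[0, 1], [0, 1], [0, 1]]
unary = [{0: 1, 1: 0}, {0: 0, 1: 2}, {0: 1, 1: 1}]
binary = {(0, 1): {(0, 0): 3, (1, 1): 1}, (1, 2): {(0, 1): 2}}
best_cost = rds_solve(domains, unary, binary)
```

The bound is valid because the suffix optimum ignores only the cross constraints between assigned and unassigned variables, which are nonnegative here; the paper's setting is more general.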
Predictive directional compensator for systems with input constraints.
Haeri, Mohammad; Aalam, Nima
2006-07-01
Nonlinearity caused by actuator constraint plays a destructive role in the overall performance of a control system. A model predictive controller can handle the problem by implementing a constrained optimization algorithm. Due to the iterative nature of the solution, however, this requires high computation power. In the present work we propose a new method to approach the problem by separating the constraint handling from the predictive control job. The input constraint effects are dealt with in a newly defined component called a predictive directional compensator, which works based on the directionality and predictive concepts. Through implementation of the proposed method, the computational requirement is greatly reduced with the least degradation of the closed-loop performance. Meanwhile, a new characteristic matrix has been defined by which directionality of SISO as well as nonminimum phase systems can be determined.
Enhancement of coupled multichannel images using sparsity constraints.
Ramakrishnan, Naveen; Ertin, Emre; Moses, Randolph L
2010-08-01
We consider the problem of joint enhancement of multichannel images with pixel based constraints on the multichannel data. Previous work by Cetin and Karl introduced nonquadratic regularization methods for SAR image enhancement using sparsity enforcing penalty terms. We formulate an optimization problem that jointly enhances complex-valued multichannel images while preserving the cross-channel information, which we include as constraints tying the multichannel images together. We pose this problem as a joint optimization problem with constraints. We first reformulate it as an equivalent (unconstrained) dual problem and develop a numerically-efficient method for solving it. We develop the Dual Descent method, which has low complexity, for solving the joint optimization problem. The algorithm is applied to both an interferometric synthetic aperture radar (IFSAR) problem, in which the relative phase between two complex-valued images indicate height, and to a synthetic multimodal medical image example. PMID:20236892
Federal constraints: earned or unearned?
Chalkley, D T
1977-08-01
The author discusses the evolution of federal constraints on medical, behavioral, and social science research. There has been only one court decision related to behavioral research and none in medical research. The burden of consent procedures can be lightened somewhat by careful consideration of the potential risks and nature of the research; questions are presented that can be used to determine whether constraints apply. The author notes that although there are good reasons for regulations in both behavioral and medical research, the appropriateness of current and proposed constraints is still a matter of debate.
Resource allocation using constraint propagation
NASA Technical Reports Server (NTRS)
Rogers, John S.
1990-01-01
The concept of constraint propagation is discussed. Performance increases are possible with careful application of these constraint mechanisms. The degree of performance increase is related to the interdependence of the different activities' resource usage. Although this method of applying constraints to activities and resources is often beneficial, it is obviously no panacea for the computational woes experienced by dynamic resource allocation and scheduling problems. A combined effort for execution optimization in all areas of the system during development, together with the selection of the appropriate development environment, is still the best method of producing an efficient system.
Introduction to classical mechanics of systems with constraints, part 2
NASA Astrophysics Data System (ADS)
Razumov, A. V.; Solovev, L. D.
Lagrangians whose symmetry transformations include arbitrary functions of time are shown to be, of necessity, degenerate. The Hamiltonian formalism for mechanical systems with degenerate Lagrangians is presented. The algorithm for producing the constraints and the total Hamiltonian is described. The reduction of a Hamiltonian mechanical system to a surface in the extended phase space and the application of Dirac brackets to calculate Poisson brackets of the reduced system are considered.
Applying Motion Constraints Based on Test Data
NASA Technical Reports Server (NTRS)
Burlone, Michael
2014-01-01
MSC ADAMS is simulation software used to analyze multibody dynamics. Using user subroutines, it is possible to apply motion constraints to the rigid bodies so that they match the motion profile collected from test data. This presentation describes the process of taking test data and passing it to ADAMS through user subroutines, using the Morpheus free-flight 4 test as an example of motion data used for this purpose. Morpheus is a prototype lander vehicle built by NASA that serves as a test bed for various experimental technologies (see backup slides for details). MSC ADAMS is used to play back telemetry data (vehicle orientation and position) from each test as the inputs to a 6-DoF general motion constraint (details in backup slides). The playback simulations allow engineers to examine and analyze the flight trajectory as well as observe vehicle motion from any angle and at any playback speed. This facilitates the development of robust and stable control algorithms, increasing reliability and reducing development costs of this developmental engine. The simulation also incorporates a 3D model of the artificial hazard field, allowing engineers to visualize and measure performance of the developmental autonomous landing and hazard avoidance technology. ADAMS is a multibody dynamics solver: it uses forces, constraints, and mass properties to numerically integrate the equations of motion. The ADAMS solver asks the motion subroutine for position, velocity, and acceleration values at various time steps, and those values must be continuous over the whole time domain. Each degree of freedom in the telemetry data can be examined separately; however, linear interpolation of the telemetry data is invalid, since it produces discontinuities in velocity and acceleration.
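The continuity requirement described above can be illustrated by replacing linear interpolation with a cubic Hermite interpolant, whose velocity is continuous across the sample knots. This is a generic one-channel sketch with made-up sample data, not the Morpheus telemetry or the actual ADAMS subroutine interface:

```python
def hermite_eval(ts, ys, t):
    """Evaluate a C1 cubic Hermite interpolant (Catmull-Rom tangents)
    through telemetry samples (ts, ys) at time t.  Unlike piecewise-linear
    playback, both position and velocity are continuous at the knots."""
    n = len(ts)

    def m(i):  # tangent estimates: central differences, one-sided at ends
        if i == 0:
            return (ys[1] - ys[0]) / (ts[1] - ts[0])
        if i == n - 1:
            return (ys[-1] - ys[-2]) / (ts[-1] - ts[-2])
        return (ys[i + 1] - ys[i - 1]) / (ts[i + 1] - ts[i - 1])

    # locate the segment [ts[i], ts[i+1]] containing t
    i = next((k for k in range(n - 1) if t <= ts[k + 1]), n - 2)
    h = ts[i + 1] - ts[i]
    s = (t - ts[i]) / h
    h00, h10 = 2*s**3 - 3*s**2 + 1, s**3 - 2*s**2 + s
    h01, h11 = -2*s**3 + 3*s**2, s**3 - s**2
    return h00*ys[i] + h10*h*m(i) + h01*ys[i + 1] + h11*h*m(i + 1)

# one hypothetical telemetry channel (e.g. a position coordinate)
ts = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.0, 1.0]
```

A C2 scheme (e.g. a natural cubic spline) would additionally make acceleration continuous; the C1 version is enough to show why raw linear interpolation fails the solver's continuity demand.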
Rao, R.; Buescher, K.L.; Hanagandi, V.
1995-12-31
In the optimal plant location and sizing problem, it is desired to optimize a cost function involving plant sizes, locations, and production schedules in the face of supply-demand and plant capacity constraints. We will use simulated annealing (SA) and a genetic algorithm (GA) to solve this problem, and will compare these techniques with respect to computational expense, constraint-handling capabilities, and the quality of the solution obtained in general. Simulated annealing is a combinatorial stochastic optimization technique which has been shown to be effective in obtaining fast suboptimal solutions for computationally hard problems. The technique is especially attractive since solutions are obtained in polynomial time for problems where an exhaustive search for the global optimum would require exponential time. We propose a synergy between the cluster analysis technique, popular in classical stochastic global optimization, and the GA to accomplish global optimization. This synergy minimizes redundant searches around local optima and enhances the capability of the GA to explore new areas in the search space.
Self-accelerating massive gravity: Hidden constraints and characteristics
NASA Astrophysics Data System (ADS)
Motloch, Pavel; Hu, Wayne; Motohashi, Hayato
2016-05-01
Self-accelerating backgrounds in massive gravity provide an arena to explore the Cauchy problem for derivatively coupled fields that obey complex constraints which reduce the phase space degrees of freedom. We present here an algorithm based on the Kronecker form of a matrix pencil that finds all hidden constraints, for example those associated with derivatives of the equations of motion, and characteristic curves for any 1 +1 dimensional system of linear partial differential equations. With the Regge-Wheeler-Zerilli decomposition of metric perturbations into angular momentum and parity states, this technique applies to fully 3 +1 dimensional perturbations of massive gravity around any spherically symmetric self-accelerating background. Five spin modes of the massive graviton propagate once the constraints are imposed: two spin-2 modes with luminal characteristics present in the massless theory as well as two spin-1 modes and one spin-0 mode. Although the new modes all possess the same—typically spacelike—characteristic curves, the spin-1 modes are parabolic while the spin-0 modes are hyperbolic. The joint system, which remains coupled by nonderivative terms, cannot be solved as a simple Cauchy problem from a single noncharacteristic surface. We also illustrate the generality of the algorithm with other cases where derivative constraints reduce the number of propagating degrees of freedom or order of the equations.
An SMP soft classification algorithm for remote sensing
NASA Astrophysics Data System (ADS)
Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.
2014-07-01
This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification, at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.
Development of radiation hard scintillators
Markley, F.; Woods, D.; Pla-Dalmau, A.; Foster, G.; Blackburn, R.
1992-05-01
Substantial improvements have been made in the radiation hardness of plastic scintillators. Cylinders of scintillating materials 2.2 cm in diameter and 1 cm thick have been exposed to 10 Mrads of gamma rays at a dose rate of 1 Mrad/h in a nitrogen atmosphere. One of the formulations tested showed an immediate decrease in pulse height of only 4% and has remained stable for 12 days while annealing in air. By comparison a commercial PVT scintillator showed an immediate decrease of 58% and after 43 days of annealing in air it improved to a 14% loss. The formulated sample consisted of 70 parts by weight of Dow polystyrene, 30 pbw of pentaphenyltrimethyltrisiloxane (Dow Corning DC 705 oil), 2 pbw of p-terphenyl, 0.2 pbw of tetraphenylbutadiene, and 0.5 pbw of UVASIL299LM from Ferro.
NASA Technical Reports Server (NTRS)
Rothschild, R. E.
1981-01-01
Past hard X-ray and lower energy satellite instruments are reviewed and it is shown that observation above 20 keV and up to hundreds of keV can provide much valuable information on the astrophysics of cosmic sources. To calculate possible sensitivities of future arrays, the efficiencies of a one-atmosphere inch gas counter (the HEAO-1 A-2 xenon-filled HED3) and a 3 mm phoswich scintillator (the HEAO-1 A-4 NaI LED1) were compared. Above 15 keV, the scintillator was more efficient. In a similar comparison, the sensitivity of germanium detectors did not differ much from that of the scintillators, except at high energies where the sensitivity would remain flat and not rise with loss of efficiency. Questions to be addressed concerning the physics of active galaxies and the diffuse radiation background, black holes, radio pulsars, X-ray pulsars, and galactic clusters are examined.
ERIC Educational Resources Information Center
Végh, Ladislav
2016-01-01
The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…
Nonlinear Global Optimization Using Curdling Algorithm
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
A timeline algorithm for astronomy missions
NASA Technical Reports Server (NTRS)
Moore, J. E.; Guffin, O. T.
1975-01-01
An algorithm is presented for generating viewing timelines for orbital astronomy missions of the pointing (nonsurvey/scan) type. The algorithm establishes a target sequence from a list of candidate targets in a way which maximizes total viewing time. Two special cases are treated. One concerns dim targets which, due to lighting constraints, are scheduled only during the antipolar portion of each orbit. They normally require long observation times extending over several revolutions. A minimum slew heuristic is employed to select the sequence of dim targets. The other case deals with bright, or short duration, targets, which have less restrictive lighting constraints and are scheduled during the portion of each orbit when dim targets cannot be viewed. Since this process moves much more rapidly than the dim path, an enumeration algorithm is used to select the sequence that maximizes total viewing time.
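The minimum-slew heuristic for sequencing dim targets amounts to a nearest-neighbor greedy pass. The sketch below is my own illustration of that idea, with pointing simplified to 2-D coordinates and hypothetical target names, not the mission algorithm itself:

```python
import math

def min_slew_sequence(targets, start):
    """Greedy minimum-slew heuristic: from the current pointing, always
    slew to the nearest not-yet-observed target.  `targets` maps a target
    name to a simplified 2-D pointing coordinate; `start` is the initial
    pointing.  Returns the observation order."""
    seq, pos, todo = [], start, dict(targets)
    while todo:
        # pick the unvisited target with the smallest slew distance
        name = min(todo, key=lambda k: math.dist(pos, todo[k]))
        seq.append(name)
        pos = todo.pop(name)
    return seq
```

On the sphere the slew metric would be an angular separation rather than a Euclidean distance, and real schedulers add the lighting and orbit constraints the abstract describes.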
Hardness corrections for copper are inappropriate for protecting sensitive freshwater biota.
Markich, S J; Batley, G E; Stauber, J L; Rogers, N J; Apte, S C; Hyne, R V; Bowles, K C; Wilde, K L; Creighton, N M
2005-06-01
Toxicity testing using a freshwater alga (Chlorella sp.), a bacterium (Erwinnia sp.) and a cladoceran (Ceriodaphnia cf. dubia) exposed to copper in synthetic and natural freshwaters of varying hardness (44-375 mg CaCO3/l), with constant alkalinity, pH and dissolved organic carbon concentration, demonstrated negligible hardness effects in the pH range 6.1-7.8. Therefore, the use of a generic hardness-correction algorithm, developed as part of national water quality guidelines for protecting freshwater biota, is not recommended for assessing the toxicity of copper to these, and other, sensitive freshwater species. Use of the algorithm for these sensitive species will be underprotective because the calculated concentrations of copper in water that cause a toxic effect will be higher.
Weighted constraints in generative linguistics.
Pater, Joe
2009-08-01
Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) or as ranks (OT). Weighted constraints have advantages for the construction of accounts of language learning and other cognitive processes, partly because they allow for the adaptation of connectionist and statistical models. HG has been little studied in generative linguistics, however, largely due to influential claims that weighted constraints make incorrect predictions about the typology of natural languages, predictions that are not shared by the more popular OT. This paper makes the case that HG is in fact a promising framework for typological research, and reviews and extends the existing arguments for weighted over ranked constraints.
Areibi, Shawki; Yang, Zhen
2004-01-01
Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem. PMID:15355604
Fluid convection, constraint and causation
Bishop, Robert C.
2012-01-01
Complexity—nonlinear dynamics for my purposes in this essay—is rich with metaphysical and epistemological implications but is receiving sustained philosophical analysis only recently. I will explore some of the subtleties of causation and constraint in Rayleigh–Bénard convection as an example of a complex phenomenon, and extract some lessons for further philosophical reflection on top-down constraint and causation particularly with respect to causal foundationalism. PMID:23386955
NASA Astrophysics Data System (ADS)
Gras, Vincent; Luong, Michel; Amadon, Alexis; Boulant, Nicolas
2015-12-01
In Magnetic Resonance Imaging at ultra-high field, kT-points radiofrequency pulses combined with parallel transmission are a promising technique to mitigate the B1 field inhomogeneity in 3D imaging applications. The optimization of the corresponding k-space trajectory for its slice-selective counterpart, i.e. the spokes method, has been shown in various studies to be very valuable but also dependent on the hardware and specific absorption rate constraints. Due to the larger number of degrees of freedom than for spokes excitations, joint design techniques based on fine discretization (gridding) of the parameter space become hardly tractable for kT-points pulses. In this article, we thus investigate the simultaneous optimization of the 3D blipped k-space trajectory and of the kT-points RF pulses, using a magnitude least-squares cost function, with explicit constraints and in the large flip angle regime. A second-order active-set algorithm is employed due to its demonstrated success and robustness in similar problems. An analysis of global optimality and of the structure of the returned trajectories is proposed. The improvement provided by the k-space trajectory optimization is validated experimentally by measuring the flip angle on a spherical water phantom at 7T and via Quantum Process Tomography.
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
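The geometric-arithmetic mean step at the heart of the MM derivation can be sketched in one inequality. This is my reconstruction of the standard AM-GM majorization for a posynomial term, so the notation may differ from the paper's:

```latex
% For a monomial f(x) = c \prod_j x_j^{a_j} with c > 0, current iterate
% x^{(m)}, and b = \sum_j |a_j|, the weights |a_j|/b sum to one, so the
% weighted AM-GM inequality gives a surrogate with separated parameters:
\[
f(x) \;=\; c \prod_j x_j^{a_j}
  \;\le\; f\bigl(x^{(m)}\bigr) \sum_j \frac{|a_j|}{b}
  \left( \frac{x_j}{x_j^{(m)}} \right)^{b\,\mathrm{sgn}(a_j)},
\qquad b = \sum_j |a_j|,
\]
% with equality at x = x^{(m)}.  Each coordinate x_j now appears in its
% own term, so minimizing the surrogate reduces to independent
% one-dimensional problems, as the abstract describes.
```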
Packing Boxes into Multiple Containers Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Menghani, Deepak; Guha, Anirban
2016-07-01
Container loading problems have been studied extensively in the literature and various analytical, heuristic and metaheuristic methods have been proposed. This paper presents two different variants of a genetic algorithm framework for the three-dimensional container loading problem for optimally loading boxes into multiple containers with constraints. The algorithms are designed so that it is easy to incorporate various constraints found in real life problems. The algorithms are tested on data of standard test cases from literature and are found to compare well with the benchmark algorithms in terms of utilization of containers. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real life scenarios.
Hard X-ray outbursts in LMXBs: the case of 4U 1705-44
NASA Astrophysics Data System (ADS)
D'Ai, Antonino
2011-10-01
We propose a 60 ks XMM-Newton observation of the atoll source 4U 1705-44 as a Target of Opportunity when the source is in the hard state. This observation will provide the still-missing constraints on the shape of the reflection component in this spectral state. The XMM observation will be coupled with weeks-long coverage, through periodic visits, made with the Swift satellite.
Fuzzy and hard clustering analysis for thyroid disease.
Azar, Ahmad Taher; El-Said, Shaimaa Ahmed; Hassanien, Aboul Ella
2013-07-01
Thyroid hormones produced by the thyroid gland help regulate the body's metabolism. A variety of methods have been proposed in the literature for thyroid disease classification. As far as we know, clustering techniques have not previously been applied to thyroid disease data sets. This paper proposes a comparison between hard and fuzzy clustering algorithms on a thyroid disease data set in order to find the optimal number of clusters. Different scalar validity measures are used in comparing the performances of the proposed clustering systems. To demonstrate the performance of each algorithm, the feature values that represent thyroid disease are used as input for the system. Several runs are carried out and recorded with a different number of clusters specified for each run (between 2 and 11), so as to establish the optimum number of clusters. To find the optimal number of clusters, the so-called elbow criterion is applied. The experimental results revealed that for all algorithms, the elbow was located at c=3. The clustering results for all algorithms are then visualized by the Sammon mapping method to find a low-dimensional (normally 2D or 3D) representation of a set of points distributed in a high-dimensional pattern space. The study concludes with some recommendations for improving the determination of the actual number of clusters present in a data set. PMID:23357404
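The elbow criterion is easy to reproduce on synthetic data. The sketch below uses made-up, well-separated 2-D clusters rather than the thyroid features, and plain hard (Lloyd's) k-means rather than the paper's fuzzy variants: the within-cluster scatter drops sharply up to the true cluster count and only marginally afterwards.

```python
import numpy as np

def farthest_first(X, k):
    """Deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d2))])
    return np.array(centers)

def kmeans_inertia(X, k, iters=100):
    """Plain Lloyd's algorithm; returns the within-cluster sum of squares."""
    centers = farthest_first(X, k)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return float(((X - centers[labels]) ** 2).sum())

# Synthetic stand-in data: three well-separated groups in the plane.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 2))
               for c in ([0, 0], [5, 5], [0, 5])])

inertias = {k: kmeans_inertia(X, k) for k in range(2, 8)}
# Elbow criterion: the drop from k=2 to k=3 dwarfs the drop from k=3 to k=4.
```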
Laser-induced autofluorescence of oral cavity hard tissues
NASA Astrophysics Data System (ADS)
Borisova, E. G.; Uzunov, Tz. T.; Avramov, L. A.
2007-03-01
In the current study, the autofluorescence of oral cavity hard tissues was investigated to obtain a more complete picture of their optical properties. A nitrogen laser (ILGI-503, Russia; 337.1 nm, 14 μJ, 10 Hz) was used as the excitation source. In vitro spectra from enamel, dentine, cartilage, spongiosa, and the cortical part of the periodontal bones were recorded using a fiber-optic microspectrometer (PC2000, "Ocean Optics" Inc., USA). Gingival fluorescence was also acquired to compare its spectral properties with those of the hard oral tissues. The samples differ significantly from one another in their fluorescence properties. Signals from different collagen types and collagen cross-links are clearly observed, with maxima at 385, 430, and 480-490 nm. In dentine, only two maxima, at 440 and 480 nm, are observed, also related to collagen structures. In gingival and spongiosa samples, traces of hemoglobin were observed through its re-absorption at 545 and 575 nm, which distorts the fluorescence spectra detected from these anatomic sites. The results obtained in this study are intended for use in developing algorithms for the diagnosis and differentiation of tooth lesions and other disorders of oral cavity hard tissues, such as periodontitis and gingivitis.
Habitat Suitability Index Models: Hard clam
Mulholland, Rosemarie
1984-01-01
Two species of hard clams occur along the Atlantic and Gulf of Mexico coasts of North America: the southern hard clam, Mercenaria campechiensis Gmelin 1791, and the northern hard clam, Mercenaria mercenaria Linne 1758 (Wells 1957b). The latter species, also commonly known as the quahog, was formerly named Venus mercenaria. The two species are closely related, produce viable hybrids (Menzel and Menzel 1965), and may be a single species.
Spacecraft Attitude Maneuver Planning Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2004-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
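The constraint-into-cost idea above can be sketched with a toy planar analogue: an intermediate waypoint is chosen so the start-to-goal path stays clear of a circular keep-out zone, with the constraint folded into the objective as a penalty term. The geometry and penalty weight are illustrative choices of our own, and a plain random search stands in for the GA's directed random search.

```python
import math, random

# Toy planar stand-in for constrained path planning.
OBSTACLE, KEEP_OUT_RADIUS = (1.0, 0.0), 0.3   # e.g., a bright-object direction

def seg_dist(p, a, b):
    """Distance from point p to segment a-b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    denom = max(dx * dx + dy * dy, 1e-12)
    t = max(0.0, min(1.0, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / denom))
    return math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy))

def cost(wp, start=(0.0, 0.0), goal=(2.0, 0.0), penalty=100.0):
    """Path length plus a penalty that folds the keep-out constraint
    into the objective."""
    length = math.dist(start, wp) + math.dist(wp, goal)
    clearance = min(seg_dist(OBSTACLE, start, wp), seg_dist(OBSTACLE, wp, goal))
    return length + penalty * max(0.0, KEEP_OUT_RADIUS - clearance)

# A plain random search stands in for the GA's directed random search.
random.seed(1)
best = min(((random.uniform(0, 2), random.uniform(-1, 1)) for _ in range(5000)),
           key=cost)
```

The straight path has length 2.0 but violates the keep-out zone; the best penalized waypoint detours around it at a small extra cost.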
Diverless hard-pipe connection systems for subsea pipelines and flowlines
Reddy, S.K.; Paull, B.M.; Hals, B.E.
1996-12-31
Hard-pipe tie-in jumpers, for diverless subsea connections between production manifolds and export pipelines (or satellite wells), are an economical alternative to traditional diverless connection methods like deflection and pull-in, and also to flexible pipe jumpers. A systems-level approach to the design of the jumpers, which takes into consideration performance requirements, measurement methods, fabrication and installation constraints, as well as code requirements, is essential to making these connections economical and reliable. A dependable, ROV-friendly measurement system is key to making these connections possible. The parameters affecting the design of hard-pipe jumpers, and the relationships among them, are discussed in the context of minimizing cost while maintaining reliability. The applicability of pipeline codes to the design of hard-pipe jumpers is examined. The design, construction and installation of the Amoco Liuhua 11-1 pipeline tie-in jumpers are presented as a case study for applying these concepts.
NASA Astrophysics Data System (ADS)
Radac, Mircea-Bogdan; Precup, Radu-Emil
2016-05-01
This paper presents the design and experimental validation of a new model-free data-driven iterative reference input tuning (IRIT) algorithm that solves a reference trajectory tracking problem as an optimization problem with control signal saturation constraints and control signal rate constraints. The IRIT algorithm design employs an experiment-based stochastic search algorithm to use the advantages of iterative learning control. The experimental results validate the IRIT algorithm applied to a non-linear aerodynamic position control system. The results prove that the IRIT algorithm delivers a significant control system performance improvement in only a few iterations and experiments conducted on the real-world process, with model-free parameter tuning.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats, or hard caps... STANDARDS-UNDERGROUND COAL MINES Miscellaneous § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats or hard caps... Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn...
Berera, Arjun
2000-07-01
Two mechanisms are examined for hard double-"pomeron"-exchange dijet production: the factorized model of Ingelman-Schlein and the nonfactorized model of lossless jet production, which exhibits the Collins-Frankfurt-Strikman mechanism. Comparisons between these two mechanisms are made of the total cross section, E{sub T} spectra, and mean rapidity spectra. For both mechanisms, several specific models are examined with the cuts of the Collider Detector at Fermilab (CDF), D0, and representative cuts of CERN LHC. Distinct qualitative differences are predicted by the two mechanisms for the CDF y{sub +} spectra and for the E{sub T} spectra for all three experimental cuts. The preliminary CDF and D0 experimental data for this process are interpreted in terms of these two mechanisms. The y{sub +} spectra of the CDF data are suggestive of domination by the factorized Ingelman-Schlein mechanism, whereas the D0 data show no greater preference for either mechanism. An inconsistency is found among all the theoretical models in attempting to explain the ratio of the cross sections given by the data from these two experiments. (c) 2000 The American Physical Society.
Radiation Hardness of Trigger Electronics
NASA Astrophysics Data System (ADS)
Zawisza, Irene; Safonov, Alexei; Gilmore, Jason; Khotilovich, Vadim
2011-10-01
As the maximum intensity of particle accelerators increases, probing the most basic questions of the Universe, detectors and electronics must be designed to ensure reliability in high-radiation environments. As the Large Hadron Collider (LHC) beam intensity is increased, it is necessary to upgrade the electronics in the Compact Muon Solenoid (CMS). To select interesting events, CMS utilizes fast electronics, which are installed in the experimental cavern. However, much higher post-upgrade levels of radiation in the cavern set tight requirements on the radiation hardness of the new electronics. The damaging effects of high- and low-energy radiation lead to disruption of digital circuits and accumulated degradation of silicon components. Quantifying the radiation exposure is required for the design of a radiation-tolerant system, but current simulation studies suffer from large uncertainties. We compare simulation predictions with measured performance in two different experimental studies, which evaluate component performance before and after irradiation to determine the survivability of electronics in the harsh CMS environment. Funded by DOE and NSF-REU Program.
Developmental constraints on behavioural flexibility
Holekamp, Kay E.; Swanson, Eli M.; Van Meter, Page E.
2013-01-01
We suggest that variation in mammalian behavioural flexibility not accounted for by current socioecological models may be explained in part by developmental constraints. From our own work, we provide examples of constraints affecting variation in behavioural flexibility, not only among individuals, but also among species and higher taxonomic units. We first implicate organizational maternal effects of androgens in shaping individual differences in aggressive behaviour emitted by female spotted hyaenas throughout the lifespan. We then compare carnivores and primates with respect to their locomotor and craniofacial adaptations. We inquire whether antagonistic selection pressures on the skull might impose differential functional constraints on evolvability of skulls and brains in these two orders, thus ultimately affecting behavioural flexibility in each group. We suggest that, even when carnivores and primates would theoretically benefit from the same adaptations with respect to behavioural flexibility, carnivores may nevertheless exhibit less behavioural flexibility than primates because of constraints imposed by past adaptations in the morphology of the limbs and skull. Phylogenetic analysis consistent with this idea suggests greater evolutionary lability in relative brain size within families of primates than carnivores. Thus, consideration of developmental constraints may help elucidate variation in mammalian behavioural flexibility. PMID:23569298
The development of hard x-ray optics at MSFC
NASA Astrophysics Data System (ADS)
Ramsey, Brian D.; Elsner, Ron F.; Engelhaupt, Darell; Gubarev, Mikhail V.; Kolodziejczak, Jeffery J.; O'Dell, Stephen L.; Speegle, Chet O.; Weisskopf, Martin C.
2004-02-01
We have developed the electroformed-nickel replication process to enable us to fabricate light-weight, high-quality mirrors for the hard-x-ray region. Two projects currently utilizing this technology are the production of 240 mirror shells, of diameters ranging from 50 to 94 mm, for our HERO balloon payload, and 150- and 230-mm-diameter shells for a prototype Constellation-X hard-x-ray telescope module. The challenge for the former is to fabricate, mount, align and fly a large number of high-resolution mirrors within the constraints of a modest budget. For the latter, the challenge is to maintain high angular resolution despite weight-budget-driven mirror shell thicknesses (100 μm) which make the shells extremely sensitive to fabrication and handling stresses, and to ensure that the replication process does not degrade the ultra-smooth surface finish (~3 Å) required for eventual multilayer coatings. We present a progress report on these two programs.
"Hard Science" for Gifted 1st Graders
ERIC Educational Resources Information Center
DeGennaro, April
2006-01-01
"Hard Science" is designed to teach 1st grade gifted students accurate and high level science concepts. It is based upon their experience of the world and attempts to build a foundation for continued love and enjoyment of science. "Hard Science" provides field experiences and opportunities for hands-on discovery working beside experts in the field…
21 CFR 133.150 - Hard cheeses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Hard cheeses. 133.150 Section 133.150 Food and... CONSUMPTION CHEESES AND RELATED CHEESE PRODUCTS Requirements for Specific Standardized Cheese and Related Products § 133.150 Hard cheeses. (a) The cheeses for which definitions and standards of identity...
Retraction of Hard, Lozano, and Tversky (2006)
ERIC Educational Resources Information Center
Hard, B. M.; Lozano, S. C.; Tversky, B.
2008-01-01
Reports a retraction of "Hierarchical encoding of behavior: Translating perception into action" by Bridgette Martin Hard, Sandra C. Lozano and Barbara Tversky (Journal of Experimental Psychology: General, 2006[Nov], Vol 135[4], 588-608). All authors retract this article. Co-author Tversky and co-author Hard believe that the research results cannot…
Genetic map construction with constraints
Clark, D.A.; Rawlings, C.J.; Soursenot, S.
1994-12-31
A pilot program, CME, is described for generating a physical genetic map from hybridization fingerprinting data. CME is implemented in the parallel constraint logic programming language ElipSys. The features of constraint logic programming are used to enable the integration of preexisting mapping information (partial probe orders from cytogenetic maps and local physical maps) into the global map generation process, while parallelism enables the search space to be traversed more efficiently. CME was tested using data from chromosome 2 of Schizosaccharomyces pombe and was found to generate maps as good as (and sometimes better than) those of a more traditional method. This paper illustrates the practical benefits of using a symbolic logic programming language and shows that the features of constraint handling and parallel execution bring the development of practical systems based on AI programming technologies nearer to being a reality.
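The declarative flavor of this approach can be mimicked in a few lines. The sketch below uses a hypothetical four-probe example and naive generate-and-test, without ElipSys's constraint propagation or parallelism: candidate probe orders are filtered against partial-order constraints of the kind taken from preexisting maps.

```python
from itertools import permutations

# Hypothetical probes and partial-order constraints from prior maps:
probes = ["p1", "p2", "p3", "p4"]
before = [("p1", "p3"), ("p2", "p3"), ("p3", "p4")]  # (a, b): a precedes b

def consistent(order):
    """True if the candidate probe order satisfies every precedence constraint."""
    pos = {p: i for i, p in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in before)

maps = [order for order in permutations(probes) if consistent(order)]
# Only two probe orders survive: p1/p2 in either order, then p3, then p4.
```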
Symmetry constraint for foreground extraction.
Fu, Huazhu; Cao, Xiaochun; Tu, Zhuowen; Lin, Dongdai
2014-05-01
Symmetry as an intrinsic shape property is often observed in natural objects. In this paper, we discuss how explicitly taking into account the symmetry constraint can enhance the quality of foreground object extraction. In our method, a symmetry foreground map is used to represent the symmetry structure of the image, which includes the symmetry matching magnitude and the foreground location prior. Then, the symmetry constraint model is built by introducing this symmetry structure into the graph-based segmentation function. Finally, the segmentation result is obtained via graph cuts. Our method encourages objects with symmetric parts to be consistently extracted. Moreover, our symmetry constraint model is applicable to weak symmetric objects under the part-based framework. Quantitative and qualitative experimental results on benchmark datasets demonstrate the advantages of our approach in extracting the foreground. Our method also shows improved results in segmenting objects with weak, complex symmetry properties.
Hardness Evolution of Gamma-Irradiated Polyoxymethylene
NASA Astrophysics Data System (ADS)
Hung, Chuan-Hao; Harmon, Julie P.; Lee, Sanboh
2016-04-01
This study focuses on analyzing hardness evolution in gamma-irradiated polyoxymethylene (POM) exposed to elevated temperatures after irradiation. Hardness increases with increasing annealing temperature and time, but decreases with increasing gamma ray dose. Hardness changes are attributed to defects generated in the microstructure and molecular structure. Gamma irradiation causes a decrease in the glass transition temperature, melting point, and extent of crystallinity. The kinetics of defects resulting in hardness changes follow a first-order structure relaxation. The rate constant adheres to an Arrhenius equation, and the corresponding activation energy decreases with increasing dose due to chain scission during gamma irradiation. The structure relaxation of POM has a lower energy barrier in crystalline regions than in amorphous ones. The hardness evolution in POM is an endothermic process due to the semi-crystalline nature of this polymer.
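The two relations the abstract invokes, the Arrhenius rate law and first-order relaxation toward an equilibrium hardness, can be written down directly. The prefactor, activation energy, and hardness values below are illustrative, not fitted to the POM data.

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius law: k = A * exp(-Ea / (R*T)); k grows with temperature."""
    return A * math.exp(-Ea / (R_GAS * T))

def hardness(t, H0, Hinf, k):
    """First-order structure relaxation toward the equilibrium hardness Hinf:
    H(t) = Hinf + (H0 - Hinf) * exp(-k*t)."""
    return Hinf + (H0 - Hinf) * math.exp(-k * t)
```

With Hinf > H0, hardness rises monotonically with annealing time, matching the reported behavior; a lower activation energy (higher dose) gives a larger rate constant at fixed temperature.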
Magnetotail dynamics under isobaric constraints
NASA Technical Reports Server (NTRS)
Birn, Joachim; Schindler, Karl; Janicke, Lutz; Hesse, Michael
1994-01-01
Using linear theory and nonlinear MHD simulations, we investigate the resistive and ideal MHD stability of two-dimensional plasma configurations under the isobaric constraint dP/dt = 0, which in ideal MHD is equivalent to conserving the pressure function P = P(A), where A denotes the magnetic flux. This constraint is satisfied for incompressible modes, such as Alfvén waves, and for systems undergoing energy losses. The linear stability analysis leads to a Schrödinger equation, which can be investigated by standard quantum mechanics procedures. We present an application to a typical stretched magnetotail configuration. For a one-dimensional sheet equilibrium, characteristic properties of the tearing instability are rediscovered. However, the maximum growth rate scales with the 1/7 power of the resistivity, which implies much faster growth than for the standard tearing mode (assuming that the resistivity is small). The same basic eigenmode is found also for weakly two-dimensional equilibria, even in the ideal MHD limit. In this case the growth rate scales with the 1/4 power of the normal magnetic field. The results of the linear stability analysis are confirmed qualitatively by nonlinear dynamic MHD simulations. These results suggest the interesting possibility that substorm onset, or the thinning in the late growth phase, is caused by the release of a thermodynamic constraint without the (immediate) necessity of releasing the ideal MHD constraint. In the nonlinear regime the resistive and ideal developments differ in that the ideal mode does not lead to neutral line formation without the further release of the ideal MHD constraint; instead a thin current sheet forms. The isobaric constraint is critically discussed. Under perhaps more realistic adiabatic conditions the ideal mode appears to be stable but could be driven by external perturbations and thus generate the thin current sheet in the late growth phase, before a nonideal instability sets in.
Evolutionary Constraints to Viroid Evolution
Elena, Santiago F.; Gómez, Gustavo; Daròs, José-Antonio
2009-01-01
We suggest that viroids are trapped into adaptive peaks as the result of adaptive constraints. The first one is imposed by the necessity to fold into packed structures to escape from RNA silencing. This creates antagonistic epistases, which make future adaptive trajectories contingent upon the first mutation and slow down the rate of adaptation. This second constraint can only be surpassed by increasing genetic redundancy or by recombination. Eigen’s paradox imposes a limit to the increase in genome complexity in the absence of mechanisms reducing mutation rate. Therefore, recombination appears as the only possible route to evolutionary innovation in viroids. PMID:21994548
The constraint model of attrition
Hartley, D.S. III.
1989-01-01
Helmbold demonstrated a relationship between a ratio containing initial force sizes and casualties, herein called the Helmbold ratio, and the initial force ratio in a large number of historical battles. This paper examines some of the complexity of the Helmbold ratio using analytical and simulation techniques and demonstrates that a constraint model of attrition captures some aspects of historical data. The effect that the constraint model would have on warfare modeling is uncertain. However, some speculation has been attempted concerning its use in large scale simulations. 9 refs., 7 figs., 2 tabs.
Greenstone belt tectonics: Thermal constraints
NASA Technical Reports Server (NTRS)
Bickle, M. J.; Nisbet, E. G.
1986-01-01
Archaean rocks provide a record of the early stages of planetary evolution. The interpretation is frustrated by the probable unrepresentative nature of the preserved crust and by the well-known ambiguities of tectonic geological synthesis. Broad constraints can be placed on the tectonic processes in the early Earth from global-scale modeling of the thermal and chemical evolution of the Earth and its hydrosphere and atmosphere. The Archaean record is the main test of such models. Available general model constraints are outlined, based on the global tectonic setting within which Archaean crust evolved and on the direct evidence the Archaean record provides, particularly the thermal state of the early Earth.
NASA Astrophysics Data System (ADS)
Nättilä, J.; Steiner, A. W.; Kajava, J. J. E.; Suleimanov, V. F.; Poutanen, J.
2016-06-01
The cooling phase of thermonuclear (type-I) X-ray bursts can be used to constrain neutron star (NS) compactness by comparing the observed cooling tracks of bursts to accurate theoretical atmosphere model calculations. By applying the so-called cooling tail method, where the information from the whole cooling track is used, we constrain the mass, radius, and distance for three different NSs in low-mass X-ray binaries 4U 1702-429, 4U 1724-307, and SAX J1810.8-260. Care is taken to use only the hard state bursts where it is thought that the NS surface alone is emitting. We then use a Markov chain Monte Carlo algorithm within a Bayesian framework to obtain a parameterized equation of state (EoS) of cold dense matter from our initial mass and radius constraints. This allows us to set limits on various nuclear parameters and to constrain an empirical pressure-density relationship for the dense matter. Our predicted EoS results in a NS radius between 10.5 and 12.8 km (95% confidence limits) for a mass of 1.4 M⊙, depending slightly on the assumed composition. Because of systematic errors and uncertainty in the composition, these results should be interpreted as lower limits for the radius.
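The Bayesian sampling step can be caricatured with a toy Metropolis sampler over a single "radius" parameter with a synthetic Gaussian likelihood and a flat prior. All numbers are made up for illustration; the actual analysis uses the cooling-tail likelihood, a parameterized EoS, and multiple sources.

```python
import math, random

random.seed(0)
TRUE_R, SIGMA = 11.5, 0.5      # km; synthetic "measurement" setup
obs = TRUE_R                   # pretend we observed the noise-free value

def log_post(r):
    """Flat prior on a plausible range times a Gaussian likelihood."""
    if not 8.0 < r < 16.0:
        return -math.inf
    return -0.5 * ((obs - r) / SIGMA) ** 2

samples, r = [], 12.0
for i in range(20000):
    prop = r + random.gauss(0.0, 0.3)              # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(r):
        r = prop                                   # Metropolis accept
    if i >= 5000:                                  # discard burn-in
        samples.append(r)

mean_r = sum(samples) / len(samples)
```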
Tan, Q; Huang, G H; Cai, Y P
2010-09-01
The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. PMID:20580864
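The basic interval-parameter mechanic underlying such models can be shown in miniature: with interval-valued cost coefficients, any fixed plan yields an interval-valued objective. The numbers are hypothetical, and the full SI-IFTMILP model additionally handles fuzzy boundary intervals, random variables, and two-stage recourse.

```python
# Interval coefficients [c_lo, c_hi] for a cost c1*x1 + c2*x2 with x >= 0.
c = [(2.0, 3.0), (5.0, 6.5)]

def objective_bounds(x):
    """Lower/upper bound of the interval-valued objective for a fixed plan x.
    With nonnegative x, the bounds come from the coefficient endpoints."""
    lo = sum(c_lo * xi for (c_lo, _), xi in zip(c, x))
    hi = sum(c_hi * xi for (_, c_hi), xi in zip(c, x))
    return lo, hi
```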
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. It consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found quickly and are sufficiently accurate for the purpose. In this paper we present an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
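A minimal genetic algorithm can be sketched on the single-vehicle special case of the VRP (i.e., the TSP), with eight synthetic customers on a circle, order crossover, swap mutation, and elitism. All parameters are illustrative, not those of the paper's two GA variants.

```python
import math, random
from itertools import permutations

random.seed(7)

# Eight synthetic "customers" on a circle; the optimal tour is the circle order.
cities = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
          for i in range(8)]

def tour_length(tour):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining cities in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def evolve(pop_size=60, generations=300, mutation=0.2):
    pop = [random.sample(range(8), 8) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        next_gen = pop[:10]                      # elitism
        while len(next_gen) < pop_size:
            p1, p2 = random.sample(pop[:30], 2)  # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < mutation:       # swap mutation
                i, j = random.sample(range(8), 2)
                child[i], child[j] = child[j], child[i]
            next_gen.append(child)
        pop = next_gen
    return min(pop, key=tour_length)

best = evolve()
# At this size the global optimum is cheap to verify by brute force.
opt = min(tour_length((0,) + p) for p in permutations(range(1, 8)))
```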
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning
A new algorithm for constrained nonlinear least-squares problems, part 1
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, F. T.
1983-01-01
A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.
Contextual Constraints on Adolescents' Leisure.
ERIC Educational Resources Information Center
Silbereisen, Rainer K.
2003-01-01
Interlinks crucial cultural themes emerging from preceding chapters, highlighting the contextual constraints in adolescents' use of free time. Draws parallels across the nations discussed on issues related to how school molds leisure time, the balance of passive versus active leisure, timing of leisure pursuits, and the cumulative effect of…
Perceptual Constraints in Phonotactic Learning
ERIC Educational Resources Information Center
Endress, Ansgar D.; Mehler, Jacques
2010-01-01
Structural regularities in language have often been attributed to symbolic or statistical general purpose computations, whereas perceptual factors influencing such generalizations have received less interest. Here, we use phonotactic-like constraints as a case study to ask whether the structural properties of specific perceptual and memory…
Constraints on galaxy formation theories
NASA Technical Reports Server (NTRS)
Szalay, A. S.
1986-01-01
The present theories of galaxy formation are reviewed. The relation between peculiar velocities, temperature fluctuations of the microwave background and the correlation function of galaxies point to the possibility that galaxies do not form uniformly everywhere. The velocity data provide strong constraints on the theories even in the case when light does not follow mass of the universe.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that this algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This is the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving a new system of linear equations in each iterative step. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these problems can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size, involving millions of variables.
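The duality-based reduction described above can be sketched classically: the Lawson-Hanson NNLS step the authors cite is available as `scipy.optimize.nnls`, and the tiny LP below is an illustrative construction, not an example from the paper:

```python
import numpy as np
from scipy.optimize import nnls

# Toy LP (illustrative):  max x1 + x2  s.t.  x1 <= 1, x2 <= 1, x >= 0.
# Stack primal feasibility (A x + s = b), dual feasibility (A^T y - t = c)
# and the duality condition (c^T x - b^T y = 0) into one system M z = d
# over the nonnegative unknowns z = (x, y, s, t), then solve it as an
# NNLS problem with the Lawson-Hanson algorithm.
A = np.eye(2)
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0])

M = np.zeros((5, 8))
M[0:2, 0:2] = A                # A x
M[0:2, 4:6] = np.eye(2)        # + s  (primal slacks)
M[2:4, 2:4] = A.T              # A^T y
M[2:4, 6:8] = -np.eye(2)       # - t  (dual surpluses)
M[4, 0:2] = c                  # c^T x
M[4, 2:4] = -b                 # - b^T y
d = np.concatenate([b, c, [0.0]])

z, rnorm = nnls(M, d)          # residual ~ 0 means a feasible optimal pair
x_opt = z[0:2]                 # recovered primal solution
```

For this instance the zero-residual solution is unique, so NNLS recovers the LP optimum x = (1, 1) directly; the quantum speedup in the paper targets the linear solves inside this kind of iteration.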
A segmentation algorithm for noisy images
Xu, Y.; Olman, V.; Uberbacher, E.C.
1996-12-31
This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into subtrees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized, under the constraints that each subtree has at least a specified number of pixels and that two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.
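As a drastically simplified sketch of the idea (a 1-D "image", where the minimum spanning tree of the pixel chain is the chain itself; the threshold and minimum size are illustrative assumptions):

```python
# Cut the chain at edges whose gray-level difference exceeds a threshold,
# provided each resulting segment keeps a minimum number of pixels --
# a 1-D analogue of partitioning the MST into homogeneous subtrees.
def segment(pixels, diff_thresh=30, min_size=2):
    segments, current = [], [pixels[0]]
    for prev, cur in zip(pixels, pixels[1:]):
        if abs(cur - prev) > diff_thresh and len(current) >= min_size:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

parts = segment([10, 12, 11, 90, 95, 92, 40, 42])
# parts -> [[10, 12, 11], [90, 95, 92], [40, 42]]
```

In 2-D the spanning tree is built over the pixel grid with gray-level differences as edge weights, and the same cut-if-different / keep-if-large-enough logic applies to subtrees rather than runs.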
Genetic algorithms and supernovae type Ia analysis
Bogdanos, Charalampos; Nesseris, Savvas
2009-05-15
We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model.
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
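The simulated annealing search favored by the comparison above can be sketched generically; the toy conflict-based scheduling instance below is an assumption for illustration, not the paper's benchmark:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Generic simulated annealing loop with Metropolis acceptance:
    downhill moves are always taken, uphill moves with prob exp(-delta/t)."""
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling                      # geometric cooling schedule
    return best

# Hypothetical toy instance: 6 observations, 3 time slots; the cost is
# the number of conflicting observation pairs assigned to the same slot.
CONFLICTS = {(0, 1), (1, 2), (3, 4), (4, 5), (0, 5)}

def n_conflicts(assign):
    return sum(1 for (a, b) in CONFLICTS if assign[a] == assign[b])

def move(assign, rng):
    y = list(assign)
    y[rng.randrange(len(y))] = rng.randrange(3)   # reassign one observation
    return y

sched = anneal(n_conflicts, move, [0] * 6)
```

A real EOS scheduler would fold slew times, downlink windows, and priorities into `cost`; only the acceptance rule above is specific to simulated annealing.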
Srinivasan, Gautham; Srinivas, Chakravarthi Rangachari; Mathew, Anil C; Duraiswami, Divakar
2013-01-01
Background: Hardness of water is determined by the amount of salts (calcium carbonate [CaCO3] and magnesium sulphate [MgSO4]) present in the water. The hardness of the water used for washing hair may cause fragility of hair. Objective: To compare the tensile strength and elasticity of hair treated in hard water with those of hair treated in distilled water. Materials and Methods: 10-15 strands of hair of length 15-20 cm, lost during combing, were obtained from 15 volunteers. Each sample was cut in the middle to obtain 2 sets of hair per volunteer. One set of 15 samples was immersed in hard water and the other set in distilled water for 10 min on alternate days. The procedure was repeated for 30 days. The tensile strength and elasticity of the hair treated in hard water and in distilled water were determined using an INSTRON universal strength tester. Results: The hardness of the hard water and the distilled water was determined as 212.5 ppm and 10 ppm of CaCO3, respectively. The tensile strength and elasticity of each sample were determined and the mean values were compared using the t-test. The mean (SD) tensile strength of hair treated in hard water was 105.28 (27.59) and in distilled water was 103.66 (20.92). No statistical significance was observed in the tensile strength, t = 0.181, P = 0.858. The mean (SD) elasticity of hair treated in hard water was 37.06 (2.24) and in distilled water was 36.84 (4.8). No statistical significance was observed in the elasticity, t = 0.161, P = 0.874. Conclusion: The hardness of water does not interfere with the tensile strength and elasticity of hair. PMID:24574692
Loop Closing Detection in RGB-D SLAM Combining Appearance and Geometric Constraints
Zhang, Heng; Liu, Yanli; Tan, Jindong
2015-01-01
A multi-feature-point matching algorithm fusing local geometric constraints is proposed for fast loop-closing detection in RGB-D Simultaneous Localization and Mapping (SLAM). The visual feature is encoded with BRAND (binary robust appearance and normals descriptor), which efficiently combines appearance and geometric shape information from RGB-D images. Furthermore, the feature descriptors are stored using the Locality-Sensitive Hashing (LSH) technique, and hierarchical clustering trees are used to search for these binary features. Finally, the algorithm for matching multiple feature points using local geometric constraints is provided, which can effectively reject possible false closure hypotheses. We demonstrate the efficiency of our algorithms by real-time RGB-D SLAM with loop-closing detection on indoor image sequences taken with a handheld Kinect camera, and by comparative experiments against other algorithms in RTAB-Map on a benchmark dataset. PMID:26102492
NASA Astrophysics Data System (ADS)
Lapert, M.; Tehini, R.; Turinici, G.; Sugny, D.
2009-06-01
We propose a monotonically convergent algorithm which can enforce spectral constraints on the control field (and extends to arbitrary filters). The procedure differs from standard algorithms in that at each iteration, the control field is taken as a linear combination of the control field (computed by the standard algorithm) and the filtered field. The parameter of the linear combination is chosen to respect the monotonic behavior of the algorithm and to be as close to the filtered field as possible. We test the efficiency of this method on molecular alignment. Using bandpass filters, we show how to select particular rotational transitions to reach high alignment efficiency. We also consider spectral constraints corresponding to experimental conditions using pulse-shaping techniques. We determine an optimal solution that could be implemented experimentally with this technique.
Three penalized EM-type algorithms for PET image reconstruction.
Teng, Yueyang; Zhang, Tie
2012-06-01
Based on Bayes theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain a smoothed reconstruction for positron emission tomography. This algorithm is flexible and convenient for most penalties, but its convergence is hard to guarantee. Toward the same goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and then solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm is time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation-maximization (EM) type algorithms to solve them. Unlike MAP and SOR, the proposed algorithms yield update rules by minimizing auxiliary functions constructed on the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrated the robustness and effectiveness of the proposed algorithms.
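For context, the unpenalized EM baseline (MLEM) that these penalized variants extend has a compact multiplicative update; the system matrix, counts, and iteration budget below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Unpenalized MLEM: x_j <- x_j * [A^T (y / (A x))]_j / sum_i a_ij.
    The multiplicative form keeps the image nonnegative automatically."""
    x = np.ones(A.shape[1])          # nonnegative initial image
    sens = A.sum(axis=0)             # sensitivity image, sum_i a_ij
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        x *= (A.T @ (y / proj)) / sens
    return x

# Tiny consistent test problem: 3 detector bins, 2 image pixels.
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
x_true = np.array([2.0, 3.0])
x_rec = mlem(A, A @ x_true)          # noiseless data -> recovers x_true
```

The penalized algorithms in the paper modify this update via auxiliary functions so that the penalized cost still decreases monotonically; the plain loop above is the common starting point.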
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Analysis of Hard Thin Film Coating
NASA Technical Reports Server (NTRS)
Shen, Dashen
1998-01-01
MSFC is interested in developing hard thin film coatings for bearings. The wear of the bearing is an important problem for space flight engines. Hard thin film coatings can drastically improve the surface of the bearing and improve its wear endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach is to use electron cyclotron resonance chemical vapor deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.
Differential cross-sections with hard targets
NASA Astrophysics Data System (ADS)
Brun, J. L.; Pacheco, A. F.
2005-09-01
When the concept of scattering differential cross-section is introduced in classical mechanics textbooks, usually it is first supposed that the target is a fixed, hard sphere. In this paper we calculate the scattering differential cross-section in the case of the hard target being a fixed figure of revolution of any shape. When the target is a paraboloid of revolution, we find the well-known formula corresponding to Rutherford's scattering. In addition, we analyse the inverse problem, i.e. given a differential cross-section, what is the profile of the corresponding hard target?
[Methods for evaluation of penile erection hardness].
Yuan, Yi-Ming; Zhou, Su; Zhang, Kai
2010-07-01
Penile erection hardness is one of the key factors for successful sexual intercourse, as well as an important index in the diagnosis and treatment of erectile dysfunction (ED). This article gives an overview on the component and impact factors of erection hardness, summarizes some commonly used evaluation methods, including those for objective indexes, such as Rigiscan, axial buckling test and color Doppler ultrasonography, and those for subjective indexes of ED patients, such as IIEF, the Erectile Function Domain of IIEF (IIEF-EF), and Erection Hardness Score (EHS), and discusses the characteristics of these methods.
Constraints on cosmic distance duality relation from cosmological observations
NASA Astrophysics Data System (ADS)
Lv, Meng-Zhen; Xia, Jun-Qing
2016-09-01
In this paper, we use a model-dependent method to revisit the constraint on the well-known cosmic distance duality relation (CDDR). By using the latest SNIa samples, such as Union2.1, JLA and SNLS, we find that the SNIa data alone cannot constrain the cosmic opacity parameter ε, which denotes the deviation from the CDDR, d_L = d_A(1 + z)^(2+ε), very well. The constraining power on ε from the luminosity distance indicator provided by SNIa and GRB can hardly be improved at present. When we include other cosmological observations, such as the measurements of the Hubble parameter, the baryon acoustic oscillations and the distance information from the cosmic microwave background, we obtain the tightest constraint on the cosmic opacity parameter ε, namely the 68% C.L. limit ε = 0.023 ± 0.018. Furthermore, we also consider the evolution of ε as a function of z using two methods, parametrization and principal component analysis, and do not find evidence for a deviation from zero. Finally, we simulate future SNIa and Hubble measurements and find that the mock data could give a very tight constraint on the cosmic opacity ε and verify the CDDR at high significance.
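The parametrized relation d_L = d_A(1 + z)^(2+ε) can be inverted directly to read off the opacity parameter from a distance pair; the numbers below are illustrative, not data from the paper:

```python
import math

def opacity_eps(d_L, d_A, z):
    """Solve d_L = d_A * (1 + z)**(2 + eps) for eps, given one distance pair."""
    return math.log(d_L / d_A) / math.log(1.0 + z) - 2.0

# A transparent universe (eps = 0) satisfies d_L = d_A * (1 + z)**2 exactly.
eps_transparent = opacity_eps(d_L=100.0 * 1.5 ** 2, d_A=100.0, z=0.5)
# A slightly opaque case: d_L inflated by an extra factor (1 + z)**0.1.
eps_opaque = opacity_eps(d_L=100.0 * 1.5 ** 2.1, d_A=100.0, z=0.5)
```

In practice d_L comes from standard candles (SNIa) and d_A from standard rulers (BAO/CMB), so ε is fit jointly over many redshifts rather than inverted point by point as here.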
Constraint-Led Changes in Internal Variability in Running
Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich
2012-01-01
We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of the two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36% to 74%) were observed during tube running, whereas running without tubes after the tube-running block showed no significant differences. Results show that elastic tubes affect variability on a muscular level despite the constant environmental conditions and, since stride duration was unaltered, underline the nervous system's adaptability to cope with somewhat unpredictable constraints. Key points: The elastic constraints led to an increase in iEMG variability but left stride-duration variability unaltered. Runners adapted to the elastic cords, evident in an iEMG variability decrease over time towards normal running. Hardly any aftereffects were observed in the iEMG analyses when comparing normal running after the constrained running block to normal running. PMID:24149117
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, because the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when the confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
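A minimal sketch of the tuning idea, with a hypothetical blend rule and gate threshold standing in for the authors' exact confidence measure:

```python
def tuned_estimate(x_unc, x_con, residual, residual_std, gate=2.0):
    """Blend unconstrained and constrained state estimates.

    Confidence is judged by how well the measurement residual agrees with
    its theoretical standard deviation: inside the gate we trust the
    (theoretically optimal) unconstrained estimate; far outside it we fall
    back on the heuristically constrained one. The linear ramp is an
    illustrative assumption, not the paper's tuning law.
    """
    z = abs(residual) / residual_std            # normalized residual
    w = min(1.0, max(0.0, (z - gate) / gate))   # 0 inside gate, -> 1 outside
    return (1.0 - w) * x_unc + w * x_con

# Residual well within 2 sigma: keep the unconstrained estimate.
x_trust = tuned_estimate(x_unc=5.3, x_con=5.0, residual=0.4, residual_std=1.0)
# Residual far outside the gate: fall back on the constrained estimate.
x_fallback = tuned_estimate(x_unc=5.3, x_con=5.0, residual=6.0, residual_std=1.0)
```

In the turbofan application, `x_con` would come from projecting the state onto the health-parameter inequality constraints; here it is just a given number.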
NASA Astrophysics Data System (ADS)
Afzalirad, Mojtaba; Rezaeian, Javad
2016-04-01
This study involves an unrelated parallel machine scheduling problem in which sequence-dependent set-up times, different release dates, machine eligibility and precedence constraints are considered to minimize total late works. A new mixed-integer programming model is presented and two efficient hybrid meta-heuristics, genetic algorithm and ant colony optimization, combined with the acceptance strategy of the simulated annealing algorithm (Metropolis acceptance rule), are proposed to solve this problem. Manifestly, the precedence constraints greatly increase the complexity of the scheduling problem to generate feasible solutions, especially in a parallel machine environment. In this research, a new corrective algorithm is proposed to obtain the feasibility in all stages of the algorithms. The performance of the proposed algorithms is evaluated in numerical examples. The results indicate that the suggested hybrid ant colony optimization statistically outperformed the proposed hybrid genetic algorithm in solving large-size test problems.
Genetic Algorithms for Digital Quantum Simulations
NASA Astrophysics Data System (ADS)
Las Heras, U.; Alvarez-Rodriguez, U.; Solano, E.; Sanz, M.
2016-06-01
We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). Several improvement strategies are also adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and finally, the tabu search algorithm is augmented with a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical rationale for the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, overcoming the shortcomings of any single algorithm and exploiting the advantages of each. The method is validated on widely used benchmark Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy value, demonstrating an effective way to predict protein structure. PMID:25069136
NASA Astrophysics Data System (ADS)
Yedidia, Jonathan S.
2011-11-01
Message-passing algorithms can solve a wide variety of optimization, inference, and constraint satisfaction problems. The algorithms operate on factor graphs that visually represent and specify the structure of the problems. After describing some of their applications, I survey the family of belief propagation (BP) algorithms, beginning with a detailed description of the min-sum algorithm and its exactness on tree factor graphs, and then turning to a variety of more sophisticated BP algorithms, including free-energy based BP algorithms, "splitting" BP algorithms that generalize "tree-reweighted" BP, and the various BP algorithms that have been proposed to deal with problems with continuous variables. The Divide and Concur (DC) algorithm is a projection-based constraint satisfaction algorithm that deals naturally with continuous variables, and converges to exact answers for problems where the solution sets of the constraints are convex. I show how it exploits the "difference-map" dynamics to avoid traps that cause more naive alternating projection algorithms to fail for non-convex problems, and explain that it is a message-passing algorithm that can also be applied to optimization problems. The BP and DC algorithms are compared, both in terms of their fundamental justifications and their strengths and weaknesses.
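The min-sum algorithm's exactness on trees is easy to check on a toy chain (the unary and pairwise costs below are illustrative); each message summarizes the best cost achievable on one side of an edge:

```python
# Min-sum message passing on a 3-node binary chain, where it is exact.
NODE_COST = [[0, 2], [1, 1], [3, 0]]          # unary cost per node and state
EDGE_COST = lambda a, b: 0 if a == b else 1   # pairwise disagreement cost

def min_sum_chain(node_cost):
    n = len(node_cost)
    msgs = [[0.0, 0.0]]                        # message into the first node
    for i in range(n - 1):
        # Message from node i to node i+1: for each state b of node i+1,
        # minimize over the state a of node i.
        msgs.append([min(node_cost[i][a] + msgs[-1][a] + EDGE_COST(a, b)
                         for a in (0, 1)) for b in (0, 1)])
    # The belief at the last node gives the global minimum cost.
    return min(node_cost[-1][b] + msgs[-1][b] for b in (0, 1))

best = min_sum_chain(NODE_COST)   # global minimum over all 8 assignments
```

On graphs with cycles the same update is run iteratively ("loopy" BP) and exactness is lost, which is where the free-energy and splitting variants surveyed above come in.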
Competitive learning with pairwise constraints.
Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep
2013-01-01
Constrained clustering has been an active research topic since the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results--in terms of the normalized mutual information (NMI)--from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
Stress constraints in optimality criteria design
NASA Technical Reports Server (NTRS)
Levy, R.
1982-01-01
Procedures described emphasize the processing of stress constraints within optimality criteria designs for low structural weight with stress and compliance constraints. Prescreening criteria are used to partition stress constraints into either potentially active primary sets or passive secondary sets that require minimal processing. Side constraint boundaries for passive constraints are derived by projections from design histories to modify conventional stress-ratio boundaries. Other procedures described apply partial structural modification reanalysis to design variable groups to correct stress constraint violations of unfeasible designs. Sample problem results show effective design convergence and, in particular, advantages for reanalysis in obtaining lower feasible design weights.
Dirac's Covariant Constraint Dynamics Applied to the Baryon Spectrum
NASA Astrophysics Data System (ADS)
Whitney, Joshua; Crater, Horace
2010-02-01
A baryon is a hadron containing three quarks in a combination of up, down, strange, charm, or bottom. For prediction of the baryon energy spectrum, a baryon is modeled as a three-body system with the interacting forces coming from a set of two-body potentials that depend on the distance between the quarks, the spin-spin and spin-orbit angular momentum coupling terms, and a tensor term. Techniques and equations are derived from Todorov's work on constraint dynamics and the quasi-potential equation together with Two Body Dirac equations developed by Crater and Van Alstine, and adapted to this specific problem by further use of Sazdjian's N-body constraint dynamics for general confined systems. Baryon spectroscopy results are presented and compared with experiment. Typically, a best-fit method is used in the analyses, employing several different algorithms, including a gradient approach, Monte Carlo modeling, and simulated annealing methods.
Finite element solution of optimal control problems with inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1990-01-01
A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
Exploring stochasticity and imprecise knowledge based on linear inequality constraints.
Subbey, Sam; Planque, Benjamin; Lindstrøm, Ulf
2016-09-01
This paper explores the stochastic dynamics of a simple foodweb system using a network model that mimics interacting species in a biosystem. It is shown that the system can be described by a set of ordinary differential equations with real-valued uncertain parameters, which satisfy a set of linear inequality constraints. The constraints restrict the solution space to a bounded convex polytope. We present results from numerical experiments to show how the stochasticity and uncertainty characterizing the system can be captured by sampling the interior of the polytope with a prescribed probability rule, using the Hit-and-Run algorithm. The examples illustrate a parsimonious approach to modeling complex biosystems under vague knowledge. PMID:26746217
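The Hit-and-Run step described above can be sketched in a few lines; this is a minimal pure-Python version for a polytope {x : Ax ≤ b}, illustrative only and not the authors' implementation:

```python
import random, math

def hit_and_run(A, b, x0, n_samples, seed=0):
    """Hit-and-Run sampler for the convex polytope {x : A x <= b}
    (uniform in the limit).  A is a list of constraint rows; x0 must be
    a strictly interior starting point."""
    rng = random.Random(seed)
    x = list(x0)
    dim = len(x0)
    samples = []
    for _ in range(n_samples):
        # random direction on the unit sphere
        d = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]
        # chord {x + t d} inside the polytope: intersect with each half-space
        t_lo, t_hi = -math.inf, math.inf
        for row, bi in zip(A, b):
            ad = sum(ri * di for ri, di in zip(row, d))
            slack = bi - sum(ri * xi for ri, xi in zip(row, x))
            if abs(ad) < 1e-12:
                continue
            t = slack / ad
            if ad > 0:
                t_hi = min(t_hi, t)
            else:
                t_lo = max(t_lo, t)
        # jump to a uniform point on the chord
        t = rng.uniform(t_lo, t_hi)
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(list(x))
    return samples

# Unit square 0 <= x, y <= 1 written as A x <= b
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
pts = hit_and_run(A, b, [0.5, 0.5], 1000)
```

Non-uniform probability rules, as used in the paper, replace the uniform draw along the chord with a weighted one.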
Unit Commitment Considering Generation Flexibility and Environmental Constraints
Lu, Shuai; Makarov, Yuri V.; Zhu, Yunhua; Lu, Ning; Prakash Kumar, Nirupama; Chakrabarti, Bhujanga B.
2010-07-31
This paper proposes a new framework for power system unit commitment process, which incorporates the generation flexibility requirements and environmental constraints into the existing unit commitment algorithm. The generation flexibility requirements are to address the uncertainties with large amount of intermittent resources as well as with load and traditional generators, which causes real-time balancing requirements to be variable and less predictable. The proposed flexibility requirements include capacity, ramp and ramp duration for both upward and downward balancing reserves. The environmental constraints include emission allowance for fossil fuel-based generators and ecological regulations for hydro power plants. Calculation of emission rates is formulated. Unit commitment under this new framework will be critical to the economic and reliable operation of the power grid and the minimization of its negative environmental impacts, especially when high penetration levels of intermittent resources are being approached, as required by the renewable portfolio standards in many states.
Novel hard compositions and methods of preparation
Sheinberg, H.
1981-02-03
Novel very hard compositions of matter are prepared by using in all embodiments only a minor amount of a particular carbide (or materials which can form the carbide in situ when subjected to heat and pressure); and no strategic cobalt is needed. Under a particular range of conditions, densified compositions of matter of the invention are prepared having hardnesses on the Rockwell A test substantially equal to the hardness of pure tungsten carbide and to two of the hardest commercial cobalt-bonded tungsten carbides. Alternately, other compositions of the invention which have slightly lower hardnesses than those described above in one embodiment also possess the advantage of requiring no tungsten and in another embodiment possess the advantage of having a good fracture toughness value.
Automated radiation hard ASIC design tool
NASA Technical Reports Server (NTRS)
White, Mike; Bartholet, Bill; Baze, Mark
1993-01-01
A commercial based, foundry independent, compiler design tool (ChipCrafter) with custom radiation hardened library cells is described. A unique analysis approach allows low hardness risk for Application Specific IC's (ASIC's). Accomplishments, radiation test results, and applications are described.
Financial Incentives for Staffing Hard Places.
ERIC Educational Resources Information Center
Prince, Cynthia D.
2002-01-01
Describes examples of financial incentives used to recruit teachers for low-achieving and hard-to-staff schools. Includes targeted salary increases, housing incentives, tuition assistance, and tax credits. (PKP)
Electronic Teaching: Hard Disks and Networks.
ERIC Educational Resources Information Center
Howe, Samuel F.
1984-01-01
Describes floppy-disk and hard-disk based networks, electronic systems linking microcomputers together for the purpose of sharing peripheral devices, and presents points to remember when shopping for a network. (MBR)
Density functional theory for hard polyhedra.
Marechal, Matthieu; Löwen, Hartmut
2013-03-29
Using the framework of geometry-based fundamental-measure theory, we develop a classical density functional for hard polyhedra and their mixtures and apply it to inhomogeneous fluids of Platonic solids near a hard wall. As revealed by Monte Carlo simulations, the faceted shape of the polyhedra leads to complex layering and orientational ordering near the wall, which is excellently reproduced by our theory. These effects can be verified in real-space experiments on polyhedral colloids.
Breakdown of QCD factorization in hard diffraction
NASA Astrophysics Data System (ADS)
Kopeliovich, B. Z.
2016-07-01
Factorization of short- and long-distance interactions is severely broken in hard diffractive hadronic collisions. Interaction with the spectator partons leads to an interplay between soft and hard scales, which results in a leading twist behavior of the cross section, on the contrary to the higher twist predicted by factorization. This feature is explicitly demonstrated for diffractive radiation of abelian (Drell-Yan, gauge bosons, Higgs) and non-abelian (heavy flavors) particles.
A Novel Approach to Hardness Testing
NASA Technical Reports Server (NTRS)
Spiegel, F. Xavier; West, Harvey A.
1996-01-01
This paper gives a description of the application of a simple rebound time measuring device and relates the determination of relative hardness of a variety of common engineering metals. A relation between rebound time and hardness will be sought. The effect of geometry and surface condition will also be discussed in order to acquaint the student with the problems associated with this type of method.
Laser Ablation of Dental Hard Tissue
Seka, W.; Rechmann, P.; Featherstone, J.D.B.; Fried, D.
2007-07-31
This paper discusses ablation of dental hard tissue using pulsed lasers. It focuses particularly on the relevant tissue and laser parameters and some of the basic ablation processes that are likely to occur. The importance of interstitial water and its phase transitions is discussed in some detail along with the ablation processes that may or may not directly involve water. The interplay between tissue parameters and laser parameters in the outcome of the removal of dental hard tissue is discussed in detail.
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
Collisional statistics of the hard-sphere gas.
Visco, Paolo; van Wijland, Frédéric; Trizac, Emmanuel
2008-04-01
We investigate the probability distribution functions of the free flight time and of the number of collisions in a hard-sphere gas at equilibrium. At variance with naive expectation, the latter quantity does not follow Poissonian statistics, even in the dilute limit, which is the focus of the present analysis. The corresponding deviations are addressed both numerically and analytically. In writing an equation for the generating function of the cumulants of the number of collisions, we came across a perfect mapping between our problem and a previously introduced model: the probabilistic ballistic annihilation process [Coppex, Phys. Rev. E 69, 11303 (2004)]. We exploit this analogy to construct a Monte Carlo-like algorithm able to investigate the asymptotically large time behavior of the collisional statistics within a reasonable computational time. In addition, our predictions are compared with the results of molecular dynamics simulations and the direct simulation Monte Carlo technique. An excellent agreement is reported. PMID:18517588
Unitarity constraints on trimaximal mixing
Kumar, Sanjeev
2010-07-01
When the neutrino mass eigenstate ν₂ is trimaximally mixed, the mixing matrix is called trimaximal. The middle column of the trimaximal mixing matrix is identical to tribimaximal mixing and the other two columns are subject to unitarity constraints. This corresponds to a mixing matrix with four independent parameters in the most general case. Apart from the two Majorana phases, the mixing matrix has only one free parameter in the CP conserving limit. Trimaximality results in interesting interplay between mixing angles and CP violation. A notion of maximal CP violation naturally emerges here: CP violation is maximal for maximal 2-3 mixing. Similarly, there is a natural constraint on the deviation from maximal 2-3 mixing which takes its maximal value in the CP conserving limit.
System engineering approach to GPM retrieval algorithms
Rose, C. R.; Chandrasekar, V.
2004-01-01
System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres. With the N0 and D0
LATENT DEMOGRAPHIC PROFILE ESTIMATION IN HARD-TO-REACH GROUPS
McCormick, Tyler H.; Zheng, Tian
2015-01-01
The sampling frame in most social science surveys excludes members of certain groups, known as hard-to-reach groups. These groups, or sub-populations, may be difficult to access (the homeless, e.g.), camouflaged by stigma (individuals with HIV/AIDS), or both (commercial sex workers). Even basic demographic information about these groups is typically unknown, especially in many developing nations. We present statistical models which leverage social network structure to estimate demographic characteristics of these subpopulations using aggregated relational data (ARD), or questions of the form “How many X’s do you know?” Unlike other network-based techniques for reaching these groups, ARD require no special sampling strategy and are easily incorporated into standard surveys. ARD also do not require respondents to reveal their own group membership. We propose a Bayesian hierarchical model for estimating the demographic characteristics of hard-to-reach groups, or latent demographic profiles, using ARD. We propose two estimation techniques. First, we propose a Markov-chain Monte Carlo algorithm for existing data or cases where the full posterior distribution is of interest. For cases when new data can be collected, we propose guidelines and, based on these guidelines, propose a simple estimate motivated by a missing data approach. Using data from McCarty et al. [Human Organization 60 (2001) 28–39], we estimate the age and gender profiles of six hard-to-reach groups, such as individuals who have HIV, women who were raped, and homeless persons. We also evaluate our simple estimates using simulation studies. PMID:26966475
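A much simpler relative of the hierarchical model above is the basic network scale-up estimator, which illustrates how ARD carry subpopulation-size information; this sketch is illustrative only and is not the authors' Bayesian method:

```python
def scale_up_estimate(known_counts, degrees, population_size):
    """Basic network scale-up estimator: the size of a hard-to-reach group
    is estimated as N * (total alters reported in the group) /
    (total personal network sizes of the respondents)."""
    return population_size * sum(known_counts) / sum(degrees)

# Three respondents report knowing 2, 0 and 1 members of the group, with
# personal network sizes 300, 250 and 450, in a population of one million.
est = scale_up_estimate([2, 0, 1], [300, 250, 450], 1_000_000)
```

The paper's latent-profile models refine this idea by modeling respondent-level variation and demographic structure rather than pooling all reports.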
Managing Restaurant Tables using Constraints
NASA Astrophysics Data System (ADS)
Vidotto, Alfio; Brown, Kenneth N.; Beck, J. Christopher
Restaurant table management can have significant impact on both profitability and the customer experience. The core of the issue is a complex dynamic combinatorial problem. We show how to model the problem as constraint satisfaction, with extensions which generate flexible seating plans and which maintain stability when changes occur. We describe an implemented system which provides advice to users in real time. The system is currently being evaluated in a restaurant environment.
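The core table-assignment decision can be modeled as a small constraint-satisfaction search; the backtracking sketch below is a minimal illustration (the party/table encodings are assumptions, not the implemented system, which adds flexibility and stability extensions):

```python
def assign_tables(parties, tables):
    """Backtracking CSP search: assign each party (size, start, end) to a
    table with sufficient capacity such that no two time-overlapping
    parties share a table.  Returns {party_index: table} or None."""
    assignment = {}

    def overlaps(p, q):
        # half-open time intervals [start, end)
        return p[1] < q[2] and q[1] < p[2]

    def backtrack(i):
        if i == len(parties):
            return True
        party = parties[i]
        for t, cap in tables.items():
            if cap < party[0]:
                continue  # table too small
            if any(overlaps(party, parties[j])
                   for j, tj in assignment.items() if tj == t):
                continue  # table occupied during this slot
            assignment[i] = t
            if backtrack(i + 1):
                return True
            del assignment[i]  # undo and try the next table
        return False

    return assignment if backtrack(0) else None

parties = [(2, 18, 20), (4, 19, 21), (2, 20, 22)]   # (size, start, end)
tables = {"T1": 2, "T2": 4}
plan = assign_tables(parties, tables)
```

A production system would add constraint propagation and re-solve incrementally as bookings change, keeping the new plan close to the old one.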
Macroscopic constraints on string unification
Taylor, T.R.
1989-03-01
The comparison of string theory with experiment requires a huge extrapolation from the microscopic distances, of order of the Planck length, up to the macroscopic laboratory distances. The quantum effects give rise to large corrections to the macroscopic predictions of string unification. I discuss the model-independent constraints on the gravitational sector of string theory due to the inevitable existence of universal Fradkin-Tseytlin dilatons. 9 refs.
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem that arises in many settings. The challenge of solving it grows with the complexity of preferences, the presence of real-world constraints, and increasing problem size. This study focuses on solving a chambering student-case assignment problem, a member of the project assignment class, using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. Law graduates must complete pupillage in chambers before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm for further improvement of solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem using metaheuristic techniques.
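The greedy-construction-plus-annealing pipeline described above can be sketched as follows; the move operator, cooling schedule, and latest-completion-time objective are illustrative assumptions, not the study's exact formulation:

```python
import random, math

def anneal_assignment(case_times, n_students, t0=10.0, cooling=0.995,
                      steps=5000, seed=1):
    """Greedy construction followed by simulated annealing for assigning
    cases to students, minimizing the latest completion time."""
    rng = random.Random(seed)
    # Greedy start: give each case (longest first) to the least-loaded student.
    assign, loads = {}, [0.0] * n_students
    for c, t in sorted(enumerate(case_times), key=lambda x: -x[1]):
        s = loads.index(min(loads))
        assign[c] = s
        loads[s] += t

    def cost(a):
        loads = [0.0] * n_students
        for c, s in a.items():
            loads[s] += case_times[c]
        return max(loads)

    cur_cost = best_cost = cost(assign)
    best, temp = dict(assign), t0
    for _ in range(steps):
        # Move: reassign a random case to a random student.
        c, s_new = rng.randrange(len(case_times)), rng.randrange(n_students)
        old = assign[c]
        assign[c] = s_new
        new_cost = cost(assign)
        # Metropolis acceptance: always accept improvements, sometimes accept
        # worse moves at high temperature to escape local optima.
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / temp):
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = dict(assign), new_cost
        else:
            assign[c] = old
        temp *= cooling
    return best, best_cost

best, makespan = anneal_assignment([5, 3, 8, 2, 7, 4], n_students=2)
```

For these six cases totaling 29 hours, no two-student split can finish before time 15, which the annealer attains.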
An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data
NASA Technical Reports Server (NTRS)
Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.
2003-01-01
An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
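The hysteresis constraint can be sketched as a run-length filter on the per-sample in/out-of-cloud flag series; an illustrative version follows, in which the persistence threshold `min_run` is an assumed parameter, not the paper's calibrated value:

```python
def detect_cloud_edges(in_cloud_flags, min_run=3):
    """Return indices of cloud edges in a 0/1 in-cloud flag series.
    A state change only registers once the new state has persisted for
    `min_run` consecutive samples, so isolated transient cloud puffs and
    data dropouts are not reported as boundaries."""
    edges = []
    state = in_cloud_flags[0]
    run_start, run_len = None, 0
    for i, f in enumerate(in_cloud_flags):
        if f != state:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len >= min_run:
                edges.append(run_start)   # edge at first sample of new state
                state = f
                run_len = 0
        else:
            run_len = 0   # the candidate transition did not persist
    return edges

flags = [0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
# the isolated 1 at index 3 is suppressed; entry edge at 6, exit edge at 10
```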
Updating neutrino magnetic moment constraints
NASA Astrophysics Data System (ADS)
Cañas, B. C.; Miranda, O. G.; Parada, A.; Tórtola, M.; Valle, J. W. F.
2016-02-01
In this paper we provide an updated analysis of the neutrino magnetic moments (NMMs), discussing both the constraints on the magnitudes of the three transition moments Λ_i and the role of the CP violating phases present both in the mixing matrix and in the NMM matrix. The scattering of solar neutrinos off electrons in Borexino provides the most stringent restrictions, due to its robust statistics and the low energies observed, below 1 MeV. Our new limit on the effective neutrino magnetic moment which follows from the most recent Borexino data is 3.1 × 10^-11 μ_B at 90% C.L. This corresponds to the individual transition magnetic moment constraints: |Λ_1| ≤ 5.6 × 10^-11 μ_B, |Λ_2| ≤ 4.0 × 10^-11 μ_B, and |Λ_3| ≤ 3.1 × 10^-11 μ_B (90% C.L.), irrespective of any complex phase. Indeed, the incoherent admixture of neutrino mass eigenstates present in the solar flux makes Borexino insensitive to the Majorana phases present in the NMM matrix. For this reason we also provide a global analysis including the case of reactor and accelerator neutrino sources, presenting the resulting constraints for different values of the relevant CP phases. Improved reactor and accelerator neutrino experiments will be needed in order to underpin the full profile of the neutrino electromagnetic properties.
Constraint Based Modeling Going Multicellular
Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas
2016-01-01
Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches. PMID:26904548
Infrared Constraint on Ultraviolet Theories
Tsai, Yuhsin
2012-08-01
While our current paradigm of particle physics, the Standard Model (SM), has been extremely successful at explaining experiments, it is theoretically incomplete and must be embedded into a larger framework. In this thesis, we review the main motivations for theories beyond the SM (BSM) and the ways such theories can be constrained using low energy physics. The hierarchy problem, neutrino mass and the existence of dark matter (DM) are the main reasons why the SM is incomplete. Two of the most plausible theories that may solve the hierarchy problem are the Randall-Sundrum (RS) models and supersymmetry (SUSY). RS models usually suffer from strong flavor constraints, while SUSY models produce extra degrees of freedom that need to be hidden from current experiments. To show the importance of infrared (IR) physics constraints, we discuss the flavor bounds on the anarchic RS model in both the lepton and quark sectors. For SUSY models, we discuss the difficulties in obtaining a phenomenologically allowed gaugino mass, its relation to R-symmetry breaking, and how to build a model that avoids this problem. For the neutrino mass problem, we discuss the idea of generating small neutrino masses using compositeness. By requiring successful leptogenesis and the existence of warm dark matter (WDM), we can set various constraints on the hidden composite sector. Finally, to give an example of model independent bounds from collider experiments, we show how to constrain the DM–SM particle interactions using collider results with an effective coupling description.
Isocurvature constraints on portal couplings
NASA Astrophysics Data System (ADS)
Kainulainen, Kimmo; Nurmi, Sami; Tenkanen, Tommi; Tuominen, Kimmo; Vaskonen, Ville
2016-06-01
We consider portal models which are ultraweakly coupled with the Standard Model, and confront them with observational constraints on dark matter abundance and isocurvature perturbations. We assume the hidden sector to contain a real singlet scalar s and a sterile neutrino ψ coupled to s via a pseudoscalar Yukawa term. During inflation, a primordial condensate consisting of the singlet scalar s is generated, and its contribution to the isocurvature perturbations is imprinted onto the dark matter abundance. We compute the total dark matter abundance including the contributions from condensate decay and nonthermal production from the Standard Model sector. We then use the Planck limit on isocurvature perturbations to derive a novel constraint connecting dark matter mass and the singlet self coupling with the scale of inflation: m_DM/GeV ≲ 0.2 λ_s^(3/8) (H_*/10^11 GeV)^(-3/2). This constraint is relevant in most portal models ultraweakly coupled with the Standard Model and containing light singlet scalar fields.
Steric constraints as folding coadjuvant
NASA Astrophysics Data System (ADS)
Tarragó, M. E.; Rocha, Luiz F.; Dasilva, R. A.; Caliri, A.
2003-03-01
Through the analyses of the Miyazawa-Jernigan matrix it has been shown that the hydrophobic effect generates the dominant driving force for protein folding. By using both lattice and off-lattice models, it is shown that hydrophobic-type potentials are indeed efficient in inducing the chain through nativelike configurations, but they fail to provide sufficient stability so as to keep the chain in the native state. However, through comparative Monte Carlo simulations, it is shown that hydrophobic potentials and steric constraints are two basic ingredients for the folding process. Specifically, it is shown that suitable pairwise steric constraints introduce strong changes on the configurational activity, whose main consequence is a huge increase in the overall stability condition of the native state; detailed analysis of the effects of steric constraints on the heat capacity and configurational activity are provided. The present results support the view that the folding problem of globular proteins can be approached as a process in which the mechanism to reach the native conformation and the requirements for the globule stability are uncoupled.
Optimal reactive planning with security constraints
Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.; Thorp, J.D.; Dunnett, R.M.; Schaff, G.
1995-12-31
The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.
Variable depth recursion algorithm for leaf sequencing
Siochi, R. Alfredo C.
2007-02-15
The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve a performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases, there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of the tongue-and-groove underdose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms for 1400 random 15×15 test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution.
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
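The allocation step described above can be illustrated with a much simpler stand-in for the report's semidefinite-programming formulation: a box-constrained least-squares fit solved by projected gradient descent. The actuator matrix, commanded wrench, and actuator limits below are invented for the example.

```python
# Hedged sketch, NOT the report's algorithm: minimize ||B u - w||^2 subject to
# per-actuator bounds lo <= u <= hi, via projected gradient descent.
# B maps individual actuator commands u to total force/torque; w is the
# commanded total. All numbers are illustrative.

def allocate(B, w, lo, hi, steps=2000, lr=0.05):
    """Find bounded actuator commands u minimizing ||B u - w||^2."""
    m, n = len(B), len(B[0])
    u = [0.0] * n
    for _ in range(steps):
        # residual r = B u - w
        r = [sum(B[i][j] * u[j] for j in range(n)) - w[i] for i in range(m)]
        # gradient g = 2 B^T r
        g = [2.0 * sum(B[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then project back onto the box constraints
        u = [min(hi[j], max(lo[j], u[j] - lr * g[j])) for j in range(n)]
    return u

# Two commanded quantities (a force and a torque), three actuators.
B = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
w = [2.0, 1.0]
lo, hi = [0.0, 0.0, 0.0], [1.5, 1.5, 1.5]
u = allocate(B, w, lo, hi)
```

On this toy problem an exact in-bounds solution exists, so the residual driven to (numerical) zero; with active bounds the method returns the best achievable compromise instead.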
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wanlin
2004-01-01
In this paper, we introduce JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint system with a runtime software environment and improving its applicability. We describe how JNET is applied to a real-world problem - NASA's Earth-science data processing domain, and demonstrate how JNET can be extended, without any knowledge of how it is implemented, to meet the growing demands of real-world applications.
WIDOWAC (Wing Design Optimization With Aeroelastic Constraints): Program manual
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Starnes, J. H., Jr.
1974-01-01
User and programmer documentation for the WIDOWAC programs is given. WIDOWAC may be used for the design of minimum mass wing structures subjected to flutter, strength, and minimum gage constraints. The wing structure is modeled by finite elements, flutter conditions may be both subsonic and supersonic, and mathematical programming methods are used for the optimization procedure. The user documentation gives general directions on how the programs may be used and describes their limitations; in addition, program input and output are described, and example problems are presented. A discussion of computational algorithms and flow charts of the WIDOWAC programs and major subroutines is also given.
Spatial motion constraints for robot assisted suturing using virtual fixtures.
Kapoor, Ankur; Li, Ming; Taylor, Russell H
2005-01-01
We address the stitching task in endoscopic surgery using a circular needle under robotic assistance. Our main focus is to present an algorithm for suturing using guidance virtual fixtures (VF) that assist the surgeon in moving towards a desired goal. A weighted multi-objective constrained optimization framework is used to compute the joint motions required for the tasks. We show that with the help of VF, suturing can be performed at awkward angles without multiple trials, thus avoiding damage to tissue. In this preliminary study we show the feasibility of our approach and demonstrate the promise of cooperative assistance in complex tasks such as suturing.
Developing and Studying the Methods of Hard-Facing with Heat-Resisting High-Hardness Steels
NASA Astrophysics Data System (ADS)
Malushin, N. N.; Kovalev, A. P.; Valuev, D. V.; Shats, E. A.; Borovikov, I. F.
2016-08-01
The authors develop methods for hard-facing mining-metallurgical equipment parts with heat-resisting high-hardness steels, based on plasma-jet hard-facing in a shielding-alloying nitrogen atmosphere.
Equilibrium Sampling of Hard Spheres up to the Jamming Density and Beyond
NASA Astrophysics Data System (ADS)
Berthier, Ludovic; Coslovich, Daniele; Ninarello, Andrea; Ozawa, Misaki
2016-06-01
We implement and optimize a particle-swap Monte Carlo algorithm that allows us to thermalize a polydisperse system of hard spheres up to unprecedentedly large volume fractions, where previous algorithms and experiments fail to equilibrate. We show that no glass singularity intervenes before the jamming density, which we independently determine through two distinct nonequilibrium protocols. We demonstrate that equilibrium fluid and nonequilibrium jammed states can have the same density, showing that the jamming transition cannot be the end point of the fluid branch.
Equilibrium Sampling of Hard Spheres up to the Jamming Density and Beyond.
Berthier, Ludovic; Coslovich, Daniele; Ninarello, Andrea; Ozawa, Misaki
2016-06-10
We implement and optimize a particle-swap Monte Carlo algorithm that allows us to thermalize a polydisperse system of hard spheres up to unprecedentedly large volume fractions, where previous algorithms and experiments fail to equilibrate. We show that no glass singularity intervenes before the jamming density, which we independently determine through two distinct nonequilibrium protocols. We demonstrate that equilibrium fluid and nonequilibrium jammed states can have the same density, showing that the jamming transition cannot be the end point of the fluid branch. PMID:27341260
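The central move of such a particle-swap Monte Carlo scheme can be sketched in two dimensions (the papers treat three-dimensional hard spheres; the box size, radii, and move count below are invented): alongside ordinary displacements, one attempts to exchange the radii of two randomly chosen particles, accepting only if no overlap results.

```python
import random

# Illustrative 2D sketch of the swap move only, under invented parameters;
# a real simulation would interleave swaps with ordinary displacement moves.
random.seed(1)
L = 10.0                                   # periodic box side

def dist2(x1, y1, x2, y2):
    """Squared distance under the nearest periodic image convention."""
    dx = (x1 - x2 + L / 2) % L - L / 2
    dy = (y1 - y2 + L / 2) % L - L / 2
    return dx * dx + dy * dy

disks = []                                  # each entry: [x, y, radius]
for r in [0.3, 0.4, 0.5, 0.6] * 3:          # a small polydisperse sample
    while True:                             # rejection-sample a free spot
        x, y = random.uniform(0, L), random.uniform(0, L)
        if all(dist2(x, y, d[0], d[1]) >= (r + d[2]) ** 2 for d in disks):
            disks.append([x, y, r])
            break

def no_overlap(i):
    """True if disk i overlaps no other disk."""
    x, y, r = disks[i]
    return all(dist2(x, y, d[0], d[1]) >= (r + d[2]) ** 2
               for k, d in enumerate(disks) if k != i)

def swap_move():
    """Attempt to exchange the radii of two random disks; revert on overlap."""
    i, j = random.sample(range(len(disks)), 2)
    disks[i][2], disks[j][2] = disks[j][2], disks[i][2]
    if no_overlap(i) and no_overlap(j):
        return True                                      # accept the swap
    disks[i][2], disks[j][2] = disks[j][2], disks[i][2]  # reject: swap back
    return False

accepted = sum(swap_move() for _ in range(200))
```

Swaps let small and large particles trade places without the large particle having to diffuse through the crowded fluid, which is what allows equilibration at densities where displacement-only dynamics stall.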
Learning and Parallelization Boost Constraint Search
ERIC Educational Resources Information Center
Yun, Xi
2013-01-01
Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…
Cultural and Social Constraints on Portability.
ERIC Educational Resources Information Center
Murray-Lasso, Marco
1990-01-01
Describes 12 constraints imposed by culture on educational software portability. Nielsen's seven-level virtual protocol model of human-computer interaction is discussed as a framework for considering the constraints, a hypothetical example of adapting software for Mexico is included, and suggestions for overcoming constraints and making software…
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
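The aggregation idea can be sketched as follows, under assumptions: applying a Kreisselmeier-Steinhauser (KS) envelope to the squares of the member functions yields a single scalar function that is small only where every member vanishes, so a standard minimizer can locate simultaneous roots. The example functions and the coarse grid search are illustrative, not the paper's algorithm.

```python
import math

# Hedged sketch: KS aggregate of f_i(x)^2, evaluated in a numerically
# stable log-sum-exp form. The envelope descends toward a minimum wherever
# all f_i vanish simultaneously.

def ks_envelope(fs, x, rho=50.0):
    """KS aggregate of f_i(x)^2; small only where every f_i(x) is near zero."""
    vals = [f(x) ** 2 for f in fs]
    m = max(vals)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in vals)) / rho

# Two functions sharing the root x = 2 (f1 also vanishes at x = -2, but f2
# does not, so the envelope is small only near the simultaneous root).
f1 = lambda x: x * x - 4.0
f2 = lambda x: x - 2.0

# Coarse scan for the minimizer of the envelope.
xs = [i * 0.001 for i in range(-4000, 4001)]
best = min(xs, key=lambda x: ks_envelope((f1, f2), x))
```

In practice the envelope would be handed to a gradient-based optimizer rather than a grid scan; the point of the construction is that a root-finding problem becomes an ordinary minimization.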
Interpretation Of Assembly Task Constraints From Position And Force Sensory Data
NASA Astrophysics Data System (ADS)
Hou, E. S. H.; Lee, C. S. G.
1990-03-01
One of the major deficiencies in current robot control schemes is the lack of high-level knowledge in the feedback loop. Typically, the sensory data acquired are fed back to the robot controller with minimal amount of processing. However, by accumulating useful sensory data and processing them intelligently, one can obtain invaluable information about the state of the task being performed by the robot. This paper presents a method based on the screw theory for interpreting the position and force sensory data into high-level assembly task constraints. The position data are obtained from the joint angle encoders of the manipulator and the force data are obtained from a wrist force sensor attached to the mounting plate of the manipulator end-effector. The interpretation of the sensory data is divided into two subproblems: representation problem and interpretation problem. Spatial and physical constraints based on the screw axis and force axis of the manipulator are used to represent the high-level task constraints. Algorithms which yield least-squared error results are developed to obtain the spatial and physical constraints from the position and force data. The spatial and physical constraints obtained from the sensory data are then compared with the desired spatial and physical constraints to interpret the state of the assembly task. Computer simulation and experimental results for verifying the validity of the algorithms are also presented and discussed.
Swimming constraints and arm coordination.
Seifert, Ludovic; Chollet, Didier; Rouard, Annie
2007-02-01
Following Newell's concept of constraint (1986), we sought to identify the constraints (organismic, environmental and task) on front crawl performance, focusing on arm coordination adaptations over increasing race paces. Forty-two swimmers (15 elite men, 15 mid-level men and 12 elite women) performed seven self-paced swim trials (race paces: as if competitively swimming 1500 m, 800 m, 400 m, 200 m, 100 m, 50 m, and maximal velocity, respectively) using the front crawl stroke. The paces were race simulations over 25 m to avoid fatigue effects. Swim velocity, stroke rate, stroke length, and various arm stroke phases were calculated from video analysis. Arm coordination was quantified in terms of an index of coordination (IdC) based on the lag time between the propulsive phases of each arm. This measure quantified three possible coordination modes in the front crawl: opposition (continuity between the two arm propulsions), catch-up (a time gap between the two arm propulsions) and superposition (an overlap of the two arm propulsions). With increasing race paces, swim velocity, stroke rate, and stroke length, the three groups showed a similar transition in arm coordination mode at the critical 200 m pace, which separated the long- and mid-pace pattern from the sprint pace pattern. The 200 m pace was also characterized by a stroke rate close to 40 strokes min(-1). The finding that all three groups showed a similar adaptation of arm coordination suggested that race paces, swim velocity, stroke rate and stroke length reflect task constraints that can be manipulated as control parameters, with race paces (R(2) = .28) and stroke rate (R(2) = .36) being the best predictors of IdC changes. On the other hand, only the elite men reached a velocity greater than 1.8 m s(-1) and a stroke rate of 50 strokes min(-1). They did so using superposition of the propulsion phases of the two arms, which occurred because of the great forward resistance created when these swimmers achieved high velocity, i.e., an
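The IdC measure described above reduces to simple timing arithmetic. A minimal sketch, with invented timing values: the index is the lag between the end of one arm's propulsive phase and the start of the other's, as a percentage of stroke duration; negative values indicate catch-up (a gap), values near zero opposition, and positive values superposition (an overlap).

```python
# Illustrative only; timing values are invented, and the published protocol
# averages such lags over both arms and several stroke cycles.

def idc(end_first_arm, start_second_arm, stroke_duration):
    """Overlap (+) or gap (-) between the two propulsive phases, in percent."""
    return 100.0 * (end_first_arm - start_second_arm) / stroke_duration

def coordination_mode(index, tol=1.0):
    if index < -tol:
        return "catch-up"
    if index > tol:
        return "superposition"
    return "opposition"

# Arm 1 stops propelling at t = 0.80 s; arm 2 starts at t = 0.90 s (a gap).
mode = coordination_mode(idc(0.80, 0.90, 1.50))
```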
Persistence-length renormalization of polymers in a crowded environment of hard disks.
Schöbl, S; Sturm, S; Janke, W; Kroy, K
2014-12-01
The most conspicuous property of a semiflexible polymer is its persistence length, defined as the decay length of tangent correlations along its contour. Using an efficient stochastic growth algorithm to sample polymers embedded in a quenched hard-disk fluid, we find apparent wormlike chain statistics with a renormalized persistence length. We identify a universal form of the disorder renormalization that suggests itself as a quantitative measure of molecular crowding. PMID:25526167
"Short, Hard Gamma-Ray Bursts - Mystery Solved?????"
NASA Technical Reports Server (NTRS)
Parsons, A.
2006-01-01
After over a decade of speculation about the nature of short-duration hard-spectrum gamma-ray bursts (GRBs), the recent detection of afterglow emission from a small number of short bursts has provided the first physical constraints on possible progenitor models. While the discovery of afterglow emission from long GRBs was a real breakthrough linking their origin to star forming galaxies, and hence the death of massive stars, the progenitors, energetics, and environments for short gamma-ray burst events remain elusive despite a few recent localizations. Thus far, the nature of the host galaxies measured indicates that short GRBs arise from an old (> 1 Gyr) stellar population, strengthening earlier suggestions and providing support for coalescing compact object binaries as the progenitors. On the other hand, some of the short burst afterglow observations cannot be easily explained in the coalescence scenario. These observations raise the possibility that short GRBs may have different or multiple progenitor systems. The study of the short-hard GRB afterglows has been made possible by the Swift Gamma-ray Burst Explorer, launched in November of 2004. Swift is equipped with a coded aperture gamma-ray telescope that can observe up to 2 steradians of the sky and can compute the position of a gamma-ray burst to within 2-3 arcmin in less than 10 seconds. The Swift spacecraft can slew on to this burst position without human intervention, allowing its on-board X-ray and optical telescopes to study the afterglow within 2 minutes of the original GRB trigger. More Swift short burst detections and afterglow measurements are needed before we can declare that the mystery of short gamma-ray bursts is solved.
Deducing Electron Properties from Hard X-Ray Observations
NASA Technical Reports Server (NTRS)
Kontar, E. P.; Brown, J. C.; Emslie, A. G.; Hajdas, W.; Holman, G. D.; Hurford, G. J.; Kasparova, J.; Mallik, P. C. V.; Massone, A. M.; McConnell, M. L.; Piana, M.; Prato, M.; Schmahl, E. J.; Suarez-Garcia, E.
2011-01-01
X-radiation from energetic electrons is the prime diagnostic of flare-accelerated electrons. The observed X-ray flux (and polarization state) is fundamentally a convolution of the cross-section for the hard X-ray emission process(es) in question with the electron distribution function, which is in turn a function of energy, direction, spatial location and time. To address the problems of particle propagation and acceleration one needs to infer as much information as possible on this electron distribution function, through a deconvolution of this fundamental relationship. This review presents recent progress toward this goal using spectroscopic, imaging and polarization measurements, primarily from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). Previous conclusions regarding the energy, angular (pitch angle) and spatial distributions of energetic electrons in solar flares are critically reviewed. We discuss the role and the observational evidence of several radiation processes: free-free electron-ion, free-free electron-electron, free-bound electron-ion, photoelectric absorption and Compton backscatter (albedo), using both spectroscopic and imaging techniques. This unprecedented quality of data allows for the first time inference of the angular distributions of the X-ray-emitting electrons and improved model-independent inference of electron energy spectra and emission measures of thermal plasma. Moreover, imaging spectroscopy has revealed hitherto unknown details of solar flare morphology and detailed spectroscopy of coronal, footpoint and extended sources in flaring regions. Additional attempts to measure hard X-ray polarization were not sufficient to put constraints on the degree of anisotropy of electrons, but point to the importance of obtaining good quality polarization data in the future.
Deducing Electron Properties from Hard X-ray Observations
NASA Astrophysics Data System (ADS)
Kontar, E. P.; Brown, J. C.; Emslie, A. G.; Hajdas, W.; Holman, G. D.; Hurford, G. J.; Kašparová, J.; Mallik, P. C. V.; Massone, A. M.; McConnell, M. L.; Piana, M.; Prato, M.; Schmahl, E. J.; Suarez-Garcia, E.
2011-09-01
X-radiation from energetic electrons is the prime diagnostic of flare-accelerated electrons. The observed X-ray flux (and polarization state) is fundamentally a convolution of the cross-section for the hard X-ray emission process(es) in question with the electron distribution function, which is in turn a function of energy, direction, spatial location and time. To address the problems of particle propagation and acceleration one needs to infer as much information as possible on this electron distribution function, through a deconvolution of this fundamental relationship. This review presents recent progress toward this goal using spectroscopic, imaging and polarization measurements, primarily from the Reuven Ramaty High Energy Solar Spectroscopic Imager ( RHESSI). Previous conclusions regarding the energy, angular (pitch angle) and spatial distributions of energetic electrons in solar flares are critically reviewed. We discuss the role and the observational evidence of several radiation processes: free-free electron-ion, free-free electron-electron, free-bound electron-ion, photoelectric absorption and Compton backscatter (albedo), using both spectroscopic and imaging techniques. This unprecedented quality of data allows for the first time inference of the angular distributions of the X-ray-emitting electrons and improved model-independent inference of electron energy spectra and emission measures of thermal plasma. Moreover, imaging spectroscopy has revealed hitherto unknown details of solar flare morphology and detailed spectroscopy of coronal, footpoint and extended sources in flaring regions. Additional attempts to measure hard X-ray polarization were not sufficient to put constraints on the degree of anisotropy of electrons, but point to the importance of obtaining good quality polarization data in the future.
Yeguas, Enrique; Joan-Arinyo, Robert; Luzón, María Victoria
2011-01-01
The availability of a model to measure the performance of evolutionary algorithms is very important, especially when these algorithms are applied to solve problems with high computational requirements. That model would compute an index of the quality of the solution reached by the algorithm as a function of run-time. Conversely, if we fix an index of quality for the solution, the model would give the number of iterations to be expected. In this work, we develop a statistical model to describe the performance of PBIL and CHC evolutionary algorithms applied to solve the root identification problem. This problem is basic in constraint-based, geometric parametric modeling, as an instance of general constraint-satisfaction problems. The performance model is empirically validated over a benchmark with very large search spaces.
Hydro-thermal Commitment Scheduling by Tabu Search Method with Cooling-Banking Constraints
NASA Astrophysics Data System (ADS)
Nayak, Nimain Charan; Rajan, C. Christober Asir
This paper presents a new approach for solving the Unit Commitment Problem (UCP) in a hydro-thermal power system. Unit commitment is a nonlinear optimization problem: determine the minimum-cost turn-on/off schedule of the generating units in a power system while satisfying both the forecasted load demand and various operating constraints of the generating units. The effectiveness of the proposed hybrid algorithm is demonstrated by numerical results comparing the generation costs and computation times obtained with the Tabu Search algorithm against other methods, such as Evolutionary Programming and Dynamic Programming, in reaching a proper unit commitment.
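A generic tabu-search skeleton can be sketched on a deliberately tiny single-period commitment toy; the paper's hydro-thermal model with cooling-banking constraints is far richer, and all capacities, costs, and the demand figure here are invented.

```python
# Toy on/off commitment: pick units whose total capacity meets demand at
# minimum cost. Tabu search explores single-unit flips, forbidding recent
# flips for a "tenure" of iterations, with an aspiration override when a
# tabu move would beat the best solution found so far.

capacity = [50, 30, 20, 10]
cost     = [40, 35, 60, 80]        # flat per-unit commitment cost (toy)
demand = 70

def objective(state):
    cap = sum(c for c, on in zip(capacity, state) if on)
    if cap < demand:
        return 10000 + (demand - cap)     # infeasibility penalty
    return sum(k for k, on in zip(cost, state) if on)

def tabu_search(iters=50, tenure=3):
    state = [1, 1, 1, 1]
    best, best_cost = state[:], objective(state)
    tabu = {}
    for it in range(iters):
        moves = []
        for i in range(len(state)):
            nb = state[:]
            nb[i] ^= 1                    # flip unit i on/off
            c = objective(nb)
            # keep non-tabu moves, plus tabu moves that beat the incumbent
            if tabu.get(i, -1) < it or c < best_cost:
                moves.append((c, i, nb))
        c, i, state = min(moves)
        tabu[i] = it + tenure             # forbid re-flipping i for a while
        if c < best_cost:
            best, best_cost = state[:], c
    return best, best_cost

best, best_cost = tabu_search()
```

On this toy instance the optimum is committing the 50- and 30-unit generators, which the search reaches within a few iterations; the tabu list is what keeps it from cycling back afterwards.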
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Novel hard compositions and methods of preparation
Sheinberg, H.
1983-08-23
Novel very hard compositions of matter are prepared by using in all embodiments only a minor amount of a particular carbide (or materials which can form the carbide in situ when subjected to heat and pressure); and no strategic cobalt is needed. Under a particular range of conditions, densified compositions of matter of the invention are prepared having hardnesses on the Rockwell A test substantially equal to the hardness of pure tungsten carbide and to two of the hardest commercial cobalt-bonded tungsten carbides. Alternately, other compositions of the invention which have slightly lower hardnesses than those described above in one embodiment also possess the advantage of requiring no tungsten and in another embodiment possess the advantage of having a good fracture toughness value. Photomicrographs show that the shapes of the grains of the alloy mixture with which the minor amount of carbide (or carbide-formers) is mixed are radically altered from large, rounded to small, very angular by the addition of the carbide. Superiority of one of these hard compositions of matter over cobalt-bonded tungsten carbide for ultra-high pressure anvil applications was demonstrated. 3 figs.
Novel hard compositions and methods of preparation
Sheinberg, Haskell
1983-08-23
Novel very hard compositions of matter are prepared by using in all embodiments only a minor amount of a particular carbide (or materials which can form the carbide in situ when subjected to heat and pressure); and no strategic cobalt is needed. Under a particular range of conditions, densified compositions of matter of the invention are prepared having hardnesses on the Rockwell A test substantially equal to the hardness of pure tungsten carbide and to two of the hardest commercial cobalt-bonded tungsten carbides. Alternately, other compositions of the invention which have slightly lower hardnesses than those described above in one embodiment also possess the advantage of requiring no tungsten and in another embodiment possess the advantage of having a good fracture toughness value. Photomicrographs show that the shapes of the grains of the alloy mixture with which the minor amount of carbide (or carbide-formers) is mixed are radically altered from large, rounded to small, very angular by the addition of the carbide. Superiority of one of these hard compositions of matter over cobalt-bonded tungsten carbide for ultra-high pressure anvil applications was demonstrated.
Efficient Controls for Finitely Convergent Sequential Algorithms
Chen, Wei; Herman, Gabor T.
2010-01-01
Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
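The cyclic control that ART3-style algorithms use can be sketched as repeated passes over a set of linear inequality constraints, projecting onto the half-space of each violated constraint. This is the classical relaxation-projection idea, not the paper's accelerated variant; the constraints below are invented.

```python
# Feasibility by cyclic projection: given constraints a_i . x <= b_i,
# sweep through them in order and, whenever one is violated, orthogonally
# project x onto that half-space. Stop once a full pass finds no violation.

def cyclic_feasibility(A, b, x, max_cycles=1000, tol=1e-9):
    for _ in range(max_cycles):
        violated = False
        for a, bi in zip(A, b):            # one cyclic pass over constraints
            s = sum(ai * xi for ai, xi in zip(a, x)) - bi
            if s > tol:                    # violated: project onto a.x <= bi
                nrm2 = sum(ai * ai for ai in a)
                x = [xi - s * ai / nrm2 for ai, xi in zip(a, x)]
                violated = True
        if not violated:
            return x                       # every constraint satisfied
    return None                            # gave up (or infeasible system)

# Feasibility problem x >= 1, y >= 1, x + y <= 3, written as a_i . x <= b_i.
A = [[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]
b = [-1.0, -1.0, 3.0]
x = cyclic_feasibility(A, b, [0.0, 0.0])
```

The transformation studied in the paper replaces this fixed cyclic order with a data-driven control while preserving the finite-convergence guarantee.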
Optical mechanical analogy and nonlinear nonholonomic constraints
NASA Astrophysics Data System (ADS)
Bloch, Anthony M.; Rojo, Alberto G.
2016-02-01
In this paper we establish a connection between particle trajectories subject to a nonholonomic constraint and light ray trajectories in a variable index of refraction. In particular, we extend the analysis of systems with linear nonholonomic constraints to the dynamics of particles in a potential subject to nonlinear velocity constraints. We contrast the long time behavior of particles subject to a constant kinetic energy constraint (a thermostat) to particles with the constraint of parallel velocities. We show that, while in the former case the velocities of each particle equalize in the limit, in the latter case all the kinetic energies of each particle remain the same.
An algorithm for constrained one-step inversion of spectral CT data.
Foygel Barber, Rina; Sidky, Emil Y; Gilat Schmidt, Taly; Pan, Xiaochuan
2016-05-21
We develop a primal-dual algorithm that allows for one-step inversion of spectral CT transmission photon counts data to a basis map decomposition. The algorithm allows for image constraints to be enforced on the basis maps during the inversion. The derivation of the algorithm makes use of a local upper bounding quadratic approximation to generate descent steps for non-convex spectral CT data discrepancy terms, combined with a new convex-concave optimization algorithm. Convergence of the algorithm is demonstrated on simulated spectral CT data. Simulations with noise and anthropomorphic phantoms show examples of how to employ the constrained one-step algorithm for spectral CT data.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.
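One common way to combine a GA with precedence constraints, sketched here under assumed details (durations, precedence graph, and GA settings are all invented), is to evolve job priorities and let a decoder dispatch only jobs whose predecessors are already complete.

```python
import random

# Random-key-style GA for precedence-constrained sequencing on one machine.
# A chromosome assigns each job a priority; decoding always yields a
# precedence-feasible order, so crossover and mutation can never produce an
# illegal schedule. Fitness is total flowtime (sum of completion times).

random.seed(0)
durations = {"a": 3, "b": 2, "c": 4, "d": 1}
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
jobs = list(durations)

def decode(priority):
    """Dispatch the highest-priority job whose predecessors are done."""
    done, order = set(), []
    while len(order) < len(jobs):
        ready = [j for j in jobs
                 if j not in done and all(p in done for p in preds[j])]
        j = max(ready, key=lambda r: priority[r])
        order.append(j)
        done.add(j)
    return order

def flowtime(order):
    """Sum of completion times; lower is better."""
    t = total = 0
    for j in order:
        t += durations[j]
        total += t
    return total

def ga(pop_size=30, gens=40):
    pop = [{j: random.random() for j in jobs} for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: flowtime(decode(p)))
        survivors = pop[:pop_size // 2]               # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            m, f = random.sample(survivors, 2)
            child = {j: random.choice((m[j], f[j])) for j in jobs}  # crossover
            k = random.choice(jobs)
            child[k] = random.random()                              # mutation
            children.append(child)
        pop = survivors + children
    return decode(min(pop, key=lambda p: flowtime(decode(p))))

best = ga()
```

Because feasibility lives entirely in the decoder, the genetic operators stay simple, which is one standard answer to the precedence-handling question raised above.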
"We Can Get Everything We Want if We Try Hard": Young People, Celebrity, Hard Work
ERIC Educational Resources Information Center
Mendick, Heather; Allen, Kim; Harvey, Laura
2015-01-01
Drawing on 24 group interviews on celebrity with 148 students aged 14-17 across six schools, we show that "hard work" is valued by young people in England. We argue that we should not simply celebrate this investment in hard work. While it opens up successful subjectivities to previously excluded groups, it reproduces neoliberal…
Research in the Hard Sciences, and in Very Hard "Softer" Domains
ERIC Educational Resources Information Center
Phillips, D. C.
2014-01-01
The author of this commentary argues that physical scientists are attempting to advance knowledge in the so-called hard sciences, whereas education researchers are laboring to increase knowledge and understanding in an "extremely hard" but softer domain. Drawing on the work of Popper and Dewey, this commentary highlights the relative…
Hard Water and Soft Soap: Dependence of Soap Performance on Water Hardness
ERIC Educational Resources Information Center
Osorio, Viktoria K. L.; de Oliveira, Wanda; El Seoud, Omar A.; Cotton, Wyatt; Easdon, Jerry
2005-01-01
A demonstration of how soap performance in different aqueous solutions depends on water hardness and soap formulation is described. The demonstrations use safe, inexpensive reagents and simple glassware and equipment, introduce important everyday topics, stimulate the students to consider the wider consequences of water hardness and…
NASA Technical Reports Server (NTRS)
Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.
1993-01-01
Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
Trajectory constraints in qualitative simulation
Brajnik, G.; Clancy, D.J.
1996-12-31
We present a method for specifying temporal constraints on trajectories of dynamical systems and enforcing them during qualitative simulation. This capability can be used to focus a simulation, simulate non-autonomous and piecewise-continuous systems, reason about boundary condition problems and incorporate observations into the simulation. The method has been implemented in TeQSIM, a qualitative simulator that combines the expressive power of qualitative differential equations with temporal logic. It interleaves temporal logic model checking with the simulation to constrain and refine the resulting predicted behaviors and to inject discontinuous changes into the simulation.
QPO Constraints on Neutron Stars
NASA Technical Reports Server (NTRS)
Miller, M. Coleman
2005-01-01
The kilohertz frequencies of QPOs from accreting neutron star systems imply that they are generated in regions of strong gravity, close to the star. This suggests that observations of the QPOs can be used to constrain the properties of neutron stars themselves, and in particular to inform us about the properties of cold matter beyond nuclear densities. Here we discuss some relatively model-insensitive constraints that emerge from the kilohertz QPOs, as well as recent developments that may hint at phenomena related to unstable circular orbits outside neutron stars.
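One of the model-insensitive constraints of the kind described above comes from the innermost stable circular orbit (ISCO): if a kilohertz QPO frequency is an orbital frequency, it cannot exceed the ISCO frequency, which caps the stellar mass. A minimal sketch under that assumption (Schwarzschild ISCO; the simplification and numbers are ours, not from the abstract):

```python
# Upper bound on neutron-star mass from an observed kHz QPO frequency,
# assuming the QPO is an orbital frequency and using the Schwarzschild
# ISCO relation f_ISCO = c^3 / (2*pi * 6^(3/2) * G * M). Illustrative only.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def isco_frequency_hz(mass_solar: float) -> float:
    """Orbital frequency at the Schwarzschild ISCO for a given mass."""
    m = mass_solar * M_SUN
    return C**3 / (2 * math.pi * 6**1.5 * G * m)

def max_mass_solar(qpo_hz: float) -> float:
    """Largest mass (in solar masses) consistent with an orbital QPO at qpo_hz."""
    return C**3 / (2 * math.pi * 6**1.5 * G * M_SUN * qpo_hz)

# A 1.2 kHz QPO caps the mass near 1.8 solar masses:
print(round(max_mass_solar(1200.0), 2))
```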
New algorithms for the "minimal form" problem
Oliveira, J.S.; Cook, G.O., Jr.; Purtill, M.R. (Center for Communications Research)
1991-12-20
It is widely appreciated that large-scale algebraic computation (performing computer algebra operations on large symbolic expressions) places very significant demands upon existing computer algebra systems. Because of this, parallel versions of many important algorithms have been successfully sought, and clever techniques have been found for improving the speed of the algebraic simplification process. In addition, some attention has been given to the issue of restructuring large expressions, or transforming them into "minimal forms." By "minimal form," we mean that form of an expression that involves a minimum number of operations in the sense that no simple transformation on the expression leads to a form involving fewer operations. Unfortunately, the progress that has been achieved to date on this very hard problem is not adequate for the very significant demands of large computer algebra problems. In response to this situation, we have developed some efficient algorithms for constructing "minimal forms." In this paper, the multi-stage algorithm in which these new algorithms operate is defined and the features of these algorithms are developed. In a companion paper, we introduce the core algebra engine of a new tool that provides the algebraic framework required for the implementation of these new algorithms.
Solano-Altamirano, J M; Goldman, Saul
2015-12-01
We determined the total system elastic Helmholtz free energy, under the constraints of constant temperature and volume, for systems comprised of one or more perfectly bonded hard spherical inclusions (i.e. "hard spheres") embedded in a finite spherical elastic solid. Dirichlet boundary conditions were applied both at the surface(s) of the hard spheres, and at the outer surface of the elastic solid. The boundary conditions at the surface of the spheres were used to describe the rigid displacements of the spheres, relative to their initial location(s) in the unstressed initial state. These displacements, together with the initial positions, provided the final shape of the strained elastic solid. The boundary conditions at the outer surface of the elastic medium were used to ensure constancy of the system volume. We determined the strain and stress tensors numerically, using a method that combines the Neuber-Papkovich spherical harmonic decomposition, the Schwartz alternating method, and least-squares fitting for determining the spherical harmonic expansion coefficients. The total system elastic Helmholtz free energy was determined by numerically integrating the elastic Helmholtz free energy density over the volume of the elastic solid, by quadrature, by a Monte Carlo method, or both. Depending on the initial position of the hard sphere(s) (or equivalently, the shape of the un-deformed stress-free elastic solid), and the displacements, either stationary or non-stationary Helmholtz free energy minima were found. The non-stationary minima, which involved the hard spheres nearly in contact with one another, corresponded to lower Helmholtz free energies than did the stationary minima, for which the hard spheres were further away from one another. PMID:26701708
Potential Health Impacts of Hard Water
Sengupta, Pallav
2013-01-01
In the past five decades or so evidence has been accumulating about an environmental factor, which appears to be influencing mortality, in particular, cardiovascular mortality, and this is the hardness of the drinking water. In addition, several epidemiological investigations have demonstrated the relation between risk for cardiovascular disease, growth retardation, reproductive failure, and other health problems and hardness of drinking water or its content of magnesium and calcium. In addition, the acidity of the water influences the reabsorption of calcium and magnesium in the renal tubule. Not only, calcium and magnesium, but other constituents also affect different health aspects. Thus, the present review attempts to explore the health effects of hard water and its constituents. PMID:24049611
Erosion testing of hard materials and coatings
Hawk, Jeffrey A.
2005-04-29
Erosion is the process by which unconstrained particles, usually hard, impact a surface, creating damage that leads to material removal and component failure. These particles are usually very small and entrained in fluid of some type, typically air. The damage that occurs as a result of erosion depends on the size of the particles, their physical characteristics, the velocity of the particle/fluid stream, and their angle of impact on the surface of interest. This talk will discuss the basics of jet erosion testing of hard materials, composites and coatings. The standard test methods will be discussed as well as alternative approaches to determining the erosion rate of materials. The damage that occurs will be characterized in general terms, and examples will be presented for the erosion behavior of hard materials and coatings (both thick and thin).
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
Computational search for rare-earth free hard-magnetic materials
NASA Astrophysics Data System (ADS)
Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team
2015-03-01
It is difficult to overstate the importance of hard magnets for modern life; they enter every walk of our lives, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances, from household devices to computers. All the known hard magnets in use today contain rare-earth elements, the extraction of which is expensive and environmentally harmful. Rare-earths are also instrumental in tipping the balance of the world economy, as most of them are mined in a few specific parts of the world. Hence it would be ideal to have a material with the characteristics of a hard magnet but without, or at least with a reduced amount of, rare-earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities that define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms), which are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some potential newly discovered rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the "ExMaMa" Project (329386).
Genetic Algorithm Approaches for Actuator Placement
NASA Technical Reports Server (NTRS)
Crossley, William A.
2000-01-01
This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft au program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work conducted for this research used a geometrically simple wing model; however, an increasing number of potential actuator placement locations were incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.
Density equalizing map projections: A new algorithm
Merrill, D.W.; Selvin, S.; Mohr, M.S.
1992-02-01
In the study of geographic disease clusters, an alternative to traditional methods based on rates is to analyze case locations on a transformed map in which population density is everywhere equal. Although the analyst's task is thereby simplified, the specification of the density equalizing map projection (DEMP) itself is not simple and continues to be the subject of considerable research. Here a new DEMP algorithm is described, which avoids some of the difficulties of earlier approaches. The new algorithm (a) avoids illegal overlapping of transformed polygons; (b) finds the unique solution that minimizes map distortion; (c) provides constant magnification over each map polygon; (d) defines a continuous transformation over the entire map domain; (e) defines an inverse transformation; (f) can accept optional constraints such as fixed boundaries; and (g) can use commercially supported minimization software. Work is continuing to improve computing efficiency and improve the algorithm.
Equation of state for fluid mixtures of hard spheres and heteronuclear hard dumbbells
NASA Astrophysics Data System (ADS)
Barrio, C.; Solana, J. R.
1999-09-01
A theoretically founded equation of state is developed for mixtures of hard spheres with heteronuclear hard dumbbells. It is based on a model previously developed for hard-convex-body fluid mixtures, and further extended to fluid mixtures of homonuclear hard dumbbells. The equation scales the excess compressibility factor for an equivalent hard-sphere fluid mixture to obtain that corresponding to the true mixture. The equivalent mixture is one in which the averaged volume of a sphere is the same as the effective molecular volume of a molecule in the real mixture. Thus, the theory requires two parameters, namely the averaged effective molecular volume of the molecules in the mixture and the scaling factor, which is the effective nonsphericity parameter. Expressions to determine these parameters are derived in terms of the geometrical characteristics of the molecules that form the mixture. The overall results are in closer agreement with simulation data than those obtained with other theories developed for these kinds of mixtures.
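The scaling idea described above starts from the compressibility factor of a hard-sphere reference fluid. As a hedged illustration, here is the standard Carnahan-Starling equation for pure hard spheres with a toy nonsphericity scaling of the excess part; this is not the mixture equation developed in the paper:

```python
# Carnahan-Starling compressibility factor Z = PV/(NkT) for a pure
# hard-sphere fluid as a function of packing fraction eta. A nonsphericity
# scaling like the one described above multiplies the excess part (Z - 1)
# by an effective shape parameter; the parameter here is illustrative.
def z_carnahan_starling(eta: float) -> float:
    return (1 + eta + eta**2 - eta**3) / (1 - eta)**3

def z_scaled(eta: float, alpha: float) -> float:
    """Toy scaled EOS: ideal part plus alpha times the hard-sphere excess."""
    return 1.0 + alpha * (z_carnahan_starling(eta) - 1.0)

print(round(z_carnahan_starling(0.3), 3))  # moderately dense fluid
```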
Smedskjaer, Morten M.; Bauchy, Mathieu; Mauro, John C.; Rzoska, Sylwester J.; Bockowski, Michal
2015-10-28
The properties of glass are determined not only by temperature, pressure, and composition, but also by their complete thermal and pressure histories. Here, we show that glasses of identical composition produced through thermal annealing and through quenching from elevated pressure can result in samples with identical density and mean interatomic distances, yet different bond angle distributions, medium-range structures, and, thus, macroscopic properties. We demonstrate that hardness is higher when the density increase is obtained through thermal annealing rather than through pressure-quenching. Molecular dynamics simulations reveal that this arises because pressure-quenching has a larger effect on medium-range order, while annealing has a larger effect on short-range structures (sharper bond angle distribution), which ultimately determine hardness according to bond constraint theory. Our work could open a new avenue towards industrially useful glasses that are identical in terms of composition and density, but with differences in thermodynamic, mechanical, and rheological properties due to unique structural characteristics.
Imaging the sun in hard X-rays - Spatial and rotating modulation collimators
NASA Technical Reports Server (NTRS)
Campbell, Jonathan W.; Davis, John M.; Emslie, A. G.
1991-01-01
Several approaches to imaging hard X-rays emitted from solar flares have been proposed or are planned for the nineties, including the spatial modulation collimator (SMC) and the rotating modulation collimator (RMC). A survey of current solar flare theoretical literature indicates the desirability of spatial resolutions down to 1 arcsecond, fields of view greater than the full solar disk (i.e., 32 arcminutes), and temporal resolutions down to 1 second. Although the sun typically provides relatively high flux levels, the requirement for 1 second temporal resolution raises the question as to the viability of Fourier telescopes subject to the aforementioned constraints. A basic photon counting, Monte Carlo 'end-to-end' model telescope was employed using the Astronomical Image Processing System (AIPS) for image reconstruction. The resulting solar flare hard X-ray images compared against typical observations indicated that both telescopes show promise for the future.
Photon-splitting limits to the hardness of emission in strongly magnetized soft gamma repeaters
NASA Technical Reports Server (NTRS)
Baring, Matthew G.
1995-01-01
Soft gamma repeaters are characterized by recurrent activity consisting of short-duration outbursts of high-energy emission, typically with temperatures less than 40 keV. One recent model of repeaters is that they originate in the environs of neutron stars with superstrong magnetic fields, perhaps greater than 10(exp 14) G. In such fields, the exotic process of magnetic photon splitting (gamma -> gamma gamma) acts very effectively to reprocess gamma-ray radiation down to hard X-ray energies. In this Letter, the action of photon splitting is considered in some detail, via the solution of photon kinetic equations, determining how it limits the hardness of emission in strongly magnetized repeaters and thereby obtaining observational constraints on the field in SGR 1806-20.
Constraint optimized weight adaptation for Gaussian mixture reduction
NASA Astrophysics Data System (ADS)
Chen, H. D.; Chang, K. C.; Smith, Chris
2010-04-01
Gaussian mixture model (GMM) has been used in many applications for dynamic state estimation such as target tracking or distributed fusion. However, the number of components in the mixture distribution tends to grow rapidly when multiple GMMs are combined. In order to keep the computational complexity bounded, it is necessary to approximate a Gaussian mixture by one with a reduced number of components. Gaussian mixture reduction is traditionally conducted by recursively selecting two components that appear to be most similar to each other and merging them. Different definitions of similarity measure have been used in the literature. For the case of one-dimensional Gaussian mixtures, K-means algorithms and some variations have recently been proposed to cluster Gaussian mixture components in groups, use a center component to represent all in each group, readjust parameters in the center components, and finally perform weight optimization. In this paper, we focus on multi-dimensional Gaussian mixture models. With a variety of reduction algorithms and possible combinations, we developed a hybrid algorithm with constraint optimized weight adaptation to minimize the integrated squared error (ISE). In addition, with extensive simulations, we showed that the proposed algorithm provides efficient and effective Gaussian mixture reduction performance in various random scenarios.
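The merge step underlying such reduction schemes is moment matching: two weighted Gaussian components are replaced by one with the same zeroth, first, and second moments. A minimal one-dimensional sketch (the paper works with multi-dimensional mixtures and ISE-optimized weights; this simplification is ours):

```python
# Moment-preserving merge of two 1-D Gaussian mixture components
# (weight, mean, variance). The merged component matches the total
# weight, mean, and variance of the pair -- the basic operation in
# pairwise mixture-reduction algorithms.
def merge_components(w1, m1, v1, w2, m2, v2):
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # Second moment of the pair, minus the square of the new mean.
    v = (w1 * (v1 + m1**2) + w2 * (v2 + m2**2)) / w - m**2
    return w, m, v

# Merging two equal-weight unit-variance components at 0 and 2:
w, m, v = merge_components(0.5, 0.0, 1.0, 0.5, 2.0, 1.0)
print(w, m, v)  # weight 1.0, mean 1.0, variance 2.0
```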
Novel Aspects of Hard Diffraction in QCD
Brodsky, Stanley J.; /SLAC
2005-12-14
Initial- and final-state interactions from gluon-exchange, normally neglected in the parton model have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, and nuclear shadowing and antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency.
UV curable hard coatings on polyesters
NASA Astrophysics Data System (ADS)
Datashvili, Tea; Brostow, Witold; Kao, David
2006-10-01
UV curable, hard and transparent hybrid inorganic-organic coatings with covalent links between the inorganic and the organic networks were prepared using organically crosslinked heteropolysiloxanes based on the sol-gel process. The materials were applied onto polyester sheets and UV cured. The deposition was followed by a thermal treatment to improve mechanical properties of the coatings. High light transmission and the resulting thermophysical properties indicate the presence of a nanoscale hybrid composition. The coatings show excellent adhesion to polyesters even without using primers. Further mechanical characterization shows that the coatings provide high hardness and good abrasion resistance.
Radiation-hard static induction transistor
Hanes, M.H.; Bartko, J.; Hwang, J.M.; Rai-Choudhury, P.; Leslie, S.G.
1988-12-01
The static induction transistor (SIT) has been proposed as a preferred power switching device for applications in military and space environments because of its potential for radiation hardness, high-frequency operation, and the incorporation of on-chip smart power sensor and logic functions. Design, fabrication, and characteristics of a 350 V, 100 A buried gate SIT are described. The potential radiation hardness of this class of devices was evaluated by measurement of SIT characteristics after irradiation with 100 Mrad electrons (2 MeV), and up to 10^16 fission neutrons/cm^2. High-temperature operation and the possibility of radiation damage self-annealing are discussed.
[Necrotizing sialometaplasia of the hard palate].
Topstad, T K; Olofsson, J; Myking, A
1991-11-30
Necrotizing sialometaplasia is a benign, self-healing disease of salivary gland tissue and is usually confined to the minor salivary glands of the hard palate. It has clinical and histological features that simulate malignancies such as mucoepidermoid and squamous cell carcinomas. Wrong diagnosis has led to unnecessary mutilating surgical procedures. The etiology of the disease is unknown, but an ischaemic process is considered most likely. We describe two patients with necrotizing sialometaplasia, one with midline and one with bilateral symmetrical affection of the hard palate.
NASA Technical Reports Server (NTRS)
Sarkar, Nilanjan; Yun, Xiaoping; Kumar, Vijay
1994-01-01
There are many examples of mechanical systems that require rolling contacts between two or more rigid bodies. Rolling contacts engender nonholonomic constraints in an otherwise holonomic system. In this article, we develop a unified approach to the control of mechanical systems subject to both holonomic and nonholonomic constraints. We first present a state space realization of a constrained system. We then discuss the input-output linearization and zero dynamics of the system. This approach is applied to the dynamic control of mobile robots. Two types of control algorithms for mobile robots are investigated: trajectory tracking and path following. In each case, a smooth nonlinear feedback is obtained to achieve asymptotic input-output stability and Lagrange stability of the overall system. Simulation results are presented to demonstrate the effectiveness of the control algorithms and to compare the performance of trajectory-tracking and path-following algorithms.
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
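For reference, the MAP hidden-state sequence that the k-segment recursions generalize is the output of the standard Viterbi algorithm. A compact sketch (the two-state coin model and its probabilities are invented for illustration, not taken from the article):

```python
# Viterbi algorithm: most-probable hidden state path of an HMM given an
# observation sequence, via dynamic programming over log-probabilities.
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    # best[t][s] = log-prob of the best path ending in state s at time t
    best = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
             for s in states}]
    back = []
    for t in range(1, len(obs)):
        col, ptr = {}, {}
        for s in states:
            prev = max(states,
                       key=lambda r: best[-1][r] + math.log(trans_p[r][s]))
            col[s] = (best[-1][prev] + math.log(trans_p[prev][s])
                      + math.log(emit_p[s][obs[t]]))
            ptr[s] = prev
        best.append(col)
        back.append(ptr)
    # Trace back from the best final state.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

states = ("Fair", "Loaded")
start = {"Fair": 0.5, "Loaded": 0.5}
trans = {"Fair": {"Fair": 0.9, "Loaded": 0.1},
         "Loaded": {"Fair": 0.1, "Loaded": 0.9}}
emit = {"Fair": {"H": 0.5, "T": 0.5},
        "Loaded": {"H": 0.9, "T": 0.1}}
print(viterbi("HHHHT", states, start, trans, emit))
```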
NASA Astrophysics Data System (ADS)
Matsui, Shouichi; Watanabe, Isamu; Tokoro, Ken-Ichi
A new genetic algorithm is proposed for solving job-shop scheduling problems where the total number of search points is limited. The objective of the problem is to minimize the makespan. The solution is represented by an operation sequence, i.e., a permutation of operations. The proposed algorithm is based on the framework of the parameter-free genetic algorithm. It encodes a permutation using random keys into a chromosome. A schedule is derived from a permutation using a hybrid scheduling (HS), and the parameter of HS is also encoded in a chromosome. Experiments using benchmark problems show that the proposed algorithm outperforms the previously proposed algorithms, genetic algorithm by Shi et al. and the improved local search by Nakano et al., for large-scale problems under the constraint of limited number of search points.
Learning in stochastic neural networks for constraint satisfaction problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Adorf, Hans-Martin
1989-01-01
Researchers describe a newly-developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) which includes a learning component that can significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to find satisfactory solutions when they exist. The network can run efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that network connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. They have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves the network performance.
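The N-queens instances mentioned above are a standard benchmark for such stochastic repair methods. A sketch using the related min-conflicts heuristic (not the GDS network itself; the connection to the abstract is illustrative only):

```python
# Min-conflicts local search for N-queens: start from a random complete
# assignment (one queen per row) and repeatedly move a conflicted queen
# to the column that minimizes its conflicts. Related in spirit to
# stochastic constraint networks, though not the GDS architecture.
import random

def conflicts(cols, row):
    """Number of queens attacking the queen in the given row."""
    c = cols[row]
    return sum(1 for r, cc in enumerate(cols)
               if r != row and (cc == c or abs(cc - c) == abs(r - row)))

def min_conflicts(n, max_steps=20000, seed=0):
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        bad = [r for r in range(n) if conflicts(cols, r) > 0]
        if not bad:
            return cols  # a complete, conflict-free placement
        row = rng.choice(bad)
        # Move the queen to a minimally conflicting column (ties at random).
        scores = []
        for c in range(n):
            cols[row] = c
            scores.append(conflicts(cols, row))
        best = min(scores)
        cols[row] = rng.choice([c for c, s in enumerate(scores) if s == best])
    return None

print(min_conflicts(8))
```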
On the optimization of discrete structures with aeroelastic constraints
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.; Ashley, H.
1978-01-01
The paper deals with the problem of dynamic structural optimization where constraints relating to flutter of a wing (or other dynamic aeroelastic performance) are imposed along with conditions of a more conventional nature such as those relating to stress under load, deflection, minimum dimensions of structural elements, etc. The discussion is limited to a flutter problem for a linear system with a finite number of degrees of freedom and a single constraint involving aeroelastic stability, and the structure motion is assumed to be a simple harmonic time function. Three search schemes are applied to the minimum-weight redesign of a particular wing: the first scheme relies on the method of feasible directions, while the other two are derived from necessary conditions for a local optimum so that they can be referred to as optimality-criteria schemes. The results suggest that a heuristic redesign algorithm involving an optimality criterion may be best suited for treating multiple constraints with large numbers of design variables.
Hydroeconomic optimization of reservoir management under downstream water quality constraints
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Holm, Peter E.; Trapp, Stefan; Rosbjerg, Dan; Bauer-Gottwein, Peter
2015-10-01
A hydroeconomic optimization approach is used to guide water management in a Chinese river basin with the objectives of meeting water quantity and water quality constraints, in line with the China 2011 No. 1 Policy Document and 2015 Ten-point Water Plan. The proposed modeling framework couples water quantity and water quality management and minimizes the total costs over a planning period assuming stochastic future runoff. The outcome includes cost-optimal reservoir releases, groundwater pumping, water allocation, wastewater treatments and water curtailments. The optimization model uses a variant of stochastic dynamic programming known as the water value method. Nonlinearity arising from the water quality constraints is handled with an effective hybrid method combining genetic algorithms and linear programming. Untreated pollutant loads are represented by biochemical oxygen demand (BOD), and the resulting minimum dissolved oxygen (DO) concentration is computed with the Streeter-Phelps equation and constrained to match Chinese water quality targets. The baseline water scarcity and operational costs are estimated to 15.6 billion CNY/year. Compliance to water quality grade III causes a relatively low increase to 16.4 billion CNY/year. Dilution plays an important role and increases the share of surface water allocations to users situated furthest downstream in the system. The modeling framework generates decision rules that result in the economically efficient strategy for complying with both water quantity and water quality constraints.
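The water quality constraint above hinges on the Streeter-Phelps oxygen-sag relation, which gives the dissolved-oxygen deficit downstream of a BOD load. A minimal sketch of the deficit equation (the rate and load values are invented for illustration, not taken from the study):

```python
# Streeter-Phelps dissolved-oxygen sag: the DO deficit D(t) following a
# BOD load L0, with deoxygenation rate kd, reaeration rate ka, and
# initial deficit D0 (t in days, rates in 1/day).
import math

def do_deficit(t, L0=20.0, D0=1.0, kd=0.3, ka=0.5):
    return ((kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t))
            + D0 * math.exp(-ka * t))

def dissolved_oxygen(t, do_sat=9.0, **kw):
    return do_sat - do_deficit(t, **kw)

# The deficit grows to a critical sag and then recovers as reaeration wins:
for t in (0.0, 1.0, 2.4, 6.0):
    print(t, round(dissolved_oxygen(t), 2))
```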
A convex minimization approach to data association with prior constraints
NASA Astrophysics Data System (ADS)
Chen, Huimin; Kirubarajan, Thiagalingam
2004-08-01
In this paper we propose a new formulation for reliably solving the measurement-to-track association problem with a priori constraints. Those constraints are incorporated into the scalar objective function in a general formula. This is a key step in most target tracking problems when one has to handle the measurement origin uncertainty. Our methodology is able to formulate the measurement-to-track correspondence problem with most of the commonly used assumptions and considers target feature measurements and possibly unresolved measurements as well. The resulting constrained optimization problem deals with the whole combinatorial space of possible feature selections and measurement-to-track correspondences. To find the global optimal solution, we build a convex objective function and relax the integer constraint. The special structure of this extended problem assures its equivalence to the original one, but it can be solved optimally by efficient algorithms to avoid the combinatorial search. This approach works for any cost function with continuous second derivatives. We use a track formation example and a multisensor tracking scenario to illustrate the effectiveness of the convex programming approach.
The Reduced RUM as a Logit Model: Parameterization and Constraints.
Chiu, Chia-Yi; Köhn, Hans-Friedrich
2016-06-01
Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov chain Monte Carlo (MCMC) or expectation-maximization (EM) algorithms are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, the parameterization and constraints have been worked out. However, for models involving more than two attributes, they are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided.
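For context, the Reduced RUM item response function that the logit reparameterization must reproduce is standardly written as

```latex
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
  = \pi_j^{*} \prod_{k=1}^{K} \bigl(r_{jk}^{*}\bigr)^{\,q_{jk}\,(1-\alpha_{ik})},
\qquad 0 < \pi_j^{*} \le 1,\quad 0 < r_{jk}^{*} < 1,
```

where $\pi_j^{*}$ is the probability that an examinee who has mastered all attributes required by item $j$ (as indicated by the Q-matrix entries $q_{jk}$) answers it correctly, and each $r_{jk}^{*}$ is the multiplicative penalty incurred for lacking required attribute $k$. The constraint $0 < r_{jk}^{*} < 1$ is what makes the induced logit coefficients non-arbitrary, which is the source of the parameter constraints the article derives.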
Immune allied genetic algorithm for Bayesian network structure learning
NASA Astrophysics Data System (ADS)
Song, Qin; Lin, Feng; Sun, Wei; Chang, KC
2012-06-01
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach that enhances the efficiency of BN structure learning. To avoid the premature convergence of a traditional single-population genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which a multiple-population allied strategy is introduced. Moreover, the algorithm applies prior knowledge by injecting an immune operator into individuals, which effectively prevents degeneration. To illustrate the effectiveness of the proposed technique, we present some experimental results.
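A minimal sketch of the multiple-population idea on a toy bitstring fitness; the paper's BN-specific encoding, scoring function, and immune operator are omitted, and all names and parameters here are illustrative:

```python
import random

def evolve_multi(populations, fitness, generations=50, migrate_every=10):
    """Toy multi-population GA with periodic migration (the 'allied' exchange).

    populations: list of lists of equal-length bitstrings;
    fitness: callable scoring a bitstring (stands in for a BN score).
    """
    def step(pop):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]             # elitist truncation selection
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))        # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(len(child))         # single-bit mutation
            child = child[:i] + str(1 - int(child[i])) + child[i + 1:]
            children.append(child)
        return survivors + children

    for g in range(generations):
        populations = [step(p) for p in populations]
        if (g + 1) % migrate_every == 0:             # share each deme's champion
            bests = [max(p, key=fitness) for p in populations]
            for i, p in enumerate(populations):
                p[-1] = bests[(i + 1) % len(populations)]
    return max((max(p, key=fitness) for p in populations), key=fitness)
```

Because each deme evolves independently between migrations, the populations explore different regions of the search space, which is the mechanism the abstract credits with avoiding premature convergence.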
Developing Constraint-based Recommenders
NASA Astrophysics Data System (ADS)
Felfernig, Alexander; Friedrich, Gerhard; Jannach, Dietmar; Zanker, Markus
Traditional recommendation approaches (content-based filtering [48] and collaborative filtering [40]) are well suited to recommending quality-and-taste products such as books, movies, or news. However, for products such as cars, computers, apartments, or financial services these approaches are not the best choice (see also Chapter 11). For example, apartments are not bought very frequently, which makes it rather infeasible to collect numerous ratings for one specific item (exactly such ratings are required by collaborative recommendation algorithms). Furthermore, users of recommender applications would not be satisfied with recommendations based on years-old item preferences (exactly such preferences would be exploited in this context by content-based filtering algorithms).
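The contrast with rating-based methods can be sketched in a few lines: a constraint-based recommender filters a catalog against explicit user requirements rather than learned preferences. The catalog and requirements below are hypothetical:

```python
# A minimal constraint-based recommendation step over a (hypothetical)
# apartment catalog: no ratings history is needed, only item attributes
# and the user's stated constraints.
CATALOG = [
    {"id": "a1", "rooms": 2, "rent": 900,  "balcony": True},
    {"id": "a2", "rooms": 3, "rent": 1400, "balcony": False},
    {"id": "a3", "rooms": 3, "rent": 1100, "balcony": True},
]

def recommend(catalog, requirements):
    """Return the items satisfying every user constraint (a conjunctive query)."""
    return [item for item in catalog
            if all(check(item) for check in requirements)]

requirements = [
    lambda it: it["rooms"] >= 3,     # at least three rooms
    lambda it: it["rent"] <= 1200,   # budget limit
]
print([it["id"] for it in recommend(CATALOG, requirements)])  # only 'a3' qualifies
```

Real constraint-based recommenders add explanation and repair of over-constrained queries on top of this filtering core, but the one-shot, ratings-free nature of the match is already visible here.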
Numerical Optimization Algorithms and Software for Systems Biology
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
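A minimal illustration of the kind of constraint involved: flux-balance problems impose the steady-state condition S v = 0 on a stoichiometric matrix S. For the toy linear pathway below this pins all fluxes equal, so the maximal biomass flux is simply the tightest capacity bound; a realistic model would hand S to an LP solver instead. The network and bounds are hypothetical:

```python
# Toy flux-balance setup. Rows of S are metabolites, columns are reactions:
# R1: uptake -> A,  R2: A -> B,  R3: B -> biomass.
S = [
    [1, -1,  0],   # net production of metabolite A
    [0,  1, -1],   # net production of metabolite B
]
upper = [10.0, 6.0, 8.0]   # flux capacity bounds for R1..R3

def max_biomass(S, upper):
    """For this linear chain, S v = 0 forces v1 = v2 = v3, so the optimum
    is the minimum capacity along the chain (no solver needed here)."""
    return min(upper)

def is_steady(S, v, tol=1e-9):
    """Check the steady-state constraint S @ v = 0."""
    return all(abs(sum(row[j] * v[j] for j in range(len(v)))) < tol
               for row in S)
```

At the optimum v = (6, 6, 6) every metabolite balance closes; perturbing any single flux breaks steady state, which is why large stoichiometric matrices make these optimization problems numerically delicate.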
A Fast Retrieving Algorithm of Hierarchical Relationships Using Trie Structures.
ERIC Educational Resources Information Center
Koyama, Masafumi; Morita, Kazuhiro; Fuketa, Masao; Aoe, Jun-Ichi
1998-01-01
Presents a faster method for determining hierarchical relationships in information retrieval by using trie structures instead of a linear storage of a concept code. Highlights include case structures, a knowledge representation for natural-language understanding with semantic constraints; a compression algorithm of tries; and evaluation.…
A Sensitive Secondary Users Selection Algorithm for Cognitive Radio Ad Hoc Networks
Li, Aohan; Han, Guangjie; Wan, Liangtian; Shu, Lei
2016-01-01
Secondary Users (SUs) are allowed to use the temporarily unused licensed spectrum without disturbing Primary Users (PUs) in Cognitive Radio Ad Hoc Networks (CRAHNs). Existing architectures for CRAHNs impose energy-consuming Cognitive Radios (CRs) on SUs. However, the advanced CRs increase the energy cost of their cognitive functionalities, which is undesirable for battery-powered devices. A new architecture referred to as spectral Requirement-based CRAHN (RCRAHN) is proposed in this paper to enhance the energy efficiency of CRAHNs. In RCRAHNs, only some SUs are equipped with CRs; SUs equipped with CRs are referred to as Cognitive Radio Users (CRUs). To further enhance energy efficiency, we aim to select a minimum number of CRUs to sense the available spectrum. A nonlinear programming problem is mathematically formulated under energy-efficiency and real-time constraints. Given the NP-hardness of the problem, a heuristic algorithm referred to as Sensitive Secondary Users Selection (SSUS) is designed to compute near-optimal solutions. The simulation results demonstrate that SSUS not only improves energy efficiency but also achieves satisfactory performance in end-to-end delay and communication reliability. PMID:27023562
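Selecting a minimum set of sensing-capable users resembles set cover, which is NP-hard and motivates a heuristic. A greedy sketch on a hypothetical topology follows; SSUS itself uses different selection criteria, including energy and delay, so this only illustrates the combinatorial shape of the problem:

```python
# Candidate CRU -> set of SUs within its sensing range (hypothetical topology).
COVERAGE = {
    "c1": {"s1", "s2"},
    "c2": {"s2", "s3", "s4"},
    "c3": {"s4", "s5"},
    "c4": {"s1", "s5"},
}

def greedy_cru_selection(coverage, demand):
    """Classic greedy set cover: repeatedly pick the candidate covering the
    most still-uncovered SUs until every SU in `demand` is served."""
    uncovered, chosen = set(demand), []
    while uncovered:
        best = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("demand cannot be covered by any candidate")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen
```

On this instance the greedy pass picks c2 (covering three SUs) and then c4, covering all five SUs with two CRUs; greedy set cover carries a logarithmic approximation guarantee, which is the usual justification for heuristics of this family.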
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that must be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of simultaneously minimizing system unbalance and maximizing throughput, while satisfying system constraints related to available machining time and tool slots, using a meta-hybrid heuristic technique based on genetic algorithm and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
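As an illustration of one of the two objectives, a common definition of system unbalance sums the over- and under-utilized time across machines; this is a generic formulation for illustration, not necessarily the paper's exact model:

```python
def system_unbalance(capacity, load):
    """Sum of idle plus overloaded minutes across machines, given each
    machine's available capacity and its assigned load (same units)."""
    return sum(abs(c - l) for c, l in zip(capacity, load))

# Two machines with 480 min available each; one is under-loaded by 30 min
# and the other over-loaded by 20 min, for a total unbalance of 50 min.
print(system_unbalance([480, 480], [450, 500]))
```

A machine-loading heuristic would search over job subsets and operation assignments to drive this quantity toward zero while throughput is pushed up.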
Ototraumatic effects of hard rock music.
Reddell, R C; Lebo, C P
1972-01-01
Temporary and permanent shifts in auditory thresholds were found in 43 hard rock musicians, and temporary shifts were also observed in some listeners. The threshold shifts involved all of the conventional pure-tone test frequencies. Custom-fitted polyvinyl chloride ear protectors were found to be effective in preventing these noise-induced hearing losses.
Sustaining Transformation: "Resiliency in Hard Times"
ERIC Educational Resources Information Center
Guarasci, Richard; Lieberman, Devorah
2009-01-01
The strategic, systemic, and encompassing evolution of a college or university spans a number of years, and the vagaries of economic cycles inevitably catch transforming institutions in mid-voyage. "Sustaining Transformation: Resiliency in Hard Times" presents a study of Wagner College as it moves into its second decade of purposeful institutional…
Hard Times: Philosophy and the Fundamentalist Imagination
ERIC Educational Resources Information Center
Allsup, Randall Everett
2005-01-01
A close reading of Gradgrind's opening monologue in Hard Times by Charles Dickens provides the starting point for an examination of the role and place of philosophy in the music curriculum. The Gradgrind philosophy finds easy parallel to current thinking in American education. In the fundamentalist imagination, sources of ambiguity must be…
Registration of 'Advance' Hard Red Spring Wheat
Technology Transfer Automated Retrieval System (TEKTRAN)
Grower and end-user acceptance of new hard red spring wheat (HRSW; Triticum aestivum L.) cultivars is largely contingent on satisfactory agronomic performance, end-use quality potential, and disease resistance levels. Additional characteristics, such as desirable plant height, can also help to maxi...
Hard thermal loops in static external fields
Frenkel, J.; Takahashi, N.; Pereira, S. H.
2009-04-15
We examine, in the imaginary-time formalism, the high temperature behavior of n-point thermal loops in static Yang-Mills and gravitational fields. We show that in this regime, any hard thermal loop gives the same leading contribution as the one obtained by evaluating the loop integral at zero external energies and momenta.