Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
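The sizing step described above — finding the largest hyper-sphere of parameter perturbations on which an inequality requirement holds — can be sketched as a bisection on the radius, with the worst case over each sphere approximated by sampled directions. The quadratic constraint and nominal point below are illustrative assumptions, not taken from the paper:

```python
import math, random

def worst_case(g, p0, r, n_dirs=400, seed=0):
    """Approximate the max of g over the sphere of radius r around p0 by sampling directions."""
    rng = random.Random(seed)
    worst = -math.inf
    for _ in range(n_dirs):
        d = [rng.gauss(0, 1) for _ in p0]
        norm = math.sqrt(sum(di * di for di in d))
        p = [pi + r * di / norm for pi, di in zip(p0, d)]
        worst = max(worst, g(p))
    return worst

def critical_radius(g, p0, r_hi=10.0, tol=1e-4):
    """Bisect on r for the largest sphere on which g(p) <= 0 everywhere (sampled)."""
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case(g, p0, mid) <= 0.0:
            lo = mid        # requirement holds on this sphere; try a larger one
        else:
            hi = mid
    return lo

# Hypothetical requirement g(p) = p1^2 + p2^2 - 4 <= 0 with nominal p0 = (0, 0);
# the true critical radius is 2.
r_star = critical_radius(lambda p: p[0] ** 2 + p[1] ** 2 - 4.0, [0.0, 0.0])
print(round(r_star, 2))  # ≈ 2.0
```

Comparing `r_star` against the radius of the actual uncertainty model then gives the analytically motivated robustness verdict the abstract describes.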
A constraint algorithm for singular Lagrangians subjected to nonholonomic constraints
de Leon, M.; de Diego, D.M.
1997-06-01
We construct a constraint algorithm for singular Lagrangian systems subjected to nonholonomic constraints which generalizes that of Dirac for constrained Hamiltonian systems. © 1997 American Institute of Physics.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is proposed.
Learning With Mixed Hard/Soft Pointwise Constraints.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-09-01
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
Iterative restoration algorithms for nonlinear constraint computing
NASA Astrophysics Data System (ADS)
Szu, Harold
A general iterative-restoration principle is introduced to facilitate the implementation of nonlinear optical processors. The von Neumann convergence theorem is generalized to include nonorthogonal subspaces which can be reduced to a special orthogonal projection operator by applying an orthogonality condition. This principle is shown to permit derivation of the Jacobi algorithm, the recursive principle, the van Cittert (1931) deconvolution method, the iteration schemes of Gerchberg (1974) and Papoulis (1975), and iteration schemes using two Fourier conjugate domains (e.g., Fienup, 1981). Applications to restoring the image of a double star and division by hard and soft zeros are discussed, and sample results are presented graphically.
Gemperline, Paul J; Cash, Eric
2003-08-15
A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared (NIR) spectroscopy. Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
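The hard-versus-soft distinction can be illustrated on a one-parameter least-squares fit: a hard nonnegativity constraint clips the slope to zero, while a quadratic penalty (a soft constraint, loosely in the spirit of the penalty functions above — not the paper's ALS algorithm) only shrinks it. The data values are illustrative assumptions:

```python
def fit_slope(xs, ys, lam=0.0):
    """Least-squares slope with an optional soft nonnegativity penalty:
    minimize sum((y - a*x)^2) + lam * min(a, 0)^2."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    a = sxy / sxx                    # unconstrained least-squares solution
    if a < 0 and lam > 0:
        a = sxy / (sxx + lam)        # closed-form penalized solution (a < 0 branch)
    return a

xs = [1.0, 2.0, 3.0]
ys = [-0.5, -1.1, -1.4]              # data pulling the slope negative
a_free = fit_slope(xs, ys)           # violates nonnegativity
a_soft = fit_slope(xs, ys, lam=1e6)  # soft constraint: shrunk toward 0, not clipped
a_hard = max(a_free, 0.0)            # hard constraint: exactly 0
```

The soft solution retains a small negative residual slope rather than being forced to the boundary, which is the mechanism the abstract credits for fewer noise-induced distortions.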
Protein threading with profiles and distance constraints using clique based algorithms.
Dukka, Bahadur K C; Tomita, Etsuji; Suzuki, Jun'ichi; Horimoto, Katsuhisa; Akutsu, Tatsuya
2006-02-01
With the advent of experimental technologies like chemical cross-linking, it has become possible to obtain distances between specific residues of a newly sequenced protein. These experiments are usually less time consuming than X-ray crystallography or NMR. Consequently, it is highly desirable to develop a method that incorporates this distance information to improve the performance of protein threading methods. However, protein threading with profiles in which constraints on distances between residues are given is known to be NP-hard. By using a maximum edge-weight clique finding algorithm, we introduce a more efficient method called FTHREAD for profile threading with distance constraints that is 18 times faster than its predecessor CLIQUETHREAD. Moreover, we also present a novel practical algorithm NTHREAD for profile threading with non-strict constraints. The overall performance of FTHREAD on a data set shows that although our algorithm uses a simple threading function, it performs as well as some of the existing methods. In particular, when there are some unsatisfied constraints, NTHREAD (the non-strict constraints threading algorithm) performs better than FTHREAD (the strict constraints threading algorithm). We have also analyzed the effects of using a number of distance constraints. This algorithm helps enhance the alignment quality between the query sequence and the template structure, once the corresponding template structure is determined for the target sequence.
Measuring Constraint-Set Utility for Partitional Clustering Algorithms
NASA Technical Reports Server (NTRS)
Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato
2006-01-01
Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.
Rigidity transition in materials: hardness is driven by weak atomic constraints.
Bauchy, Mathieu; Qomi, Mohammad Javad Abdolhosseini; Bichara, Christophe; Ulm, Franz-Josef; Pellenq, Roland J-M
2015-03-27
Understanding the composition dependence of hardness in materials is of primary importance for infrastructure and handheld devices. Stimulated by the need for stronger protective screens, topological constraint theory has recently been used to predict the hardness of glasses. Herein, we report that the concept of rigidity transition can be extended to a broader range of materials than just glasses. We show that hardness depends linearly on the number of angular constraints, which, compared to radial interactions, constitute the weaker interactions acting between the atoms. This leads to a predictive model for hardness, generally applicable to any crystalline or glassy material.
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints that exist in real applications are less studied, especially when one task is involved with several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent during decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing TLBO and VNS.
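The random-keys decoding mentioned above is simple to state: a continuous key vector is sorted, and the resulting index order is read as a task permutation. A minimal sketch:

```python
def random_keys_to_permutation(keys):
    """Decode a vector of continuous 'random keys' into a task permutation:
    task indices are ordered by their key values."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

# The key values are illustrative; any continuous optimizer (here, TLBO) can
# evolve such vectors while the decoder guarantees a valid permutation.
perm = random_keys_to_permutation([0.42, 0.05, 0.77, 0.31])
print(perm)  # [1, 3, 0, 2]
```

This is what lets a continuous optimizer like TLBO search a discrete permutation space without repair operators for duplicated tasks.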
A synthetic dataset for evaluating soft and hard fusion algorithms
NASA Astrophysics Data System (ADS)
Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey
2011-06-01
There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.
On the Convergence of Iterative Receiver Algorithms Utilizing Hard Decisions
NASA Astrophysics Data System (ADS)
Rößler, Jürgen F.; Gerstacker, Wolfgang H.
2010-12-01
The convergence of receivers performing iterative hard decision interference cancellation (IHDIC) is analyzed in a general framework for ASK, PSK, and QAM constellations. We first give an overview of IHDIC algorithms known from the literature applied to linear modulation and DS-CDMA-based transmission systems and show the relation to Hopfield neural network theory. It is proven analytically that IHDIC with a serial update scheme always converges to a stable state in the estimated values in the course of the iterations and that IHDIC with a parallel update scheme converges to cycles of length 2. Additionally, we visualize the convergence behavior with the aid of convergence charts. In doing so, we give insight into possible errors occurring in IHDIC, which turn out to be caused by locked error situations. The derived results can be applied directly to those iterative soft decision interference cancellation (ISDIC) receivers whose soft decision functions approach hard decision functions in the course of the iterations.
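A minimal sketch of serial-update IHDIC for BPSK (two-level ASK) shows the stable-state behavior: decisions are updated one symbol at a time until a full sweep changes nothing. The correlation matrix and symbol values below are illustrative assumptions:

```python
def ihdic_serial(r, H, x0, max_iters=50):
    """Iterative hard decision interference cancellation, serial update:
    each decision subtracts the interference implied by the current estimates."""
    x = list(x0)
    for _ in range(max_iters):
        changed = False
        for k in range(len(x)):
            z = r[k] - sum(H[k][j] * x[j] for j in range(len(x)) if j != k)
            d = 1 if z >= 0 else -1          # hard decision
            if d != x[k]:
                x[k] = d
                changed = True
        if not changed:                      # stable state reached
            break
    return x

# Toy noiseless system r = H x_true with a symmetric correlation matrix.
H = [[1.0, 0.2, 0.1],
     [0.2, 1.0, 0.2],
     [0.1, 0.2, 1.0]]
x_true = [1, -1, 1]
r = [sum(H[k][j] * x_true[j] for j in range(3)) for k in range(3)]
x_hat = ihdic_serial(r, H, [1, 1, 1])        # start from a wrong decision vector
print(x_hat)  # [1, -1, 1]
```

With mild off-diagonal interference the serial sweep settles in a single pass here; the paper's result is that such a stable state is always reached for serial updates.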
An algorithm for optimal structural design with frequency constraints
NASA Technical Reports Server (NTRS)
Kiusalaas, J.; Shaw, R. C. J.
1978-01-01
The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) only a small number of design iterations is needed to reach an optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness relations may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm works for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.
Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.
Friedrich, Tobias; Neumann, Frank
2015-01-01
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called the (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a (1/(k + δ))-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
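A (1+1) EA for a monotone submodular function under a uniform cardinality constraint can be sketched as follows: flip each bit independently with probability 1/n and accept the offspring if it is feasible and not worse. The coverage objective and the reject-infeasible rule below are illustrative choices, not the paper's exact setup:

```python
import random

def coverage(sets, bits):
    """Monotone submodular objective: size of the union of the selected sets."""
    covered = set()
    for b, s in zip(bits, sets):
        if b:
            covered |= s
    return len(covered)

def one_plus_one_ea(sets, k, iters=2000, seed=1):
    """(1+1) EA: flip each bit with prob 1/n; reject infeasible or worse offspring."""
    rng = random.Random(seed)
    n = len(sets)
    x = [0] * n                              # the empty selection is feasible
    fx = coverage(sets, x)
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) <= k:                      # uniform cardinality constraint
            fy = coverage(sets, y)
            if fy >= fx:                     # accept ties to allow plateau walks
                x, fx = y, fy
    return x, fx

sets = [{1, 2}, {2, 3}, {4, 5}, {5, 6}, {1, 6}]
x, fx = one_plus_one_ea(sets, k=2)
```

The runtime results quoted above bound how long such a scheme needs, in expectation, before `fx` is within the stated factor of the constrained optimum.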
Heinstein, M.W.
1997-10-01
A contact enforcement algorithm has been developed for matrix-free quasistatic finite element techniques. Matrix-free (iterative) solution algorithms such as nonlinear Conjugate Gradients (CG) and Dynamic Relaxation (DR) are distinctive in that the number of iterations required for convergence is typically of the same order as the number of degrees of freedom of the model. From iteration to iteration the contact normal and tangential forces vary significantly making contact constraint satisfaction tenuous. Furthermore, global determination and enforcement of the contact constraints every iteration could be questioned on the grounds of efficiency. This work addresses this situation by introducing an intermediate iteration for treating the active gap constraint and at the same time exactly (kinematically) enforcing the linearized gap rate constraint for both frictionless and frictional response.
Leaf Sequencing Algorithm Based on MLC Shape Constraint
NASA Astrophysics Data System (ADS)
Jing, Jia; Pei, Xi; Wang, Dong; Cao, Ruifen; Lin, Hui
2012-06-01
Intensity modulated radiation therapy (IMRT) requires the determination of the appropriate multileaf collimator settings to deliver an intensity map. The purpose of this work was to attempt to regulate the shape between adjacent multileaf collimator apertures by a leaf sequencing algorithm. To qualify and validate this algorithm, the integral test for the segment of the multileaf collimator of ARTS was performed with clinical intensity map experiments. By comparisons and analyses of the total number of monitor units and number of segments with benchmark results, the proposed algorithm performed well while the segment shape constraint produced segments with more compact shapes when delivering the planned intensity maps, which may help to reduce the multileaf collimator's specific effects.
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
A multiagent evolutionary algorithm for constraint satisfaction problems.
Liu, Jing; Zhong, Weicai; Jiao, Licheng
2006-02-01
With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of general encoding methods, minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms, and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queens problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs in n for n-queens problems is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queens problems, MAEA-CSPs finds solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
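For the n-queens case mentioned above, the classic min-conflicts heuristic (a standard baseline for permutation-style CSPs, not MAEA-CSPs itself) repeatedly moves a conflicted queen to the row that minimizes its conflicts:

```python
import random

def conflicts(queens, col, row):
    """Number of queens attacking square (col, row), ignoring the queen in col."""
    return sum(1 for c, r in enumerate(queens)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts(n=8, max_steps=1000, seed=0):
    """Min-conflicts heuristic for n-queens (one queen per column), with restarts."""
    rng = random.Random(seed)
    while True:
        queens = [rng.randrange(n) for _ in range(n)]
        for _ in range(max_steps):
            bad = [c for c in range(n) if conflicts(queens, c, queens[c]) > 0]
            if not bad:
                return queens                # every queen is conflict-free
            col = rng.choice(bad)
            # move the queen to a row minimizing its conflicts (ties broken randomly)
            best = min(conflicts(queens, col, r) for r in range(n))
            queens[col] = rng.choice([r for r in range(n)
                                      if conflicts(queens, col, r) == best])

solution = min_conflicts(8)
```

Heuristics of this repair-based flavor are what agent behaviors in evolutionary CSP solvers are typically compared against.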
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility is caused by the capability of machines to offer more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementation in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform chromosomes into Gantt charts. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm has shown a 25.6% reduction in tardiness, equal to 43.5 hours.
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
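Of the three algorithms, leaf removal is the simplest to sketch for minimum vertex cover: repeatedly pick a degree-1 vertex, put its neighbor into the cover, and delete both; the analysis above concerns what happens when a leafless core remains. A minimal version on a small path graph (an illustrative instance, not one of the paper's random ensembles):

```python
def leaf_removal_cover(edges, n):
    """Leaf removal for vertex cover: while some vertex has degree 1, put its
    neighbor into the cover and delete both vertices from the graph."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    while True:
        leaves = [v for v in adj if len(adj[v]) == 1]
        if not leaves:              # no leaves left: empty graph or a 2-core remains
            break
        leaf = min(leaves)
        (nbr,) = adj[leaf]          # the unique neighbor of the leaf
        cover.add(nbr)
        for w in list(adj[nbr]):    # delete nbr (and leaf) with all incident edges
            adj[w].discard(nbr)
        del adj[nbr]
        del adj[leaf]
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]    # a 5-vertex path: optimum cover size is 2
cover = leaf_removal_cover(edges, 5)
```

On graphs where leaf removal empties the whole graph, the cover it returns is optimal; the threshold behavior discussed above concerns the emergence of a leftover core at high average degree.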
NASA Astrophysics Data System (ADS)
Hung, Shih-Yu
2009-01-01
In this paper, Ni-Co/nano-Al2O3 composite electroforming was used to make the metallic micro-mold for a microlens array. The microstructures require higher hardness to improve the wear resistance and lifetime. Nano-Al2O3 was applied to strengthen the Ni-Co matrix by a new micro-electroforming technique. The hardness and internal stress of Ni-Co/nano-Al2O3 composite deposit were investigated. The results showed that the hardness increased with the increasing Al2O3 content, but at the cost of deformation. Increasing the Al2O3 content in the composite was not always beneficial to the electroformed mold for microlens array fabrication. This work will concentrate on the relationship between important mechanical properties and electrolyte parameters of Ni-Co/nano-Al2O3 composite electroforming. Electrolyte parameters such as Al2O3 content, Al2O3 particle diameter, Co content, stress reducer and current density will be examined with respect to internal stress and hardness. In the present study, low stress and high hardness electroforming with the constraint of low surface roughness is carried out using SNAOA algorithm to reduce internal stress and increase service life of micro-mold during the forming process. The results show that the internal stress and the RMS roughness are only 0.54 MPa and 4.8 nm, respectively, for the optimal electrolyte parameters combination of SNAOA design.
NASA Astrophysics Data System (ADS)
Virrueta, A.; Gaines, J.; O'Hern, C. S.; Regan, L.
2015-03-01
Current research in the O'Hern and Regan laboratories focuses on the development of hard-sphere models with stereochemical constraints for protein structure prediction as an alternative to molecular dynamics methods that utilize knowledge-based corrections in their force-fields. Beginning with simple hydrophobic dipeptides like valine, leucine, and isoleucine, we have shown that our model is able to reproduce the side-chain dihedral angle distributions derived from sets of high-resolution protein crystal structures. However, methionine remains an exception - our model yields a chi-3 side-chain dihedral angle distribution that is relatively uniform from 60 to 300 degrees, while the observed distribution displays peaks at 60, 180, and 300 degrees. Our goal is to resolve this discrepancy by considering clashes with neighboring residues, and averaging the reduced distribution of allowable methionine structures taken from a set of crystallized proteins. We will also re-evaluate the electron density maps from which these protein structures are derived to ensure that the methionines and their local environments are correctly modeled. This work will ultimately serve as a tool for computing side-chain entropy and protein stability. A. V. is supported by an NSF Graduate Research Fellowship and a Ford Foundation Fellowship. J. G. is supported by NIH training Grant NIH-5T15LM007056-28.
Event-chain Monte Carlo algorithms for hard-sphere systems.
Bernard, Etienne P; Krauth, Werner; Wilson, David B
2009-11-01
In this paper we present the event-chain algorithms, which are fast Markov-chain Monte Carlo methods for hard spheres and related systems. In a single move of these rejection-free methods, an arbitrarily long chain of particles is displaced, and long-range coherent motion can be induced. Numerical simulations show that event-chain algorithms clearly outperform the conventional Metropolis method. Irreversible versions of the algorithms, which violate detailed balance, improve the speed of the method even further. We also compare our method with a recent implementation of the molecular-dynamics algorithm.
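The chain move is easiest to see in one dimension: a total displacement budget travels to the right, and whenever the moving rod reaches contact with its successor, the remainder of the budget is transferred ("lifted") to that rod. The ring length, rod diameter, and displacement below are illustrative assumptions:

```python
import random

def event_chain_move(x, L, sigma, start, ell):
    """One event-chain move for hard rods on a ring: a displacement budget ell
    travels to the right; at each contact the budget is lifted to the next rod."""
    n = len(x)
    i, rest = start, ell
    while rest > 0:
        j = (i + 1) % n
        gap = max((x[j] - x[i]) % L - sigma, 0.0)   # free space before contact
        if gap >= rest:
            x[i] = (x[i] + rest) % L
            rest = 0.0
        else:
            x[i] = (x[i] + gap) % L                 # advance exactly to contact
            rest -= gap
            i = j                                   # lift to the blocking rod

rng = random.Random(3)
L, sigma = 10.0, 1.0
x = [0.0, 2.5, 5.0, 7.5]            # 4 rods; cyclic order is preserved by the moves
for _ in range(1000):
    event_chain_move(x, L, sigma, rng.randrange(4), 0.7)

# hard-core invariant: every cyclic successor gap stays nonnegative
gaps = [(x[(i + 1) % 4] - x[i]) % L - sigma for i in range(4)]
```

Every move is accepted (the method is rejection-free), and since the budget always travels rightward the sketch corresponds to the irreversible variant discussed above.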
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or
Regularization of multiplicative iterative algorithms with nonnegative constraint
NASA Astrophysics Data System (ADS)
Benvenuto, Federico; Piana, Michele
2014-03-01
This paper studies the regularization of constrained maximum likelihood iterative algorithms applied to incompatible ill-posed linear inverse problems. Specifically, we introduce a novel stopping rule which defines a regularization algorithm for the image space reconstruction algorithm in the case of least-squares minimization. Further, we show that the same rule regularizes the expectation maximization algorithm in the case of Kullback-Leibler minimization, provided a well-justified modification of the definition of Tikhonov regularization is introduced. The performance of this stopping rule is illustrated in the case of an image reconstruction problem in x-ray solar astronomy.
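The idea — nonnegativity-preserving multiplicative iterations halted by a residual-based stopping rule — can be sketched for a tiny nonnegative linear system using the standard EM (Richardson-Lucy) update. The matrix, data, and threshold below are illustrative assumptions, and the stopping criterion is a simple discrepancy-style rule, not the paper's exact one:

```python
def em_with_discrepancy_stop(H, y, noise_level, max_iters=500):
    """Multiplicative EM (Richardson-Lucy) iterations for y ≈ Hx with x >= 0,
    stopped when the residual drops to the estimated noise level."""
    m, n = len(H), len(H[0])
    x = [1.0] * n                            # positive start keeps all iterates positive
    col_sum = [sum(H[i][j] for i in range(m)) for j in range(n)]
    for _ in range(max_iters):
        hx = [sum(H[i][j] * x[j] for j in range(n)) for i in range(m)]
        resid = sum((hx[i] - y[i]) ** 2 for i in range(m)) ** 0.5
        if resid <= noise_level:             # discrepancy-style stopping rule
            break
        x = [x[j] * sum(H[i][j] * y[i] / hx[i] for i in range(m)) / col_sum[j]
             for j in range(n)]
    return x, resid

H = [[0.8, 0.2], [0.3, 0.7]]                 # illustrative blurring matrix
x_true = [2.0, 1.0]
y = [sum(H[i][j] * x_true[j] for j in range(2)) for i in range(2)]
x_hat, resid = em_with_discrepancy_stop(H, y, noise_level=1e-3)
```

Stopping early is what regularizes: for noisy, incompatible data the iterates approach the data before fitting its noise, and the rule halts them there.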
Approximation algorithms for NEXPTIME-hard periodically specified problems and domino problems
Marathe, M.V.; Hunt, H.B., III; Stearns, R.E.; Rosenkrantz, D.J.
1996-02-01
We study the efficient approximability of two general classes of problems: (1) optimization versions of the domino problems studied in [Ha85, Ha86, vEB83, SB84] and (2) graph and satisfiability problems specified using various kinds of periodic specifications. Both easiness and hardness results are obtained. Our efficient approximation algorithms and schemes are based on extensions of these ideas. Two notable properties of the results obtained here are: (1) for the first time, efficient approximation algorithms and schemes have been developed for natural NEXPTIME-complete problems; (2) our results are the first polynomial time approximation algorithms with good performance guarantees for 'hard' problems specified using the various kinds of periodic specifications considered in this paper. Our results significantly extend the results in [HW94, Wa93, MH+94].
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed, based on approximating the nonlinear functions using the quadratic-tensor model of Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
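A box-shaped trust region is easy to picture in code: the step that minimizes the local model is simply clipped componentwise. The sketch below is first-order only (a plain Gauss-Newton step; the quadratic-tensor terms of the actual model are omitted), and the function name is illustrative.

```python
import numpy as np

def box_trust_region_step(r, J, x, delta):
    """One Gauss-Newton step for min ||r(x)||^2, confined to the box
    trust region |s_i| <= delta around the current iterate x."""
    s, *_ = np.linalg.lstsq(J, -r, rcond=None)   # unconstrained model step
    s = np.clip(s, -delta, delta)                # enforce the box
    return x + s
```

The componentwise clip is what makes a box (rather than a spherical) trust region attractive: it never requires solving a constrained subproblem.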
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. To minimize constraint violations during the time integration process, penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of these solution procedures yields a computationally more accurate solution. To speed up the computations, the constraint treatment techniques and the two-stage staggered explicit-implicit algorithm were implemented in parallel. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Convergence Rate of the Successive Zooming Genetic Algorithm for Band-Widths of Equality Constraint
NASA Astrophysics Data System (ADS)
Kwon, Y. D.; Han, S. W.; Do, J. W.
Modern optimization techniques, such as the steepest descent method, Newton's method, Rosen's gradient projection method, genetic algorithms, etc., have been developed and rapidly improved with the progress of digital computers. The steepest descent method and Newton's method are applied efficiently to unconstrained problems. For many engineering problems involving constraints, the genetic algorithm and SUMT [1] are applied with relative ease. Genetic algorithms [2] have global search characteristics and relatively good convergence rates. Recently, a Successive Zooming Genetic Algorithm (SZGA) [3,4] was introduced that can search for the precise optimal solution at any level of desired accuracy. For engineering problems involving an equality constraint, even when good optimization techniques are applied, a proper constraint band-width can lead to more rapid convergence and a more precise solution. This study investigated the proper band-width of an equality constraint using the SZGA technique, both theoretically and numerically. We found a band-width range of rapid convergence for each problem, as well as a broader, more general one.
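The core of successive zooming can be reduced to a few lines: search the current interval, then shrink it around the best point found and repeat. The sketch below replaces the genetic operators of the full SZGA with plain random sampling for brevity, so it illustrates only the zooming idea; all names and parameter values are illustrative.

```python
import random

def szga_minimize(f, lo, hi, zoom=0.5, levels=6, samples=200, seed=0):
    """Successive zooming on a 1D interval: sample, keep the best
    point, shrink the interval around it by `zoom`, repeat. Each
    zoom level multiplies the attainable precision."""
    rng = random.Random(seed)
    best = None
    for _ in range(levels):
        xs = [rng.uniform(lo, hi) for _ in range(samples)]
        x = min(xs, key=f)
        if best is None or f(x) < f(best):
            best = x
        half = (hi - lo) * zoom / 2
        lo, hi = best - half, best + half     # zoom in around the best point
    return best
```

Because each level halves the search interval, the precision improves geometrically with the number of levels, which is the mechanism behind SZGA's "any level of desired accuracy" property.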
A Fast Algorithm for Denoising Magnitude Diffusion-Weighted Images with Rank and Edge Constraints
Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei
2015-01-01
Purpose To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. Methods We extend a previously proposed majorize-minimize (MM) method for statistical estimation that involves noncentral χ distributions and joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems, each of which is then solved using an efficient alternating minimization scheme. Results The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speedup over the original quasi-Newton based algorithm. This improvement in computational efficiency enabled denoising of large data sets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, a comparison of fiber tracking results around the hippocampus region before and after denoising is also shown to demonstrate the denoising effects of the new algorithm. Conclusion The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using an MM-based algorithm. PMID:25733066
2006-01-01
system. Our simulation studies and implementation measurements reveal that GUS performs close to, if not better than, the existing algorithms for the...satisfying application time constraints. The most widely studied time constraint is the deadline. A deadline time constraint for an application...optimality criteria, such as resource dependencies and precedence constraints. Scheduling tasks with non-step TUF's has been studied in the past
Constraint Driven Generation of Vision Algorithms on an Elastic Infrastructure
2014-10-01
classifiers, image search indexes, human annotators, and heterogeneous computer vision algorithms. Processing is performed using the Apache Hadoop cluster...workers). Picarus is a web-service that executes large-scale visual analysis jobs using Hadoop with data stored on 10 Approved for Public Release...Installed Picarus (which requires Hadoop, HBase, and Redis) on two govcloud servers. Wrote up documentation for picarus administration http://goo.gl
Control of Boolean networks: hardness results and algorithms for tree structured networks.
Akutsu, Tatsuya; Hayashida, Morihiro; Ching, Wai-Ki; Ng, Michael K
2007-02-21
Finding control strategies of cells is a challenging and important problem in the post-genomic era. This paper considers theoretical aspects of the control problem using the Boolean network (BN), which is a simplified model of genetic networks. It is shown that finding a control strategy leading to the desired global state is computationally intractable (NP-hard) in general. Furthermore, this hardness result is extended for BNs with considerably restricted network structures. These results justify existing exponential time algorithms for finding control strategies for probabilistic Boolean networks (PBNs). On the other hand, this paper shows that the control problem can be solved in polynomial time if the network has a tree structure. Then, this algorithm is extended for the case where the network has a few loops and the number of time steps is small. Though this paper focuses on theoretical aspects, biological implications of the theoretical results are also discussed.
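The hardness result above is easiest to appreciate next to the naive method it rules out in general: exhaustive search over all control sequences, which is exponential in the number of control nodes times the horizon. The sketch below is purely illustrative (it is not the paper's tree-structured polynomial algorithm); the state-transition function `f` and all names are assumptions for the example.

```python
from itertools import product

def find_control(f, n_ctrl, x0, target, T):
    """Brute-force search for a control sequence driving a Boolean
    network from state x0 to `target` in T steps.

    f(x, u) -> next state tuple; u is a tuple of n_ctrl control bits.
    Cost is O(2^(n_ctrl * T)): this exponential blow-up is exactly
    what polynomial algorithms for tree-structured networks avoid.
    """
    for seq in product(product((0, 1), repeat=n_ctrl), repeat=T):
        x = x0
        for u in seq:
            x = f(x, u)
        if x == target:
            return seq        # first control sequence that works
    return None               # no admissible control sequence exists
```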
Parallelized event chain algorithm for dense hard sphere and polymer systems
Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan
2015-01-15
We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.
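The event-chain move itself is simple to state in one dimension, which makes a good sketch of the mechanism: a displacement budget is carried along a chain of collisions. The code below is a 1D hard-rod analogue of the 2D hard-disk algorithm, written only to illustrate the idea (it assumes at least two rods and a non-jammed configuration; all names are illustrative).

```python
import random

def event_chain_sweep(xs, L, sigma, ell):
    """One event-chain move for hard rods of length sigma on a ring of
    circumference L: a total displacement budget ell is carried forward,
    each rod moving right until it touches its neighbour, which then
    inherits the remaining budget (rejection-free by construction)."""
    xs = sorted(xs)
    n = len(xs)
    i = random.randrange(n)                 # starting rod, drawn with replacement
    budget = ell
    while budget > 1e-12:
        j = (i + 1) % n
        gap = (xs[j] - xs[i]) % L - sigma   # free distance to the next rod
        move = min(budget, max(gap, 0.0))
        xs[i] = (xs[i] + move) % L
        budget -= move
        i = j                               # collision: neighbour carries on
    return xs
```

Drawing the starting rod with replacement mirrors the detailed-balance requirement discussed in the abstract; the chain of handed-off budgets is what makes the move a collective (cluster-like) update.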
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also offers the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint satisfaction problem (CSP) area could also easily be plugged into this algorithm for further gains in efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adapted to many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
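The bridge between constraints and relational algebra rests on one operator: if each constraint is stored as a table of its allowed tuples, solving becomes a sequence of natural joins. The following is a minimal sketch of that operator over plain Python sets, not the paper's algorithm; the relation encoding is an assumption for the example.

```python
def natural_join(R, S):
    """Natural join of two relations, each given as (columns, rows):
    a list of column names and a set of value tuples. Rows combine
    whenever they agree on all shared columns."""
    cols_r, rows_r = R
    cols_s, rows_s = S
    shared = [c for c in cols_r if c in cols_s]
    out_cols = cols_r + [c for c in cols_s if c not in cols_r]
    out_rows = set()
    for a in rows_r:
        for b in rows_s:
            if all(a[cols_r.index(c)] == b[cols_s.index(c)] for c in shared):
                extra = tuple(b[cols_s.index(c)] for c in cols_s
                              if c not in cols_r)
                out_rows.add(a + extra)
    return out_cols, out_rows
```

Joining all constraint relations yields exactly the set of solutions, which is why an RDBMS's optimized join machinery can serve as the data structure for constraint processing.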
NEW CONSTRAINTS ON THE BLACK HOLE LOW/HARD STATE INNER ACCRETION FLOW WITH NuSTAR
Miller, J. M.; King, A. L.; Tomsick, J. A.; Boggs, S. E.; Bachetti, M.; Wilkins, D.; Christensen, F. E.; Craig, W. W.; Fabian, A. C.; Kara, E.; Grefenstette, B. W.; Harrison, F. A.; Hailey, C. J.; Stern, D. K; Zhang, W. W.
2015-01-20
We report on an observation of the Galactic black hole candidate GRS 1739–278 during its 2014 outburst, obtained with NuSTAR. The source was captured at the peak of a rising "low/hard" state, at a flux of ∼0.3 Crab. A broad, skewed iron line and disk reflection spectrum are revealed. Fits to the sensitive NuSTAR spectra with a number of relativistically blurred disk reflection models yield strong geometrical constraints on the disk and hard X-ray "corona". Two models that explicitly assume a "lamp post" corona find its base to have a vertical height above the black hole of h = 5 (+7, −2) GM/c^2 and h = 18 ± 4 GM/c^2 (90% confidence errors); models that do not assume a "lamp post" return emissivity profiles that are broadly consistent with coronae of this size. Given that X-ray microlensing studies of quasars and reverberation lags in Seyferts find similarly compact coronae, observations may now signal that compact coronae are fundamental across the black hole mass scale. All of the models fit to GRS 1739–278 find that the accretion disk extends very close to the black hole; the least stringent constraint is r_in = 5 (+3, −4) GM/c^2. Only two of the models deliver meaningful spin constraints, but a = 0.8 ± 0.2 is consistent with all of the fits. Overall, the data provide especially compelling evidence of an association between compact hard X-ray coronae and the base of relativistic radio jets in black holes.
A fast multigrid algorithm for energy minimization under planar density constraints.
Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science
2010-09-07
The two-dimensional layout optimization problem reinforced by the efficient space utilization demand has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear time multigrid algorithm for solving a correction to this problem. The method is demonstrated in various graph drawing (visualization) instances.
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
On-line reentry guidance algorithm with both path and no-fly zone constraints
NASA Astrophysics Data System (ADS)
Zhang, Da; Liu, Lei; Wang, Yongji
2015-12-01
This study proposes an on-line predictor-corrector reentry guidance algorithm that satisfies path and no-fly zone constraints for hypersonic vehicles with a high lift-to-drag ratio. The proposed guidance algorithm can generate a feasible trajectory at each guidance cycle during the entry flight. In the longitudinal profile, numerical predictor-corrector approaches are used to predict the flight capability from current flight states to expected terminal states and to generate an on-line reference drag acceleration profile. The path constraints on heat rate, aerodynamic load, and dynamic pressure are implemented as a part of the predictor-corrector algorithm. A tracking control law is then designed to track the reference drag acceleration profile. In the lateral profile, a novel guidance algorithm is presented. The velocity azimuth angle error threshold and artificial potential field method are used to reduce heading error and to avoid the no-fly zone. Simulated results for nominal and dispersed cases show that the proposed guidance algorithm not only can avoid the no-fly zone but can also steer a typical entry vehicle along a feasible 3D trajectory that satisfies both terminal and path constraints.
NASA Astrophysics Data System (ADS)
Ghossein, Elias; Lévesque, Martin
2013-11-01
This paper presents a computationally efficient algorithm for generating random periodic packings of hard ellipsoids. The algorithm is based on molecular dynamics where the ellipsoids are set in translational and rotational motion and their volumes gradually increase. Binary collision times are computed by simply finding the roots of a non-linear function. In addition, an original and efficient method to compute the collision time between an ellipsoid and a cube face is proposed. The algorithm can generate all types of ellipsoids (prolate, oblate and scalene) with very high aspect ratios (i.e., >10). This is the first time that such packings have been reported in the literature. Orientation tensors were computed for the generated packings and it has been shown that the ellipsoids had a uniform distribution of orientations. Moreover, it seems that for low aspect ratios (i.e., ⩽10), the volume fraction is the most influential parameter on the algorithm CPU time. For higher aspect ratios, the influence of the aspect ratio becomes as important as that of the volume fraction. All necessary pseudo-codes are given so that the reader can easily implement the algorithm.
Yurtkuran, Alkın; Emel, Erdal
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints associated with the corresponding time windows of customers are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times than the test algorithms. PMID:24723834
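The penalty half of a TSPTW constraint-handling scheme can be sketched directly: evaluate the tour's travel cost, wait when arriving before a window opens, and add a weighted penalty for any lateness. This is a generic illustration of penalizing time-window violations, not the paper's EMA implementation; all names and the penalty weight are assumptions.

```python
def tsptw_penalized_cost(tour, dist, windows, penalty=1000.0):
    """Travel cost of a tour plus penalty terms for time-window
    violations. dist is a matrix of travel times, windows maps each
    customer to its (open, close) times; waiting when early is free."""
    t, cost, viol = 0.0, 0.0, 0.0
    for i in range(len(tour) - 1):
        a, b = tour[i], tour[i + 1]
        cost += dist[a][b]
        t += dist[a][b]
        lo, hi = windows[b]
        if t < lo:
            t = lo                    # wait for the window to open
        elif t > hi:
            viol += t - hi            # lateness accumulates as violation
    return cost + penalty * viol
```

With a large enough penalty weight, infeasible tours are driven out of the population while the search is still free to traverse them, which is the usual rationale for penalty-based constraint handling.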
Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas
NASA Technical Reports Server (NTRS)
Smith, Barbara M.; Bennett, Sean
1992-01-01
A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
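The feasibility-search half of such a system rests on forward checking: after each assignment, prune the domains of the unassigned variables and backtrack as soon as one empties. The following is a minimal generic sketch of that algorithm (binary constraints only, no local-improvement phase); the encoding of domains and constraints is an assumption for the example.

```python
def solve(domains, constraints):
    """Forward-checking backtrack search.

    domains: dict var -> set of candidate values.
    constraints: dict (u, v) -> predicate on (value_u, value_v).
    Returns a complete consistent assignment, or None.
    """
    variables = list(domains)

    def check(u, a, v, b):
        if (u, v) in constraints and not constraints[(u, v)](a, b):
            return False
        if (v, u) in constraints and not constraints[(v, u)](b, a):
            return False
        return True

    def backtrack(assign, doms):
        if len(assign) == len(variables):
            return dict(assign)
        var = min((v for v in variables if v not in assign),
                  key=lambda v: len(doms[v]))          # smallest domain first
        for val in list(doms[var]):
            pruned, ok = {}, True
            for other in variables:
                if other in assign or other == var:
                    continue
                allowed = {w for w in doms[other] if check(var, val, other, w)}
                if not allowed:                        # wipeout: prune early
                    ok = False
                pruned[other] = allowed
            if ok:
                new_doms = dict(doms)
                new_doms.update(pruned)
                result = backtrack({**assign, var: val}, new_doms)
                if result is not None:
                    return result
        return None

    return backtrack({}, domains)
```

As the abstract argues, a solver like this need only find a feasible solution; quality can then be recovered by a separate local-improvement pass that preserves the constraints.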
Optimizations Of Coat-Hanger Die, Using Constraint Optimization Algorithm And Taguchi Method
NASA Astrophysics Data System (ADS)
Lebaal, Nadhir; Schmidt, Fabrice; Puissant, Stephan
2007-05-01
Polymer extrusion is one of the most important manufacturing methods used today. A flat die is commonly used to extrude thin thermoplastic sheets. If the channel geometry in a flat die is not designed properly, the velocity at the die exit may be perturbed, which can affect the thickness across the width of the die. The ultimate goal of this work is to optimize the die channel geometry so that a uniform velocity distribution is obtained at the die exit. While optimizing the exit velocity distribution, we couple the three-dimensional extrusion simulation software Rem3D® with an automatic constraint optimization algorithm to control the maximum allowable pressure drop in the die; through this constraint we can control the pressure in the die (decreasing the pressure while minimizing the velocity dispersion across the die exit). For this purpose, we investigate the effect of the design variables on the objective and constraint functions using the Taguchi method. In a second study, we use the global response surface method with Kriging interpolation to optimize the flat die geometry. Two optimization results are presented according to the constraint imposed on the pressure. The optimum is obtained with very fast convergence (2 iterations). To respect the constraint while ensuring a homogeneous velocity distribution, the run with the less severe constraint offers the best minimum.
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Qu, Xiaochao; Zhang, Xing; Poon, Ting-Chung; Kim, Taegeun; Kim, You Seok; Liang, Jimin
2012-03-01
Optical imaging takes advantage of coherent optics and has advanced the visualization of biological systems. Based on temporal coherence, optical coherence tomography can deliver three-dimensional optical images with superior resolution, but its axial and lateral scanning is a time-consuming process. Optical scanning holography (OSH) is a spatial coherence technique that integrates a three-dimensional object into a two-dimensional hologram through a two-dimensional optical scanning raster. Its high lateral resolution and fast image acquisition give it great potential for three-dimensional optical imaging, but the prerequisite is an accurate and practical reconstruction algorithm. A conventional method was first adopted to reconstruct sectional images and obtained fine results, but several drawbacks restricted its practicality. An optimization method based on the l2 norm obtained more accurate results than the conventional methods, but the intrinsic smoothness of the l2 norm blurs the reconstruction. In this paper, a hard-threshold-based sparse inverse imaging algorithm is proposed to improve sectional image reconstruction. The proposed method iterates with a hard-threshold shrinkage strategy, involving only lightweight vector operations and matrix-vector multiplications. Its performance has been validated by a real experiment, which demonstrated a great improvement in reconstruction accuracy at an appropriate computational cost.
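The hard-threshold shrinkage idea can be illustrated with the textbook iterative hard thresholding scheme: a gradient step on the data-fit term followed by keeping only the largest entries. This is a generic sketch of the family of methods the abstract describes, not the paper's exact algorithm; the sensing matrix, step size, and names are assumptions.

```python
import numpy as np

def iht(A, y, s, steps=100, mu=1.0):
    """Iterative hard thresholding for y ≈ A x with x s-sparse:
    gradient step on ||y - A x||^2, then zero all but the s
    largest-magnitude entries (the hard threshold)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + mu * A.T @ (y - A @ x)          # lightweight mat-vec step
        keep = np.argsort(np.abs(x))[-s:]       # indices of s largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1.0
        x = x * mask                            # hard threshold
    return x
```

Each iteration costs only matrix-vector products and a partial sort, which matches the "lightweight vector operations and matrix-vector multiplication" characterization above.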
Zhang, Jinkai; Rivard, Benoit; Rogge, D.M.
2008-01-01
Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem; it assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on the convex geometry and orthogonal projection common to other endmember search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (the AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (as in IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets (e.g., bedrock lithologies) have spatial continuity in imagery, so a spatial constraint is beneficial in the endmember search. An additional product of the SPA is data describing the change of the simplex volume ratio between successive iterations during endmember extraction. This illustrates the influence of a new endmember on the data structure and provides information on the convergence of the algorithm, offering a general guideline to constrain the total number of endmembers in a search. PMID:27879768
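The orthogonal-projection core of a successive projection search is compact: pick the most extreme remaining column, project every column onto its orthogonal complement, and repeat. The sketch below shows only this convex-geometry step; the spatial-adjacency and spectral-angle constraints that distinguish the paper's SPA are omitted, and all names are illustrative.

```python
import numpy as np

def spa_endmembers(X, k):
    """Pick k endmember columns from spectra matrix X (bands x pixels)
    by successive orthogonal projection: the column with the largest
    residual norm is the most extreme vertex of the data simplex."""
    Xp = X.astype(float).copy()
    picked = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(Xp, axis=0)))
        picked.append(j)
        u = Xp[:, j] / np.linalg.norm(Xp[:, j])
        Xp = Xp - np.outer(u, u @ Xp)   # project out the chosen direction
    return picked
```

Because each projection removes the span of the chosen endmember, no dimensionality reduction of the data is needed beforehand, in line with the abstract.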
Kirsch, J D; Drennen, J K
1999-03-01
A new algorithm using common statistics was proposed for nondestructive near-infrared (near-IR) spectroscopic tablet hardness testing over a range of tablet potencies. The spectral features that allow near-IR tablet hardness testing were evaluated. Cimetidine tablets of 1-20% potency and 1-7 kp hardness were used for the development and testing of a new spectral best-fit algorithm for tablet hardness prediction. Actual tablet hardness values determined via a destructive diametral crushing test were used for construction of calibration models using principal component analysis/principal component regression (PCA/PCR) or the new algorithm. Both methods allowed the prediction of tablet hardness over the range of potencies studied. The spectral best-fit method compared favorably to the multivariate PCA/PCR method, but was easier to develop. The new approach offers advantages over wavelength-based regression models because the calculation of a spectral slope averages out the influence of individual spectral absorbance bands. The ability to generalize the hardness calibration over a range of potencies confirms the robust nature of the method.
Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji
2014-01-01
An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with an innovative obstacle distance measure for spatial clustering under obstacle constraints. First, we present a path searching algorithm to approximate the obstacle distance between two points while accounting for obstacles and facilitators. Taking obstacle distance as the similarity metric, we then propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of the AICOE algorithm and classical clustering algorithms. Our clustering model based on the artificial immune system is also applied to a public facility location problem to establish the practical applicability of our approach. By using the clonal selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and a better clustering effect. PMID:25435862
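The difference between Euclidean and obstacle distance is easiest to see on an occupancy grid, where obstacle distance becomes a shortest-path length. The breadth-first search below is a simple stand-in for the paper's path-searching step (4-connected grid, unit step cost; the grid encoding is an assumption for the example).

```python
from collections import deque

def obstacle_distance(grid, start, goal):
    """Shortest 4-connected path length between two cells of an
    occupancy grid (1 = obstacle, 0 = free), found by BFS; returns
    infinity when the goal is walled off."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return float("inf")
```

Two points separated by a wall can be close in Euclidean terms yet far in obstacle distance, which is exactly why swapping the metric changes the resulting clusters.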
A new method for automatically measuring Vickers hardness based on region-point detection algorithm
NASA Astrophysics Data System (ADS)
Pan, Yong; Shan, Yuekang; Ji, Yu; Zhang, Shibo
2008-12-01
This paper presents a new method, called the Region-Point detection algorithm, for automatically analyzing digital images of Vickers hardness indentations. The method effectively overcomes vertex detection errors due to curved indentation edges. In the Region-Detection stage, Sobel operators extract the edge points and a thick-line Hough transform fits the edge lines; the four small regions containing the four vertices are then selected according to the four intersection points of the thick lines. In the Point-Detection stage, to locate each vertex accurately within its region, the thick-line Hough transform is used again to select useful edge points and the Least Squares Method is used to fit the lines accurately. The intersection point of the two lines in each region is a vertex of the indentation. The length of the diagonal and the Vickers hardness can then be calculated. Experiments show that the measured values agree well with the standard values.
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing; global alignment of multiple genomes; identifying siblings or discovery of dysregulated pathways. In almost all of these problems, there is the need for proving a hypothesis about a certain property of an object that can be present if and only if it adopts some particular admissible structure (an NP-certificate) or be absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), however parametric. The only requirement is sufficient computational power, which is controlled by the parameter α∈N. Nevertheless, here it is proved that the probability of requiring a value of α>k to obtain a solution for a random graph decreases exponentially: P(α>k)≤2^(-(k+1)), making tractable almost all problem instances. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
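For contrast with the parametric method described above, the classical approach to 3-coloring is plain backtracking: exponential in the worst case, and when it fails it offers no certificate beyond the exhausted search itself. The sketch below is this standard baseline, included only to make the problem statement concrete.

```python
def three_color(adj):
    """Backtracking 3-coloring. adj maps each node to the set of its
    neighbours. Returns a node -> color dict, or None when the graph
    is not 3-colorable (after exhausting the search tree)."""
    nodes = list(adj)
    coloring = {}

    def backtrack(i):
        if i == len(nodes):
            return True
        v = nodes[i]
        for c in range(3):
            # try color c if no already-colored neighbour uses it
            if all(coloring.get(u) != c for u in adj[v]):
                coloring[v] = c
                if backtrack(i + 1):
                    return True
                del coloring[v]
        return False

    return coloring if backtrack(0) else None
```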
Bowen, J.; Dozier, G.
1996-12-31
This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to recognize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
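For context, the baseline that such hybrids improve on is plain min-conflicts hill climbing over a CSP. The sketch below is a generic min-conflicts loop (it does not include the paper's interleaved arc/path revision), exercised on a toy 8-queens CSP; the helper names are illustrative, not from the paper:

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=20000, seed=0):
    # Generic min-conflicts hill climbing for a CSP.
    # conflicts(var, val, assignment) -> number of violated constraints.
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        bad = [v for v in variables if conflicts(v, assign[v], assign) > 0]
        if not bad:
            return assign                       # all constraints satisfied
        v = rng.choice(bad)
        scores = [(conflicts(v, d, assign), d) for d in domains[v]]
        best = min(s for s, _ in scores)
        assign[v] = rng.choice([d for s, d in scores if s == best])
    return None                                 # give up after max_steps

# Toy CSP: 8-queens with one queen per column; a value is a row index.
cols = list(range(8))
doms = {c: list(range(8)) for c in cols}
def queen_conflicts(c, r, a):
    return sum(1 for c2 in cols if c2 != c and
               (a[c2] == r or abs(a[c2] - r) == abs(c2 - c)))
sol = min_conflicts(cols, doms, queen_conflicts)
```

Note that a plain hill climber like this cannot detect inconsistent networks; it simply exhausts `max_steps`, which is exactly the weakness the hybrid's arc and path revision addresses.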
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted, dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms that optimize the allocation by considering spatial and communication constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves simulation performance through better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
Some Very Hard Problems in Nature (Biology)
2008-09-18
NASA Astrophysics Data System (ADS)
Sohrabi, Foad; Davidson, Timothy N.
2016-06-01
We consider the problem of power allocation for the single-cell multi-user (MU) multiple-input single-output (MISO) downlink with quality-of-service (QoS) constraints. The base station acquires an estimate of the channels and, for a given beamforming structure, designs the power allocation so as to minimize the total transmission power required to ensure that target signal-to-interference-and-noise ratios at the receivers are met, subject to a specified outage probability. We consider scenarios in which the errors in the base station's channel estimates can be modelled as being zero-mean and Gaussian. Such a model is particularly suitable for time division duplex (TDD) systems with quasi-static channels, in which the base station estimates the channel during the uplink phase. Under that model, we employ a precise deterministic characterization of the outage probability to transform the chance-constrained formulation to a deterministic one. Although that deterministic formulation is not convex, we develop a coordinate descent algorithm that can be shown to converge to a globally optimal solution when the starting point is feasible. Insight into the structure of the deterministic formulation yields approximations that result in coordinate update algorithms with good performance and significantly lower computational cost. The proposed algorithms provide better performance than existing robust power loading algorithms that are based on tractable conservative approximations, and can even provide better performance than robust precoding algorithms based on such approximations.
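A much-simplified relative of this power-minimization problem, assuming perfect channel knowledge and fixed beams rather than the paper's chance-constrained formulation, is the classic fixed-point power-control iteration: scale each user's power to exactly meet its SINR target given the current interference. The gain matrix and targets below are hypothetical:

```python
import numpy as np

def power_allocation(G, gamma, noise, iters=200):
    # Foschini-Miljanic-style fixed-point iteration: p_k is scaled so that
    # user k meets SINR target gamma_k against current interference.
    # G[k, j] = link gain from transmitter j to receiver k.
    K = len(gamma)
    p = np.ones(K)
    for _ in range(iters):
        interf = G @ p - np.diag(G) * p + noise   # interference + noise at k
        p = gamma * interf / np.diag(G)
    return p

G = np.array([[1.0, 0.1],
              [0.2, 1.0]])        # hypothetical 2-user gain matrix
gamma = np.array([1.0, 1.0])      # SINR targets
p = power_allocation(G, gamma, noise=0.1)
sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + 0.1)
```

When the targets are feasible (spectral radius of the normalized cross-gain matrix below one), this iteration converges to the minimal power vector; the paper's contribution is handling the much harder case where channel estimates are noisy and the QoS constraints are probabilistic.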
Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.
Jaśkowski, Wojciech; Krawiec, Krzysztof
2011-01-01
Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical of coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has recently been shown that every test of such a problem can be regarded as a separate objective, and the whole problem as a multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and propose a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests and, for some problems, converges to the intrinsic parameter of the problem: its a priori dimension.
An efficient algorithm for antenna synthesis updating following null-constraint changes
NASA Astrophysics Data System (ADS)
Magdy, M. A.; Paoloni, F. J.; Cheah, J. Y. C.
1985-08-01
The procedure for maximizing the array signal-to-noise ratio with null constraints involves an optimization problem that can be solved efficiently using a modified Cholesky decomposition (UD) technique. Following changes in the main-lobe and/or null positions, the optimal element weight vector can be updated without the need for a complete new matrix inversion. Some properties of the UD technique can be exploited so that the updating algorithm reprocesses only part of the unit triangular matrix U. Proper ordering of matrix entries can minimize the dimension of the updated part.
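The underlying optimization is the linearly constrained quadratic problem: minimize w^H R w subject to C^H w = f, whose closed form is w = R^{-1} C (C^H R^{-1} C)^{-1} f. The sketch below solves it via a Cholesky factor of R (the paper's incremental update of the factor after constraint changes is not reproduced); the covariance and steering vectors are hypothetical:

```python
import numpy as np

def constrained_weights(R, C, f):
    # Minimum-power beamformer: minimize w^H R w subject to C^H w = f,
    # using the Cholesky factorization R = L L^H in place of inversion.
    L = np.linalg.cholesky(R)
    RinvC = np.linalg.solve(L.conj().T, np.linalg.solve(L, C))
    lam = np.linalg.solve(C.conj().T @ RinvC, f)
    return RinvC @ lam

R = np.eye(4)                                  # toy noise covariance
d_look = np.ones(4) / 2                        # look-direction steering vector
d_null = np.array([1, -1, 1, -1]) / 2          # direction to be nulled
C = np.column_stack([d_look, d_null])
w = constrained_weights(R, C, np.array([1.0, 0.0]))  # unit gain, one null
```

When a null position changes, only the corresponding column of C changes; the paper's ordering of matrix entries lets the UD factors be partially refreshed instead of refactoring from scratch.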
Frutos, M.; Méndez, M.; Tohmé, F.; Broz, D.
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One such problem is scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this class of problems. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGA-II, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool, since it yields more solutions in the approximate Pareto frontier. PMID:24489502
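The hypervolume index used here measures the objective-space area (for two objectives) dominated by a Pareto front relative to a reference point. A minimal two-objective sketch, assuming minimization and a hypothetical front, is:

```python
def hypervolume_2d(front, ref):
    # Area dominated by a 2-D Pareto front under minimization,
    # measured relative to a reference point ref = (rx, ry).
    pts = sorted(front)              # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:               # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]   # hypothetical front
print(hypervolume_2d(front, ref=(4.0, 4.0)))   # → 6.0
```

A larger hypervolume indicates a front that is both closer to the ideal point and better spread, which is why it pairs naturally with the R2 indicator in comparisons like the one above.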
Hou Chin, Jia; Ratnavelu, Kuru
2017-01-01
Community structure is an important feature of a complex network, where detection of the community structure can shed some light on the properties of such a complex network. Amongst the proposed community detection methods, the label propagation algorithm (LPA) emerges as an effective detection method due to its time efficiency. Despite this advantage in computational time, the performance of LPA is affected by randomness in the algorithm. A modified LPA, called CLPA-GNR, was proposed recently and it succeeded in handling the randomness issues in the LPA. However, it did not remove the tendency for trivial detection in networks with a weak community structure. In this paper, an improved CLPA-GNR is therefore proposed. In the new algorithm, the unassigned and assigned nodes are updated synchronously while the assigned nodes are updated asynchronously. A similarity score, based on the Sørensen-Dice index, is implemented to detect the initial communities and for breaking ties during the propagation process. Constraints are utilised during the label propagation and community merging processes. The performance of the proposed algorithm is evaluated on various benchmark and real-world networks. We find that it is able to avoid trivial detection while showing substantial improvement in the quality of detection. PMID:28374836
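The Sørensen-Dice score used for seeding communities and breaking ties is 2|A ∩ B| / (|A| + |B|) for two node sets A and B. A minimal sketch on a hypothetical adjacency list (the exact neighbourhood definition in the paper may differ, e.g. closed vs open neighbourhoods):

```python
def dice_similarity(a, b):
    # Sørensen-Dice index of two sets: 2|A ∩ B| / (|A| + |B|).
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical adjacency lists; compare closed neighbourhoods
# (node plus its neighbours) of nodes 1 and 2.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
n1 = adj[1] | {1}
n2 = adj[2] | {2}
print(dice_similarity(n1, n2))  # → 1.0
```

Nodes whose neighbourhoods score highly are natural seeds for the same initial community, and the score gives a deterministic tie-break during propagation, which is how the randomness of plain LPA is reduced.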
Training Neural Networks with Weight Constraints
1993-03-01
Hardware implementation of artificial neural networks imposes a variety of constraints. Finite weight magnitudes exist in both digital and analog...optimizing a network with weight constraints. Comparisons are made to the backpropagation training algorithm for networks with both unconstrained and hard-limited weight magnitudes. Keywords: neural networks, analog, digital, stochastic.
Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint
Hermant, Audrey
2010-02-15
This paper deals with optimal control problems with a regular second-order state constraint and a scalar control, satisfying the strengthened Legendre-Clebsch condition. We study the stability of the structure of stationary points. It is shown that under a uniform strict complementarity assumption, boundary arcs are stable under sufficiently smooth perturbations of the data. On the contrary, nonreducible touch points are not stable under perturbations. We show that under some reasonable conditions, either a boundary arc or a second touch point may appear. Those results allow us to design a homotopy algorithm which automatically detects the structure of the trajectory and initializes the shooting parameters associated with boundary arcs and touch points.
Line Matching Algorithm for Aerial Image Combining image and object space similarity constraints
NASA Astrophysics Data System (ADS)
Wang, Jingxue; Wang, Weixi; Li, Xiaoming; Cao, Zhenyu; Zhu, Hong; Li, Miao; He, Biao; Zhao, Zhigang
2016-06-01
A new straight-line matching method for aerial images is proposed in this paper. Compared to previous work, this method employs similarity constraints that combine radiometric information in the image with geometric attributes in the object plane. First, initial candidate lines and the elevation values of the line projection plane are determined from corresponding points in the neighborhoods of the reference lines. Second, the reference line and candidate lines are projected back onto the plane, and similarity-measure constraints are enforced to reduce the number of candidates and to determine the final corresponding lines in a hierarchical way. Third, "one-to-many" and "many-to-one" matching results are transformed into "one-to-one" by merging multiple lines into a new one, eliminating the associated errors simultaneously. Finally, the endpoints of corresponding lines are detected by a line expansion process combined with an "image-object-image" mapping mode. Experimental results show that the proposed algorithm obtains reliable line matching results for aerial images.
An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction
ERIC Educational Resources Information Center
Bhasin, Harpreet
2011-01-01
Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
A Greedy reassignment algorithm for the PBS minimum monitor unit constraint.
Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin
2016-06-21
Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning systems, and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest weight spot in the entire field and reassigns its weight to its nearest neighbor(s) and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criteria was the γ-index pass rate that compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1% the planning metric has a standard deviation equal to 18% of the centroid value showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy reassignment method has 1.8 times better planning metric at 90% pass rate compared to other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for implementing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5
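The core reassignment loop described above, delete the smallest-weight spot, give its weight to the nearest surviving neighbour, repeat until every spot meets the minimum MU, can be sketched as follows. The 1-D spot positions and the threshold are hypothetical simplifications of a real 2-D spot map:

```python
def greedy_reassign(spots, min_mu):
    # spots: list of (position, weight) pairs. Repeatedly remove the
    # smallest-weight spot and add its weight to its nearest surviving
    # neighbour, until all remaining spots satisfy the minimum-MU constraint.
    spots = [list(s) for s in spots]
    while len(spots) > 1:
        i = min(range(len(spots)), key=lambda k: spots[k][1])
        if spots[i][1] >= min_mu:
            break                      # every spot is deliverable
        pos, w = spots.pop(i)
        # Nearest neighbour by 1-D distance; real plans use 2-D spot maps.
        j = min(range(len(spots)), key=lambda k: abs(spots[k][0] - pos))
        spots[j][1] += w
    return spots

plan = [(0.0, 0.5), (1.0, 2.0), (2.0, 3.0)]    # hypothetical spot list
print(greedy_reassign(plan, min_mu=1.0))       # → [[1.0, 2.5], [2.0, 3.0]]
```

Note that total weight (MU) is conserved by construction; the γ-index analysis in the paper quantifies how far this weight redistribution perturbs the delivered dose distribution.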
Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng
2015-10-01
Temperature measurement is one of the important factors for ensuring product quality, reducing production cost and ensuring experiment safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry. However, the SM method cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true temperature measurement is proposed, and constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it can be shown from the relationship between brightness temperatures at different wavelengths that the emissivity is an increasing function on an interval if the brightness temperature is increasing or constant there, and that the emissivity satisfies an inequality relating emissivity and wavelength on an interval if the brightness temperature is decreasing there. With these emissivity-model constraint conditions based on brightness temperature information, the construction of assumed emissivity values is reduced from multiple classes to one class, avoiding unnecessary emissivity constructions. Simulation experiments and comparisons for two different temperature points are carried out on five measured targets covering five representative variation trends of real emissivity: decreasing monotonically; increasing monotonically; first decreasing and then increasing with wavelength; first increasing and then decreasing; and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results.
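The brightness-temperature relation underlying these constraints can be illustrated with the Wien approximation, 1/T_b = 1/T − (λ/c₂) ln ε: for ε < 1 the brightness temperature sits below the true temperature, and its trend across wavelengths encodes information about the emissivity trend. The temperatures, wavelengths and emissivity values below are hypothetical:

```python
import numpy as np

C2 = 1.4388e-2   # second radiation constant c2, m·K

def brightness_temperature(T, eps, lam):
    # Wien-approximation brightness temperature at wavelength lam (m)
    # for a target at true temperature T with spectral emissivity eps.
    return 1.0 / (1.0 / T - (lam / C2) * np.log(eps))

lam = np.array([0.8e-6, 0.9e-6, 1.0e-6])   # wavelengths, m
eps = np.array([0.6, 0.7, 0.8])            # emissivity rising with wavelength
Tb = brightness_temperature(1800.0, eps, lam)
# Tb is below the true 1800 K and, here, increases with wavelength,
# consistent with the rising-emissivity case described in the abstract.
```

Inverting this relation per wavelength, subject to the monotonicity constraints above, is what lets the proposed method prune the space of assumed emissivity models before searching for the true temperature.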
DeMaere, Matthew Z.
2016-01-01
Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contains genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios; however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713
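The B-cubed measure mentioned above scores each item by how pure its predicted cluster is (precision) and how much of its true cluster it recovers (recall), then averages over items. A minimal sketch of the standard hard-clustering form (the paper uses an extended variant that also handles soft assignments), on hypothetical labelings:

```python
def bcubed(truth, pred):
    # Standard B-cubed precision and recall for hard clusterings.
    # truth/pred: dicts mapping item -> cluster label.
    items = list(truth)
    def prec(i):
        same_pred = [j for j in items if pred[j] == pred[i]]
        return sum(truth[j] == truth[i] for j in same_pred) / len(same_pred)
    def rec(i):
        same_true = [j for j in items if truth[j] == truth[i]]
        return sum(pred[j] == pred[i] for j in same_true) / len(same_true)
    return (sum(prec(i) for i in items) / len(items),
            sum(rec(i) for i in items) / len(items))

# Hypothetical genome bins: "c" is wrongly merged into the first cluster.
truth = {"a": 0, "b": 0, "c": 1, "d": 1}
pred  = {"a": 0, "b": 0, "c": 0, "d": 1}
p, r = bcubed(truth, pred)
```

Per-item averaging is what makes B-cubed sensitive to exactly the failure mode studied here: two strains merged into one bin depress precision for every contig involved, not just for one cluster.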
NASA Astrophysics Data System (ADS)
Mizusawa, Masataka; Kurihara, Masahito
Although the maze (or gridworld) is one of the most widely used benchmark problems for real-time search algorithms, it is not sufficiently clear how the difference in the density of randomly positioned obstacles affects the structure of the state spaces and the performance of the algorithms. In particular, recent studies of the so-called phase transition phenomena that could cause dramatic change in their performance in a relatively small parameter range suggest that we should evaluate the performance in a parametric way with the parameter range wide enough to cover potential transition areas. In this paper, we present two measures for characterizing the hardness of randomly generated mazes parameterized by obstacle ratio and relate them to the performance of real-time search algorithms. The first measure is the entropy calculated from the probability of existence of solutions. The second is a measure based on total initial heuristic error between the actual cost and its heuristic estimation. We show that the maze problems are the most complicated in both measures when the obstacle ratio is around 41%. We then solve the parameterized maze problems with the well-known real-time search algorithms RTA*, LRTA*, and MARTA* to relate their performance to the proposed measures. Evaluating the number of steps required for a single problem solving by the three algorithms and the number of those required for the convergence of the learning process in LRTA*, we show that they all have a peak when the obstacle ratio is around 41%. The results support the relevance of the proposed measures. We also discuss the performance of the algorithms in terms of other statistical measures to get a quantitative, deeper understanding of their behavior.
Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice
2007-12-01
NASA Astrophysics Data System (ADS)
Pühlhofer, Gerd; Benbow, Wystan; Costamante, Luigi; Sol, Helene; Boisson, Catherine; Emmanoulopoulos, Dimitrios; Wagner, Stefan; Horns, Dieter; Giebels, Berrie
VHE observations of the distant (z=0.186) blazar 1ES 1101-232 with H.E.S.S. are used to constrain the extragalactic background light (EBL) in the optical to near-infrared band. Because the EBL traces the galaxy formation history of the universe, galaxy evolution models can be tested with the data. In order to measure the EBL absorption effect on a blazar spectrum, we assume that the usual constraints on the hardness of the intrinsic blazar spectrum are not violated. We present an update of the VHE spectrum obtained with H.E.S.S. and the multifrequency data that were taken simultaneously with the H.E.S.S. measurements. The data verify that the broadband characteristics of 1ES 1101-232 are similar to those of other, more nearby blazars, and strengthen the assumptions that were used to derive the EBL upper limit.
NASA Astrophysics Data System (ADS)
Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling
2015-04-01
In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With the available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ² analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, ϕ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ² analyses, it is observed that (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. These findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, ϕ_{A,H}^i [°]) = (2.87^{+0.66}_{−1.95}, −145^{+14}_{−21}) and (ρ_A^f, ϕ_A^f [°]) = (0.91^{+0.12}_{−0.13}, −37^{+10}_{−9}) at 68% C.L. With the best-fit values, most of the theoretical results are in good agreement with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α₂ related to a large ρ_H result in the wrong sign for A_CP^dir(B⁻ → π⁰K*⁻) compared with the most recent BABAR data, which presents a new obstacle to solving the "ππ" and "πK" puzzles through α₂. A cross-check with measurements at Belle (or Belle II) and LHCb, which offer higher precision, is urgently needed to confirm or refute this possible mismatch.
Nakanishi, Takashi
2010-05-28
Dimensionally controlled and hierarchically assembled supramolecular architectures in nano/micro/bulk length scales are formed by self-organization of alkyl-conjugated fullerenes. The simple molecular design of covalently attaching hydrophobic long alkyl chains to fullerene (C(60)) is different from the conventional (hydrophobic-hydrophilic) amphiphilic molecular designs. The two different units of the alkyl-conjugated C(60) are incompatible but both are soluble in organic solvents. The van der Waals intermolecular forces among long hydrocarbon chains and the pi-pi interaction between C(60) moieties govern the self-organization of the alkyl-conjugated C(60) derivatives. A delicate balance between the pi-pi and van der Waals forces in the assemblies leads to a wide variety of supramolecular architectures and paves the way for developing supramolecular soft materials possessing various morphologies and functions. For instance, superhydrophobic films, electron-transporting thermotropic liquid crystals and room-temperature liquids have been demonstrated. Furthermore, the unique morphologies of the assemblies can be utilised as a template for the fabrication of nanostructured metallic surfaces in a highly reproducible and sustainable way. The resulting metallic surfaces can serve as excellent active substrates for surface-enhanced Raman scattering (SERS) owing to their plasmon enhancing characteristics. The use of self-assembling supramolecular objects as a structural template to fabricate innovative well-defined metal nanomaterials links soft matter chemistry to hard matter sciences.
NASA Astrophysics Data System (ADS)
Kong, Jian; Yao, Yibin; Shum, Che-Kwan
2014-05-01
Due to the sparsity of the world's GNSS stations and the limited range of projection angles, GNSS-based ionosphere tomography is a typical ill-posed problem. There are two main ways to address it: joint inversion combining multi-source data, and the use of a priori or reference ionosphere models, e.g., the IRI or GIM models, as constraints to improve the state of the normal equation. The traditional way of adding constraints with virtual observations can only address the problem of sparse stations; the virtual observations still lack horizontal grid constraints and are therefore unable to fundamentally improve the near-singularity of the normal equation. In this paper, we impose a priori constraints by increasing the virtual observations in n-dimensional space, which greatly reduces the condition number of the normal equation. After the inversion region is gridded, we can form a stable structure among the grids with loose constraints. We further consider that the ionosphere indeed changes within a certain temporal scale, e.g., two hours. In order to establish a more sophisticated and realistic ionosphere model and obtain real-time ionosphere electron density velocity (IEDV) information, we introduce grid electron density velocity parameters, which can be estimated simultaneously with the electron density parameters. The velocity parameters not only enhance the temporal resolution of the ionosphere model, thereby reflecting finer structure (short-term disturbances) under disturbed ionosphere conditions, but also provide a new way for the real-time detection and prediction of 3D ionosphere changes. We applied the new algorithm to GNSS data collected in Europe, inverting for ionosphere electron density and velocity at 2-hour resolution, consistent throughout the whole day's variation. We then validate the resulting tomography model
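The conditioning effect described here, in which appended a priori virtual observations reduce the condition number of the normal equation, can be illustrated on a toy linear system (a sketch with made-up dimensions, not the paper's tomography model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed setup: 20 ray measurements over 50 grid cells.
A = rng.random((20, 50))
N = A.T @ A                          # rank-deficient normal matrix
print(np.linalg.cond(N))             # astronomically large

# Virtual observations: weakly pull every cell toward an a priori value,
# i.e. append w*I as extra rows (w is a small, tunable weight).
w = 0.1
A_aug = np.vstack([A, w * np.eye(50)])
N_aug = A_aug.T @ A_aug              # = A^T A + w^2 I
print(np.linalg.cond(N_aug))         # orders of magnitude smaller
```

The augmented normal matrix gains a floor of w² on its smallest eigenvalue, which is exactly what stabilizes the inversion.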
Genetic algorithm to design Laue lenses with optimal performance for focusing hard X- and γ-rays
NASA Astrophysics Data System (ADS)
Camattari, Riccardo; Guidi, Vincenzo
2014-10-01
To focus hard X- and γ-rays, a Laue lens can be used as a concentrator. Such optics can improve the detection of radiation for several applications, from the observation of the most violent phenomena in the sky to nuclear medicine for diagnostic and therapeutic purposes. We implemented a code named LaueGen, based on a genetic algorithm, that aims to design optimized Laue lenses. A genetic algorithm was selected because optimizing a Laue lens is a complex and discretized problem. The output of the code is the design of a Laue lens composed of diffracting crystals that are selected and arranged so as to maximize the lens performance. The code can manage crystals of any material and crystallographic orientation, and is structured so that the user can control all the initial lens parameters. As a result, LaueGen is highly versatile and can be used to design very small lenses, for example for nuclear medicine, or very large lenses, for example for satellite-borne astrophysical missions.
A pivoting algorithm for metabolic networks in the presence of thermodynamic constraints.
Nigam, R; Liang, S
2005-01-01
A linear programming algorithm is presented to constructively compute thermodynamically feasible fluxes and change in chemical potentials of reactions for a metabolic network. It is based on physical laws of mass conservation and the second law of thermodynamics that all chemical reactions should satisfy. As a demonstration, the algorithm has been applied to the core metabolic pathway of E. coli.
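A minimal flux-balance sketch of this kind of linear program, using a hypothetical three-reaction toy network rather than the E. coli core pathway; the irreversibility bounds stand in crudely for the second-law constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network (not the E. coli core pathway):
#   R1: -> A,   R2: A -> B,   R3: B -> biomass
# Mass conservation at steady state: S v = 0 for internal metabolites A, B.
S = np.array([[1, -1,  0],    # metabolite A
              [0,  1, -1]])   # metabolite B
c = np.array([0, 0, -1.0])    # maximize v3 (linprog minimizes, hence the sign)
bounds = [(0, 10)] * 3        # irreversible reactions with capacity 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)                  # mass balance forces v1 = v2 = v3
```

The paper's formulation additionally carries chemical-potential variables so that each flux direction is consistent with a negative free-energy change; that coupling is omitted here.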
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
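The outer/inner structure described above can be sketched as follows. This is a simplified illustration, not the authors' algorithm: the subproblem here is unconstrained rather than linearly constrained, and a basic compass (generating set) search supplies the derivative-free inner solver, with its step-length parameter serving as the stopping criterion:

```python
import numpy as np

def compass_search(f, x, step=0.5, tol=1e-6, max_iter=5000):
    """Derivative-free compass (generating set) search.

    The current step length acts as the stationarity measure: the search
    stops once the step falls below tol, with no derivatives needed."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5            # contract the pattern
            if step < tol:
                break
    return x

def augmented_lagrangian(f, c, x0, mu=1.0, lam=0.0, outer=15):
    """Minimize f subject to c(x) = 0 by successive derivative-free
    minimizations of the augmented Lagrangian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        L = lambda y: f(y) + lam * c(y) + 0.5 * mu * c(y) ** 2
        x = compass_search(L, x)
        lam += mu * c(x)           # first-order multiplier update
        mu *= 2.0                  # tighten the penalty
    return x

# Toy problem: min x0^2 + x1^2  s.t.  x0 + x1 = 1  (solution: (0.5, 0.5))
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: x[0] + x[1] - 1.0
x_star = augmented_lagrangian(f, c, [2.0, 2.0])
print(x_star)
```

The inner solver never sees a gradient; only function comparisons and the shrinking step length drive both progress and termination, which is the essence of the stopping criterion discussed in the abstract.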
Williams, P.T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography yield reconstructions with highly anisotropic resolution, so special algorithms have been developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which extends the applicability of TV regularization to non-piecewise-constant samples, like biological cells. The approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted to a 3D binary mask; the mask is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (the SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any tomographic reconstruction algorithm that supports arbitrary geometries of plane-wave projection acquisition, including optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and overall shape accuracy expected from TV-regularization-based solvers, while preserving the smooth internal structures of the object. Comparison between three different patterns of object illumination arrangement shows very small impact of the projection acquisition geometry on the image quality.
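The core idea, alternating a gradient-type reconstruction update with a hard binary-mask constraint in the object domain, can be sketched on a toy 1D problem. This is a projected Landweber iteration, not the TVIC/ASTRA implementation, and the mask is simply assumed known here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D "object": smooth (not piecewise constant) on a known support.
n = 40
x_true = np.zeros(n)
x_true[10:25] = np.linspace(1.0, 2.0, 15)
mask = x_true > 0                      # analogue of the 3D binary mask

A = rng.normal(size=(25, n))           # underdetermined projection operator
b = A @ x_true                         # noise-free projections

x = np.zeros(n)
tau = 1.0 / np.linalg.norm(A, 2) ** 2  # Landweber step size
for _ in range(10000):
    x += tau * A.T @ (b - A @ x)       # gradient-type reconstruction update
    x[~mask] = 0.0                     # hard object-domain constraint

print(np.linalg.norm(x - x_true))      # small residual despite 25 rows < 40 unknowns
```

Restricting the solution to the support turns an underdetermined problem into an effectively overdetermined one, which is why the constraint compensates for the missing projection angles.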
NASA Astrophysics Data System (ADS)
Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong
2013-12-01
X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS, enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-dimensional (2D) chemical mapping has been successfully applied to study many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with the weighted sum of standard spectra of the individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute-force enumeration method, and (ii) a constrained least-squares minimization algorithm proposed by us. Next, since 2D spectral fitting can be conducted pixel by pixel, both methods can theoretically be implemented in parallel. In order to demonstrate the feasibility of parallel computing for the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the raw data.
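The per-pixel constrained least-squares fit (non-negative weights that sum to one) can be sketched with hypothetical standard spectra. The sum-to-one constraint is imposed below via a heavily weighted extra row, which is one common trick, not necessarily the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical standard spectra of two reference compounds (columns of S).
E = np.linspace(0.0, 1.0, 50)
S = np.column_stack([np.exp(-(E - 0.3) ** 2 / 0.01),
                     np.exp(-(E - 0.6) ** 2 / 0.02)])

w_true = np.array([0.7, 0.3])
pixel_spectrum = S @ w_true          # noise-free measured spectrum at one pixel

# Constraints w >= 0 (via nnls) and sum(w) = 1 (heavily weighted extra row).
rho = 1e3
A = np.vstack([S, rho * np.ones((1, 2))])
b = np.append(pixel_spectrum, rho)
w, _ = nnls(A, b)
print(w)                             # recovers the true fractions
```

Because the fit is independent per pixel, parallelizing it, as the paper does on a cluster, amounts to distributing pixels across workers.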
Wen, Ying; He, Lianghua; von Deneen, Karen M; Lu, Yue
2013-11-01
We present an effective method for brain tissue classification based on diffusion tensor imaging (DTI) data. The method accounts for two main DTI segmentation obstacles: random noise and magnetic field inhomogeneities. In the proposed method, DTI parametric maps were used to resolve intensity inhomogeneities in brain tissue segmentation because they provide complementary information for tissues and define accurate tissue maps. An improved fuzzy c-means with spatial constraints was used to enhance the noise and artifact robustness of DTI segmentation. Fuzzy c-means clustering with spatial constraints (FCM_S) can effectively segment images corrupted by noise, outliers, and other imaging artifacts; its effectiveness derives not only from the introduction of fuzziness for the belongingness of each pixel but also from the exploitation of spatial contextual information. We propose an improved FCM_S applied to DTI parametric maps, which exploits the mean and covariance of the spatial feature information for automated segmentation of DTI. Experiments on synthetic images and real-world datasets showed that our proposed algorithms, especially with the new spatial constraints, were more effective.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms, including the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation, and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, gradually approaching optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions
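A minimal sketch of the penalty-function idea: a static exterior quadratic penalty turns a constrained problem into an unconstrained one that a simple real-coded GA can search. This is a toy one-variable illustration, not COMETBOARDS:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy constrained problem: minimize f(x) = (x - 3)^2  s.t.  g(x) = x - 1 <= 0.
f = lambda x: (x - 3.0) ** 2
g = lambda x: x - 1.0                      # constrained optimum at x = 1

def penalized(x, r=100.0):
    """Static exterior penalty: infeasible points pay r * violation^2."""
    return f(x) + r * max(0.0, g(x)) ** 2

# Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(-5.0, 5.0, size=50)
for _ in range(200):
    fit = np.array([penalized(x) for x in pop])
    idx = rng.integers(0, 50, size=(50, 2))              # pairwise tournaments
    parents = np.where(fit[idx[:, 0]] < fit[idx[:, 1]],
                       pop[idx[:, 0]], pop[idx[:, 1]])
    alpha = rng.random(50)
    pop = (alpha * parents + (1.0 - alpha) * rng.permutation(parents)
           + rng.normal(0.0, 0.1, size=50))

best = pop[np.argmin([penalized(x) for x in pop])]
print(best)                                # near the constrained optimum x = 1
```

The weight r embodies the trade-off the paper's statistical analysis examines: too small and infeasible solutions win, too large and the landscape becomes needlessly ill-conditioned.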
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used to solve the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of a partitioning is determined by the total time needed to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space; the estimated effectiveness of a particular partitioning is the value of a predictive function at the corresponding point of this space. The search for an effective partitioning can thus be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding the inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving times were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving times agree well with the estimates obtained by the proposed method.
NASA Astrophysics Data System (ADS)
Li, Dongxing; Zhao, Yan; Dong, Xu
2008-03-01
In general image restoration, the point spread function (PSF) of the imaging system and the observation noise are known a priori. The aero-optics effect arises when objects (e.g., missiles or aircraft) fly at high or supersonic speeds; in this situation, the PSF and the observation noise are unknown a priori, and the identification and restoration of turbulence-degraded images is a challenging problem. An algorithm based on nonnegativity and support constraints recursive inverse filtering (NAS-RIF) is proposed to identify and restore turbulence-degraded images. The NAS-RIF technique applies to situations in which the scene consists of a finite-support object against a uniformly black, grey, or white background. The restoration procedure of NAS-RIF involves recursive filtering of the blurred image to minimize a convex cost function. In the algorithm proposed in this paper, the turbulence-degraded image is filtered before it passes through the recursive filter. A conjugate gradient minimization routine was used to minimize the NAS-RIF cost function. The algorithm based on NAS-RIF was used to identify and restore wind-tunnel test images. The experimental results show that the restoration is improved appreciably.
NASA Astrophysics Data System (ADS)
Berger, Gilles; Million-Picallion, Lisa; Lefevre, Grégory; Delaunay, Sophie
2015-04-01
Introduction: The hydrothermal crystallization of silicate phases in the Si-Al-Fe system may lead to industrial constraints that can be encountered in the nuclear industry in at least two contexts: the geological repository for nuclear wastes and the formation of hard sludges in the steam generators of PWR nuclear plants. In the first situation, the chemical reactions between the Fe-canister and the surrounding clays have been extensively studied in laboratory [1-7] and pilot experiments [8]. These studies demonstrated that the high reactivity of metallic iron leads to the formation of Fe-silicates, berthierine-like, over a wide range of temperatures. By contrast, the formation of deposits in the steam generators of PWR plants, called hard sludges, is a newer and less studied issue which can affect reactor performance. Experiments: We present here a preliminary set of experiments reproducing the formation of hard sludges under conditions representative of the steam generator of a PWR power plant: 275°C, diluted solutions maintained at low potential by hydrazine addition and at alkaline pH by low concentrations of amines and ammonia. Magnetite, a corrosion by-product of the secondary circuit, is the source of iron, while aqueous Si and Al, the major impurities in this system, are supplied either as trace elements in the circulating solution or by addition of amorphous silica and alumina when considering confined zones. The fluid chemistry is monitored by sampling aliquots of the solution. Eh and pH are continuously measured by hydrothermal Cormet© electrodes implanted in a titanium hydrothermal reactor. The transformation, or not, of the solid fraction was examined post-mortem. These experiments evidenced the role of Al colloids as precursors of cements composed of kaolinite and boehmite, and the passivation of amorphous silica (becoming unreactive), likely by sorption of aqueous iron. However, no Fe-bearing phase was formed, in contrast to many published studies on the Fe
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
2010-11-01
application type of analysis, only the methodology is presented here, which includes an algorithm for optimization and a corresponding conservative rate of convergence based on no learning. The application part will be presented in the near future once data are available. It is expected that the
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem, for which a feasible optimal solution must be obtained in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem as a static optimization over the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects them, and these can be readily programmed. The original problem is then transformed into a form that can be solved by the sequential quadratic programming method, and the solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations verify the effectiveness and rapidity of the proposed algorithm.
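For the circular no-fly-zone case, such a geometric intersection condition reduces to a point-to-segment distance test. A sketch (assuming the segment endpoints differ):

```python
import numpy as np

def segment_hits_circle(p, q, c, r):
    """True if segment p-q intersects a circular no-fly zone (center c, radius r).
    Assumes p != q."""
    p, q, c = (np.asarray(v, dtype=float) for v in (p, q, c))
    d = q - p
    t = np.clip(np.dot(c - p, d) / np.dot(d, d), 0.0, 1.0)  # closest point parameter
    return bool(np.linalg.norm(c - (p + t * d)) <= r)

print(segment_hits_circle((0, 0), (10, 0), (5, 2), 3))   # True: clearance 2 < 3
print(segment_hits_circle((0, 0), (10, 0), (5, 5), 3))   # False: clearance 5 > 3
```

Conditions of this form are smooth in the waypoint coordinates almost everywhere, which is what makes them usable inside an SQP solver.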
Algorithms for magnetic tomography—on the role of a priori knowledge and constraints
NASA Astrophysics Data System (ADS)
Hauer, Karl-Heinz; Potthast, Roland; Wannert, Martin
2008-08-01
Magnetic tomography investigates the reconstruction of currents from their magnetic fields. Here, we will study a number of projection methods in combination with the Tikhonov regularization for stabilization for the solution of the Biot-Savart integral equation Wj = H with the Biot-Savart integral operator W: (L²(Ω))³ → (L²(∂G))³, where the closure of Ω is contained in G. In particular, we study the role of a priori knowledge when incorporated into the choice of the projection spaces X_n ⊂ (L²(Ω))³, n ∈ ℕ, for example the condition div j = 0 or the use of the full boundary value problem div(σ grad φ_E) = 0 in Ω, ν · σ grad φ_E = g on ∂Ω with some known function g, where j = σ grad φ_E and σ is an anisotropic matrix-valued conductivity. We will discuss and compare these schemes, investigating the ill-posedness of each algorithm in terms of the behaviour of the singular values of the corresponding operators, both when a priori knowledge is incorporated and when the geometrical setting is modified. Finally, we will numerically evaluate the stability constants in the practical setup of magnetic tomography for fuel cells and, thus, calculate usable error bounds for this important application area.
Soft Constraints in Nonlinear Spectral Fitting with Regularized Lineshape Deconvolution
Zhang, Yan; Shen, Jun
2012-01-01
This paper presents a novel method for incorporating a priori knowledge into regularized nonlinear spectral fitting as soft constraints. Regularization was recently introduced to lineshape deconvolution as a method for correcting spectral distortions. Here, the deconvoluted lineshape was described by a new type of lineshape model and applied to spectral fitting. The non-linear spectral fitting was carried out in two steps that were subject to hard constraints and soft constraints, respectively. The hard constraints step provided a starting point and, therefore, only the changes of the relevant variables were constrained in the soft constraints step and incorporated into the linear sub-steps of the Levenberg-Marquardt algorithm. The method was demonstrated using localized averaged echo time point resolved spectroscopy (PRESS) proton spectroscopy of human brains. PMID:22618964
NASA Astrophysics Data System (ADS)
Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin
2014-03-01
Nowadays, every firm uses telecommunication networks in different amounts and ways to complete its daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which several network providers offer different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service needed to accomplish the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is notorious for its computational complexity. We propose two different heuristic solution procedures capable of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested on several test instances to demonstrate the applicability of the methodology.
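For orientation, the heuristic family in question can be illustrated with the classic one-dimensional first-fit-decreasing rule; the paper's problem is a fuzzy two-dimensional variant, so this is only the simplest relative:

```python
def first_fit_decreasing(items, capacity):
    """Classic 1D first-fit-decreasing bin packing heuristic."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:                       # first open bin with enough room
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:                                # no bin fits: open a new one
            bins.append([size])
    return bins

print(first_fit_decreasing([5, 7, 5, 2, 4, 2, 5], capacity=10))
# -> [[7, 2], [5, 5], [5, 4], [2]]
```

Sorting before placement is what gives the heuristic its good worst-case guarantee in the crisp 1D setting; fuzzy capacities and a second dimension are the complications the paper's procedures must handle on top of this.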
Lonchampt, J.; Fessart, K.
2013-07-01
The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements, or logistic investments such as spare-parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. In the IPOP tool, the profitability of a set of investments is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages, etc.) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare-parts inventories. The component model has been widely discussed over the years, but the spare-part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare-part model: while components are indeed independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical, or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description
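The NPV indicator described above is a sum of discounted cash-flow differences; a minimal sketch with hypothetical numbers:

```python
def npv(cash_flows, rate):
    """Net present value of a yearly cash-flow stream (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical spare-part purchase: pay 100 now, avoid 30 of expected
# forced-outage cost in each of the next 5 years; discount rate 8%.
with_investment = [-100] + [30] * 5
without_investment = [0] * 6
delta_npv = npv(with_investment, 0.08) - npv(without_investment, 0.08)
print(round(delta_npv, 2))   # 19.78 -> the investment is worthwhile
```

In the paper the expected cash flows come from the pseudo-Markov reliability model rather than fixed numbers, and the shared-inventory and budget couplings are what force the genetic algorithm instead of ranking investments one by one.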
1994-05-04
works is the use of various extensions of regular grammars instead of constraints. Subsequently, a number of set constraint approaches have been...particular, in the optimization process of the algorithm, since the soft constraint on the objective function is effectively turned into a hard constraint...genetic operators and DNA grammar rules, to scene analysis in iconic image processing. Several applications in artificial intelligence require that one
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
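Activity selection, one of the examples mentioned, illustrates how a dominance relation yields a greedy algorithm: among compatible partial schedules of equal size, the one finishing earlier dominates, so it is always safe to keep it.

```python
def select_activities(intervals):
    """Greedy activity selection: sort by finish time and keep what fits."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # compatible with the schedule so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]
```

The pruning step of a dominance-based search never discards the earliest-finishing compatible choice, which is why the search collapses to this single greedy pass.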
NASA Astrophysics Data System (ADS)
Beghein, C.; Lebedev, S.; van der Hilst, R.
2005-12-01
Interstation dispersion curves can be used to obtain regional 1D profiles of the crust and upper mantle. Unlike phase velocity maps, dispersion curves can be determined with small errors and for a broad frequency band. We want to determine what features interstation surface wave dispersion curves can constrain. Using synthetic data and the Neighbourhood Algorithm, a direct search approach that provides a full statistical assessment of model uncertainties and trade-offs, we investigate how well crustal and upper mantle structure can be recovered with fundamental Love and Rayleigh waves. We also determine how strong the trade-offs between the different parameters are and what depth resolution we can expect to achieve with the current level of precision of this type of data. Synthetic dispersion curves between approximately 7 and 340 s were assigned realistic error bars, i.e., a relative uncertainty that increases with period but with an amplitude consistent with that achieved in "real" measurements. These dispersion curves were generated by two types of isotropic model differing only in their crustal structure: one represents an oceanic region (shallow Moho) and the other corresponds to an Archean continental area with a larger Moho depth. Preliminary results show that while the Moho depth, the shear-velocity structure in the transition zone between 200 and 410 km depth, and that between the base of the crust and 50 km depth are generally well recovered, crustal structure and Vs between 50 and 200 km depth are more difficult to constrain with Love waves or Rayleigh waves alone because of a trade-off between the two layers. When these two layers are combined, the resolution of Vs between 50 and 100 km depth appears to improve. Structure deeper than the transition zone is not constrained by the data because of a lack of sensitivity. We explore the possibility of differentiating between an upper and lower crust as well, and we investigate whether a joint
Technology Transfer Automated Retrieval System (TEKTRAN)
This research was initiated to investigate the association between flour breadmaking traits and mixing characteristics and empirical dough rheological property under thermal stress. Flour samples from 30 hard spring wheat were analyzed by a mixolab standard procedure at optimum water absorptions. Mi...
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
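The mapping used in the paper rests on a standard equivalence: a maximum independent set of G is a maximum clique of the complement graph of G. A brute-force sketch on a tiny graph (the paper's EA targets instances far beyond brute force):

```python
from itertools import combinations

def max_clique_bruteforce(n, edges):
    """Exhaustive maximum clique search; fine only for tiny graphs."""
    E = set(map(frozenset, edges))
    for k in range(n, 0, -1):            # try large subsets first
        for sub in combinations(range(n), k):
            if all(frozenset(pair) in E for pair in combinations(sub, 2)):
                return set(sub)
    return set()

# A maximum independent set of G is a maximum clique of G's complement.
n = 5
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]            # G = 5-cycle
complement = [(i, j) for i, j in combinations(range(n), 2)
              if (i, j) not in cycle and (j, i) not in cycle]
indep_set = max_clique_bruteforce(n, complement)
print(indep_set)          # a maximum independent set of the 5-cycle (size 2)
```

Minimum vertex cover follows for free as the complement of the independent set, which is how one EA can serve all these problems through a single clique solver.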
NASA Astrophysics Data System (ADS)
de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein
2012-12-01
In this paper, we describe how to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation-based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
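The method of separating axes mentioned above can be sketched in two dimensions, where the candidate separating axes are the edge normals of the two convex polygons. This is a generic illustration of the overlap test, not the FBMC implementation itself.

```python
def _axes(poly):
    # Edge normals; for convex polygons these are the candidate
    # separating axes.
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        yield (-(y2 - y1), x2 - x1)

def _project(poly, axis):
    # Interval covered by the polygon when projected onto the axis.
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_overlap(p, q):
    """Separating-axis test: convex polygons p and q (lists of (x, y)
    vertices) overlap iff no edge normal separates their projections."""
    for axis in list(_axes(p)) + list(_axes(q)):
        pmin, pmax = _project(p, axis)
        qmin, qmax = _project(q, axis)
        if pmax < qmin or qmax < pmin:
            return False  # found a separating axis: no overlap
    return True
```

In three dimensions the candidate axes additionally include cross products of edge directions, which is why the polyhedral case discussed in the paper is more involved.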
Bloom, Joshua S.; Prochaska, J.X.; Pooley, D.; Blake, C.W.; Foley, R.J.; Jha, S.; Ramirez-Ruiz, E.; Granot, J.; Filippenko, A.V.; Sigurdsson, S.; Barth, A.J.; Chen, H.-W.; Cooper, M.C.; Falco, E.E.; Gal, R.R.; Gerke, B.F.; Gladders, M.D.; Greene, J.E.; Hennanwi, J.; Ho, L.C.; Hurley, K.; /UC, Berkeley, Astron. Dept. /Lick Observ. /Harvard-Smithsonian Ctr. Astrophys. /Princeton, Inst. Advanced Study /KIPAC, Menlo Park /Penn State U., Astron. Astrophys. /UC, Irvine /MIT, MKI /UC, Davis /UC, Berkeley /Carnegie Inst. Observ. /UC, Berkeley, Space Sci. Dept. /Michigan U. /LBL, Berkeley /Spitzer Space Telescope
2005-06-07
The localization of the short-duration, hard-spectrum gamma-ray burst GRB050509b by the Swift satellite was a watershed event. Never before had a member of this mysterious subclass of classic GRBs been rapidly and precisely positioned in a sky accessible to the bevy of ground-based follow-up facilities. Thanks to the nearly immediate relay of the GRB position by Swift, we began imaging the GRB field 8 minutes after the burst and have continued during the 8 days since. Though the Swift X-ray Telescope (XRT) discovered an X-ray afterglow of GRB050509b, the first ever of a short-hard burst, thus far no convincing optical/infrared candidate afterglow or supernova has been found for the object. We present a re-analysis of the XRT afterglow and find an absolute position of R.A. = 12h36m13.59s, Decl. = +28°59'04.9'' (J2000), with a 1σ uncertainty of 3.68'' in R.A. and 3.52'' in Decl.; this is about 4'' to the west of the XRT position reported previously. Close to this position is a bright elliptical galaxy with redshift z = 0.2248 ± 0.0002, about 1' from the center of a rich cluster of galaxies. This cluster has detectable diffuse emission, with a temperature of kT = 5.25 (−1.68, +3.36) keV. We also find several (~11) much fainter galaxies consistent with the XRT position from deep Keck imaging and have obtained Gemini spectra of several of these sources. Nevertheless we argue, based on positional coincidences, that the GRB and the bright elliptical are likely to be physically related. We thus have discovered reasonable evidence that at least some short-duration, hard-spectrum GRBs are at cosmological distances. We also explore the connection of the properties of the burst and the afterglow, finding that GRB050509b was underluminous in both of these relative to long-duration GRBs. However, we also demonstrate that the ratio of the blast-wave energy to the γ-ray energy is consistent with that of long-duration GRBs. We thus find plausible
Temporal Constraint Reasoning With Preferences
NASA Technical Reports Server (NTRS)
Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca
2001-01-01
A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
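In the underlying preference-free case, a set of binary difference constraints (a Simple Temporal Problem) is consistent exactly when its distance graph has no negative cycle, which a single-source shortest-path computation detects. The Bellman-Ford sketch below illustrates that check only; it is not the paper's preference machinery.

```python
def bellman_ford(n, edges, src):
    """Single-source shortest paths on the distance graph of a Simple
    Temporal Problem. Each edge (u, v, w) encodes t_v - t_u <= w; a
    negative cycle means the constraint set is inconsistent."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement implies a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None  # inconsistent STP
    return dist
```

For instance, "event 1 occurs between 1 and 2 time units after event 0" becomes the pair of edges (0, 1, 2) and (1, 0, -1), and the returned distances give the latest consistent times relative to the source event.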
NGC 5548: LACK OF A BROAD Fe K{alpha} LINE AND CONSTRAINTS ON THE LOCATION OF THE HARD X-RAY SOURCE
Brenneman, L. W.; Elvis, M.; Krongold, Y.; Liu, Y.; Mathur, S.
2012-01-01
We present an analysis of the co-added and individual 0.7-40 keV spectra from seven Suzaku observations of the Sy 1.5 galaxy NGC 5548 taken over a period of eight weeks. We conclude that the source has a moderately ionized, three-zone warm absorber, a power-law continuum, and exhibits contributions from cold, distant reflection. Relativistic reflection signatures are not significantly detected in the co-added data, and we place an upper limit on the equivalent width of a relativistically broad Fe Kα line at EW ≤ 26 eV at 90% confidence. Thus NGC 5548 can be labeled as a 'weak' type 1 active galactic nucleus (AGN) in terms of its observed inner disk reflection signatures, in contrast to sources with very broad, strong iron lines such as MCG-6-30-15, which are likely much fewer in number. We compare physical properties of NGC 5548 and MCG-6-30-15 that might explain this difference in their reflection properties. Though there is some evidence that NGC 5548 may harbor a truncated inner accretion disk, this evidence is inconclusive, so we also consider light bending of the hard X-ray continuum emission in order to explain the lack of relativistic reflection in our observation. If the absence of a broad Fe Kα line is interpreted in the light-bending context, we conclude that the source of the hard X-ray continuum lies at radii r_s ≳ 100 r_g. We note, however, that light-bending models must be expanded to include a broader range of physical parameter space in order to adequately explain the spectral and timing properties of average AGNs, rather than just those with strong, broad iron lines.
NASA Astrophysics Data System (ADS)
Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie
2013-03-01
Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of the clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. Previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of the partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox virus induced disease before inoculation and after disease progression. The automated estimates are in close agreement with manual segmentation.
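Hard C-means is the hard-assignment counterpart of fuzzy C-means, i.e., a k-means-style alternation between assignment and centroid update. The one-dimensional sketch below, clustering scalar intensities as a stand-in for CT voxel values, is illustrative only and is not the authors' pipeline.

```python
import random

def hard_c_means(points, c, iters=50, seed=0):
    """Hard C-means (k-means): alternate hard assignment of each point
    to its nearest center with a centroid update.
    points: list of floats (e.g., voxel intensities)."""
    rng = random.Random(seed)
    centers = rng.sample(points, c)  # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for p in points:
            # hard assignment: each point belongs to exactly one cluster
            j = min(range(c), key=lambda j: (p - centers[j]) ** 2)
            clusters[j].append(p)
        centers = [sum(cl) / len(cl) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return sorted(centers)
```

On CT data the resulting intensity classes (air, soft tissue, lesion, vessel) would then be combined with the registration step described above to track lesion volume over time.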
NASA Astrophysics Data System (ADS)
Wang, Ke; Huang, Zhi; Zhong, Zhihua
2014-11-01
Due to the large variations of the environment, with an ever-changing background and vehicles of different shapes, colors and appearances, implementing a real-time on-board vehicle recognition system with high adaptability, efficiency and robustness in complicated environments remains challenging. This paper introduces a simultaneous detection and tracking framework for robust on-board vehicle recognition based on monocular vision technology. The framework utilizes a novel layered machine learning approach and a particle filter to build a multi-vehicle detection and tracking system. In the vehicle detection stage, a layered machine learning method is presented, which combines coarse search and fine search to obtain the target using an AdaBoost-based training algorithm. A pavement segmentation method based on characteristic similarity is proposed to estimate the most likely pavement area. Efficiency and accuracy are enhanced by restricting vehicle detection to the downsized pavement area. In the vehicle tracking stage, a multi-objective tracking algorithm based on target state management and a particle filter is proposed. The proposed system is evaluated on roadway video captured in a variety of traffic, illumination, and weather conditions. The evaluation results show that, under conditions of proper illumination and clear vehicle appearance, the proposed system achieves a 91.2% detection rate and a 2.6% false detection rate. Experiments compared to typical algorithms show that the presented algorithm reduces the false detection rate by nearly half at the cost of a 2.7%-8.6% decrease in detection rate. The proposed multi-vehicle detection and tracking system is promising for implementation in an on-board vehicle recognition system with high precision, strong robustness and low computational cost.
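The tracking stage rests on a particle filter. A minimal one-dimensional bootstrap particle-filter step (predict, reweight, resample) is sketched below as a generic illustration; the motion and measurement noise levels are arbitrary, not values from the paper.

```python
import math
import random

def particle_filter_step(particles, weights, measurement,
                         motion_std=0.5, meas_std=1.0, rng=random):
    """One bootstrap particle-filter step for a 1D state."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # Update: reweight each particle by the Gaussian likelihood of the
    # measurement given that particle's state.
    weights = [w * math.exp(-0.5 * ((measurement - p) / meas_std) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles with probability proportional to weight.
    particles = rng.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

In a vehicle tracker the state would be a bounding box rather than a scalar, and the likelihood would come from an appearance model rather than a Gaussian around a direct position measurement.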
Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
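The first method's combination of a Kalman update with a constraint-enforcement step can be sketched as follows. This is a simplified variant: with an identity weighting in the projection, box constraints reduce to clipping the estimate, so only that simplest case of the quadratic program is shown.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for estimate x, covariance P,
    measurement z, measurement matrix H, and measurement noise R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def project_box(x, lo, hi):
    """Enforce hard box constraints lo <= x <= hi by projecting the
    unconstrained estimate; for general linear inequalities and a
    covariance-based weighting this becomes a small QP."""
    return np.clip(x, lo, hi)
```

After each measurement update the projection pulls any out-of-range component back to the nearest feasible value, which is the mechanism the abstract describes for keeping health-parameter estimates physically meaningful.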
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied to computational problems that can be cast as path-based problems, and its implementation can be treated as a shortest-path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and its derivation can be extended to other shortest-path problems.
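For reference, the deterministic baseline for the task the agents solve, the shortest path between two nodes of a weighted graph, is the classical Dijkstra algorithm. The sketch below is that comparison point, not the smell-detection-agent algorithm itself.

```python
import heapq

def dijkstra(graph, src, dst):
    """Classical shortest path between two nodes.
    graph: {node: [(neighbor, weight), ...]} with nonnegative weights."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking predecessors back from dst.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]
```

Agent-based methods trade this guaranteed optimality for flexibility on problem variants where no efficient exact algorithm is known.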
Compact location problems with budget and communication constraints
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-07-01
The authors consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first weight function subject to a diameter or sum-constraint with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal value with respect to the first weight function, violating the constraint with respect to the second weight function by a factor of at most β. They show that in general obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. They also present efficient approximation algorithms for several of the problems studied, when both edge-weight functions obey the triangle inequality.
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linearly constrained minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and steer a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam towards the target user precisely, and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique in LCMV beamforming optimization as compared to the PSO technique. The algorithms were implemented using MATLAB.
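The closed-form LCMV weights that such metaheuristics start from are w = R⁻¹C(CᴴR⁻¹C)⁻¹f, which minimizes output power subject to the gain constraints Cᴴw = f. The sketch below is for a uniform linear array; the array size and the look/null directions are chosen arbitrarily for illustration.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, spacing=0.5):
    """Steering vector of a uniform linear array (spacing in wavelengths)."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_elements))

def lcmv_weights(R, C, f):
    """Closed-form LCMV: w = R^-1 C (C^H R^-1 C)^-1 f, minimizing output
    power w^H R w subject to C^H w = f."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

# Example: unit gain at 0 degrees, a null at 30 degrees, 8-element array.
n = 8
C = np.column_stack([steering_vector(0, n), steering_vector(30, n)])
f = np.array([1.0, 0.0])
R = np.eye(n)  # noise-only covariance, purely for illustration
w = lcmv_weights(R, C, f)
```

The metaheuristic approaches described in the abstract then perturb these weights to improve SINR beyond what the closed form achieves under model mismatch.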
Robust H∞ stabilization of a hard disk drive system with a single-stage actuator
NASA Astrophysics Data System (ADS)
Harno, Hendra G.; Kiin Woon, Raymond Song
2015-04-01
This paper considers a robust H∞ control problem for a hard disk drive system with a single-stage actuator. The hard disk drive system is modeled as a linear time-invariant uncertain system whose uncertain parameters and high-order dynamics are treated as uncertainties satisfying integral quadratic constraints. The robust H∞ control problem is transformed into a nonlinear optimization problem with a pair of parameterized algebraic Riccati equations as nonconvex constraints. The nonlinear optimization problem is then solved using a differential evolution algorithm to find stabilizing solutions to the Riccati equations. These solutions are used for synthesizing an output feedback robust H∞ controller to stabilize the hard disk drive system with a specified disturbance attenuation level.
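Differential evolution, used above to search the nonconvex Riccati-constrained problem, can be sketched in its basic DE/rand/1/bin form. The toy objective below stands in for the actual controller-synthesis cost, and the control parameters are illustrative defaults.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer. bounds: [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: difference of two random members added to a third.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(t, lo), hi)
                     for t, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: keep the trial if it is no worse.
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    k = min(range(pop_size), key=cost.__getitem__)
    return pop[k], cost[k]
```

In the paper's setting, evaluating f would involve solving the parameterized Riccati equations and checking that the resulting solutions are stabilizing.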
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Róbert; Toroczkai, Zoltán; Ercsey-Ravasz, Mária
2016-05-01
Transient chaos is a ubiquitous phenomenon characterizing the dynamics of phase-space trajectories evolving towards a steady-state attractor in physical systems as diverse as fluids, chemical reactions, and condensed matter systems. Here we show that transient chaos also appears in the dynamics of certain efficient algorithms searching for solutions of constraint satisfaction problems that include scheduling, circuit design, routing, database problems, and even Sudoku. In particular, we present a study of the emergence of hardness in Boolean satisfiability (k-SAT), a canonical class of constraint satisfaction problems, by using an analog deterministic algorithm based on a system of ordinary differential equations. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos of the dynamical system corresponding to the analog algorithm, and it expresses the rate at which the trajectory approaches a solution. We show that for a given density of constraints and fixed number of Boolean variables N, the hardness of formulas in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_χ in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic. A similar behavior is found in 4-SAT as well; however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter, κ ∼ N^{B|α−α_χ|^{1−γ}} with 0 < γ < 1. We demonstrate that the transition is generated by the appearance of metastable basins in the solution space as the density of constraints α is increased.
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Robert; Ercsey-Ravasz, Maria; Toroczkai, Zoltan
Transient chaos is a phenomenon characterizing the dynamics of phase-space trajectories evolving towards an attractor in physical systems. We show that transient chaos also appears in the dynamics of certain algorithms searching for solutions of constraint satisfaction problems (e.g., Sudoku). We present a study of the emergence of hardness in Boolean satisfiability (k-SAT) using an analog deterministic algorithm. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos, and it expresses the rate at which the trajectory approaches a solution. We show that the hardness in random k-SAT ensembles has a wide variation approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_c in the random 3-SAT ensemble, where dynamical trajectories become transiently chaotic; however, no such transition occurs for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter. We demonstrate that the transition is generated by the appearance of non-solution basins in the solution space as the density of constraints is increased.
Generalizing Atoms in Constraint Logic
NASA Technical Reports Server (NTRS)
Page, C. David, Jr.; Frisch, Alan M.
1991-01-01
This paper studies the generalization of atomic formulas, or atoms, that are augmented with constraints on or among their terms. The atoms may also be viewed as definite clauses whose antecedents express the constraints. Atoms are generalized relative to a body of background information about the constraints. This paper first examines generalization of atoms with only monadic constraints. The paper develops an algorithm for the generalization task and discusses algorithm complexity. It then extends the algorithm to apply to atoms with constraints of arbitrary arity. The paper also presents semantic properties of the generalizations computed by the algorithms, making the algorithms applicable to such problems as abduction, induction, and knowledge base verification. The paper emphasizes the application to induction and presents a PAC-learning result for constrained atoms.
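The classical starting point for generalizing atoms is Plotkin's least general generalization (lgg). The sketch below handles only function-free atoms and omits the constraint handling that is the paper's contribution.

```python
def lgg(atom1, atom2, _subst=None):
    """Least general generalization of two atoms (Plotkin). Atoms are
    tuples ('pred', arg, ...); strings starting with '?' are variables.
    The constraint-aware generalization of the paper is not modeled."""
    if _subst is None:
        _subst = {}
    if atom1[0] != atom2[0] or len(atom1) != len(atom2):
        return None  # atoms of different predicates have no atomic lgg
    out = [atom1[0]]
    for t1, t2 in zip(atom1[1:], atom2[1:]):
        if t1 == t2:
            out.append(t1)
        else:
            # the same pair of differing terms always maps to the same
            # variable, which keeps the generalization least general
            var = _subst.setdefault((t1, t2), '?v%d' % len(_subst))
            out.append(var)
    return tuple(out)
```

For example, generalizing p(a, b, a) and p(c, b, c) yields p(?v0, b, ?v0): the repeated difference is captured by a single shared variable rather than two independent ones.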
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise, and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm, a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise, the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite-difference smoothing constraints. A set of linear equations are formulated for each spatial point that are functions of the deformation velocities
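The inversion step can be sketched as an ordinary least-squares problem: each interferogram phase is modeled as the difference of displacements at its two acquisition times, giving one linear equation per pair. This toy omits the DEM-error term and smoothing constraints that the algorithm described above adds.

```python
import numpy as np

def invert_time_series(n_acq, pairs, phases):
    """Least-squares displacement time series from interferogram pairs.
    pairs: list of (i, j) acquisition indices with i < j; each phase is
    modeled as d[j] - d[i]. d[0] is fixed to zero as the reference."""
    A = np.zeros((len(pairs), n_acq - 1))
    for row, (i, j) in enumerate(pairs):
        if j > 0:
            A[row, j - 1] = 1.0
        if i > 0:
            A[row, i - 1] = -1.0
    d, *_ = np.linalg.lstsq(A, np.asarray(phases), rcond=None)
    return np.concatenate([[0.0], d])  # displacement at each acquisition

# Example: three acquisitions with true displacements [0, 1, 3] and a
# connected set of pairs (values are illustrative, in arbitrary units).
pairs = [(0, 1), (1, 2), (0, 2)]
phases = [1.0, 2.0, 3.0]
d = invert_time_series(3, pairs, phases)
```

When the pair network is disconnected, the plain least-squares system becomes rank deficient, which is where the SVD-based treatment cited above comes in.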
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
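Sizing the largest feasible uncertainty set admits a closed form in the simplest special case: a single linear hard constraint over a hyper-sphere. The sketch below is that special case only, not the paper's general sampling-free machinery.

```python
import numpy as np

def max_feasible_radius_linear(a, b, p_nominal):
    """Largest radius rho such that the hard constraint
    g(p) = a . p + b <= 0 holds for every p with ||p - p_nominal|| <= rho.
    For linear g the worst case over the sphere is attained along the
    gradient direction, at p_nominal + rho * a / ||a||, so the critical
    radius equals the nominal slack divided by ||a||."""
    margin = -(a @ p_nominal + b)   # slack at the nominal parameter
    if margin < 0:
        return 0.0  # the constraint already fails at the nominal design
    return margin / np.linalg.norm(a)
```

Comparing this critical radius with the radius of the actual uncertainty model gives the analytically verifiable robustness assessment the abstract describes; nonlinear constraints require the optimization-based formulation of the paper instead.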
Compact location problems with budget and communication constraints
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-05-01
We consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first weight function under diameter or sum-constraints with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal function value, violating the constraint with respect to the second weight function by a factor of at most β. We observe that in general obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. We present efficient approximation algorithms for the case when both edge-weight functions obey the triangle inequality. For the problem of minimizing the diameter under a diameter constraint with respect to the second weight function, we provide a (2, 2)-approximation algorithm. We also show that no polynomial-time algorithm can provide an (α, 2 − ε)- or (2 − ε, β)-approximation for any fixed ε > 0 and α, β ≥ 1, unless P = NP. This result is proved to remain true even if one fixes ε′ > 0 and allows the algorithm to place only 2p|V|^(1/6 − ε′) facilities. Our techniques can be extended to the case when either the objective or the constraint is of sum type, and also to handle additional weights on the nodes of the graph.
Constraint Optimization Literature Review
2015-11-01
This report reviews the literature on constraint optimization. Such problems are computationally intractable (NP-hard) in general, meaning that for any algorithm there exists a problem instance for which the runtime is exponential in the size of the problem input.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data with the ability to augment or modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Constraints in Genetic Programming
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
1996-01-01
Genetic programming refers to a class of genetic algorithms utilizing generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements -- one reduces only the number of program instances, the other reduces both the space of program structures as well as their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are not less efficient than the standard genetic programming operators if one preprocesses the constraints - the necessary mechanisms are identified.
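The core idea of restricting the program space with constraints can be sketched as follows; the constraint table, function set, and operators below are illustrative inventions for a toy arithmetic language, not the specification language proposed in the paper.

```python
import random

# Hypothetical constraint table: for each function, the set of symbols allowed
# as each child. This mirrors the idea of restricting the space of program
# structures before search begins.
ALLOWED = {
    "add": [{"add", "mul", "x", "1"}, {"add", "mul", "x", "1"}],
    "mul": [{"x", "1"}, {"add", "x", "1"}],  # e.g. first factor must be a leaf
}
TERMINALS = {"x", "1"}

def random_tree(symbol="add", depth=3, rng=random):
    """Grow a program tree that satisfies the child constraints by construction."""
    if symbol in TERMINALS or depth == 0:
        return symbol if symbol in TERMINALS else "x"
    children = []
    for allowed in ALLOWED[symbol]:
        # near the depth limit, only terminals are eligible children
        pool = [s for s in allowed if depth > 1 or s in TERMINALS] or ["x"]
        children.append(random_tree(rng.choice(pool), depth - 1, rng))
    return (symbol, *children)

def valid(tree):
    """Check that every internal node respects the constraint table."""
    if isinstance(tree, str):
        return True
    op, *children = tree
    return all(
        (c if isinstance(c, str) else c[0]) in allowed and valid(c)
        for c, allowed in zip(children, ALLOWED[op])
    )

rng = random.Random(0)
t = random_tree("add", 3, rng)
print(valid(t))  # a tree grown under the constraints is always valid
```

Operators built this way guarantee offspring validity: generation (and, analogously, crossover and mutation) only ever proposes trees that the constraint table accepts, so no search effort is wasted on invalid programs.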
Agyepong, Irene Akua
2015-03-01
A major constraint to the application of any form of knowledge and principles is the awareness, understanding and acceptance of the knowledge and principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be an awareness and understanding of ST and how to apply it. This is a fundamental constraint; given the growing desire to apply ST concepts in LMIC health systems and to understand and evaluate their effects, an essential first step is to enable a widespread and deeper understanding of ST and of how to apply it.
The Probabilistic Admissible Region with Additional Constraints
NASA Astrophysics Data System (ADS)
Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.
The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea
Nan, Zhufen; Chi, Xuefen
2016-12-20
The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.
A hybrid approach to protein folding problem integrating constraint programming with local search
2010-01-01
Background The protein folding problem remains one of the most challenging open problems in computational biology. Simplified models in terms of lattice structure and energy function have been proposed to ease the computational hardness of this optimization problem. Heuristic search algorithms and constraint programming are two common techniques to approach this problem. The present study introduces a novel hybrid approach to simulate the protein folding problem using a constraint programming technique integrated within local search. Results Using the face-centered-cubic lattice model and the 20 amino acid pairwise interaction energy function for the protein folding problem, a constraint programming technique has been applied to generate the neighbourhood conformations that are to be used in a generic local search procedure. Experiments have been conducted for a few small and medium sized proteins. Results have been compared with both a pure constraint programming approach and local search using a well-established local move set. Substantial improvements have been observed in terms of final energy values within acceptable runtime using the hybrid approach. Conclusion Constraint programming approaches usually provide optimal results but become slow as the problem size grows. Local search approaches are usually faster but do not guarantee optimal solutions and tend to get stuck in local minima. The encouraging results obtained on the small proteins show that these two approaches can be combined efficiently to obtain better quality solutions within acceptable time. It also encourages future researchers to adopt hybrid techniques to solve other hard optimization problems. PMID:20122212
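The hybrid pattern, a local search whose neighbourhood generator only yields candidates that satisfy the hard constraints by construction, can be sketched on a toy problem (N-queens over permutations stands in for the lattice protein model here; this illustrates the pattern, not the paper's algorithm).

```python
import random

def energy(rows):
    """Number of diagonal attacks; the quantity local search minimizes."""
    n = len(rows)
    return sum(abs(rows[i] - rows[j]) == j - i
               for i in range(n) for j in range(i + 1, n))

def feasible_neighbours(rows):
    """Constraint-driven generator: only permutation-preserving swaps are
    proposed, so the row/column hard constraints always hold."""
    n = len(rows)
    for i in range(n):
        for j in range(i + 1, n):
            nb = rows[:]
            nb[i], nb[j] = nb[j], nb[i]
            yield nb

def local_search(n=8, restarts=100, rng=None):
    """First-improvement hill climbing with random restarts over the
    constraint-filtered neighbourhood."""
    rng = rng or random.Random(3)
    best = None
    for _ in range(restarts):
        rows = rng.sample(range(n), n)  # feasible by construction
        improved = True
        while improved:
            improved = False
            for nb in feasible_neighbours(rows):
                if energy(nb) < energy(rows):
                    rows, improved = nb, True
                    break
        if best is None or energy(rows) < energy(best):
            best = rows
    return best

print(energy(local_search()))  # restarts usually drive the energy to 0
```

Because every candidate already satisfies the hard constraints, the local search spends all of its effort on improving the energy, which is the efficiency argument behind the hybrid.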
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.
Karthikeyan, M; Raja, T Sree Ranga
2015-01-01
Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints that make it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine them. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.
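A sketch of the DHSPM ingredients: a harmony memory, HMCR and PAR varied dynamically over the iterations, and a polynomial-mutation pitch adjustment. The parameter schedules and the mutation index eta are illustrative guesses, not the paper's exact settings, and the objective is a toy sphere function rather than the valve-point cost curve.

```python
import random

def dhspm(f, bounds, hms=10, iters=2000, rng=None):
    """Harmony search with dynamic HMCR/PAR and polynomial mutation (sketch)."""
    rng = rng or random.Random(1)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    memory.sort(key=f)
    for t in range(iters):
        hmcr = 0.70 + 0.25 * t / iters   # considering rate grows over time
        par = 0.45 - 0.35 * t / iters    # pitch adjustment rate shrinks
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = memory[rng.randrange(hms)][j]   # draw from memory
                if rng.random() < par:
                    # polynomial mutation (eta = 20) replaces a fixed bandwidth
                    u, eta = rng.random(), 20.0
                    if u < 0.5:
                        delta = (2 * u) ** (1 / (eta + 1)) - 1
                    else:
                        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
                    x = min(hi, max(lo, x + delta * (hi - lo)))
            else:
                x = rng.uniform(lo, hi)             # random improvisation
            new.append(x)
        if f(new) < f(memory[-1]):                  # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

best = dhspm(lambda v: sum(x * x for x in v), [(-5, 5)] * 3)
```

Because HMCR and PAR are functions of the iteration counter, no tuning of these two parameters is needed up front, which is the point of the "dynamic" variant.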
NASA Technical Reports Server (NTRS)
Knox, C. E.
1983-01-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
Ji, Bin; Yuan, Xiaohui; Yuan, Yanbin
2017-02-24
The continuous berth allocation problem (BAPC) is a major optimization problem in transportation engineering. It aims at minimizing the port stay time of ships by optimally scheduling ships to the berthing areas along quays while satisfying several practical constraints. Because the BAPC is proved NP-hard, most previous studies handle it with heuristics using various constraint-handling strategies. In this paper, we transform the constrained single-objective BAPC (SBAPC) model into an unconstrained multiobjective BAPC (MBAPC) model by converting the constraint violation into another objective, a technique known as multiobjective optimization (MOO) constraint handling. We then propose a bias-selection modified non-dominated sorting genetic algorithm II (MNSGA-II) to optimize the MBAPC, in which an archive is designed as an efficient complementary mechanism to bias the search toward feasible solutions. Finally, the proposed MBAPC model and the MNSGA-II approach are tested on instances from the literature and on newly generated instances. We compare the results obtained by MNSGA-II with those of other MOO algorithms under the MBAPC model, and with those of single-objective methods under the SBAPC model. The comparison shows the feasibility of the MBAPC model and the advantages of the MNSGA-II algorithm.
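The MOO constraint-handling idea reduces each comparison to Pareto dominance over the pair (objective value, constraint violation), both minimized. A minimal sketch of the dominance test and the resulting non-dominated front (the sorting machinery of NSGA-II itself is omitted):

```python
def dominates(a, b):
    """Pareto dominance on (objective, constraint-violation) tuples,
    both to be minimized: a is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy population: (cost, total constraint violation). A feasible but costly
# solution and an infeasible but cheap one can both survive on the front,
# which is what keeps the search from discarding near-feasible solutions.
pts = [(3.0, 0.0), (2.5, 1.0), (4.0, 0.0), (2.0, 2.0)]
print(nondominated_front(pts))  # -> [(3.0, 0.0), (2.5, 1.0), (2.0, 2.0)]
```

The dominated point (4.0, 0.0) is dropped because (3.0, 0.0) is both cheaper and equally feasible; the rest trade cost against violation and remain on the front.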
NASA Technical Reports Server (NTRS)
Zweben, Monte
1991-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
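The constraint-based iterative repair loop can be sketched generically; the schedule representation, constraint pairs, and repair functions below are illustrative, not GERRY's actual data structures.

```python
# Each constraint is a (check, repair) pair: check tests the schedule, and
# repair nudges it toward feasibility when the check fails.
def iterative_repair(schedule, constraints, steps=100):
    """Repeatedly repair the first violated constraint until none remain."""
    for _ in range(steps):
        violated = [(chk, fix) for chk, fix in constraints if not chk(schedule)]
        if not violated:
            break
        _, fix = violated[0]
        schedule = fix(schedule)
    return schedule

# Toy instance: task "a" has duration 1 and "b" needs a 2-unit gap after it,
# so "b" must start at least 3 units after "a"; no task may start before 0.
cons = [
    (lambda s: s["b"] >= s["a"] + 3,
     lambda s: {**s, "b": s["a"] + 3}),
    (lambda s: min(s.values()) >= 0,
     lambda s: {k: max(0, v) for k, v in s.items()}),
]
print(iterative_repair({"a": 0, "b": 1}, cons))  # -> {'a': 0, 'b': 3}
```

In a full system the repair step would also weigh preference criteria, so the loop trades off hard-rule violations against schedule quality rather than only restoring feasibility.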
FATIGUE OF BIOMATERIALS: HARD TISSUES
Arola, D.; Bajaj, D.; Ivancik, J.; Majd, H.; Zhang, D.
2009-01-01
The fatigue and fracture behavior of hard tissues are topics of considerable interest today. This special group of organic materials comprises the highly mineralized and load-bearing tissues of the human body, and includes bone, cementum, dentin and enamel. An understanding of their fatigue behavior and the influence of loading conditions and physiological factors (e.g. aging and disease) on the mechanisms of degradation are essential for achieving lifelong health. But there is much more to this topic than the immediate medical issues. There are many challenges to characterizing the fatigue behavior of hard tissues, much of which is attributed to size constraints and the complexity of their microstructure. The relative importance of the constituents on the type and distribution of defects, rate of coalescence, and their contributions to the initiation and growth of cracks, are formidable topics that have not reached maturity. Hard tissues also provide a medium for learning and a source of inspiration in the design of new microstructures for engineering materials. This article briefly reviews fatigue of hard tissues with shared emphasis on current understanding, the challenges and the unanswered questions. PMID:20563239
On Constraints in Assembly Planning
Calton, T.L.; Jones, R.E.; Wilson, R.H.
1998-12-17
Constraints on assembly plans vary depending on product, assembly facility, assembly volume, and many other factors. Assembly costs and other measures to optimize vary just as widely. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their products and production environments. We begin this article by surveying the types of user criteria, both constraints and quality measures, that have been accepted by assembly planning systems to date. The survey is organized along several dimensions, including strategic vs. tactical criteria; manufacturing requirements vs. requirements of the automated planning process itself; and the information needed to assess compliance with each criterion. The latter strongly influences the efficiency of planning. We then focus on constraints. We describe a framework to support a wide variety of user constraints for intuitive and efficient assembly planning. Our framework expresses all constraints on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. Constraints are implemented as simple procedures that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to a number of complex assemblies, including one with 472 parts.
Analysis of Algorithms: Coping with Hard Problems
ERIC Educational Resources Information Center
Kolata, Gina Bari
1974-01-01
Although today's computers can perform as many as one million operations per second, there are many problems that are still too large to be solved in a straightforward manner. Recent work indicates that many approximate solutions are useful and more efficient than exact solutions. (Author/RH)
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery costs or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), the tabu search algorithm (TSA), genetic algorithms (GA), and hierarchical multiplex structure (HIMS). Most of these methods are time consuming and run a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.
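A minimal clonal-selection sketch of the AIS principle (clone the best antibodies, hypermutate the clones, with worse-ranked antibodies mutated more strongly, and keep the fittest) on a toy routing instance; the operators and parameters are illustrative, not those of the paper.

```python
import random

def clonal_selection(cost, n, pop=20, gens=200, rng=None):
    """Clonal selection over permutations: antibodies are tours,
    affinity is (negative) tour cost, mutation is random swaps."""
    rng = rng or random.Random(2)

    def mutate(tour, strength):
        t = tour[:]
        for _ in range(strength):
            i, j = rng.randrange(n), rng.randrange(n)
            t[i], t[j] = t[j], t[i]
        return t

    antibodies = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        antibodies.sort(key=cost)
        # clone the 5 best; worse rank -> stronger hypermutation
        clones = [mutate(a, 1 + rank)
                  for rank, a in enumerate(antibodies[:5])
                  for _ in range(3)]
        antibodies = sorted(antibodies + clones, key=cost)[:pop]
    return antibodies[0]

# Toy "vehicle tour": points on a line; the cheapest tour visits them in order.
pts = [0, 3, 1, 4, 2, 5]
cost = lambda t: sum(abs(pts[t[i]] - pts[t[i - 1]]) for i in range(1, len(t)))
best = clonal_selection(cost, 6)
print(cost(best))
```

The rank-dependent mutation strength is the immune-system analogy: high-affinity antibodies are refined locally while low-affinity ones explore more widely.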
NASA Astrophysics Data System (ADS)
Li, Yuzhong
Using a genetic algorithm (GA) to solve the winner determination problem (WDP) with many bids and items, run under different distributions, is difficult: the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which affect the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators, preprocessing, bid insertion, and exchange recombination, and uses a Monkey-king elite preservation strategy. Experimental results show that the improved MKGA is better than a standard GA (SGA) in required population size and computation. Problems that a traditional branch-and-bound algorithm can hardly solve, the improved MKGA solves with better results.
Cluster and constraint analysis in tetrahedron packings.
Jin, Weiwei; Lu, Peng; Liu, Lufeng; Li, Shuixiang
2015-04-01
The disordered packings of tetrahedra often show no obvious macroscopic orientational or positional order for a wide range of packing densities, and it has been found that the local order in particle clusters is the main order form of tetrahedron packings. Therefore, a cluster analysis is carried out to investigate the local structures and properties of tetrahedron packings in this work. We obtain a cluster distribution of differently sized clusters, and peaks are observed at two special clusters, i.e., dimer and wagon wheel. We then calculate the amounts of dimers and wagon wheels, which are observed to have linear or approximate linear correlations with packing density. Following our previous work, the amount of particles participating in dimers is used as an order metric to evaluate the order degree of the hierarchical packing structure of tetrahedra, and an order map is consequently depicted. Furthermore, a constraint analysis is performed to determine the isostatic or hyperstatic region in the order map. We employ a Monte Carlo algorithm to test jamming and then suggest a new maximally random jammed packing of hard tetrahedra from the order map with a packing density of 0.6337.
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm, with the first limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second problem for its part concerns the multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, and in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results of HSA showed that this algorithm could provide very good solutions when compared to those obtained through other approaches.
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data is proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive inputs from users in the optimizing process of feature selection. This study investigates this question by fixing a few user-input features in the finally selected feature subset and formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with the constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
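The idea of forcing user-input features into the selected subset can be sketched with a greedy stand-in for the constrained program; fsCoP's actual formulation is a constrained programming model, and the score function here is a toy.

```python
# Sketch of constraint-style feature selection: user-fixed features are forced
# into the subset (the hard constraint), and the remaining slots are filled
# greedily by whatever maximizes the score of the growing subset.
def select_features(score, all_feats, fixed, k):
    chosen = list(fixed)                       # user-input features: always kept
    while len(chosen) < k:
        best = max((f for f in all_feats if f not in chosen),
                   key=lambda f: score(chosen + [f]))
        chosen.append(best)
    return chosen

# Toy score: reward subsets whose feature indices sum high, for demonstration.
score = lambda fs: sum(fs)
print(select_features(score, range(10), fixed=[2], k=3))  # -> [2, 9, 8]
```

Even when feature 2 scores poorly on its own, it survives selection because it enters as a constraint rather than competing on the score, which is the behavior the abstract describes for user-input biomarkers.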
Foundations of support constraint machines.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-02-01
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
Use of Justified Constraints in Coherent Diffractive Imaging
Kim, S.; McNulty, I.; Chen, Y. K.; Putkunz, C. T.; Dunand, D. C.
2011-09-09
We demonstrate the use of physically justified object constraints in x-ray Fresnel coherent diffractive imaging on a sample of nanoporous gold prepared by dealloying. Use of these constraints in the reconstruction algorithm enabled highly reliable imaging of the sample's shape and quantification of the 23- to 52-nm pore structure within it without use of a tight object support constraint.
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Network interdiction with budget constraints
Santhi, Nankakishore; Pan, Feng
2009-01-01
Several scenarios exist in the modern interconnected world which call for efficient network interdiction algorithms. Applications are varied, including computer network security, prevention of spreading of Internet worms, policing international smuggling networks, controlling spread of diseases and optimizing the operation of large public energy grids. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs. Many of these questions turn out to be computationally hard to tackle. We present a particularly interesting practical form of the interdiction question which we show to be computationally tractable. A polynomial time algorithm is then presented for this problem.
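The budget-constrained interdiction question can be illustrated by brute force on a tiny graph, which also hints at why the general problem is computationally hard: the search over removable edge sets grows exponentially. A sketch, assuming a shortest-path evader:

```python
from itertools import combinations
import heapq

def shortest(graph, s, t, removed=frozenset()):
    """Dijkstra over an adjacency-dict graph, skipping interdicted edges."""
    dist, seen = {s: 0}, set()
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == t:
            return d
        for v, w in graph.get(u, {}).items():
            if (u, v) not in removed and v not in seen:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return float("inf")

def interdict(graph, s, t, budget):
    """Brute-force interdiction: remove at most `budget` edges so as to
    maximize the evader's shortest s-t path (exponential in the edge count)."""
    edges = [(u, v) for u in graph for v in graph[u]]
    best = max(
        (frozenset(c) for r in range(budget + 1)
         for c in combinations(edges, r)),
        key=lambda c: min(shortest(graph, s, t, c), 10**6),
    )
    return shortest(graph, s, t, best)

g = {"s": {"a": 1, "b": 4}, "a": {"t": 1}, "b": {"t": 1}}
print(interdict(g, "s", "t", 1))  # removing s->a forces the 5-unit route
```

Tractable special cases, like the one the abstract reports, replace this exhaustive enumeration with a polynomial-time algorithm that exploits problem structure.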
Parallel-batch scheduling and transportation coordination with waiting time constraint.
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing with a waiting time constraint. The orders located at the different suppliers are transported by some vehicles to a manufacturing facility for further processing. One vehicle can load only one order in one shipment. Each order arriving at the facility must be processed within the limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule to minimize the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a dynamic programming algorithm in pseudopolynomial time is provided to prove its ordinary NP-hardness. An optimal algorithm in polynomial time is presented to solve a special case with equal processing times and equal transportation times for each order.
Constraint-based interactive assembly planning
Jones, R.E.; Wilson, R.H.; Calton, T.L.
1997-03-01
The constraints on assembly plans vary depending on the product, assembly facility, assembly volume, and many other factors. This paper describes the principles and implementation of a framework that supports a wide variety of user-specified constraints for interactive assembly planning. Constraints from many sources can be expressed on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. All constraints are implemented as filters that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Replanning is fast enough to enable a natural plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to several complex assemblies. 12 refs., 2 figs., 3 tabs.
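The filter architecture described above can be sketched as plain predicates over proposed operations: the planner keeps an operation only if every filter accepts it. The operation encoding and the two example filters below are invented for this sketch.

```python
def passes_all(operation, filters):
    """An operation proposed by the planner survives only if every
    user-specified constraint filter accepts it."""
    return all(f(operation) for f in filters)

# Two illustrative sequencing filters (hypothetical encoding):
# part "B" may not be mated until "A" is already placed, and an
# operation may involve at most one reorientation of the subassembly.
a_before_b = lambda op: not (op["part"] == "B" and "A" not in op["placed"])
few_flips = lambda op: op.get("reorientations", 0) <= 1
```

Rejected operations are simply dropped and the planner proposes alternatives, which is what makes the plan-view-constrain-replan cycle cheap.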
Ordering of hard particles between hard walls
NASA Astrophysics Data System (ADS)
Chrzanowska, A.; Teixeira, P. I. C.; Ehrentraut, H.; Cleaver, D. J.
2001-05-01
The structure of a fluid of hard Gaussian overlap particles of elongation κ = 5, confined between two hard walls, has been calculated from density-functional theory and Monte Carlo simulations. By using the exact expression for the excluded volume kernel (Velasco E and Mederos L 1998 J. Chem. Phys. 109 2361) and solving the appropriate Euler-Lagrange equation entirely numerically, we have been able to extend our theoretical predictions into the nematic phase, which had up till now remained relatively unexplored due to the high computational cost. Simulation reveals a rich adsorption behaviour with increasing bulk density, which is described semi-quantitatively by the theory without any adjustable parameters.
The Approximability of Learning and Constraint Satisfaction Problems
2010-10-07
The Approximability of Learning and Constraint Satisfaction Problems, Yi Wu, CMU-CS-10-142, October 7, 2010, School of Computer Science, Carnegie Mellon University. The thesis studies the approximability of two classes of NP-hard problems: Constraint Satisfaction Problems (CSPs) and Computational Learning Problems. For CSPs, we mainly study the
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
NASA Technical Reports Server (NTRS)
Hauser, D. L.; Buras, D. F.; Corbin, J. M.
1987-01-01
Rubber-hardness tester modified for use on rigid polyurethane foam. Provides objective basis for evaluation of improvements in foam manufacturing and inspection. Typical acceptance criterion requires minimum hardness reading of 80 on modified tester. With adequate correlation tests, modified tester used to measure indirectly tensile and compressive strengths of foam.
Session: Hard Rock Penetration
Tennyson, George P. Jr.; Dunn, James C.; Drumheller, Douglas S.; Glowka, David A.; Lysne, Peter
1992-01-01
This session at the Geothermal Energy Program Review X: Geothermal Energy and the Utility Market consisted of five presentations: ''Hard Rock Penetration - Summary'' by George P. Tennyson, Jr.; ''Overview - Hard Rock Penetration'' by James C. Dunn; ''An Overview of Acoustic Telemetry'' by Douglas S. Drumheller; ''Lost Circulation Technology Development Status'' by David A. Glowka; ''Downhole Memory-Logging Tools'' by Peter Lysne.
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, Gary Karl
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can dynamically adapt its strategy to the properties of the specific problem instance. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
NASA Astrophysics Data System (ADS)
Liang, Xuecheng
Dynamic hardness (Pd) of 22 different pure metals and alloys spanning a wide range of elastic modulus, static hardness, and crystal structure was measured in a gas pulse system. The indentation contact diameter with an indenting sphere and the radius of curvature (r2) of the indentation were determined by curve fitting of the indentation profile data. r2 measured by the profilometer was compared with that calculated from the Hertz equation in both dynamic and static conditions. The results indicated that the curvature change due to elastic recovery after unloading is approximately proportional to the parameters predicted by the Hertz equation. However, r2 is less than the radius of the indenting sphere in many cases, which contradicts the Hertz analysis. This discrepancy is believed to be due to the difference between the Hertzian and actual stress distributions underneath the indentation. Factors which influence indentation elastic recovery were also discussed. It was found that the Tabor dynamic hardness formula always gives a lower value than that obtained directly from the dynamic hardness definition ΔE/V, because of errors mainly from Tabor's rebound equation and the assumption, made in deriving Tabor's formula, that the dynamic hardness at the beginning of the rebound process (Pr) is equal to the kinetic energy change of the impacting sphere over the formed crater volume (Pd). Experimental results also suggested that the dynamic-to-static hardness ratio of a material is primarily determined by its crystal structure and static hardness. The effects of strain rate and temperature rise on this ratio were discussed. A vacuum rotating-arm apparatus was built to measure Pd at 70, 127, and 381 μm sphere sizes; these results showed that Pd is highly dependent on the sphere size due to strain rate effects. Pd was also used as a substitute for static hardness to correlate with the abrasion and erosion resistance of metals and alloys. The particle size effects observed in erosion were
Dynamic Constraint Satisfaction with Reasonable Global Constraints
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2003-01-01
Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Over the years, nurse scheduling has been a prominent problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. The current undesirable work schedules are partly responsible for that working condition. Basically, there is a lack of complementarity between the head nurse's responsibilities and the nurses' needs. In particular, given highly diverse nurse preferences, a sophisticated challenge in nurse scheduling is the failure to foster tolerance between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve when satisfying nurses' diverse requests while upholding imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of the EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, hard, semi-hard, and soft, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to the efficiency of constraint handling and fitness computation as well as flexibility in the search, corresponding to the employment of exploration and exploitation principles.
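A common way to realize the hard/semi-hard/soft distinction inside an EA is a weighted penalty fitness in which hard violations dominate everything else; the weights and function below are illustrative, not the paper's.

```python
def roster_penalty(schedule, hard, semi_hard, soft,
                   w_hard=1e6, w_semi=1e3, w_soft=1.0):
    """Penalty fitness for a nurse roster (lower is better).

    Each constraint is a callable returning its violation count for the
    schedule. Hard violations are weighted so heavily that any roster
    satisfying all hard constraints beats any roster that does not.
    """
    total = lambda constraints: sum(c(schedule) for c in constraints)
    return (w_hard * total(hard)
            + w_semi * total(semi_hard)
            + w_soft * total(soft))
```

Selection and mutation operators then only need to compare these scalar penalties, keeping constraint handling separate from the search mechanics.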
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
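The iteration described above (maximum likelihood for Poisson counts with a positivity constraint encoded in the optimization scheme) has the same shape as the classical ML-EM/Richardson-Lucy update, sketched here for a generic linear model counts ≈ H x. The paper's regularizing stopping rule is replaced by a fixed iteration count in this sketch.

```python
import numpy as np

def ml_em(H, counts, n_iter=100):
    """ML-EM for Poisson data with a linear forward model counts ~ H @ x.

    The multiplicative update keeps every component of x positive as long
    as the starting point is positive, which is how the positivity
    constraint enters the iteration.
    """
    x = np.ones(H.shape[1])                    # positive starting point
    sens = H.sum(axis=0)                       # sensitivity (column sums)
    sens = np.where(sens > 0, sens, 1.0)
    for _ in range(n_iter):
        proj = H @ x                           # expected counts
        ratio = np.divide(counts, proj,
                          out=np.zeros_like(proj), where=proj > 0)
        x = x * (H.T @ ratio) / sens           # EM multiplicative update
    return x
```

With an identity forward model the iteration recovers the counts themselves; in the RHESSI setting H would encode the rotating-modulation-collimator response mapping an image to count modulation profiles.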
ERIC Educational Resources Information Center
Stocker, H. Robert; Hilton, Thomas S. E.
1991-01-01
Suggests strategies that make hard disk organization easy and efficient, such as making, changing, and removing directories; grouping files by subject; naming files effectively; backing up efficiently; and using PATH. (JOW)
Canavan, G.H.
1997-02-01
The inference of the diameter of hard objects is insensitive to radiation efficiency. Deductions of radiation efficiency from observations are very sensitive - possibly overly so. Inferences of the initial velocity and trajectory vary similarly, and hence are comparably sensitive.
Condensation transition in polydisperse hard rods.
Evans, M R; Majumdar, S N; Pagonabarraga, I; Trizac, E
2010-01-07
We study a mass transport model, where spherical particles diffusing on a ring can stochastically exchange volume v, with the constraint of a fixed total volume V = sum_{i=1}^{N} v_i, N being the total number of particles. The particles, referred to as p-spheres, have a linear size that behaves as v_i^{1/p}, and our model thus represents a gas of polydisperse hard rods with variable diameters v_i^{1/p}. We show that our model admits a factorized steady state distribution which provides the size distribution that minimizes the free energy of a polydisperse hard-rod system, under the constraints of fixed N and V. Complementary approaches (explicit construction of the steady state distribution on the one hand; density functional theory on the other hand) completely and consistently specify the behavior of the system. A real space condensation transition is shown to take place for p > 1; beyond a critical density a macroscopic aggregate is formed and coexists with a critical fluid phase. Our work establishes the bridge between stochastic mass transport approaches and the optimal polydispersity of hard sphere fluids studied in previous articles.
Constraint Theory and Broken Bond Bending Constraints in Oxide Glasses
NASA Astrophysics Data System (ADS)
Zhang, Min
can understand the rigidity percolation threshold shift from x = 0.20 to x = 0.23 if one assumes that a fraction of 20% of chalcogen atoms have their bond-angle constraints broken. A simple interpretation is that these chalcogen atoms (with broken bond-bending constraints) represent short floppy chain segments connecting the more rigid tetrahedral Ge(Se_{1/2})_4 units. Thus the concept of broken bond-bending constraints plays an important role in promoting the glass-forming tendency of materials. The extended constraint theory has also found application to the mechanical properties of hydrogenated diamond-like carbon, silicon carbide, and silicon thin films. We have established for the first time a linear relationship between measured hardness and hardness index, a geometric parameter derived from constraint theory. The slopes of such linear functions for different types of materials are determined by a chemical effect that reflects the bonding type and interaction strength among atoms.
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
Constraint checking during error recovery
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.; Wong, Johnny S. K.
1993-01-01
The system-level software onboard a spacecraft is responsible for recovery from communication, power, thermal, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This work provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering activities of the spacecraft. The results are applicable to a variety of control systems with critical constraints on the timing and ordering of the events they control.
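A graph model of ordering constraints of the kind described can be validated by checking that the combined precedence graph is acyclic: a cycle means no interleaving of the error-recovery software with the ongoing science or engineering activity can satisfy every constraint. A minimal sketch using Kahn's topological sort (the integer event encoding is hypothetical):

```python
from collections import deque

def constraints_consistent(n_events, precedences):
    """True iff some ordering of all events satisfies every constraint.

    precedences: iterable of (u, v) pairs meaning event u must occur
    before event v. Consistency holds exactly when the precedence graph
    is acyclic, which Kahn's algorithm detects by counting how many
    events it can schedule.
    """
    indeg = [0] * n_events
    adj = [[] for _ in range(n_events)]
    for u, v in precedences:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n_events) if indeg[i] == 0)
    ordered = 0
    while queue:
        u = queue.popleft()
        ordered += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return ordered == n_events   # False => a cycle, i.e. conflicting constraints
```

This checks one combined constraint graph; validating all concurrent executions, as the paper does, amounts to running such a check over the possible interleavings of the two processes.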
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems.
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-09-25
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm solves the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Plenty of simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems.
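The equal-sum principle quoted above (power plus reciprocal channel-to-noise ratio equal across a user's subchannels) is the classical water-filling condition; a per-user sketch under that reading, with an invented function name:

```python
def waterfill(cnr, total_power):
    """Split total_power over one user's subchannels so that
    p_k + 1/CNR_k equals a common water level mu on active subchannels.

    cnr: channel-to-noise ratios of the user's assigned subchannels.
    Subchannels too poor to reach the water level get zero power.
    """
    inv = sorted(1.0 / g for g in cnr)        # noise floors, ascending
    n = len(inv)
    # Drop the worst channels while they would receive nonpositive power.
    while n > 1 and (total_power + sum(inv[:n])) / n <= inv[n - 1]:
        n -= 1
    mu = (total_power + sum(inv[:n])) / n     # water level
    return [max(mu - 1.0 / g, 0.0) for g in cnr]
```

For two equal channels the power splits evenly; for a channel ten times worse than its neighbor, all power goes to the good channel, as expected from water-filling.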
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
ERIC Educational Resources Information Center
Parrino, Frank M.
2003-01-01
Interviews with school board members and administrators produced a list of suggestions for balancing a budget in hard times. Among these are changing calendars and schedules to reduce heating and cooling costs; sharing personnel; rescheduling some extracurricular activities; and forming cooperative agreements with other districts. (MLF)
ERIC Educational Resources Information Center
Berry, John N., III
2009-01-01
Roberta Stevens and Kent Oliver are campaigning hard for the presidency of the American Library Association (ALA). Stevens is outreach projects and partnerships officer at the Library of Congress. Oliver is executive director of the Stark County District Library in Canton, Ohio. They have debated, discussed, and posted web sites, Facebook pages,…
ERIC Educational Resources Information Center
Sturgeon, Julie
2008-01-01
Acting on information from students who reported seeing a classmate looking at inappropriate material on a school computer, school officials used forensics software to plunge the depths of the PC's hard drive, searching for evidence of improper activity. Images were found in a deleted Internet Explorer cache as well as deleted file space.…
Moisture influence on near-infrared prediction of wheat hardness
NASA Astrophysics Data System (ADS)
Windham, William R.; Gaines, Charles S.; Leffler, Richard G.
1991-02-01
Recently, near-infrared (NIR) reflectance instrumentation has been used to provide an empirical measure of wheat hardness. This hardness scale is based on the radiation-scattering properties of meal particles at 1680 and 2230 nm. Hard wheats have a larger mean particle size (PS) after grinding than soft wheats. However, wheat kernel moisture content can influence mean PS after grinding. The objective of this study was to determine the sensitivity of NIR wheat hardness measurements to moisture content and to make the hardness score independent of moisture by correcting hardness measurements for the actual moisture content of measured samples. Forty wheat cultivars comprising hard red winter, hard red spring, soft red winter, and soft white winter were used. Wheat kernel subsamples were stored at 20, 40, 60, and 80% relative humidity (RH). After equilibration, samples were ground and the meal analyzed for hardness score (HS) and moisture. HS were 48, 50, 54, and 65 for 20, 40, 60, and 80% RH, respectively. Differences in HS within each wheat class were the result of a moisture-induced change in the PS of the meal. An algorithm was developed to correct HS to 11% moisture. This correction provides HS that are nearly independent of moisture content.
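The correction described amounts to adjusting a measured hardness score toward the 11% moisture reference; one plausible linear form is sketched below. The slope is a made-up placeholder, since the abstract does not report the fitted coefficient.

```python
def hs_at_reference(hs, moisture, slope=2.0, ref_moisture=11.0):
    """Adjust a measured NIR hardness score (HS) to the 11% moisture basis.

    slope: assumed change in HS per percentage point of meal moisture
    (hypothetical value; the paper fits this from RH-equilibrated samples).
    """
    return hs - slope * (moisture - ref_moisture)
```

A sample measured at 13% moisture with HS 65 would be reported as 61 on the 11% basis under this assumed slope.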
Unemployment: Hard-Core or Hard-Shell?
ERIC Educational Resources Information Center
Lauer, Robert H.
1972-01-01
The term "hard-core" makes the unemployed culpable; the term "hard-shell" shifts the burden to the employer, and the evidence from the suburban plant indicates that a substantial part of the problem must lie there. (DM)
NASA Astrophysics Data System (ADS)
Adams, Philip; Prozorov, Ruslan
2005-03-01
We present the magnetic response of Type-II superconductivity in the extreme pinning limit, where screening currents within an order of magnitude of the Ginzburg-Landau depairing critical current density develop upon the application of a magnetic field. We show that this "super-hard" limit is well approximated in highly disordered, cold-drawn Nb wire, whose magnetization response is characterized by a cascade of Meissner-like phases, each terminated by a catastrophic collapse of the magnetization. Direct magneto-optic measurements of the flux penetration depth in the virgin magnetization branch are in excellent agreement with the exponential model in which Jc(B) = Jco exp(-B/Bo), where Jco ~ 5x10^6 A/cm^2 for Nb. The implications for the fundamental limiting hardness of a superconductor will be discussed.
Quiet planting in the locked constraints satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2009-01-01
We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and first and second moment considerations; in particular, the connection with reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region, instances have, with high probability, a single satisfying assignment.
About some types of constraints in problems of routing
NASA Astrophysics Data System (ADS)
Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.
2016-12-01
Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. The latter include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements bound to the distribution of thermal fields during the cutting process are of great importance when developing algorithms for solving this task. These requirements give rise to some specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during metal sheet thermal cutting. A corresponding algorithm and its programmatic implementation are considered. The mathematical model that allows taking such constraints into account in other routing problems is also discussed.
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search.
Rajalakshmi, S; Maguteeswaran, R
2015-01-01
A hybrid computational intelligence algorithm is proposed, integrating the salient features of two different heuristic techniques to solve a multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is always a tricky problem: determining an optimum route that satisfies a variety of necessary constraints in a MANET. The problem is also NP-hard due to the constant topology variation of MANETs. Thus a solution technique that takes on the challenges of the QoSR problem is needed. This paper proposes a hybrid algorithm by modifying the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, where candidates learn from diversified search regions. Thus the CSA acts as the main search procedure, guided by the updating mechanism derived from DE; the result is called the tuned CSA (TCSA). Numerical simulations on MANETs demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with some existing techniques in the literature, thereby establishing the superiority of the proposed method.
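The DE-derived position-updating mechanism can be sketched in isolation as a DE/rand/1 trial vector built from three other candidates, which is the kind of diversified-learning step a tuned CSA could use to guide cuckoo candidates. Names and parameter values here are illustrative, not taken from the paper.

```python
import random

def de_rand_1(population, i, f=0.5, cr=0.9, rng=random):
    """DE/rand/1 trial vector for candidate i.

    Three distinct other candidates a, b, c are sampled; each coordinate
    of the trial is a[j] + f*(b[j] - c[j]) with probability cr (crossover
    rate), otherwise the current candidate's coordinate is kept.
    """
    others = [p for j, p in enumerate(population) if j != i]
    a, b, c = rng.sample(others, 3)
    x = population[i]
    return [aj + f * (bj - cj) if rng.random() < cr else xj
            for xj, aj, bj, cj in zip(x, a, b, c)]
```

In a routing context each candidate vector would encode a route's decision variables; the trial replaces the current candidate only if it improves the multiconstrained QoS objective.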
Resource Allocation in Cooperative OFDMA Systems with Fairness Constraint
NASA Astrophysics Data System (ADS)
Li, Hongxing; Luo, Hanwen; Wang, Xinbing; Ding, Ming; Chen, Wen
This letter investigates a subchannel and power allocation (SPA) algorithm which maximizes the throughput of a user under the constraints of total transmit power and fair subchannel occupation among relay nodes. The proposed algorithm reduces computational complexity from exponential to linear in the number of subchannels at the expense of a small performance loss.
On Random Betweenness Constraints
NASA Astrophysics Data System (ADS)
Goerdt, Andreas
Ordering constraints are analogous to instances of the satisfiability problem in conjunctive normal form, but instead of a boolean assignment we consider a linear ordering of the variables in question. A clause becomes true under a linear ordering iff the relative ordering of its variables obeys the constraint considered.
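Concretely, a betweenness clause over three variables is satisfied by a linear order exactly when the middle variable lies between the other two; a small checker (the tuple encoding is invented for this sketch):

```python
def betweenness_satisfied(clause, order):
    """clause = (x, y, z): y must lie between x and z in the linear order.

    order: a list of variables giving the linear ordering. The clause
    holds in either direction, x..y..z or z..y..x.
    """
    pos = {v: i for i, v in enumerate(order)}
    x, y, z = clause
    return pos[x] < pos[y] < pos[z] or pos[z] < pos[y] < pos[x]
```

A formula is then a conjunction of such clauses, and an ordering satisfies it iff every clause passes this check, mirroring the CNF analogy in the abstract.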
Creating Positive Task Constraints
ERIC Educational Resources Information Center
Mally, Kristi K.
2006-01-01
Constraints are characteristics of the individual, the task, or the environment that mold and shape movement choices and performances. Constraints can be positive--encouraging proficient movements or negative--discouraging movement or promoting ineffective movements. Physical educators must analyze, evaluate, and determine the effect various…
Credit Constraints in Education
ERIC Educational Resources Information Center
Lochner, Lance; Monge-Naranjo, Alexander
2012-01-01
We review studies of the impact of credit constraints on the accumulation of human capital. Evidence suggests that credit constraints have recently become important for schooling and other aspects of households' behavior. We highlight the importance of early childhood investments, as their response largely determines the impact of credit…
Constraint Reasoning Over Strings
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Golden, Keith; Pang, Wanlin
2003-01-01
This paper discusses an approach to representing and reasoning about constraints over strings. We discuss how many string domains can often be concisely represented using regular languages, and how constraints over strings, and domain operations on sets of strings, can be carried out using this representation.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
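In the published lossless-convexification literature, the thrust-bound and pointing relaxation has roughly the following shape; this sketch paraphrases that standard form under stated assumptions rather than reproducing the exact constraints of the enhanced PDG software.

```latex
% Original (non-convex) thrust bound and pointing-cone constraints,
% with thrust vector T(t), bounds \rho_1, \rho_2, sensor axis \hat{n},
% and half-angle \theta:
\rho_1 \le \lVert T(t) \rVert \le \rho_2, \qquad
\hat{n}^{\top} T(t) \ge \lVert T(t) \rVert \cos\theta
% Relaxed (convex) form with slack magnitude \Gamma(t):
\lVert T(t) \rVert \le \Gamma(t), \qquad
\rho_1 \le \Gamma(t) \le \rho_2, \qquad
\hat{n}^{\top} T(t) \ge \Gamma(t)\cos\theta
```

The relaxed set is convex, and the lossless-convexification results cited in the abstract show that optimal solutions satisfy the slack with equality, so they remain feasible and optimal for the original non-convex problem.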
Sheinberg, H.
1983-07-26
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 wt % boron carbide and the remainder a metal mixture comprising from 70 to 90% tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 and 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Sheinberg, Haskell
1986-01-01
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 weight percent boron carbide and the remainder a metal mixture comprising from 70 to 90 percent tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 and 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking minimization of the fault-reasoning cost as the target function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix constrains the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for the problem of multi-constraint and multi-objective fault diagnosis.
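The Pareto optimal (noninferior) set mentioned above can be illustrated with a simple dominance filter. The path scores below are made-up triples of (average fault probability, average importance, negated test complexity), all treated as objectives to maximize; this is only the Pareto bookkeeping, not the ant colony search itself.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives treated as maximization)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Keep the non-dominated candidates (the Pareto optimal set)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical reasoning paths: (fault probability, importance, -complexity).
paths = [(0.9, 0.5, -3), (0.7, 0.8, -2), (0.6, 0.4, -5)]
front = pareto_front(paths)
print(front)  # [(0.9, 0.5, -3), (0.7, 0.8, -2)]: the third path is dominated
```

In the paper's setting, evaluation functions over this front would then pick the final fault causes according to decision-making demands.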
Inclusive Flavour Tagging Algorithm
NASA Astrophysics Data System (ADS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-10-01
Identifying the production flavour of neutral B mesons is one of the most important components in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.
Arching in tapped deposits of hard disks.
Pugnaloni, Luis A; Valluzzi, Marcos G; Valluzzi, Lucas G
2006-05-01
We simulate the tapping of a bed of hard disks in a rectangular box by using a pseudodynamic algorithm. In these simulations, arches are unambiguously defined and we can analyze their properties as a function of the tapping amplitude. We find that an order-disorder transition occurs within a narrow range of tapping amplitudes as has been seen by others. Arches are always present in the system although they exhibit regular shapes in the ordered regime. Interestingly, an increase in the number of arches does not always correspond to a reduction in the packing fraction. This is in contrast with what is found in three-dimensional systems.
Bech, A. O.; Kipling, M. D.; Heather, J. C.
1962-01-01
In Great Britain there have been no published reports of respiratory disease occurring amongst workers in the hard metal (tungsten carbide) industry. In this paper the clinical and radiological findings in six cases and the pathological findings in one are described. In two cases physiological studies indicated mild alveolar diffusion defects. Histological examination in a fatal case revealed diffuse pulmonary interstitial fibrosis with marked peribronchial and perivascular fibrosis and bronchial epithelial hyperplasia and metaplasia. Radiological surveys revealed the sporadic occurrence and low incidence of the disease. The alterations in respiratory mechanics which occurred in two workers following a day's exposure to dust are described. Airborne dust concentrations are given. The industrial process is outlined and the literature is reviewed. The toxicity of the metals is discussed, and our findings are compared with those reported from Europe and the United States. We are of the opinion that the changes which we would describe as hard metal disease are caused by the inhalation of dust at work and that the component responsible may be cobalt. PMID:13970036
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
Total-variation regularization with bound constraints
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
Approximate resolution of hard numbering problems
Bailleux, O.; Chabrier, J.J.
1996-12-31
We present a new method for estimating the number of solutions of constraint satisfaction problems. We use a stochastic forward-checking algorithm to draw a sample of paths from the search tree. From this sample, we compute two values related to the number of solutions of a CSP instance: first, an unbiased estimate; second, a lower bound with an arbitrarily low error probability. We describe applications to the Boolean satisfiability problem and the Queens problem, and give some experimental results for these problems.
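A sketch of the idea on the Queens problem mentioned above: sample one random root-to-leaf path per trial with forward checking, multiply the branching factors along the way, and average the products. This is the classic Knuth-style path estimator, shown here as an illustration under that assumption rather than as the authors' exact method.

```python
import random

def estimate_solutions(n, trials=20000, seed=1):
    """Stochastic path-sampling estimate of the number of solutions to
    the n-queens problem: at each row, count the consistent columns
    (forward checking), pick one at random, and multiply the counts.
    The average of these products is an unbiased estimate."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        cols, diag1, diag2 = set(), set(), set()
        weight = 1.0
        for row in range(n):
            choices = [c for c in range(n)
                       if c not in cols and row - c not in diag1
                       and row + c not in diag2]
            if not choices:          # dead end: this path contributes 0
                weight = 0.0
                break
            weight *= len(choices)
            c = random.choice(choices)
            cols.add(c); diag1.add(row - c); diag2.add(row + c)
        total += weight
    return total / trials

# 6-queens has exactly 4 solutions; the estimate should be near 4.
estimate = estimate_solutions(6)
```

The variance of a single path's weight can be large, which is why the paper's second quantity, a lower bound with controlled error probability, is useful alongside the raw estimate.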
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded-inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), which combines iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allows connections with approximate algorithms from statistical physics, and it is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms, on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well-known classes of constraint propagation schemes. PMID:20740057
Fan, Quan-Yong; Yang, Guang-Hong
2017-01-01
State inequality constraints have rarely been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without requiring a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies.
Constraint Embedding Technique for Multibody System Dynamics
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
Multibody dynamics play a critical role in simulation testbeds for space missions. There has been considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations of the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra to formulate the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus in essence, the new technique allows conversion of a system with
Evolutionary Algorithm for Calculating Available Transfer Capability
NASA Astrophysics Data System (ADS)
Šošić, Darko; Škokljev, Ivan
2013-09-01
The paper presents an evolutionary algorithm for calculating available transfer capability (ATC). ATC is a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses. In this paper, MATLAB software is used to determine the ATC between any two buses in a deregulated power system without violating system constraints such as thermal, voltage, and stability constraints. The algorithm is applied to the IEEE 5-bus system and the IEEE 30-bus system.
Nonlinear equality constraints in feasible sequential quadratic programming
Lawrence, C.; Tits, A.
1994-12-31
In this talk we show convergence of a feasible sequential quadratic programming algorithm modified to handle smooth nonlinear equality constraints. The modification of the algorithm to incorporate equality constraints is based on a scheme proposed by Mayne and Polak and is implemented in fsqp/cfsqp, an optimization package that generates feasible iterates. Nonlinear equality constraints are treated as ≤-type constraints to be satisfied by all iterates, thus precluding any positive value, and an exact penalty term is added to the objective function to penalize negative values. For example, the problem minimize f(x) s.t. h(x) = 0, with h(x) a scalar, is replaced by minimize f(x) - ch(x) s.t. h(x) ≤ 0. The modified problem is equivalent to the original problem when c is large enough (but finite). Such a value is determined automatically via iterative adjustments.
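The scalar example in the abstract can be checked numerically. The toy grid search below (an illustration of the exact-penalty transformation, not fsqp/cfsqp) minimizes f(x) = x^2 subject to h(x) = x - 1 = 0 via the substitute problem minimize x^2 - c(x - 1) s.t. x - 1 ≤ 0, and shows that the penalty is exact only once c is large enough.

```python
def solve(c, grid_n=200001):
    """Grid-search minimize f(x) - c*h(x) subject to h(x) <= 0 on [-2, 2],
    with f(x) = x**2 and h(x) = x - 1 (the original constraint is h(x) = 0)."""
    best_x, best_v = None, float("inf")
    for i in range(grid_n):
        x = -2.0 + 4.0 * i / (grid_n - 1)
        if x - 1 <= 0:                      # feasibility: h(x) <= 0
            v = x**2 - c * (x - 1)          # exact-penalty objective
            if v < best_v:
                best_x, best_v = x, v
    return best_x

print(solve(c=10.0))  # 1.0: for large enough c the minimizer sits on h(x) = 0
print(solve(c=0.5))   # 0.25: too-small c gives an infeasible answer for h = 0
```

Here any c ≥ 2 recovers the constrained solution x = 1, matching the claim that a finite penalty weight suffices.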
Programming the gradient projection algorithm
NASA Technical Reports Server (NTRS)
Hargrove, A.
1983-01-01
The gradient projection method of numerical optimization which is applied to problems having linear constraints but nonlinear objective functions is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large scale systems with severe nonlinearities. In order to verify the theoretical results a digital computer is used to simulate the algorithm.
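A minimal sketch of the method for a single linear constraint a·x = b: at each step the gradient of the nonlinear objective is projected onto the constraint's null space, so a feasible starting point yields feasible iterates. The objective and starting point below are invented for illustration.

```python
def grad_proj(f_grad, a, x0, lr=0.05, steps=5000):
    """Gradient projection for: minimize f(x) subject to a . x = b.
    x0 must already satisfy the constraint; projecting the gradient onto
    the null space of a keeps every iterate on the constraint surface."""
    x = list(x0)
    aa = sum(ai * ai for ai in a)
    for _ in range(steps):
        g = f_grad(x)
        coef = sum(ai * gi for ai, gi in zip(a, g)) / aa
        d = [gi - coef * ai for gi, ai in zip(g, a)]   # projected gradient
        x = [xi - lr * di for xi, di in zip(x, d)]
    return x

# Nonlinear objective f(x, y) = (x - 2)**4 + (y - 3)**2 on the line x + y = 1.
grad = lambda p: [4 * (p[0] - 2)**3, 2 * (p[1] - 3)]
x = grad_proj(grad, a=[1.0, 1.0], x0=[0.0, 1.0])  # x0 is feasible: 0 + 1 = 1
```

For this example the constrained minimizer is at x ≈ 0.872, y ≈ 0.128 (root of 4(x-2)^3 + 2(x+2) = 0 along the line), and the iteration stays on x + y = 1 throughout.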
Overview: Hard Rock Penetration
Dunn, J.C.
1992-08-01
The Hard Rock Penetration program is developing technology to reduce the costs of drilling and completing geothermal wells. Current projects include: lost circulation control, rock penetration mechanics, instrumentation, and industry/DOE cost shared projects of the Geothermal Drilling organization. Last year, a number of accomplishments were achieved in each of these areas. A new flow meter being developed to accurately measure drilling fluid outflow was tested extensively during Long Valley drilling. Results show that this meter is rugged, reliable, and can provide useful measurements of small differences in fluid inflow and outflow rates. By providing early indications of fluid gain or loss, improved control of blow-out and lost circulation problems during geothermal drilling can be expected. In the area of downhole tools for lost circulation control, the concept of a downhole injector for injecting a two-component, fast-setting cementitious mud was developed. DOE filed a patent application for this concept during FY 91. The design criteria for a high-temperature potassium, uranium, thorium logging tool featuring a downhole data storage computer were established, and a request for proposals was submitted to tool development companies. The fundamental theory of acoustic telemetry in drill strings was significantly advanced through field experimentation and analysis. A new understanding of energy loss mechanisms was developed.
Overview - Hard Rock Penetration
Dunn, James C.
1992-03-24
The Hard Rock Penetration program is developing technology to reduce the costs of drilling and completing geothermal wells. Current projects include: lost circulation control, rock penetration mechanics, instrumentation, and industry/DOE cost shared projects of the Geothermal Drilling Organization. Last year, a number of accomplishments were achieved in each of these areas. A new flow meter being developed to accurately measure drilling fluid outflow was tested extensively during Long Valley drilling. Results show that this meter is rugged, reliable, and can provide useful measurements of small differences in fluid inflow and outflow rates. By providing early indications of fluid gain or loss, improved control of blow-out and lost circulation problems during geothermal drilling can be expected. In the area of downhole tools for lost circulation control, the concept of a downhole injector for injecting a two-component, fast-setting cementitious mud was developed. DOE filed a patent application for this concept during FY 91. The design criteria for a high-temperature potassium, uranium, thorium logging tool featuring a downhole data storage computer were established, and a request for proposals was submitted to tool development companies. The fundamental theory of acoustic telemetry in drill strings was significantly advanced through field experimentation and analysis. A new understanding of energy loss mechanisms was developed.
Measuring the Hardness of Minerals
ERIC Educational Resources Information Center
Bushby, Jessica
2005-01-01
The author discusses the Mohs hardness scale, a comparative scale for minerals, whereby the softest mineral (talc) is placed at 1 and the hardest mineral (diamond) is placed at 10, with all other minerals ordered in between according to their hardness. The development history of the scale is outlined, as well as a description of how the scale is used…
Constraints on relaxion windows
NASA Astrophysics Data System (ADS)
Choi, Kiwoon; Im, Sang Hui
2016-12-01
We examine the low energy phenomenology of the relaxion solution to the weak scale hierarchy problem. Assuming that the Hubble friction is responsible for dissipation of the relaxion energy, we identify the cosmological relaxion window, which corresponds to the parameter region compatible with a given value of the acceptable number of inflationary e-foldings. We then discuss a variety of observational constraints on the relaxion window, including those from astrophysical and cosmological considerations. We find that the majority of the parameter space with a relaxion mass m_ϕ ≳ 100 eV or a relaxion decay constant f ≲ 10^7 GeV is excluded by existing constraints. There is an interesting parameter region with m_ϕ ~ 0.2-10 GeV and f ~ few-200 TeV, which is allowed by existing constraints but can be probed soon by future beam dump experiments, such as the SHiP experiment, or by improved EDM experiments.
Numerical prediction of microstructure and hardness in multicycle simulations
Oddy, A.S.; McDill, J.M.J.
1996-06-01
Thermal-microstructural predictions are made and compared to physical simulations of heat-affected zones in multipass and weaved welds. The microstructural prediction algorithm includes reaustenitization kinetics, grain growth, austenite decomposition kinetics, hardness, and tempering. Microstructural simulation of weaved welds requires that the algorithm include transient reaustenitization, austenite decomposition for arbitrary thermal cycles including during reheating, and tempering. Material properties for each of these phenomena are taken from the best available literature. The numerical predictions are compared with the results of physical simulations made at the Metals Technology Laboratory, CANMET, on a Gleeble 1500 simulator. Thermal histories used in the physical simulations included single-pass welds, isothermal tempering, two-cycle, and three-cycle welds. The two- and three-cycle welds include temper-bead and weaved-weld simulations. A recurring theme in the analysis is the significant variation found in the material properties for the same grade of steel. This affected all the material properties used including those governing reaustenitization, austenite grain growth, austenite decomposition, and hardness. Hardness measurements taken from the literature show a variation of ±5 to 30 HV on the same sample. Alloy differences within the allowable range also led to hardness variations of ±30 HV for the heat-affected zone of multipass welds. The predicted hardnesses agree extremely well with those taken from the physical simulations.
Book Review: Constraining Constraints.
ERIC Educational Resources Information Center
Kessen, William; Reznick, J. Steven
1993-01-01
Reviews "The Epigenesis of Mind: Essays on Biology and Cognition" (S. Carey and R. Gelman, editors), a collection of essays that present a hard-scientific vision of cognitive development. Examines the arguments this work articulates and then determines the place it occupies in the analysis of the state of developmental psychology as presented in…
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
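A toy sketch of the basic concepts (not the software tool described): tournament selection, one-point crossover, and bit-flip mutation on the standard OneMax problem, where fitness is simply the number of 1 bits and the optimum is the all-ones string.

```python
import random

def genetic_maximize(fitness, n_bits=16, pop=40, gens=60, seed=7):
    """Toy genetic algorithm: size-2 tournament selection, one-point
    crossover, and bit-flip mutation, maximizing `fitness` over n-bit
    individuals represented as lists of 0/1."""
    random.seed(seed)
    P = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        def pick():                          # tournament selection
            a, b = random.sample(P, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]      # one-point crossover
            if random.random() < 0.1:        # occasional mutation
                i = random.randrange(n_bits)
                child[i] ^= 1
            children.append(child)
        P = children
    return max(P, key=fitness)

# OneMax: fitness is the number of 1 bits.
best = genetic_maximize(sum)
```

Population size, generation count, and mutation rate here are arbitrary illustrative choices; real applications tune them to the problem.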
De Bruijn Superwalk with Multiplicities Problem is NP-hard
2013-01-01
The De Bruijn Superwalk with Multiplicities Problem is the problem of finding a walk in the de Bruijn graph containing several given walks as subwalks and passing through each edge exactly a predefined number of times (equal to the multiplicity of that edge). This problem was stated in a talk by Paul Medvedev and Michael Brudno at the first RECOMB Satellite Conference on Open Problems in Algorithmic Biology in August 2012. In this paper we show that this problem is NP-hard. Combined with results of previous works, this means that all known models for genome assembly are NP-hard. PMID:23734822
Generalized arc consistency for global cardinality constraint
Regin, J.C.
1996-12-31
A global cardinality constraint (gcc) is specified in terms of a set of variables X = {x_1, ..., x_p} which take their values in a subset of V = {v_1, ..., v_d}. It constrains the number of times a value v_i ∈ V is assigned to a variable in X to be in an interval [l_i, c_i]. Cardinality constraints have proved very useful in many real-life problems, such as scheduling, timetabling, or resource allocation. A gcc is more general than a constraint of difference, which requires each such interval to be [0, 1]. In this paper, we present an efficient way of implementing generalized arc consistency for a gcc. The algorithm we propose is based on a new theorem of flow theory. Its space complexity is O(|X| × |V|) and its time complexity is O(|X|^2 × |V|). We also show how this algorithm can efficiently be combined with other filtering techniques.
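The counting condition itself is easy to state in code. The sketch below only verifies a complete assignment against the per-value intervals; it is not the flow-based arc consistency algorithm of the paper, which prunes domains of partially assigned variables.

```python
def gcc_satisfied(assignment, bounds):
    """Check a complete assignment against a global cardinality constraint:
    each value v must be used between bounds[v][0] and bounds[v][1] times."""
    counts = {v: 0 for v in bounds}
    for value in assignment.values():
        if value not in counts:
            return False                     # value outside the allowed set
        counts[value] += 1
    return all(lo <= counts[v] <= hi for v, (lo, hi) in bounds.items())

# Hypothetical timetabling values with occurrence intervals [l_i, c_i]:
bounds = {'morning': (1, 2), 'evening': (1, 2), 'night': (0, 1)}
ok = gcc_satisfied({'x1': 'morning', 'x2': 'evening', 'x3': 'morning'}, bounds)
bad = gcc_satisfied({'x1': 'night', 'x2': 'night', 'x3': 'morning'}, bounds)
print(ok, bad)  # True False: the second assignment uses 'night' twice
```

With every interval set to [0, 1], the same check reduces to a constraint of difference, matching the generalization noted in the abstract.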
Beta Backscatter Measures the Hardness of Rubber
NASA Technical Reports Server (NTRS)
Morrissey, E. T.; Roje, F. N.
1986-01-01
A nondestructive testing method determines the hardness, on the Shore scale, of room-temperature-vulcanizing silicone rubber. The method measures backscattered beta particles; the backscattered radiation count is directly proportional to Shore hardness. The test set is calibrated with a specimen whose Shore hardness is known from a mechanical durometer test. A specimen of unknown hardness is then tested and its radiation count recorded. The count is compared with that of the known sample to find the Shore hardness of the unknown.
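Since the count is taken as directly proportional to Shore hardness, a single reference specimen calibrates the test set. The function below is a hypothetical one-point calibration; the counts and hardness values are invented.

```python
def shore_hardness(count_unknown, count_ref, shore_ref):
    """One-point calibration: with count proportional to Shore hardness,
    hardness_unknown / hardness_ref = count_unknown / count_ref."""
    return shore_ref * count_unknown / count_ref

# Reference specimen: Shore 50 (from a durometer test), 1000 counts.
# Unknown specimen measures 1200 counts:
print(shore_hardness(1200, 1000, 50))  # 60.0
```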
Thin coatings and films hardness evaluation
NASA Astrophysics Data System (ADS)
Matyunin, V. M.; Marchenkov, A. Yu; Demidov, A. N.; Karimbekov, M. A.
2016-10-01
Existing methods for evaluating the hardness of thin coatings and films, based on indentation with a pyramidal indenter at various scale levels, are expounded. The impact of the scale factor on hardness values is demonstrated. Experimental verification of several existing hardness evaluation methods is carried out with respect to the substrate hardness value and the “coating - substrate” composite hardness value.
Fault-Tolerant, Radiation-Hard DSP
NASA Technical Reports Server (NTRS)
Czajkowski, David
2011-01-01
Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to performance through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and in some cases suffer a significant speed reduction. The benefits of COTS high
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and to reduce the DSN and spaceflight-project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms, which assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower-priority items that are in conflict.
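The contrast between the two schedule types can be sketched as follows, with invented request data and a deliberately simplified notion of conflict (same antenna, overlapping time window). This is an illustration of the idea, not the DSN software.

```python
def conflict_aware_schedule(requests):
    """Schedule every request, recording conflicts rather than dropping
    tracks: two requests conflict when they need the same antenna over
    overlapping time windows."""
    overlap = lambda a, b: a['start'] < b['end'] and b['start'] < a['end']
    schedule = []
    for r in sorted(requests, key=lambda r: -r['priority']):
        conflicts = [s['id'] for s in schedule
                     if s['antenna'] == r['antenna'] and overlap(s, r)]
        schedule.append({**r, 'conflicts': conflicts})
    return schedule

def to_conflict_free(schedule):
    """Derive a conflict-free schedule by removing lower-priority items
    that conflict with an already-kept higher-priority item."""
    kept, kept_ids = [], set()
    for item in schedule:                # already in descending priority
        if not any(c in kept_ids for c in item['conflicts']):
            kept.append(item)
            kept_ids.add(item['id'])
    return kept

requests = [                              # hypothetical tracking requests
    {'id': 'A', 'antenna': 'DSS-14', 'start': 0, 'end': 4, 'priority': 3},
    {'id': 'B', 'antenna': 'DSS-14', 'start': 2, 'end': 6, 'priority': 2},
    {'id': 'C', 'antenna': 'DSS-43', 'start': 0, 'end': 8, 'priority': 1},
]
aware = conflict_aware_schedule(requests)   # all three scheduled, B marked
free = to_conflict_free(aware)              # B dropped: conflicts with A
```

The conflict-aware schedule keeps all three requests (with B's conflict against A recorded for negotiation), while the derived conflict-free schedule retains only A and C.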
Nanoindentation hardness of mineralized tissues.
Oyen, Michelle L
2006-01-01
A series elastic and plastic deformation model [Sakai, M., 1999. The Meyer hardness: a measure for plasticity? Journal of Materials Research 14(9), 3630-3639] is used to deconvolute the resistance to plastic deformation from the plane strain modulus and contact hardness parameters obtained in a nanoindentation test. Different functional dependencies of contact hardness on the plane strain modulus are examined. Plastic deformation resistance values are computed from the modulus and contact hardness for engineering materials and mineralized tissues. Elastic modulus and plastic deformation resistance parameters are used to calculate elastic and plastic deformation components, and to examine the partitioning of indentation deformation between elastic and plastic. Both the numerical values of plastic deformation resistance and the direct computation of deformation partitioning reveal the intermediate mechanical responses of mineralized composites when compared with homogeneous engineering materials.
Trajectory optimization in the presence of constraints
NASA Astrophysics Data System (ADS)
McQuade, Timothy E.
1989-06-01
In many aerospace problems, it is necessary to determine vehicle trajectories that satisfy constraints. Typically two types of constraints are of interest. First, it may be desirable to satisfy a set of boundary conditions. Second, it may be necessary to limit the motion of the vehicle so that physical limits and hardware limits are not exceeded. In addition to these requirements, it may be necessary to optimize some measure of vehicle performance. In this thesis, the square root sweep method is used to solve a discrete-time linear quadratic optimal control problem. The optimal control problem arises from a Mayer form continuous-time nonlinear optimization problem. A method for solving the optimal control problem is derived. Called the square root sweep algorithm, the solution consists of a set of backward recursions for a set of square root parameters. The square root sweep algorithm is shown to be capable of treating Mayer form optimization problems. Heuristics for obtaining solutions are discussed. The square root sweep algorithm is used to solve several example optimization problems.
A Framework for Dynamic Constraint Reasoning Using Procedural Constraints
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy D.
1999-01-01
Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
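A procedural constraint of the general kind the framework describes can be sketched as a procedure that narrows real-valued variable domains until a fixed point is reached; the `Interval` and `proc_sum` names below are hypothetical illustrations, not the paper's API.

```python
# Minimal sketch of procedural constraints over interval domains.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def intersect(self, lo, hi):
        """Narrow this domain; return True if it changed."""
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        changed = (new_lo, new_hi) != (self.lo, self.hi)
        self.lo, self.hi = new_lo, new_hi
        return changed

def proc_sum(x, y, z):
    """Procedure enforcing x + y = z by interval narrowing."""
    changed = z.intersect(x.lo + y.lo, x.hi + y.hi)
    changed |= x.intersect(z.lo - y.hi, z.hi - y.lo)
    changed |= y.intersect(z.lo - x.hi, z.hi - x.lo)
    return changed

x, y, z = Interval(0, 10), Interval(2, 5), Interval(7, 8)
# Run the procedure to a fixed point
while proc_sum(x, y, z):
    pass
```

Here the constraint x + y = z narrows x from [0, 10] to [2, 6]; arbitrary domain-dependent constraints can be packaged as such procedures and run by the same propagation loop.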
Hiding quiet solutions in random constraint satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2008-01-01
We study constraint satisfaction problems on the so-called planted random ensemble. We show that for a certain class of problems, e.g., graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions and the easy-hard-easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid-glass-solid phenomenology.
NASA Astrophysics Data System (ADS)
Jackson, C. S.; Hattab, M. W.; Huerta, G.
2014-12-01
Emergent constraints are observable quantities that provide some physical basis for testing or predicting how a climate model will respond to greenhouse gas forcing. Very few such constraints have been identified for the multi-model CMIP archive. Here we explore the question of whether constraints that apply to a single model, a perturbed parameter ensemble (PPE) of the Community Atmosphere Model (CAM3.1), can be applied to predicting the climate sensitivities of models within the CMIP archive. In particular, we construct our predictive patterns from multivariate EOFs of the CAM3.1 ensemble control climate. Multiple regressive statistical models were created that do an excellent job of predicting CAM3.1 sensitivity to greenhouse gas forcing. However, these same patterns fail spectacularly to predict the sensitivities of models within the CMIP archive. We attribute this failure to several factors. The first, and perhaps the most important, is that the structures affecting climate sensitivity in CAM3.1 have a unique signature in the space of our multivariate EOF patterns that is unlike that of any other climate model. That is to say, we should not expect CAM3.1 to represent the way other models within the CMIP archive respond to greenhouse gas forcing. The second, perhaps related, reason is that the CAM3.1 PPE does a poor job of spanning the range of climates and responses found within the CMIP archive. We shall discuss the implications of these results for the prospect of finding emergent constraints within the CMIP archive. We will also discuss what this may mean for establishing uncertainties in climate projections.
Constraint-based soft tissue simulation for virtual surgical training.
Tang, Wen; Wan, Tao Ruan
2014-11-01
Most surgical simulators employ a linear elastic model to simulate soft tissue material properties because of its computational efficiency and simplicity. However, soft tissues often have elaborate nonlinear material characteristics. Most prominently, soft tissues are compliant at small strains, but after initial deformations they become very resistant to further deformation even under large forces. This material characteristic, referred to as nonlinear incompliance, is computationally expensive and numerically difficult to simulate. This paper presents a constraint-based finite-element algorithm that simulates nonlinear incompliant tissue materials efficiently for interactive applications such as virtual surgery. First, the proposed algorithm models the material stiffness behavior of soft tissues with a set of 3-D strain limit constraints on deformation strain tensors. By enforcing a large number of geometric constraints to achieve the material stiffness, the algorithm reduces the task of solving stiff equations of motion with a general numerical solver to iteratively resolving a set of constraints with a nonlinear Gauss-Seidel iterative process. Second, because a Gauss-Seidel method processes constraints individually, a multiresolution hierarchy structure is used to speed up the global convergence of the large constrained system, accelerating the computation significantly and making interactive simulation possible at a high level of detail. Finally, this paper also presents a simple-to-build data acquisition system for validating simulation results against ex vivo tissue measurements. An interactive virtual reality-based simulation system is also demonstrated.
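The Gauss-Seidel constraint-resolution idea can be sketched on a far simpler problem than 3-D strain tensors: a chain of particles with per-segment strain limits, each constraint projected in turn until the system settles. This is an illustrative simplification, not the paper's finite-element algorithm.

```python
import numpy as np

def gauss_seidel_strain_limit(points, rest, max_strain=0.1, iters=50):
    """Project a chain of points so each segment's strain stays
    within +/- max_strain of its rest length. Constraints are
    resolved one at a time, Gauss-Seidel style."""
    p = points.copy()
    for _ in range(iters):
        for i in range(len(p) - 1):
            d = p[i + 1] - p[i]
            length = np.linalg.norm(d)
            lo, hi = rest[i] * (1 - max_strain), rest[i] * (1 + max_strain)
            target = np.clip(length, lo, hi)
            if length > 1e-12 and abs(length - target) > 1e-12:
                corr = 0.5 * (length - target) / length * d
                p[i] += corr          # move both endpoints toward
                p[i + 1] -= corr      # the strain-limited length
    return p

# Stretch a 3-point chain far past its rest lengths, then project back
pts = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
rest = np.array([1.0, 1.0])
out = gauss_seidel_strain_limit(pts, rest)
```

After the projection, every segment sits at the 10% strain limit (length 1.1) rather than at the stretched length 2.0, which is the stiff-material behavior the constraints emulate.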
Structure Constraints in a Constraint-Based Planner
NASA Technical Reports Server (NTRS)
Pang, Wan-Lin; Golden, Keith
2004-01-01
In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.
Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.
2014-03-01
Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.
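The core acceptance rule of hard-shape Monte Carlo is purely geometric: a trial move is accepted only if it creates no overlap. A minimal single-threaded hard-disk sweep is sketched below; HOOMD-blue's GPU implementation is far more elaborate (arbitrary shapes, cell lists, parallel domain decomposition), so this only illustrates the sampling rule.

```python
import random

def overlaps(p, q, diameter):
    """True if two disks of the given diameter overlap."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return dx * dx + dy * dy < diameter * diameter

def mc_sweep(disks, diameter, step, rng):
    """One Monte Carlo sweep: one trial displacement per disk,
    accepted only if the moved disk overlaps no other disk."""
    accepted = 0
    for i in range(len(disks)):
        x, y = disks[i]
        trial = (x + rng.uniform(-step, step),
                 y + rng.uniform(-step, step))
        if not any(overlaps(trial, disks[j], diameter)
                   for j in range(len(disks)) if j != i):
            disks[i] = trial
            accepted += 1
    return accepted

rng = random.Random(0)
# Dilute configuration: four disks of diameter 1, well separated
disks = [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0), (9.0, 0.0)]
acc = mc_sweep(disks, 1.0, 0.1, rng)
```

In this dilute configuration every trial move is accepted; as density rises, rejections due to overlaps dominate, which is what makes efficient parallel implementations challenging.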
General heuristics algorithms for solving capacitated arc routing problem
NASA Astrophysics Data System (ADS)
Fadzli, Mohammad; Najwa, Nurul; Masran, Hafiz
2015-05-01
In this paper, we determine near-optimum solutions for the capacitated arc routing problem (CARP). The NP-hard CARP is a graph-theoretic problem that arises in street services such as residential waste collection and road maintenance. The purpose of the CARP model and its solution techniques is to find optimum (or near-optimum) routing cost for the fleet of vehicles involved in an operation; finding minimum-cost routes is essential to reducing the overall vehicle-related operating cost. In this article, we provide a combination of heuristic algorithms to solve a real case of CARP in waste collection as well as benchmark instances. These heuristics work as a central engine for finding initial or near-optimum solutions in the search space without violating the preset constraints. The results clearly show that these heuristic algorithms provide good initial solutions on both real-life and benchmark instances.
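A constructive heuristic of the general kind used for CARP can be sketched as a nearest-task, capacity-limited route builder (a path-scanning-style idea). This toy is illustrative only and is not one of the paper's specific heuristics; the task and distance representations are assumptions.

```python
def greedy_carp(tasks, capacity, dist):
    """Toy CARP construction heuristic.
    tasks: {task_id: (endpoint_a, endpoint_b, demand)}
    dist: function (node, node) -> travel cost.
    Each vehicle repeatedly serves the nearest unserved task that
    still fits its remaining capacity, then returns to the depot."""
    unserved = set(tasks)
    routes = []
    while unserved:
        load, pos, route = 0, 'depot', []
        while True:
            feasible = [t for t in unserved
                        if load + tasks[t][2] <= capacity]
            if not feasible:
                break
            # nearest feasible task, measured to its first endpoint
            t = min(feasible, key=lambda t: dist(pos, tasks[t][0]))
            a, b, demand = tasks[t]
            route.append(t)
            load += demand
            pos = b
            unserved.discard(t)
        routes.append(route)
    return routes

# Tiny instance: three tasks along a line, vehicle capacity 2
coords = {'depot': 0, 'a': 1, 'b': 2, 'c': 3, 'd': 4}
tasks = {'t1': ('a', 'b', 1), 't2': ('b', 'c', 1), 't3': ('c', 'd', 1)}
dist = lambda u, v: abs(coords[u] - coords[v])
routes = greedy_carp(tasks, 2, dist)
```

Such a construction gives a feasible starting solution that a local-search or metaheuristic phase can then improve.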
Sparse Covariance Matrix Estimation With Eigenvalue Constraints.
Liu, Han; Wang, Lie; Zhao, Tuo
2014-04-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online.
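The two requirements, sparsity via thresholding and positive definiteness via an eigenvalue constraint, can be sketched with a naive alternating scheme. The paper's ADMM algorithm solves the joint problem properly; the code below only illustrates the two projections involved, and the function names are illustrative.

```python
import numpy as np

def soft_threshold(M, lam):
    """Entrywise soft-thresholding; the diagonal is left untouched."""
    S = np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)
    np.fill_diagonal(S, np.diag(M))
    return S

def eig_project(M, eps=1e-3):
    """Project a symmetric matrix onto {eigenvalues >= eps}."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, eps)) @ V.T

def sparse_pd_estimate(sample_cov, lam, eps=1e-3, iters=20):
    """Naive alternating sketch: sparsify, then restore positive
    definiteness. Not the paper's ADMM; illustration only."""
    S = sample_cov.copy()
    for _ in range(iters):
        S = eig_project(soft_threshold(S, lam), eps)
    return S

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
S_hat = sparse_pd_estimate(X.T @ X / 50, lam=0.05)
```

The output is symmetric with all eigenvalues at or above the floor `eps`, so it is a valid (positive-definite) covariance estimate, while the thresholding step drives small off-diagonal entries toward zero.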
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Computerized Classification Testing under Practical Constraints with a Polytomous Model.
ERIC Educational Resources Information Center
Lau, C. Allen; Wang, Tianyou
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
Teaching Database Design with Constraint-Based Tutors
ERIC Educational Resources Information Center
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
NASA Astrophysics Data System (ADS)
Schmidt, Greg; Witham, Brandon; Valore, Jason; Holland, Ben; Dalton, Jason
2012-06-01
Military, police, and industrial surveillance operations could benefit from having sensors deployed in configurations that maximize collection capability. We describe a surveillance planning approach that optimizes sensor placements to collect information about targets of interest by using information from predictive geospatial analytics, the physical environment, and surveillance constraints. We designed a tool that accounts for multiple sensor aspects (collection footprints, groupings, and characteristics), multiple optimization objectives (surveillance requirements and predicted threats), and multiple constraints (sensing, the physical environment including terrain, and geographic surveillance constraints). The tool uses a discrete grid model to keep track of geographic sensing objectives and constraints and, from these, estimates probabilities of collection containment and detection. We devised an evolutionary algorithm and polynomial-time approximation schemes (PTAS) to optimize the tool variables above and generate the positions and aspects for a network of sensors. We also designed algorithms to coordinate a mixture of sensors with different competing objectives, competing constraints, couplings, and proximity constraints.
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
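The core allocation step, converting a desired moment into effector commands subject to position limits, can be sketched with a saturate-and-resolve least-squares loop. This is an illustrative active-set-style sketch under assumed limits, not the paper's framework (which additionally feeds back real-time structural loads as constraints).

```python
import numpy as np

def allocate(B, m_des, lower, upper, iters=10):
    """Find effector commands u with B @ u ~= m_des and
    lower <= u <= upper. Saturated effectors are frozen at their
    limits and the least-squares problem is re-solved over the
    remaining free effectors. A sketch, not a flight-qualified
    allocator."""
    n = B.shape[1]
    u = np.zeros(n)
    free = np.ones(n, dtype=bool)
    for _ in range(iters):
        if not free.any():
            break
        # Solve for the free effectors, given the frozen ones
        residual = m_des - B[:, ~free] @ u[~free]
        u_free, *_ = np.linalg.lstsq(B[:, free], residual, rcond=None)
        u[free] = u_free
        sat = (u < lower) | (u > upper)
        u = np.clip(u, lower, upper)
        if not (sat & free).any():
            break
        free &= ~sat
    return u

B = np.array([[1.0, 1.0, 1.0]])   # three redundant effectors, one axis
u1 = allocate(B, np.array([2.5]), -1.0, 1.0)   # achievable command
u2 = allocate(B, np.array([3.5]), -1.0, 1.0)   # beyond combined authority
```

When the command exceeds the combined effector authority, the sketch returns the saturated best effort; a structural-load constraint would enter the same loop as additional limits on the admissible commands.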
Constraints influencing sports wheelchair propulsion performance and injury risk
2013-01-01
The Paralympic Games are the pinnacle of sport for many athletes with a disability. A potential issue for many wheelchair athletes is how to train hard to maximise performance while also reducing the risk of injuries, particularly to the shoulder due to the accumulation of stress placed on this joint during activities of daily living, training and competition. The overall purpose of this narrative review was to use the constraints-led approach of dynamical systems theory to examine how various constraints acting upon the wheelchair-user interface may alter hand rim wheelchair performance during sporting activities, and to a lesser extent, their injury risk. As we found no studies involving Paralympic athletes that have directly utilised the dynamical systems approach to interpret their data, we have used this approach to select some potential constraints and discussed how they may alter wheelchair performance and/or injury risk. Organism constraints examined included player classifications, wheelchair setup, training and intrinsic injury risk factors. Task constraints examined the influence of velocity and types of locomotion (court sports vs racing) in wheelchair propulsion, while environmental constraints focused on forces that tend to oppose motion such as friction and surface inclination. Finally, the ecological validity of the research studies assessing wheelchair propulsion was critiqued prior to recommendations for practice and future research being given. PMID:23557065
ERIC Educational Resources Information Center
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
Noise reduction in adaptive-optics imagery with the use of support constraints
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Roggemann, Michael C.
1995-02-01
The use of support constraints for noise reduction in images obtained with telescopes that use adaptive optics for atmospheric correction is discussed. Noise covariances are derived for this type of data, including the effects of photon noise and CCD read noise. The effectiveness of support constraints in achieving noise reduction is discussed in terms of these noise properties and of the types of algorithms used to enforce the support constraint. Both a convex-projections algorithm and a cost-function-minimization algorithm are used to enforce the support constraints, and it is shown with computer simulations and field data that the cost-function algorithm produces artifacts in the reconstructions. The convex-projections algorithm produced mean-square-error decreases in the image domain of approximately 10% at high light levels but essentially no error decrease at low light levels. We emphasize images that are well resolved by the telescope and adaptive-optics system.
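Why a support constraint reduces noise can be shown directly: projecting onto the support set zeroes every pixel, and hence all noise, outside the known object support. The sketch below is a minimal illustration of that single projection, not the paper's convex-projections or cost-function reconstruction algorithms; the image, support, and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# True object: nonzero only inside a known support region
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
support = truth > 0

# Simulated noisy observation: additive noise everywhere
noisy = truth + 0.1 * rng.normal(size=truth.shape)

# Convex projection onto the support constraint set:
# zero every pixel outside the known support
projected = np.where(support, noisy, 0.0)

mse_before = np.mean((noisy - truth) ** 2)
mse_after = np.mean((projected - truth) ** 2)
```

Because the support here covers only a small fraction of the frame, most of the noise energy lies outside it and is removed by the projection, so the mean-square error drops substantially; with a support covering nearly the whole frame the gain would shrink accordingly.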
Practical engineering of hard spin-glass instances
NASA Astrophysics Data System (ADS)
Marshall, Jeffrey; Martin-Mayor, Victor; Hen, Itay
2016-07-01
Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since then become a coveted, albeit elusive goal. Recent studies have shown that the so far inconclusive results, regarding a quantum enhancement, may have been partly due to the benchmark problems used being unsuitable. In particular, these problems had inherently too simple a structure, allowing for both traditional resources and quantum annealers to solve them with no special efforts. The need therefore has arisen for the generation of harder benchmarks which would hopefully possess the discriminative power to separate classical scaling of performance with size from quantum. We introduce here a practical technique for the engineering of extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm that solves it. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
Adaptive laser link reconfiguration using constraint propagation
NASA Technical Reports Server (NTRS)
Crone, M. S.; Julich, P. M.; Cook, L. M.
1993-01-01
This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space-based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BPs and to ground entry points; long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BPs based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-sight data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris-developed COnstraint Propagation Expert System (COPES) Shell, developed initially on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of constraint propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES, using Data Flow Diagrams, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. The generation of simulation data is described along with its interface via COPES to the Harris-developed View Net graphical tool for visual analysis of communications
A Constraint-Based Planner for Data Production
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Golden, Keith
2005-01-01
This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. The algorithm performs backtracking at two nested levels: the outer backtracking follows the structure of the planning graph to select planner subgoals and the actions to achieve them, while the inner backtracking works inside the subproblem associated with a selected action to find action parameter values. We show that this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.
Unraveling Quantum Annealers using Classical Hardness
NASA Astrophysics Data System (ADS)
Martin-Mayor, Victor; Hen, Itay
2015-10-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip.
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Asteroseismic constraints for Gaia
NASA Astrophysics Data System (ADS)
Creevey, O. L.; Thévenin, F.
2012-12-01
Distances from the Gaia mission will no doubt improve our understanding of stellar physics by providing an excellent constraint on the luminosity of the star. However, it is also clear that high-precision stellar properties from, for example, asteroseismology will provide a needed input constraint for calibrating the methods that Gaia will use, e.g. stellar models or GSP_Phot. For solar-like stars (F, G, K IV/V), asteroseismic data deliver at least two very important quantities: (1) the average large frequency separation ⟨Δν⟩ and (2) the frequency corresponding to the maximum of the modulated-amplitude spectrum, ν_max. Both of these quantities are related directly to stellar parameters (radius and mass) and in particular to their combinations (gravity and density). We show how the precision in ⟨Δν⟩, ν_max, and the atmospheric parameters T_eff and [Fe/H] affects the determination of gravity (log g) for a sample of well-known stars. We find that log g can be determined to better than 0.02 dex accuracy for our sample while assuming the data precisions expected for V ~ 12 stars from Kepler data. We also derive masses and radii which are accurate to within 1σ of the accepted values. This study validates the subsequent use of all of the available asteroseismic data on solar-like stars from the Kepler field (>500 IV/V stars) to provide a very important constraint for the Gaia calibration of GSP_Phot through the use of log g. We note that while we concentrate on IV/V stars, both the CoRoT and Kepler fields contain asteroseismic data on thousands of giant stars which will also provide useful calibration measures.
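The direct link between ν_max and surface gravity noted above is usually expressed through the standard asteroseismic scaling relation g ∝ ν_max √T_eff. A minimal sketch, using approximate solar reference values (the exact calibration constants vary slightly between studies):

```python
import math

# Scaling relation for surface gravity:
#   g / g_sun = (nu_max / nu_max_sun) * sqrt(Teff / Teff_sun)
# Solar reference values below are approximate.
NU_MAX_SUN = 3090.0   # muHz
TEFF_SUN = 5777.0     # K
LOGG_SUN = 4.44       # log g in cgs dex

def seismic_logg(nu_max, teff):
    """log g (dex) from nu_max (muHz) and Teff (K)."""
    return LOGG_SUN + math.log10(
        (nu_max / NU_MAX_SUN) * math.sqrt(teff / TEFF_SUN))

# Solar input recovers the solar value
logg = seismic_logg(3090.0, 5777.0)
```

A red giant with ν_max of a few tens of μHz lands around log g ≈ 2.4 by the same formula, which is why a single seismic observable pins down gravity so tightly for both dwarfs and giants.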
Future hard disk drive systems
NASA Astrophysics Data System (ADS)
Wood, Roger
2009-03-01
This paper briefly reviews the evolution of today's hard disk drive with the additional intention of orienting the reader to the overall mechanical and electrical architecture. The modern hard disk drive is a miracle of storage capacity and function together with remarkable economy of design. This paper presents a personal view of future customer requirements and the anticipated design evolution of the components. There are critical decisions and great challenges ahead for the key technologies of heads, media, head-disk interface, mechanics, and electronics.
Magnetic levitation for hard superconductors
Kordyuk, A.A.
1998-01-01
An approach for calculating the interaction between a hard superconductor and a permanent magnet in the field-cooled case is proposed. Exact solutions were obtained for a point magnetic dipole over a flat, ideally hard superconductor. We have shown that such an approach is adaptable to a wide practical range of melt-textured high-temperature superconductor systems with magnetic levitation. In this case, the energy losses can be calculated from the alternating magnetic field distribution on the superconducting sample surface. © 1998 American Institute of Physics.
TRACON Aircraft Arrival Planning and Optimization Through Spatial Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Bergh, Christopher P.; Krzeczowski, Kenneth J.; Davis, Thomas J.; Denery, Dallas G. (Technical Monitor)
1995-01-01
A new aircraft arrival planning and optimization algorithm has been incorporated into the Final Approach Spacing Tool (FAST) in the Center-TRACON Automation System (CTAS) developed at NASA-Ames Research Center. FAST simulations have been conducted over three years involving full-proficiency, level five air traffic controllers from around the United States. From these simulations an algorithm, called Spatial Constraint Satisfaction, has been designed, coded, and tested, and it soon will begin field evaluation at the Dallas-Fort Worth and Denver International airport facilities. The purpose of this new design is to show that generating efficient and conflict-free aircraft arrival plans at the runway does not guarantee an operationally acceptable arrival plan upstream of the runway; information encompassing the entire arrival airspace must be used in order to create an acceptable aircraft arrival plan. The new design includes functions available previously but additionally includes the necessary representations of controller preferences and workload and of operationally required amounts of extra separation, and it integrates aircraft conflict resolution. As a result, the Spatial Constraint Satisfaction algorithm produces an optimized aircraft arrival plan that is more acceptable in terms of arrival procedures and air traffic controller workload. This paper discusses current Air Traffic Control arrival planning procedures, previous work in this field, the design of the Spatial Constraint Satisfaction algorithm, and the results of recent evaluations of the algorithm.
Evaluation of Open-Source Hard Real Time Software Packages
NASA Technical Reports Server (NTRS)
Mattei, Nicholas S.
2004-01-01
Reliable software is, at times, hard to find. No piece of software can be guaranteed to work in every situation that may arise during its use here at Glenn Research Center or in space. The job of the Software Assurance (SA) group in the Risk Management Office is to rigorously test the software in an effort to ensure it matches the contract specifications. In some cases the SA team also researches new alternatives for selected software packages. This testing and research is an integral part of the department of Safety and Mission Assurance. Real-time operation, in reference to a computer system, is a particular style of handling the timing and manner in which inputs and outputs are processed. A real-time system executes these commands and the appropriate processing within a defined timing constraint. Within this definition there are two classifications of real-time systems: hard and soft. A soft real-time system is one in which, if the particular timing constraints are not rigidly met, there will be no critical results. On the other hand, a hard real-time system is one in which, if the timing constraints are not met, the results could be catastrophic. An example of a soft real-time system is a DVD decoder: if a particular piece of data from the input is not decoded and displayed on the screen at exactly the correct moment, nothing critical comes of it, and the user may not even notice. However, a hard real-time system is needed to control the timing of fuel injections or steering on the Space Shuttle; a delay of even a fraction of a second could be catastrophic in such a complex system. The current real-time system employed by most NASA projects is Wind River's VxWorks operating system. This is a proprietary operating system that can be configured to work with many of NASA's needs, and it provides very accurate and reliable hard real-time performance. The downside is that since it is a proprietary operating system it is also costly to implement. The prospect of
On handling ephemeral resource constraints in evolutionary search.
Allmendinger, Richard; Knowles, Joshua
2013-01-01
We consider optimization problems where the set of solutions available for evaluation at any given time t during optimization is some subset of the feasible space. This model is appropriate to describe many closed-loop optimization settings (i.e., where physical processes or experiments are used to evaluate solutions) where, due to resource limitations, it may be impossible to evaluate particular solutions at particular times (despite the solutions being part of the feasible space). We call the constraints determining which solutions are non-evaluable ephemeral resource constraints (ERCs). In this paper, we investigate two specific types of ERC: one encodes periodic resource availabilities, the other models commitment constraints that make the evaluable part of the space a function of earlier evaluations conducted. In an experimental study, both types of constraint are seen to impact the performance of an evolutionary algorithm significantly. To deal with the effects of the ERCs, we propose and test five different constraint-handling policies (adapted from those used to handle standard constraints), using a number of different test functions including a fitness landscape from a real closed-loop problem. We show that knowing information about the type of resource constraint in advance may be sufficient to select an effective policy for dealing with it, even when advance knowledge of the fitness landscape is limited.
Dynamic indentation hardness of materials
NASA Astrophysics Data System (ADS)
Koeppel, Brian James
Indentation hardness is one of the simplest and most commonly used measures for quickly characterizing material response under static loads. Hardness may mean resistance to cutting to a machinist, resistance to wear to a tribologist, or a measure of flow stress to a design engineer. In this simple technique, a predetermined force is applied to an indenter for 5-30 seconds, causing it to penetrate a specimen. By measuring the load and the indentation size, a hardness value is determined. However, the rate of deformation during indenter penetration is of the order of 10^-4 s^-1. In most practical applications, such as high speed machining or impact, material deforms at strain rates in excess of 10^3-10^5 s^-1. At such high rates, it is well established that the plastic behavior of materials is considerably different from their static counterpart. For example, materials exhibit an increase in their yield stress, flow stress, fracture stress, and fracture toughness at high strain rates. Hence, the use of static hardness as an indicator of material response under dynamic loads may not be appropriate. Accordingly, a simple dynamic indentation hardness tester is developed for characterizing materials at strain rates similar to those encountered in realistic situations. The experimental technique uses elastic stress wave propagation phenomena in a slender rod. The technique is designed to deliver a single indentation load of 100-200 μs duration. Similar to static measurements, the dynamic hardness is determined from the measured load and indentation size. Hardness measurements on a range of metals have revealed that the dynamic hardness is consistently greater than the static hardness. The increase in hardness is strongly dependent on the crystal structure of the material. The observed trends in hardness are also found to be consistent with the yield and flow stresses of these materials under uniaxial compression. Therefore, it is suggested that the
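The static procedure described above, deriving a hardness value from a measured load and indentation size, can be illustrated with the standard Vickers relation HV = 1.8544 F/d². This is a generic sketch, not the thesis's instrumentation code; the function name is an assumption:

```python
def vickers_hardness(load_kgf, diagonal_mm):
    """Static Vickers hardness from the applied load (kgf) and the mean
    indentation diagonal (mm): HV = 1.8544 F / d^2, the geometric relation
    for the standard 136-degree diamond pyramid indenter."""
    if diagonal_mm <= 0:
        raise ValueError("indentation diagonal must be positive")
    return 1.8544 * load_kgf / diagonal_mm ** 2

# A 10 kgf load leaving a 0.2 mm mean diagonal gives HV of roughly 464
print(round(vickers_hardness(10, 0.2)))
```

The same load/size relation is what the dynamic tester reuses; only the loading duration changes.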
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hotspot in electronic business, and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) and the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem based on the theories of AFSA. Experiment results show that the HAFSA is a rapid and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization Algorithm, it performs well and has broad application prospects.
WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations
NASA Astrophysics Data System (ADS)
Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi
We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
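The contrast drawn above between Steiner-tree-based and Dijkstra-based tree construction can be sketched with the classic shortest-path heuristic for Steiner trees, which grows a multicast tree by repeatedly attaching the nearest unconnected destination along a shortest path. This is a generic illustration, not the authors' WST/CWST implementation; the adjacency-list format and function names are assumptions:

```python
import heapq

def dijkstra(adj, sources):
    """Multi-source Dijkstra: shortest distance and predecessor from the
    nearest node of `sources`. adj maps node -> list of (neighbor, weight)."""
    dist = {v: float("inf") for v in adj}
    prev = {}
    for s in sources:
        dist[s] = 0
    pq = [(0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def steiner_tree_edges(adj, source, destinations):
    """Path heuristic for Steiner trees: grow a tree from `source`,
    repeatedly attaching the closest unconnected destination along a
    shortest path into the current tree."""
    tree = {source}
    edges = []
    remaining = set(destinations) - tree
    while remaining:
        dist, prev = dijkstra(adj, tree)
        target = min(remaining, key=lambda v: dist[v])
        v = target                        # walk the path back into the tree
        while v not in tree:
            u = prev[v]
            edges.append((u, v))
            tree.add(v)
            v = u
        remaining -= tree
    return edges
```

Because paths toward different destinations re-enter the shared tree, link resources tend to be reused, the property the abstract credits for improved link utilization.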
Metrics for Hard Goods Merchandising.
ERIC Educational Resources Information Center
Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.
Designed to meet the job-related metric measurement needs of students interested in hard goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…
Playing the Numbers: Hard Choices
ERIC Educational Resources Information Center
Doyle, William R.
2009-01-01
Stateline.org recently called this recession the worst in 50 years for state budgets. As has been the case in past economic downturns, higher education looks to be particularly hard hit. Funds from the American Recovery and Relief Act may have postponed some of the difficulty for many colleges and universities, but the outlook for public higher…
ERIC Educational Resources Information Center
Atwell, Nancie
2003-01-01
Writers thrive when they are motivated to work hard, have regular opportunities to practice and reflect, and benefit from a knowledgeable teacher who knows writing. Student feedback to lessons during writing workshop helped guide Nancie Atwell in her quest to provide the richest and most efficient path to better writing.
Approximate learning algorithm in Boltzmann machines.
Yasuda, Muneki; Tanaka, Kazuyuki
2009-11-01
Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning systems in Boltzmann machines are one of the NP-hard problems. Thus, in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
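The learning rule whose expectations the letter approximates can be shown exactly for a tiny network: moment-matching gradient ascent needs the model averages ⟨s_i⟩ and ⟨s_i s_j⟩, which brute-force enumeration supplies only for a handful of units, which is precisely why belief propagation and linear response approximations are needed at scale. A minimal sketch (not the authors' algorithm; names are ours):

```python
import itertools
import math

def model_moments(W, b):
    """Exact model averages <s_i> and <s_i s_j> of a +/-1 Boltzmann machine
    by enumerating all 2^n states -- tractable only for a handful of units."""
    n = len(b)
    Z, m1, m2 = 0.0, [0.0] * n, [[0.0] * n for _ in range(n)]
    for s in itertools.product([-1, 1], repeat=n):
        energy = -sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(i + 1, n))
        energy -= sum(b[i] * s[i] for i in range(n))
        p = math.exp(-energy)             # unnormalized Boltzmann weight
        Z += p
        for i in range(n):
            m1[i] += p * s[i]
            for j in range(n):
                m2[i][j] += p * s[i] * s[j]
    return [x / Z for x in m1], [[x / Z for x in row] for row in m2]

def learning_step(W, b, data_m1, data_m2, lr=0.1):
    """Moment-matching gradient ascent on the log-likelihood:
    dW_ij = lr * (<s_i s_j>_data - <s_i s_j>_model), upper triangle only."""
    m1, m2 = model_moments(W, b)
    n = len(b)
    for i in range(n):
        b[i] += lr * (data_m1[i] - m1[i])
        for j in range(i + 1, n):
            W[i][j] += lr * (data_m2[i][j] - m2[i][j])
    return W, b
```

The paper's contribution is, in effect, replacing `model_moments` with belief-propagation and linear-response estimates so the same update remains practical for large n.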
Algorithm-Independent Framework for Verifying Integer Constraints
2007-11-02
TAL [7], Crary and Weirich [5]'s resource bound certification, Wang and Appel [16]'s safe garbage collection and an attempt at making the whole PCC...S. Weirich. Resource bound certification. In Proc. 27th Annual ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages. ACM Press, 2000. 30
Numerical prediction of microstructure and hardness in multicycle simulations
NASA Astrophysics Data System (ADS)
Oddy, A. S.; McDill, J. M. J.
1996-06-01
Thermal-microstructural predictions are made and compared to physical simulations of heat-affected zones in multipass and weaved welds. The microstructural prediction algorithm includes reaustenitization kinetics, grain growth, austenite decomposition kinetics, hardness, and tempering. Microstructural simulation of weaved welds requires that the algorithm include transient reaustenitization, austenite decomposition for arbitrary thermal cycles including during reheating, and tempering. Material properties for each of these phenomena are taken from the best available literature. The numerical predictions are compared with the results of physical simulations made at the Metals Technology Laboratory, CANMET, on a Gleeble 1500 simulator. Thermal histories used in the physical simulations included single-pass welds, isothermal tempering, two-cycle, and three-cycle welds. The two- and three-cycle welds include temper-bead and weaved-weld simulations. A recurring theme in the analysis is the significant variation found in the material properties for the same grade of steel. This affected all the material properties used including those governing reaustenitization, austenite grain growth, austenite decomposition, and hardness. Hardness measurements taken from the literature show a variation of ±5 to 30 HV on the same sample. Alloy differences within the allowable range also led to hardness variations of ±30 HV for the heat-affected zone of multipass welds. The predicted hardnesses agree extremely well with those taken from the physical simulations. Some differences due to problems with the austenite decomposition properties were noted in that bainite formation was predicted to occur somewhat more rapidly than was found experimentally. Reaustenitization values predicted during the rapid excursions to intercritical temperatures were also in good qualitative agreement with those measured experimentally.
Hard processes in hadronic interactions
Satz, H.; Wang, X.N.
1995-07-01
Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason - and in fact its original trigger - is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes to be considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour it leads to at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time and work to it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley.
A global approach to kinematic path planning to robots with holonomic and nonholonomic constraints
NASA Technical Reports Server (NTRS)
Divelbiss, Adam; Seereeram, Sanjeev; Wen, John T.
1993-01-01
Robots in applications may be subject to holonomic or nonholonomic constraints. Examples of holonomic constraints include a manipulator constrained through the contact with the environment, e.g., inserting a part, turning a crank, etc., and multiple manipulators constrained through a common payload. Examples of nonholonomic constraints include no-slip constraints on mobile robot wheels, local normal rotation constraints for soft finger and rolling contacts in grasping, and conservation of angular momentum of in-orbit space robots. The above examples all involve equality constraints; in applications, there are usually additional inequality constraints such as robot joint limits, self collision and environment collision avoidance constraints, steering angle constraints in mobile robots, etc. The problem of finding a kinematically feasible path that satisfies a given set of holonomic and nonholonomic constraints, of both equality and inequality types is addressed. The path planning problem is first posed as a finite time nonlinear control problem. This problem is subsequently transformed to a static root finding problem in an augmented space which can then be iteratively solved. The algorithm has shown promising results in planning feasible paths for redundant arms satisfying Cartesian path following and goal endpoint specifications, and mobile vehicles with multiple trailers. In contrast to local approaches, this algorithm is less prone to problems such as singularities and local minima.
Neural constraints on learning.
Sadtler, Patrick T; Quick, Kristin M; Golub, Matthew D; Chase, Steven M; Ryu, Stephen I; Tyler-Kabara, Elizabeth C; Yu, Byron M; Batista, Aaron P
2014-08-28
Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already
Simulation of Hard Shadows on Large Spherical Terrains
NASA Astrophysics Data System (ADS)
Aslandere, Turgay; Flatken, Markus; Gerndt, Andreas
2016-12-01
Real-time rendering of high precision shadows using digital terrain models as input data is a challenging task. Especially when interactivity is targeted and level of detail data structures are utilized to tackle huge amount of data. In this paper, we present a real-time rendering approach for the computation of hard shadows using large scale digital terrain data obtained by satellite imagery. Our approach is based on an extended horizon mapping algorithm that avoids costly pre-computations and ensures high accuracy. This algorithm is further developed to handle large data. The proposed algorithms take the surface curvature of the large spherical bodies into account during the computation. The performance issues are discussed and the results are presented. The generated images can be exploited in 3D research and aerospace related areas.
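Horizon mapping, the basis of the algorithm above, classifies a point as shadowed when the sun's elevation falls below the maximum horizon angle toward the sun. A minimal 1-D heightfield sketch (flat terrain, no curvature correction, names assumed), not the paper's level-of-detail renderer:

```python
import math

def horizon_angle(heights, i, spacing=1.0):
    """Greatest elevation angle from sample i toward increasing x, taken
    here as the sun's azimuth direction over a 1-D heightfield."""
    best = -math.pi / 2
    for j in range(i + 1, len(heights)):
        best = max(best, math.atan2(heights[j] - heights[i], (j - i) * spacing))
    return best

def in_shadow(heights, i, sun_elevation):
    """A sample lies in hard shadow when the sun sits below its horizon."""
    return sun_elevation < horizon_angle(heights, i)

# A 5-unit peak three samples away shadows the origin for sun elevations
# below atan(5/3), about 59 degrees.
```

The paper's extension replaces this flat-ground geometry with angles measured on a spherical body and avoids the O(n) scan per sample via its hierarchical data structures.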
Placement with Symmetry Constraints for Analog IC Layout Design Based on Tree Representation
NASA Astrophysics Data System (ADS)
Hirakawa, Natsumi; Fujiyoshi, Kunihiro
Symmetry constraints are constraints requiring that given cells be placed symmetrically in the design of analog ICs. We use an O-tree to represent placements and propose a decoding algorithm which can obtain one of the minimum placements satisfying the constraints. The decoding algorithm uses linear programming, which is too time-consuming. Therefore, we propose a graph-based method to recognize when no placement satisfies both the given symmetry and O-tree constraints, and apply the method before resorting to linear programming. The effectiveness of the proposed method was shown by computational experiments.
ϑ-SHAKE: An extension to SHAKE for the explicit treatment of angular constraints
NASA Astrophysics Data System (ADS)
Gonnet, Pedro; Walther, Jens H.; Koumoutsakos, Petros
2009-03-01
This paper presents ϑ-SHAKE, an extension to SHAKE, an algorithm for the resolution of holonomic constraints in molecular dynamics simulations, which allows for the explicit treatment of angular constraints. We show that this treatment is more efficient than the use of fictitious bonds, significantly reducing the overlap between the individual constraints and thus accelerating convergence. The new algorithm is compared with SHAKE, M-SHAKE, the matrix-based approach described by Ciccotti and Ryckaert and P-SHAKE for rigid water and octane.
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
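The search baseline that omega-CDBT builds on is chronological backtracking: extend a partial assignment one variable at a time and backtrack on a constraint violation. A minimal sketch for binary CSPs (the constraint representation is our assumption, not the paper's):

```python
def consistent(var, value, assignment, constraints):
    """Check `var = value` against every constraint touching an assigned
    variable; constraints are (x, y, predicate) triples."""
    for x, y, pred in constraints:
        if x == var and y in assignment and not pred(value, assignment[y]):
            return False
        if y == var and x in assignment and not pred(assignment[x], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    """Chronological backtracking: returns a complete consistent
    assignment, or None if the CSP has no solution."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]             # undo and try the next value
    return None
```

Decomposition methods exploit the structure of the constraint graph that this brute-force search ignores; the paper's hybrid keeps the graph awareness without requiring the CSP to be decomposable.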
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
Nanomechanics of hard films on compliant substrates.
Reedy, Earl David, Jr.; Emerson, John Allen; Bahr, David F.; Moody, Neville Reid; Zhou, Xiao Wang; Hales, Lucas; Adams, David Price; Yeager, John; Nyugen, Thao D.; Corona, Edmundo; Kennedy, Marian S.; Cordill, Megan J.
2009-09-01
Development of flexible thin film systems for biomedical, homeland security and environmental sensing applications has increased dramatically in recent years [1,2,3,4]. These systems typically combine traditional semiconductor technology with new flexible substrates, allowing for both the high electron mobility of semiconductors and the flexibility of polymers. The devices have the ability to be easily integrated into components and show promise for advanced design concepts, ranging from innovative microelectronics to MEMS and NEMS devices. These devices often contain layers of thin polymer, ceramic and metallic films where differing properties can lead to large residual stresses [5]. As long as the films remain substrate-bonded, they may deform far beyond their freestanding counterpart. Once debonded, substrate constraint disappears leading to film failure where compressive stresses can lead to wrinkling, delamination, and buckling [6,7,8] while tensile stresses can lead to film fracture and decohesion [9,10,11]. In all cases, performance depends on film adhesion. Experimentally it is difficult to measure adhesion. It is often studied using tape [12], pull off [13,14,15], and peel tests [16,17]. More recent techniques for measuring adhesion include scratch testing [18,19,20,21], four point bending [22,23,24], indentation [25,26,27], spontaneous blisters [28,29] and stressed overlayers [7,26,30,31,32,33]. Nevertheless, sample design and test techniques must be tailored for each system. There is a large body of elastic thin film fracture and elastic contact mechanics solutions for elastic films on rigid substrates in the published literature [5,7,34,35,36]. More recent work has extended these solutions to films on compliant substrates and show that increasing compliance markedly changes fracture energies compared with rigid elastic solution results [37,38]. However, the introduction of inelastic substrate response significantly complicates the problem [10,39,40]. As
Low dose hard x-ray contact microscopy assisted by a photoelectric conversion layer
Gomella, Andrew; Martin, Eric W.; Lynch, Susanna K.; Wen, Han; Morgan, Nicole Y.
2013-04-15
Hard x-ray contact microscopy provides images of dense samples at resolutions of tens of nanometers. However, the required beam intensity can only be delivered by synchrotron sources. We report on the use of a gold photoelectric conversion layer to lower the exposure dose by a factor of 40 to 50, allowing hard x-ray contact microscopy to be performed with a compact x-ray tube. We demonstrate the method in imaging the transmission pattern of a type of hard x-ray grating that cannot be fitted into conventional x-ray microscopes due to its size and shape. Generally the method is easy to implement and can record images of samples in the hard x-ray region over a large area in a single exposure, without some of the geometric constraints associated with x-ray microscopes based on zone-plate or other magnifying optics.
Optimal filter design subject to output sidelobe constraints
NASA Technical Reports Server (NTRS)
Fortmann, T. E.; Athans, M.
1972-01-01
The design of filters for detection and estimation in radar and communications systems is considered, with inequality constraints on the maximum output sidelobe levels. A constrained optimization problem in Hilbert space is formulated, incorporating the sidelobe constraints via a partial ordering of continuous functions. Generalized versions (in Hilbert space) of the Kuhn-Tucker and Duality Theorems allow the reduction of this problem to an unconstrained one in the dual space of regular Borel measures. A convergent algorithm is presented for computational solution of the dual problem.
Improvement of MEM-deconvolution by an additional constraint
NASA Astrophysics Data System (ADS)
Reiter, J.; Pfleiderer, J.
1986-09-01
An attempt is made to improve existing versions of the maximum entropy method (MEM) and their understanding. Additional constraints are discussed, especially the T-statistic which can significantly reduce the correlation between residuals and model. An implementation of the T constraint into MEM requires a new numerical algorithm, which is made to work most efficiently on modern vector-processing computers. The entropy functional is derived from simple mathematical assumptions. The new MEM version is tested with radio data of NGC 6946 and optical data from M 87.
Treatment of inequality constraints in power system state estimation
Clements, K.A.; Davis, P.W.; Frey, K.D.
1995-05-01
A new formulation of the power system state estimation problem and a new solution technique are presented. The formulation allows for inequality constraints such as Var limits on generators and transformer tap ratio limits. In addition, unmeasured loads can be modeled as unknown but bounded quantities. The solution technique is an interior point method that uses logarithmic barrier functions to treat the inequality constraints. The authors describe computational issues arising in the implementation of the algorithm. Numerical results are given for systems ranging in size from six to 118 buses.
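The logarithmic-barrier treatment of inequality constraints can be illustrated on a one-dimensional problem: minimize (x−3)² subject to x ≤ 1 by minimizing (x−3)² − μ log(1−x) for a decreasing barrier parameter μ. This toy sketch mirrors the idea only, not the paper's power-system state estimator:

```python
def barrier_minimize(mu0=1.0, shrink=0.5, stages=12):
    """Minimize (x-3)^2 subject to x <= 1 via the log-barrier function
    phi(x) = (x-3)^2 - mu*log(1-x): solve a sequence of unconstrained
    problems with Newton's method while shrinking the barrier weight mu."""
    x, mu = 0.0, mu0
    for _ in range(stages):
        for _ in range(50):                       # Newton iterations on phi
            g = 2.0 * (x - 3.0) + mu / (1.0 - x)  # phi'(x)
            h = 2.0 + mu / (1.0 - x) ** 2         # phi''(x) > 0 (convex)
            step = g / h
            while x - step >= 1.0:                # damp to stay strictly feasible
                step *= 0.5
            x -= step
        mu *= shrink                              # tighten the barrier
    return x

# The iterates approach the constrained optimum x* = 1 from the interior,
# which is the defining behavior of an interior point method.
```

Shrinking μ trades barrier steepness for accuracy; the same mechanism keeps Var limits and tap-ratio limits strictly satisfied at every iterate of the state estimator.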
Hard x-ray imaging polarimeter for PolariS
NASA Astrophysics Data System (ADS)
Hayashida, Kiyoshi; Kim, Juyong; Sadamoto, Masaaki; Yoshinaga, Keigo; Gunji, Shuichi; Mihara, Tatehiro; Kishimoto, Yuji; Kubo, Hidetoshi; Mizuno, Tsunefumi; Takahashi, Hiromitsu; Dotani, Tadayasu; Yonetoku, Daisuke; Nakamori, Takeshi; Yoneyama, Tomokage; Ikeyama, Yuki; Kamitsukasa, Fumiyoshi
2016-07-01
Hard X-ray imaging polarimeters are developed for the X-ray/γ-ray polarimetry satellite PolariS. The imaging polarimeter is of the scattering type, in which anisotropy in the direction of Compton scattering is employed to measure hard X-ray (10-80 keV) polarization, and is installed on the focal planes of hard X-ray telescopes. We have updated the design of the model so as to cover larger solid angles of the scattering direction. We also examine the event selection algorithm to optimize the detection efficiency of recoiled electrons in the plastic scintillators. We succeed in improving the efficiency by a factor of about 3-4 over the previous algorithm and criteria for 18-30 keV incidence. For 23 keV X-ray incidence, the recoiled electron energy is about 1 keV. We measured the efficiency of detecting recoiled electrons in this case, and found it to be about half of the theoretical limit. An improvement in this efficiency directly leads to one in the detection efficiency; in other words, there is still room for improvement. We examine various processes in the detector, and estimate that the major loss is primarily of scintillation light in a plastic scintillator pillar with a very small cross section (2.68 mm square) and a long length (40 mm). Nevertheless, the current model provides an MDP of 6% for 10 mCrab sources, which are the targets of PolariS.
Transpecific microsatellites for hard pines.
Shepherd, M.; Cross, M.; Maguire, L.; Dieters, J.; Williams, G.; Henry, J.
2002-04-01
Microsatellites are difficult to recover from large plant genomes so cross-specific utilisation is an important source of markers. Fifty microsatellites were tested for cross-specific amplification and polymorphism to two New World hard pine species, slash pine ( Pinus elliottii var. elliottii) and Caribbean pine ( P. caribaea var. hondurensis). Twenty-nine (58%) markers amplified in both hard pine species, and 23 of these 29 were polymorphic. Soft pine (subgenus Strobus) microsatellite markers did amplify, but none were polymorphic. Pinus elliottii var. elliottii and P. caribaea var. hondurensis showed mutational changes in the flanking regions and the repeat motif that were informative for Pinus spp. phylogenetic relationships. Most allele length variation could be attributed to variability in repeat unit number. There was no evidence for ascertainment bias.
Ultrasonic material hardness depth measurement
Good, Morris S.; Schuster, George J.; Skorpik, James R.
1997-01-01
The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part.
Ultrasonic material hardness depth measurement
Good, M.S.; Schuster, G.J.; Skorpik, J.R.
1997-07-08
The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part. 12 figs.
Improving the Held and Karp Approach with Constraint Programming
NASA Astrophysics Data System (ADS)
Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan
Held and Karp have proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explored two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
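The 1-tree relaxation at the heart of the Held-Karp approach can be sketched in a few lines: a minimum spanning tree over all cities but one special vertex, plus the two cheapest edges incident to that vertex. The sketch below (function name and the Prim-style inner loop are our own illustration, not code from the paper) computes this lower bound for a symmetric distance matrix:

```python
def one_tree_bound(dist):
    """Held-Karp 1-tree lower bound for the symmetric TSP.

    dist is a symmetric distance matrix (list of lists); node 0 is
    the special vertex excluded from the spanning tree.
    """
    n = len(dist)
    # Prim's algorithm: minimum spanning tree over nodes 1..n-1.
    near = {v: dist[1][v] for v in range(2, n)}
    mst = 0.0
    while near:
        v = min(near, key=near.get)
        mst += near.pop(v)
        for u in near:
            if dist[v][u] < near[u]:
                near[u] = dist[v][u]
    # Attach node 0 through its two cheapest incident edges.
    return mst + sum(sorted(dist[0][v] for v in range(1, n))[:2])
```

Since every tour is a 1-tree, this value never exceeds the optimal tour length; the full Held-Karp procedure additionally adjusts node penalties to tighten the bound.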
Right ventricle segmentation with probability product kernel constraints.
Nambakhsh, Cyrus M S; Peters, Terry M; Islam, Ali; Ayed, Ismail Ben
2013-01-01
We propose a fast algorithm for 3D segmentation of the right ventricle (RV) in MRI using shape and appearance constraints based on probability product kernels (PPK). The proposed constraints remove the need for large, manually-segmented training sets and costly pose estimation (or registration) procedures, as is the case with existing algorithms. We report comprehensive experiments, which demonstrate that the proposed algorithm (i) requires only a single subject for training; and (ii) yields a performance that is not significantly affected by the choice of the training data. Our PPK constraints are non-linear (high-order) functionals, which are not directly amenable to standard optimizers. We split the problem into several surrogate-functional optimizations, each solved via an efficient convex relaxation that is amenable to parallel implementations. We further introduce a scale variable that we optimize with fast fixed-point computations, thereby achieving pose invariance in real-time. Our parallelized implementation on a graphics processing unit (GPU) demonstrates that the proposed algorithm can yield a real-time solution for typical cardiac MRI volumes, with a speed-up of more than 20 times compared to the CPU version. We report comprehensive experimental validation over 400 volumes acquired from 20 subjects, and demonstrate that the obtained 3D surfaces correlate with independent manual delineations.
NASA Astrophysics Data System (ADS)
Cutbill, Adam; Wang, G. Gary
2016-01-01
Constraints are necessary in optimization problems to steer optimization algorithms away from solutions which are not feasible or practical. However, redundant constraints are often added, which needlessly complicate the problem's description. This article introduces a probabilistic method to identify redundant inequality constraints for black-box optimization problems. The method uses Jaccard similarity to find item groups where the occurrence of a single item implies the occurrence of all other items in the group. The remaining groups are then mined with association analysis. Furthermore, unnecessary constraints are classified as redundant owing to co-occurrence, implication or covering. These classifications are presented as rules (in readable text), to indicate the relationships among constraints. The algorithm is applied to mathematical problems and to the engineering design of a pressure vessel. It was found that the rules are informative and correct, based on the available samples. Limitations of the proposed methods are also discussed.
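The Jaccard-based grouping step described above can be illustrated with a small sketch. The function names and the violation-set representation are assumptions for illustration only; the paper's full pipeline (association analysis, rule generation, the redundancy classifications) is not reproduced here:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets of sample indices."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def redundancy_groups(violations, threshold=1.0):
    """Group constraints whose violation patterns co-occur.

    violations: dict mapping constraint name -> set of indices of
    sampled points that violate it. Constraints with Jaccard
    similarity >= threshold are grouped: within a group, violating
    one constraint implies violating the others, on the evidence of
    the available samples only.
    """
    names = list(violations)
    groups, used = [], set()
    for i, a in enumerate(names):
        if a in used:
            continue
        group = [a]
        for b in names[i + 1:]:
            if b not in used and jaccard(violations[a], violations[b]) >= threshold:
                group.append(b)
                used.add(b)
        used.add(a)
        groups.append(group)
    return groups
```

As in the article, conclusions drawn this way are only as reliable as the sample coverage: a constraint that never appears violated in the samples looks redundant even if it binds elsewhere.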
Sahoo, Pradyumna Kumar; Mandal, Palash Kumar; Ghosh, Saradindu
2014-01-01
Schwannomas are benign encapsulated perineural tumors. The head and neck region is the most common site. Intraoral origin is seen in only 1% of cases, tongue being the most common site; its location in the palate is rare. We report a case of hard-palate schwannoma with bony erosion which was immunohistochemically confirmed. The tumor was excised completely intraorally. After two months of follow-up, the defect was found to be completely covered with palatal mucosa. PMID:25298716
NASA Astrophysics Data System (ADS)
Giorgi, Marco
2005-06-01
For the next generation of High Energy Physics (HEP) experiments, silicon microstrip detectors that deliver excellent performance while operating in harsh radiation environments are necessary. The irradiation causes bulk and surface damage that modifies the electrical properties of the detector. Solutions such as AC-coupled strips, overhanging metal contacts, <100> crystal lattice orientation, low-resistivity n-bulk and oxygenated substrates are studied for rad-hard detectors. The paper presents an overview of these technologies.
Microwave assisted hard rock cutting
Lindroth, David P.; Morrell, Roger J.; Blair, James R.
1991-01-01
An apparatus for the sequential fracturing and cutting of a subsurface volume of hard rock (102) in the strata (101) of a mining environment (100) by subjecting the volume of rock to a beam (25) of microwave energy to fracture the subsurface volume of rock by differential expansion, and then bringing the cutting edge (52) of a piece of conventional mining machinery (50) into contact with the fractured rock (102).
Self-organization and clustering algorithms
NASA Technical Reports Server (NTRS)
Bezdek, James C.
1991-01-01
Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
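The contrast between the hard and fuzzy c-means algorithms compared above comes down to the membership update: HCM assigns each point wholly to its nearest prototype, while FCM spreads membership across all prototypes. A minimal sketch of the standard FCM membership step (textbook form, not code from the paper):

```python
import math

def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership update.

    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)), with fuzzifier m > 1.
    Hard c-means would instead set the nearest center's membership
    to 1 and all others to 0.
    """
    U = []
    p = 2.0 / (m - 1.0)
    for x in points:
        d = [math.dist(x, c) for c in centers]
        if 0.0 in d:
            # Point coincides with a center: crisp membership.
            k = d.index(0.0)
            U.append([1.0 if j == k else 0.0 for j in range(len(centers))])
            continue
        U.append([1.0 / sum((d[i] / d[j]) ** p for j in range(len(centers)))
                  for i in range(len(centers))])
    return U
```

A full FCM iteration would alternate this update with recomputing centers as membership-weighted means until convergence.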
Naeem, Muhammad; Pareek, Udit; Lee, Daniel C.; Anpalagan, Alagan
2013-01-01
Due to the rapid increase in the usage and demand of wireless sensor networks (WSN), the limited frequency spectrum available for WSN applications will be extremely crowded in the near future. More sensor devices also mean more recharging/replacement of batteries, which will cause significant impact on the global carbon footprint. In this paper, we propose a relay-assisted cognitive radio sensor network (CRSN) that allocates communication resources in an environmentally friendly manner. We use shared band amplify and forward relaying for cooperative communication in the proposed CRSN. We present a multi-objective optimization architecture for resource allocation in a green cooperative cognitive radio sensor network (GC-CRSN). The proposed multi-objective framework jointly performs relay assignment and power allocation in GC-CRSN, while optimizing two conflicting objectives. The first objective is to maximize the total throughput, and the second objective is to minimize the total transmission power of CRSN. The proposed relay assignment and power allocation problem is a non-convex mixed-integer non-linear optimization problem (NC-MINLP), which is generally non-deterministic polynomial-time (NP)-hard. We introduce a hybrid heuristic algorithm for this problem. The hybrid heuristic includes an estimation-of-distribution algorithm (EDA) for performing power allocation and iterative greedy schemes for constraint satisfaction and relay assignment. We analyze the throughput and power consumption tradeoff in GC-CRSN. A detailed analysis of the performance of the proposed algorithm is presented with the simulation results. PMID:23584119
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-01-01
Integration of production planning and scheduling is a class of problems commonly found in manufacturing industry. This class of problems associated with precedence constraint has been previously modeled and optimized by the authors, in which, it requires a multidimensional optimization at the same time: what to make, how many to make, where to make and the order to make. It is a combinatorial, NP-hard problem, for which no polynomial time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, GA with new features in chromosome encoding, crossover, mutation, selection as well as algorithm structure is developed herein. With the proposed structure, the proposed GA is able to "learn" from its experience. Robustness of the proposed GA is demonstrated by a complex numerical example in which performance of the proposed GA is compared with those of three commercial optimization solvers.
Highly parallel consistent labeling algorithm suitable for optoelectronic implementation.
Marsden, G C; Kiamilev, F; Esener, S; Lee, S H
1991-01-10
Constraint satisfaction problems require a search through a large set of possibilities. Consistent labeling is a method by which search spaces can be drastically reduced. We present a highly parallel consistent labeling algorithm, which achieves strong k-consistency for any value k and which can include higher-order constraints. The algorithm uses vector outer product, matrix summation, and matrix intersection operations. These operations require local computation with global communication and, therefore, are well suited to an optoelectronic implementation.
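The consistency filtering described above can be illustrated, in serial form, as support pruning over boolean compatibility matrices: a label survives only if it has a compatible partner in every neighbouring domain. The data structures below are our own illustration (pairwise 2-consistency only), not the parallel k-consistency algorithm of the paper:

```python
def arc_consistency(domains, R):
    """Prune labels that lack support under pairwise constraints.

    domains: dict var -> list of booleans (allowed labels).
    R: dict (i, j) -> compatibility matrix, R[(i, j)][a][b] true iff
    label a for i is compatible with label b for j; both directions
    (i, j) and (j, i) should be supplied. The inner any(...) plays
    the role of the matrix summation / intersection operations.
    """
    changed = True
    while changed:
        changed = False
        for (i, j), M in R.items():
            for a, allowed in enumerate(domains[i]):
                if allowed and not any(M[a][b] and domains[j][b]
                                       for b in range(len(domains[j]))):
                    domains[i][a] = False
                    changed = True
    return domains
```

In the matrix view, each pass is a boolean matrix-vector product followed by an intersection with the current domain, which is what makes the method amenable to parallel (and optoelectronic) hardware.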
Velocity and energy distributions in microcanonical ensembles of hard spheres
NASA Astrophysics Data System (ADS)
Scalas, Enrico; Gabriel, Adrian T.; Martin, Edgar; Germano, Guido
2015-08-01
In a microcanonical ensemble (constant NVE, hard reflecting walls) and in a molecular dynamics ensemble (constant NVEPG, periodic boundary conditions) with a number N of smooth elastic hard spheres in a d-dimensional volume V having a total energy E, a total momentum P, and an overall center of mass position G, the individual velocity components, velocity moduli, and energies have transformed beta distributions with different arguments and shape parameters depending on d, N, E, the boundary conditions, and possible symmetries in the initial conditions. This can be shown marginalizing the joint distribution of individual energies, which is a symmetric Dirichlet distribution. In the thermodynamic limit the beta distributions converge to gamma distributions with different arguments and shape or scale parameters, corresponding respectively to the Gaussian, i.e., Maxwell-Boltzmann, Maxwell, and Boltzmann or Boltzmann-Gibbs distribution. These analytical results agree with molecular dynamics and Monte Carlo simulations with different numbers of hard disks or spheres and hard reflecting walls or periodic boundary conditions. The agreement is perfect with our Monte Carlo algorithm, which acts only on velocities independently of positions with the collision versor sampled uniformly on a unit half sphere in d dimensions, while slight deviations appear with our molecular dynamics simulations for the smallest values of N.
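The Monte Carlo move described above acts on velocities only, with the collision versor sampled uniformly on a half sphere. A minimal reconstruction of one such collision for a pair of equal-mass smooth elastic spheres (our own sketch under those assumptions, not the authors' code):

```python
import math
import random

def collide(v1, v2):
    """One stochastic elastic collision between equal-mass spheres.

    The collision versor sigma is sampled uniformly on the unit
    sphere (a Gaussian vector, normalized) and flipped onto the half
    sphere with (v1 - v2) . sigma >= 0. Exchanging the relative
    velocity component along sigma conserves total momentum and
    total kinetic energy exactly.
    """
    d = len(v1)
    sigma = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(s * s for s in sigma))
    sigma = [s / norm for s in sigma]
    rel = sum((a - b) * s for a, b, s in zip(v1, v2, sigma))
    if rel < 0.0:  # flip onto the approaching half sphere
        sigma = [-s for s in sigma]
        rel = -rel
    v1p = [a - rel * s for a, s in zip(v1, sigma)]
    v2p = [b + rel * s for b, s in zip(v2, sigma)]
    return v1p, v2p
```

Iterating this move over randomly chosen pairs drives the velocities toward the beta (finite N) and ultimately Maxwell-Boltzmann (large N) distributions discussed in the abstract.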
Credit Constraints for Higher Education
ERIC Educational Resources Information Center
Solis, Alex
2012-01-01
This paper exploits a natural experiment that produces exogenous variation in credit access to determine the effect on college enrollment. The paper assesses how important credit constraints are in explaining the gap in college enrollment by family income, and what the gap would be if credit constraints were eliminated. Progress in college and dropout…
Fixed Costs and Hours Constraints
ERIC Educational Resources Information Center
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
A Procedure for Empirical Initialization of Adaptive Testing Algorithms.
ERIC Educational Resources Information Center
van der Linden, Wim J.
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids
Donev, A; Alder, B J; Garcia, A L
2008-02-26
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
Exploring fragment spaces under multiple physicochemical constraints
NASA Astrophysics Data System (ADS)
Pärn, Juri; Degen, Jörg; Rarey, Matthias
2007-06-01
We present a new algorithm for the enumeration of chemical fragment spaces under constraints. Fragment spaces consist of a set of molecular fragments and a set of rules that specifies how fragments can be combined. Although fragment spaces typically cover an infinite number of molecules, they can be enumerated provided that a physicochemical profile of the requested compounds is given. By using min-max ranges for a number of corresponding properties, our algorithm is able to enumerate all molecules which obey these properties. To speed up the calculation, the given ranges are used directly during the build-up process to guide the selection of fragments. Furthermore, a topology-based fragment filter is used to skip most of the redundant fragment combinations. We applied the algorithm to 40 different target classes. For each of these, we generated tailored fragment spaces from sets of known inhibitors and additionally derived ranges for several physicochemical properties. We characterized the target-specific fragment spaces and were able to enumerate the complete chemical subspaces for most of the targets.
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Why Are Drugs So Hard to Quit?
MedlinePlus Videos and Cool Tools
... Quitting drugs is hard because addiction is a brain disease. Your brain is like a control tower that sends out ... and choices. Addiction changes the signals in your brain and makes it hard to feel OK without ...
More Older Women Hitting the Bottle Hard
... medlineplus.gov/news/fullstory_164321.html More Older Women Hitting the Bottle Hard Study found dramatic jump ... March 28, 2017 (HealthDay News) -- More older American women than ever are drinking -- and drinking hard, a ...
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.; Boris, J. P.; Oran, E. S.; Chandrasekhar, I.; Nagumo, M.
1989-12-01
We present a new modification of the SHAKE algorithm, MSHAKE, that maintains fixed distances in molecular dynamics simulations of polyatomic molecules. The MSHAKE algorithm, which is applied by modifying the leapfrog algorithm to include forces of constraint, computes an initial estimate of constraint forces, then iteratively corrects the constraint forces required to maintain the fixed distances. Thus MSHAKE should always converge more rapidly than SHAKE. Further, the explicit determination of the constraint forces at each timestep makes MSHAKE convenient for use in molecular dynamics simulations where bond stress is a significant dynamical quantity.
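The iterative correction loop that SHAKE-family algorithms use can be sketched for a single distance constraint. This is a generic positional SHAKE-style step for illustration, not the MSHAKE force-tracking variant the abstract introduces (which carries the constraint forces explicitly inside the leapfrog integrator):

```python
def shake_pair(r1, r2, d0, invm1=1.0, invm2=1.0, tol=1e-10, max_iter=50):
    """Iteratively correct two positions to restore |r1 - r2| = d0.

    invm1, invm2 are inverse masses; corrections are applied along
    the current bond vector with a Lagrange-multiplier-style factor
    until the squared-distance error falls below tol.
    """
    r1, r2 = list(r1), list(r2)
    for _ in range(max_iter):
        diff = [a - b for a, b in zip(r1, r2)]
        dist2 = sum(x * x for x in diff)
        err = dist2 - d0 * d0
        if abs(err) < tol:
            break
        g = err / (2.0 * dist2 * (invm1 + invm2))
        r1 = [a - invm1 * g * x for a, x in zip(r1, diff)]
        r2 = [b + invm2 * g * x for b, x in zip(r2, diff)]
    return r1, r2
```

With many coupled constraints, such corrections are swept over all bonds repeatedly until every one converges; MSHAKE's explicit estimate of the constraint forces is what lets it start each sweep closer to the solution than plain SHAKE.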
Applicability of Dynamic Facilitation Theory to Binary Hard Disk Systems
NASA Astrophysics Data System (ADS)
Isobe, Masaharu; Keys, Aaron S.; Chandler, David; Garrahan, Juan P.
2016-09-01
We numerically investigate the applicability of dynamic facilitation (DF) theory for glass-forming binary hard disk systems where supercompression is controlled by pressure. By using novel efficient algorithms for hard disks, we are able to generate equilibrium supercompressed states in an additive nonequimolar binary mixture, where microcrystallization and size segregation do not emerge at high average packing fractions. Above an onset pressure where collective heterogeneous relaxation sets in, we find that relaxation times are well described by a "parabolic law" with pressure. We identify excitations, or soft spots, that give rise to structural relaxation and find that they are spatially localized, their average concentration decays exponentially with pressure, and their associated energy scale is logarithmic in the excitation size. These observations are consistent with the predictions of DF generalized to systems controlled by pressure rather than temperature.
NP-hardness of decoding quantum error-correction codes
Hsieh, Min-Hsiu; Le Gall, Francois
2011-05-15
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Parameterized Complexity of k-Anonymity: Hardness and Tractability
NASA Astrophysics Data System (ADS)
Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri
The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has been recently proposed is the k-anonymity, where the rows of a table are partitioned in clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
Applicability of Dynamic Facilitation Theory to Binary Hard Disk Systems.
Isobe, Masaharu; Keys, Aaron S; Chandler, David; Garrahan, Juan P
2016-09-30
We numerically investigate the applicability of dynamic facilitation (DF) theory for glass-forming binary hard disk systems where supercompression is controlled by pressure. By using novel efficient algorithms for hard disks, we are able to generate equilibrium supercompressed states in an additive nonequimolar binary mixture, where microcrystallization and size segregation do not emerge at high average packing fractions. Above an onset pressure where collective heterogeneous relaxation sets in, we find that relaxation times are well described by a "parabolic law" with pressure. We identify excitations, or soft spots, that give rise to structural relaxation and find that they are spatially localized, their average concentration decays exponentially with pressure, and their associated energy scale is logarithmic in the excitation size. These observations are consistent with the predictions of DF generalized to systems controlled by pressure rather than temperature.
Warren G. Harding and the Press.
ERIC Educational Resources Information Center
Whitaker, W. Richard
There are many parallels between the Richard M. Nixon administration and Warren G. Harding's term: both Republicans, both touched by scandal, and both having a unique relationship with the press. But in Harding's case the relationship was a positive one. One of Harding's first official acts as president was to restore the regular White House news…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hard hats. 56.15002 Section 56.15002 Mineral... HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Personal Protection § 56.15002 Hard hats. All persons shall wear suitable hard hats when in or around a mine or plant where falling...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hard hats. 57.15002 Section 57.15002 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Underground § 57.15002 Hard hats. All persons shall wear suitable hard hats when in or around a mine or...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Hard seed. 201.21 Section 201.21 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Agricultural Seeds § 201.21 Hard seed. The label shall show the percentage of hard...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Hard seed. 201.30 Section 201.30 Agriculture..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT REGULATIONS Labeling Vegetable Seeds § 201.30 Hard seed. The label shall show the percentage of hard seed,...
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2012 CFR
2012-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2013 CFR
2013-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2014 CFR
2014-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Code of Federal Regulations, 2010 CFR
2010-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wan-Lin
2003-01-01
This paper introduces JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint reasoner with a runtime software environment. Attachments in JNET are constraints over arbitrary Java objects, which are defined using Java code, at runtime, with no changes to the JNET source code.
Evolutionary constraints or opportunities?
Sharov, Alexei A.
2014-01-01
Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term “constraint” has negative connotations, I use the term “regulated variation” to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch “on” or “off” preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). PMID:24769155
Infrared Kuiper Belt Constraints
Teplitz, V.L.; Stern, S.A.; Anderson, J.D.; Rosenbaum, D.; Scalise, R.J.; Wentzler, P.
1999-05-01
We compute the temperature and IR signal of particles of radius a and albedo α at heliocentric distance R, taking into account the emissivity effect, and give an interpolating formula for the result. We compare with analyses of COBE DIRBE data by others (including recent detection of the cosmic IR background) for various values of heliocentric distance R, particle radius a, and particle albedo α. We then apply these results to a recently developed picture of the Kuiper belt as a two-sector disk with a nearby, low-density sector (40 < R < 50-90 AU) and a more distant sector with a higher density. We consider the case in which passage through a molecular cloud essentially cleans the solar system of dust. We apply a simple model of dust production by comet collisions and removal by the Poynting-Robertson effect to find limits on total and dust masses in the near and far sectors as a function of time since such a passage. Finally, we compare Kuiper belt IR spectra for various parameter values. Results of this work include: (1) numerical limits on Kuiper belt dust as a function of (R, a, α) on the basis of four alternative sets of constraints, including those following from the recent discovery of the cosmic IR background by Hauser et al.; (2) application to the two-sector Kuiper belt model, finding mass limits and spectrum shape for different values of relevant parameters, including dependence on time elapsed since the last passage through a molecular cloud cleared the outer solar system of dust; and (3) potential use of spectral information to determine time since the last passage of the Sun through a giant molecular cloud. © 1999 The American Astronomical Society
NASA Astrophysics Data System (ADS)
Gusev, M. I.
2016-10-01
We study the penalty function type methods for computing the reachable sets of nonlinear control systems with state constraints. The state constraints are given by a finite system of smooth inequalities. The proposed methods are based on removing the state constraints by replacing the original system with an auxiliary system without constraints. This auxiliary system is obtained by modifying the set of velocities of the original system around the boundary of constraints. The right-hand side of the system depends on a penalty parameter. We prove that the reachable sets of the auxiliary system approximate in the Hausdorff metric the reachable set of the original system with state constraints as the penalty parameter tends to zero (infinity) and give the estimates of the rate of convergence. The numerical algorithms for computing the reachable sets, based on Pontryagin's maximum principle, are also considered.
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
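A minimal sketch of the detect-then-enforce step, under simplified assumptions (1D element positions, alignment candidates detected by a plain distance threshold, and constraints enforced softly as quadratic terms minimized by coordinate descent; the paper's detection and quadratic-programming formulation are richer):

```python
def detect_pairs(a, tol=5.0):
    # detect candidate alignment constraints: elements whose positions nearly agree
    pairs = []
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if abs(a[i] - a[j]) < tol:
                pairs.append((i, j))
    return pairs

def regularize(a, pairs, w=100.0, sweeps=500):
    # minimize sum_i (x_i - a_i)^2 + w * sum_(i,j) (x_i - x_j)^2
    # by Gauss-Seidel coordinate descent (closed-form per-coordinate update)
    x = list(a)
    nbrs = {i: [] for i in range(len(a))}
    for i, j in pairs:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(sweeps):
        for i in range(len(x)):
            s = sum(x[j] for j in nbrs[i])
            x[i] = (a[i] + w * s) / (1.0 + w * len(nbrs[i]))
    return x
```

With a large weight w, nearly-aligned elements are snapped together while unconstrained elements stay at their input positions.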
The Hard Problem of Cooperation
Eriksson, Kimmo; Strimling, Pontus
2012-01-01
Based on individual variation in cooperative inclinations, we define the “hard problem of cooperation” as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior. PMID:22792282
Radiation Hardness Assurance (RHA) Guideline
NASA Technical Reports Server (NTRS)
Campola, Michael J.
2016-01-01
Radiation Hardness Assurance (RHA) consists of all activities undertaken to ensure that the electronics and materials of a space system perform to their design specifications after exposure to the mission space environment. The subset of interest for NEPP and the REAG is EEE parts. It is important to note that all of these activities form a feedback loop and require constant iteration and updating throughout the mission life. More detail on applicable test data for use on parts can be found in the reference materials.
Radiation hard electronics for LHC
NASA Astrophysics Data System (ADS)
Raymond, M.; Millmore, M.; Hall, G.; Sachdeva, R.; French, M.; Nygård, E.; Yoshioka, K.
1995-02-01
A CMOS front end electronics chain is being developed by the RD20 collaboration for microstrip detector readout at LHC. It is based on a preamplifier and CR-RC filter, analogue pipeline and an analogue signal processor. Amplifiers and transistor test structures have been constructed and evaluated in detail using a Harris 1.2 μm radiation hardened CMOS process. Progress with larger scale elements, including 32 channel front end chips, is described. A radiation hard 128 channel chip, with a 40 MHz analogue multiplexer, is to be submitted for fabrication in July 1994, which will form the basis of the readout of the tracking system of the CMS experiment.
Hard Scattering Studies at Jlab
Harutyun Avagyan; Peter Bosted; Volker Burkert; Latifa Elouadrhiri
2005-09-01
We present current activities and future prospects for studies of hard scattering processes using the CLAS detector and the CEBAF polarized electron beam. Kinematic dependences of single and double spin asymmetries have been measured over a wide kinematic range at CLAS with polarized NH3 and unpolarized liquid hydrogen targets. The data are consistent with factorization, and the observed target and beam asymmetries are in good agreement with measurements performed at higher energies, suggesting that the high-energy description of the semi-inclusive DIS process can be extended to the moderate energies of the JLab measurements.
Thermopile detector radiation hard readout
NASA Astrophysics Data System (ADS)
Gaalema, Stephen; Van Duyne, Stephen; Gates, James L.; Foote, Marc C.
2010-08-01
The NASA Jupiter Europa Orbiter (JEO) conceptual payload contains a thermal instrument with six different spectral bands ranging from 8 μm to 100 μm. The thermal instrument is based on multiple linear arrays of thermopile detectors that are intrinsically radiation hard; however, the thermopile CMOS readout needs to be hardened to tolerate the radiation sources of the JEO mission. Black Forest Engineering is developing a thermopile readout that tolerates these radiation sources. The thermal instrument and the ROIC process/design techniques used to meet the JEO mission requirements are described.
Russian Doll Search for solving Constraint Optimization problems
Verfaillie, G.; Lemaitre, M.
1996-12-31
Although the Constraint Satisfaction framework has been extended to deal with Constraint Optimization problems, optimization turns out to be far more complex than satisfaction. One of the causes of the inefficiency of complete tree search methods, such as Depth-First Branch and Bound, lies in the poor quality of the lower bound on the global valuation of a partial assignment, even when Forward Checking techniques are used. In this paper, we introduce the Russian Doll Search algorithm, which replaces one search by n successive searches on nested subproblems (n being the number of problem variables), records the results of each search, and uses them later, when solving larger subproblems, to improve the lower bound on the global valuation of any partial assignment. On small random problems and on large real scheduling problems, this algorithm yields surprisingly good results, which improve further as the problems become more constrained and as the bandwidth of the variable ordering used diminishes.
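The nested-search scheme can be sketched for a tiny weighted CSP with nonnegative unary and binary costs (an illustrative reconstruction, not the authors' code; `brute_force` is included only to check the result):

```python
from itertools import product

INF = float("inf")

def rds_solve(domains, unary, binary):
    """Russian Doll Search: solve the nested subproblems on variables
    i..n-1 for i = n-1 down to 0, recording each optimum rds[i] and
    reusing it as a lower bound on the cost of the unassigned suffix."""
    n = len(domains)
    rds = [0] * (n + 1)   # rds[i] = optimal cost of subproblem on vars i..n-1

    def branch(i, first, assignment, cost, best):
        if i == n:
            return cost
        for v in domains[i]:
            c = cost + unary[i].get(v, 0)
            for j in range(first, i):
                c += binary.get((j, i), {}).get((assignment[j], v), 0)
            if c + rds[i + 1] < best:   # russian-doll lower bound (costs >= 0)
                assignment[i] = v
                best = min(best, branch(i + 1, first, assignment, c, best))
        return best

    for first in range(n - 1, -1, -1):  # smallest subproblem first
        rds[first] = branch(first, first, [None] * n, 0, INF)
    return rds[0]

def brute_force(domains, unary, binary):
    # exhaustive check of all complete assignments
    best = INF
    for assign in product(*domains):
        c = sum(u.get(v, 0) for u, v in zip(unary, assign))
        for (j, i), table in binary.items():
            c += table.get((assign[j], assign[i]), 0)
        best = min(best, c)
    return best
```

The recorded optima tighten the lower bound `c + rds[i + 1]`, which is exactly the mechanism the abstract credits for the speedup on constrained problems.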
Hierarchical motion estimation with smoothness constraints and postprocessing
NASA Astrophysics Data System (ADS)
Xie, Kan; Van Eycken, Luc; Oosterlinck, Andre J.
1996-01-01
Acquiring accurate and reliable motion parameters from an image sequence is a difficult problem for many applications in image processing, image recognition, and video coding, especially when scenes involve moving objects of various shapes and sizes as well as very fast and complicated motion. In this paper, an improved pel-based motion estimation (ME) algorithm with smoothness constraints is presented, based on an investigation and comparison of different existing pel-based ME (optical flow) algorithms. Then, in order to cope with various moving objects and their complex motion, a hierarchical ME algorithm with smoothness constraints and postprocessing is proposed. The experimental results show that the motion parameters obtained by the hierarchical ME algorithm are quite credible and seem to be close to the real physical motion fields when the luminance intensity changes are due to the motion of objects. The hierarchical ME algorithm still provides approximate and smooth vector fields even for scenes that involve some motion-irrelevant intensity changes or blurring caused by violent motion.
Automated formulation of constraint satisfaction problems
Sabin, M.; Freuder, E.C.
1996-12-31
A wide variety of problems can be represented as constraint satisfaction problems (CSPs), and once so represented can be solved by a variety of effective algorithms. However, as with other powerful, general AI problem-solving methods, we must still address the task of moving from a natural statement of the problem to a formulation of the problem as a CSP. This research addresses the task of automating this problem formulation process, using logic puzzles as a testbed. Beyond problem formulation per se, we address the issues of effective problem formulation, i.e., finding formulations that support more efficient solution, as well as incremental problem formulations that support reasoning from partial information and are congenial to human thought processes.
Computing group cardinality constraint solutions for logistic regression problems.
Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M
2017-01-01
We derive an algorithm to directly solve logistic regression subject to a group cardinality (sparsity) constraint and use it to classify intra-subject MRI sequences (e.g. cine MRIs) of healthy versus diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. To avoid weighting features, we propose to directly solve the group cardinality constrained logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining the series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcing the group sparsity constraint. The minimum of the smooth and convex logistic regression problem is determined via gradient descent, while we derive a closed-form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients who received reconstructive surgery for Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significantly higher classification accuracy than alternative solutions within this model, i.e., ones relaxing group cardinality constraints.
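The closed-form sparse approximation step, keeping the k groups with largest Euclidean norm and zeroing the rest, can be sketched as follows (a hedged illustration of group hard-thresholding; group names and data are hypothetical):

```python
import math

def group_hard_threshold(w, groups, k):
    """Project a weight vector onto the group-cardinality set: keep the
    k groups with largest Euclidean norm and zero all other groups."""
    norms = {g: math.sqrt(sum(w[i] ** 2 for i in idx))
             for g, idx in groups.items()}
    keep = set(sorted(norms, key=norms.get, reverse=True)[:k])
    out = list(w)
    for g, idx in groups.items():
        if g not in keep:
            for i in idx:
                out[i] = 0.0
    return out
```

Within a Penalty Decomposition loop, a step of this form alternates with a gradient step on the smooth logistic loss.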
Applying Motion Constraints Based on Test Data
NASA Technical Reports Server (NTRS)
Burlone, Michael
2014-01-01
MSC ADAMS is simulation software used to analyze multibody dynamics. Using user subroutines, it is possible to apply motion constraints to the rigid bodies so that they match the motion profile collected from test data. This presentation describes the process of taking test data and passing it to ADAMS using user subroutines, and uses the Morpheus free-flight 4 test as an example of motion data used for this purpose. Morpheus is a prototype lander vehicle built by NASA that serves as a test bed for various experimental technologies (see backup slides for details). MSC ADAMS is used to play back telemetry data (vehicle orientation and position) from each test as the inputs to a 6-DoF general motion constraint (details in backup slides). The playback simulations allow engineers to examine and analyze the flight trajectory as well as observe vehicle motion from any angle and at any playback speed. This facilitates the development of robust and stable control algorithms, increasing reliability and reducing development costs of this developmental engine. The simulation also incorporates a 3D model of the artificial hazard field, allowing engineers to visualize and measure the performance of the developmental autonomous landing and hazard avoidance technology. ADAMS is a multibody dynamics solver: it uses forces, constraints, and mass properties to numerically integrate the equations of motion. The ADAMS solver asks the motion subroutine for position, velocity, and acceleration values at various time steps, and those values must be continuous over the whole time domain. Each degree of freedom in the telemetry data can be examined separately; however, linear interpolation of the telemetry data is invalid, since it introduces discontinuities in velocity and acceleration.
Hardness correlation for uranium and its alloys
Humphreys, D L; Romig, Jr, A D
1983-03-01
The hardness of 16 different uranium-titanium (U-Ti) alloys was measured on six different hardness scales (Rockwell A, B, C, and D, Knoop, and Vickers). The alloys contained between 0.75 and 2.0 wt % Ti. All of the alloys were solutionized (850 °C, 1 h) and ice-water quenched to produce a supersaturated martensitic phase. A range of hardnesses was obtained by aging the samples for various times and temperatures. The correlation of the various hardness scales was shown to be virtually identical to the hardness-scale correlation for steels. For more accurate conversion from one hardness scale to another, least-squares curve fits were determined for the various hardness-scale correlations. 34 figures, 5 tables.
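A least-squares fit of one hardness scale against another can be sketched as follows; the closed-form normal equations for a straight-line fit are shown, and the data pairs here are hypothetical, not the report's measurements:

```python
def least_squares_fit(x, y):
    """Closed-form simple linear regression: slope and intercept that
    minimize the sum of squared residuals of y against x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

The report's scale-to-scale conversions would use measured pairs (e.g. Rockwell C vs. Vickers readings of the same specimen) in place of the synthetic data below; higher-order polynomial fits follow the same least-squares pattern.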
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
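A minimal artificial bee colony loop, shown on an unconstrained toy objective rather than the paper's constrained possibilistic portfolio model (the objective, bounds, and parameter choices are illustrative, and the fitness transform assumes a nonnegative objective):

```python
import random

def abc_minimize(f, lb, ub, n_sources=10, limit=20, iters=200, seed=1):
    """Bare-bones artificial bee colony: employed, onlooker, and scout phases."""
    rng = random.Random(seed)
    dim = len(lb)

    def rand_source():
        return [rng.uniform(lb[d], ub[d]) for d in range(dim)]

    xs = [rand_source() for _ in range(n_sources)]
    fs = [f(x) for x in xs]
    trials = [0] * n_sources
    best_x, best_f = min(zip(xs, fs), key=lambda p: p[1])

    def neighbor(i):
        # perturb one coordinate toward/away from a random partner source
        k = rng.randrange(n_sources)
        while k == i:
            k = rng.randrange(n_sources)
        d = rng.randrange(dim)
        x = list(xs[i])
        x[d] += rng.uniform(-1.0, 1.0) * (xs[i][d] - xs[k][d])
        x[d] = min(max(x[d], lb[d]), ub[d])   # respect box constraints
        return x

    def try_improve(i):
        nonlocal best_x, best_f
        x = neighbor(i)
        fx = f(x)
        if fx < fs[i]:
            xs[i], fs[i], trials[i] = x, fx, 0
            if fx < best_f:
                best_x, best_f = x, fx
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):            # employed bees
            try_improve(i)
        total = sum(1.0 / (1.0 + v) for v in fs)
        for _ in range(n_sources):            # onlookers: fitness-proportional choice
            r, acc, i = rng.uniform(0.0, total), 0.0, 0
            for j, v in enumerate(fs):
                acc += 1.0 / (1.0 + v)
                if acc >= r:
                    i = j
                    break
            try_improve(i)
        for i in range(n_sources):            # scouts abandon stale sources
            if trials[i] >= limit:
                xs[i] = rand_source()
                fs[i] = f(xs[i])
                trials[i] = 0
    return best_x, best_f
```

The paper's modified ABC additionally handles cardinality and quantity constraints; a repair or penalty step for those would slot into `neighbor` or `f`.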
Self-accelerating massive gravity: Hidden constraints and characteristics
NASA Astrophysics Data System (ADS)
Motloch, Pavel; Hu, Wayne; Motohashi, Hayato
2016-05-01
Self-accelerating backgrounds in massive gravity provide an arena to explore the Cauchy problem for derivatively coupled fields that obey complex constraints which reduce the phase space degrees of freedom. We present here an algorithm based on the Kronecker form of a matrix pencil that finds all hidden constraints, for example those associated with derivatives of the equations of motion, and characteristic curves for any 1 +1 dimensional system of linear partial differential equations. With the Regge-Wheeler-Zerilli decomposition of metric perturbations into angular momentum and parity states, this technique applies to fully 3 +1 dimensional perturbations of massive gravity around any spherically symmetric self-accelerating background. Five spin modes of the massive graviton propagate once the constraints are imposed: two spin-2 modes with luminal characteristics present in the massless theory as well as two spin-1 modes and one spin-0 mode. Although the new modes all possess the same—typically spacelike—characteristic curves, the spin-1 modes are parabolic while the spin-0 modes are hyperbolic. The joint system, which remains coupled by nonderivative terms, cannot be solved as a simple Cauchy problem from a single noncharacteristic surface. We also illustrate the generality of the algorithm with other cases where derivative constraints reduce the number of propagating degrees of freedom or order of the equations.
Hard and Soft Safety Verifications
NASA Technical Reports Server (NTRS)
Wetherholt, Jon; Anderson, Brenda
2012-01-01
The purpose of this paper is to examine the differences between, and the effects of, hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is a datum that demonstrates how a safety control is enacted. An example is relief valve testing. A soft safety verification is something usually described as nice to have, but not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings from Shuttle flight STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. Generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examination of the casings and nozzle for erosion or wear. Loss of the SRBs and associated data did not delay the launch of the next Shuttle flight.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-08-30
We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-02-28
We present in a unifying way the main components of three examples of asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel event-driven algorithm for Direct Simulation Monte Carlo (DSMC). Finally, we describe how to combine MD with DSMC in an event-driven framework, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Constraint-based stereo matching
NASA Technical Reports Server (NTRS)
Kuan, D. T.
1987-01-01
The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
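The initial-match step can be sketched for a rectified stereo pair, where the epipolar constraint reduces to matching along scanlines (edge segments are abbreviated here to hypothetical (row, orientation) tuples; the paper's attribute set and the later local-consistency stage are richer):

```python
def initial_matches(left, right, row_tol=1.0, ori_tol=10.0):
    """Candidate correspondences allowed by the epipolar constraint
    (rectified pair: matches share a scanline) plus a similar-orientation
    test on the edge segments."""
    matches = []
    for i, (row_l, ori_l) in enumerate(left):
        for j, (row_r, ori_r) in enumerate(right):
            if abs(row_l - row_r) <= row_tol and abs(ori_l - ori_r) <= ori_tol:
                matches.append((i, j))
    return matches
```

These candidate pairs are the search space that the geometric constraints (continuity, junction structure, neighborhood relations) then prune toward a locally consistent set.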
Weighted constraints in generative linguistics.
Pater, Joe
2009-08-01
Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) or as ranks (OT). Weighted constraints have advantages for the construction of accounts of language learning and other cognitive processes, partly because they allow for the adaptation of connectionist and statistical models. HG has been little studied in generative linguistics, however, largely due to influential claims that weighted constraints make incorrect predictions about the typology of natural languages, predictions that are not shared by the more popular OT. This paper makes the case that HG is in fact a promising framework for typological research, and reviews and extends the existing arguments for weighted over ranked constraints.
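The difference between weighted (HG) and ranked (OT) evaluation can be shown on a two-constraint example (constraint indices, weights, and violation counts are hypothetical): under weights, several violations of a weak constraint can outweigh one violation of a strong one, a cumulative "gang" effect that strict ranking excludes.

```python
def hg_winner(candidates, weights):
    # Harmonic Grammar: lowest weighted sum of constraint violations wins
    return min(candidates,
               key=lambda c: sum(w * v for w, v in zip(weights, candidates[c])))

def ot_winner(candidates, ranking):
    # Optimality Theory: compare violation vectors constraint by constraint,
    # in ranking order (lexicographic comparison)
    return min(candidates, key=lambda c: [candidates[c][i] for i in ranking])
```

With candidates A = [1, 0] and B = [0, 3] (violations of constraints C0 and C1), OT with C0 ranked highest picks B, while HG with weights [2, 1] picks A because B's three C1 violations accumulate.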
NASA Astrophysics Data System (ADS)
Gras, Vincent; Luong, Michel; Amadon, Alexis; Boulant, Nicolas
2015-12-01
In Magnetic Resonance Imaging at ultra-high field, kT-points radiofrequency pulses combined with parallel transmission are a promising technique to mitigate the B1 field inhomogeneity in 3D imaging applications. The optimization of the corresponding k-space trajectory for its slice-selective counterpart, i.e. the spokes method, has been shown in various studies to be very valuable but also dependent on the hardware and specific absorption rate constraints. Due to the larger number of degrees of freedom than for spokes excitations, joint design techniques based on the fine discretization (gridding) of the parameter space become hardly tractable for kT-points pulses. In this article, we thus investigate the simultaneous optimization of the 3D blipped k-space trajectory and of the kT-points RF pulses, using a magnitude least squares cost-function, with explicit constraints and in the large flip angle regime. A second-order active-set algorithm is employed due to its demonstrated success and robustness in similar problems. An analysis of global optimality and of the structure of the returned trajectories is proposed. The improvement provided by the k-space trajectory optimization is validated experimentally by measuring the flip angle on a spherical water phantom at 7T and via Quantum Process Tomography.
Thermodynamic constraints for biochemical networks.
Beard, Daniel A; Babson, Eric; Curtis, Edward; Qian, Hong
2004-06-07
The constraint-based approach to analysis of biochemical systems has emerged as a useful tool for rational metabolic engineering. Flux balance analysis (FBA) is based on the constraint of mass conservation; energy balance analysis (EBA) is based on non-equilibrium thermodynamics. The power of these approaches lies in the fact that the constraints are based on physical laws and do not make use of unknown parameters. Here, we show that the network structure (i.e. the stoichiometric matrix) alone provides a system of constraints on the fluxes in a biochemical network which are feasible according to both mass balance and the laws of thermodynamics. A realistic example shows that these constraints can be sufficient for deriving unambiguous, biologically meaningful results. The thermodynamic constraints are obtained by comparing the sign pattern of the flux vector to the sign patterns of the cycles of the internal cycle space, via a connection between stoichiometric network theory (SNT) and the mathematical theory of oriented matroids.
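One way to state the thermodynamic (loop-law) part of the constraint is that a feasible flux distribution must not drive a net circulation around any internal cycle. A simplified sign-pattern check under this reading (a hedged sketch; the oriented-matroid treatment in the paper is more general):

```python
def violates_loop_law(flux, cycles):
    """Flag a flux vector as thermodynamically infeasible if its sign
    pattern conforms to some internal cycle (taken in either direction),
    i.e. every reaction active in the cycle carries flux in the cycle's
    direction, producing net circulation around the loop."""
    def conforms(v, c):
        active = [(f, x) for f, x in zip(v, c) if x != 0]
        return all(f * x > 0 for f, x in active)
    return any(conforms(flux, c) or conforms(flux, [-x for x in c])
               for c in cycles)
```

Here each cycle is given as a signed vector over reactions (e.g. [1, -1, 1] for a three-reaction internal loop); cycles themselves would come from the null space of the stoichiometric matrix restricted to internal reactions.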
On the hardness of offline multi-objective optimization.
Teytaud, Olivier
2007-01-01
It has been empirically established that multi-objective evolutionary algorithms do not scale well with the number of conflicting objectives. This paper shows that, under certain conditions, the convergence rate of all comparison-based multi-objective algorithms, measured in the Hausdorff distance, is not much better than the convergence rate of random search. The number of objectives must be very moderate, and the framework must satisfy the following assumptions: the objectives are conflicting, and lower-bounding the computational cost by the number of comparisons is a good model. Our conclusions are: (i) the number of conflicting objectives is relevant; (ii) comparison with random search is a relevant criterion for multi-objective optimization; (iii) optimization with more than 3 objectives is very hard. Furthermore, we provide some insight into cross-over operators.
Efficient Algorithms for Langevin and DPD Dynamics.
Goga, N; Rzepiela, A J; de Vries, A H; Marrink, S J; Berendsen, H J C
2012-10-09
In this article, we present several algorithms for stochastic dynamics, including Langevin dynamics and different variants of Dissipative Particle Dynamics (DPD), applicable to systems with or without constraints. The algorithms are based on the impulsive application of friction and noise, thus avoiding the computational complexity of algorithms that apply continuous friction and noise. Simulation results on thermostat strength and diffusion properties for ideal gas, coarse-grained (MARTINI) water, and constrained atomic (SPC/E) water systems are discussed. We show that the measured thermal relaxation rates agree well with theoretical predictions. The influence of various parameters on the diffusion coefficient is discussed.
Applying Soft Arc Consistency to Distributed Constraint Optimization Problems
NASA Astrophysics Data System (ADS)
Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi
The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic programming-based preprocessing technique that estimates the lower bound values of costs. However, there are opportunities for further improvements of efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom up manner. Using the directed soft AC, the global lower bound value of cost functions is passed up to the root node of the pseudo-tree. It also totally reduces values of binary cost functions. As a result, the original problem is converted to an equivalent problem. The equivalent problem is efficiently solved using common search algorithms. Therefore, no major modifications are necessary in search algorithms. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.
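The basic equivalence-preserving projection used by soft arc consistency, moving cost mass from a binary function into a unary one and then into a problem lower bound c0, can be sketched as follows (a hedged illustration on a two-value domain; the paper performs such projections in a directed, bottom-up pass over a pseudo-tree):

```python
def project(binary, unary_x, dom_x, dom_y):
    """Soft AC projection: for each value a of x, move the minimum cost
    min_b binary[(a, b)] into the unary function on x. Total costs of all
    complete assignments are preserved (equivalence-preserving)."""
    for a in dom_x:
        p = min(binary[(a, b)] for b in dom_y)
        unary_x[a] = unary_x.get(a, 0) + p
        for b in dom_y:
            binary[(a, b)] -= p

def project_unary(unary, dom):
    """Move the minimum unary cost into the zero-arity lower bound c0."""
    p = min(unary.get(a, 0) for a in dom)
    for a in dom:
        unary[a] = unary.get(a, 0) - p
    return p
```

After the two projections, the extracted constant c0 is a valid global lower bound, which is exactly the quantity passed up to the root of the pseudo-tree to guide search.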
Development of radiation hard scintillators
Markley, F.; Woods, D.; Pla-Dalmau, A.; Foster, G.; Blackburn, R.
1992-05-01
Substantial improvements have been made in the radiation hardness of plastic scintillators. Cylinders of scintillating materials 2.2 cm in diameter and 1 cm thick have been exposed to 10 Mrads of gamma rays at a dose rate of 1 Mrad/h in a nitrogen atmosphere. One of the formulations tested showed an immediate decrease in pulse height of only 4% and has remained stable for 12 days while annealing in air. By comparison a commercial PVT scintillator showed an immediate decrease of 58% and after 43 days of annealing in air it improved to a 14% loss. The formulated sample consisted of 70 parts by weight of Dow polystyrene, 30 pbw of pentaphenyltrimethyltrisiloxane (Dow Corning DC 705 oil), 2 pbw of p-terphenyl, 0.2 pbw of tetraphenylbutadiene, and 0.5 pbw of UVASIL299LM from Ferro.
NASA Technical Reports Server (NTRS)
Rothschild, R. E.
1981-01-01
Past hard X-ray and lower energy satellite instruments are reviewed, and it is shown that observations above 20 keV and up to hundreds of keV can provide much valuable information on the astrophysics of cosmic sources. To calculate possible sensitivities of future arrays, the efficiencies of a one-atmosphere-inch gas counter (the HEAO-1 A-2 xenon-filled HED3) and a 3 mm phoswich scintillator (the HEAO-1 A-4 NaI LED1) were compared. Above 15 keV, the scintillator was more efficient. In a similar comparison, the sensitivity of germanium detectors did not differ much from that of the scintillators, except at high energies, where the sensitivity would remain flat and not rise with loss of efficiency. Questions to be addressed concerning the physics of active galaxies and the diffuse radiation background, black holes, radio pulsars, X-ray pulsars, and galactic clusters are examined.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate "yes" or "no" decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
Decentralized Patrolling Under Constraints in Dynamic Environments.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-12-22
We investigate a decentralized patrolling problem for dynamic environments where information is distributed alongside threats. In this problem, agents obtain information at a location but may suffer attacks from the threat at that location. In a decentralized fashion, each agent patrols a designated area of the environment and interacts with a limited number of agents. The goal of these agents is to coordinate to gather as much information as possible while limiting the damage incurred. Hence, we model this class of problems as a transition-decoupled partially observable Markov decision process with health constraints. Furthermore, we propose scalable decentralized online algorithms based on Monte Carlo tree search and a factored belief vector. We empirically evaluate our algorithms on decentralized patrolling problems and benchmark them against a state-of-the-art online planning solver. The results show that our approach outperforms the state of the art by more than 56% on six-agent patrolling problems and can scale up to 24 agents in reasonable time.
ERIC Educational Resources Information Center
Végh, Ladislav
2016-01-01
The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…
A Hybrid Causal Search Algorithm for Latent Variable Models
Ogarrio, Juan Miguel; Spirtes, Peter; Ramsey, Joe
2017-01-01
Existing score-based causal model search algorithms such as GES (and a sped-up version, FGS) are asymptotically correct, fast, and reliable, but make the unrealistic assumption that the true causal graph does not contain any unmeasured confounders. There are several constraint-based causal search algorithms (e.g., RFCI, FCI, or FCI+) that are asymptotically correct without assuming that there are no unmeasured confounders, but often perform poorly on small samples. We describe a combined score- and constraint-based algorithm, GFCI, that we prove is asymptotically correct. On synthetic data, GFCI is only slightly slower than RFCI but more accurate than FCI, RFCI, and FCI+. PMID:28239434
Efficient multiple-way graph partitioning algorithms
Dasdan, A.; Aykanat, C.
1995-12-01
Graph partitioning deals with evenly dividing a graph into two or more parts such that the total weight of the edges interconnecting these parts, i.e., the cutsize, is minimized. Graph partitioning has important applications in VLSI layout, mapping, and sparse Gaussian elimination. Since the graph partitioning problem is NP-hard, we must resort to polynomial-time heuristics to obtain a good, hopefully near-optimal, solution. Kernighan and Lin (KL) proposed a 2-way partitioning algorithm. Fiduccia and Mattheyses (FM) introduced a faster version of the KL algorithm. Sanchis (FMS) generalized the FM algorithm to a multiple-way partitioning algorithm. Simulated Annealing (SA) is one of the most successful approaches that are not KL-based.
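A minimal sketch of the quantities these KL/FM-style heuristics manipulate (illustrative code of ours, not the authors' implementation): the cutsize of a 2-way partition, and the FM-style "gain" of moving a single vertex to the other part.

```python
def cutsize(edges, part):
    """Total weight of edges crossing the partition; `part` maps vertex -> 0 or 1."""
    return sum(w for u, v, w in edges if part[u] != part[v])

def gain(edges, part, vertex):
    """FM-style gain: reduction in cutsize if `vertex` moves to the other part."""
    external = sum(w for u, v, w in edges if vertex in (u, v) and part[u] != part[v])
    internal = sum(w for u, v, w in edges if vertex in (u, v) and part[u] == part[v])
    return external - internal

edges = [("a", "b", 1), ("b", "c", 2), ("c", "d", 1), ("a", "c", 1)]
part = {"a": 0, "b": 1, "c": 0, "d": 1}
```

Here moving "b" next to "a" and "c" has gain 3; a KL/FM pass repeatedly applies the best such move (subject to balance rules) and keeps the best partition seen along the way.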
A modified multilevel scheme for internal and external constraints in virtual environments.
Arikatla, Venkata S; De, Suvranu
2013-01-01
Multigrid algorithms are gaining popularity in virtual reality simulations as they have a theoretically optimal performance that scales linearly with the number of degrees of freedom of the simulation system. We propose a multilevel approach that combines the efficiency of the multigrid algorithms with the ability to resolve multi-body constraints during interactive simulations. First, we develop a single level modified block Gauss-Seidel (MBGS) smoother that can incorporate constraints. This is subsequently incorporated in a standard multigrid V-cycle with corrections for constraints to form the modified multigrid V-cycle (MMgV). Numerical results show that the solver can resolve constraints while achieving the theoretical performance of multigrid schemes.
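As a toy stand-in for the modified smoother described above (not the authors' MBGS), here is a plain Gauss-Seidel sweep in which a constrained degree of freedom is simply pinned to its prescribed value during each pass:

```python
import numpy as np

def gauss_seidel(A, b, x, sweeps, fixed=None):
    """Gauss-Seidel sweeps for A @ x = b; `fixed` pins selected DOFs (toy constraint handling)."""
    fixed = fixed or {}
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            if i in fixed:
                x[i] = fixed[i]               # constrained DOF: hold its prescribed value
                continue
            s = A[i] @ x - A[i, i] * x[i]     # contribution of the other unknowns
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b, np.zeros(3), sweeps=50)
```

In a multigrid setting this smoother would run on each level of the V-cycle; the paper's contribution is handling the constraints consistently across levels, which this single-level sketch does not attempt.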
Laser-induced autofluorescence of oral cavity hard tissues
NASA Astrophysics Data System (ADS)
Borisova, E. G.; Uzunov, Tz. T.; Avramov, L. A.
2007-03-01
In the current study, the autofluorescence of oral cavity hard tissues was investigated to obtain a more complete picture of their optical properties. A nitrogen laser (337.1 nm, 14 μJ, 10 Hz; ILGI-503, Russia) was used as the excitation source. In vitro spectra from enamel, dentine, cartilage, and the spongiosa and cortical parts of the periodontal bones were registered using a fiber-optic microspectrometer (PC2000, Ocean Optics Inc., USA). Gingival fluorescence was also recorded to compare its spectral properties with those of the hard oral tissues. The samples differ significantly from one another in their fluorescence properties. Signals from different collagen types and collagen cross-links are clearly observed, with maxima at 385, 430, and 480-490 nm. In dentine, only two maxima are observed, at 440 and 480 nm, also related to collagen structures. In gingiva and spongiosa samples, traces of hemoglobin were observed through its re-absorption at 545 and 575 nm, which distorts the fluorescence spectra detected from these anatomic sites. The results obtained in this study are intended to support the development of algorithms for the diagnosis and differentiation of tooth lesions and other disorders of oral cavity hard tissues, such as periodontitis and gingivitis.
Habitat Suitability Index Models: Hard clam
Mulholland, Rosemarie
1984-01-01
Two species of hard clams occur along the Atlantic and Gulf of Mexico coasts of North America: the southern hard clam, Mercenaria campechiensis Gmelin 1791, and the northern hard clam, Mercenaria mercenaria Linne 1758 (Wells 1957b). The latter species, also commonly known as the quahog, was formerly named Venus mercenaria. The two species are closely related, produce viable hybrids (Menzel and Menzel 1965), and may be a single species.
JPIC-Rad-Hard JPEG2000 Image Compression ASIC
NASA Astrophysics Data System (ADS)
Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov
2010-08-01
JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic, and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces; the JPEG2K-E IP core from Alma implements the compression algorithm [2]. Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180 nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.
Developmental constraints on behavioural flexibility.
Holekamp, Kay E; Swanson, Eli M; Van Meter, Page E
2013-05-19
We suggest that variation in mammalian behavioural flexibility not accounted for by current socioecological models may be explained in part by developmental constraints. From our own work, we provide examples of constraints affecting variation in behavioural flexibility, not only among individuals, but also among species and higher taxonomic units. We first implicate organizational maternal effects of androgens in shaping individual differences in aggressive behaviour emitted by female spotted hyaenas throughout the lifespan. We then compare carnivores and primates with respect to their locomotor and craniofacial adaptations. We inquire whether antagonistic selection pressures on the skull might impose differential functional constraints on evolvability of skulls and brains in these two orders, thus ultimately affecting behavioural flexibility in each group. We suggest that, even when carnivores and primates would theoretically benefit from the same adaptations with respect to behavioural flexibility, carnivores may nevertheless exhibit less behavioural flexibility than primates because of constraints imposed by past adaptations in the morphology of the limbs and skull. Phylogenetic analysis consistent with this idea suggests greater evolutionary lability in relative brain size within families of primates than carnivores. Thus, consideration of developmental constraints may help elucidate variation in mammalian behavioural flexibility.
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
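To make the mechanism concrete, here is a toy MM iteration of our own construction (not an example from the paper) for the posynomial f(x1, x2) = x1*x2 + 4/x1 + 4/x2 on x > 0. The geometric-arithmetic mean inequality majorizes the coupled term x1*x2 by separated quadratics anchored at the current iterate, so each MM update reduces to two independent one-dimensional minimizations:

```python
def f(x1, x2):
    return x1 * x2 + 4.0 / x1 + 4.0 / x2

def mm_step(x1, x2):
    # AM-GM surrogate around the anchor (x1, x2):
    #   u*v <= (x2/(2*x1))*u**2 + (x1/(2*x2))*v**2, with equality at u=x1, v=x2.
    # Each separated coordinate problem argmin_t a*t**2 + 4/t has the
    # closed-form minimizer t = (2/a)**(1/3).
    a1, a2 = x2 / (2.0 * x1), x1 / (2.0 * x2)
    return (2.0 / a1) ** (1.0 / 3.0), (2.0 / a2) ** (1.0 / 3.0)

x1, x2 = 3.0, 0.5
for _ in range(60):
    n1, n2 = mm_step(x1, x2)
    assert f(n1, n2) <= f(x1, x2) + 1e-12    # MM descent property
    x1, x2 = n1, n2
```

The iterates converge to the true minimizer x1 = x2 = 4**(1/3), illustrating how separating parameters turns a coupled problem into scalar updates.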
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats or hard caps... Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Distinctively colored hard hats, or hard caps... STANDARDS-UNDERGROUND COAL MINES Miscellaneous § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in...
Symmetry constraint for foreground extraction.
Fu, Huazhu; Cao, Xiaochun; Tu, Zhuowen; Lin, Dongdai
2014-05-01
Symmetry as an intrinsic shape property is often observed in natural objects. In this paper, we discuss how explicitly taking into account the symmetry constraint can enhance the quality of foreground object extraction. In our method, a symmetry foreground map is used to represent the symmetry structure of the image, which includes the symmetry matching magnitude and the foreground location prior. Then, the symmetry constraint model is built by introducing this symmetry structure into the graph-based segmentation function. Finally, the segmentation result is obtained via graph cuts. Our method encourages objects with symmetric parts to be consistently extracted. Moreover, our symmetry constraint model is applicable to weak symmetric objects under the part-based framework. Quantitative and qualitative experimental results on benchmark datasets demonstrate the advantages of our approach in extracting the foreground. Our method also shows improved results in segmenting objects with weak, complex symmetry properties.
Genetic map construction with constraints
Clark, D.A.; Rawlings, C.J.; Soursenot, S.
1994-12-31
A pilot program, CME, is described for generating a physical genetic map from hybridization fingerprinting data. CME is implemented in the parallel constraint logic programming language ElipSys. The features of constraint logic programming are used to enable the integration of preexisting mapping information (partial probe orders from cytogenetic maps and local physical maps) into the global map generation process, while parallelism enables the search space to be traversed more efficiently. CME was tested using data from chromosome 2 of Schizosaccharomyces pombe and was found to generate maps as well as (and sometimes better than) a more traditional method. This paper illustrates the practical benefits of using a symbolic logic programming language and shows that the features of constraint handling and parallel execution bring the development of practical systems based on AI programming technologies nearer to being a reality.
Magnetotail dynamics under isobaric constraints
NASA Technical Reports Server (NTRS)
Birn, Joachim; Schindler, Karl; Janicke, Lutz; Hesse, Michael
1994-01-01
Using linear theory and nonlinear MHD simulations, we investigate the resistive and ideal MHD stability of two-dimensional plasma configurations under the isobaric constraint dP/dt = 0, which in ideal MHD is equivalent to conserving the pressure function P = P(A), where A denotes the magnetic flux. This constraint is satisfied for incompressible modes, such as Alfven waves, and for systems undergoing energy losses. The linear stability analysis leads to a Schroedinger equation, which can be investigated by standard quantum mechanics procedures. We present an application to a typical stretched magnetotail configuration. For a one-dimensional sheet equilibrium characteristic properties of tearing instability are rediscovered. However, the maximum growth rate scales with the 1/7 power of the resistivity, which implies much faster growth than for the standard tearing mode (assuming that the resistivity is small). The same basic eigen-mode is found also for weakly two-dimensional equilibria, even in the ideal MHD limit. In this case the growth rate scales with the 1/4 power of the normal magnetic field. The results of the linear stability analysis are confirmed qualitatively by nonlinear dynamic MHD simulations. These results suggest the interesting possibility that substorm onset, or the thinning in the late growth phase, is caused by the release of a thermodynamic constraint without the (immediate) necessity of releasing the ideal MHD constraint. In the nonlinear regime the resistive and ideal developments differ in that the ideal mode does not lead to neutral line formation without the further release of the ideal MHD constraint; instead a thin current sheet forms. The isobaric constraint is critically discussed. Under perhaps more realistic adiabatic conditions the ideal mode appears to be stable but could be driven by external perturbations and thus generate the thin current sheet in the late growth phase, before a nonideal instability sets in.
Brown, Christopher A.; Brown, Kevin S.
2010-01-01
Correlated amino acid substitution algorithms attempt to discover groups of residues that co-fluctuate due to either structural or functional constraints. Although these algorithms could inform both ab initio protein folding calculations and evolutionary studies, their utility for these purposes has been hindered by a lack of confidence in their predictions due to hard-to-control sources of error. To complicate matters further, naive users are confronted with a multitude of methods to choose from, in addition to the mechanics of assembling and pruning a dataset. We first introduce a new pair scoring method, called ZNMI (Z-scored-product Normalized Mutual Information), which drastically improves the performance of mutual information for co-fluctuating residue prediction. Second, and more importantly, we recast the process of finding coevolving residues in proteins as a data-processing pipeline inspired by the medical imaging literature. We construct an ensemble of alignment partitions that can be used in a cross-validation scheme to assess the effects of choices made during the procedure on the resulting predictions. This pipeline sensitivity study gives a measure of reproducibility (how similar are the predictions given perturbations to the pipeline?) and accuracy (are residue pairs with large couplings on average close in tertiary structure?). We choose a handful of published methods, along with ZNMI, and compare their reproducibility and accuracy on three diverse protein families. We find that (i) of the algorithms tested, while none appear to be both highly reproducible and accurate, ZNMI is one of the most accurate by far and (ii) while users should be wary of predictions drawn from a single alignment, considering an ensemble of sub-alignments can help to determine both highly accurate and reproducible couplings. Our cross-validation approach should be of interest to both developers and end users of algorithms that try to detect correlated amino acid substitutions.
Greenstone belt tectonics: Thermal constraints
NASA Technical Reports Server (NTRS)
Bickle, M. J.; Nisbet, E. G.
1986-01-01
Archaean rocks provide a record of the early stages of planetary evolution. Their interpretation is frustrated by the probably unrepresentative nature of the preserved crust and by the well known ambiguities of tectonic geological synthesis. Broad constraints can be placed on tectonic processes in the early Earth from global-scale modeling of the thermal and chemical evolution of the Earth and its hydrosphere and atmosphere. The Archaean record is the main test of such models. Available general model constraints are outlined, based on the global tectonic setting within which Archaean crust evolved and on the direct evidence the Archaean record provides, particularly the thermal state of the early Earth.
Spacecraft Attitude Maneuver Planning Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Kornfeld, Richard P.
2004-01-01
A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or as an initial solution to initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
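A bare-bones version of the scheme described, with constraints folded into the cost as penalties (the toy objective and every parameter below are our own illustration, not JPL's planner): minimize x^2 + y^2 subject to the "keep-out" constraint x + y >= 1, using tournament selection, blend crossover, and Gaussian mutation.

```python
import random

random.seed(0)

def cost(ind):
    """Objective plus a quadratic penalty for violating x + y >= 1."""
    x, y = ind
    penalty = max(0.0, 1.0 - (x + y)) ** 2 * 100.0
    return x * x + y * y + penalty

def evolve(pop, generations=200):
    for _ in range(generations):
        nxt = []
        for _ in range(len(pop)):
            # tournament selection: best of 3 random individuals, twice
            a = min(random.sample(pop, 3), key=cost)
            b = min(random.sample(pop, 3), key=cost)
            w = random.random()
            # blend crossover plus Gaussian mutation on each gene
            nxt.append([w * pa + (1 - w) * pb + random.gauss(0, 0.05)
                        for pa, pb in zip(a, b)])
        nxt[0] = min(pop, key=cost)   # elitism: never lose the best solution
        pop = nxt
    return min(pop, key=cost)

pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(40)]
best = evolve(pop)
```

The optimum lies on the constraint boundary at (0.5, 0.5) with cost 0.5; the penalty weight steers the directed random search toward feasible, low-cost attitudes, mirroring the abstract's cost-function treatment of constraints.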
Hardness of cubic solid solutions
Gao, Faming
2017-01-01
We demonstrate that a hardening rule exists in cubic solid solutions with various combinations of ionic, covalent and metallic bonding. It is revealed that the hardening stress ∆τFcg is determined by three factors: the shear modulus G, the volume fraction of solute atoms fv, and the size misfit degree δb. A simple hardening correlation in the KCl-KBr solid solution is proposed as ∆τFcg = 0.27 G. It is applied to calculate the hardening behavior of the Ag-Au, KCl-KBr, InP-GaP, TiN-TiC, HfN-HfC, TiC-NbC and ZrC-NbC solid-solution systems. The composition dependence of hardness is elucidated quantitatively. The BN-BP solid-solution system is quantitatively predicted. We find a hardening plateau region around x = 0.55–0.85 in BNxP1−x, where BNxP1−x solid solutions are far harder than cubic BN. Because the prediction is quantitative, it sets the stage for a broad range of applications. PMID:28054659
Hardness of cubic solid solutions
NASA Astrophysics Data System (ADS)
Gao, Faming
2017-01-01
We demonstrate that a hardening rule exists in cubic solid solutions with various combinations of ionic, covalent and metallic bonding. It is revealed that the hardening stress ∆τFcg is determined by three factors: the shear modulus G, the volume fraction of solute atoms fv, and the size misfit degree δb. A simple hardening correlation in the KCl-KBr solid solution is proposed as ∆τFcg = 0.27 G. It is applied to calculate the hardening behavior of the Ag-Au, KCl-KBr, InP-GaP, TiN-TiC, HfN-HfC, TiC-NbC and ZrC-NbC solid-solution systems. The composition dependence of hardness is elucidated quantitatively. The BN-BP solid-solution system is quantitatively predicted. We find a hardening plateau region around x = 0.55–0.85 in BNxP1−x, where BNxP1−x solid solutions are far harder than cubic BN. Because the prediction is quantitative, it sets the stage for a broad range of applications.
NASA Technical Reports Server (NTRS)
Schwartz, Richard A.
1986-01-01
High time resolution hard X-ray rates with good counting statistics over 5 energy intervals were obtained using a large-area balloon-borne scintillation detector during the 27 June 1980 solar flare. The impulsive phase of the flare consisted of a series of major bursts lasting several to several tens of seconds. Superimposed on these longer bursts are numerous smaller spikes of approximately 0.5 to 1.0 second. The time profiles for different energies were cross-correlated for the major bursts. The rapid burst decay rates and the simultaneous peaks below 120 keV both indicate a rapid electron energy loss process. Thus, the flux profiles reflect the electron acceleration/injection process. The fast-rate data were obtained by a burst memory at 8 and 32 msec resolution over the entire main impulsive phase. These rates will be cross-correlated to look for short time delays and to find rapid fluctuations. However, a cursory examination shows that almost all fluctuations, down to the 5% level, were resolved with 256 msec bins.
A Closed-Form Solution to Retinex with Nonlocal Texture Constraints.
Zhao, Qi; Tan, Ping; Dai, Qiang; Shen, Li; Wu, Enhua; Lin, Stephen
2012-07-01
We propose a method for intrinsic image decomposition based on retinex theory and texture analysis. While most previous methods approach this problem by analyzing local gradient properties, our technique additionally identifies distant pixels with the same reflectance through texture analysis, and uses these nonlocal reflectance constraints to significantly reduce ambiguity in decomposition. We formulate the decomposition problem as the minimization of a quadratic function which incorporates both the retinex constraint and our nonlocal texture constraint. This optimization can be solved in closed form with the standard conjugate gradient algorithm. Extensive experimentation with comparisons to previous techniques validate our method in terms of both decomposition accuracy and runtime efficiency.
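The "standard conjugate gradient algorithm" the abstract relies on is generic; here is a self-contained sketch (our illustration, run on a small random system rather than image data) for minimizing (1/2) x^T A x - b^T x with A symmetric positive definite, equivalently solving A x = b:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Standard CG for SPD A: minimizes (1/2) x^T A x - b^T x."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual = negative gradient
    p = r.copy()           # initial search direction
    while np.linalg.norm(r) > tol:
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)     # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p           # next A-conjugate direction
        r = r_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)          # symmetric positive definite
b = rng.standard_normal(5)
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n iterations; for the large sparse quadratics arising in intrinsic image decomposition, only matrix-vector products are needed, which is why the abstract's closed-form formulation pairs naturally with this solver.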
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
The method of steepest descent used in optimizing one-dimensional layered radiation shields is extended to multidimensional, multiconstraint situations. The multidimensional optimization algorithm and equations are developed for the case of a dose constraint in any one direction being dependent only on the shield thicknesses in that direction and independent of shield thicknesses in other directions. Expressions are derived for one-, two-, and three-dimensional cases (one, two, and three constraints). The procedure is applicable to the optimization of shields where there are different dose constraints and layering arrangements in the principal directions.
A Stochastic Approach to Diffeomorphic Point Set Registration With Landmark Constraints
Kolesov, Ivan; Lee, Jehoon; Sharp, Gregory; Vela, Patricio; Tannenbaum, Allen
2016-01-01
This work presents a deformable point set registration algorithm that seeks an optimal set of radial basis functions to describe the registration. A novel, global optimization approach is introduced composed of simulated annealing with a particle filter based generator function to perform the registration. It is shown how constraints can be incorporated into this framework. A constraint on the deformation is enforced whose role is to ensure physically meaningful fields (i.e., invertible). Further, examples in which landmark constraints serve to guide the registration are shown. Results on 2D and 3D data demonstrate the algorithm’s robustness to noise and missing information. PMID:26761731
Handschin, E.; Langer, M.; Kliokys, E.
1995-12-31
The possibility of power system state estimation with a non-traditional measurement configuration is investigated. It is assumed that some substations are equipped with current magnitude measurements. In such a situation, unique state estimation is possible if the currents are combined with voltage or power measurements and inequality constraints on node power injections are taken into account. A state estimation algorithm facilitating the efficient incorporation of inequality constraints is developed using an interior point optimization method. Simulation results showing the performance of the algorithm are presented. The method can be used for state estimation in medium-voltage subtransmission and distribution networks.
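The interior point idea can be seen in one dimension (a toy of our own, far simpler than the estimator in the paper): to minimize (x - 3)^2 subject to x <= 1, minimize the barrier objective (x - 3)^2 - mu*ln(1 - x) for a decreasing barrier weight mu, keeping every iterate strictly feasible:

```python
import math

def barrier_minimize(mu, x):
    """Damped Newton on phi(x) = (x - 3)**2 - mu*log(1 - x), starting strictly inside x < 1."""
    for _ in range(100):
        g = 2.0 * (x - 3.0) + mu / (1.0 - x)    # phi'(x)
        h = 2.0 + mu / (1.0 - x) ** 2           # phi''(x) > 0, so phi is convex
        step = g / h
        while x - step >= 1.0:                  # damp the step to stay strictly feasible
            step *= 0.5
        x -= step
    return x

x = 0.0
for mu in (1.0, 0.1, 1e-2, 1e-4, 1e-6):
    x = barrier_minimize(mu, x)                 # warm-start each solve from the last
```

As mu shrinks, the barrier minimizers trace the central path toward the constrained optimum x = 1; the inequality constraints in the state estimator play the same role as the single bound here.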
New approaches to hard bubble suppression
NASA Technical Reports Server (NTRS)
Henry, R. D.; Besser, P. J.; Warren, R. G.; Whitcomb, E. C.
1973-01-01
Description of a new double-layer method for the suppression of hard bubbles that is more versatile than previously reported suppression techniques. It is shown that it may be possible to prevent hard bubble generation without recourse to exchange coupling of multilayer films.
Hard Spring Wheat Technical Committee 2016 Crop
Technology Transfer Automated Retrieval System (TEKTRAN)
Seven experimental lines of hard spring wheat were grown at up to five locations in 2016 and evaluated for kernel, milling, and bread baking quality against the check variety Glenn. Wheat samples were submitted through the Wheat Quality Council and processed and milled at the USDA-ARS Hard Red Spri...
"Hard Science" for Gifted 1st Graders
ERIC Educational Resources Information Center
DeGennaro, April
2006-01-01
"Hard Science" is designed to teach 1st grade gifted students accurate and high level science concepts. It is based upon their experience of the world and attempts to build a foundation for continued love and enjoyment of science. "Hard Science" provides field experiences and opportunities for hands-on discovery working beside experts in the field…
Hardness methods for testing maize kernels.
Fox, Glen; Manley, Marena
2009-07-08
Maize is a highly important crop in many countries around the world, both through the sale of the crop to domestic processors for the production of maize products and as a staple food on subsistence farms in developing countries. In many countries, there have been long-term research efforts to develop a suitable hardness method that could help the maize industry improve processing efficiency and possibly provide a quality specification for maize growers, which could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally, as well as important biochemical aspects of maize that contribute to its hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis and measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step using multiple sieve sizes. This would allow the range in hardness within a sample, as well as the average particle size and/or coarse/fine ratio, to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins). Both the content and composition of the zein fractions affect
Hardness Evolution of Gamma-Irradiated Polyoxymethylene
NASA Astrophysics Data System (ADS)
Hung, Chuan-Hao; Harmon, Julie P.; Lee, Sanboh
2016-12-01
This study focuses on analyzing hardness evolution in gamma-irradiated polyoxymethylene (POM) exposed to elevated temperatures after irradiation. Hardness increases with increasing annealing temperature and time, but decreases with increasing gamma ray dose. Hardness changes are attributed to defects generated in the microstructure and molecular structure. Gamma irradiation causes a decrease in the glass transition temperature, melting point, and extent of crystallinity. The kinetics of defects resulting in hardness changes follow a first-order structure relaxation. The rate constant adheres to an Arrhenius equation, and the corresponding activation energy decreases with increasing dose due to chain scission during gamma irradiation. The structure relaxation of POM has a lower energy barrier in crystalline regions than in amorphous ones. The hardness evolution in POM is an endothermic process due to the semi-crystalline nature of this polymer.
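The model form described (first-order structural relaxation with an Arrhenius rate constant) is compact enough to state directly; the numerical values below are illustrative placeholders of ours, not the paper's fitted parameters:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_constant(k0, Q, T):
    """Arrhenius law: k = k0 * exp(-Q / (R*T)), with activation energy Q in J/mol."""
    return k0 * math.exp(-Q / (R * T))

def hardness(t, T, H0, H_inf, k0, Q):
    """First-order relaxation of hardness from H0 toward the saturation value H_inf."""
    k = rate_constant(k0, Q, T)
    return H_inf + (H0 - H_inf) * math.exp(-k * t)

# after one hour of annealing, hardness rises toward saturation,
# faster at the higher temperature (all parameter values hypothetical)
h_cool = hardness(3600.0, 330.0, 150.0, 180.0, 1e3, 60e3)
h_hot = hardness(3600.0, 360.0, 150.0, 180.0, 1e3, 60e3)
```

The abstract's observation that the activation energy Q decreases with dose would enter here as a dose-dependent Q, raising k and hence the relaxation rate at a given temperature.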
Thermal spray coatings replace hard chrome
Schroeder, M.; Unger, R.
1997-08-01
Hard chrome plating provides good wear and erosion resistance, as well as good corrosion protection and fine surface finishes. Until a few years ago, it could also be applied at a reasonable cost. However, because of the many environmental and financial sanctions that have been imposed on the process over the past several years, cost has been on a consistent upward trend, and is projected to continue to escalate. Therefore, it is very important to find a coating or a process that offers the same characteristics as hard chrome plating, but without the consequent risks. This article lists the benefits and limitations of hard chrome plating, and describes the performance of two thermal spray coatings (tungsten carbide and chromium carbide) that compared favorably with hard chrome plating in a series of tests. It also lists three criteria to determine whether plasma spray or hard chrome plating should be selected.
Thread Graphs, Linear Rank-Width and Their Algorithmic Applications
NASA Astrophysics Data System (ADS)
Ganian, Robert
The introduction of tree-width by Robertson and Seymour [7] was a breakthrough in the design of graph algorithms. A lot of research since then has focused on obtaining a width measure that would be more general while still allowing efficient algorithms for a wide range of NP-hard problems on graphs of bounded width. To this end, Oum and Seymour have proposed rank-width, which allows the solution of many such hard problems on less restricted graph classes (see e.g. [3,4]). But what about problems which are NP-hard even on graphs of bounded tree-width, or even on trees? The parameter used most often for these exceptionally hard problems is path-width; however, it is extremely restrictive - for example, the graphs of path-width 1 are exactly paths.
Constraint-Based Scheduling System
NASA Technical Reports Server (NTRS)
Zweben, Monte; Eskey, Megan; Stock, Todd; Taylor, Will; Kanefsky, Bob; Drascher, Ellen; Deale, Michael; Daun, Brian; Davis, Gene
1995-01-01
Report describes continuing development of software for constraint-based scheduling system implemented eventually on massively parallel computer. Based on machine learning as means of improving scheduling. Designed to learn when to change search strategy by analyzing search progress and learning general conditions under which resource bottleneck occurs.
Constraint elimination in dynamical systems
NASA Technical Reports Server (NTRS)
Singh, R. P.; Likins, P. W.
1989-01-01
Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning
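The phase-transition idea above can be illustrated with a minimal sketch: random 3-SAT instances drawn at a clause-to-variable ratio near the empirically observed critical value of about 4.26 tend to be hardest for complete solvers. The generator below is illustrative only; the report's own problem classes and their mappings to planning are not reproduced here.

```python
import random

def random_3sat(n_vars, ratio=4.26, seed=0):
    """Random 3-SAT instance with round(ratio * n_vars) clauses, written
    as DIMACS-style signed-integer literals. Near ratio ~4.26 instances
    sit at the satisfiable/unsatisfiable phase transition and are
    empirically hardest to decide."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(round(ratio * n_vars)):
        variables = rng.sample(range(1, n_vars + 1), 3)  # 3 distinct vars
        clauses.append(tuple(v if rng.random() < 0.5 else -v
                             for v in variables))
    return clauses

instance = random_3sat(20)
```

Scaling the hardness study then amounts to sweeping `n_vars` while holding the ratio at the transition.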
A new algorithm for constrained nonlinear least-squares problems, part 1
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, F. T.
1983-01-01
A Gauss-Newton algorithm is presented for solving nonlinear least squares problems. The problem statement may include simple bounds or more general constraints on the unknowns. The algorithm uses a trust region that allows the objective function to increase with logic for retreating to best values. The computations for the linear problem are done using a least squares system solver that allows for simple bounds and linear constraints. The trust region limits are defined by a box around the current point. In its current form the algorithm is effective only for problems with small residuals, linear constraints and dense Jacobian matrices. Results on a set of test problems are encouraging.
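As a rough illustration of the ideas above (not the Hanson-Krogh algorithm itself), here is a one-parameter Gauss-Newton iteration with a box trust region and retreat-to-best logic, fitting y = exp(a*t) to noise-free data; the model, data, and step-control constants are invented for the example.

```python
import math

def gauss_newton_box(ts, ys, a0=0.0, delta=1.0, tol=1e-10, max_iter=100):
    """Fit y = exp(a*t) by Gauss-Newton with a box trust region.

    Each step is clipped to [-delta, delta]; if the sum of squared
    residuals increases, we retreat to the best point found so far and
    shrink the box, otherwise the box is allowed to grow."""
    def sse(a):
        return sum((y - math.exp(a * t)) ** 2 for t, y in zip(ts, ys))

    a, best_a, best_sse = a0, a0, sse(a0)
    for _ in range(max_iter):
        # residuals r_i = y_i - exp(a t_i); Jacobian J_i = -t_i exp(a t_i)
        r = [y - math.exp(a * t) for t, y in zip(ts, ys)]
        J = [-t * math.exp(a * t) for t in ts]
        jtj = sum(j * j for j in J)
        if jtj == 0.0:
            break
        step = -sum(j * ri for j, ri in zip(J, r)) / jtj  # Gauss-Newton step
        step = max(-delta, min(delta, step))              # box trust region
        cand = a + step
        f = sse(cand)
        if f < best_sse:
            best_a, best_sse = cand, f
            a, delta = cand, delta * 2.0                  # accept, expand box
        else:
            a, delta = best_a, delta * 0.5                # retreat, shrink box
        if abs(step) < tol:
            break
    return best_a

ts = [i / 10.0 for i in range(1, 11)]
ys = [math.exp(0.5 * t) for t in ts]   # exact data, a_true = 0.5
a_hat = gauss_newton_box(ts, ys)
```

With exact (small-residual) data the iteration converges rapidly, consistent with the paper's note that the method is effective for small-residual problems.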
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which has been shown to generally improve the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, since the unconstrained Kalman filter is theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is not so high, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
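As a toy illustration of the general idea (not the paper's tuning scheme), the scalar filter below enforces a hard interval constraint by projecting, i.e. clipping, each updated estimate; all model values here are invented.

```python
def constrained_kf(zs, q=0.01, r=1.0, lo=0.0, hi=1.0, x0=0.5, p0=1.0):
    """Scalar Kalman filter with identity dynamics whose updated state
    estimate is projected (clipped) onto the hard interval [lo, hi]."""
    x, p = x0, p0
    out = []
    for z in zs:
        p = p + q                 # predict: variance grows by process noise q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # measurement update
        p = (1.0 - k) * p
        x = min(max(x, lo), hi)   # enforce the hard constraint by projection
        out.append(x)
    return out

estimates = constrained_kf([0.9, 1.4, -0.3, 0.7])
```

A tuning scheme in the spirit of the paper would relax the clipping step toward the unconstrained estimate whenever the measurement residuals agree with their theoretical statistics.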
A method of hard X-ray phase-shifting digital holography.
Park, So Yeong; Hong, Chung Ki; Lim, Jun
2016-07-01
A new method of phase-shifting digital holography is demonstrated in the hard X-ray region. An in-line-type phase-shifting holography setup was installed in a 6.80 keV hard X-ray synchrotron beamline. By placing a phase plate consisting of a hole and a band at the focusing point of a Fresnel lens, the relative phase of the reference and objective beams could be successfully shifted for use with a three-step phase-shift algorithm. The system was verified by measuring the shape of a gold test pattern and a silica sphere.
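A common form of the three-step algorithm with phase shifts of 0, pi/2, and pi recovers the phase from three intensity frames as below; this is a generic textbook version, and the beamline's actual processing may differ.

```python
import math

def three_step_phase(i1, i2, i3):
    """Phase from frames I_k = a + b*cos(phi + d_k) with d = 0, pi/2, pi:
       I1 - I3          = 2b*cos(phi)
       I1 + I3 - 2*I2   = 2b*sin(phi)
    so phi = atan2(I1 + I3 - 2*I2, I1 - I3)."""
    return math.atan2(i1 + i3 - 2.0 * i2, i1 - i3)

# Synthetic check with invented values: a = 2, b = 1, phi = 0.7
a, b, phi = 2.0, 1.0, 0.7
frames = [a + b * math.cos(phi + d) for d in (0.0, math.pi / 2, math.pi)]
```

Applied pixel-wise to the three holograms, this yields the wrapped phase map of the object beam.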
NASA Astrophysics Data System (ADS)
Afzalirad, Mojtaba; Rezaeian, Javad
2016-04-01
This study involves an unrelated parallel machine scheduling problem in which sequence-dependent set-up times, different release dates, machine eligibility and precedence constraints are considered to minimize total late works. A new mixed-integer programming model is presented and two efficient hybrid meta-heuristics, genetic algorithm and ant colony optimization, combined with the acceptance strategy of the simulated annealing algorithm (Metropolis acceptance rule), are proposed to solve this problem. Manifestly, the precedence constraints greatly increase the complexity of the scheduling problem to generate feasible solutions, especially in a parallel machine environment. In this research, a new corrective algorithm is proposed to obtain the feasibility in all stages of the algorithms. The performance of the proposed algorithms is evaluated in numerical examples. The results indicate that the suggested hybrid ant colony optimization statistically outperformed the proposed hybrid genetic algorithm in solving large-size test problems.
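The Metropolis acceptance rule borrowed from simulated annealing can be written in a few lines; this is the generic rule only, not the authors' full hybrid GA/ACO algorithm.

```python
import math
import random

def metropolis_accept(delta, temperature, rng=random):
    """Metropolis rule: always accept an improving move (delta <= 0);
    accept a worsening move with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

Embedded in a genetic or ant colony search, the rule lets occasionally worse neighbours be accepted, which helps the hybrid escape local optima.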
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we present an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
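For permutation-coded routes, a classic GA recombination operator is order crossover (OX), sketched below; it is a standard operator and not necessarily the variant used in the study.

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX) for permutation-coded routes: copy a random
    slice from parent 1, then fill the remaining positions with the
    missing customers in the order they appear in parent 2."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    taken = set(p1[a:b])
    fill = iter(g for g in p2 if g not in taken)
    for i in range(n):
        if child[i] is None:
            child[i] = next(fill)
    return child

child = order_crossover(list(range(8)), [3, 1, 4, 0, 7, 2, 6, 5],
                        random.Random(1))
```

OX is popular for routing because every child is again a valid permutation of the customers, so no repair step is needed.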
Aerocapture Guidance Algorithm Comparison Campaign
NASA Technical Reports Server (NTRS)
Rousseau, Stephane; Perot, Etienne; Graves, Claude; Masciarelli, James P.; Queen, Eric
2002-01-01
Aerocapture is a promising technique for future human interplanetary missions. The Mars Sample Return was initially based on an insertion by aerocapture. A CNES orbiter, Mars Premier, was developed to demonstrate this concept. Mainly due to budget constraints, the aerocapture was cancelled for the French orbiter. Many studies were carried out during the last three years to develop and test different guidance algorithms (APC, EC, TPC, NPC). This work was shared between CNES and NASA, with a fruitful joint working group. To finish this study, an evaluation campaign was performed to test the different algorithms. The objective was to assess the robustness, accuracy, capability to limit the load, and the complexity of each algorithm. A simulation campaign was specified and performed by CNES, with a similar activity on the NASA side to confirm the CNES results. This evaluation demonstrated that the numerical guidance principle is not competitive compared to the analytical concepts. All the other algorithms are well adapted to guarantee the success of the aerocapture. The TPC appears to be the most robust, the APC the most accurate, and the EC a good compromise.
Finite element solution of optimal control problems with inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1990-01-01
A finite-element method based on a weak Hamiltonian form of the necessary conditions is summarized for optimal control problems. Very crude shape functions (so simple that element numerical quadrature is not necessary) can be used to develop an efficient procedure for obtaining candidate solutions (i.e., those which satisfy all the necessary conditions) even for highly nonlinear problems. An extension of the formulation allowing for discontinuities in the states and derivatives of the states is given. A theory that includes control inequality constraints is fully developed. An advanced launch vehicle (ALV) model is presented. The model involves staging and control constraints, thus demonstrating the full power of the weak formulation to date. Numerical results are presented along with total elapsed computer time required to obtain the results. The speed and accuracy in obtaining the results make this method a strong candidate for a real-time guidance algorithm.
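For context, the necessary conditions that such a weak Hamiltonian form discretizes are the standard first-order (Pontryagin) conditions; in generic notation (not the paper's), with Hamiltonian H(x, u, lambda, t) = L(x, u, t) + lambda^T f(x, u, t) and control inequality constraints g(u) <= 0:

```latex
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^*(t) = \arg\min_{g(u) \le 0} H(x, u, \lambda, t)
```

A candidate solution in the paper's sense is one satisfying all three conditions simultaneously, including at the staging discontinuities.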
Causal Discovery from Subsampled Time Series Data by Constraint Optimization
Hyttinen, Antti; Plis, Sergey; Järvisalo, Matti; Eberhardt, Frederick; Danks, David
2017-01-01
This paper focuses on causal structure estimation from time series data in which measurements are obtained at a coarser timescale than the causal timescale of the underlying system. Previous work has shown that such subsampling can lead to significant errors about the system’s causal structure if not properly taken into account. In this paper, we first consider the search for the system timescale causal structures that correspond to a given measurement timescale structure. We provide a constraint satisfaction procedure whose computational performance is several orders of magnitude better than previous approaches. We then consider finite-sample data as input, and propose the first constraint optimization approach for recovering the system timescale causal structure. This algorithm optimally recovers from possible conflicts due to statistical errors. More generally, these advances allow for a robust and non-parametric estimation of system timescale causal structures from subsampled time series data. PMID:28203316
Competitive learning with pairwise constraints.
Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep
2013-01-01
Constrained clustering has been an active research topic since the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results--in terms of the normalized mutual information (NMI)--from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
Analysis of Hard Thin Film Coating
NASA Technical Reports Server (NTRS)
Shen, Dashen
1998-01-01
MSFC is interested in developing hard thin film coatings for bearings. The wear of the bearing is an important problem for space flight engines. Hard thin film coatings can drastically improve the surface of the bearing and improve its wear-endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach is to use electron cyclotron resonance chemical vapor deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.
Self-assembly in colloidal hard-sphere systems
NASA Astrophysics Data System (ADS)
Filion, L. C.
2011-01-01
In this thesis, we examine the phase behaviour and nucleation in a variety of hard-sphere systems. In Chapter 1 we present a short introduction and describe some of the simulation techniques used in this thesis. One of the main difficulties in predicting the phase behaviour in colloidal, atomic and nanoparticle systems is in determining the stable crystalline phases. To address this problem, in Chapters 2 and 4 we present two different methods for predicting possible crystal phases. In Chapter 2, we apply a genetic algorithm to binary hard-sphere mixtures and use it to predict the best-packed structures for this system. In Chapter 4 we present a novel method based on Monte Carlo simulations to predict possible crystalline structures for a variety of models. When the possible phases are known, full free-energy calculations can be used to predict the phase diagrams. This is the focus of Chapters 3 and 5. In Chapter 3, we examine the phase behaviour for binary hard-sphere mixtures with size ratios of the large and small spheres between 0.74 and 0.85. Between size ratios 0.76 and 0.84 we find regions where the binary Laves phases are stable, in addition to monodisperse face-centered-cubic (FCC) crystals of the large and small spheres and a binary liquid. For size ratios 0.74 and 0.85 we find only the monodisperse FCC crystals and the binary liquid. In Chapter 5 we examine the phase behaviour of binary hard-sphere mixtures with size ratios between 0.3 and 0.42. In this range, we find an interstitial solid solution (ISS) to be stable, as well as FCC crystals of the small and large spheres, and a binary fluid. The ISS phase consists of an FCC crystal of the large particles with some of the octahedral holes filled by smaller particles. We show that this filling fraction can be tuned from 0 to 100%. Additionally, we examine the diffusive properties of the small particles in the ISS for size ratio 0.3. In contrast to most systems, we find a region where the diffusion
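In hard-sphere Monte Carlo the only energetic rule is overlap rejection; a minimal single-particle trial move with periodic boundaries might look as follows (a generic sketch, not the thesis code):

```python
import random

def try_move(positions, i, sigma, box, delta, rng):
    """Attempt a random displacement of sphere i in a cubic periodic box.
    Hard-core rule: accept iff every pair distance stays >= sigma."""
    old = positions[i]
    new = tuple((old[d] + rng.uniform(-delta, delta)) % box for d in range(3))
    for j, p in enumerate(positions):
        if j == i:
            continue
        d2 = 0.0
        for d in range(3):
            dx = new[d] - p[d]
            dx -= box * round(dx / box)   # minimum-image convention
            d2 += dx * dx
        if d2 < sigma * sigma:
            return False                   # overlap: reject the move
    positions[i] = new
    return True

spheres = [(0.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
accepted = try_move(spheres, 0, sigma=1.0, box=10.0, delta=0.1,
                    rng=random.Random(0))
```

Because all allowed configurations have equal energy, phase behaviour in such systems is driven purely by entropy (packing), which is why free-energy calculations are central in the thesis.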
Dirichlet Boundary Control of Hyperbolic Equations in the Presence of State Constraints
Mordukhovich, Boris S. Raymond, Jean-Pierre
2004-03-15
We study optimal control problems for hyperbolic equations (focusing on the multidimensional wave equation) with control functions in the Dirichlet boundary conditions under hard/pointwise control and state constraints. Imposing appropriate convexity assumptions on the cost integral functional, we establish the existence of optimal control and derive new necessary optimality conditions in the integral form of the Pontryagin Maximum Principle for hyperbolic state-constrained systems.
Observational constraints on exponential gravity
Yang, Louis; Lee, Chung-Chi; Luo, Ling-Wei; Geng, Chao-Qiang
2010-11-15
We study the observational constraints on the exponential gravity model f(R) = -βR_s(1 - e^(-R/R_s)). We use the latest observational data, including the Supernova Cosmology Project Union2 compilation, the Two-Degree Field Galaxy Redshift Survey, the Sloan Digital Sky Survey Data Release 7, and the Seven-Year Wilkinson Microwave Anisotropy Probe, in our analysis. From these observations, we obtain a lower bound on the model parameter β of 1.27 (95% C.L.) but no appreciable upper bound. The constraint on the present matter density parameter is 0.245 < Ω_m^0 < 0.311 (95% C.L.). We also find the best-fit values of the model parameters in several cases.
Functional constraints on phenomenological coefficients
NASA Astrophysics Data System (ADS)
Klika, Václav; Pavelka, Michal; Benziger, Jay B.
2017-02-01
Thermodynamic fluxes (diffusion fluxes, heat flux, etc.) are often proportional to thermodynamic forces (gradients of chemical potentials, temperature, etc.) via the matrix of phenomenological coefficients. Onsager's relations imply that the matrix is symmetric, which reduces the number of unknown coefficients. In this article we demonstrate that for a class of nonequilibrium thermodynamic models, in addition to Onsager's relations, the phenomenological coefficients must share the same functional dependence on the local thermodynamic state variables. Thermodynamic models and experimental data should be validated through consistency with this functional constraint. We present examples of coupled heat and mass transport (thermodiffusion) and coupled charge and mass transport (electro-osmotic drag). Additionally, these newly identified constraints further reduce the number of experiments needed to determine the phenomenological coefficients.
A compendium of chameleon constraints
NASA Astrophysics Data System (ADS)
Burrage, Clare; Sakstein, Jeremy
2016-11-01
The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches, making it difficult to understand what regions of parameter space remain open. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.
Integral Constraints and MHD Stability
NASA Astrophysics Data System (ADS)
Jensen, T. H.
2003-10-01
Determining stability of a plasma in MHD equilibrium, energetically isolated by a conducting wall, requires an assumption on what governs the dynamics of the plasma. One example is the assumption that the plasma obeys ideal MHD, leading to the well-known "δW" criteria [I. Bernstein, et al., Proc. Roy. Soc. London A244, 17 (1958)]. A radically different approach was used by Taylor [J.B. Taylor, Rev. Mod. Phys. 58, 741 (1986)] in assuming that the dynamics of the plasma is restricted only by the requirement that helicity, an integral constant associated with the plasma, is conserved. The relevancy of Taylor's assumption is supported by the agreement between resulting theoretical results and experimental observations. Another integral constraint involves the canonical angular momentum of the plasma particles. One consequence of using this constraint is that tokamak plasmas have no poloidal current, in agreement with some current hole tokamak observations [T.H. Jensen, Phys. Lett. A 305, 183 (2002)].
Scheduling Earth Observing Satellites with Evolutionary Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
We hypothesize that evolutionary algorithms can effectively schedule coordinated fleets of Earth observing satellites. The constraints are complex and the bottlenecks are not well understood, a condition where evolutionary algorithms are often effective. This is, in part, because evolutionary algorithms require only that one can represent solutions, modify solutions, and evaluate solution fitness. To test the hypothesis we have developed a representative set of problems, produced optimization software (in Java) to solve them, and run experiments comparing techniques. This paper presents initial results of a comparison of several evolutionary and other optimization techniques; namely the genetic algorithm, simulated annealing, squeaky wheel optimization, and stochastic hill climbing. We also compare separate satellite vs. integrated scheduling of a two satellite constellation. While the results are not definitive, tests to date suggest that simulated annealing is the best search technique and integrated scheduling is superior.
A segmentation algorithm for noisy images
Xu, Y.; Olman, V.; Uberbacher, E.C.
1996-12-31
This paper presents a 2-D image segmentation algorithm and addresses issues related to its performance on noisy images. The algorithm segments an image by first constructing a minimum spanning tree representation of the image and then partitioning the spanning tree into subtrees representing different homogeneous regions. The spanning tree is partitioned in such a way that the sum of gray-level variations over all partitioned subtrees is minimized, under the constraints that each subtree has at least a specified number of pixels and that two adjacent subtrees have significantly different "average" gray-levels. Two types of noise, transmission errors and Gaussian additive noise, are considered and their effects on the segmentation algorithm are studied. Evaluation results have shown that the segmentation algorithm is robust in the presence of these two types of noise.
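A simplified version of this idea (Kruskal's algorithm with union-find, a fixed gray-level threshold, and a minimum-size pass) can be sketched as follows; the paper's actual criterion compares adjacent subtree averages rather than using a single fixed threshold, so this is only an approximation of the scheme.

```python
def segment(image, tau, min_size):
    """MST-flavoured segmentation via Kruskal's algorithm + union-find.
    Merge 4-neighbours in order of increasing gray-level difference when
    the difference is <= tau, then force-merge undersized regions.
    Returns the number of segments."""
    h, w = len(image), len(image[0])
    parent = list(range(h * w))
    size = [1] * (h * w)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        x, y = find(x), find(y)
        if x != y:
            if size[x] < size[y]:
                x, y = y, x
            parent[y] = x
            size[x] += size[y]

    edges = []                              # 4-neighbour grid graph
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                edges.append((abs(image[r][c] - image[r][c + 1]),
                              r * w + c, r * w + c + 1))
            if r + 1 < h:
                edges.append((abs(image[r][c] - image[r + 1][c]),
                              r * w + c, (r + 1) * w + c))
    edges.sort()
    for wgt, u, v in edges:                 # Kruskal-style merging
        if wgt <= tau:
            union(u, v)
    for wgt, u, v in edges:                 # minimum-size constraint
        if find(u) != find(v) and min(size[find(u)], size[find(v)]) < min_size:
            union(u, v)
    return len({find(i) for i in range(h * w)})
```

On a toy image with two flat regions, a threshold below the boundary contrast yields two segments, while a threshold above it merges everything into one.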
Managing Restaurant Tables using Constraints
NASA Astrophysics Data System (ADS)
Vidotto, Alfio; Brown, Kenneth N.; Beck, J. Christopher
Restaurant table management can have significant impact on both profitability and the customer experience. The core of the issue is a complex dynamic combinatorial problem. We show how to model the problem as constraint satisfaction, with extensions which generate flexible seating plans and which maintain stability when changes occur. We describe an implemented system which provides advice to users in real time. The system is currently being evaluated in a restaurant environment.
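The core assignment step of such a system can be modelled as a small constraint satisfaction search; the sketch below is an invented minimal version (one table per party, capacity constraints only), far simpler than the flexible seating plans the paper describes.

```python
def assign_tables(parties, tables):
    """Backtracking constraint solver: give each party its own table with
    enough seats. parties: list of (name, size); tables: {table: capacity}.
    Returns {party: table} or None if no feasible seating plan exists."""
    order = sorted(parties, key=lambda p: -p[1])   # largest parties first

    def backtrack(i, used, plan):
        if i == len(order):
            return dict(plan)
        name, n_guests = order[i]
        for table, capacity in tables.items():
            if table not in used and capacity >= n_guests:
                used.add(table)
                plan[name] = table
                found = backtrack(i + 1, used, plan)
                if found is not None:
                    return found
                used.discard(table)                 # undo and try next table
                del plan[name]
        return None                                 # dead end: backtrack

    return backtrack(0, set(), {})

plan = assign_tables([("A", 2), ("B", 4), ("C", 2)],
                     {"t1": 2, "t2": 2, "t3": 4})
```

Ordering the largest parties first is a standard most-constrained-variable heuristic: infeasibility is detected earlier, which matters for the real-time advice the system provides.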
Macroscopic constraints on string unification
Taylor, T.R.
1989-03-01
The comparison of string theory with experiment requires a huge extrapolation from microscopic distances, of the order of the Planck length, up to macroscopic laboratory distances. The quantum effects give rise to large corrections to the macroscopic predictions of string unification. I discuss the model-independent constraints on the gravitational sector of string theory due to the inevitable existence of universal Fradkin-Tseytlin dilatons. 9 refs.
Adaptive Search through Constraint Violations
1990-01-01
VanLehn, K. (1990). Adaptive search through constraint violations (Technical Report No. KUL-90-01). Pittsburgh, PA: Learning Research and Development Center.
Counting Heron Triangles with Constraints
2013-01-25
Stănică, Pantelimon (Applied Mathematics, Naval Postgraduate School, Monterey)
Heron triangles have the property that all three of their sides as well as their area are positive integers. In this paper, we give some estimates for the number of Heron triangles with two of their sides fixed. We provide a
An active set algorithm for tracing parametrized optima
NASA Technical Reports Server (NTRS)
Rakowska, J.; Haftka, R. T.; Watson, L. T.
1991-01-01
Optimization problems often depend on parameters that define constraints or objective functions. It is often necessary to know the effect of a change in a parameter on the optimum solution. An algorithm is presented here for tracking paths of optimal solutions of inequality constrained nonlinear programming problems as a function of a parameter. The proposed algorithm employs homotopy zero-curve tracing techniques to track segments where the set of active constraints is unchanged. The transition between segments is handled by considering all possible sets of active constraints and eliminating nonoptimal ones based on the signs of the Lagrange multipliers and the derivatives of the optimal solutions with respect to the parameter. A spring-mass problem is used to illustrate all possible kinds of transition events, and the algorithm is applied to a well-known ten-bar truss structural optimization problem.
Quantum Algorithm for Linear Programming Problems
NASA Astrophysics Data System (ADS)
Joag, Pramod; Mehendale, Dhananjay
The quantum algorithm (PRL 103, 150502, 2009) solves a system of linear equations with exponential speedup over existing classical algorithms. We show that the above algorithm can be readily adopted in iterative algorithms for solving linear programming (LP) problems. The first iterative algorithm that we suggest for the LP problem follows from duality theory. It consists of finding a nonnegative solution of the equations for the duality condition, for the constraints imposed by the given primal problem, and for the constraints imposed by its corresponding dual problem. This problem is called the problem of nonnegative least squares, or simply the NNLS problem. We use a well-known method for solving the NNLS problem due to Lawson and Hanson. This algorithm essentially consists of solving, in each iterative step, a new system of linear equations. The other iterative algorithms that can be used are those based on interior point methods. The same technique can be adopted for solving network flow problems, as these problems can be readily formulated as LP problems. The suggested quantum algorithm can solve LP problems and network flow problems of very large size involving millions of variables.
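The NNLS subproblem can be illustrated with a simple projected-gradient iteration, used here as a stand-in for the Lawson-Hanson active-set method mentioned above; the example data are invented.

```python
def nnls_pg(A, b, steps=2000):
    """Nonnegative least squares, min ||Ax - b|| s.t. x >= 0, solved by
    projected gradient descent. A is a list of rows; the step size
    1/||A||_F^2 crudely bounds 1/||A^T A||."""
    m, n = len(A), len(A[0])
    lr = 1.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(steps):
        resid = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
                 for i in range(m)]
        grad = [sum(A[i][j] * resid[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - lr * grad[j]) for j in range(n)]  # project onto x >= 0
    return x

x = nnls_pg([[1.0, 0.0], [0.0, 1.0]], [3.0, -2.0])
```

In the quantum setting, each iterate's linear solve is where the exponential speedup of the linear-system algorithm would be invoked.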
Infrared Constraint on Ultraviolet Theories
Tsai, Yuhsin
2012-08-01
While our current paradigm of particle physics, the Standard Model (SM), has been extremely successful at explaining experiments, it is theoretically incomplete and must be embedded into a larger framework. In this thesis, we review the main motivations for theories beyond the SM (BSM) and the ways such theories can be constrained using low energy physics. The hierarchy problem, neutrino mass and the existence of dark matter (DM) are the main reasons why the SM is incomplete . Two of the most plausible theories that may solve the hierarchy problem are the Randall-Sundrum (RS) models and supersymmetry (SUSY). RS models usually suffer from strong flavor constraints, while SUSY models produce extra degrees of freedom that need to be hidden from current experiments. To show the importance of infrared (IR) physics constraints, we discuss the flavor bounds on the anarchic RS model in both the lepton and quark sectors. For SUSY models, we discuss the difficulties in obtaining a phenomenologically allowed gaugino mass, its relation to R-symmetry breaking, and how to build a model that avoids this problem. For the neutrino mass problem, we discuss the idea of generating small neutrino masses using compositeness. By requiring successful leptogenesis and the existence of warm dark matter (WDM), we can set various constraints on the hidden composite sector. Finally, to give an example of model independent bounds from collider experiments, we show how to constrain the DM–SM particle interactions using collider results with an effective coupling description.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm a single path, or trajectory, is taken in parameter space from the starting point to the globally optimal solution, while in the RBSA algorithm many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
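The branching idea described above can be sketched in a few lines. The following toy is illustrative only (not code from the cited work): several branches each keep a best point and a per-branch shrinking trust region, with a simple accept-if-better rule standing in for the full annealing acceptance criterion, and a hypothetical multimodal objective.

```python
import math
import random

random.seed(0)

def objective(x):
    # Hypothetical multimodal test function (global minimum near x = 2.2).
    return (x - 2.0) ** 2 + 0.5 * math.sin(8.0 * x)

def branching_anneal(n_branches=5, n_iters=300, lo=-5.0, hi=5.0):
    # Each branch keeps its own best point and its own shrinking trust region,
    # so several promising regions of parameter space are explored at once.
    branches = [{"x": random.uniform(lo, hi), "radius": (hi - lo) / 2}
                for _ in range(n_branches)]
    for b in branches:
        b["f"] = objective(b["x"])
    for _ in range(n_iters):
        for b in branches:
            # Propose inside the trust region, clipped to the bound constraints.
            cand = min(hi, max(lo, b["x"] + random.uniform(-b["radius"], b["radius"])))
            f_cand = objective(cand)
            if f_cand < b["f"]:       # accept-if-better (Metropolis step omitted)
                b["x"], b["f"] = cand, f_cand
            b["radius"] *= 0.98       # the region shrinks as the search continues
    return min(branches, key=lambda b: b["f"])

best = branching_anneal()
```

The per-branch radius also gives the natural bound enforcement noted in feature 2: proposals are drawn inside the trust region and clipped to [lo, hi].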
Automated radiation hard ASIC design tool
NASA Technical Reports Server (NTRS)
White, Mike; Bartholet, Bill; Baze, Mark
1993-01-01
A commercial based, foundry independent, compiler design tool (ChipCrafter) with custom radiation hardened library cells is described. A unique analysis approach allows low hardness risk for Application Specific IC's (ASIC's). Accomplishments, radiation test results, and applications are described.
21 CFR 133.150 - Hard cheeses.
Code of Federal Regulations, 2014 CFR
2014-04-01
... action of harmless lactic-acid-producing bacteria, with or without other harmless flavor-producing... minutes, or for a time and at a temperature equivalent thereto in phosphatase destruction. A hard...
Macroindentation hardness measurement-Modernization and applications.
Patel, Sarsvat; Sun, Changquan Calvin
2016-06-15
In this study, we first developed a modernized indentation technique for measuring tablet hardness (H). This technique features rapid digital image capture, using a calibrated light microscope, and precise area determination. We then systematically studied the effects of key experimental parameters, including indentation force, speed, and holding time, on the measured hardness of a very soft material, hydroxypropyl cellulose, and a very hard material, dibasic calcium phosphate, to cover a wide range of material properties. Based on the results, a holding period of 3 min at the peak indentation load is recommended to minimize the effect of testing speed on H. Using this method, we show that an exponential decay function well describes the relationship between tablet hardness and porosity for the seven commonly used pharmaceutical powders investigated in this work. We propose that H and H at zero porosity may be used to quantify tablet deformability and powder plasticity, respectively.
Electronic Teaching: Hard Disks and Networks.
ERIC Educational Resources Information Center
Howe, Samuel F.
1984-01-01
Describes floppy-disk and hard-disk based networks, electronic systems linking microcomputers together for the purpose of sharing peripheral devices, and presents points to remember when shopping for a network. (MBR)
Novel hard compositions and methods of preparation
Sheinberg, H.
1981-02-03
Novel very hard compositions of matter are prepared by using in all embodiments only a minor amount of a particular carbide (or materials which can form the carbide in situ when subjected to heat and pressure); and no strategic cobalt is needed. Under a particular range of conditions, densified compositions of matter of the invention are prepared having hardnesses on the Rockwell A test substantially equal to the hardness of pure tungsten carbide and to two of the hardest commercial cobalt-bonded tungsten carbides. Alternately, other compositions of the invention which have slightly lower hardnesses than those described above in one embodiment also possess the advantage of requiring no tungsten and in another embodiment possess the advantage of having a good fracture toughness value.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., if any is present, for any seed required to be labeled as to the percentage of germination, and the percentage of hard seed shall not be included as part of the germination percentage. [24 FR 3953, May...
Code of Federal Regulations, 2011 CFR
2011-01-01
..., if any is present, for any seed required to be labeled as to the percentage of germination, and the percentage of hard seed shall not be included as part of the germination percentage. [24 FR 3953, May...
Code of Federal Regulations, 2012 CFR
2012-01-01
... any is present, for any seed required to be labeled as to the percentage of germination, and the percentage of hard seed shall not be included as part of the germination percentage. [32 FR 12779, Sept....
Code of Federal Regulations, 2011 CFR
2011-01-01
... any is present, for any seed required to be labeled as to the percentage of germination, and the percentage of hard seed shall not be included as part of the germination percentage. [32 FR 12779, Sept....
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wanlin
2004-01-01
In this paper, we introduce JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint system with a runtime software environment and improving its applicability. We describe how JNET is applied to a real-world problem - NASA's Earth-science data processing domain, and demonstrate how JNET can be extended, without any knowledge of how it is implemented, to meet the growing demands of real-world applications.
Optimal reactive planning with security constraints
Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.; Thorp, J.D.; Dunnett, R.M.; Schaff, G.
1995-12-31
The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.
Genetic Algorithms for Digital Quantum Simulations.
Las Heras, U; Alvarez-Rodriguez, U; Solano, E; Sanz, M
2016-06-10
We propose genetic algorithms, which are robust optimization techniques inspired by natural selection, to enhance the versatility of digital quantum simulations. In this sense, we show that genetic algorithms can be employed to increase the fidelity and optimize the resource requirements of digital quantum simulation protocols while adapting naturally to the experimental constraints. Furthermore, this method allows us to reduce not only digital errors but also experimental errors in quantum gates. Indeed, by adding ancillary qubits, we design a modular gate made out of imperfect gates, whose fidelity is larger than the fidelity of any of the constituent gates. Finally, we prove that the proposed modular gates are resilient against different gate errors.
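As a hedged illustration of the genetic-algorithm machinery invoked above (selection, crossover, mutation), here is a generic sketch on a stand-in bitstring objective; it is not the quantum-simulation code itself, and the OneMax fitness is an assumption chosen only to keep the example self-contained.

```python
import random

random.seed(1)

def fitness(bits):
    # Stand-in objective: fraction of "correct" gate choices (OneMax).
    return sum(bits) / len(bits)

def genetic_algorithm(n_bits=20, pop_size=30, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Binary tournament selection.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Experimental constraints of the kind mentioned in the abstract would typically enter through the fitness function (penalizing infeasible gate sequences) rather than the operators.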
Hard Suit With Adjustable Torso Length
NASA Technical Reports Server (NTRS)
Vykukal, Hubert C.
1987-01-01
Torso sizing rings allow single suit to fit variety of people. Sizing rings inserted between coupling rings of torso portion of hard suit. Number of rings chosen to fit torso length of suit to that of wearer. Rings mate with, and seal to, coupling rings and to each other. New adjustable-size concept with cost-saving feature applied to other suits not entirely constructed of "hard" materials, such as chemical defense suits and suits for industrial-hazard cleanup.
A Novel Approach to Hardness Testing
NASA Technical Reports Server (NTRS)
Spiegel, F. Xavier; West, Harvey A.
1996-01-01
This paper gives a description of the application of a simple rebound time measuring device and relates the determination of relative hardness of a variety of common engineering metals. A relation between rebound time and hardness will be sought. The effect of geometry and surface condition will also be discussed in order to acquaint the student with the problems associated with this type of method.
Laser Ablation of Dental Hard Tissue
Seka, W.; Rechmann, P.; Featherstone, J.D.B.; Fried, D.
2007-07-31
This paper discusses ablation of dental hard tissue using pulsed lasers. It focuses particularly on the relevant tissue and laser parameters and some of the basic ablation processes that are likely to occur. The importance of interstitial water and its phase transitions is discussed in some detail along with the ablation processes that may or may not directly involve water. The interplay between tissue parameters and laser parameters in the outcome of the removal of dental hard tissue is discussed in detail.
Learning and Parallelization Boost Constraint Search
ERIC Educational Resources Information Center
Yun, Xi
2013-01-01
Constraint satisfaction problems are a powerful way to abstract and represent academic and real-world problems from both artificial intelligence and operations research. A constraint satisfaction problem is typically addressed by a sequential constraint solver running on a single processor. Rather than construct a new, parallel solver, this work…
Geomagnetic field models incorporating physical constraints on the secular variation
NASA Technical Reports Server (NTRS)
Constable, Catherine; Parker, Robert L.
1993-01-01
This proposal has been concerned with methods for constructing geomagnetic field models that incorporate physical constraints on the secular variation. The principle goal that has been accomplished is the development of flexible algorithms designed to test whether the frozen flux approximation is adequate to describe the available geomagnetic data and their secular variation throughout this century. These have been applied to geomagnetic data from both the early and middle part of this century and convincingly demonstrate that there is no need to invoke violations of the frozen flux hypothesis in order to satisfy the available geomagnetic data.
Bayesian Stereo Matching Method Based on Edge Constraints.
Li, Jie; Shi, Wenxuan; Deng, Dexiang; Jia, Wenyan; Sun, Mingui
2012-12-01
A new global stereo matching method is presented that focuses on the handling of disparity, discontinuity and occlusion. The Bayesian approach is utilized for dense stereo matching problem formulated as a maximum a posteriori Markov Random Field (MAP-MRF) problem. In order to improve stereo matching performance, edges are incorporated into the Bayesian model as a soft constraint. Accelerated belief propagation is applied to obtain the maximum a posteriori estimates in the Markov random field. The proposed algorithm is evaluated using the Middlebury stereo benchmark. Our experimental results comparing with some state-of-the-art stereo matching methods demonstrate that the proposed method provides superior disparity maps with a subpixel precision.
Heuristic algorithm for off-lattice protein folding problem*
Chen, Mao; Huang, Wen-qi
2006-01-01
Enlightened by the law of interactions among objects in the physical world, we propose a heuristic algorithm for solving the three-dimensional (3D) off-lattice protein folding problem. Based on a physical model, the problem is converted from a nonlinear constrained problem to an unconstrained optimization problem which can be solved by the well-known gradient method. To improve the efficiency of our algorithm, a strategy was introduced to generate the initial configuration. Computational results showed that this algorithm could find states with lower energy than the previously proposed ground states obtained by the nPERM algorithm for all chains with lengths ranging from 13 to 55. PMID:16365919
Hard x ray highlights of AR 5395
NASA Technical Reports Server (NTRS)
Schwartz, R. A.; Dennis, Brian R.
1989-01-01
Active Region 5395 produced an exceptional series of hard x ray bursts notable for their frequency, intensity, and impulsivity. Over the two weeks from March 6 to 19, 447 hard x ray flares were observed by the Hard X Ray Burst Spectrometer on Solar Maximum Mission (HXRBS/SMM), a rate of approx. 35 per day which exceeded the previous high by more than 50 percent. During one 5 day stretch, more than 250 flares were detected, also a new high. The three largest GOES X-flares were observed by HXRBS and had hard x ray rates over 100,000 s^-1, compared with only ten flares above 100,000 s^-1 during the previous nine years of the mission. An ongoing effort for the HXRBS group has been the correlated analysis of hard x ray data with flare data at other wavelengths, with the most recent emphasis on those measurements with spatial information. During a series of bursts from AR 5395 at 1644 to 1648 UT on 12 March 1989, simultaneous observations were made by HXRBS and UVSP (Ultra Violet Spectrometer Polarimeter) on SMM, the two-element Owens Valley Radio Observatory (OVRO) interferometric array, and R. Canfield's H-alpha Echelle spectrograph at the National Solar Observatory at Sacramento Peak. The data show strong correlations in the hard x ray, microwave, and UV lightcurves. This event will be the subject of a combined analysis.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with many extrema and many parameters. This is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm and gives full play to the advantages of each algorithm. The method is validated on widely used benchmark sequences, namely Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms on the accuracy of the computed protein sequence energy value, which proves it an effective way to predict the structure of proteins.
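A generic sketch of the PSO component with the standard velocity update is given below. This is illustrative only; the PGATS variant described above adds a disturbance factor, GA, and tabu elements not shown here, and the sphere objective is a stand-in assumption.

```python
import random

random.seed(4)

def sphere(xs):
    # Stand-in objective with global minimum 0 at the origin.
    return sum(x * x for x in xs)

def pso(dim=5, n_particles=20, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]              # personal bests
    pbest_f = [sphere(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]  # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sphere(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(pos[i]), f
    return gbest_f

best_f = pso()
```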
Transport coefficients for dense hard-disk systems.
García-Rojo, Ramón; Luding, Stefan; Brey, J Javier
2006-12-01
A study of the transport coefficients of a system of elastic hard disks based on the use of Helfand-Einstein expressions is reported. The self-diffusion, the viscosity, and the heat conductivity are examined with averaging techniques especially appropriate for event-driven molecular dynamics algorithms with periodic boundary conditions. The density and size dependence of the results are analyzed, and comparison with the predictions from Enskog's theory is carried out. In particular, the behavior of the transport coefficients in the vicinity of the fluid-solid transition is investigated and a striking power law divergence of the viscosity with density is obtained in this region, while all other examined transport coefficients show a drop in that density range in relation to the Enskog's prediction. Finally, the deviations are related to shear band instabilities and the concept of dilatancy.
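The Helfand-Einstein route to transport coefficients relates a mean-squared quantity to time. As a greatly simplified illustration (a 2D lattice random walk, not the hard-disk event-driven molecular dynamics of the paper), self-diffusion can be estimated from the Einstein relation <r^2> = 2 d D t:

```python
import random

random.seed(2)

def msd_diffusion(n_walkers=2000, n_steps=500, step=1.0):
    # 2D lattice random walk; Einstein relation <r^2> = 2 * d * D * t,
    # so D = <r^2> / (4 t) in d = 2 dimensions (lattice units).
    disp = []
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            dx, dy = random.choice([(step, 0), (-step, 0), (0, step), (0, -step)])
            x += dx
            y += dy
        disp.append(x * x + y * y)
    msd = sum(disp) / n_walkers          # averaged mean-squared displacement
    return msd / (4.0 * n_steps)

D = msd_diffusion()
```

For this walk the exact value is D = 0.25 in lattice units; the estimate fluctuates around it with the averaging error shrinking as the number of walkers grows, which is the same statistical issue the paper's averaging techniques address for event-driven algorithms.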
Origin of the computational hardness for learning with binary synapses
NASA Astrophysics Data System (ADS)
Huang, Haiping; Kabashima, Yoshiyuki
2014-11-01
Through supervised learning in a binary perceptron one is able to classify an extensive number of random patterns by a proper assignment of binary synaptic weights. However, to find such assignments in practice is quite a nontrivial task. The relation between the weight space structure and the algorithmic hardness has not yet been fully understood. To this end, we analytically derive the Franz-Parisi potential for the binary perceptron problem by starting from an equilibrium solution of weights and exploring the weight space structure around it. Our result reveals the geometrical organization of the weight space; the weight space is composed of isolated solutions, rather than clusters of exponentially many close-by solutions. The pointlike clusters far apart from each other in the weight space explain the previously observed glassy behavior of stochastic local search heuristics.
A disturbance based control/structure design algorithm
NASA Technical Reports Server (NTRS)
Mclaren, Mark D.; Slater, Gary L.
1989-01-01
Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1992-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
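The descent-to-minimum property of the Kreisselmeier-Steinhauser (KS) function can be illustrated with a toy sketch. The construction below is an assumption for illustration (not necessarily the papers' exact formulation): the KS envelope is taken over each function and its negative, giving a smooth surrogate of max_i |f_i(x)| whose minimizer sits at a simultaneous root.

```python
import math

def ks(values, rho=50.0):
    # Kreisselmeier-Steinhauser envelope: a smooth, conservative maximum.
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def ks_root_objective(fs, x, rho=50.0):
    # Envelope over each f_i and -f_i: a smooth surrogate of max_i |f_i(x)|,
    # which descends to a near-zero minimum only at a simultaneous root.
    vals = []
    for f in fs:
        v = f(x)
        vals.extend([v, -v])
    return ks(vals, rho)

# Two hypothetical functions with the simultaneous root x = 1.
fs = [lambda x: x ** 2 - 1, lambda x: x - 1]

# Coarse grid search for the minimizer of the KS surrogate.
grid = [i / 1000.0 for i in range(-3000, 3001)]
x_star = min(grid, key=lambda x: ks_root_objective(fs, x))
```

Note that x = -1 is a root of the first function only, so the surrogate stays large there; in an optimization code the grid search would be replaced by the host nonlinear programming method, as the abstract suggests.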
Roundoff error effects on spatial lattice algorithm
NASA Technical Reports Server (NTRS)
An, S. H.; Yao, K.
1986-01-01
The floating-point roundoff error effect under finite word length limitations is analyzed for the time updates of reflection coefficients in the spatial lattice algorithm. It is shown that recursive computation is superior to direct computation under finite word length limitations. Moreover, the forgetting factor, which is conventionally used to smooth the time variations of the inputs, is also a crucial parameter in the consideration of the system stability and adaptability under finite word length constraints.
Damped Arrow-Hurwicz algorithm for sphere packing
NASA Astrophysics Data System (ADS)
Degond, Pierre; Ferreira, Marina A.; Motsch, Sebastien
2017-03-01
We consider algorithms that, from an arbitrarily sampling of N spheres (possibly overlapping), find a close packed configuration without overlapping. These problems can be formulated as minimization problems with non-convex constraints. For such packing problems, we observe that the classical iterative Arrow-Hurwicz algorithm does not converge. We derive a novel algorithm from a multi-step variant of the Arrow-Hurwicz scheme with damping. We compare this algorithm with classical algorithms belonging to the class of linearly constrained Lagrangian methods and show that it performs better. We provide an analysis of the convergence of these algorithms in the simple case of two spheres in one spatial dimension. Finally, we investigate the behaviour of our algorithm when the number of spheres is large in two and three spatial dimensions.
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
The project assignment problem is a popular practical problem. The challenge of solving it rises with the complexity of preferences, the existence of real-world constraints, and problem size. This study focuses on solving a chambering student-case assignment problem, classified as a project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because a good solution can be returned in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. Law graduates must complete chambering before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm for further improvement of solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem with metaheuristic techniques.
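A minimal sketch of the pipeline just described, under assumed data (a hypothetical random completion-time matrix standing in for the student-case data): a minimum-cost greedy construction followed by simulated annealing with swap moves and Metropolis acceptance.

```python
import math
import random

random.seed(3)

# Hypothetical data: times[s][c] = completion time of student s on case c.
times = [[random.randint(1, 20) for _ in range(8)] for _ in range(8)]

def cost(assign):
    # Total completion time of an assignment (one distinct case per student).
    return sum(times[s][c] for s, c in enumerate(assign))

def greedy_initial():
    # Minimum-cost greedy construction: each student takes the cheapest free case.
    free = set(range(8))
    assign = []
    for s in range(8):
        c = min(free, key=lambda c: times[s][c])
        assign.append(c)
        free.remove(c)
    return assign

def anneal(assign, t0=20.0, cooling=0.995, n_iters=4000):
    cur = cost(assign)
    best, best_cost = list(assign), cur
    t = t0
    for _ in range(n_iters):
        i, j = random.sample(range(len(assign)), 2)
        assign[i], assign[j] = assign[j], assign[i]   # swap move
        new = cost(assign)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new                                  # Metropolis acceptance
            if cur < best_cost:
                best, best_cost = list(assign), cur
        else:
            assign[i], assign[j] = assign[j], assign[i]  # undo rejected swap
        t *= cooling                                   # geometric cooling
    return best, best_cost

init = greedy_initial()
best, best_cost = anneal(list(init))
```

Real preferences and constraints would enter through the cost function; the swap neighborhood keeps every candidate a valid one-to-one assignment.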
An Automated Cloud-edge Detection Algorithm Using Cloud Physics and Radar Data
NASA Technical Reports Server (NTRS)
Ward, Jennifer G.; Merceret, Francis J.; Grainger, Cedric A.
2003-01-01
An automated cloud edge detection algorithm was developed and extensively tested. The algorithm uses in-situ cloud physics data measured by a research aircraft coupled with ground-based weather radar measurements to determine whether the aircraft is in or out of cloud. Cloud edges are determined when the in/out state changes, subject to a hysteresis constraint. The hysteresis constraint prevents isolated transient cloud puffs or data dropouts from being identified as cloud boundaries. The algorithm was verified by detailed manual examination of the data set in comparison to the results from application of the automated algorithm.
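The hysteresis constraint can be sketched as follows. This is an illustrative reconstruction, not the actual algorithm code: it assumes the in/out state flips only after a minimum run of consecutive opposite-state samples, so one-sample puffs and dropouts never register as edges.

```python
def detect_edges(in_cloud_flags, min_run=3):
    # State switches only after `min_run` consecutive samples of the opposite
    # state, so isolated transient puffs or data dropouts are not reported
    # as cloud boundaries. Returns the index at which each accepted run began.
    state = False
    run = 0
    edges = []
    for i, flag in enumerate(in_cloud_flags):
        if flag != state:
            run += 1
            if run >= min_run:
                state = flag
                edges.append(i - min_run + 1)   # edge at the start of the run
                run = 0
        else:
            run = 0
    return edges

# 1 = in cloud (e.g. cloud-physics measurement above threshold). The 1-sample
# puff at index 3 and the 1-sample dropout at index 10 are both ignored.
flags = [0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0]
```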
An algorithm for converting a virtual-bond chain into a complete polypeptide backbone chain
NASA Technical Reports Server (NTRS)
Luo, N.; Shibata, M.; Rein, R.
1991-01-01
A systematic analysis is presented of the algorithm for converting a virtual-bond chain, defined by the coordinates of the alpha-carbons of a given protein, into a complete polypeptide backbone. An alternative algorithm, based upon the same set of geometric parameters used in the Purisima-Scheraga algorithm but with a different "linkage map" of the algorithmic procedures, is proposed. The global virtual-bond chain geometric constraints are more easily separable from the local peptide geometric and energetic constraints derived from, for example, the Ramachandran criterion, within the framework of this approach.
Optical mechanical analogy and nonlinear nonholonomic constraints.
Bloch, Anthony M; Rojo, Alberto G
2016-02-01
In this paper we establish a connection between particle trajectories subject to a nonholonomic constraint and light ray trajectories in a variable index of refraction. In particular, we extend the analysis of systems with linear nonholonomic constraints to the dynamics of particles in a potential subject to nonlinear velocity constraints. We contrast the long time behavior of particles subject to a constant kinetic energy constraint (a thermostat) to particles with the constraint of parallel velocities. We show that, while in the former case the velocities of each particle equalize in the limit, in the latter case all the kinetic energies of each particle remain the same.
Resolving manipulator redundancy under inequality constraints
Cheng, F.T.; Chen, T.H.; Sun, Y.Y. . Dept. of Electrical Engineering)
1994-02-01
Due to hardware limitations, physical constraints such as joint rate bounds, joint angle limits, and joint torque constraints always exist. In this paper, these constraints are incorporated into the general formulation of the redundant inverse kinematic problem. To take these physical constraints into account, the computationally efficient Compact Quadratic Programming (QP) method is formulated to resolve the constrained kinematic redundancy problem. In addition, the Compact-Inverse QP method is formulated to remedy the inescapable singularity problem under inequality constraints. Two examples are given to demonstrate the generality and superiority of these two methods: eliminating the drift phenomenon caused by self motion and remedying the saturation-type nonlinearity problem.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence is proved, and the results of sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
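As a hedged illustration of the branch-and-bound pattern (on a toy one-dimensional multiplicative objective, not the paper's generalized formulation or its two-phase relaxation), products of the linear factors' range endpoints give a valid lower bound used for pruning:

```python
def f(x):
    # Toy multiplicative objective: product of two linear functions.
    # Global minimum on [-3, 3] is f(-0.5) = -2.25.
    return (x - 1) * (x + 2)

def lower_bound(a, b):
    # The factors x - 1 and x + 2 are linear, so their ranges on [a, b] are
    # [a-1, b-1] and [a+2, b+2]; the minimum over products of the range
    # endpoints is a valid lower bound on the product over the interval.
    c1 = (a - 1, b - 1)
    c2 = (a + 2, b + 2)
    return min(u * v for u in c1 for v in c2)

def branch_and_bound(a=-3.0, b=3.0, tol=1e-6):
    incumbent = min(f(a), f(b), f((a + b) / 2))   # upper bound from samples
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        if lower_bound(lo, hi) > incumbent - tol:
            continue                               # prune: box cannot improve
        mid = (lo + hi) / 2
        incumbent = min(incumbent, f(mid))         # tighten the upper bound
        stack.extend([(lo, mid), (mid, hi)])       # branch by bisection
    return incumbent

best = branch_and_bound()
```

As in the paper, the lower and upper bounds tighten together: boxes whose relaxation bound cannot beat the incumbent are discarded, and the rest are subdivided.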
A Nonmonotone Trust Region Method for Nonlinear Programming with Simple Bound Constraints
Chen, Z.-W. Han, J.-Y. Xu, D.-C.
2001-07-01
In this paper we propose a nonmonotone trust region algorithm for optimization with simple bound constraints. Under mild conditions, we prove the global convergence of the algorithm. For the monotone case it is also proved that the correct active set can be identified in a finite number of iterations if the strict complementarity slackness condition holds, and so the proposed algorithm reduces finally to an unconstrained minimization method in a finite number of iterations, allowing a fast asymptotic rate of convergence. Numerical experiments show that the method is efficient.
NASA Technical Reports Server (NTRS)
Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.
1993-01-01
Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang
2011-10-01
This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, backstepping based robust control is first developed for the full 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control remains implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Illustrative simulations of spacecraft formation flying are conducted to verify the effectiveness of the proposed approach in making the spacecraft track the desired attitude and position trajectories in a synchronized fashion, even in the presence of uncertainties, external disturbances, and the control saturation constraint.
Partial constraint satisfaction approaches for optimal operation of a hydropower system
NASA Astrophysics Data System (ADS)
Ferreira, Andre R.; Teegavarapu, Ramesh S. V.
2012-09-01
Optimal operation models for a hydropower system using partial constraint satisfaction (PCS) approaches are proposed and developed in this study. The models use mixed integer nonlinear programming (MINLP) formulations with binary variables. The models also integrate a turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream water quality impairment. New PCS-based models for hydropower optimization formulations are developed using binary and continuous evaluator functions to maximize the constraint satisfaction. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to solve the optimization formulations. Decision maker's preferences towards power production targets and water quality improvements are incorporated using partial satisfaction constraints to obtain compromise operating rules for a multi-objective reservoir operation problem dominated by conflicting goals of energy production, water quality and consumptive water uses.
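To make the partial-constraint-satisfaction idea concrete, here is a toy sketch (a hypothetical one-variable reservoir, not the paper's MINLP model): a continuous evaluator function maps a constraint's deviation from its target into a satisfaction score in [0, 1], and a minimal GA maximizes the weighted sum of satisfactions.

```python
import numpy as np

def evaluator(value, target, tol):
    """Continuous partial-satisfaction score in [0, 1]: 1 when the
    constraint target is met, decaying linearly to 0 at +/- tol."""
    return max(0.0, 1.0 - abs(value - target) / tol)

rng = np.random.default_rng(0)

# Hypothetical toy: choose release r in [0, 100] to trade off a power
# target (r near 70) against a water-quality target (r near 40).
def fitness(r):
    return 0.6 * evaluator(r, 70, 50) + 0.4 * evaluator(r, 40, 50)

# minimal steady-state GA: tournament selection, Gaussian mutation,
# replace the worst member with the mutated tournament winner
pop = rng.uniform(0, 100, 40)
for _ in range(100):
    i, j = rng.integers(0, len(pop), 2)
    parent = pop[i] if fitness(pop[i]) > fitness(pop[j]) else pop[j]
    child = np.clip(parent + rng.normal(0, 5), 0, 100)
    worst = min(range(len(pop)), key=lambda k: fitness(pop[k]))
    pop[worst] = child

best = max(pop, key=fitness)
```

The evaluator plays the role of the paper's continuous evaluator functions; the conflicting goals are traded off through the weights rather than enforced as hard constraints.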
"Short, Hard Gamma-Ray Bursts - Mystery Solved?????"
NASA Technical Reports Server (NTRS)
Parsons, A.
2006-01-01
After over a decade of speculation about the nature of short-duration, hard-spectrum gamma-ray bursts (GRBs), the recent detection of afterglow emission from a small number of short bursts has provided the first physical constraints on possible progenitor models. While the discovery of afterglow emission from long GRBs was a real breakthrough linking their origin to star-forming galaxies, and hence the death of massive stars, the progenitors, energetics, and environments of short gamma-ray burst events remain elusive despite a few recent localizations. Thus far, the nature of the host galaxies measured indicates that short GRBs arise from an old (> 1 Gyr) stellar population, strengthening earlier suggestions and providing support for coalescing compact-object binaries as the progenitors. On the other hand, some of the short-burst afterglow observations cannot be easily explained in the coalescence scenario. These observations raise the possibility that short GRBs may have different or multiple progenitor systems. The study of short-hard GRB afterglows has been made possible by the Swift Gamma-ray Burst Explorer, launched in November 2004. Swift is equipped with a coded-aperture gamma-ray telescope that can observe up to 2 steradians of the sky and can compute the position of a gamma-ray burst to within 2-3 arcmin in less than 10 seconds. The Swift spacecraft can slew to this burst position without human intervention, allowing its on-board X-ray and optical telescopes to study the afterglow within 2 minutes of the original GRB trigger. More Swift short-burst detections and afterglow measurements are needed before we can declare that the mystery of short gamma-ray bursts is solved.
Deducing Electron Properties from Hard X-Ray Observations
NASA Technical Reports Server (NTRS)
Kontar, E. P.; Brown, J. C.; Emslie, A. G.; Hajdas, W.; Holman, G. D.; Hurford, G. J.; Kasparova, J.; Mallik, P. C. V.; Massone, A. M.; McConnell, M. L.; Piana, M.; Prato, M.; Schmahl, E. J.; Suarez-Garcia, E.
2011-01-01
X-radiation from energetic electrons is the prime diagnostic of flare-accelerated electrons. The observed X-ray flux (and polarization state) is fundamentally a convolution of the cross-section for the hard X-ray emission process(es) in question with the electron distribution function, which is in turn a function of energy, direction, spatial location and time. To address the problems of particle propagation and acceleration one needs to infer as much information as possible on this electron distribution function, through a deconvolution of this fundamental relationship. This review presents recent progress toward this goal using spectroscopic, imaging and polarization measurements, primarily from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). Previous conclusions regarding the energy, angular (pitch angle) and spatial distributions of energetic electrons in solar flares are critically reviewed. We discuss the role and the observational evidence of several radiation processes: free-free electron-ion, free-free electron-electron, free-bound electron-ion, photoelectric absorption and Compton backscatter (albedo), using both spectroscopic and imaging techniques. Data of this unprecedented quality allow, for the first time, inference of the angular distributions of the X-ray-emitting electrons and improved model-independent inference of electron energy spectra and emission measures of thermal plasma. Moreover, imaging spectroscopy has revealed hitherto unknown details of solar flare morphology and detailed spectroscopy of coronal, footpoint and extended sources in flaring regions. Attempts to measure hard X-ray polarization have not yet sufficed to constrain the degree of anisotropy of the electrons, but they point to the importance of obtaining good-quality polarization data in the future.
Trajectory constraints in qualitative simulation
Brajnik, G.; Clancy, D.J.
1996-12-31
We present a method for specifying temporal constraints on trajectories of dynamical systems and enforcing them during qualitative simulation. This capability can be used to focus a simulation, simulate non-autonomous and piecewise-continuous systems, reason about boundary condition problems and incorporate observations into the simulation. The method has been implemented in TeQSIM, a qualitative simulator that combines the expressive power of qualitative differential equations with temporal logic. It interleaves temporal logic model checking with the simulation to constrain and refine the resulting predicted behaviors and to inject discontinuous changes into the simulation.
Constraints on the timeon model
NASA Astrophysics Data System (ADS)
Araki, Takeshi; Geng, C. Q.
2009-04-01
The timeon model recently proposed by Friedberg and Lee has a potential problem with flavor changing neutral currents (FCNCs) if the mass of the timeon is small. To avoid this problem, we introduce a small dimensionless parameter to suppress the FCNCs. Even in this case, we find that the timeon mass must be larger than 151 GeV to satisfy all the constraints from processes involving FCNCs in the quark sector. We also extend the timeon model to the lepton sector and examine the leptonic processes.
Closure constraints for hyperbolic tetrahedra
NASA Astrophysics Data System (ADS)
Charles, Christoph; Livine, Etera R.
2015-07-01
We investigate the generalization of loop gravity's twisted geometries to a q-deformed gauge group. In the standard undeformed case, loop gravity is a formulation of general relativity as a diffeomorphism-invariant SU(2) gauge theory. Its classical states are graphs provided with algebraic data. In particular, closure constraints at every node of the graph ensure their interpretation as twisted geometries. Dual to each node, one has a polyhedron embedded in flat space ℝ³. One then glues them, allowing for both curvature and torsion. It was recently conjectured that q-deforming the gauge group SU(2) would allow us to account for a non-vanishing cosmological constant Λ.
Persistence-length renormalization of polymers in a crowded environment of hard disks.
Schöbl, S; Sturm, S; Janke, W; Kroy, K
2014-12-05
The most conspicuous property of a semiflexible polymer is its persistence length, defined as the decay length of tangent correlations along its contour. Using an efficient stochastic growth algorithm to sample polymers embedded in a quenched hard-disk fluid, we find apparent wormlike chain statistics with a renormalized persistence length. We identify a universal form of the disorder renormalization that suggests itself as a quantitative measure of molecular crowding.
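The definition in the first sentence can be illustrated directly: sample a 2D wormlike chain (a simple stand-in for the paper's stochastic growth algorithm, with arbitrary parameters) and recover the persistence length from the exponential decay of tangent correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
lp_true, ds, n = 20.0, 1.0, 5000   # persistence length, segment length, segments

# 2D wormlike chain: the tangent angle performs a Gaussian random walk with
# per-step variance ds/lp; in 2D, <t(0)·t(s)> = exp(-s/(2*lp))
dtheta = rng.normal(0.0, np.sqrt(ds / lp_true), n)
theta = np.cumsum(dtheta)

# tangent correlation <cos(theta(s+k*ds) - theta(s))> versus separation k*ds
ks = np.arange(1, 30)
corr = np.array([np.mean(np.cos(theta[k:] - theta[:-k])) for k in ks])

# fit: log corr = -s/(2*lp)  =>  persistence length from the slope
slope = np.polyfit(ks * ds, np.log(corr), 1)[0]
lp_est = -1.0 / (2.0 * slope)
```

In the paper's setting, the same fit applied to chains grown inside the quenched hard-disk fluid yields the renormalized (apparent) persistence length.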
Algorithm Optimally Allocates Actuation of a Spacecraft
NASA Technical Reports Server (NTRS)
Motaghedi, Shi
2007-01-01
A report presents an algorithm that solves the following problem: Allocate the force and/or torque to be exerted by each thruster and reaction-wheel assembly on a spacecraft for best performance, defined as minimizing the error between (1) the total force and torque commanded by the spacecraft control system and (2) the total of forces and torques actually exerted by all the thrusters and reaction wheels. The algorithm incorporates the matrix-vector relationship between (1) the total applied force and torque and (2) the individual actuator force and torque values. It takes account of such constraints as lower and upper limits on the force or torque that can be applied by a given actuator. The algorithm divides the aforementioned problem into two optimization problems that it solves sequentially. These problems are of a type, known in the art as semi-definite programming problems, that involve linear matrix inequalities. The algorithm incorporates, as sub-algorithms, prior algorithms that solve such optimization problems very efficiently. The algorithm affords the additional advantage that the solution requires the minimum rate of consumption of fuel for the given best performance.
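The report's two-stage semidefinite formulation is not reproduced here, but the core allocation step — minimize the mismatch between commanded and achieved force/torque subject to actuator limits — can be sketched as a box-constrained least-squares problem solved by projected gradient descent (the 4-actuator layout and limits below are hypothetical):

```python
import numpy as np

# Hypothetical 4-actuator layout: column k maps actuator k's unit command
# to the net (Fx, Fy, torque) it produces on the vehicle.
B = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.5, -0.5, 0.5, -0.5]])
w_cmd = np.array([0.8, -0.3, 0.2])     # commanded net force/torque
lo, hi = 0.0, 1.0                      # per-actuator limits (no "pulling")

# projected gradient descent on ||B u - w_cmd||^2 with box constraints
u = np.zeros(4)
lr = 0.1
for _ in range(2000):
    grad = 2.0 * B.T @ (B @ u - w_cmd)
    u = np.clip(u - lr * grad, lo, hi)

error = np.linalg.norm(B @ u - w_cmd)  # residual between commanded and achieved
```

When the command is achievable within the limits, the residual goes to zero; otherwise the allocation returns the closest achievable wrench, which is exactly the error the report's formulation minimizes.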
The BQP-hardness of approximating the Jones polynomial
NASA Astrophysics Data System (ADS)
Aharonov, Dorit; Arad, Itai
2011-03-01
A celebrated result due to Freedman et al (2002 Commun. Math. Phys. 227 605-22) states that providing additive approximations of the Jones polynomial at the kth root of unity, for constant k=5 and k>=7, is BQP-hard. Together with the algorithmic results of Aharonov et al (2005) and Freedman et al (2002 Commun. Math. Phys. 227 587-603), this gives perhaps the most natural BQP-complete problem known today and motivates further study of the topic. In this paper, we focus on the universality proof; we extend the result of Freedman et al (2002) to values of k that grow polynomially with the number of strands and crossings in the link, thus extending the BQP-hardness of Jones polynomial approximations to all values to which the AJL algorithm applies (Aharonov et al 2005), proving that for all those values the problems are BQP-complete. As a side benefit, we derive a fairly elementary proof of the Freedman et al density result, without referring to advanced results from Lie algebra representation theory, making this important result accessible to a wider audience in the computer science research community. We make use of two general lemmas we prove, the bridge lemma and the decoupling lemma, which provide tools for establishing the density of subgroups in SU(n). These tools seem to be of independent interest in more general contexts of proving quantum universality. Our result also implies a completely classical statement, that multiplicative approximations of the Jones polynomial, at exactly the same values, are #P-hard, via a recent result due to Kuperberg (2009 arXiv:0908.0512). Since the first publication of these results in their preliminary form (Aharonov and Arad 2006 arXiv:quant-ph/0605181), the methods we present here have been used in several other contexts (Aharonov and Arad 2007 arXiv:quant-ph/0702008; Peter and Stephen 2008 Quantum Inf. Comput. 8 681). The present paper is an improved and extended version of the results presented by Aharonov and Arad
Optimal Sampling-Based Motion Planning under Differential Constraints: the Driftless Case
Schmerling, Edward; Janson, Lucas; Pavone, Marco
2015-01-01
Motion planning under differential constraints is a classic problem in robotics. To date, the state of the art is represented by sampling-based techniques, with the Rapidly-exploring Random Tree algorithm as a leading example. Yet, the problem is still open in many aspects, including guarantees on the quality of the obtained solution. In this paper we provide a thorough theoretical framework to assess optimality guarantees of sampling-based algorithms for planning under differential constraints. We exploit this framework to design and analyze two novel sampling-based algorithms that are guaranteed to converge, as the number of samples increases, to an optimal solution (namely, the Differential Probabilistic RoadMap algorithm and the Differential Fast Marching Tree algorithm). Our focus is on driftless control-affine dynamical models, which accurately model a large class of robotic systems. In this paper we use the notion of convergence in probability (as opposed to convergence almost surely): the extra mathematical flexibility of this approach yields convergence rate bounds — a first in the field of optimal sampling-based motion planning under differential constraints. Numerical experiments corroborating our theoretical results are presented and discussed. PMID:26618041
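For intuition, a driftless single integrator (a far simpler system than the control-affine models analyzed in the paper) admits the following minimal RRT-style sketch of sampling-based planning in an obstacle-free unit square; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
start, goal = np.array([0.1, 0.1]), np.array([0.9, 0.9])
step, goal_tol = 0.05, 0.05

nodes = [start]
parent = {0: None}
for _ in range(2000):
    # goal-biased sampling: 10% of the time steer toward the goal
    target = goal if rng.random() < 0.1 else rng.random(2)
    i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - target))
    direction = target - nodes[i]
    new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-12)
    nodes.append(new)
    parent[len(nodes) - 1] = i
    if np.linalg.norm(new - goal) < goal_tol:
        break

# recover the path by walking parent pointers back to the root
path, k = [], len(nodes) - 1
while k is not None:
    path.append(nodes[k])
    k = parent[k]
path = path[::-1]
```

The algorithms in the paper replace the straight-line steering step with solutions of a local two-point boundary value problem respecting the differential constraints, and add rewiring to obtain asymptotic optimality.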
Solano-Altamirano, J M; Goldman, Saul
2015-12-01
We determined the total system elastic Helmholtz free energy, under the constraints of constant temperature and volume, for systems comprised of one or more perfectly bonded hard spherical inclusions (i.e. "hard spheres") embedded in a finite spherical elastic solid. Dirichlet boundary conditions were applied both at the surface(s) of the hard spheres, and at the outer surface of the elastic solid. The boundary conditions at the surface of the spheres were used to describe the rigid displacements of the spheres, relative to their initial location(s) in the unstressed initial state. These displacements, together with the initial positions, provided the final shape of the strained elastic solid. The boundary conditions at the outer surface of the elastic medium were used to ensure constancy of the system volume. We determined the strain and stress tensors numerically, using a method that combines the Neuber-Papkovich spherical harmonic decomposition, the Schwartz alternating method, and least squares for determining the spherical harmonic expansion coefficients. The total system elastic Helmholtz free energy was determined by numerically integrating the elastic Helmholtz free energy density over the volume of the elastic solid, either by a quadrature, or a Monte Carlo method, or both. Depending on the initial position of the hard sphere(s) (or equivalently, the shape of the un-deformed stress-free elastic solid), and the displacements, either stationary or non-stationary Helmholtz free energy minima were found. The non-stationary minima, which involved the hard spheres nearly in contact with one another, corresponded to lower Helmholtz free energies than did the stationary minima, for which the hard spheres were further away from one another.
Hard error generation by neutron irradiation
Browning, J.S.; Gover, J.E.; Wrobel, T.F.; Hass, K.J.; Nasby, R.D.; Simpson, R.L.; Posey, L.D.; Boos, R.E.; Block, R.C.
1987-01-01
We have observed that neutron-induced fission of uranium contaminants present in alumina ceramic package lids results in the release of fission fragments that can cause hard errors in metal-nitride-oxide-semiconductor nonvolatile RAMs (MNOS NVRAMs). Hard error generation requires the simultaneous presence of (1) a fission fragment with a linear energy transfer (LET) greater than 20 MeV/mg/cm² moving at an angle of 30° or less from the electric field in the high-field gate region of the memory transistor and (2) a WRITE or ERASE voltage on the oxide-nitride transistor gate. In reactor experiments, we observe these hard errors when a ceramic lid is used on both MNOS NVRAMs and polysilicon-nitride-oxide-semiconductor (SNOS) capacitors, but hard errors are not observed when a gold-plated Kovar lid is used on the package containing these die. We have mapped the tracks of the fission fragments released from the ceramic lids with a mica track detector and used a Monte Carlo model of fission fragment transport through the ceramic lid to measure the concentration of uranium present in the lids. Our concentration measurements are in excellent agreement with others' measurements of uranium concentration in ceramic lids. Our Monte Carlo analyses also agree closely with our measurements of hard error probability in MNOS NVRAMs. 15 refs., 13 figs., 8 tabs.
Haptic search for hard and soft spheres.
van Polanen, Vonne; Bergmann Tiest, Wouter M; Kappers, Astrid M L
2012-01-01
In this study the saliency of hardness and softness were investigated in an active haptic search task. Two experiments were performed to explore these properties in different contexts. In Experiment 1, blindfolded participants had to grasp a bundle of spheres and determine the presence of a hard target among soft distractors or vice versa. If the difference in compliance between target and distractors was small, reaction times increased with the number of items for both features; a serial strategy was found to be used. When the difference in compliance was large, the reaction times were independent of the number of items, indicating a parallel strategy. In Experiment 2, blindfolded participants pressed their hand on a display filled with hard and soft items. In the search for a soft target, increasing reaction times with the number of items were found, but the location of target and distractors appeared to have a large influence on the search difficulty. In the search for a hard target, reaction times did not depend on the number of items. In sum, this showed that both hardness and softness are salient features.
Novel hard compositions and methods of preparation
Sheinberg, Haskell
1983-08-23
Novel very hard compositions of matter are prepared by using in all embodiments only a minor amount of a particular carbide (or materials which can form the carbide in situ when subjected to heat and pressure); and no strategic cobalt is needed. Under a particular range of conditions, densified compositions of matter of the invention are prepared having hardnesses on the Rockwell A test substantially equal to the hardness of pure tungsten carbide and to two of the hardest commercial cobalt-bonded tungsten carbides. Alternately, other compositions of the invention which have slightly lower hardnesses than those described above in one embodiment also possess the advantage of requiring no tungsten and in another embodiment possess the advantage of having a good fracture toughness value. Photomicrographs show that the shapes of the grains of the alloy mixture with which the minor amount of carbide (or carbide-formers) is mixed are radically altered from large, rounded to small, very angular by the addition of the carbide. Superiority of one of these hard compositions of matter over cobalt-bonded tungsten carbide for ultra-high pressure anvil applications was demonstrated.
Hard QCD rescattering in few nucleon systems
NASA Astrophysics Data System (ADS)
Maheswari, Dhiraj; Sargsian, Misak
2017-01-01
The theoretical framework of the hard QCD rescattering mechanism (HRM) is extended to calculate the high-energy γ³He → pd reaction at 90° center-of-mass angle. In the HRM model, the incoming high-energy photon strikes a quark from one of the nucleons in the target, which subsequently undergoes hard rescattering with the quarks from the other nucleons, generating a hard two-body baryonic system in the final state of the reaction. Based on the HRM, a parameter-free expression for the differential cross section of the reaction is derived, expressed through the ³He → pd transition spectral function, the hard pd → pd elastic scattering cross section, and the effective charge of the quarks being interchanged in the hard rescattering process. The numerical estimates obtained from this expression for the differential cross section are in good agreement with the data recently obtained in the Jefferson Lab experiment, showing the energy scaling of the cross section with an exponent of s⁻¹⁷, consistent with the quark counting rule. The angular and energy dependences of the cross section are also predicted within the HRM and are in good agreement with the preliminary data for these distributions. Research is supported by the US Department of Energy.
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
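The Physarum-based pheromone-matrix strategy itself is not reproduced here; the sketch below shows only the classical ant colony system baseline that the paper modifies, on a tiny random TSP with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((8, 2))                       # 8 random cities in the unit square
d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
np.fill_diagonal(d, np.inf)                    # forbid self-loops
n = len(pts)

tau = np.full((n, n), 1.0)                     # pheromone matrix
eta = 1.0 / d                                  # heuristic visibility
alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 10

def tour_len(t):
    return sum(d[t[i], t[(i + 1) % n]] for i in range(n))

best, best_len = None, np.inf
for _ in range(50):
    for _ant in range(n_ants):
        tour, todo = [0], set(range(1, n))
        while todo:                            # probabilistic construction step
            i, cand = tour[-1], list(todo)
            w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
            j = cand[rng.choice(len(cand), p=w / w.sum())]
            tour.append(j)
            todo.remove(j)
        L = tour_len(tour)
        if L < best_len:
            best, best_len = tour, L
    # global pheromone update: evaporate, then reinforce the best tour
    tau *= (1 - rho)
    for i in range(n):
        a, b = best[i], best[(i + 1) % n]
        tau[a, b] += rho / best_len
        tau[b, a] += rho / best_len
```

The paper's contribution replaces the plain update above with a Physarum-inspired rule that preserves "critical tubes" (high-flow edges) during evolution, accelerating the positive feedback on good edges.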
Statistical Inference in Hidden Markov Models Using k-Segment Constraints.
Titsias, Michalis K; Holmes, Christopher C; Yau, Christopher
2016-01-02
Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward-backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online.
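A minimal dynamic-programming sketch of the k-segment constraint (an illustrative reconstruction, not the authors' code): the Viterbi recursion is extended with a segment-count index, so the maximization runs only over state paths with exactly k segments.

```python
import numpy as np

def viterbi_k_segments(log_pi, log_A, log_B, k):
    """MAP state path of an HMM constrained to exactly k segments,
    where a segment is a maximal run of one repeated state.
    log_B[t, s] is the log-likelihood of observation t under state s."""
    T, S = log_B.shape
    V = np.full((T, S, k), -np.inf)       # V[t, s, j]: best log-prob with j+1 segments
    back = np.zeros((T, S, k, 2), dtype=int)
    V[0, :, 0] = log_pi + log_B[0]
    for t in range(1, T):
        for s in range(S):
            for j in range(k):
                stay = V[t - 1, s, j] + log_A[s, s]        # extend current segment
                sw_val, sw_state = -np.inf, 0
                if j > 0:                                   # open a new segment
                    for r in range(S):
                        if r != s:
                            v = V[t - 1, r, j - 1] + log_A[r, s]
                            if v > sw_val:
                                sw_val, sw_state = v, r
                if stay >= sw_val:
                    V[t, s, j] = stay + log_B[t, s]
                    back[t, s, j] = (s, j)
                else:
                    V[t, s, j] = sw_val + log_B[t, s]
                    back[t, s, j] = (sw_state, j - 1)
    s, j = int(np.argmax(V[T - 1, :, k - 1])), k - 1
    path = [s]
    for t in range(T - 1, 0, -1):
        s, j = back[t, s, j]
        path.append(int(s))
    return path[::-1]

# toy 2-state HMM whose observations favor state 0 early and state 1 late
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.8, 0.2], [0.2, 0.8]])
log_B = np.log(np.array([[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 3))
path = viterbi_k_segments(log_pi, log_A, log_B, 2)
```

The same bookkeeping over the segment count underlies the paper's linear-time recursions for constrained posterior probabilities and sampling.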
NASA Technical Reports Server (NTRS)
Sarkar, Nilanjan; Yun, Xiaoping; Kumar, Vijay
1994-01-01
There are many examples of mechanical systems that require rolling contacts between two or more rigid bodies. Rolling contacts engender nonholonomic constraints in an otherwise holonomic system. In this article, we develop a unified approach to the control of mechanical systems subject to both holonomic and nonholonomic constraints. We first present a state space realization of a constrained system. We then discuss the input-output linearization and zero dynamics of the system. This approach is applied to the dynamic control of mobile robots. Two types of control algorithms for mobile robots are investigated: trajectory tracking and path following. In each case, a smooth nonlinear feedback is obtained to achieve asymptotic input-output stability and Lagrange stability of the overall system. Simulation results are presented to demonstrate the effectiveness of the control algorithms and to compare the performance of trajectory-tracking and path-following algorithms.
Hosseini-Asl, Ehsan; Zurada, Jacek M; Nasraoui, Olfa
2016-12-01
We demonstrate a new deep learning autoencoder network, trained by a nonnegativity constraint algorithm (nonnegativity-constrained autoencoder), that learns features that show part-based representation of data. The learning algorithm is based on constraining negative weights. The performance of the algorithm is assessed based on decomposing data into parts and its prediction performance is tested on three standard image data sets and one text data set. The results indicate that the nonnegativity constraint forces the autoencoder to learn features that amount to a part-based representation of data, while improving sparsity and reconstruction quality in comparison with the traditional sparse autoencoder and nonnegative matrix factorization. It is also shown that this newly acquired representation improves the prediction performance of a deep neural network.
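A numpy sketch of the nonnegativity idea — here a quadratic penalty on negative weights inside plain gradient descent, an illustrative stand-in for the paper's exact training algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 16))              # toy nonnegative data

n_hid, lr, lam = 8, 0.5, 0.1
W1 = rng.random((16, n_hid)) * 0.1     # start from nonnegative weights
W2 = rng.random((n_hid, 16)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(300):
    H = sigmoid(X @ W1)                # encoder
    Y = H @ W2                         # linear decoder
    E = Y - X
    losses.append(np.mean(E ** 2))     # mean-squared reconstruction loss
    gW2 = 2 * H.T @ E / X.size
    gH = 2 * E @ W2.T / X.size * H * (1 - H)
    gW1 = X.T @ gH
    # nonnegativity: gradient of the penalty (lam/2) * sum(min(W, 0)^2),
    # which pushes only the negative weights back toward zero
    W1 -= lr * (gW1 + lam * np.minimum(W1, 0.0))
    W2 -= lr * (gW2 + lam * np.minimum(W2, 0.0))

neg_frac = (np.sum(W1 < 0) + np.sum(W2 < 0)) / (W1.size + W2.size)
```

Here `neg_frac` measures how many weights remain negative under the penalty; in the paper, discouraging negative weights is what yields the part-based representation.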
Smedskjaer, Morten M.; Bauchy, Mathieu; Mauro, John C.; Rzoska, Sylwester J.; Bockowski, Michal
2015-10-28
The properties of glass are determined not only by temperature, pressure, and composition, but also by their complete thermal and pressure histories. Here, we show that glasses of identical composition produced through thermal annealing and through quenching from elevated pressure can result in samples with identical density and mean interatomic distances, yet different bond angle distributions, medium-range structures, and, thus, macroscopic properties. We demonstrate that hardness is higher when the density increase is obtained through thermal annealing rather than through pressure-quenching. Molecular dynamics simulations reveal that this arises because pressure-quenching has a larger effect on medium-range order, while annealing has a larger effect on short-range structures (sharper bond angle distribution), which ultimately determine hardness according to bond constraint theory. Our work could open a new avenue towards industrially useful glasses that are identical in terms of composition and density, but with differences in thermodynamic, mechanical, and rheological properties due to unique structural characteristics.
Photon-splitting limits to the hardness of emission in strongly magnetized soft gamma repeaters
NASA Technical Reports Server (NTRS)
Baring, Matthew G.
1995-01-01
Soft gamma repeaters are characterized by recurrent activity consisting of short-duration outbursts of high-energy emission with temperatures typically less than 40 keV. One recent model holds that repeaters originate in the environs of neutron stars with superstrong magnetic fields, perhaps greater than 10¹⁴ G. In such fields, the exotic process of magnetic photon splitting (γ → γγ) acts very effectively to reprocess gamma-ray radiation down to hard X-ray energies. In this Letter, the action of photon splitting is considered in some detail, via the solution of photon kinetic equations, determining how it limits the hardness of emission in strongly magnetized repeaters and thereby obtaining observational constraints on the field in SGR 1806-20.
Imaging the sun in hard X-rays - Spatial and rotating modulation collimators
NASA Technical Reports Server (NTRS)
Campbell, Jonathan W.; Davis, John M.; Emslie, A. G.
1991-01-01
Several approaches to imaging hard X-rays emitted from solar flares have been proposed or are planned for the nineties, including the spatial modulation collimator (SMC) and the rotating modulation collimator (RMC). A survey of the current solar flare theoretical literature indicates the desirability of spatial resolutions down to 1 arcsecond, fields of view larger than the full solar disk (i.e., 32 arcminutes), and temporal resolutions down to 1 second. Although the Sun typically provides relatively high flux levels, the requirement for 1 second temporal resolution raises the question of the viability of Fourier telescopes subject to the aforementioned constraints. A basic photon-counting, Monte Carlo 'end-to-end' model telescope was employed using the Astronomical Image Processing System (AIPS) for image reconstruction. The resulting solar flare hard X-ray images, compared against typical observations, indicated that both telescopes show promise for the future.
Correlative analysis of hard and soft x ray observations of solar flares
NASA Technical Reports Server (NTRS)
Zarro, Dominic M.
1994-01-01
We have developed a promising new technique for jointly analyzing BATSE hard X-ray observations of solar flares with simultaneous soft X-ray observations. The technique is based upon a model in which electric currents and associated electric fields are responsible for the respective heating and particle acceleration that occur in solar flares. A useful by-product of this technique is the strength and evolution of the coronal electric field. The latter permits one to derive important flare parameters such as the current density, the number of current filaments composing the loop, and ultimately the hard X-ray spectrum produced by the runaway electrons. We are continuing to explore the technique by applying it to additional flares for which we have joint BATSE/Yohkoh observations. A central assumption of our analysis is the constant of proportionality α relating the hard X-ray flux above 50 keV and the rate of electron acceleration. For a thick-target model of hard X-ray production, it can be shown that α is in fact related to the spectral index and low-energy cutoff of the precipitating electrons. The next step in our analysis is to place observational constraints on the latter parameters using the joint BATSE/Yohkoh data.
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Research in the Hard Sciences, and in Very Hard "Softer" Domains
ERIC Educational Resources Information Center
Phillips, D. C.
2014-01-01
The author of this commentary argues that physical scientists are attempting to advance knowledge in the so-called hard sciences, whereas education researchers are laboring to increase knowledge and understanding in an "extremely hard" but softer domain. Drawing on the work of Popper and Dewey, this commentary highlights the relative…
Hard Water and Soft Soap: Dependence of Soap Performance on Water Hardness
ERIC Educational Resources Information Center
Osorio, Viktoria K. L.; de Oliveira, Wanda; El Seoud, Omar A.; Cotton, Wyatt; Easdon, Jerry
2005-01-01
The demonstration of the performance of soap in different aqueous solutions, which is due to water hardness and soap formulation, is described. The demonstrations use safe, inexpensive reagents and simple glassware and equipment, introduce important everyday topics, stimulate the students to consider the wider consequences of water hardness and…
"We Can Get Everything We Want if We Try Hard": Young People, Celebrity, Hard Work
ERIC Educational Resources Information Center
Mendick, Heather; Allen, Kim; Harvey, Laura
2015-01-01
Drawing on 24 group interviews on celebrity with 148 students aged 14-17 across six schools, we show that "hard work" is valued by young people in England. We argue that we should not simply celebrate this investment in hard work. While it opens up successful subjectivities to previously excluded groups, it reproduces neoliberal…
Hydroeconomic optimization of reservoir management under downstream water quality constraints
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Holm, Peter E.; Trapp, Stefan; Rosbjerg, Dan; Bauer-Gottwein, Peter
2015-10-01
A hydroeconomic optimization approach is used to guide water management in a Chinese river basin with the objectives of meeting water quantity and water quality constraints, in line with the China 2011 No. 1 Policy Document and 2015 Ten-point Water Plan. The proposed modeling framework couples water quantity and water quality management and minimizes the total costs over a planning period assuming stochastic future runoff. The outcome includes cost-optimal reservoir releases, groundwater pumping, water allocation, wastewater treatment and water curtailments. The optimization model uses a variant of stochastic dynamic programming known as the water value method. Nonlinearity arising from the water quality constraints is handled with an effective hybrid method combining genetic algorithms and linear programming. Untreated pollutant loads are represented by biochemical oxygen demand (BOD), and the resulting minimum dissolved oxygen (DO) concentration is computed with the Streeter-Phelps equation and constrained to match Chinese water quality targets. The baseline water scarcity and operational costs are estimated at 15.6 billion CNY/year. Compliance with water quality grade III causes a relatively modest increase to 16.4 billion CNY/year. Dilution plays an important role and increases the share of surface water allocations to users situated furthest downstream in the system. The modeling framework generates decision rules that result in the economically efficient strategy for complying with both water quantity and water quality constraints.
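As a rough illustration of the water quality constraint above, the dissolved-oxygen sag implied by the Streeter-Phelps equation can be checked against a grade III-style target (DO >= 5 mg/L). The loads, rate constants, and saturation value below are hypothetical placeholders, not values from the study:

```python
import math

def streeter_phelps_min_do(L0, D0, kd, ka, do_sat, t_max=30.0, dt=0.01):
    """Return the minimum dissolved-oxygen concentration along a river reach.

    L0: initial BOD (mg/L), D0: initial DO deficit (mg/L),
    kd: deoxygenation rate (1/day), ka: reaeration rate (1/day),
    do_sat: saturation DO (mg/L). Travel time is scanned over [0, t_max] days.
    Assumes the classical form of the equation with kd != ka.
    """
    min_do = do_sat - D0
    t = 0.0
    while t <= t_max:
        deficit = (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
                  + D0 * math.exp(-ka * t)
        min_do = min(min_do, do_sat - deficit)
        t += dt
    return min_do

# Hypothetical load and rates; a grade III-style target requires DO >= 5 mg/L.
do_min = streeter_phelps_min_do(L0=10.0, D0=1.0, kd=0.35, ka=0.7, do_sat=9.0)
print(do_min, do_min >= 5.0)
```

In an optimization model of the kind described, this minimum-DO value would enter as an inequality constraint on the allowable untreated BOD load.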
Yeguas, Enrique; Joan-Arinyo, Robert; Victoria Luz N, Mar A
2011-01-01
The availability of a model to measure the performance of evolutionary algorithms is very important, especially when these algorithms are applied to solve problems with high computational requirements. That model would compute an index of the quality of the solution reached by the algorithm as a function of run-time. Conversely, if we fix an index of quality for the solution, the model would give the number of iterations to be expected. In this work, we develop a statistical model to describe the performance of PBIL and CHC evolutionary algorithms applied to solve the root identification problem. This problem is basic in constraint-based, geometric parametric modeling, as an instance of general constraint-satisfaction problems. The performance model is empirically validated over a benchmark with very large search spaces.
Computational search for rare-earth free hard-magnetic materials
NASA Astrophysics Data System (ADS)
Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team
2015-03-01
It is difficult to overstate the importance of hard magnets for modern life; they enter every walk of life, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances (from household use to computers). All the known hard magnets in use today contain rare-earth elements, whose extraction is expensive and environmentally harmful. Rare-earths are also instrumental in tipping the balance of the world economy, as most of them are mined in a few specific parts of the world. Hence it would be ideal to have materials with the characteristics of a hard magnet but without, or at least with a reduced amount of, rare-earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities that define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms), which are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some potential newly discovered rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the ``ExMaMa'' Project (329386).
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is then extended to a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
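The idea of folding a constrained minimum into an unconstrained fitness function can be sketched with a simple quadratic-penalty variant (the paper works from the necessary conditions instead, so this is a looser illustration). The test problem, population sizes, and all parameters below are invented for the example:

```python
import random

def fitness(x, mu=100.0):
    # Objective: minimize f(x, y) = (x - 2)^2 + (y - 1)^2
    # subject to the equality constraint g(x, y) = x + y - 2 = 0,
    # folded into an unconstrained score via a quadratic penalty mu * g^2.
    f = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
    g = x[0] + x[1] - 2.0
    return f + mu * g * g

def genetic_minimize(pop_size=60, gens=200, sigma=0.3, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]            # truncation selection (elitist)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2.0 for ai, bi in zip(a, b)]   # crossover
            child = [c + rng.gauss(0.0, sigma) for c in child]  # mutation
            children.append(child)
        pop = elite + children
        sigma *= 0.98                            # anneal mutation strength
    return min(pop, key=fitness)

best = genetic_minimize()
print(best)  # near the constrained optimum (1.5, 0.5)
```

The exact constrained minimizer here is (1.5, 0.5); the penalty formulation shifts it only slightly, which is the usual trade-off of penalty methods versus working with the necessary conditions directly.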
Learning in stochastic neural networks for constraint satisfaction problems
NASA Technical Reports Server (NTRS)
Johnston, Mark D.; Adorf, Hans-Martin
1989-01-01
Researchers describe a newly-developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) which includes a learning component that can significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to find satisfactory solutions when they exist. The network can run efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that network connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. They have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves the network performance.
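A flavor of stochastic constraint repair on the N-queens problem mentioned above can be given with the classic min-conflicts heuristic; this is a generic illustration of stochastic CSP search, not the GDS network architecture itself:

```python
import random

def min_conflicts_queens(n, max_steps=200000, seed=0):
    """Solve N-queens by stochastic local repair (min-conflicts).

    queens[c] holds the row of the queen in column c, so column clashes are
    impossible by construction; only row and diagonal conflicts need repair.
    """
    rng = random.Random(seed)
    queens = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        return sum(
            1 for c in range(n)
            if c != col and (queens[c] == row
                             or abs(queens[c] - row) == abs(c - col))
        )

    for _ in range(max_steps):
        bad = [c for c in range(n) if conflicts(c, queens[c]) > 0]
        if not bad:
            return queens  # conflict-free assignment found
        col = rng.choice(bad)
        counts = [conflicts(col, r) for r in range(n)]
        best = min(counts)
        # Random tie-breaking among minimally conflicted rows avoids cycling.
        queens[col] = rng.choice([r for r in range(n) if counts[r] == best])
    return None

solution = min_conflicts_queens(64)
print(solution is not None)
```

Like the GDS network, this style of repair converges quickly on satisfiable N-queens instances even though no global convergence guarantee holds.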
Saltwater and hard water bentonite mud
Pabley, A. S.
1985-02-19
A seawater/saltwater or hard water bentonite mud for use in drilling, and a process for preparing same, comprising sequentially adding to seawater, to saltwater of a chloride concentration up to saturation, or to hard water: a caustic agent; a filtration control agent; and bentonite. The resultant drilling mud meets API standards for viscosity and water loss, and is stable after aging and at temperatures in excess of 100 °C. In another embodiment, the additives are premixed as dry ingredients and hydrated with seawater, saltwater or hard water. Unlike other bentonite drilling muds, the muds of this invention require no fresh water in their preparation, which makes them particularly useful at off-shore and remote on-shore drilling locations. The muds of this invention further require less clay than known saltwater muds made with attapulgite, and provide superior filtration control, viscosity and stability.
Erosion testing of hard materials and coatings
Hawk, Jeffrey A.
2005-04-29
Erosion is the process by which unconstrained particles, usually hard, impact a surface, creating damage that leads to material removal and component failure. These particles are usually very small and entrained in fluid of some type, typically air. The damage that occurs as a result of erosion depends on the size of the particles, their physical characteristics, the velocity of the particle/fluid stream, and their angle of impact on the surface of interest. This talk will discuss the basics of jet erosion testing of hard materials, composites and coatings. The standard test methods will be discussed as well as alternative approaches to determining the erosion rate of materials. The damage that occurs will be characterized in general terms, and examples will be presented for the erosion behavior of hard materials and coatings (both thick and thin).
Potential Health Impacts of Hard Water
Sengupta, Pallav
2013-01-01
In the past five decades or so, evidence has been accumulating about an environmental factor that appears to influence mortality, in particular cardiovascular mortality: the hardness of the drinking water. Several epidemiological investigations have demonstrated the relation between the hardness of drinking water, or its content of magnesium and calcium, and the risk of cardiovascular disease, growth retardation, reproductive failure, and other health problems. In addition, the acidity of the water influences the reabsorption of calcium and magnesium in the renal tubule. Not only calcium and magnesium but other constituents as well affect different aspects of health. Thus, the present review attempts to explore the health effects of hard water and its constituents. PMID:24049611
Hard template synthesis of metal nanowires
Kawamura, Go; Muto, Hiroyuki; Matsuda, Atsunori
2014-01-01
Metal nanowires (NWs) have attracted much attention because of their high electron conductivity, optical transmittance, and tunable magnetic properties. Metal NWs have been synthesized using soft templates such as surface stabilizing molecules and polymers, and hard templates such as anodic aluminum oxide, mesoporous oxide, and carbon nanotubes. NWs prepared from hard templates are composites of metals and the oxide/carbon matrix. Thus, selecting appropriate elements can simplify the production of composite devices. The resulting NWs are immobilized and spatially arranged, as dictated by the ordered porous structure of the template. This prevents the NWs from aggregating, which is common for NWs prepared with soft templates in solution. Herein, the hard template synthesis of metal NWs is reviewed, and the resulting structures, properties and potential applications are discussed. PMID:25453031
Algorithms for Multiple Fault Diagnosis With Unreliable Tests
NASA Technical Reports Server (NTRS)
Shakeri, Mojdeh; Raghavan, Vijaya; Pattipati, Krishna R.; Patterson-Hine, Ann
1997-01-01
In this paper, we consider the problem of constructing optimal and near-optimal multiple fault diagnosis (MFD) in bipartite systems with unreliable (imperfect) tests. It is known that exact computation of conditional probabilities for multiple fault diagnosis is NP-hard. The novel feature of our diagnostic algorithms is the use of Lagrangian relaxation and subgradient optimization methods to provide: (1) near optimal solutions for the MFD problem, and (2) upper bounds for an optimal branch-and-bound algorithm. The proposed method is illustrated using several examples. Computational results indicate that: (1) our algorithm has superior computational performance to the existing algorithms (approximately three orders of magnitude improvement), (2) the near optimal algorithm generates the most likely candidates with a very high accuracy, and (3) our algorithm can find the most likely candidates in systems with as many as 1000 faults.
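The Lagrangian relaxation and subgradient machinery described above can be illustrated on a toy covering problem, where the dual value supplies exactly the kind of bound the abstract mentions. The instance, multipliers, and step rule below are invented for the sketch, not the paper's MFD formulation:

```python
# Toy set cover: 3 elements, 4 candidate sets with the given costs.
# A[i][j] = 1 if set j covers element i; constraint is A x >= 1 (cover all).
costs = [1.0, 1.0, 1.0, 1.6]
A = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
]

def lagrangian_value(lam):
    """Evaluate L(lam) = sum_i lam_i + min_x sum_j (c_j - (lam^T A)_j) x_j.

    The relaxed inner problem separates over binary x_j: set x_j = 1
    exactly when its reduced cost is negative.
    """
    value = sum(lam)
    x = []
    for j in range(len(costs)):
        reduced = costs[j] - sum(lam[i] * A[i][j] for i in range(3))
        xj = 1 if reduced < 0 else 0
        x.append(xj)
        value += reduced * xj
    return value, x

lam = [0.0, 0.0, 0.0]
best_bound = float("-inf")
for k in range(2000):
    value, x = lagrangian_value(lam)
    best_bound = max(best_bound, value)   # every dual value is a lower bound
    # Subgradient of the dual: the uncovered amount of each element.
    g = [1 - sum(A[i][j] * x[j] for j in range(len(costs))) for i in range(3)]
    step = 1.0 / (k + 1)                  # diminishing step rule
    lam = [max(0.0, lam[i] + step * g[i]) for i in range(3)]

print(best_bound)  # lower bound on the optimal cover cost (1.6 here)
```

In a branch-and-bound setting, as in the paper, `best_bound` would be used to prune nodes whose relaxed bound already exceeds the best known solution.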
Macro cell placement with neural net algorithms
NASA Astrophysics Data System (ADS)
Storti-Gajani, Giancarlo
Placement of VLSI (Very Large Scale Integration) macro cells is one of the hard problems encountered in integrated circuit design. Since the problem is essentially NP-complete, a solution must be sought with the aid of heuristics, possibly using nondeterministic strategies. A new algorithm for cell preplacement based on neural nets, which may well be extended to the final placement problem, is presented. Simulations of the preplacement part of the algorithm were carried out on several different examples, always yielding a sharply decreasing cost function (where cost is evaluated essentially on the total wire length given a rectangular boundary). The direct mapping between neural units and VLSI blocks adopted in the algorithm makes the extension to the final placement problem quite simple. Simulation programs are implemented in an interpreted mathematical simulation language, and a C language implementation is currently under way.
Hard-Core Unemployment: A Selected, Annotated Bibliography.
ERIC Educational Resources Information Center
Cameron, Colin, Comp.; Menon, Anila Bhatt, Comp.
This annotated bibliography contains references to various films, articles, and books on the subject of hard-core unemployment, and is divided into the following sections: (1) The Sociology of the Hard-Core Milieu, (2) Training Programs, (3) Business and the Hard-Core, (4) Citations of Miscellaneous References on Hard-Core Unemployment, (5)…
Flexibility of hard gas permeable contact lenses.
Stevenson, R W
1988-11-01
Gas permeable (GP) lenses can flex on some eyes, producing unpredictable clinical results. A method of measuring the flexibility of hard GP materials has been developed and shown to be repeatable. Materials in the form of flats rather than lenses were used. Differences between materials were found and in general a linear relation was shown to exist between maximum flexing and quoted oxygen permeability (r = 0.78, p less than 0.05). It is recommended that flexibility be measured and reported in the data presented with all new GP polymers. The term "hard" rather than "rigid" in describing GP lenses is suggested.
Novel Aspects of Hard Diffraction in QCD
Brodsky, Stanley J.; /SLAC
2005-12-14
Initial- and final-state interactions from gluon exchange, normally neglected in the parton model, have a profound effect in QCD hard-scattering reactions, leading to leading-twist single-spin asymmetries, diffractive deep inelastic scattering, diffractive hard hadronic reactions, and nuclear shadowing and antishadowing--leading-twist physics not incorporated in the light-front wavefunctions of the target computed in isolation. I also discuss the use of diffraction to materialize the Fock states of a hadronic projectile and test QCD color transparency.
Leishman, S.; Gray, P.; Fothergill, J.E.
1995-12-31
The sequential assignment of protein 2D NMR data has been tackled by many automated and semi-automated systems. One area these systems have not addressed is searching the TOCSY spectrum for cross peaks and chemical shift values of hydrogen nuclei at the ends of long side chains. This paper describes our system for solving this problem using constraint logic programming and compares our constraint satisfaction algorithm to a standard backtracking version.
An algorithm for the solution of dynamic linear programs
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1989-01-01
The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the saving due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance and sheds light on the efficacy of the proposed pseudo constraint relaxation.
Causality constraints in conformal field theory
Hartman, Thomas; Jain, Sachin; Kundu, Sandipan
2016-05-17
Causality places nontrivial constraints on QFT in Lorentzian signature, for example fixing the signs of certain terms in the low energy Lagrangian. In d dimensional conformal field theory, we show how such constraints are encoded in crossing symmetry of Euclidean correlators, and derive analogous constraints directly from the conformal bootstrap (analytically). The bootstrap setup is a Lorentzian four-point function corresponding to propagation through a shockwave. Crossing symmetry fixes the signs of certain log terms that appear in the conformal block expansion, which constrains the interactions of low-lying operators. As an application, we use the bootstrap to rederive the well known sign constraint on the (∂φ)^4 coupling in effective field theory, from a dual CFT. We also find constraints on theories with higher spin conserved currents. Our analysis is restricted to scalar correlators, but we argue that similar methods should also impose nontrivial constraints on the interactions of spinning operators.
Causality constraints in conformal field theory
NASA Astrophysics Data System (ADS)
Hartman, Thomas; Jain, Sachin; Kundu, Sandipan
2016-05-01
Causality places nontrivial constraints on QFT in Lorentzian signature, for example fixing the signs of certain terms in the low energy Lagrangian. In d dimensional conformal field theory, we show how such constraints are encoded in crossing symmetry of Euclidean correlators, and derive analogous constraints directly from the conformal bootstrap (analytically). The bootstrap setup is a Lorentzian four-point function corresponding to propagation through a shockwave. Crossing symmetry fixes the signs of certain log terms that appear in the conformal block expansion, which constrains the interactions of low-lying operators. As an application, we use the bootstrap to rederive the well known sign constraint on the (∂φ)^4 coupling in effective field theory, from a dual CFT. We also find constraints on theories with higher spin conserved currents. Our analysis is restricted to scalar correlators, but we argue that similar methods should also impose nontrivial constraints on the interactions of spinning operators.
An Investigation into the Elementary Temporal Structure of Solar Flare Hard X-Ray Bursts Using BATSE
NASA Technical Reports Server (NTRS)
Newton, Elizabeth
1998-01-01
The research performed under this contract is part of an on-going investigation to explore the finest time-resolution hard X-ray data available on solar flares. Since 1991, the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma Ray Observatory has provided almost continual monitoring of the Sun in the hard X-ray and gamma-ray region of the spectrum. BATSE provides for the first time a temporal resolution in the data comparable to the timescales on which flare particle energization occurs. Under this contract, we have employed an important but under-utilized BATSE data type, the Time-To-Spill (TTS) data, to address the question of how fine a temporal structure exists in flare hard X-ray emission. By establishing the extent to which 'energy release fragments,' or characteristic (recurrent) time structures, are building blocks of flare emission, it is possible to place constraints on particle acceleration theories.
Genetic Algorithm Approaches for Actuator Placement
NASA Technical Reports Server (NTRS)
Crossley, William A.
2000-01-01
This research investigated genetic algorithm approaches for smart actuator placement to provide aircraft maneuverability without requiring hinged flaps or other control surfaces. The effort supported goals of the Multidisciplinary Design Optimization focus efforts in NASA's Aircraft Morphing program. This work helped to properly identify various aspects of the genetic algorithm operators and parameters that allow for placement of discrete control actuators/effectors. An improved problem definition, including better definition of the objective function and constraints, resulted from this research effort. The work conducted for this research used a geometrically simple wing model; however, an increasing number of potential actuator placement locations were incorporated to illustrate the ability of the GA to determine promising actuator placement arrangements. This effort's major result is a useful genetic algorithm-based approach to assist in the discrete actuator/effector placement problem.
Atom mapping with constraint programming.
Mann, Martin; Nahar, Feras; Schnorr, Norah; Backofen, Rolf; Stadler, Peter F; Flamm, Christoph
2014-01-01
Chemical reactions are rearrangements of chemical bonds. Each atom in an educt molecule thus appears again in a specific position of one of the reaction products. This bijection between educt and product atoms is not reported by chemical reaction databases, however, so that the "Atom Mapping Problem" of finding this bijection is left as an important computational task for many practical applications in computational chemistry and systems biology. Elementary chemical reactions feature a cyclic imaginary transition state (ITS) that imposes additional restrictions on the bijection between educt and product atoms that are not taken into account by previous approaches. We demonstrate that Constraint Programming is well-suited to solving the Atom Mapping Problem in this setting. The performance of our approach is evaluated for a manually curated subset of chemical reactions from the KEGG database featuring various ITS cycle layouts and reaction mechanisms.
Physical constraints for pathogen movement.
Schwarz, Ulrich S
2015-10-01
In this pedagogical review, we discuss the physical constraints that pathogens experience when they move in their host environment. Due to their small size, pathogens are living in a low Reynolds number world dominated by viscosity. For swimming pathogens, the so-called scallop theorem determines which kinds of shape changes can lead to productive motility. For crawling or gliding cells, the main resistance to movement comes from protein friction at the cell-environment interface. Viruses and pathogenic bacteria can also exploit intracellular host processes such as actin polymerization and motor-based transport, if they present the appropriate factors on their surfaces. Similar to cancer cells that also tend to cross various barriers, pathogens often combine several of these strategies in order to increase their motility and therefore their chances to replicate and spread.
Thermodynamic constraints on fluctuation phenomena.
Maroney, O J E
2009-12-01
The relationships among reversible Carnot cycles, the absence of perpetual motion machines, and the existence of a nondecreasing globally unique entropy function form the starting point of many textbook presentations of the foundations of thermodynamics. However, the thermal fluctuation phenomena associated with statistical mechanics have been argued to restrict the domain of validity of this basis of the second law of thermodynamics. Here we demonstrate that fluctuation phenomena can be incorporated into the traditional presentation, extending rather than restricting the domain of validity of the phenomenologically motivated second law. Consistency conditions lead to constraints upon the possible spectrum of thermal fluctuations. In a special case this uniquely selects the Gibbs canonical distribution and more generally incorporates the Tsallis distributions. No particular model of microscopic dynamics need be assumed.
Thermodynamic constraints on fluctuation phenomena
NASA Astrophysics Data System (ADS)
Maroney, O. J. E.
2009-12-01
The relationships among reversible Carnot cycles, the absence of perpetual motion machines, and the existence of a nondecreasing globally unique entropy function form the starting point of many textbook presentations of the foundations of thermodynamics. However, the thermal fluctuation phenomena associated with statistical mechanics have been argued to restrict the domain of validity of this basis of the second law of thermodynamics. Here we demonstrate that fluctuation phenomena can be incorporated into the traditional presentation, extending rather than restricting the domain of validity of the phenomenologically motivated second law. Consistency conditions lead to constraints upon the possible spectrum of thermal fluctuations. In a special case this uniquely selects the Gibbs canonical distribution and more generally incorporates the Tsallis distributions. No particular model of microscopic dynamics need be assumed.
Simpler way of imposing simplicity constraints
NASA Astrophysics Data System (ADS)
Banburski, Andrzej; Chen, Lin-Qing
2016-11-01
We investigate a way of imposing simplicity constraints in a holomorphic spin foam model that we recently introduced. Rather than imposing the constraints on the boundary spin network, as is usually done, one can impose the constraints directly on the spin foam propagator. We find that the two approaches have the same leading asymptotic behavior, with differences appearing at higher order. This allows us to obtain a model that greatly simplifies calculations, but still has Regge calculus as its semiclassical limit.
Initial value constraints with tensor matter
NASA Astrophysics Data System (ADS)
Jacobson, Ted
2011-12-01
In generally covariant metric gravity theories with tensor matter fields, the initial value constraint equations, unlike in general relativity, are in general not just the 0μ components of the metric field equation. This happens because higher derivatives can occur in the matter stress tensor. A universal form for these constraints is derived here from a generalized Bianchi identity that includes matter fields. As an application, the constraints for Einstein-aether theory are found.
Geomagnetic main field modeling using magnetohydrodynamic constraints
NASA Technical Reports Server (NTRS)
Estes, R. H.
1985-01-01
The influence of physical constraints that may be approximately satisfied by the Earth's liquid core on models of the geomagnetic main field and its secular variation is investigated. A previous report describes the methodology used to incorporate nonlinear equations of constraint into the main field model. The application of that methodology to the GSFC 12/83 field model, to test the frozen-flux hypothesis and the usefulness of incorporating magnetohydrodynamic constraints for obtaining improved geomagnetic field models, is described.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling and fixture (or, more generally, resource) requirements.
van der Waals-Tonks-type equations of state for hard-disk and hard-sphere fluids.
Wang, Xian Zhi
2002-09-01
Using the known virial coefficients of hard-disk and hard-sphere fluids, we develop van der Waals-Tonks-type equations of state for hard-disk and hard-sphere fluids. In the low-density fluid regime, these equations of state are in good agreement with the simulation results and the existing equations of state.
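The use of known virial coefficients can be illustrated by comparing a truncated hard-sphere virial series against the standard Carnahan-Starling closed form, used here only as a familiar reference equation of state, not the van der Waals-Tonks form of the paper. The coefficients are literature values in packing-fraction form:

```python
def z_virial(eta):
    """Truncated virial compressibility factor Z = P/(rho*kT) for hard spheres.

    Z = 1 + sum_n B_n * eta^(n-1) with the reduced coefficients B_2..B_7;
    B_2 = 4 and B_3 = 10 are exact, the rest are literature values.
    """
    b = [4.0, 10.0, 18.365, 28.224, 39.82, 53.34]
    z, power = 1.0, 1.0
    for bn in b:
        power *= eta
        z += bn * power
    return z

def z_carnahan_starling(eta):
    """Carnahan-Starling closed-form equation of state for hard spheres."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

# In the low-density fluid regime the two agree closely, as the abstract
# notes for its own equations of state.
for eta in (0.05, 0.1, 0.2):
    print(eta, z_virial(eta), z_carnahan_starling(eta))
```

At packing fraction 0.05 the two expressions agree to better than one part in a thousand; the truncated series slowly drifts above the closed form as density grows, which is the regime limitation the abstract describes.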