A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows
Trebotich, D; Miller, G H; Bybee, M D
2006-08-01
We present a new method for particle interactions in polymer models of DNA. The DNA is represented by a bead-rod polymer model and is fully-coupled to the fluid. The main objective in this work is to implement short-range forces to properly model polymer-polymer and polymer-surface interactions, specifically, rod-rod and rod-surface uncrossing. Our new method is based on a rigid constraint algorithm whereby rods elastically bounce off one another to prevent crossing, similar to our previous algorithm used to model polymer-surface interactions. We compare this model to a classical (smooth) potential which acts as a repulsive force between rods, and rods and surfaces.
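The elastic-bounce idea is easiest to see in the polymer-surface case the authors cite as their precedent: if a trial time step carries a bead through the wall, mirror the position back across the wall and reverse the normal velocity. The 1-D sketch below is illustrative only (function name and setup are not from the paper, whose algorithm handles rod-rod and rod-surface constraints inside a fully coupled fluid solver):

```python
def step_with_wall_bounce(y, vy, dt, wall=0.0):
    """Advance a bead one explicit time step; if the trial position
    penetrates the wall at y = wall, reflect it elastically.
    (Illustrative 1-D sketch of the elastic-bounce constraint.)"""
    y_new = y + vy * dt
    if y_new < wall:                  # trial step crossed the surface
        y_new = 2.0 * wall - y_new    # mirror position back across the wall
        vy = -vy                      # reverse normal velocity (elastic)
    return y_new, vy

# A bead falling toward the wall bounces back with reversed velocity.
y, vy = 0.05, -1.0
for _ in range(3):
    y, vy = step_with_wall_bounce(y, vy, dt=0.1)
```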
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
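For a single linear requirement, the "largest uncertainty set" of the abstract has a closed form: the worst case of a·p over the hypersphere ||p - p0|| <= r is a·p0 + r||a||, so the largest robust radius is (b - a·p0)/||a||. A minimal sketch under that linearity assumption (the paper treats general requirements and hyper-rectangular models as well; the function name is ours):

```python
import math

def robust_radius_linear(a, b, p0):
    """Largest radius r such that the hard constraint a.p <= b holds for
    every p with ||p - p0|| <= r.  The worst case of a.p over the ball is
    a.p0 + r*||a||, giving r* = (b - a.p0)/||a||."""
    dot = sum(ai * pi for ai, pi in zip(a, p0))
    norm = math.sqrt(sum(ai * ai for ai in a))
    return (b - dot) / norm

r_star = robust_radius_linear(a=[3.0, 4.0], b=10.0, p0=[0.0, 0.0])
# ||a|| = 5 and a.p0 = 0, so the constraint is robust up to radius 2
```

Comparing r_star with the radius of the actual uncertainty model then gives the analytically verifiable robustness assessment described above.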
Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.
Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam
2017-09-01
Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, negative impact on manifold structure of the sparse constraint, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, resulting in more effective manifold information. Furthermore, NMFRC adopts rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
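The "label information as a hard constraint" idea can be sketched with plain multiplicative-update NMF in which the coefficient columns of labeled samples are clamped to one-hot class indicators after every update, so the labels can never be violated. This is a simplified stand-in (NMFRC itself adds rank regularization and geodesic-distance graph construction, which are not modeled here; all names are ours):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

def nmf_hard_labels(V, k, labels, iters=200, eps=1e-9):
    """Multiplicative-update NMF (V ~ W H) with a hard label constraint:
    for each labeled sample j, column H[:, j] is clamped to the one-hot
    indicator of its class after every update."""
    m, n = len(V), len(V[0])
    W = [[0.5 + 0.01 * (i + r) for r in range(k)] for i in range(m)]
    H = [[0.5 + 0.01 * (r + j) for j in range(n)] for r in range(k)]
    def clamp():
        for j, c in labels.items():
            for r in range(k):
                H[r][j] = 1.0 if r == c else 0.0
    clamp()
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[r][j] * num[r][j] / (den[r][j] + eps) for j in range(n)]
             for r in range(k)]
        clamp()
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][r] * num[i][r] / (den[i][r] + eps) for r in range(k)]
             for i in range(m)]
    return W, H

# Synthetic data: 4 samples, 2 latent classes; samples 0 and 1 are labeled.
W_true = [[1, 0], [1, 0], [0, 1], [0, 1]]
H_true = [[1, 0, 1, 0.5], [0, 1, 0.5, 1]]
V = matmul(W_true, H_true)
W, H = nmf_hard_labels(V, k=2, labels={0: 0, 1: 1})
```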
A constraint algorithm for singular Lagrangians subjected to nonholonomic constraints
de Leon, M.; de Diego, D.M.
1997-06-01
We construct a constraint algorithm for singular Lagrangian systems subjected to nonholonomic constraints which generalizes that of Dirac for constrained Hamiltonian systems. © 1997 American Institute of Physics.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Direct binary search (DBS) algorithm with constraints
NASA Astrophysics Data System (ADS)
Chandu, Kartheek; Stanich, Mikel; Wu, Chai Wah; Trager, Barry
2013-02-01
In this paper, we describe adding constraints to the Direct Binary Search (DBS) algorithm. A useful example constraint, illustrated throughout this paper, is requiring exactly one ON pixel per column and per row. Implementations of the DBS algorithm traditionally limit operations to a single toggle or swap during each trial; DBS with such constraints requires more than two toggles per trial operation. The example case produces a wrap-around pattern with uniformly distributed ON pixels, which has a pleasing appearance with precisely one ON pixel in each column and row. The algorithm starts with an initial continuous-tone image and an initial pattern having only one ON pixel per column and row. The autocorrelation function of the Human Visual System (HVS) model is determined along with an initial perceived error. Multiple-pixel error processing during each iteration is then used to enforce the one-ON-pixel-per-column-and-row constraint. Based on the details given in the paper, further modification of the DBS algorithm for other constraints is possible. A mathematical framework to extend the algorithm to the more general case of Direct Multi-bit Search (DMS) is also presented.
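One way to respect the one-ON-pixel-per-row-and-column constraint is to restrict each trial to swapping the row assignments of two columns: a compound move (four toggles) that always maps a valid pattern to a valid pattern. The sketch below uses a simple pairwise-repulsion cost as a stand-in for the HVS-filtered perceived error that DBS actually minimizes (all names and the cost function are ours):

```python
import random

def wrap_dist2(p, q, n):
    """Squared toroidal (wrap-around) distance between pixels p and q."""
    dx = min(abs(p[0] - q[0]), n - abs(p[0] - q[0]))
    dy = min(abs(p[1] - q[1]), n - abs(p[1] - q[1]))
    return dx * dx + dy * dy

def cost(perm, n):
    """Pairwise repulsion; low when the ON pixels are spread uniformly.
    (A stand-in for the HVS-filtered perceived error used by DBS.)"""
    pts = [(j, perm[j]) for j in range(n)]
    return sum(1.0 / wrap_dist2(pts[a], pts[b], n)
               for a in range(n) for b in range(a + 1, n))

def constrained_search(n, trials=2000, seed=1):
    """Each trial swaps the rows of two columns -- a compound move that
    always preserves exactly one ON pixel per row and per column."""
    rng = random.Random(seed)
    perm = list(range(n))      # initial pattern: one ON pixel per row/column
    best = cost(perm, n)
    for _ in range(trials):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]
        c = cost(perm, n)
        if c < best:
            best = c                                   # keep improving swaps
        else:
            perm[i], perm[j] = perm[j], perm[i]        # undo
    return perm, best

perm, final_cost = constrained_search(8)
```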
Learning With Mixed Hard/Soft Pointwise Constraints.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-09-01
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
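The hard/soft distinction can be made concrete in a one-variable toy problem: fit x to data b subject to a pointwise constraint x = c. The soft version penalizes violations through a loss; the hard version forbids them outright. This scalar sketch is ours, not from the paper (which works in function spaces via constrained variational calculus):

```python
def soft_fit(b, c, lam):
    """Minimize sum_i (x - b_i)^2 + lam * (x - c)^2.
    The penalty pulls x toward the constraint value c; the closed-form
    minimizer is (sum(b) + lam * c) / (len(b) + lam)."""
    return (sum(b) + lam * c) / (len(b) + lam)

def hard_fit(b, c):
    """A hard pointwise constraint x = c cannot be violated at all."""
    return c

b, c = [1.0, 3.0], 0.0
x_soft_weak  = soft_fit(b, c, lam=1.0)   # mild penalty: 4/3
x_soft_stiff = soft_fit(b, c, lam=1e6)   # stiff penalty: close to 0
x_hard       = hard_fit(b, c)            # exactly 0
```

As the penalty weight grows, the soft solution approaches the hard one, which is the usual way to relate the two regimes.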
Iterative restoration algorithms for nonlinear constraint computing
NASA Astrophysics Data System (ADS)
Szu, Harold
A general iterative-restoration principle is introduced to facilitate the implementation of nonlinear optical processors. The von Neumann convergence theorem is generalized to include nonorthogonal subspaces which can be reduced to a special orthogonal projection operator by applying an orthogonality condition. This principle is shown to permit derivation of the Jacobi algorithm, the recursive principle, the van Cittert (1931) deconvolution method, the iteration schemes of Gerchberg (1974) and Papoulis (1975), and iteration schemes using two Fourier conjugate domains (e.g., Fienup, 1981). Applications to restoring the image of a double star and division by hard and soft zeros are discussed, and sample results are presented graphically.
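The Gerchberg/Papoulis iteration mentioned above alternates between two conjugate domains: project onto the band limit in the frequency domain, then re-impose the known samples in the signal domain. A small self-contained sketch with a naive DFT (the signal, band, and sample set are illustrative choices of ours):

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def bandlimit(x, keep):
    """Zero all DFT bins outside `keep` and return the real signal."""
    X = dft(x)
    return [v.real for v in idft([X[k] if k in keep else 0 for k in range(len(X))])]

def gerchberg_papoulis(samples, known, keep, n, iters=300):
    """Alternate (i) projection onto the band limit and (ii) restoration
    of the known samples -- the two-conjugate-domain iteration scheme."""
    x = [samples[t] if t in known else 0.0 for t in range(n)]
    for _ in range(iters):
        x = bandlimit(x, keep)
        for t in known:
            x[t] = samples[t]            # restore measured values
    return x

n = 16
keep = {0, 1, 2, 14, 15}                 # the truth lives in these DFT bins
truth = [math.cos(2 * math.pi * t / n) + 0.5 * math.sin(4 * math.pi * t / n)
         for t in range(n)]
known = set(range(10))                   # only the first 10 samples observed
rec = gerchberg_papoulis(truth, known, keep, n)
err = sum((a - b) ** 2 for a, b in zip(rec, truth))
```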
Gemperline, Paul J; Cash, Eric
2003-08-15
A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared (NIR) spectroscopy. Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
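The noise-tolerance advantage of a penalty-based soft constraint is visible even in one variable. For a nonnegativity constraint, the soft penalty leaves a small negative residual that shrinks as the penalty weight grows, instead of clipping hard to zero; this one-dimensional closed form is our illustration, not the paper's full ALS algorithm:

```python
def soft_nonneg_fit(b, lam):
    """Minimize (x - b)^2 + lam * min(x, 0)^2.
    For b >= 0 the penalty is inactive (x = b); for b < 0 the minimizer
    is b / (1 + lam): a small negative residual that shrinks as the
    penalty weight grows, rather than a hard clip to zero."""
    return b if b >= 0 else b / (1.0 + lam)

# A noisy "concentration" reading of -0.2 (true value 0):
x_soft = soft_nonneg_fit(-0.2, lam=9.0)   # -0.02: gentle, noise-tolerant
x_hard = max(-0.2, 0.0)                   #  0.00: hard clip
```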
Protein threading with profiles and distance constraints using clique based algorithms.
Dukka, Bahadur K C; Tomita, Etsuji; Suzuki, Jun'ichi; Horimoto, Katsuhisa; Akutsu, Tatsuya
2006-02-01
With the advent of experimental technologies such as chemical cross-linking, it has become possible to obtain distances between specific residues of a newly sequenced protein. These experiments are usually less time-consuming than X-ray crystallography or NMR. Consequently, it is highly desirable to develop a method that incorporates this distance information to improve the performance of protein threading. However, protein threading with profiles in which constraints on inter-residue distances are given is known to be NP-hard. Using a maximum edge-weight clique finding algorithm, we introduce a more efficient method called FTHREAD for profile threading with distance constraints that is 18 times faster than its predecessor CLIQUETHREAD. Moreover, we also present a novel practical algorithm, NTHREAD, for profile threading with non-strict constraints. The overall performance of FTHREAD on a data set shows that, although our algorithm uses a simple threading function, it performs as well as some existing methods. In particular, when some constraints cannot be satisfied, NTHREAD (non-strict constraints threading) performs better than FTHREAD (strict constraints threading). We have also analyzed the effect of the number of distance constraints used. Once a template structure has been determined for the target sequence, this algorithm helps enhance the alignment quality between the query sequence and the template structure.
Measuring Constraint-Set Utility for Partitional Clustering Algorithms
NASA Technical Reports Server (NTRS)
Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato
2006-01-01
Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
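The informativeness idea can be sketched directly: constraints that the unconstrained algorithm's own clustering already satisfies carry little new information, so informativeness is the fraction the baseline clustering violates. This simplified scoring function is our illustration (the paper also defines a coherence measure, not shown):

```python
def informativeness(constraints, baseline_labels):
    """Fraction of must-link/cannot-link constraints that the
    unconstrained algorithm's own clustering fails to satisfy."""
    unsat = 0
    for (i, j, kind) in constraints:
        same = baseline_labels[i] == baseline_labels[j]
        if (kind == "must-link" and not same) or (kind == "cannot-link" and same):
            unsat += 1
    return unsat / len(constraints)

baseline = [0, 0, 1, 1]                 # clustering found without constraints
cs = [(0, 1, "must-link"),              # already satisfied -> uninformative
      (0, 2, "must-link"),              # violated by baseline -> informative
      (1, 3, "cannot-link"),            # already satisfied
      (2, 3, "cannot-link")]            # violated -> informative
score = informativeness(cs, baseline)   # 2 of 4 constraints -> 0.5
```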
Hard decoding algorithm for optimizing thresholds under general Markovian noise
NASA Astrophysics Data System (ADS)
Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond
2017-04-01
Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.
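For reference, the textbook firefly update moves each firefly toward every brighter one with an attractiveness that decays with distance, plus a small random step. The sketch below is the standard unconstrained algorithm on a toy objective, not the paper's modified, constraint-handling variant (all parameter values are illustrative):

```python
import math, random

def firefly_minimize(f, dim, n_fireflies=12, iters=60,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=7):
    """Minimal (textbook) firefly algorithm for unconstrained minimization:
    each firefly is attracted to brighter (lower-f) fireflies with strength
    beta0 * exp(-gamma * r^2), plus a random perturbation of scale alpha."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_fireflies)]
    best = min(xs, key=f)[:]
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f(xs[j]) < f(xs[i]):       # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
        cand = min(xs, key=f)
        if f(cand) < f(best):
            best = cand[:]                    # track best-so-far
    return best

sphere = lambda x: sum(v * v for v in x)
best = firefly_minimize(sphere, dim=2)
```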
Easy and hard testbeds for real-time search algorithms
Koenig, S.; Simmons, R.G.
1996-12-31
Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a superset of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.
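A representative member of the real-time search family studied here is LRTA*: at each step the agent raises the current state's heuristic value to the best one-step lookahead estimate, then moves to that best neighbor. A textbook sketch with unit edge costs (the graph and names are ours; an undirected graph is Eulerian in the directed-edge sense used above, since each edge contributes one arc in each direction):

```python
def lrta_star(neighbors, start, goal, h0, max_steps=10000):
    """Learning real-time A* (LRTA*): raise h(current) to the best
    lookahead value min_n (1 + h(n)), then move to that neighbor."""
    h = dict(h0)                          # initial (admissible) heuristic
    s, steps = start, 0
    while s != goal and steps < max_steps:
        best_n = min(neighbors[s], key=lambda n: 1 + h.get(n, 0))
        h[s] = max(h.get(s, 0), 1 + h.get(best_n, 0))   # learn
        s = best_n                                      # act
        steps += 1
    return s, steps, h

# Undirected 4-cycle with a dead-end spur; zero-initialized heuristic.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0, 4], 4: [3]}
state, steps, h = lrta_star(graph, start=0, goal=2, h0={s: 0 for s in graph})
```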
Rigidity transition in materials: hardness is driven by weak atomic constraints.
Bauchy, Mathieu; Qomi, Mohammad Javad Abdolhosseini; Bichara, Christophe; Ulm, Franz-Josef; Pellenq, Roland J-M
2015-03-27
Understanding the composition dependence of the hardness in materials is of primary importance for infrastructures and handled devices. Stimulated by the need for stronger protective screens, topological constraint theory has recently been used to predict the hardness in glasses. Herein, we report that the concept of rigidity transition can be extended to a broader range of materials than just glass. We show that hardness depends linearly on the number of angular constraints, which, compared to radial interactions, constitute the weaker ones acting between the atoms. This leads to a predictive model for hardness, generally applicable to any crystalline or glassy material.
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints found in real applications are less studied, especially when one task is involved in several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed in the task permutation representation to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In the mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, the variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
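The random-keys step that bridges the continuous optimizer and the discrete problem is simple to state: sort task indices by their (real-valued) keys, so any real vector decodes to a valid permutation. A minimal sketch of just that step (precedence/direction repair, which the paper layers on top, is not shown):

```python
def random_keys_decode(keys):
    """Random-keys decoding: map a continuous vector (as evolved by
    real-valued TLBO individuals) to a task permutation by sorting
    task indices by their key values."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

perm = random_keys_decode([0.73, 0.12, 0.98, 0.40])
# task 1 has the smallest key, then 3, 0, 2 -> [1, 3, 0, 2]
```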
A synthetic dataset for evaluating soft and hard fusion algorithms
NASA Astrophysics Data System (ADS)
Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey
2011-06-01
There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.
Performance of a Variable-Constraint-Length Viterbi Decoding Algorithm.
1982-08-01
AD-A25 021. Performance of a Variable-Constraint-Length Viterbi Decoding Algorithm. J. K. Tamaki, Naval Ocean Systems Center, San Diego, CA, August 1982, NOSC TR-831. Approved for public release. Acknowledgements: the author would like to thank L.E. Hoff for his inspiration and guidance, and R.L. Merk for his support and valuable…
On the Convergence of Iterative Receiver Algorithms Utilizing Hard Decisions
NASA Astrophysics Data System (ADS)
Rößler, Jürgen F.; Gerstacker, Wolfgang H.
2010-12-01
The convergence of receivers performing iterative hard decision interference cancellation (IHDIC) is analyzed in a general framework for ASK, PSK, and QAM constellations. We first give an overview of IHDIC algorithms known from the literature applied to linear modulation and DS-CDMA-based transmission systems and show the relation to Hopfield neural network theory. It is proven analytically that IHDIC with serial update scheme always converges to a stable state in the estimated values in course of iterations and that IHDIC with parallel update scheme converges to cycles of length 2. Additionally, we visualize the convergence behavior with the aid of convergence charts. Doing so, we give insight into possible errors occurring in IHDIC which turn out to be caused by locked error situations. The derived results can directly be applied to those iterative soft decision interference cancellation (ISDIC) receivers whose soft decision functions approach hard decision functions in course of the iterations.
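The serial-update IHDIC iteration is compact: refresh each symbol decision in turn by cancelling the interference implied by the latest decisions of the others, then hard-deciding. With a symmetric correlation matrix this is a Hopfield-type dynamic and reaches a stable state, as the paper proves. A BPSK sketch (the example matrix and names are ours):

```python
def ihdic_serial(y, R, x0, max_sweeps=50):
    """Iterative hard decision interference cancellation with a serial
    update schedule: each symbol decision is refreshed in turn using the
    latest decisions of the other symbols, until a fixed point."""
    sign = lambda v: 1 if v >= 0 else -1
    x = list(x0)
    for sweep in range(1, max_sweeps + 1):
        changed = False
        for k in range(len(x)):
            interference = sum(R[k][j] * x[j] for j in range(len(x)) if j != k)
            d = sign(y[k] - interference)     # cancel, then hard-decide
            if d != x[k]:
                x[k], changed = d, True
        if not changed:
            return x, sweep                   # stable state reached
    return x, max_sweeps

# Noise-free BPSK example with mild, symmetric cross-correlations.
R = [[1.0, 0.2, 0.1], [0.2, 1.0, 0.2], [0.1, 0.2, 1.0]]
x_true = [1, -1, 1]
y = [sum(Rk[j] * x_true[j] for j in range(3)) for Rk in R]
x_hat, sweeps = ihdic_serial(y, R, x0=[1, 1, 1])
```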
An algorithm for optimal structural design with frequency constraints
NASA Technical Reports Server (NTRS)
Kiusalaas, J.; Shaw, R. C. J.
1978-01-01
The paper presents a finite element method for minimum weight design of structures with lower-bound constraints on the natural frequencies, and upper and lower bounds on the design variables. The design algorithm is essentially an iterative solution of the Kuhn-Tucker optimality criterion. The three most important features of the algorithm are: (1) a small number of design iterations are needed to reach optimal or near-optimal design, (2) structural elements with a wide variety of size-stiffness may be used, the only significant restriction being the exclusion of curved beam and shell elements, and (3) the algorithm will work for multiple as well as single frequency constraints. The design procedure is illustrated with three simple problems.
Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.
Friedrich, Tobias; Neumann, Frank
2015-01-01
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a (1/(k + δ))-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
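The (1 + 1) EA analyzed here is a minimal mutation-only hill climber: flip each bit with probability 1/n and keep the offspring if it is feasible and not worse. The sketch below runs it on a monotone submodular coverage function under a uniform cardinality constraint (the instance and constraint handling are our illustrative choices):

```python
import random

def one_plus_one_ea(sets, k, iters=3000, seed=3):
    """(1+1) EA maximizing the monotone submodular coverage function
    f(x) = |union of selected sets| subject to the uniform cardinality
    constraint |x| <= k: mutate by independent bit flips with prob. 1/n,
    accept the offspring if it is feasible and not worse."""
    rng = random.Random(seed)
    n = len(sets)
    cover = lambda x: len(set().union(*[sets[i] for i in range(n) if x[i]]))
    x = [0] * n                                # the empty set is feasible
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) <= k and cover(y) >= cover(x):
            x = y
    return x, cover(x)

sets = [{1, 2}, {2, 3}, {4}, {1, 2, 3}, {5, 6}]
x, value = one_plus_one_ea(sets, k=3)
```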
Heinstein, M.W.
1997-10-01
A contact enforcement algorithm has been developed for matrix-free quasistatic finite element techniques. Matrix-free (iterative) solution algorithms such as nonlinear Conjugate Gradients (CG) and Dynamic Relaxation (DR) are distinctive in that the number of iterations required for convergence is typically of the same order as the number of degrees of freedom of the model. From iteration to iteration the contact normal and tangential forces vary significantly making contact constraint satisfaction tenuous. Furthermore, global determination and enforcement of the contact constraints every iteration could be questioned on the grounds of efficiency. This work addresses this situation by introducing an intermediate iteration for treating the active gap constraint and at the same time exactly (kinematically) enforcing the linearized gap rate constraint for both frictionless and frictional response.
Leaf Sequencing Algorithm Based on MLC Shape Constraint
NASA Astrophysics Data System (ADS)
Jing, Jia; Pei, Xi; Wang, Dong; Cao, Ruifen; Lin, Hui
2012-06-01
Intensity modulated radiation therapy (IMRT) requires the determination of appropriate multileaf collimator settings to deliver an intensity map. The purpose of this work was to regulate the shape between adjacent multileaf collimator apertures through a leaf sequencing algorithm. To qualify and validate this algorithm, an integral test of the multileaf collimator segmentation of ARTS was performed with clinical intensity map experiments. By comparison and analysis of the total number of monitor units and the number of segments against benchmark results, the proposed algorithm performed well, and the segment shape constraint produced segments with more compact shapes when delivering the planned intensity maps, which may help to reduce the multileaf collimator's specific effects.
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
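One of the classic techniques such surveys cover is the bounded search tree, easiest to see on VERTEX COVER: any cover must contain an endpoint of every edge, so branch on the two endpoints of some uncovered edge and decrement the budget k, giving O(2^k · poly) time, exponential only in the parameter. A minimal sketch (instance and names are ours):

```python
def vertex_cover_branch(edges, k):
    """Bounded-search-tree FPT algorithm for VERTEX COVER: pick an
    uncovered edge (u, v); some endpoint must be in any cover, so
    branch on the two choices with budget k - 1."""
    if not edges:
        return set()                  # nothing left to cover
    if k == 0:
        return None                   # budget exhausted on this branch
    u, v = edges[0]
    for w in (u, v):
        rest = [e for e in edges if w not in e]   # edges w covers disappear
        sub = vertex_cover_branch(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# A 4-cycle plus a pendant edge; {1, 3} is a cover of size 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]
cover = vertex_cover_branch(edges, k=2)
```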
A multiagent evolutionary algorithm for constraint satisfaction problems.
Liu, Jing; Zhong, Weicai; Jiao, Licheng
2006-02-01
With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, the n-queens problem, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs with n for the n-queens problem is studied with great care. The results show that MAEA-CSPs achieves good performance as n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queens problems, MAEA-CSPs finds solutions within only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
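For readers unfamiliar with local search on CSPs, the classical min-conflicts heuristic on n-queens gives the flavor of the conflict-driven behaviors such algorithms build on: repeatedly move a conflicted queen to the row that minimizes its conflicts. This is the textbook baseline, not MAEA-CSPs itself (restart scheme and parameters are ours):

```python
import random

def conflicts(qs, col, row):
    """Queens attacking square (col, row); queen c sits at row qs[c]."""
    return sum(1 for c, r in enumerate(qs)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n, max_steps=2000, restarts=100, seed=0):
    """Min-conflicts local search for n-queens with random restarts:
    repeatedly move some conflicted queen to its least-conflicted row."""
    rng = random.Random(seed)
    for _ in range(restarts):
        qs = [rng.randrange(n) for _ in range(n)]
        for _ in range(max_steps):
            bad = [c for c in range(n) if conflicts(qs, c, qs[c]) > 0]
            if not bad:
                return qs                 # all constraints satisfied
            c = rng.choice(bad)
            qs[c] = min(range(n), key=lambda r: conflicts(qs, c, r))
    return None

sol = min_conflicts_queens(8)
```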
Hard X-Ray Constraints on Small-Scale Coronal Heating Events
NASA Astrophysics Data System (ADS)
Marsh, Andrew; Smith, David M.; Glesener, Lindsay; Klimchuk, James A.; Bradshaw, Stephen; Hannah, Iain; Vievering, Juliana; Ishikawa, Shin-Nosuke; Krucker, Sam; Christe, Steven
2017-08-01
A large body of evidence suggests that the solar corona is heated impulsively. Small-scale heating events known as nanoflares may be ubiquitous in quiet and active regions of the Sun. Hard X-ray (HXR) observations with unprecedented sensitivity >3 keV have recently been enabled through the use of focusing optics. We analyze active region spectra from the FOXSI-2 sounding rocket and the NuSTAR satellite to constrain the physical properties of nanoflares simulated with the EBTEL field-line-averaged hydrodynamics code. We model a wide range of X-ray spectra by varying the nanoflare heating amplitude, duration, delay time, and filling factor. Additional constraints on the nanoflare parameter space are determined from energy constraints and EUV/SXR data.
Faint Radio Source Constraints on the Origin of the Hard X-ray Background
NASA Astrophysics Data System (ADS)
Moran, E. C.; Helfand, D. J.
1999-04-01
ASCA and BeppoSAX have greatly expanded our understanding of the hard X-ray properties of nearby starburst and Seyfert galaxies, allowing, for the first time, detailed estimates of their respective contributions to the hard X-ray background (XRB) to be made. Unfortunately, the sensitivities of these instruments are insufficient to probe either population directly at intermediate and high redshifts, where the majority of the XRB originates. As a result, discrete-source XRB models must typically rely on highly uncertain assumptions about the evolution of potential contributors with cosmic time. Clearly, it would be helpful to identify an observational constraint that minimizes (or eliminates) the need for these assumptions. Since X-ray galaxies of all types produce radio emission in conjunction with their particular brand of activity, we propose that the faint radio source population may provide such a constraint. Existing deep radio surveys, which extend to the microjansky level, should contain both starburst and Seyfert galaxies at cosmological distances. However, optical identification programs carried out to date have revealed that the majority of sub-mJy radio sources are associated with star-forming galaxies rather than AGNs, suggesting that the starburst contribution to the XRB could be significant. By combining hard X-ray and radio data for nearby starburst galaxies with the measured log N--log S relation for sub-mJy radio sources, we estimate that starburst galaxies may produce as much as 15--45% of the 5 keV XRB. Preliminary results of a similar analysis for Seyfert galaxies are complementary, indicating that these objects cannot be responsible for all of the hard XRB.
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines to offer more than one machining process. This paper presents a method to address the scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a genetic algorithm is used due to its practical implementation in industry. The developed genetic algorithm uses an indirect chromosome representation and a procedure to transform chromosomes into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used, with minimizing tardiness as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
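Of the three algorithms the abstract examines, leaf removal is simple enough to sketch directly; the following is a generic implementation for minimum vertex cover (the paper's statistical-mechanics analysis of its typical performance is not reproduced here):

```python
from collections import defaultdict

def leaf_removal_vertex_cover(edges):
    """Leaf-removal heuristic for minimum vertex cover.

    While some vertex has degree 1, put its unique neighbor into the
    cover and delete both endpoints. If the graph is fully dismantled,
    the cover is a true minimum vertex cover; otherwise a leftover
    'core' remains, on which the heuristic gives no guarantee.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) == 1:
                (u,) = adj[v]
                cover.add(u)
                for w in list(adj[u]):       # delete u and its edges
                    adj[w].discard(u)
                    if not adj[w]:
                        del adj[w]           # drop isolated vertices
                adj.pop(u, None)
                changed = True
                break
    core = set(adj)
    return cover, core
```

The "core" returned here is exactly the object behind the threshold phenomenon the abstract describes: below a critical average degree the core is typically empty and the algorithm finds a true optimum.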
NASA Astrophysics Data System (ADS)
Hung, Shih-Yu
2009-01-01
In this paper, Ni-Co/nano-Al2O3 composite electroforming was used to make the metallic micro-mold for a microlens array. The microstructures require higher hardness to improve wear resistance and lifetime. Nano-Al2O3 was applied to strengthen the Ni-Co matrix by a new micro-electroforming technique. The hardness and internal stress of the Ni-Co/nano-Al2O3 composite deposit were investigated. The results showed that the hardness increased with increasing Al2O3 content, but at the cost of deformation. Increasing the Al2O3 content in the composite was not always beneficial to the electroformed mold for microlens array fabrication. This work concentrates on the relationship between important mechanical properties and electrolyte parameters of Ni-Co/nano-Al2O3 composite electroforming. Electrolyte parameters such as Al2O3 content, Al2O3 particle diameter, Co content, stress reducer and current density are examined with respect to internal stress and hardness. In the present study, low-stress and high-hardness electroforming under the constraint of low surface roughness is carried out using the SNAOA algorithm to reduce internal stress and increase the service life of the micro-mold during the forming process. The results show that the internal stress and the RMS roughness are only 0.54 MPa and 4.8 nm, respectively, for the optimal electrolyte parameter combination of the SNAOA design.
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more objectives.
Event-chain Monte Carlo algorithms for hard-sphere systems.
Bernard, Etienne P; Krauth, Werner; Wilson, David B
2009-11-01
In this paper we present the event-chain algorithms, which are fast Markov-chain Monte Carlo methods for hard spheres and related systems. In a single move of these rejection-free methods, an arbitrarily long chain of particles is displaced, and long-range coherent motion can be induced. Numerical simulations show that event-chain algorithms clearly outperform the conventional Metropolis method. Irreversible versions of the algorithms, which violate detailed balance, improve the speed of the method even further. We also compare our method with recent implementations of the molecular-dynamics algorithm.
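The rejection-free chain move can be illustrated in the simplest possible setting, hard rods on a one-dimensional ring (a simplification for brevity; the paper treats hard spheres and disks):

```python
import random

def event_chain_move(x, sigma, L, ell, rng):
    """One event-chain move for hard rods of length sigma on a ring of
    circumference L (positions x must be in ring order, non-overlapping).

    The active rod advances until it contacts the next rod; the
    remaining displacement budget is then handed to that rod (the
    'lifting' step), so no move is ever rejected.
    """
    n = len(x)
    i = rng.randrange(n)
    remaining = ell
    while remaining > 0:
        j = (i + 1) % n
        gap = (x[j] - x[i]) % L - sigma   # free space ahead of rod i
        if gap < 0.0:
            gap = 0.0                     # guard against round-off
        step = min(remaining, gap)
        x[i] = (x[i] + step) % L
        remaining -= step
        i = j                             # pass the chain to the rod hit
    return x
```

A single call can displace many rods coherently, which is the source of the speedup over single-particle Metropolis moves described in the abstract.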
NASA Astrophysics Data System (ADS)
Takahashi, Jun; Takabe, Satoshi; Hukushima, Koji
2017-07-01
A recently proposed exact algorithm for the maximum independent set problem is analyzed. The typical running time is improved exponentially in some parameter regions compared to simple binary search. Furthermore, the algorithm overcomes the core transition point, where the conventional leaf removal algorithm fails, and works up to the replica symmetry breaking (RSB) transition point. This suggests that a leaf removal core itself is not enough for typical hardness in the random maximum independent set problem, providing further evidence for RSB being the obstacle for algorithms in general.
Regularization of multiplicative iterative algorithms with nonnegative constraint
NASA Astrophysics Data System (ADS)
Benvenuto, Federico; Piana, Michele
2014-03-01
This paper studies the regularization of the constrained maximum likelihood iterative algorithms applied to incompatible ill-posed linear inverse problems. Specifically, we introduce a novel stopping rule which defines a regularization algorithm for the iterative space reconstruction algorithm in the case of least-squares minimization. Further, we show that the same rule regularizes the expectation maximization algorithm in the case of Kullback-Leibler minimization, provided a well-justified modification of the definition of Tikhonov regularization is introduced. The performance of this stopping rule is illustrated in the case of an image reconstruction problem in x-ray solar astronomy.
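The iterative space reconstruction algorithm (ISRA) mentioned above is a simple multiplicative update; the toy sketch below shows the update on a small well-posed system (the paper's stopping rule and the Kullback-Leibler/EM case are not reproduced):

```python
def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def isra(A, b, x0, iters):
    """Multiplicative ISRA update for nonnegative least squares:
    x <- x * (A^T b) / (A^T A x), keeping x nonnegative throughout.
    In the ill-posed setting, stopping this iteration early is what
    provides regularization, which is the role of the paper's rule.
    """
    At = transpose(A)
    Atb = matvec(At, b)
    x = list(x0)
    for _ in range(iters):
        denom = matvec(At, matvec(A, x))
        x = [xi * num / den for xi, num, den in zip(x, Atb, denom)]
    return x
```

On noisy ill-posed data the iterates first approach the true solution and then drift toward a noise-fitting one, so the choice of the iteration count at which to stop acts as the regularization parameter.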
NASA Astrophysics Data System (ADS)
Virrueta, A.; Gaines, J.; O'Hern, C. S.; Regan, L.
2015-03-01
Current research in the O'Hern and Regan laboratories focuses on the development of hard-sphere models with stereochemical constraints for protein structure prediction as an alternative to molecular dynamics methods that utilize knowledge-based corrections in their force-fields. Beginning with simple hydrophobic dipeptides like valine, leucine, and isoleucine, we have shown that our model is able to reproduce the side-chain dihedral angle distributions derived from sets of high-resolution protein crystal structures. However, methionine remains an exception - our model yields a chi-3 side-chain dihedral angle distribution that is relatively uniform from 60 to 300 degrees, while the observed distribution displays peaks at 60, 180, and 300 degrees. Our goal is to resolve this discrepancy by considering clashes with neighboring residues, and averaging the reduced distribution of allowable methionine structures taken from a set of crystallized proteins. We will also re-evaluate the electron density maps from which these protein structures are derived to ensure that the methionines and their local environments are correctly modeled. This work will ultimately serve as a tool for computing side-chain entropy and protein stability. A. V. is supported by an NSF Graduate Research Fellowship and a Ford Foundation Fellowship. J. G. is supported by NIH training Grant NIH-5T15LM007056-28.
A Token-Bucket Based Rate Control Algorithm with Maximum and Minimum Rate Constraints
NASA Astrophysics Data System (ADS)
Kim, Han Seok; Park, Eun-Chan; Heo, Seo Weon
We propose a token-bucket based rate control algorithm that satisfies both maximum and minimum rate constraints with computational complexity of O(1). The proposed algorithm allocates the remaining bandwidth in a strict priority queuing manner to the flows with different priorities and in a weighted fair queuing manner to the flows within the same priority.
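One way both rate constraints can be enforced with O(1) work per packet is a dual token bucket: a guarantee bucket filled at the minimum rate and a ceiling bucket filled at the maximum rate. This is an illustrative stand-in, not the authors' exact algorithm; class and parameter names are assumptions:

```python
class DualTokenBucket:
    """Sketch of a dual token-bucket classifier (illustrative only).

    A packet is 'guaranteed' if the minimum-rate bucket covers it,
    'excess' if only the maximum-rate bucket covers it (spare
    bandwidth), and dropped/delayed if it would exceed the maximum
    rate. Every operation is O(1).
    """
    def __init__(self, min_rate, max_rate, burst, now=0.0):
        self.min_rate = min_rate
        self.max_rate = max_rate
        self.burst = burst
        self.guarantee = burst   # tokens backing the minimum rate
        self.ceiling = burst     # tokens capping the maximum rate
        self.last = now

    def _refill(self, now):
        dt = now - self.last
        self.guarantee = min(self.burst, self.guarantee + self.min_rate * dt)
        self.ceiling = min(self.burst, self.ceiling + self.max_rate * dt)
        self.last = now

    def classify(self, size, now):
        """Return 'guaranteed', 'excess', or 'drop' for a packet of `size`."""
        self._refill(now)
        if self.ceiling < size:
            return 'drop'        # would exceed the maximum rate
        self.ceiling -= size
        if self.guarantee >= size:
            self.guarantee -= size
            return 'guaranteed'  # within the minimum-rate guarantee
        return 'excess'          # spare bandwidth up to the max rate
```

A scheduler can then serve 'guaranteed' traffic with strict priority and share leftover bandwidth among 'excess' traffic, mirroring the allocation policy described in the abstract.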
Guidetti, Marco; Young, A P
2011-07-01
We determine the complexity of several constraint-satisfaction problems using the heuristic algorithm WalkSAT. At large sizes N, the complexity increases exponentially with N in all cases. Perhaps surprisingly, out of all the models studied, the hardest for WalkSAT is the one for which there is a polynomial time algorithm.
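For reference, the WalkSAT heuristic used in the study can be sketched as follows; this is the textbook version, with the standard break-count greedy step and a random-walk probability p:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000, seed=0):
    """WalkSAT local search for CNF-SAT (DIMACS-style literals).

    Each clause is a list of nonzero ints: positive = variable,
    negative = negated variable. Returns a satisfying assignment
    (dict var -> bool) or None if max_flips is exhausted.
    """
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def sat(cl):
        return any(assign[abs(l)] == (l > 0) for l in cl)

    def break_count(v):
        # clauses that would become unsatisfied if v were flipped
        assign[v] = not assign[v]
        b = sum(1 for cl in clauses
                if v in {abs(l) for l in cl} and not sat(cl))
        assign[v] = not assign[v]
        return b

    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return assign
        cl = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(cl))                         # random walk
        else:
            v = min((abs(l) for l in cl), key=break_count)  # greedy
        assign[v] = not assign[v]
    return None
```

The exponential growth the abstract reports shows up here as the number of flips needed before the unsatisfied-clause list empties.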
A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints
NASA Technical Reports Server (NTRS)
Hanson, R. J.; Krogh, Fred T.
1992-01-01
A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.
Approximation algorithms for NEXPTIME-hard periodically specified problems and domino problems
Marathe, M.V.; Hunt, H.B., III; Stearns, R.E.; Rosenkrantz, D.J.
1996-02-01
We study the efficient approximability of two general classes of problems: (1) optimization versions of the domino problems studied in [Ha85, Ha86, vEB83, SB84] and (2) graph and satisfiability problems specified using various kinds of periodic specifications. Both easiness and hardness results are obtained. Our efficient approximation algorithms and schemes are based on extensions of the ideas in those works. Two properties of our results are: (1) for the first time, efficient approximation algorithms and schemes have been developed for natural NEXPTIME-complete problems; (2) our results are the first polynomial-time approximation algorithms with good performance guarantees for 'hard' problems specified using the various kinds of periodic specifications considered in this paper. Our results significantly extend the results in [HW94, Wa93, MH+94].
Constraint genetic algorithm and its application in sintering proportioning
NASA Astrophysics Data System (ADS)
Wu, Tiebin; Liu, Yunlian; Tang, Wenyan; Li, Xinjun; Yu, Yi
2017-09-01
This paper puts forward a method for constrained optimization problems based on a self-adaptive penalty function and an improved genetic algorithm. In order to improve the speed of convergence and avoid premature convergence, a method based on good-point-set theory is proposed: by using the good-point-set method to generate the initial population, the initial population is uniformly distributed in the solution space. This paper also designs an elite reverse-learning strategy and proposes a mechanism to automatically adjust the crossover probability according to the advantages and disadvantages of individuals. The tests indicate that the proposed constrained genetic algorithm is efficient and feasible.
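A minimal sketch of a penalty-based genetic algorithm can make the idea concrete. This is not the paper's algorithm: the good-point-set initialization, reverse learning, and adaptive crossover are omitted, and the penalty weights, rates, and test problem below are illustrative assumptions:

```python
import random

def penalty_ga(f, g, bounds, pop_size=60, gens=200, seed=0):
    """Real-coded GA minimizing f(x) subject to g(x) <= 0.

    The penalty weight is scaled by the population's average constraint
    violation each generation, a crude stand-in for a self-adaptive
    penalty function.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def violation(x):
        return max(0.0, g(x))

    for _ in range(gens):
        avg_v = sum(violation(x) for x in pop) / pop_size
        w = 1e3 * (1.0 + avg_v)                     # adaptive penalty weight
        ranked = sorted(pop, key=lambda x: f(x) + w * violation(x))
        elite = ranked[:pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            t = rng.random()                        # blend crossover
            child = [t * ai + (1 - t) * bi for ai, bi in zip(a, b)]
            if rng.random() < 0.2:                  # Gaussian mutation
                k = rng.randrange(dim)
                lo, hi = bounds[k]
                child[k] = min(hi, max(lo,
                               child[k] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda x: f(x) + 1e6 * violation(x))
```

Scaling the penalty with the population's own violation level lets the search cross infeasible regions early while forcing feasibility as the population converges, which is the motivation for self-adaptive penalties.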
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Convergence Rate of the Successive Zooming Genetic Algorithm for Band-Widths of Equality Constraint
NASA Astrophysics Data System (ADS)
Kwon, Y. D.; Han, S. W.; Do, J. W.
Modern optimization techniques, such as the steepest descent method, Newton's method, Rosen's gradient projection method, genetic algorithms, etc., have been developed and quickly improved with the progress of digital computers. The steepest descent method and Newton's method are applied efficiently to unconstrained problems. For many engineering problems involving constraints, the genetic algorithm and SUMT [1] are applied with relative ease. Genetic algorithms [2] have global search characteristics and relatively good convergence rates. Recently, a Successive Zooming Genetic Algorithm (SZGA) [3,4] was introduced that can search for the precise optimal solution at any level of desired accuracy. In the case of engineering problems involving an equality constraint, even when good optimization techniques are applied, a proper constraint range can lead to more rapid convergence and a more precise solution. This study investigated the proper band-width of an equality constraint using the Successive Zooming Genetic Algorithm (SZGA) technique, both theoretically and numerically. We found a band-width range of rapid convergence for each problem, as well as a broader but more general one.
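The successive-zooming idea itself can be sketched generically. In this sketch the inner genetic algorithm of SZGA is replaced by plain random sampling for brevity (an assumption), keeping only the zooming schedule that delivers arbitrary precision:

```python
import random

def successive_zooming(f, bounds, zoom=0.5, stages=20, samples=200, seed=0):
    """Successive zooming around the incumbent optimum.

    Each stage samples the current box uniformly, keeps the best point
    found so far, then shrinks the box around it by the factor `zoom`,
    clipped to the original bounds. After s stages the box width has
    shrunk by zoom**s, so precision improves geometrically.
    """
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_f = None, float('inf')
    for _ in range(stages):
        for _ in range(samples):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        for k, (L, H) in enumerate(bounds):
            half = zoom * (hi[k] - lo[k]) / 2.0
            lo[k] = max(L, best_x[k] - half)
            hi[k] = min(H, best_x[k] + half)
    return best_x, best_f
```

With 20 stages and zoom = 0.5 the final box is roughly a millionth of the original width, which is how the method reaches "any level of desired accuracy" without ever tightening the inner optimizer.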
A Fast Algorithm for Denoising Magnitude Diffusion-Weighted Images with Rank and Edge Constraints
Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei
2015-01-01
Purpose: To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. Methods: We extend a previously proposed majorize-minimize (MM) method for statistical estimation that involves noncentral χ distributions and joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems each of which is then solved using an efficient alternating minimization scheme. Results: The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speed up over the original Quasi-Newton based algorithm. This improvement in computational efficiency enabled denoising of large data sets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, comparison of fiber tracking results around the hippocampus region before and after denoising will also be shown to demonstrate the denoising effects of the new algorithm. Conclusion: The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using an MM-based algorithm. PMID:25733066
A fast algorithm for denoising magnitude diffusion-weighted images with rank and edge constraints.
Lam, Fan; Liu, Ding; Song, Zhuang; Schuff, Norbert; Liang, Zhi-Pei
2016-01-01
To accelerate denoising of magnitude diffusion-weighted images subject to joint rank and edge constraints. We extend a previously proposed majorize-minimize method for statistical estimation that involves noncentral χ distributions to incorporate joint rank and edge constraints. A new algorithm is derived which decomposes the constrained noncentral χ denoising problem into a series of constrained Gaussian denoising problems each of which is then solved using an efficient alternating minimization scheme. The performance of the proposed algorithm has been evaluated using both simulated and experimental data. Results from simulations based on ex vivo data show that the new algorithm achieves about a factor of 10 speed up over the original Quasi-Newton-based algorithm. This improvement in computational efficiency enabled denoising of large datasets containing many diffusion-encoding directions. The denoising performance of the new efficient algorithm is found to be comparable to or even better than that of the original slow algorithm. For an in vivo high-resolution Q-ball acquisition, comparison of fiber tracking results around the hippocampus region before and after denoising will also be shown to demonstrate the denoising effects of the new algorithm. The optimization problem associated with denoising noncentral χ distributed diffusion-weighted images subject to joint rank and edge constraints can be solved efficiently using a majorize-minimize-based algorithm.
Wang, C S; Lozano-Pérez, T; Tidor, B
1998-07-01
The determination of structures of multimers presents interesting new challenges. The structure(s) of the individual monomers must be found and the transformations to produce the packing interfaces must be described. A substantial difficulty results from ambiguities in assigning intermolecular distance measurements (from nuclear magnetic resonance, for example) to particular intermolecular interfaces in the structure. Here we present a rapid and efficient method to solve the packing and the assignment problems simultaneously given rigid monomer structures and (potentially ambiguous) intermolecular distance measurements. A promising application of this algorithm is to couple it with a monomer searching protocol such that each monomer structure consistent with intramolecular constraints can be subsequently input to the current algorithm to check whether it is consistent with (potentially ambiguous) intermolecular constraints. The algorithm AmbiPack uses a hierarchical division of the search space and the branch-and-bound algorithm to eliminate infeasible regions of the space. Local search methods are then focused on the remaining space. The algorithm generally runs faster as more constraints are included because more regions of the search space can be eliminated. This is not the case for other methods, for which additional constraints increase the complexity of the search space. The algorithm presented is guaranteed to find all solutions to a predetermined resolution. This resolution can be chosen arbitrarily to produce outputs at various levels of detail. Illustrative applications are presented for the P22 tailspike protein (a trimer) and portions of beta-amyloid (an ordered aggregate).
Constraint-Driven Generation of Vision Algorithms on an Elastic Infrastructure
2014-10-01
classifiers, image search indexes, human annotators, and heterogeneous computer vision algorithms. Processing is performed using an Apache Hadoop cluster (workers). Picarus is a web service that executes large-scale visual analysis jobs using Hadoop, with data stored in HBase. Picarus (which requires Hadoop, HBase, and Redis) was installed on two govcloud servers, and documentation for Picarus administration was written up: http://goo.gl
Constraint factor in optimization of truss structures via flower pollination algorithm
NASA Astrophysics Data System (ADS)
Bekdaş, Gebrail; Nigdeli, Sinan Melih; Sayin, Baris
2017-07-01
The aim of the paper is to investigate the optimum design of truss structures by considering different stress and displacement constraints. For that reason, the flower pollination algorithm based methodology was applied for sizing optimization of space truss structures. Flower pollination algorithm is a metaheuristic algorithm inspired by the pollination process of flowering plants. By the imitation of cross-pollination and self-pollination processes, the randomly generation of sizes of truss members are done in two ways and these two types of optimization are controlled with a switch probability. In the study, a 72 bar space truss structure was optimized by using five different cases of the constraint limits. According to the results, a linear relationship between the optimum structure weight and constraint limits was observed.
2006-01-01
system. Our simulation studies and implementation measurements reveal that GUS performs close to, if not better than, the existing algorithms for the...satisfying application time constraints. The most widely studied time constraint is the deadline. A deadline time constraint for an application...optimality criteria, such as resource dependencies and precedence constraints. Scheduling tasks with non-step TUFs has been studied in the past
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm here enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint satisfaction problem (CSP) area could also be plugged easily into this algorithm for further efficiency enhancements. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adopted for many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
Parallelized event chain algorithm for dense hard sphere and polymer systems
Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan
2015-01-15
We combine parallelization and cluster Monte Carlo for hard-sphere systems and present a parallelized event-chain algorithm for the hard-disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event-chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event-chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.
Control of Boolean networks: hardness results and algorithms for tree structured networks.
Akutsu, Tatsuya; Hayashida, Morihiro; Ching, Wai-Ki; Ng, Michael K
2007-02-21
Finding control strategies of cells is a challenging and important problem in the post-genomic era. This paper considers theoretical aspects of the control problem using the Boolean network (BN), which is a simplified model of genetic networks. It is shown that finding a control strategy leading to the desired global state is computationally intractable (NP-hard) in general. Furthermore, this hardness result is extended for BNs with considerably restricted network structures. These results justify existing exponential time algorithms for finding control strategies for probabilistic Boolean networks (PBNs). On the other hand, this paper shows that the control problem can be solved in polynomial time if the network has a tree structure. Then, this algorithm is extended for the case where the network has a few loops and the number of time steps is small. Though this paper focuses on theoretical aspects, biological implications of the theoretical results are also discussed.
An Improved PDR Indoor Location Algorithm Based on Probabilistic Constraints
NASA Astrophysics Data System (ADS)
You, Y.; Zhang, T.; Liu, Y.; Lu, Y.; Chu, X.; Feng, C.; Liu, S.
2017-09-01
In this paper, we propose an indoor pedestrian positioning method that applies probabilistic constraints from "multi-target encounters" when the initial position is known. The method builds on Pedestrian Dead Reckoning (PDR). Based on the magnitude of the PDR positioning error and the structure of the indoor road network, a reasonable buffer distance is determined and a buffer centered on the PDR position is generated. At the same time, key nodes are selected from the indoor network. Given the known distances between key nodes, the forward distances of pedestrians entering from different nodes can be calculated; the sum of these distances is compared with the known inter-node distance to determine whether the pedestrians have met. When pedestrians meet, each pair is treated as a cluster. The algorithm then checks whether the intersection of their buffers satisfies the encounter condition; when it does, the center of the intersection area is taken as the pedestrian position. In addition, exploiting the abrupt heading change caused by the special structure of an indoor staircase, the pedestrian's track is matched to the true location of the key landmark (the staircase), eliminating the cumulative error of the PDR method. The method can locate more than one person at the same time: as long as the true position of one person is known, the real positions of everyone in the same cluster are known as well, efficiently achieving indoor pedestrian positioning.
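The dead-reckoning update at the heart of PDR is a single trigonometric step; the sketch below is a generic illustration (function and parameter names are assumptions), useful for seeing why the error accumulates until an encounter or landmark correction resets it:

```python
import math

def pdr_update(pos, heading_deg, step_length):
    """One pedestrian dead-reckoning update: advance the position by a
    detected step of `step_length` along `heading_deg` (0 = +x axis).

    Each update adds the current heading and step-length errors to the
    position estimate, so the error grows with every step taken; this
    accumulation is what the encounter and staircase constraints above
    are designed to cancel.
    """
    rad = math.radians(heading_deg)
    return (pos[0] + step_length * math.cos(rad),
            pos[1] + step_length * math.sin(rad))
```

In a full PDR pipeline the heading comes from a gyroscope/magnetometer fusion and the step events from accelerometer peak detection; the update itself stays this simple.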
A fast multigrid algorithm for energy minimization under planar density constraints.
Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science
2010-09-07
The two-dimensional layout optimization problem reinforced by the efficient space utilization demand has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear time multigrid algorithm for solving a correction to this problem. The method is demonstrated in various graph drawing (visualization) instances.
NEW CONSTRAINTS ON THE BLACK HOLE LOW/HARD STATE INNER ACCRETION FLOW WITH NuSTAR
Miller, J. M.; King, A. L.; Tomsick, J. A.; Boggs, S. E.; Bachetti, M.; Wilkins, D.; Christensen, F. E.; Craig, W. W.; Fabian, A. C.; Kara, E.; Grefenstette, B. W.; Harrison, F. A.; Hailey, C. J.; Stern, D. K; Zhang, W. W.
2015-01-20
We report on an observation of the Galactic black hole candidate GRS 1739−278 during its 2014 outburst, obtained with NuSTAR. The source was captured at the peak of a rising "low/hard" state, at a flux of ∼0.3 Crab. A broad, skewed iron line and disk reflection spectrum are revealed. Fits to the sensitive NuSTAR spectra with a number of relativistically blurred disk reflection models yield strong geometrical constraints on the disk and hard X-ray "corona". Two models that explicitly assume a "lamp post" corona find its base to have a vertical height above the black hole of h = 5(+7/−2) GM/c^2 and h = 18 ± 4 GM/c^2 (90% confidence errors); models that do not assume a "lamp post" return emissivity profiles that are broadly consistent with coronae of this size. Given that X-ray microlensing studies of quasars and reverberation lags in Seyferts find similarly compact coronae, observations may now signal that compact coronae are fundamental across the black hole mass scale. All of the models fit to GRS 1739−278 find that the accretion disk extends very close to the black hole; the least stringent constraint is r_in = 5(+3/−4) GM/c^2. Only two of the models deliver meaningful spin constraints, but a = 0.8 ± 0.2 is consistent with all of the fits. Overall, the data provide especially compelling evidence of an association between compact hard X-ray coronae and the base of relativistic radio jets in black holes.
A Novel Energy Saving Algorithm with Frame Response Delay Constraint in IEEE 802.16e
NASA Astrophysics Data System (ADS)
Nga, Dinh Thi Thuy; Kim, Mingon; Kang, Minho
Sleep-mode operation of a Mobile Subscriber Station (MSS) in IEEE 802.16e effectively saves energy; however, it induces frame response delay. In this letter, we propose an algorithm to quickly find the optimal value of the final sleep interval in sleep-mode so as to minimize energy consumption subject to a given frame response delay constraint. Validation of the proposed algorithm through analytical and simulation results suggests that it provides practical guidance for energy saving.
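A minimal sketch of the underlying constrained search, under an invented energy/delay model (the actual 802.16e sleep-mode analysis in the letter is more detailed): scan candidate final sleep intervals and keep the cheapest one whose modelled delay respects the constraint.

```python
def best_final_interval(candidates, delay_limit, energy_model, delay_model):
    # exhaustive scan over candidate final sleep intervals: keep the one
    # with the lowest modelled energy whose modelled frame response delay
    # stays within the constraint
    best = None
    for t in candidates:
        if delay_model(t) <= delay_limit:
            e = energy_model(t)
            if best is None or e < best[1]:
                best = (t, e)
    return best  # (interval, energy), or None if no interval is feasible
```

The toy models below (energy inversely proportional to the interval, delay proportional to it) only illustrate the energy/delay trade-off shape.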
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
On-line reentry guidance algorithm with both path and no-fly zone constraints
NASA Astrophysics Data System (ADS)
Zhang, Da; Liu, Lei; Wang, Yongji
2015-12-01
This study proposes an on-line predictor-corrector reentry guidance algorithm that satisfies path and no-fly zone constraints for hypersonic vehicles with a high lift-to-drag ratio. The proposed guidance algorithm can generate a feasible trajectory at each guidance cycle during the entry flight. In the longitudinal profile, numerical predictor-corrector approaches are used to predict the flight capability from current flight states to expected terminal states and to generate an on-line reference drag acceleration profile. The path constraints on heat rate, aerodynamic load, and dynamic pressure are implemented as a part of the predictor-corrector algorithm. A tracking control law is then designed to track the reference drag acceleration profile. In the lateral profile, a novel guidance algorithm is presented. The velocity azimuth angle error threshold and artificial potential field method are used to reduce heading error and to avoid the no-fly zone. Simulated results for nominal and dispersed cases show that the proposed guidance algorithm not only can avoid the no-fly zone but can also steer a typical entry vehicle along a feasible 3D trajectory that satisfies both terminal and path constraints.
Zhang, Jinkai; Rivard, Benoit; Rogge, D M
2008-02-22
Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem and it assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on convex geometry and orthogonal projection common to other endmember search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (alike IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets have spatial continuity (e.g. bedrock lithologies) in imagery and thus a spatial constraint would be beneficial in the endmember search. An additional product of the SPA is data describing the change of the simplex volume ratio between successive iterations during the endmember extraction. It illustrates the influence of a new endmember on the data structure, and provides information on the convergence of the algorithm. It can provide a general guideline to constrain the total number of endmembers in a search.
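The convex-geometry core of an SPA-style endmember search can be sketched as below (pure Python, with the paper's spatial-adjacency constraint omitted); this is a simplified reading of the orthogonal-projection idea, not the authors' code.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def spa(pixels, k):
    # pixels: list of spectra (lists of floats); pick k endmember indices by
    # repeatedly taking the pixel with the largest residual norm after
    # projecting out the endmembers already chosen
    residual = [list(p) for p in pixels]
    chosen = []
    for _ in range(k):
        i = max(range(len(residual)), key=lambda j: dot(residual[j], residual[j]))
        chosen.append(i)
        u = residual[i]
        nu = dot(u, u)
        if nu == 0:
            break          # data exhausted: remaining pixels are in the span
        for j in range(len(residual)):
            c = dot(residual[j], u) / nu
            residual[j] = [x - c * y for x, y in zip(residual[j], u)]
    return chosen
```

Note that mixed pixels (convex combinations of endmembers) have small residuals and are never selected, which is the property the search exploits.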
Yurtkuran, Alkın; Emel, Erdal
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer must be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which boundary constraints on the real-coded particles, associated with the corresponding time windows of the customers, are introduced and combined with a penalty approach to eliminate infeasibilities arising from time-window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively by comparing them to state-of-the-art metaheuristics on several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times than the test algorithms.
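The penalty treatment of time-window violations can be illustrated with a tour-evaluation sketch; the cost model, depot convention, and penalty weight are assumptions for this example, not the paper's exact formulation.

```python
def tsptw_cost(tour, travel, windows, penalty=1000.0):
    # tour: visiting order of customers, starting from depot 0
    # travel[i][j]: travel time between locations i and j
    # windows[i] = (earliest, latest) service window for location i
    # waiting before a window opens is free; late arrivals are penalised,
    # mirroring the penalty approach to infeasible time windows
    t, cost, prev = 0.0, 0.0, 0
    for c in tour:
        t += travel[prev][c]
        cost += travel[prev][c]
        e, l = windows[c]
        if t < e:
            t = e                        # wait for the window to open
        elif t > l:
            cost += penalty * (t - l)    # infeasibility penalty
        prev = c
    return cost
```

With a large penalty weight, any feasible tour dominates every infeasible one, so an unconstrained optimizer is steered toward feasibility.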
NASA Astrophysics Data System (ADS)
Ghossein, Elias; Lévesque, Martin
2013-11-01
This paper presents a computationally-efficient algorithm for generating random periodic packings of hard ellipsoids. The algorithm is based on molecular dynamics where the ellipsoids are set in translational and rotational motion and their volumes gradually increase. Binary collision times are computed by simply finding the roots of a non-linear function. In addition, an original and efficient method to compute the collision time between an ellipsoid and a cube face is proposed. The algorithm can generate all types of ellipsoids (prolate, oblate and scalene) with very high aspect ratios (i.e., >10). It is the first time that such packings have been reported in the literature. Orientation tensors were computed for the generated packings, showing that the ellipsoids had a uniform distribution of orientations. Moreover, for low aspect ratios (i.e., ⩽10), the volume fraction appears to be the most influential parameter on the algorithm's CPU time; for higher aspect ratios, the influence of the aspect ratio becomes as important as that of the volume fraction. All necessary pseudo-codes are given so that the reader can easily implement the algorithm.
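Computing a binary collision time as the root of a non-linear function can be illustrated for growing hard spheres (not ellipsoids, for brevity) via bisection; the geometry and all parameters here are invented for the sketch.

```python
import math

def collision_time(p1, v1, p2, v2, a0, growth, t_max=10.0, tol=1e-9):
    # two 2D spheres of equal initial radius a0, radii growing at rate
    # `growth`; collision when centre distance equals the radius sum, i.e.
    # the first root of f below (bisection assumes f changes sign once
    # on [0, t_max]; a production code would bracket the first root)
    def f(t):
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        return math.hypot(dx, dy) - 2.0 * (a0 + growth * t)
    if f(0.0) <= 0.0:
        return 0.0                    # already touching
    if f(t_max) > 0.0:
        return None                   # no collision within the horizon
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For ellipsoids the separation function is far more involved, but the root-finding structure is the same.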
Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas
NASA Technical Reports Server (NTRS)
Smith, Barbara M.; Bennett, Sean
1992-01-01
A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
A memory-efficient algorithm for multiple sequence alignment with constraints.
Lu, Chin Lung; Huang, Yen Pin
2005-01-01
Recently, the concept of the constrained sequence alignment was proposed to incorporate the knowledge of biologists about structures/functionalities/consensuses of their datasets into sequence alignment such that the user-specified residues/nucleotides are aligned together in the computed alignment. The currently developed programs use the so-called progressive approach to efficiently obtain a constrained alignment of several sequences. However, the kernels of these programs, the dynamic programming algorithms for computing an optimal constrained alignment between two sequences, run in O(γn²) memory, where γ is the number of constraints and n is the maximum of the lengths of the sequences. As a result, such a high memory requirement limits these programs to aligning short sequences only. We adopt the divide-and-conquer approach to design a memory-efficient algorithm for computing an optimal constrained alignment between two sequences, which greatly reduces the memory requirement of the dynamic programming approaches at the expense of a small constant factor in CPU time. This new algorithm consumes only O(αn) space, where α is the sum of the lengths of the constraints and usually α < n in practical applications. Based on this algorithm, we have developed a memory-efficient tool for multiple sequence alignment with constraints. http://genome.life.nctu.edu.tw/MUSICME.
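The space reduction rests on the classic observation that alignment scores need only the previous DP row. A minimal two-row sketch (without constraints, under assumed scoring parameters) is shown below; the divide-and-conquer step in the paper recovers the alignment itself from such rows.

```python
def align_score_linear_space(a, b, match=1, mismatch=-1, gap=-1):
    # last row of the global-alignment DP, computed with two rows only:
    # O(len(b)) memory instead of the O(len(a) * len(b)) full table
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        cur = [i * gap] + [0] * len(b)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(prev[j - 1] + s,   # align a[i-1] with b[j-1]
                         prev[j] + gap,     # gap in b
                         cur[j - 1] + gap)  # gap in a
        prev = cur
    return prev[-1]
```

Splitting `a` in half and matching forward and reverse score rows yields the optimal alignment path in the same linear space, which is the essence of the divide-and-conquer approach the abstract describes.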
Optimizations Of Coat-Hanger Die, Using Constraint Optimization Algorithm And Taguchi Method
NASA Astrophysics Data System (ADS)
Lebaal, Nadhir; Schmidt, Fabrice; Puissant, Stephan
2007-05-01
Polymer extrusion is one of the most important manufacturing methods used today. A flat die is commonly used to extrude thin thermoplastic sheets. If the channel geometry in a flat die is not designed properly, the velocity at the die exit may be perturbed, which can affect the thickness across the width of the die. The ultimate goal of this work is to optimize the die channel geometry so that a uniform velocity distribution is obtained at the die exit. While optimizing the exit velocity distribution, we coupled the three-dimensional extrusion simulation software Rem3D® with an automatic constraint optimization algorithm to control the maximum allowable pressure drop in the die; through this constraint we can control the pressure in the die (decreasing the pressure while minimizing the velocity dispersion across the die exit). For this purpose, we investigate the effect of the design variables on the objective and constraint functions by using the Taguchi method. In the second study we use the global response surface method with Kriging interpolation to optimize the flat die geometry. Two optimization results are presented according to the constraint imposed on the pressure. The optimum is obtained with very fast convergence (2 iterations). To respect the constraint while ensuring a homogeneous velocity distribution, the run with the less severe constraint offers the best minimum.
An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints
Rao, Yunqing; Qi, Dezhong; Li, Jinling
2013-01-01
For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed within the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony hierarchical genetic algorithm) is developed to obtain better solutions, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence and resolve local convergence issues, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem. PMID:24489491
An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints
Sung, Jinmo; Jeong, Bongju
2014-01-01
The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approach, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158
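Feasibility-preserving construction under precedence constraints can be sketched by sampling only among "ready" cities (all predecessors already visited); this is an illustrative reading of the idea behind feasibility-guaranteeing genetic operators, not the authors' operators.

```python
import random

def random_feasible_tour(n, precedences, rng=None):
    # precedences: set of (a, b) pairs meaning city a must precede city b;
    # feasibility holds by construction because a city becomes eligible
    # only once all of its predecessors are in the tour
    rng = rng or random.Random(0)
    preds = {i: set() for i in range(n)}
    for a, b in precedences:
        preds[b].add(a)
    tour, visited = [], set()
    while len(tour) < n:
        ready = [i for i in range(n)
                 if i not in visited and preds[i] <= visited]
        c = rng.choice(ready)
        tour.append(c)
        visited.add(c)
    return tour

def is_feasible(tour, precedences):
    pos = {c: i for i, c in enumerate(tour)}
    return all(pos[a] < pos[b] for a, b in precedences)
```

Crossover and mutation can reuse the same "ready set" device so offspring never need repair, which is where the computational savings come from.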
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Qu, Xiaochao; Zhang, Xing; Poon, Ting-Chung; Kim, Taegeun; Kim, You Seok; Liang, Jimin
2012-03-01
Optical imaging takes advantage of coherent optics and has advanced the visualization of biological samples. Based on temporal coherence, optical coherence tomography can deliver three-dimensional optical images with superior resolution, but its axial and lateral scanning is a time-consuming process. Optical scanning holography (OSH) is a spatial-coherence technique that integrates a three-dimensional object into a two-dimensional hologram through a two-dimensional optical scanning raster. Its high lateral resolution and fast image acquisition give it great potential for three-dimensional optical imaging, but the prerequisite is an accurate and practical reconstruction algorithm. A conventional method was first adopted to reconstruct sectional images and obtained fine results, but several drawbacks restrict its practicality. An optimization method based on the l2 norm obtains more accurate results than the conventional method, but the intrinsic smoothing of the l2 norm blurs the reconstruction. In this paper, a hard-threshold-based sparse inverse imaging algorithm is proposed to improve sectional image reconstruction. The proposed method is characterized by hard-threshold iteration with a shrinkage-threshold strategy, involving only lightweight vector operations and matrix-vector multiplications. The performance of the proposed method has been validated by a real experiment, which demonstrated a great improvement in reconstruction accuracy at appropriate computational cost.
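A generic iterative-hard-thresholding sketch, using only vector operations and matrix-vector products as the abstract describes; the step size, sensing operator, and the iteration itself are illustrative assumptions standing in for the paper's specific inverse-imaging algorithm.

```python
def hard_threshold(x, s):
    # keep the s largest-magnitude entries of x, zero the rest
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:s])
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def iht(y, A, s, iters=50, step=1.0):
    # iterative hard thresholding for y ≈ A x with x assumed s-sparse:
    #   x <- H_s(x + step * A^T (y - A x))
    # only vector operations and matrix-vector products are used
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = hard_threshold([x[j] + step * g[j] for j in range(n)], s)
    return x
```

The hard threshold plays the role of the sparsity prior that the l2 approach lacks, which is why it avoids the smoothing-induced blur.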
An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.
Zhang, Ye; Yu, Tenglong; Wang, Wenwu
2014-01-01
Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.
Precise algorithm to generate random sequential addition of hard hyperspheres at saturation.
Zhang, G; Torquato, S
2013-11-01
The study of the packing of hard hyperspheres in d-dimensional Euclidean space R^{d} has been a topic of great interest in statistical mechanics and condensed matter theory. While the densest known packings are ordered in sufficiently low dimensions, it has been suggested that in sufficiently large dimensions, the densest packings might be disordered. The random sequential addition (RSA) time-dependent packing process, in which congruent hard hyperspheres are randomly and sequentially placed into a system without interparticle overlap, is a useful packing model to study disorder in high dimensions. Of particular interest is the infinite-time saturation limit in which the available space for another sphere tends to zero. However, the associated saturation density has been determined in all previous investigations by extrapolating the density results for nearly saturated configurations to the saturation limit, which necessarily introduces numerical uncertainties. We have refined an algorithm devised by us [S. Torquato, O. U. Uche, and F. H. Stillinger, Phys. Rev. E 74, 061308 (2006)] to generate RSA packings of identical hyperspheres. The improved algorithm produces packings that are guaranteed to contain no available space in a large simulation box using finite computational time, with heretofore unattained precision and across the widest range of dimensions (2 ≤ d ≤ 8). We have also calculated the packing and covering densities, pair correlation function g(2)(r), and structure factor S(k) of the saturated RSA configurations. As the space dimension increases, we find that pair correlations markedly diminish, consistent with a recently proposed "decorrelation" principle, and the degree of "hyperuniformity" (suppression of infinite-wavelength density fluctuations) increases. We have also calculated the void exclusion probability in order to compute the so-called quantizer error of the RSA packings, which is related to the second moment of inertia of the average
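A toy RSA sketch for equal hard disks in a 2D periodic box shows the basic acceptance rule; it uses a fixed trial budget, far short of the guaranteed-saturation machinery the paper develops, and all parameters are illustrative.

```python
import random

def rsa_disks(radius, n_trials, box=1.0, rng=None):
    # random sequential addition of equal hard disks in a periodic square
    # box: each candidate centre is accepted only if it overlaps no
    # previously placed disk (minimum-image convention for periodicity)
    rng = rng or random.Random(1)

    def dist2(p, q):
        dx = abs(p[0] - q[0]); dx = min(dx, box - dx)
        dy = abs(p[1] - q[1]); dy = min(dy, box - dy)
        return dx * dx + dy * dy

    centres = []
    for _ in range(n_trials):
        c = (rng.random() * box, rng.random() * box)
        if all(dist2(c, q) >= (2.0 * radius) ** 2 for q in centres):
            centres.append(c)
    return centres
```

As the packing fills, the acceptance rate collapses, which is why extrapolation to saturation is error-prone and an exact no-available-space test (as in the paper) is valuable.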
Kirsch, J D; Drennen, J K
1999-03-01
A new algorithm using common statistics was proposed for nondestructive near-infrared (near-IR) spectroscopic tablet hardness testing over a range of tablet potencies. The spectral features that allow near-IR tablet hardness testing were evaluated. Cimetidine tablets of 1-20% potency and 1-7 kp hardness were used for the development and testing of a new spectral best-fit algorithm for tablet hardness prediction. Actual tablet hardness values determined via a destructive diametral crushing test were used for construction of calibration models using principal component analysis/principal component regression (PCA/PCR) or the new algorithm. Both methods allowed the prediction of tablet hardness over the range of potencies studied. The spectral best-fit method compared favorably to the multivariate PCA/PCR method, but was easier to develop. The new approach offers advantages over wavelength-based regression models because the calculation of a spectral slope averages out the influence of individual spectral absorbance bands. The ability to generalize the hardness calibration over a range of potencies confirms the robust nature of the method.
Cassioli, Andrea; Bardiaux, Benjamin; Bouvier, Guillaume; Mucherino, Antonio; Alves, Rafael; Liberti, Leo; Nilges, Michael; Lavor, Carlile; Malliavin, Thérèse E
2015-01-28
The determination of protein structures satisfying distance constraints is an important problem in structural biology. Whereas the most common method currently employed is simulated annealing, other methods have previously been proposed in the literature. Most of them, however, are designed to find one solution only. In order to explore the feasible conformational space exhaustively, we propose here an interval Branch-and-Prune algorithm (iBP) to solve the Distance Geometry Problem (DGP) associated with protein structure determination. This algorithm is based on a discretization of the problem obtained by recursively constructing a search space having the structure of a tree, and by verifying whether the generated atomic positions are feasible or not by making use of pruning devices. The pruning devices used here are directly related to features of protein conformations. We describe the new algorithm iBP to generate protein conformations satisfying distance constraints, which would potentially allow a systematic exploration of the conformational space. The algorithm iBP has been applied to three α-helical peptides.
Expanding Metabolic Engineering Algorithms Using Feasible Space and Shadow Price Constraint Modules
Tervo, Christopher J.; Reed, Jennifer L.
2014-01-01
While numerous computational methods have been developed that use genome-scale models to propose mutants for the purpose of metabolic engineering, they generally compare mutants based on a single criteria (e.g., production rate at a mutant’s maximum growth rate). As such, these approaches remain limited in their ability to include multiple complex engineering constraints. To address this shortcoming, we have developed feasible space and shadow price constraint (FaceCon and ShadowCon) modules that can be added to existing mixed integer linear adaptive evolution metabolic engineering algorithms, such as OptKnock and OptORF. These modules allow strain designs to be identified amongst a set of multiple metabolic engineering algorithm solutions that are capable of high chemical production while also satisfying additional design criteria. We describe the various module implementations and their potential applications to the field of metabolic engineering. We then incorporated these modules into the OptORF metabolic engineering algorithm. Using an Escherichia coli genome-scale model (iJO1366), we generated different strain designs for the anaerobic production of ethanol from glucose, thus demonstrating the tractability and potential utility of these modules in metabolic engineering algorithms. PMID:25478320
Sun, Liping; Luo, Yonglong; Ding, Xintao; Zhang, Ji
2014-01-01
An important component of a spatial clustering algorithm is the distance measure between sample points in object space. In this paper, the traditional Euclidean distance measure is replaced with innovative obstacle distance measure for spatial clustering under obstacle constraints. Firstly, we present a path searching algorithm to approximate the obstacle distance between two points for dealing with obstacles and facilitators. Taking obstacle distance as similarity metric, we subsequently propose the artificial immune clustering with obstacle entity (AICOE) algorithm for clustering spatial point data in the presence of obstacles and facilitators. Finally, the paper presents a comparative analysis of AICOE algorithm and the classical clustering algorithms. Our clustering model based on artificial immune system is also applied to the case of public facility location problem in order to establish the practical applicability of our approach. By using the clone selection principle and updating the cluster centers based on the elite antibodies, the AICOE algorithm is able to achieve the global optimum and better clustering effect. PMID:25435862
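An obstacle distance in the spirit of this approach can be approximated by a shortest-path search on a grid; the BFS below is a deliberately simple stand-in for the authors' path searching algorithm, and the grid encoding is an assumption for the sketch.

```python
from collections import deque

def obstacle_distance(grid, start, goal):
    # grid: list of equal-length strings, '#' marks an obstacle cell;
    # 4-connected BFS path length serves as the obstacle distance between
    # two cells, replacing plain Euclidean distance when lines are blocked
    rows, cols = len(grid), len(grid[0])
    q = deque([(start, 0)])
    seen = {start}
    while q:
        (r, c), d = q.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return None  # goal unreachable: fully separated by obstacles
```

Using this distance as the clustering similarity keeps points on opposite sides of a wall in different clusters even when they are close in Euclidean terms.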
NASA Technical Reports Server (NTRS)
Fijany, A.; Featherstone, R.
1999-01-01
This paper presents a new formulation of the Constraint Force Algorithm that corrects a major limitation in the original and sheds new light on the relationship between it and other dynamics algorithms.
Bowen, J.; Dozier, G.
1996-12-31
This paper introduces a hybrid evolutionary hill-climbing algorithm that quickly solves Constraint Satisfaction Problems (CSPs). This hybrid uses opportunistic arc and path revision in an interleaved fashion to reduce the size of the search space and to recognize when to quit if a CSP is based on an inconsistent constraint network. This hybrid outperforms a well-known hill-climbing algorithm, the Iterative Descent Method, on a test suite of 750 randomly generated CSPs.
Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms
NASA Astrophysics Data System (ADS)
Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid
2017-09-01
The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. The method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the combination of bundle adjustment and GAs that we have proposed in order to improve 3D reconstruction efficiency and success. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which reliably increases the convergence rate. Extracted geometric constraints are used first to obtain an estimated value of the focal length that helps us in the initialization step. Matching homologous points and constraints is used to estimate the 3D model. In fact, our new method offers several advantages: it reduces the number of estimated parameters in the optimization step, decreases the number of images required, saves time, and yields consistently good-quality 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand. Various data and examples are used to highlight the efficiency and competitiveness of the present approach.
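The GA-based seeding step can be illustrated with a toy real-coded genetic algorithm minimizing a scalar cost; the operators, population size, and the quadratic test function below are illustrative assumptions, not the paper's actual bundle-adjustment objective:

```python
import random

def genetic_minimize(f, bounds, pop_size=30, generations=60, seed=1):
    """Tiny real-coded GA: tournament selection (size 3), midpoint
    crossover, Gaussian mutation, children clamped to the bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=f)   # tournament winner 1
            b = min(rng.sample(pop, 3), key=f)   # tournament winner 2
            child = (a + b) / 2 + rng.gauss(0, 0.05 * (hi - lo))
            children.append(min(max(child, lo), hi))
        pop = children
    return min(pop, key=f)

# Minimise a quadratic with its optimum at x = 2; the GA result can then
# seed a local optimizer, mirroring how the paper seeds bundle adjustment.
best = genetic_minimize(lambda x: (x - 2) ** 2, bounds=(-10.0, 10.0))
```

The point of the sketch is the workflow (GA supplies a good initial value, a gradient-based optimizer refines it), not the specific operators, which vary widely across GA implementations.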
Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan
2016-01-01
Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted, dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering spatial and communication constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstration algorithm for an experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical
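The load-balancing aspect of subdomain allocation can be sketched with the classic longest-processing-time greedy heuristic; the subdomain costs below are made up, and unlike the paper's ILP/K&K/ASRG methods this baseline ignores communication cost entirely:

```python
import heapq

def allocate(subdomain_costs, n_nodes):
    """Longest-processing-time greedy: assign each subdomain, in order of
    descending compute cost, to the currently least-loaded node.
    Returns {node: (total_load, [subdomain names])}."""
    heap = [(0.0, node, []) for node in range(n_nodes)]
    heapq.heapify(heap)
    for sd, cost in sorted(subdomain_costs.items(), key=lambda kv: -kv[1]):
        load, node, assigned = heapq.heappop(heap)
        heapq.heappush(heap, (load + cost, node, assigned + [sd]))
    return {node: (load, assigned) for load, node, assigned in heap}

# Five subdomains with hypothetical per-step compute costs, two nodes:
costs = {"A": 7, "B": 5, "C": 4, "D": 3, "E": 1}
plan = allocate(costs, n_nodes=2)   # both nodes end with load 10
```

A real allocator for NMM-dust would also weigh the halo-exchange communication between geometrically adjacent subdomains, which is exactly what the K&K and ASRG partitioners add on top of pure load balance.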
A new method for automatically measuring Vickers hardness based on region-point detection algorithm
NASA Astrophysics Data System (ADS)
Pan, Yong; Shan, Yuekang; Ji, Yu; Zhang, Shibo
2008-12-01
This paper presents a new method, called the region-point detection algorithm, for automatically analyzing digital images of Vickers hardness indentations. The method effectively overcomes vertex-detection errors caused by curved indentation edges. In the region detection stage, to obtain the four small regions where the four vertices are located, the Sobel operator is applied to extract edge points and a thick-line Hough transform is used to fit the edge lines; the four regions are then selected according to the four intersection points of the thick lines. In the point detection stage, to obtain each vertex's accurate position within its region, the thick-line Hough transform is used again to select useful edge points and the least-squares method is used to accurately fit the lines. The intersection point of the two lines in each region is a vertex of the indentation. The length of the diagonal, and hence the Vickers hardness, can then be calculated. Experiments show that the measured values agree well with the standard values.
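The final step the abstract describes, from diagonal length to hardness, follows the standard Vickers formula HV = 2F·sin(136°/2)/d² ≈ 1.8544·F/d², with the load F in kgf and the mean diagonal d in mm; the load and diagonal values below are illustrative:

```python
import math

def vickers_hardness(force_kgf, diagonal_mm):
    """Vickers hardness from the indenter load (kgf) and the mean
    indentation diagonal (mm): HV = 2 F sin(68 deg) / d^2."""
    return 2 * force_kgf * math.sin(math.radians(136 / 2)) / diagonal_mm ** 2

# A 10 kgf load leaving a 0.25 mm mean diagonal:
hv = vickers_hardness(10, 0.25)   # about 296.7 HV
```

In the paper's pipeline, `diagonal_mm` would come from the pixel distance between the detected vertices times the image scale; that calibration factor is outside this sketch.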
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object that can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to “efficiently” solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), albeit parametric. The only requirement is sufficient computational power, which is controlled by the parameter α ∈ ℕ. Nevertheless, here it is proved that the probability of requiring a value of α > k to obtain a solution for a random graph decreases exponentially, P(α > k) ≤ 2^(-(k+1)), making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
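For contrast with the article's certificate-producing method (whose details are not reproduced here), a classical backtracking 3-colorer shows what a positive certificate, a proper coloring, looks like; note that it is exponential in the worst case and, on failure, offers no proof of absence beyond exhaustive search:

```python
def three_color(adj):
    """Backtracking search for a proper 3-coloring of a graph given as
    {vertex: set_of_neighbours}. Returns a coloring dict, or None if
    the graph is not 3-colorable."""
    vertices = list(adj)
    colors = {}

    def assign(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(3):
            # a color is legal if no already-colored neighbour uses it
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if assign(i + 1):
                    return True
                del colors[v]   # backtrack
        return False

    return dict(colors) if assign(0) else None

# A 4-cycle is 3-colorable; the complete graph K4 is not.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
```

The coloring returned for the cycle is an NP-certificate in the abstract's sense; the `None` for K4 is merely a failed search, which is exactly the gap the article's method aims to close.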
NASA Astrophysics Data System (ADS)
Sohrabi, Foad; Davidson, Timothy N.
2016-06-01
We consider the problem of power allocation for the single-cell multi-user (MU) multiple-input single-output (MISO) downlink with quality-of-service (QoS) constraints. The base station acquires an estimate of the channels and, for a given beamforming structure, designs the power allocation so as to minimize the total transmission power required to ensure that target signal-to-interference-and-noise ratios at the receivers are met, subject to a specified outage probability. We consider scenarios in which the errors in the base station's channel estimates can be modelled as being zero-mean and Gaussian. Such a model is particularly suitable for time division duplex (TDD) systems with quasi-static channels, in which the base station estimates the channel during the uplink phase. Under that model, we employ a precise deterministic characterization of the outage probability to transform the chance-constrained formulation to a deterministic one. Although that deterministic formulation is not convex, we develop a coordinate descent algorithm that can be shown to converge to a globally optimal solution when the starting point is feasible. Insight into the structure of the deterministic formulation yields approximations that result in coordinate update algorithms with good performance and significantly lower computational cost. The proposed algorithms provide better performance than existing robust power loading algorithms that are based on tractable conservative approximations, and can even provide better performance than robust precoding algorithms based on such approximations.
An efficient algorithm for antenna synthesis updating following null-constraint changes
NASA Astrophysics Data System (ADS)
Magdy, M. A.; Paoloni, F. J.; Cheah, J. Y. C.
1985-08-01
The procedure to maximize the array signal-to-noise ratio with null constraints involves an optimization problem that can be solved efficiently using a modified Cholesky decomposition (UD) technique. Following changes in the main lobe and/or null positions, the optimal element weight vector can be updated without the need for a complete new matrix inversion. Some properties of the UD technique can be exploited so that the updating algorithm reprocesses only part of the unit triangular matrix U. Proper ordering of the matrix entries can minimize the dimension of the updated part.
Frutos, M.; Méndez, M.; Tohmé, F.; Broz, D.
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGA-II, SPEA2, and IBEA. Using two performance indices, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool since it yields more solutions in the approximate Pareto frontier. PMID:24489502
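Of the two indices, hypervolume is easy to sketch for a 2-D minimization front: sort the non-dominated points and sum the rectangles they dominate up to a reference point (the points and reference below are illustrative, not from the paper's experiments):

```python
def hypervolume_2d(points, ref):
    """Hypervolume (to be maximized) dominated by a 2-D Pareto front of a
    minimization problem, relative to a reference point that every front
    point dominates. Assumes the points are mutually non-dominated."""
    front = sorted(points)          # ascending f1 implies descending f2
    volume, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        volume += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return volume

# Two non-dominated points against reference point (4, 4):
hv = hypervolume_2d([(1, 3), (2, 1)], (4, 4))   # (4-1)*(4-3) + (4-2)*(3-1) = 7
```

A larger hypervolume means the front both converges better and spreads wider, which is why it is a common single-number summary when comparing MOEAs such as NSGA-II, SPEA2, and IBEA.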
Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.
Jaśkowski, Wojciech; Krawiec, Krzysztof
2011-01-01
Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has recently been shown that every test of such a problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and present a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests and, for some problems, converges to the intrinsic parameter of the problem: its a priori dimension.
Some Very Hard Problems in Nature (Biology …)
2008-09-18
NASA Astrophysics Data System (ADS)
Hou Chin, Jia; Ratnavelu, Kuru
2017-04-01
Community structure is an important feature of a complex network, where detection of the community structure can shed some light on the properties of such a complex network. Amongst the proposed community detection methods, the label propagation algorithm (LPA) emerges as an effective detection method due to its time efficiency. Despite this advantage in computational time, the performance of LPA is affected by randomness in the algorithm. A modified LPA, called CLPA-GNR, was proposed recently and it succeeded in handling the randomness issues in the LPA. However, it did not remove the tendency for trivial detection in networks with a weak community structure. In this paper, an improved CLPA-GNR is therefore proposed. In the new algorithm, the unassigned and assigned nodes are updated synchronously while the assigned nodes are updated asynchronously. A similarity score, based on the Sørensen-Dice index, is implemented to detect the initial communities and for breaking ties during the propagation process. Constraints are utilised during the label propagation and community merging processes. The performance of the proposed algorithm is evaluated on various benchmark and real-world networks. We find that it is able to avoid trivial detection while showing substantial improvement in the quality of detection.
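The similarity score the abstract mentions is based on the Sørensen-Dice index; a minimal sketch applied to two sets (exactly how CLPA-GNR builds the score from neighbour sets is an assumption here):

```python
def dice_similarity(a, b):
    """Sørensen-Dice index of two sets: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0          # convention: two empty sets are identical
    return 2 * len(a & b) / (len(a) + len(b))

# Neighbour sets of two nodes that share two of their three neighbours:
score = dice_similarity({1, 2, 3}, {2, 3, 4})   # 2*2 / (3+3) = 2/3
```

Scores like this give label propagation a deterministic tie-breaker, which is one way to tame the randomness that the plain LPA suffers from.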
Training Neural Networks with Weight Constraints
1993-03-01
Hardware implementation of artificial neural networks imposes a variety of constraints. Finite weight magnitudes exist in both digital and analog … optimizing a network with weight constraints. Comparisons are made to the backpropagation training algorithm for networks with both unconstrained and hard-limited weight magnitudes. Keywords: neural networks, analog, digital, stochastic.
A novel delay-constraint routing algorithm in integrated space-ground communication networks
NASA Astrophysics Data System (ADS)
Yu, Xiaosong; Yang, Liu; Cao, Yuan; Zhao, Yongli; Chen, Xue; Zhang, Jie; Wang, Chunfeng
2016-03-01
In recent years, the integrated space-ground network communication system has played an increasingly important role in earth observation and space information confrontation for civilian and military services. Its characteristic wide coverage, which may be the only way to provide Internet access and communication services in many areas, has greatly increased its significance. This paper discusses the architecture of integrated space-ground communication networks and introduces a novel routing algorithm named the Improved Store-and-forward Routing Mechanism (ISRM) to shorten the transmission delay in such a network. The proposed ISRM algorithm is based on a store-and-forward mechanism, and it tries to find several alternative delay-constrained paths by building a table of encounter probabilities for route-related nodes and a communication timing diagram. Simulations are conducted at the end, and comparisons between ISRM and a baseline algorithm are given. The results show that ISRM achieves relatively good performance in terms of transmission latency in integrated space-ground networks.
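The alternative delay-constrained paths that ISRM searches for can be illustrated by brute-force enumeration on a toy topology; the node names and per-link delays below are invented, and the real ISRM additionally uses encounter probabilities and store-and-forward timing rather than static links:

```python
def delay_constrained_paths(adj, src, dst, max_delay):
    """Enumerate all simple src->dst paths whose total delay is within
    the bound, returned sorted by delay. Brute force, so suitable only
    for small topologies. `adj` maps node -> {neighbour: link_delay}."""
    results = []

    def dfs(node, path, delay):
        if delay > max_delay:
            return                      # prune: bound already exceeded
        if node == dst:
            results.append((delay, path))
            return
        for nxt, d in adj.get(node, {}).items():
            if nxt not in path:         # keep paths simple (loop-free)
                dfs(nxt, path + [nxt], delay + d)

    dfs(src, [src], 0)
    return sorted(results)

# Hypothetical ground stations G1, G2 and satellites S1, S2; delays in ms:
adj = {"G1": {"S1": 10, "S2": 40},
       "S1": {"S2": 15, "G2": 50},
       "S2": {"G2": 20}}
paths = delay_constrained_paths(adj, "G1", "G2", 65)
```

All three feasible paths here meet the 65 ms bound; tightening the bound to 50 ms leaves only the two-hop satellite relay, which mirrors how a delay constraint shrinks the candidate set ISRM can choose from.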
Homotopy Algorithm for Optimal Control Problems with a Second-order State Constraint
Hermant, Audrey
2010-02-15
This paper deals with optimal control problems with a regular second-order state constraint and a scalar control, satisfying the strengthened Legendre-Clebsch condition. We study the stability of the structure of stationary points. It is shown that under a uniform strict complementarity assumption, boundary arcs are stable under sufficiently smooth perturbations of the data. On the contrary, nonreducible touch points are not stable under perturbations. We show that under some reasonable conditions, either a boundary arc or a second touch point may appear. These results allow us to design a homotopy algorithm which automatically detects the structure of the trajectory and initializes the shooting parameters associated with boundary arcs and touch points.
Line Matching Algorithm for Aerial Images Combining Image and Object Space Similarity Constraints
NASA Astrophysics Data System (ADS)
Wang, Jingxue; Wang, Weixi; Li, Xiaoming; Cao, Zhenyu; Zhu, Hong; Li, Miao; He, Biao; Zhao, Zhigang
2016-06-01
A new straight-line matching method for aerial images is proposed in this paper. In contrast to previous work, this method employs similarity constraints that combine radiometric information in the image with geometric attributes in the object plane. First, initial candidate lines and the elevation values of the lines' projection plane are determined from corresponding points in the neighborhoods of the reference lines. Second, the reference line and candidate lines are projected back onto the plane, and similarity-measure constraints are enforced to reduce the number of candidates and to determine the final corresponding lines in a hierarchical way. Third, "one-to-many" and "many-to-one" matching results are transformed into "one-to-one" results by merging multiple lines into a new one, and errors are eliminated simultaneously. Finally, the endpoints of corresponding lines are detected by a line-expansion process combined with an "image-object-image" mapping mode. Experimental results show that the proposed algorithm is able to obtain reliable line matching results for aerial images.
McLaughlin, J C; Zwanziger, J W
1999-01-01
In simple oxide glasses the coordination number and oxidation state of the glass-forming element can be predicted directly from the "8--n" rule. Tellurite glasses, however, are unusual in that the coordination number of oxygen around tellurium varies without a corresponding change in the oxidation state of tellurium. To model sodium tellurite glasses successfully using the reverse Monte Carlo algorithm several new constraints have been added. Changes include extending the original coordination constraint to allow multiple coordination numbers, and the addition of a new coordination constraint to keep the oxidation state of tellurium constant by limiting the number of bridging and nonbridging oxygens bonded to each tellurium atom. In addition, the second moment of the distribution of dipolar couplings for sodium atoms obtained from a spin-echo NMR experiment was added as a new constraint. The resulting real-space models are presented and the effectiveness of the new constraints is discussed.
Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example.
Bacardit, Jaume; Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio
2014-09-01
Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized for having extremely high dimensionalities. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable new
NASA Astrophysics Data System (ADS)
Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Visual odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion from stereo images frame by frame. Feature point extraction and matching is one of the key steps of robotic motion estimation, and it largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or a planetary surface, this paper presents a robust outlier-elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and a brute-force (BF) matcher is used to find the correspondences between the two images for space intersection. The EDC and RANSAC algorithms are then applied to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, EDC and RANSAC are performed iteratively. Even after these steps, mismatched points occasionally remain, so RANSAC is applied a third time to eliminate the effect of those outliers in the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
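The Euclidean Distance Constraint step reduces, in essence, to thresholding the displacement of each putative match before RANSAC; a minimal sketch in which the coordinates and the threshold are illustrative (the real pipeline applies this both across the stereo pair and between consecutive frames):

```python
import math

def edc_filter(matches, threshold):
    """Euclidean Distance Constraint: keep only putative point matches
    (p, q) whose displacement is within the threshold, on the assumption
    of limited inter-frame motion. A cheap pre-filter before RANSAC."""
    return [(p, q) for p, q in matches if math.dist(p, q) <= threshold]

matches = [((10, 10), (12, 11)),   # small motion: kept
           ((30, 40), (90, 95))]   # implausible jump: rejected
inliers = edc_filter(matches, threshold=5.0)
```

RANSAC then handles the outliers that a pure distance threshold cannot catch, such as wrong matches that happen to have small displacements.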
A Greedy reassignment algorithm for the PBS minimum monitor unit constraint.
Lin, Yuting; Kooy, Hanne; Craft, David; Depauw, Nicolas; Flanz, Jacob; Clasie, Benjamin
2016-06-21
Proton pencil beam scanning (PBS) treatment plans are made of numerous unique spots of different weights. These weights are optimized by the treatment planning systems, and sometimes fall below the deliverable threshold set by the treatment delivery system. The purpose of this work is to investigate a Greedy reassignment algorithm to mitigate the effects of these low weight pencil beams. The algorithm is applied during post-processing to the optimized plan to generate deliverable plans for the treatment delivery system. The Greedy reassignment method developed in this work deletes the smallest weight spot in the entire field and reassigns its weight to its nearest neighbor(s) and repeats until all spots are above the minimum monitor unit (MU) constraint. Its performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The Greedy reassignment method was compared against two other post-processing methods. The evaluation criteria was the γ-index pass rate that compares the pre-processed and post-processed dose distributions. A planning metric was developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. For fields with a pass rate of 90 ± 1% the planning metric has a standard deviation equal to 18% of the centroid value showing that the planning metric and γ-index pass rate are correlated for the Greedy reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy reassignment method has 1.8 times better planning metric at 90% pass rate compared to other post-processing methods. As the planning metric and pass rate are correlated, the planning metric could provide an aid for implementing parameters during treatment planning, or even during facility design, in order to yield acceptable pass rates. More facilities are starting to implement PBS and some have spot sizes (one standard deviation) smaller than 5
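The core loop of the Greedy reassignment method can be sketched in 1-D: repeatedly delete the smallest-weight spot and move its weight to the nearest surviving neighbour until every spot meets the minimum-MU threshold. The positions, weights, and `min_mu` value below are illustrative (real PBS spots are arranged in 2-D within each energy layer):

```python
def greedy_reassign(spots, min_mu):
    """Greedy reassignment sketch: `spots` is a list of (position, weight)
    pairs. While any weight is below the minimum-MU threshold, remove the
    smallest-weight spot and add its weight to the nearest surviving
    neighbour. Total weight is conserved."""
    spots = list(spots)
    while len(spots) > 1 and min(w for _, w in spots) < min_mu:
        i = min(range(len(spots)), key=lambda k: spots[k][1])
        pos, w = spots.pop(i)
        j = min(range(len(spots)), key=lambda k: abs(spots[k][0] - pos))
        spots[j] = (spots[j][0], spots[j][1] + w)
    return spots

# One sub-threshold spot at position 0.0 is absorbed by its neighbour:
plan = [(0.0, 0.5), (1.0, 5.0), (3.0, 4.0)]
deliverable = greedy_reassign(plan, min_mu=1.0)
```

Because weight is only moved, never discarded, the integral dose is preserved while the local fluence shifts slightly, which is what the γ-index comparison between pre- and post-processed dose distributions quantifies.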
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain near-optimal solutions and overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experimental results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
Music algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction
ERIC Educational Resources Information Center
Bhasin, Harpreet
2011-01-01
Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…
Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng
2015-10-01
Temperature measurement is one of the important factors for ensuring product quality, reducing production cost, and ensuring experimental safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry; however, the SM method cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true-temperature measurement is proposed, and constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it can be shown from the relationship of brightness temperatures at different wavelengths that emissivity is an increasing function on an interval if the brightness temperature is an increasing or constant function there, and that emissivity satisfies an inequality relating emissivity and wavelength on an interval if the brightness temperature is a decreasing function there. With emissivity-model constraint conditions built on brightness temperature information, the construction of assumed emissivity values is reduced from multiple classes to one class, avoiding unnecessary emissivity constructions. Simulation experiments and comparisons for two different temperature points are carried out on five measured targets with five representative variation trends of real emissivity: decreasing monotonically, increasing monotonically, first decreasing and then increasing with wavelength, first increasing and then decreasing, and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results.
Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice
2007-12-01
Examples of such algorithms include the simulated annealing algorithm [50], genetic algorithms [32], and genetic programming [53, 54]. As shown in Table 3.5 of the report, the greedy schedule performs significantly better. (The remainder of this indexed excerpt consists of fragmentary table entries and reference-list text.)
DeMaere, Matthew Z.
2016-01-01
Background Chromosome conformation capture, coupled with high throughput DNA sequencing in protocols like Hi-C and 3C-seq, has been proposed as a viable means of generating data to resolve the genomes of microorganisms living in naturally occurring environments. Metagenomic Hi-C and 3C-seq datasets have begun to emerge, but the feasibility of resolving genomes when closely related organisms (strain-level diversity) are present in the sample has not yet been systematically characterised. Methods We developed a computational simulation pipeline for metagenomic 3C and Hi-C sequencing to evaluate the accuracy of genomic reconstructions at, above, and below an operationally defined species boundary. We simulated datasets and measured accuracy over a wide range of parameters. Five clustering algorithms were evaluated (2 hard, 3 soft) using an adaptation of the extended B-cubed validation measure. Results When all genomes in a sample are below 95% sequence identity, all of the tested clustering algorithms performed well. When sequence data contains genomes above 95% identity (our operational definition of strain-level diversity), a naive soft-clustering extension of the Louvain method achieves the highest performance. Discussion Previously, only hard-clustering algorithms have been applied to metagenomic 3C and Hi-C data, yet none of these perform well when strain-level diversity exists in a metagenomic sample. Our simple extension of the Louvain method performed the best in these scenarios, however, accuracy remained well below the levels observed for samples without strain-level diversity. Strain resolution is also highly dependent on the amount of available 3C sequence data, suggesting that depth of sequencing must be carefully considered during experimental design. Finally, there appears to be great scope to improve the accuracy of strain resolution through further algorithm development. PMID:27843713
NASA Astrophysics Data System (ADS)
Mizusawa, Masataka; Kurihara, Masahito
Although the maze (or gridworld) is one of the most widely used benchmark problems for real-time search algorithms, it is not sufficiently clear how the difference in the density of randomly positioned obstacles affects the structure of the state spaces and the performance of the algorithms. In particular, recent studies of the so-called phase transition phenomena that could cause dramatic change in their performance in a relatively small parameter range suggest that we should evaluate the performance in a parametric way with the parameter range wide enough to cover potential transition areas. In this paper, we present two measures for characterizing the hardness of randomly generated mazes parameterized by obstacle ratio and relate them to the performance of real-time search algorithms. The first measure is the entropy calculated from the probability of existence of solutions. The second is a measure based on the total initial heuristic error between the actual cost and its heuristic estimation. We show that the maze problems are the most complicated in both measures when the obstacle ratio is around 41%. We then solve the parameterized maze problems with the well-known real-time search algorithms RTA*, LRTA*, and MARTA* to relate their performance to the proposed measures. Evaluating the number of steps required for solving a single problem with the three algorithms and the number of steps required for the convergence of the learning process in LRTA*, we show that they all have a peak when the obstacle ratio is around 41%. The results support the relevance of the proposed measures. We also discuss the performance of the algorithms in terms of other statistical measures to get a quantitative, deeper understanding of their behavior.
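The first hardness measure, the entropy of solution existence, can be illustrated with a toy estimator: generate random mazes at a given obstacle ratio, check solvability by breadth-first search, and take the binary entropy of the empirical solvability probability. The square grid, corner start/goal, and sample sizes are illustrative assumptions, not the authors' setup:

```python
import math
import random
from collections import deque

def solvable(grid):
    """BFS from top-left to bottom-right on a 0/1 obstacle grid."""
    n = len(grid)
    if grid[0][0] or grid[n - 1][n - 1]:
        return False
    seen = {(0, 0)}
    q = deque([(0, 0)])
    while q:
        x, y = q.popleft()
        if (x, y) == (n - 1, n - 1):
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and not grid[nx][ny] \
                    and (nx, ny) not in seen:
                seen.add((nx, ny))
                q.append((nx, ny))
    return False

def solution_entropy(n, ratio, trials=200, rng=random.Random(0)):
    """Estimate the binary entropy of the probability that a random n x n
    maze with the given obstacle ratio has any solution (toy version of
    the paper's first hardness measure, not the authors' code)."""
    cells = [(i, j) for i in range(n) for j in range(n)
             if (i, j) not in ((0, 0), (n - 1, n - 1))]
    k = int(ratio * n * n)
    hits = 0
    for _ in range(trials):
        grid = [[0] * n for _ in range(n)]
        for (i, j) in rng.sample(cells, min(k, len(cells))):
            grid[i][j] = 1
        hits += solvable(grid)
    p = hits / trials
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```

Sweeping `ratio` with this estimator reproduces the qualitative picture in the abstract: the entropy peaks where solvable and unsolvable mazes are about equally likely, which the paper locates near a 41% obstacle ratio.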
Dexter, F; Macario, A; Traub, R D
1999-11-01
The algorithm to schedule add-on elective cases that maximizes operating room (OR) suite utilization is unknown. The goal of this study was to use computer simulation to evaluate 10 scheduling algorithms described in the management sciences literature to determine their relative performance at scheduling as many hours of add-on elective cases as possible into open OR time. From a surgical services information system for two separate surgical suites, the authors collected these data: (1) hours of open OR time available for add-on cases in each OR each day and (2) duration of each add-on case. These empirical data were used in computer simulations of case scheduling to compare algorithms appropriate for "variable-sized bin packing with bounded space." "Variable size" refers to differing amounts of open time in each "bin," or OR. The end point of the simulations was OR utilization (time an OR was used divided by the time the OR was available). Each day there were 0.24 +/- 0.11 and 0.28 +/- 0.23 simulated cases (mean +/- SD) scheduled to each OR in each of the two surgical suites. The algorithm that maximized OR utilization, Best Fit Descending with fuzzy constraints, achieved OR utilizations 4% larger than the algorithm with poorest performance. We identified the algorithm for scheduling add-on elective cases that maximizes OR utilization for surgical suites that usually have zero or one add-on elective case in each OR. The ease of implementation of the algorithm, either manually or in an OR information system, needs to be studied.
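The winning strategy, Best Fit Descending for variable-sized bin packing, is easy to state in code. This sketch omits the paper's fuzzy constraints and treats open OR time as crisp capacity, so it is an illustration of the base heuristic rather than the study's exact algorithm:

```python
def best_fit_descending(case_durations, open_times):
    """Best Fit Descending sketch for variable-sized bin packing with
    bounded space: sort add-on cases longest-first, then place each case
    in the OR whose remaining open time is the smallest that still fits.
    Returns (assignment dict: case index -> OR index, utilization)."""
    remaining = list(open_times)
    assignment = {}
    order = sorted(range(len(case_durations)),
                   key=lambda i: case_durations[i], reverse=True)
    for i in order:
        fits = [r for r in range(len(remaining))
                if remaining[r] >= case_durations[i]]
        if not fits:
            continue  # case cannot be scheduled into today's open time
        r = min(fits, key=lambda k: remaining[k])  # tightest fitting OR
        assignment[i] = r
        remaining[r] -= case_durations[i]
    used = sum(open_times) - sum(remaining)
    return assignment, used / sum(open_times)
```

Choosing the tightest-fitting OR preserves large contiguous blocks of open time for the long cases placed first, which is what drives the utilization gain over naive first-fit scheduling.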
NASA Astrophysics Data System (ADS)
Pühlhofer, Gerd; Benbow, Wystan; Costamante, Luigi; Sol, Helene; Boisson, Catherine; Emmanoulopoulos, Dimitrios; Wagner, Stefan; Horns, Dieter; Giebels, Berrie
VHE observations of the distant (z=0.186) blazar 1ES 1101-232 with H.E.S.S. are used to constrain the extragalactic background light (EBL) in the optical to near-infrared band. As the EBL traces the galaxy formation history of the universe, galaxy evolution models can be tested with the data. In order to measure the EBL absorption effect on a blazar spectrum, we assume that the usual constraints on the hardness of the intrinsic blazar spectrum are not violated. We present an update of the VHE spectrum obtained with H.E.S.S. and the multifrequency data that were taken simultaneously with the H.E.S.S. measurements. The data verify that the broadband characteristics of 1ES 1101-232 are similar to those of other, more nearby blazars, and strengthen the assumptions that were used to derive the EBL upper limit.
A pivoting algorithm for metabolic networks in the presence of thermodynamic constraints.
Nigam, R; Liang, S
2005-01-01
A linear programming algorithm is presented to constructively compute thermodynamically feasible fluxes and change in chemical potentials of reactions for a metabolic network. It is based on physical laws of mass conservation and the second law of thermodynamics that all chemical reactions should satisfy. As a demonstration, the algorithm has been applied to the core metabolic pathway of E. coli.
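A minimal sketch of the linear-programming core: if the second-law directionality constraints have been folded into per-reaction flux bounds, a thermodynamically feasible steady-state flux can be computed with an off-the-shelf LP solver. The toy stoichiometric matrix, bounds, and maximize-a-target objective below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import linprog

def max_flux(S, lb, ub, target):
    """Find a steady-state flux vector v with S @ v = 0 that maximizes
    flux through the target reaction, subject to directionality bounds
    lb <= v <= ub (a scipy linprog sketch; the bounds stand in for the
    second-law constraint that each reaction runs only in its
    thermodynamically allowed direction)."""
    n = S.shape[1]
    c = np.zeros(n)
    c[target] = -1.0  # linprog minimizes, so negate to maximize v[target]
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x if res.success else None
```

For a toy chain (uptake of A, conversion A -> B, export of B) the solver simply saturates the pathway at the tightest bound, while `S @ v = 0` enforces mass conservation at every internal metabolite.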
NASA Astrophysics Data System (ADS)
Kong, Jian; Yao, Yibin; Shum, Che-Kwan
2014-05-01
Due to the sparsity of the world's GNSS stations and limitations on projection angles, GNSS-based ionosphere tomography is a typical ill-posed problem. There are two main ways to solve this problem: first, joint inversion combining multi-source data; second, using a priori or reference ionosphere models, e.g., the IRI or GIM models, as constraints to improve the state of the normal equation. The traditional way of adding constraints with virtual observations can only address the problem of sparse stations; the virtual observations still lack horizontal grid constraints and are therefore unable to fundamentally improve the near-singularity of the normal equation. In this paper, we impose a priori constraints by increasing the virtual observations in n-dimensional space, which can greatly reduce the condition number of the normal equation. After the inversion region is gridded, we can form a stable structure among the grids with loose constraints. We further consider that the ionosphere indeed changes within a certain temporal scale, e.g., two hours. In order to establish a more sophisticated and realistic ionosphere model and obtain real-time ionosphere electron density velocity (IEDV) information, we introduce grid electron density velocity parameters, which can be estimated simultaneously with the electron density parameters. The velocity parameters not only enhance the temporal resolution of the ionosphere model, thereby reflecting more elaborate structure (short-term disturbances) under disturbed ionosphere conditions, but also provide a new way for the real-time detection and prediction of ionosphere 3D changes. We applied the new algorithm to GNSS data collected in Europe for tomographic inversion of ionosphere electron density and velocity at 2-hour resolution, which are consistent throughout the whole-day variation. We then validate the resulting tomography model
Comparison of Algorithms for Reconstructing Electron Spectra from Solar Flare Hard X-Ray Spectra
NASA Astrophysics Data System (ADS)
Emslie, G.; Brown, J. C.; Holman, G. D.; Johns-Krull, C.; Kontar, E. P.; Massone, A. M.; Piana, M.
2005-05-01
The Ramaty High Energy Solar Spectroscopic Imager (RHESSI) is yielding solar flare hard X-ray (HXR) spectra with unprecedented resolution and precision. Such spectra enable the reconstruction of the effective mean source electron spectrum F̄(E) by deconvolution of the photon spectrum I(ɛ) through the bremsstrahlung cross-section Q(ɛ,E). In this paper we report on an evaluation of three distinct "inverting" reconstruction techniques and one forward fitting procedure. We synthesized a variety of hypothetical F̄(E) forms, with a variety of empirical features designed to represent diagnostics of electron acceleration and transport processes, generated the corresponding I(ɛ) with realistic random noise added, and performed "blind" (i.e. without knowledge of F̄[E] in advance) recoveries of F̄(E) for comparison with the originally assumed forms. In most cases the inversion methods gave very good reconstructions of F̄(E). The forward fitting method did well in recovering large-scale features but, somewhat inevitably, failed to recover features outwith the parametric forms of F̄(E), such as dips, bumps and positive slopes. However, examination of the distribution of photon spectrum residuals over ɛ should in principle permit refinement of the parametric form used.
Nakanishi, Takashi
2010-05-28
Dimensionally controlled and hierarchically assembled supramolecular architectures in nano/micro/bulk length scales are formed by self-organization of alkyl-conjugated fullerenes. The simple molecular design of covalently attaching hydrophobic long alkyl chains to fullerene (C(60)) is different from the conventional (hydrophobic-hydrophilic) amphiphilic molecular designs. The two different units of the alkyl-conjugated C(60) are incompatible but both are soluble in organic solvents. The van der Waals intermolecular forces among long hydrocarbon chains and the pi-pi interaction between C(60) moieties govern the self-organization of the alkyl-conjugated C(60) derivatives. A delicate balance between the pi-pi and van der Waals forces in the assemblies leads to a wide variety of supramolecular architectures and paves the way for developing supramolecular soft materials possessing various morphologies and functions. For instance, superhydrophobic films, electron-transporting thermotropic liquid crystals and room-temperature liquids have been demonstrated. Furthermore, the unique morphologies of the assemblies can be utilised as a template for the fabrication of nanostructured metallic surfaces in a highly reproducible and sustainable way. The resulting metallic surfaces can serve as excellent active substrates for surface-enhanced Raman scattering (SERS) owing to their plasmon enhancing characteristics. The use of self-assembling supramolecular objects as a structural template to fabricate innovative well-defined metal nanomaterials links soft matter chemistry to hard matter sciences.
NASA Astrophysics Data System (ADS)
Sun, Junfeng; Chang, Qin; Hu, Xiaohui; Yang, Yueling
2015-04-01
In this paper, we investigate the contributions of hard spectator scattering and annihilation in B → PV decays within the QCD factorization framework. With available experimental data on B → πK*, ρK, πρ and Kϕ decays, comprehensive χ2 analyses of the parameters X_{A,H}^{i,f} (ρ_{A,H}^{i,f}, ϕ_{A,H}^{i,f}) are performed, where X_A^f (X_A^i) and X_H are used to parameterize the endpoint divergences of the (non)factorizable annihilation and hard spectator scattering amplitudes, respectively. Based on the χ2 analyses, it is observed that (1) the topology-dependent parameterization scheme is feasible for B → PV decays; (2) at the current accuracy of experimental measurements and theoretical evaluations, X_H = X_A^i is allowed by B → PV decays, but X_H ≠ X_A^f at 68% C.L.; (3) with the simplification X_H = X_A^i, the parameters X_A^f and X_A^i should be treated individually. The above-described findings are very similar to those obtained from B → PP decays. Numerically, for B → PV decays, we obtain (ρ_{A,H}^i, ϕ_{A,H}^i [°]) = (2.87^{+0.66}_{-1.95}, -145^{+14}_{-21}) and (ρ_A^f, ϕ_A^f [°]) = (0.91^{+0.12}_{-0.13}, -37^{+10}_{-9}) at 68% C.L. With the best-fit values, most of the theoretical results are in good agreement with the experimental data within errors. However, significant corrections to the color-suppressed tree amplitude α2 related to a large ρ_H result in the wrong sign for A_CP^dir(B^- → π^0 K^{*-}) compared with the most recent BABAR data, which presents a new obstacle in solving the "ππ" and "πK" puzzles through α2. A crosscheck with measurements at Belle (or Belle II) and LHCb, which offer higher precision, is urgently expected to confirm or refute this possible mismatch.
Shen, Cheng; Bao, Xuejing; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun
2017-07-10
We propose two noise-robust iterative methods for phase retrieval and diffractive imaging based on the Pauta criterion and the smoothness constraint. The work addresses the noise issue plaguing the application of iterative phase retrieval algorithms in coherent diffraction imaging. Numerical analysis and experimental results demonstrate that our proposed algorithms have higher retrieval accuracy and faster convergence at a high shot-noise level. Moreover, they are shown to be superior in coping with other kinds of noise. Because conventional iteration indicators are inconvenient in experiments, a more reliable retrieval metric is put forward and its effectiveness verified. It should be noted that the proposed methods focus on exploiting the longitudinal diversity. It is anticipated that our work can further expand the application of iterative multi-image phase retrieval methods.
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
A Cutting Plane Algorithm for Problems Containing Convex and Reverse Convex Constraints,
The method of cut generation used in this paper was initially described by Tui for minimizing a concave function subject to linear constraints. Balas, Glover, and Young have recognized the applicability of such 'convexity cuts' to integer problems. This paper shows that these cuts can be used in the solution of an even larger class of nonconvex problems.
Williams, P.T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H^1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography result in reconstructions that have highly anisotropic resolution, and thus special algorithms are developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise-constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint method (TVIC), which extends the applicability of TV regularization to non-piecewise-constant samples, like biological cells. This approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image that is converted to a 3D binary mask, which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (the SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition, including optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object. Comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on image quality.
Genetic algorithm to design Laue lenses with optimal performance for focusing hard X- and γ-rays
NASA Astrophysics Data System (ADS)
Camattari, Riccardo; Guidi, Vincenzo
2014-10-01
To focus hard X- and γ-rays it is possible to use a Laue lens as a concentrator. With this optics it is possible to improve the detection of radiation for several applications, from the observation of the most violent phenomena in the sky to nuclear medicine applications for diagnostic and therapeutic purposes. We implemented a code named LaueGen, which is based on a genetic algorithm and aims to design optimized Laue lenses. The genetic algorithm was selected because optimizing a Laue lens is a complex and discretized problem. The output of the code consists of the design of a Laue lens, which is composed of diffracting crystals that are selected and arranged in such a way as to maximize the lens performance. The code allows managing crystals of any material and crystallographic orientation. The program is structured in such a way that the user can control all the initial lens parameters. As a result, LaueGen is highly versatile and can be used to design very small lenses, for example, for nuclear medicine, or very large lenses, for example, for satellite-borne astrophysical missions.
Wen, Ying; He, Lianghua; von Deneen, Karen M; Lu, Yue
2013-11-01
We present an effective method for brain tissue classification based on diffusion tensor imaging (DTI) data. The method accounts for two main DTI segmentation obstacles: random noise and magnetic field inhomogeneities. In the proposed method, DTI parametric maps were used to resolve intensity inhomogeneities in brain tissue segmentation because they provide complementary information about tissues and define accurate tissue maps. An improved fuzzy c-means algorithm with spatial constraints was proposed to enhance the robustness of DTI segmentation to noise and artifacts. Fuzzy c-means clustering with spatial constraints (FCM_S) can effectively segment images corrupted by noise, outliers, and other imaging artifacts. Its effectiveness stems not only from the introduction of fuzziness in the belongingness of each pixel but also from the exploitation of spatial contextual information. We propose an improved FCM_S applied to DTI parametric maps, which explores the mean and covariance of the spatial feature information for automated segmentation of DTI. Experiments on synthetic images and real-world datasets showed that our proposed algorithms, especially with the new spatial constraints, were more effective.
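The idea of spatially constrained fuzzy c-means can be sketched on a plain 2D image: after each standard membership update, every pixel's memberships are blended with the mean over its 4-neighbourhood, which penalizes isolated noisy labels. This is one simple way to realize FCM_S-style regularization, an illustrative toy version rather than the paper's exact model (the blending weight `alpha` and the quantile initialization are assumptions):

```python
import numpy as np

def fcm_spatial(img, c=2, m=2.0, iters=20, alpha=0.5):
    """Toy fuzzy c-means with a spatial constraint on a 2D intensity
    image: memberships are mixed with their 4-neighbourhood mean each
    iteration. Returns (hard label map, cluster prototypes)."""
    h, w = img.shape
    x = img.ravel().astype(float)
    # deterministic start: spread initial prototypes over the intensity range
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))          # standard FCM membership update
        u /= u.sum(axis=0)
        # spatial step: blend each pixel's membership with its neighbours' mean
        ug = u.reshape(c, h, w)
        pad = np.pad(ug, ((0, 0), (1, 1), (1, 1)), mode="edge")
        neigh = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1] +
                 pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:]) / 4.0
        u = ((1 - alpha) * ug + alpha * neigh).reshape(c, -1)
        um = u ** m
        centers = um @ x / um.sum(axis=1)  # fuzzy prototype update
    return u.argmax(axis=0).reshape(h, w), centers
```

On a clean two-region image this behaves like ordinary FCM; its benefit appears on noisy data, where the neighbourhood blending suppresses salt-and-pepper misclassifications that pure intensity-based memberships would produce.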
NASA Astrophysics Data System (ADS)
Chang, Cheng; Xu, Wei; Chen-Wiegart, Yu-chen Karen; Wang, Jun; Yu, Dantong
2013-12-01
X-ray Absorption Near Edge Structure (XANES) imaging, an advanced absorption spectroscopy technique, at the Transmission X-ray Microscopy (TXM) Beamline X8C of NSLS enables high-resolution chemical mapping (a.k.a. chemical composition identification or chemical spectra fitting). Two-dimensional (2D) chemical mapping has been successfully applied to study many functional materials to determine the percentages of chemical components at each pixel position of the material images. In chemical mapping, the attenuation coefficient spectrum of the material (sample) can be fitted with the weighted sum of standard spectra of individual chemical compositions, where the weights are the percentages to be calculated. In this paper, we first implemented and compared two fitting approaches: (i) a brute-force enumeration method, and (ii) a constrained least-squares minimization algorithm proposed by us. Next, since 2D spectrum fitting can be conducted pixel by pixel, both methods can theoretically be implemented in parallel. In order to demonstrate the feasibility of parallel computing for the chemical mapping problem and investigate how much efficiency improvement can be achieved, we used the second approach as an example and implemented a parallel version for a multi-core computer cluster. Finally, we used a novel way to visualize the calculated chemical compositions, by which domain scientists can grasp the percentage differences easily without looking into the raw data.
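The per-pixel fitting step described above (weights that are non-negative and sum to one, minimizing the residual against the measured spectrum) can be sketched with a generic constrained minimizer. The SLSQP formulation below is an illustration of the constrained least-squares idea, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize

def fit_composition(spectrum, standards):
    """Per-pixel constrained least-squares sketch: find weights w >= 0
    with sum(w) = 1 minimizing ||standards @ w - spectrum||^2, where
    `standards` has one column per reference compound."""
    k = standards.shape[1]
    res = minimize(
        lambda w: np.sum((standards @ w - spectrum) ** 2),
        x0=np.full(k, 1.0 / k),                    # start from a uniform mix
        bounds=[(0.0, 1.0)] * k,                   # percentages are in [0, 1]
        constraints=[{"type": "eq",
                      "fun": lambda w: w.sum() - 1.0}],  # and sum to 100%
        method="SLSQP")
    return res.x
```

Because each pixel is fitted independently, a parallel version only needs to shard the pixel list across workers, which is what makes the cluster implementation in the paper straightforward.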
Efficient Haplotype Block Partitioning and Tag SNP Selection Algorithms under Various Constraints
Chen, Wen-Pei; Lin, Yaw-Ling
2013-01-01
Patterns of linkage disequilibrium play a central role in genome-wide association studies aimed at identifying genetic variation responsible for common human diseases. These patterns in human chromosomes show a block-like structure, and regions of high linkage disequilibrium are called haplotype blocks. A small subset of SNPs, called tag SNPs, is sufficient to capture the haplotype patterns in each haplotype block. Previously developed algorithms completely partition a haplotype sample into blocks while attempting to minimize the number of tag SNPs. However, when resource limitations prevent genotyping all the tag SNPs, it is desirable to restrict their number. We propose two dynamic programming algorithms, incorporating many diversity evaluation functions, for haplotype block partitioning using a limited number of tag SNPs. We use the proposed algorithms to partition the chromosome 21 haplotype data. When the sample is fully partitioned into blocks by our algorithms, the 2,266 blocks and 3,260 tag SNPs are fewer than those identified by previous studies. We also demonstrate that our algorithms find the optimal solution by exploiting the nonmonotonic property of a common haplotype-evaluation function. PMID:24319694
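The flavor of a dynamic program for block partitioning under a tag-SNP budget can be sketched as follows. The interface, a `tag_cost(i, j)` oracle returning the tags needed for a candidate block (or `None` if the interval is not a valid block), is a hypothetical simplification of the paper's diversity functions, and the recurrence maximizes covered SNPs rather than reproducing the authors' exact formulation:

```python
from functools import lru_cache

def max_coverage(n, tag_cost, K):
    """DP sketch: choose non-overlapping blocks over SNPs 0..n-1 to
    maximize the number of SNPs covered, spending at most K tag SNPs,
    where tag_cost(i, j) is the tag cost of block [i, j] or None."""
    @lru_cache(maxsize=None)
    def best(i, k):
        if i >= n:
            return 0
        score = best(i + 1, k)              # option: leave SNP i uncovered
        for j in range(i, n):               # option: open a block at i
            c = tag_cost(i, j)
            if c is not None and c <= k:
                score = max(score, (j - i + 1) + best(j + 1, k - c))
        return score
    return best(0, K)
```

Memoizing on (position, remaining budget) keeps the table polynomial in `n * K`, which is the same structural trick the paper's algorithms rely on.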
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation, and selection, the GA creates successive generations of solutions that evolve, take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with their own strengths and weaknesses. A statistical analysis of some suggested penalty functions
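The conversion described above can be made concrete with the simplest member of the family the study compares, a static quadratic exterior penalty: each violated constraint adds a squared-violation term to the minimization objective, so the GA can search the penalized landscape unconstrained. The fixed penalty coefficient `r` is an illustrative choice; adaptive and death-penalty variants differ precisely in how this term is scheduled:

```python
def penalized_fitness(x, objective, constraints, r=1000.0):
    """Static quadratic exterior penalty sketch: each constraint is a
    callable g with feasibility meaning g(x) <= 0; every violation adds
    r * violation**2 to the minimization objective, turning the
    constrained problem into an unconstrained one a GA can search."""
    penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + r * penalty
```

For example, minimizing x^2 subject to x >= 1 (i.e. g(x) = 1 - x <= 0) gives a penalized value of 1000.0 at the infeasible point x = 0 but only 1.0 at the feasible optimum x = 1, steering the population toward feasibility.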
NASA Astrophysics Data System (ADS)
Li, Dongxing; Zhao, Yan; Dong, Xu
2008-03-01
In general image restoration, the point spread function (PSF) of the imaging system and the observation noise are known a priori. The aero-optic effect arises when objects (e.g., missiles or aircraft) fly at high or supersonic speed. In this situation, the PSF and the observation noise are not known a priori, and the identification and restoration of turbulence-degraded images is a challenging problem. An algorithm based on nonnegativity and support constraints recursive inverse filtering (NAS-RIF) is proposed to identify and restore turbulence-degraded images. The NAS-RIF technique applies to situations in which the scene consists of a finite-support object against a uniformly black, grey, or white background. The restoration procedure of NAS-RIF involves recursive filtering of the blurred image to minimize a convex cost function. The algorithm proposed in this paper filters the turbulence-degraded image before it passes through the recursive filter. A conjugate gradient minimization routine was used to minimize the NAS-RIF cost function. The algorithm based on NAS-RIF is used to identify and restore wind-tunnel test images. The experimental results show that the restoration is clearly improved.
Semenov, Alexander; Zaikin, Oleg
2016-01-01
In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used to solve the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of a partitioning is determined by the total time needed to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space; the estimated effectiveness of a particular partitioning is the value of a predictive function at the corresponding point of this space. The search for an effective partitioning can thus be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding the inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving times were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving times agree well with the estimates obtained by the proposed method.
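The metaheuristic search over the partitioning space can be illustrated with a generic simulated-annealing loop. The bit-vector search space and toy cost below are stand-ins for the actual partitioning encoding and predictive function, which the abstract does not detail.

```python
import math
import random

# Generic simulated annealing: move from point to point in a finite search
# space, guided by a cost function, accepting uphill moves with probability
# exp(-delta / t). Cooling schedule and parameters are illustrative.

def simulated_annealing(cost, start, neighbor, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    rng = random.Random(seed)
    x, t = start, t0
    best, best_cost = x, cost(x)
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling                     # geometric cooling
    return best, best_cost

# Toy search space: 16-bit vectors; minimize the number of 1-bits.
cost = lambda bits: sum(bits)

def flip_one(bits, rng):                 # neighbor: flip one random bit
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

best, c = simulated_annealing(cost, (1,) * 16, flip_one)
```

In the paper's setting, `cost` would be the Monte Carlo estimate of the total solving time of a partitioning, which is far more expensive to evaluate than this toy function.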
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
2010-11-01
application type of analysis, only the methodology is presented here, which includes an algorithm for optimization and a corresponding conservative rate...of convergence based on no learning. The application part will be presented in the near future once data are available. It is expected that the...particular flows between particular pairs of nodes. Although this is an application type of analysis, only the methodologies
Shankar, T.J.; Sokhansanj, Shahabaddine
2010-02-01
Crossover and mutation are the main search operators of the genetic algorithm, and among the most important features distinguishing it from other search algorithms such as simulated annealing. A genetic algorithm adopts crossover and mutation as its main genetic operators. The present work aimed to examine the effect of the genetic algorithm operators crossover and mutation (Pc and Pm), population size (n), and number of iterations (I) on predicting the minimum hardness (N) of a biomaterial extrudate. The second-order polynomial regression equation developed for the extrudate hardness in terms of the independent variables barrel temperature, screw speed, fish content of the feed, and feed moisture content was used as the objective function in the GA analysis. A simple genetic algorithm (SGA) with crossover and mutation operators was used in the present study. A program was developed in C for an SGA with a rank-based fitness selection method. The upper limits of population and iterations were fixed at 100. It was observed that increasing the population and iterations drastically improved the prediction of the function minimum. Minimum predicted hardness values were achievable with a medium population of 50, 50 iterations, and crossover and mutation probabilities of 50% and 0.5%. Further, the Pareto charts indicated that the effect of Pc was more significant at a population of 50, while Pm played a major role at low population (10). A crossover probability of 50% and a mutation probability of 0.5% are the threshold values for the GA to converge over the global search space. A minimum predicted hardness value of 3.82 (N) was observed for n = 60, I = 100, and Pc and Pm of 85% and 0.5%.
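A minimal SGA of the kind described above (the study's program was in C; this Python sketch keeps the same ingredients: rank-based selection, single-point crossover with probability Pc, bitwise mutation with probability Pm). The bitstring objective is a placeholder for the hardness regression equation, which is not reproduced in the abstract.

```python
import random

# Simple genetic algorithm sketch: rank-based fitness selection,
# single-point crossover (probability pc), bitwise mutation (probability pm).

def simple_ga(fitness, n_bits=16, pop_size=50, iters=50, pc=0.5, pm=0.005, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(iters):
        # Rank-based selection: lower cost gets a higher rank weight.
        ranked = sorted(pop, key=fitness)
        weights = [pop_size - i for i in range(pop_size)]
        parents = rng.choices(ranked, weights=weights, k=pop_size)
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            if rng.random() < pc:                 # single-point crossover
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [a[:], b[:]]
        for ind in nxt:                           # bitwise mutation
            for i in range(n_bits):
                if rng.random() < pm:
                    ind[i] = 1 - ind[i]
        pop = nxt
    return min(pop, key=fitness)

# Toy objective: minimize the integer encoded by the bitstring.
best = simple_ga(lambda b: int("".join(map(str, b)), 2))
```

With the study's reported settings (n = 50, I = 50, Pc = 50%, Pm = 0.5%) even this toy run converges quickly; selection pressure from the rank weights does most of the work at such a low mutation rate.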
NASA Astrophysics Data System (ADS)
Berger, Gilles; Million-Picallion, Lisa; Lefevre, Grégory; Delaunay, Sophie
2015-04-01
Introduction: The hydrothermal crystallization of silicate phases in the Si-Al-Fe system may create industrial constraints encountered in the nuclear industry in at least two contexts: the geological repository for nuclear wastes and the formation of hard sludges in the steam generators of PWR nuclear plants. In the first situation, the chemical reactions between the Fe canister and the surrounding clays have been extensively studied in laboratory [1-7] and pilot experiments [8]. These studies demonstrated that the high reactivity of metallic iron leads to the formation of Fe-silicates, berthierine-like, over a wide range of temperatures. By contrast, the formation of deposits in the steam generators of PWR plants, called hard sludges, is a newer and less studied issue which can affect reactor performance. Experiments: We present here a preliminary set of experiments reproducing the formation of hard sludges under conditions representative of the steam generator of a PWR power plant: 275°C, diluted solutions maintained at low potential by hydrazine addition and at alkaline pH by low concentrations of amines and ammonia. Magnetite, a corrosion by-product of the secondary circuit, is the source of iron, while aqueous Si and Al, the major impurities in this system, are supplied either as trace elements in the circulating solution or by addition of amorphous silica and alumina when considering confined zones. The fluid chemistry is monitored by sampling aliquots of the solution. Eh and pH are continuously measured by hydrothermal Cormet© electrodes implanted in a titanium hydrothermal reactor. The transformation, or not, of the solid fraction was examined post-mortem. These experiments evidenced the role of Al colloids as precursors of cements composed of kaolinite and boehmite, and the passivation of amorphous silica (becoming unreactive), likely by sorption of aqueous iron. But no Fe-bearing phase formed, in contrast to many published studies on the Fe
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
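The geometric conditions for circular no-fly zones mentioned above reduce to a segment-circle intersection test. The point-to-segment distance formulation below is a standard construction offered as an illustration, not necessarily the article's exact condition.

```python
import math

# Feasibility test: does a straight flight segment p-q pass through a
# circular no-fly zone of given center and radius?

def segment_intersects_circle(p, q, center, radius):
    """True if segment p-q comes within `radius` of `center`."""
    px, py = p; qx, qy = q; cx, cy = center
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                        # degenerate segment
        t = 0.0
    else:                                      # clamp projection onto [p, q]
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nx, ny = px + t * dx, py + t * dy          # nearest point on the segment
    return math.hypot(cx - nx, cy - ny) <= radius

# A segment cutting through a unit circle at the origin, and one clear of it:
hit = segment_intersects_circle((-2, 0.5), (2, 0.5), (0, 0), 1.0)   # True
miss = segment_intersects_circle((-2, 1.5), (2, 1.5), (0, 0), 1.0)  # False
```

Such tests are cheap enough to be evaluated inside each iteration of an SQP solver as constraint checks on the candidate waypoints.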
Algorithms for magnetic tomography—on the role of a priori knowledge and constraints
NASA Astrophysics Data System (ADS)
Hauer, Karl-Heinz; Potthast, Roland; Wannert, Martin
2008-08-01
Magnetic tomography investigates the reconstruction of currents from their magnetic fields. Here, we will study a number of projection methods in combination with Tikhonov regularization for stabilization of the solution of the Biot-Savart integral equation Wj = H, with the Biot-Savart integral operator W: (L^2(\Omega))^3 \to (L^2(\partial G))^3, where \overline{\Omega} \subset G . In particular, we study the role of a priori knowledge when incorporated into the choice of the projection spaces X_n \subset (L^2(\Omega))^3, n\in {\bb N} , for example the condition div j = 0, or the use of the full boundary value problem div(σ grad φ_E) = 0 in Ω, ν · σ grad φ_E = g on ∂Ω, with some known function g, where j = σ grad φ_E and σ is an anisotropic matrix-valued conductivity. We will discuss and compare these schemes, investigating the ill-posedness of each algorithm in terms of the behaviour of the singular values of the corresponding operators, both when a priori knowledge is incorporated and when the geometrical setting is modified. Finally, we will numerically evaluate the stability constants in the practical setup of magnetic tomography for fuel cells and thus calculate usable error bounds for this important application area.
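The Tikhonov stabilization used above amounts, after discretization, to replacing the ill-posed system Wj = h by the regularized normal equations (WᵀW + αI)j = Wᵀh. A self-contained sketch on a tiny dense system (real use would call a linear-algebra library; the solver here is plain Gaussian elimination for illustration):

```python
# Tikhonov-regularized least squares: solve (W^T W + alpha I) x = W^T h.

def tikhonov_solve(W, h, alpha):
    n = len(W[0])
    # Build A = W^T W + alpha I and b = W^T h.
    A = [[sum(W[k][i] * W[k][j] for k in range(len(W)))
          + (alpha if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(W[k][i] * h[k] for k in range(len(W))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# With W = I the regularized solution shrinks the data toward zero:
j = tikhonov_solve([[1.0, 0.0], [0.0, 1.0]], [2.0, 4.0], alpha=1.0)  # [1.0, 2.0]
```

The regularization parameter α trades data fidelity against stability, exactly the effect the paper quantifies through the singular values of the projected operators.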
NASA Astrophysics Data System (ADS)
Naser, Hassan; Mouftah, Hussein
2004-05-01
We investigate the availability performance of networks with shared mesh restoration and demonstrate that these networks cannot provide highly available protection services. A major factor in the poor performance of shared mesh restoration is that the resources at backup links are shared among demands. If multiple service-affecting failures occur in the network a multitude of these demands will rush to utilize the spare resources on backup links. These resources are not adequate to serve all of these demands simultaneously. We propose a heuristic routing algorithm that attempts to improve the availability performance of shared mesh restoration. We measure the likelihood that a backup link will not be available to restore a newly arrived demand if or when more than one failure occurs in the network. We adjust the backup bandwidth on that link if the measured likelihood exceeds a preset threshold. As a typical representative, we show that the downtime improves by 7%, 11%, and 18% when the total backup bandwidth in the network is increased by 5%, 10%, and 20%, respectively. These values are obtained through a series of fitting experiments.
Soft Constraints in Nonlinear Spectral Fitting with Regularized Lineshape Deconvolution
Zhang, Yan; Shen, Jun
2012-01-01
This paper presents a novel method for incorporating a priori knowledge into regularized nonlinear spectral fitting as soft constraints. Regularization was recently introduced to lineshape deconvolution as a method for correcting spectral distortions. Here, the deconvoluted lineshape was described by a new type of lineshape model and applied to spectral fitting. The non-linear spectral fitting was carried out in two steps that were subject to hard constraints and soft constraints, respectively. The hard constraints step provided a starting point and, therefore, only the changes of the relevant variables were constrained in the soft constraints step and incorporated into the linear sub-steps of the Levenberg-Marquardt algorithm. The method was demonstrated using localized averaged echo time point resolved spectroscopy (PRESS) proton spectroscopy of human brains. PMID:22618964
NASA Astrophysics Data System (ADS)
Huseyin Turan, Hasan; Kasap, Nihat; Savran, Huseyin
2014-03-01
Nowadays, every firm uses telecommunication networks in different amounts and ways in order to complete their daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which there exist several network providers offering different pricing and quality of service (QoS) schemes. The QoS level guaranteed by network providers and the minimum quality level of service, which is needed for accomplishing the operations are denoted as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to the special case of a well-known two-dimensional bin packing problem, which is famous for its computational complexity. We propose two different heuristic solution procedures that have the capability of solving the resulting nonlinear mixed integer programming model with fuzzy constraints. In conclusion, the efficiency of each algorithm is tested in several test instances to demonstrate the applicability of the methodology.
Lonchampt, J.; Fessart, K.
2013-07-01
The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements, or logistic investments such as spare-parts purchases. The methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare-parts inventories. The component model has been widely discussed over the years, but the spare-part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare-part model: although components are indeed independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical, or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
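Activity selection, one of the examples cited above, illustrates the dominance idea concretely: sorting by finish time and always taking the earliest-finishing compatible activity dominates every alternative choice, because it never excludes a schedule a later-finishing choice would permit. A standard textbook rendering:

```python
# Greedy activity selection: sort by finish time, take each activity
# compatible with the last one chosen. The earliest-finishing compatible
# choice dominates all others, which is what makes the greedy step safe.

def select_activities(intervals):
    """intervals: list of (start, finish); returns a maximum compatible subset."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with everything chosen
            chosen.append((start, finish))
            last_finish = finish
    return chosen

picked = select_activities(
    [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9),
     (6, 10), (8, 11), (8, 12), (2, 13), (12, 14)])
# → [(1, 4), (5, 7), (8, 11), (12, 14)]
```

The synthesis framework in the paper derives such dominance arguments systematically rather than by the ad hoc exchange proofs usually given for this example.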
NASA Astrophysics Data System (ADS)
Beghein, C.; Lebedev, S.; van der Hilst, R.
2005-12-01
Interstation dispersion curves can be used to obtain regional 1D profiles of the crust and upper mantle. Unlike phase velocity maps, dispersion curves can be determined with small errors and for a broad frequency band. We want to determine what features interstation surface wave dispersion curves can constrain. Using synthetic data and the Neighbourhood Algorithm, a direct search approach that provides a full statistical assessment of model uncertainties and trade-offs, we investigate how well crustal and upper mantle structure can be recovered with fundamental Love and Rayleigh waves. We also determine how strong the trade-offs between the different parameters are, and what depth resolution we can expect to achieve with the current level of precision of this type of data. Synthetic dispersion curves between approximately 7 and 340 s were assigned realistic error bars, i.e. a relative uncertainty increasing with period but with an amplitude consistent with that achieved in ``real'' measurements. These dispersion curves were generated by two types of isotropic models differing only in their crustal structure. One represents an oceanic region (shallow Moho) and the other corresponds to an Archean continental area with a larger Moho depth. Preliminary results show that while the Moho depth, the shear-velocity structure in the transition zone between 200 and 410 km depth, and that between the base of the crust and 50 km depth are generally well recovered, crustal structure and Vs between 50 and 200 km depth are more difficult to constrain with Love waves or Rayleigh waves alone because of a trade-off between the two layers. When these two layers are combined, the resolution of Vs between 50 and 100 km depth appears to improve. Structure deeper than the transition zone is not constrained by the data because of a lack of sensitivity. We also explore the possibility of differentiating between an upper and lower crust, and we investigate whether a joint
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
NASA Astrophysics Data System (ADS)
de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein
2012-12-01
In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
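The method of separating axes mentioned above can be shown in its simplest 2D form: two convex polygons are disjoint iff the projections onto some edge normal do not overlap. The 3D polyhedral version used with FBMC additionally tests face normals and pairwise edge cross-products, but the projection logic is identical; this 2D sketch is an illustration, not the paper's implementation.

```python
# 2D separating-axes overlap test for convex polygons.

def _project(poly, axis):
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_overlap(p1, p2):
    """p1, p2: lists of (x, y) vertices in order. True if the polygons overlap."""
    for poly in (p1, p2):
        for i in range(len(poly)):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % len(poly)]
            axis = (y1 - y2, x2 - x1)            # edge normal as candidate axis
            a_min, a_max = _project(p1, axis)
            b_min, b_max = _project(p2, axis)
            if a_max < b_min or b_max < a_min:   # separating axis found
                return False
    return True                                  # no axis separates them

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
touching = convex_overlap(square, [(1, 1), (3, 1), (3, 3), (1, 3)])  # True
apart = convex_overlap(square, [(3, 3), (5, 3), (5, 5), (3, 5)])     # False
```

In a hard-particle Monte Carlo move, a trial translation or rotation is rejected as soon as any such overlap test returns True.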
Statistical physics of hard optimization problems
NASA Astrophysics Data System (ADS)
Zdeborová, Lenka
2009-06-01
Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult, it is believed that the number of operations required to minimize the cost function is in the most difficult cases exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
USDA-ARS?s Scientific Manuscript database
This research was initiated to investigate the association between flour breadmaking traits and mixing characteristics and empirical dough rheological property under thermal stress. Flour samples from 30 hard spring wheat were analyzed by a mixolab standard procedure at optimum water absorptions. Mi...
Li, Baojun; Deng, Junjun; Lonn, Albert H; Hsieh, Jiang
2012-10-01
The purpose of this work is to further improve image quality, in particular to suppress boundary artifacts, in extended scan field-of-view (SFOV) reconstruction. To combat projection truncation artifacts and to restore truncated objects outside the SFOV, an algorithm was previously proposed based on fitting a partial water cylinder at the site of the truncation. Previous studies have shown this algorithm can simultaneously eliminate truncation artifacts inside the SFOV and preserve the total amount of attenuation, owing to its emphasis on consistency conditions of the total attenuation in the parallel sampling geometry. Unfortunately, the water-cylinder fitting parameters of this 2D algorithm are sensitive to noise fluctuations in the projection samples from image to image, causing anatomy boundary artifacts, especially during helical scans with higher pitch (≥1.0). To suppress the boundary artifacts and further improve the image quality, the authors propose to use a roughness penalty function, based on the Huber regularization function, to reinforce z-dimensional boundary consistency. Extensive phantom and clinical tests have been conducted to test the accuracy and robustness of the enhanced algorithm. Significant reduction in the boundary artifacts is observed in both phantom and clinical cases with the enhanced algorithm. The proposed algorithm also reduces the percent difference error between the horizontal and vertical diameters to well below 1%. The algorithm also noticeably improves CT number uniformity outside the SFOV compared to the original algorithm. The proposed algorithm is capable of suppressing boundary artifacts and improving CT number uniformity outside the SFOV.
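The Huber function underlying the roughness penalty above is quadratic for small residuals and linear for large ones, so small slice-to-slice fitting jitter is smoothed while genuine anatomy changes are not over-penalized. A minimal sketch; the threshold `delta` is a tuning assumption, not a value from the paper.

```python
# Huber penalty: quadratic inside |r| <= delta, linear outside.

def huber(r, delta=1.0):
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r               # quadratic region
    return delta * (a - 0.5 * delta)     # linear region, continuous at delta

small = huber(0.5)   # 0.5 * 0.25 = 0.125
large = huber(3.0)   # 1.0 * (3.0 - 0.5) = 2.5
```

Applied along z, the penalty is summed over differences between fitting parameters of adjacent images and added to the data term before minimization.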
NASA Astrophysics Data System (ADS)
Trunfio, Roberto
2015-06-01
In a recent article, Guo, Cheng and Wang proposed a randomized search algorithm, called modified generalized extremal optimization (MGEO), to solve the quay crane scheduling problem for container groups under the assumption that schedules are unidirectional. The authors claim that the proposed algorithm is capable of finding new best solutions with respect to a well-known set of benchmark instances taken from the literature. However, as shown in this note, there are some errors in their work that can be detected by analysing the Gantt charts of two solutions provided by MGEO. In addition, some comments on the method used to evaluate the schedule corresponding to a task-to-quay crane assignment and on the search scheme of the proposed algorithm are provided. Finally, to assess the effectiveness of the proposed algorithm, the computational experiments are repeated and additional computational experiments are provided.
Temporal Constraint Reasoning With Preferences
NASA Technical Reports Server (NTRS)
Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca
2001-01-01
A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while such problems are NP-hard in general, some restrictions on the shape of the preference functions and on the structure of the preference set can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
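The shortest-path machinery being generalized can be sketched on a plain simple temporal network: an edge u →(w) v encodes t_v − t_u ≤ w, and the network is consistent iff its distance graph has no negative cycle, which Bellman-Ford detects. This illustrates the base case only; the paper's contribution replaces fixed weights with preference functions.

```python
# Bellman-Ford on the distance graph of a simple temporal network.
# Edge (u, v, w) encodes the constraint t_v - t_u <= w.

def bellman_ford(n, edges, source=0):
    """Returns shortest distances from source, or None on a negative cycle."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):               # n - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # still relaxable => negative cycle
        if dist[u] + w < dist[v]:
            return None
    return dist

# Event 1 at most 10 after event 0 and at least 2 after it (edge 1 -> 0, -2):
dist = bellman_ford(2, [(0, 1, 10), (1, 0, -2)])   # consistent: [0.0, 10.0]
bad = bellman_ford(2, [(0, 1, 3), (1, 0, -5)])     # None: infeasible network
```

A `None` result means the constraint set admits no schedule at all; the preference-based generalization instead ranks the feasible schedules.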
Bloom, Joshua S.; Prochaska, J.X.; Pooley, D.; Blake, C.W.; Foley, R.J.; Jha, S.; Ramirez-Ruiz, E.; Granot, J.; Filippenko, A.V.; Sigurdsson, S.; Barth, A.J.; Chen, H.-W.; Cooper, M.C.; Falco, E.E.; Gal, R.R.; Gerke, B.F.; Gladders, M.D.; Greene, J.E.; Hennawi, J.; Ho, L.C.; Hurley, K.; /UC, Berkeley, Astron. Dept. /Lick Observ. /Harvard-Smithsonian Ctr. Astrophys. /Princeton, Inst. Advanced Study /KIPAC, Menlo Park /Penn State U., Astron. Astrophys. /UC, Irvine /MIT, MKI /UC, Davis /UC, Berkeley /Carnegie Inst. Observ. /UC, Berkeley, Space Sci. Dept. /Michigan U. /LBL, Berkeley /Spitzer Space Telescope
2005-06-07
The localization of the short-duration, hard-spectrum gamma-ray burst GRB050509b by the Swift satellite was a watershed event. Never before had a member of this mysterious subclass of classic GRBs been rapidly and precisely positioned in a sky accessible to the bevy of ground-based follow-up facilities. Thanks to the nearly immediate relay of the GRB position by Swift, we began imaging the GRB field 8 minutes after the burst and have continued during the 8 days since. Though the Swift X-ray Telescope (XRT) discovered an X-ray afterglow of GRB050509b, the first ever of a short-hard burst, thus far no convincing optical/infrared candidate afterglow or supernova has been found for the object. We present a re-analysis of the XRT afterglow and find an absolute position of R.A. = 12h36m13.59s, Decl. = +28°59'04.9'' (J2000), with a 1σ uncertainty of 3.68'' in R.A., 3.52'' in Decl.; this is about 4'' to the west of the XRT position reported previously. Close to this position is a bright elliptical galaxy with redshift z = 0.2248 ± 0.0002, about 1' from the center of a rich cluster of galaxies. This cluster has detectable diffuse emission, with a temperature of kT = 5.25 (+3.36, −1.68) keV. We also find several (≈11) much fainter galaxies consistent with the XRT position from deep Keck imaging and have obtained Gemini spectra of several of these sources. Nevertheless we argue, based on positional coincidences, that the GRB and the bright elliptical are likely to be physically related. We thus have discovered reasonable evidence that at least some short-duration, hard-spectra GRBs are at cosmological distances. We also explore the connection of the properties of the burst and the afterglow, finding that GRB050509b was underluminous in both of these relative to long-duration GRBs. However, we also demonstrate that the ratio of the blast-wave energy to the γ-ray energy is consistent with that of long-duration GRBs. We thus find plausible
NGC 5548: LACK OF A BROAD Fe Kα LINE AND CONSTRAINTS ON THE LOCATION OF THE HARD X-RAY SOURCE
Brenneman, L. W.; Elvis, M.; Krongold, Y.; Liu, Y.; Mathur, S.
2012-01-01
We present an analysis of the co-added and individual 0.7-40 keV spectra from seven Suzaku observations of the Sy 1.5 galaxy NGC 5548 taken over a period of eight weeks. We conclude that the source has a moderately ionized, three-zone warm absorber, a power-law continuum, and exhibits contributions from cold, distant reflection. Relativistic reflection signatures are not significantly detected in the co-added data, and we place an upper limit on the equivalent width of a relativistically broad Fe Kα line at EW ≤ 26 eV at 90% confidence. Thus NGC 5548 can be labeled as a 'weak' type 1 active galactic nucleus (AGN) in terms of its observed inner disk reflection signatures, in contrast to sources with very broad, strong iron lines such as MCG-6-30-15, which are likely much fewer in number. We compare physical properties of NGC 5548 and MCG-6-30-15 that might explain this difference in their reflection properties. Though there is some evidence that NGC 5548 may harbor a truncated inner accretion disk, this evidence is inconclusive, so we also consider light bending of the hard X-ray continuum emission in order to explain the lack of relativistic reflection in our observation. If the absence of a broad Fe Kα line is interpreted in the light-bending context, we conclude that the source of the hard X-ray continuum lies at radii r_s ≳ 100 r_g. We note, however, that light-bending models must be expanded to include a broader range of physical parameter space in order to adequately explain the spectral and timing properties of average AGNs, rather than just those with strong, broad iron lines.
Not Available
1987-01-01
This technical manual is a handbook dealing with all aspects of hardness testing. Every hardness testing method is fully covered, from Rockwell to ultrasonic hardness testing. Specific hardness testing problems are also discussed, and methods are offered for many applications. One chapter examines how to select the correct hardness testing method. A directory of manufacturers, distributors and suppliers of hardness testing equipment and supplies in the United States and Canada is also included. The book consists of eight chapters and an appendix. It discusses common concepts of hardness, and the theories and methods of hardness testing. Coverage includes specific hardness testing methods (Brinell, Rockwell, Vickers, and microhardness testing) and other hardness testing methods, such as scleroscope, ultrasonic, scratch and file testing, and hardness evaluation by eddy current testing.
NASA Astrophysics Data System (ADS)
Wang, Ke; Huang, Zhi; Zhong, Zhihua
2014-11-01
Due to large variations in the environment, with an ever-changing background and vehicles of different shapes, colors and appearances, implementing a real-time on-board vehicle recognition system with high adaptability, efficiency and robustness in complicated environments remains challenging. This paper introduces a simultaneous detection and tracking framework for robust on-board vehicle recognition based on monocular vision technology. The framework utilizes a novel layered machine learning method and a particle filter to build a multi-vehicle detection and tracking system. In the vehicle detection stage, a layered machine learning method is presented, which combines coarse search and fine search to obtain the target using the AdaBoost-based training algorithm. A pavement segmentation method based on characteristic similarity is proposed to estimate the most likely pavement area. Efficiency and accuracy are enhanced by restricting vehicle detection to the downsized pavement area. In the vehicle tracking stage, a multi-objective tracking algorithm based on target state management and a particle filter is proposed. The proposed system is evaluated on roadway video captured under a variety of traffic, illumination, and weather conditions. The results show that, under conditions of proper illumination and clear vehicle appearance, the proposed system achieves a 91.2% detection rate and a 2.6% false detection rate. Comparative experiments against typical algorithms show that the presented algorithm reduces the false detection rate by nearly half at the cost of a 2.7%-8.6% lower detection rate. This paper proposes a multi-vehicle detection and tracking system, which is promising for implementation in an on-board vehicle recognition system with high precision, strong robustness and low computational cost.
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is presented: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents to resolve a path. It can be applied to computational problems that reduce to path-based formulations, and its implementation can be treated as a shortest path problem over a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for NP-hard problems related to path discovery, as well as for many practical optimization problems, and its derivation extends to general shortest path problems.
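Since the abstract above frames the agent-based method as solving shortest path problems, a classical baseline is useful for comparison. The sketch below is a standard Dijkstra implementation on a hypothetical weighted graph (the graph `g`, its node names, and weights are illustrative assumptions, not data from the paper); an agent-based path search can be validated against its output.

```python
import heapq

def dijkstra(graph, source, target):
    """Classic shortest-path baseline against which an agent-based
    search (such as the smell-trail agents above) can be compared."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path source -> ... -> target.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[target], path[::-1]

# Hypothetical adjacency list: node -> [(neighbor, edge weight), ...]
g = {"a": [("b", 1.0), ("c", 4.0)],
     "b": [("c", 1.0), ("d", 5.0)],
     "c": [("d", 1.0)]}
```

Here the optimal route a-b-c-d (cost 3.0) beats both direct alternatives, which is the kind of result the simulated agents are reported to evolve.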
NASA Astrophysics Data System (ADS)
Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie
2013-03-01
Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. Previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox virus induced disease before inoculation and after disease progression. The automated estimates are in close agreement with manual segmentation.
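The hard C-means algorithm mentioned above is the crisp-membership counterpart of fuzzy C-means (equivalent to k-means): each point belongs to exactly one cluster, and centroids are the means of their members. The following is a minimal illustrative sketch on 2-D points; the data, cluster count, and iteration budget are assumptions for demonstration, not the paper's CT pipeline.

```python
import random

def hard_c_means(points, c, iters=50, seed=0):
    """Hard C-means: crisp (all-or-nothing) cluster membership,
    alternating nearest-centroid assignment and centroid update."""
    rng = random.Random(seed)
    centers = rng.sample(points, c)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(
                range(c),
                key=lambda j: (p[0] - centers[j][0]) ** 2
                            + (p[1] - centers[j][1]) ** 2)
        # Update step: recompute each centroid as the mean of its members.
        for j in range(c):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return labels, centers

# Two well-separated toy blobs stand in for lesion vs. background voxels.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
labels, centers = hard_c_means(pts, 2)
```

In the imaging application the "points" would be voxel feature vectors, and the resulting crisp partition is what makes partial-volume estimates straightforward to accumulate.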
Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
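To illustrate the hard-constraint idea in the simplest possible setting, the sketch below runs a scalar Kalman filter and then projects each estimate onto the inequality constraint x >= 0. This is an illustrative reduction, not the paper's turbofan model: in the general vector case the projection step is a quadratic program, which collapses to clipping for a scalar state. The noise values and measurements are made up.

```python
def kf_step(x, P, z, q, r, lo=0.0):
    """One scalar Kalman filter step (random-walk model x_k = x_{k-1} + w,
    direct measurement z = x + v) followed by projection onto the hard
    constraint x >= lo. For a scalar state the QP projection is clipping."""
    # Predict: state unchanged, covariance grows by process noise q.
    P = P + q
    # Update with measurement z (measurement matrix H = 1, noise r).
    K = P / (P + r)          # Kalman gain
    x = x + K * (z - x)
    P = (1.0 - K) * P
    # Enforce the hard inequality constraint by projection.
    x = max(x, lo)
    return x, P

# Hypothetical run: negative measurements try to drag the estimate
# below zero, but the projection keeps it feasible.
x, P = 1.0, 1.0
for z in [-0.3, -0.1, 0.2]:
    x, P = kf_step(x, P, z, q=0.01, r=0.5)
```

The health-parameter estimation above works the same way in higher dimensions, with the clipping step replaced by a constrained quadratic program over all 8 health parameters.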
Compact location problems with budget and communication constraints
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-07-01
The authors consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first weight function subject to a diameter or sum-constraint with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal value with respect to the first weight function, violating the constraint with respect to the second weight function by a factor of at most β. They show that in general obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. They also present efficient approximation algorithms for several of the problems studied, when both edge-weight functions obey the triangle inequality.
Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem
2014-01-01
Linear constraint minimum variance (LCMV) is an adaptive beamforming technique commonly applied to cancel interfering signals and to steer a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam towards the target user precisely, nor to reduce interference sufficiently by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through a conventional empirical approach. To address this problem, artificial intelligence (AI) techniques are explored to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique to improve the LCMV weights. Simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA in LCMV through suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization than the PSO technique. The algorithms were implemented in Matlab. PMID:25147859
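Of the optimizers named above, particle swarm optimization is the most compact to sketch. The code below is a generic PSO minimizing a stand-in objective (a sphere function); in the beamforming application the objective would instead score candidate LCMV weight vectors by SINR. The swarm size, inertia, and acceleration coefficients are common defaults, not the paper's settings.

```python
import random

def pso(f, dim, n=20, iters=200, seed=1):
    """Minimal particle swarm optimizer of the kind used above to tune
    beamformer weights; f here is a stand-in objective, not an SINR model."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]           # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]   # global best position/value
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            v = f(xs[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, xs[i][:]
                if v < gval:
                    gval, gbest = v, xs[i][:]
    return gbest, gval

best, val = pso(lambda x: sum(t * t for t in x), dim=3)
```

Swapping the sphere objective for a simulated SINR evaluation of candidate weight vectors yields the hybrid PSO-LCMV scheme the abstract describes.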
Designing a fuzzy scheduler for hard real-time systems
NASA Technical Reports Server (NTRS)
Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami
1992-01-01
In hard real-time systems, tasks have to be performed not only correctly, but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the inherent uncertainty in dynamic hard real-time systems compounds the difficulty of scheduling. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate searching for a feasible schedule. A set of fuzzy rules is proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found and, therefore, certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.
Robust H∞ stabilization of a hard disk drive system with a single-stage actuator
NASA Astrophysics Data System (ADS)
Harno, Hendra G.; Kiin Woon, Raymond Song
2015-04-01
This paper considers a robust H∞ control problem for a hard disk drive system with a single stage actuator. The hard disk drive system is modeled as a linear time-invariant uncertain system where its uncertain parameters and high-order dynamics are considered as uncertainties satisfying integral quadratic constraints. The robust H∞ control problem is transformed into a nonlinear optimization problem with a pair of parameterized algebraic Riccati equations as nonconvex constraints. The nonlinear optimization problem is then solved using a differential evolution algorithm to find stabilizing solutions to the Riccati equations. These solutions are used for synthesizing an output feedback robust H∞ controller to stabilize the hard disk drive system with a specified disturbance attenuation level.
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Róbert; Toroczkai, Zoltán; Ercsey-Ravasz, Mária
2016-05-01
Transient chaos is a ubiquitous phenomenon characterizing the dynamics of phase-space trajectories evolving towards a steady-state attractor in physical systems as diverse as fluids, chemical reactions, and condensed matter systems. Here we show that transient chaos also appears in the dynamics of certain efficient algorithms searching for solutions of constraint satisfaction problems that include scheduling, circuit design, routing, database problems, and even Sudoku. In particular, we present a study of the emergence of hardness in Boolean satisfiability (k-SAT), a canonical class of constraint satisfaction problems, by using an analog deterministic algorithm based on a system of ordinary differential equations. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos of the dynamical system corresponding to the analog algorithm, and it expresses the rate at which the trajectory approaches a solution. We show that for a given density of constraints and fixed number of Boolean variables N, the hardness of formulas in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at α_χ in the random 3-SAT ensemble where dynamical trajectories become transiently chaotic. A similar behavior is found in 4-SAT as well; however, such a transition does not occur for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter, κ ∼ N^{B|α − α_χ|^{1−γ}} with 0 < γ < 1. We demonstrate that the transition is generated by the appearance of metastable basins in the solution space as the density of constraints α is increased.
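A minimal sketch of analog SAT dynamics of the kind described above can be built from per-clause functions K_m and exponentially growing auxiliary weights a_m, integrated with an Euler step: spins s_i live in [-1, 1], K_m vanishes when clause m is fully satisfied, and the gradient pushes spins toward satisfying assignments while violated clauses gain weight. The step size, initialization, and tiny 3-SAT instance below are illustrative assumptions; the published system is a continuous-time ODE, and this discretization only conveys its structure.

```python
def solve_ksat(clauses, n, dt=0.05, steps=5000):
    """Euler-integrated sketch of analog SAT dynamics: continuous spins
    s_i, clause terms K_m = 2^-k * prod(1 - c_mi * s_i), gradient flow
    ds_i/dt = 2 * sum_m a_m * c_mi * K_mi * K_m, and weights da_m/dt = a_m K_m."""
    # Deterministic small pseudo-random initialization (an assumption).
    s = [0.1 * ((i * 7919) % 13 - 6) / 6 for i in range(n)]
    a = [1.0] * len(clauses)

    def sat(assign):
        return all(any(assign[abs(l) - 1] == (l > 0) for l in cl)
                   for cl in clauses)

    for _ in range(steps):
        assign = [x > 0 for x in s]   # discretize by sign
        if sat(assign):
            return assign
        grad = [0.0] * n
        for m, cl in enumerate(clauses):
            K = 2.0 ** (-len(cl))
            for l in cl:
                c = 1.0 if l > 0 else -1.0
                K *= (1.0 - c * s[abs(l) - 1])
            for l in cl:
                i = abs(l) - 1
                c = 1.0 if l > 0 else -1.0
                denom = 1.0 - c * s[i]
                Kmi = K / denom if abs(denom) > 1e-12 else 0.0
                grad[i] += 2.0 * a[m] * c * Kmi * K
            a[m] *= (1.0 + dt * K)    # violated clauses gain weight
        for i in range(n):
            s[i] = max(-1.0, min(1.0, s[i] + dt * grad[i]))
    return None

# Tiny satisfiable 3-SAT instance; positive integers are positive literals.
clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, 3), (1, 2, -3), (-1, -2, 3)]
assign = solve_ksat(clauses, 3)
```

On hard instances near the transition, trajectories of exactly this kind of system wander chaotically before escaping, which is what the escape rate κ quantifies.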
Generalizing Atoms in Constraint Logic
NASA Technical Reports Server (NTRS)
Page, C. David, Jr.; Frisch, Alan M.
1991-01-01
This paper studies the generalization of atomic formulas, or atoms, that are augmented with constraints on or among their terms. The atoms may also be viewed as definite clauses whose antecedents express the constraints. Atoms are generalized relative to a body of background information about the constraints. This paper first examines generalization of atoms with only monadic constraints. The paper develops an algorithm for the generalization task and discusses algorithm complexity. It then extends the algorithm to apply to atoms with constraints of arbitrary arity. The paper also presents semantic properties of the generalizations computed by the algorithms, making the algorithms applicable to such problems as abduction, induction, and knowledge base verification. The paper emphasizes the application to induction and presents a PAC-learning result for constrained atoms.
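The constraint-free core of generalizing two atoms is Plotkin-style least general generalization (anti-unification): identical subterms are kept, and each distinct pair of mismatching subterms is replaced by one shared variable, so repeated mismatches map to the same variable. A minimal sketch follows; the term encoding (tuples as compound terms, strings as constants, '?n' as introduced variables) is an illustrative choice, not the paper's notation.

```python
def lgg(t1, t2, subst=None):
    """Plotkin-style least general generalization (anti-unification).
    Tuples are compound terms (functor, arg1, ...); strings are constants."""
    if subst is None:
        subst = {}
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        # Same functor and arity: generalize argument-wise,
        # sharing the substitution so repeated pairs reuse one variable.
        return (t1[0],) + tuple(lgg(a, b, subst)
                                for a, b in zip(t1[1:], t2[1:]))
    # Mismatch: map this pair of subterms to one shared variable.
    key = (t1, t2)
    if key not in subst:
        subst[key] = "?%d" % len(subst)
    return subst[key]

# Two ground atoms; their lgg keeps the shared structure and
# introduces a single variable for the repeated tom/ann mismatch.
a1 = ("parent", "tom", ("son", "tom"))
a2 = ("parent", "ann", ("son", "ann"))
g = lgg(a1, a2)
```

The paper's contribution sits on top of this operation: generalization is computed relative to background constraint information rather than over bare terms as here.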
Order-to-chaos transition in the hardness of random Boolean satisfiability problems
NASA Astrophysics Data System (ADS)
Varga, Melinda; Sumi, Robert; Ercsey-Ravasz, Maria; Toroczkai, Zoltan
Transient chaos is a phenomenon characterizing the dynamics of phase space trajectories evolving towards an attractor in physical systems. We show that transient chaos also appears in the dynamics of certain algorithms searching for solutions of constraint satisfaction problems (e.g., Sudoku). We present a study of the emergence of hardness in Boolean satisfiability (k-SAT) using an analog deterministic algorithm. Problem hardness is defined through the escape rate κ, an invariant measure of transient chaos, which expresses the rate at which the trajectory approaches a solution. We show that the hardness in random k-SAT ensembles has a wide variation, approximable by a lognormal distribution. We also show that when increasing the density of constraints α, hardness appears through a second-order phase transition at αc in the random 3-SAT ensemble, where dynamical trajectories become transiently chaotic; no such transition occurs for 2-SAT. This behavior also implies a novel type of transient chaos in which the escape rate has an exponential-algebraic dependence on the critical parameter. We demonstrate that the transition is generated by the appearance of non-solution basins in the solution space as the density of constraints is increased.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise, and phase unwrapping errors. Particularly challenging is separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. A set of linear equations are formulated for each spatial point that are functions of the deformation velocities
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
Compact location problems with budget and communication constraints
Krumke, S.O.; Noltemeier, H.; Ravi, S.S.; Marathe, M.V.
1995-05-01
We consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first distance function under diameter or sum-constraints with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal function value, violating the constraint with respect to the second distance function by a factor of at most β. We observe that in general obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. We present efficient approximation algorithms for the case when both edge-weight functions obey the triangle inequality. For the problem of minimizing the diameter under a diameter constraint with respect to the second weight function, we provide a (2, 2)-approximation algorithm. We also show that no polynomial-time algorithm can provide an (α, 2 − ε)- or (2 − ε, β)-approximation for any fixed ε > 0 and α, β ≥ 1, unless P = NP. This result is proved to remain true even if one fixes ε′ > 0 and allows the algorithm to place only 2p|V|^(6 − ε′) facilities. Our techniques can be extended to the case when either the objective or the constraint is of sum-type, and also to handle additional weights on the nodes of the graph.
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support the development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data, with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
Constraints in Genetic Programming
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
1996-01-01
Genetic programming refers to a class of genetic algorithms utilizing a generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements: one reduces only the number of program instances; the other reduces both the space of program structures and their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are no less efficient than the standard genetic programming operators provided the constraints are preprocessed; the necessary mechanisms are identified.
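A small sketch of the space-restriction idea: random program trees are generated so that function arities are respected and a constraint table rules out certain child symbols at certain argument positions. The function set, terminal set, and the single constraint below are hypothetical stand-ins for the paper's specification language, which is far more expressive.

```python
import random

# Hypothetical language: functions with arities, plus terminals.
FUNCS = {"add": 2, "mul": 2, "neg": 1}
TERMS = ["x", "1"]
# Hypothetical constraint table: which symbols may fill each argument
# slot of each function; unlisted (parent, slot) pairs allow everything.
ALLOWED_CHILD = {
    ("neg", 0): {"add", "mul", "x", "1"},   # forbids redundant neg(neg(...))
}

def gen(depth, parent=None, slot=0, rng=random):
    """Generate a random tree respecting arities and the constraint table."""
    allowed = ALLOWED_CHILD.get((parent, slot), set(FUNCS) | set(TERMS))
    if depth == 0:
        choices = [t for t in TERMS if t in allowed]
    else:
        choices = [s for s in list(FUNCS) + TERMS if s in allowed]
    sym = rng.choice(choices)
    if sym in FUNCS:
        return (sym,) + tuple(gen(depth - 1, sym, i, rng)
                              for i in range(FUNCS[sym]))
    return sym

def valid(t):
    """Check arity correctness and the child constraints."""
    if isinstance(t, str):
        return t in TERMS
    sym, args = t[0], t[1:]
    if sym not in FUNCS or len(args) != FUNCS[sym]:
        return False
    for i, a in enumerate(args):
        head = a[0] if isinstance(a, tuple) else a
        if head not in ALLOWED_CHILD.get((sym, i), set(FUNCS) | set(TERMS)):
            return False
        if not valid(a):
            return False
    return True

random.seed(3)
tree = gen(3)
```

Closure of crossover and mutation under `valid` is the harder part the paper addresses; generation as above only seeds the initial population inside the restricted space.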
Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints
NASA Astrophysics Data System (ADS)
Cassandras, Christos G.; Zhuang, Shixin
2005-11-01
Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard-to-replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
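The convexity intuition behind dynamic voltage scaling can be sketched for a single non-preemptive task: under a toy model where energy per cycle grows quadratically with frequency, the optimal policy is the lowest frequency that still meets the hard deadline. This is only an illustration of the principle; the energy model and parameters are assumptions, not the paper's intra-task controller.

```python
def optimal_frequency(cycles, arrival, deadline, f_min, f_max, k=1.0):
    """Pick the lowest frequency that meets a hard deadline for one
    non-preemptive task, under the toy model energy = k * f**2 * cycles
    (dynamic power ~ V^2 ~ f^2 per cycle).  Returns (f, energy), or None
    if even f_max cannot meet the deadline."""
    slack = deadline - arrival
    if slack <= 0 or cycles / f_max > slack:
        return None                     # infeasible even at top speed
    f = max(f_min, cycles / slack)      # stretch the task to fill the slack
    return f, k * f * f * cycles
```

Because energy is convex and increasing in frequency, running exactly as slowly as the deadline permits minimizes energy, which is the behavior an inter- or intra-task controller exploits.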
Agyepong, Irene Akua
2015-03-01
A major constraint to the application of any form of knowledge and principles is the awareness, understanding, and acceptance of that knowledge and those principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be awareness and understanding of ST and how to apply it. This is a fundamental constraint; given the increasing desire to enable the application of ST concepts in LMIC health systems and to understand and evaluate the effects, an essential first step is enabling a widespread as well as deeper understanding of ST and how to apply it.
The Probabilistic Admissible Region with Additional Constraints
NASA Astrophysics Data System (ADS)
Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.
The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea
CUTTING PLANE METHODS WITHOUT NESTED CONSTRAINT SETS
General conditions are given for the convergence of a class of cutting-plane algorithms without requiring that the constraint sets for the... cutting-planes include that of Kelley and a generalization of that used by Zoutendijk and Veinott. For algorithms with nested constraint sets, these
Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.
Karthikeyan, M; Raja, T Sree Ranga
2015-01-01
Economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, harmony memory considering rate (HMCR) and pitch adjusting rate (PAR), are changed dynamically and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.
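The mechanism described, harmony search with dynamically varying HMCR/PAR and a polynomial-mutation perturbation, can be sketched on a toy box-constrained minimization problem. The linear parameter schedules, the mutation distribution index, and the sphere objective are assumptions for illustration, not the paper's exact formulas or the ELD objective.

```python
import random

def dhspm(obj, dim, bounds, hms=10, iters=2000, seed=1):
    """Harmony search with dynamically adjusted HMCR/PAR and a polynomial
    mutation step, sketched on a box-constrained minimization problem."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    hm.sort(key=obj)
    for t in range(iters):
        hmcr = 0.7 + 0.25 * t / iters       # assumed linear schedules
        par = 0.45 - 0.3 * t / iters
        new = []
        for j in range(dim):
            if rng.random() < hmcr:
                x = hm[rng.randrange(hms)][j]          # memory consideration
                if rng.random() < par:
                    # polynomial mutation (eta controls perturbation width)
                    u, eta = rng.random(), 20.0
                    d = (2*u)**(1/(eta+1)) - 1 if u < 0.5 \
                        else 1 - (2*(1-u))**(1/(eta+1))
                    x += d * (hi - lo)
            else:
                x = rng.uniform(lo, hi)                # random consideration
            new.append(min(hi, max(lo, x)))
        if obj(new) < obj(hm[-1]):                     # replace worst harmony
            hm[-1] = new
            hm.sort(key=obj)
    return hm[0]

sphere = lambda x: sum(v * v for v in x)
best = dhspm(sphere, dim=5, bounds=(-10, 10))
```

Varying HMCR upward and PAR downward shifts the search from exploration toward exploitation as iterations proceed, which is the stated motivation for making these parameters dynamic.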
Wynant, Willy; Abrahamowicz, Michal
2016-11-01
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
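The alternating-estimation idea, maximizing the criterion over mutually exclusive parameter subsets in turn, can be shown on a deliberately simple problem: fitting y = a*x + b by alternating exact conditional updates of a and b. The least-squares toy model is an assumption for illustration; the paper applies the scheme to flexible extensions of Cox's model.

```python
def ace_fit(xs, ys, iters=50):
    """Alternating conditional estimation sketch: hold b fixed and solve
    for a, then hold a fixed and solve for b, repeating.  Each conditional
    step is the exact optimizer of the least-squares criterion over its
    own parameter subset."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        # step 1: argmin_a  sum (y - a*x - b)^2,  given b
        a = sum(x * (y - b) for x, y in zip(xs, ys)) / sum(x * x for x in xs)
        # step 2: argmin_b  sum (y - a*x - b)^2,  given a
        b = sum(y - a * x for x, y in zip(xs, ys)) / len(xs)
    return a, b
```

Because each step cannot decrease the conditional criterion, the iterates converge here to the joint least-squares solution; in the paper's setting the same alternation sidesteps the lack of a standard joint maximizer.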
Nan, Zhufen; Chi, Xuefen
2016-12-20
The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.
A hybrid approach to protein folding problem integrating constraint programming with local search
2010-01-01
Background The protein folding problem remains one of the most challenging open problems in computational biology. Simplified models in terms of lattice structure and energy function have been proposed to ease the computational hardness of this optimization problem. Heuristic search algorithms and constraint programming are two common techniques for approaching this problem. The present study introduces a novel hybrid approach to simulate the protein folding problem using a constraint programming technique integrated within local search. Results Using the face-centered-cubic lattice model and a 20 amino acid pairwise interaction energy function for the protein folding problem, a constraint programming technique has been applied to generate the neighbourhood conformations that are to be used in a generic local search procedure. Experiments have been conducted for a few small and medium sized proteins. Results have been compared with both a pure constraint programming approach and local search using a well-established local move set. Substantial improvements have been observed in terms of final energy values within acceptable runtime using the hybrid approach. Conclusion Constraint programming approaches usually provide optimal results but become slow as the problem size grows. Local search approaches are usually faster but do not guarantee optimal solutions and tend to get stuck in local minima. The encouraging results obtained on the small proteins show that these two approaches can be combined efficiently to obtain better quality solutions within acceptable time. It also encourages future researchers to adopt hybrid techniques to solve other hard optimization problems. PMID:20122212
Knox, C.E.
1983-03-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
Genetic algorithms for protein threading.
Yadgari, J; Amir, A; Unger, R
1998-01-01
Despite many years of effort, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major progress reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof cannot yet be given, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.
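The fixed-length string representation can be sketched as follows: one integer per structural segment giving its start position in the sequence, with validity meaning the segments appear in order, without overlap, inside the sequence. The segment-based encoding and the resample-until-valid crossover below are illustrative assumptions, not the paper's exact operators.

```python
import random

def valid(alignment, seg_lens, seq_len):
    """A threading alignment as a fixed-length string: alignment[i] is
    the sequence position where structural segment i starts.  Valid iff
    segments appear in order, without overlap, within the sequence."""
    pos = 0
    for start, length in zip(alignment, seg_lens):
        if start < pos or start + length > seq_len:
            return False
        pos = start + length
    return True

def crossover(p1, p2, seg_lens, seq_len, rng):
    """One-point crossover that resamples the cut point until the child
    is valid -- a simple validity-preserving operator."""
    for _ in range(100):
        cut = rng.randrange(1, len(p1))
        child = p1[:cut] + p2[cut:]
        if valid(child, seg_lens, seq_len):
            return child
    return p1                 # fall back to a known-valid parent
```

Fixing the string length makes validation and genetic operators cheap, which is the representational advantage the abstract highlights.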
Real-Time Scheduling in Heterogeneous Systems Considering Cache Reload Time Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Miryani, Mohammad Reza; Naghibzadeh, Mahmoud
Since optimal assignment of tasks in a multiprocessor system is, in almost all practical cases, an NP-hard problem, in recent years some algorithms based on genetic algorithms have been proposed. Some of these algorithms have considered real-time applications with multiple objectives, total tardiness, completion time, etc. Here, we propose a suboptimal static scheduler of nonpreemptable tasks in hard real-time heterogeneous multiprocessor systems considering time constraints and cache reload time. The approach makes use of genetic algorithm to minimize total completion time and number of processors used, simultaneously. One important issue which makes this research different from previous ones is cache reload time. The method is implemented and the results are compared against a similar method.
Improving Steiner trees of a network under multiple constraints
Krumke, S.O.; Noltemeier, H.; Marathe, M.V.; Ravi, R.; Ravi, S.S.
1996-07-01
The authors consider the problem of decreasing the edge weights of a given network so that the modified network has a Steiner tree in which two performance measures are simultaneously optimized. They formulate these problems, referred to as bicriteria network improvement problems, by specifying a budget on the total modification cost, a constraint on one of the performance measures and using the other performance measure as a minimization objective. Network improvement problems are known to be NP-hard even when only one performance measure is considered. The authors present the first polynomial time approximation algorithms for bicriteria network improvement problems. The approximation algorithms are for two pairs of performance measures, namely (diameter, total cost) and (degree, total cost). These algorithms produce solutions which are within a logarithmic factor of the optimum value of the minimization objective while violating the constraints only by a logarithmic factor. The techniques also yield approximation schemes when the given network has bounded treewidth. Many of the approximation results can be extended to more general network design problems.
NASA Astrophysics Data System (ADS)
Liu, Jingfa; Jiang, Yucong; Li, Gang; Xue, Yu; Liu, Zhaoxia; Zhang, Zhen
2015-08-01
The optimal layout problem of circle group in a circular container with performance constraints of equilibrium belongs to a class of NP-hard problem. The key obstacle of solving this problem is the lack of an effective global optimization method. We convert the circular packing problem with performance constraints of equilibrium into the unconstrained optimization problem by using quasi-physical strategy and penalty function method. By putting forward a new updating mechanism of the histogram function in energy landscape paving (ELP) method and incorporating heuristic conformation update strategies into the ELP method, we obtain an improved ELP (IELP) method. Subsequently, by combining the IELP method and the local search (LS) procedure, we put forward a hybrid algorithm, denoted by IELP-LS, for the circular packing problem with performance constraints of equilibrium. We test three sets of benchmarks consisting of 21 representative instances from the current literature. The proposed algorithm breaks the records of all 10 instances in the first set, and achieves the same or even better results than other methods in literature for 10 out of 11 instances in the second and third sets. The computational results show that the proposed algorithm is an effective method for solving the circular packing problem with performance constraints of equilibrium.
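The quasi-physical, penalty-function conversion described above can be sketched directly: the unconstrained objective sums squared pairwise overlaps, squared protrusion beyond the container, and the squared deviation of the mass centroid from the centre (the equilibrium constraint). The weights and functional forms below are illustrative assumptions.

```python
import math

def penalty(layout, radii, R, masses=None, w=(1.0, 1.0, 1.0)):
    """Quasi-physical penalty for packing circles of given radii into a
    container of radius R centred at the origin: squared pairwise overlap
    + squared protrusion outside the container + squared deviation of the
    mass centroid from the centre (equilibrium).  Zero iff feasible."""
    masses = masses or [r * r for r in radii]
    e_overlap = e_out = 0.0
    for i, ((xi, yi), ri) in enumerate(zip(layout, radii)):
        e_out += max(0.0, math.hypot(xi, yi) + ri - R) ** 2
        for (xj, yj), rj in zip(layout[i+1:], radii[i+1:]):
            e_overlap += max(0.0, ri + rj - math.hypot(xi - xj, yi - yj)) ** 2
    m = sum(masses)
    cx = sum(mi * x for mi, (x, _) in zip(masses, layout)) / m
    cy = sum(mi * y for mi, (_, y) in zip(masses, layout)) / m
    return w[0] * e_overlap + w[1] * e_out + w[2] * (cx * cx + cy * cy)
```

Any global optimizer (such as the improved energy-landscape-paving method in the paper) can then minimize this single smooth function instead of handling the constraints explicitly.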
Ji, Bin; Yuan, Xiaohui; Yuan, Yanbin
2017-02-24
Continuous berth allocation problem (BAPC) is a major optimization problem in transportation engineering. It mainly aims at minimizing the port stay time of ships by optimally scheduling ships to the berthing areas along quays while satisfying several practical constraints. Most of the previous literature handles the BAPC by heuristics with different constraint handling strategies, as it is proved NP-hard. In this paper, we transform the constrained single-objective BAPC (SBAPC) model into an unconstrained multiobjective BAPC (MBAPC) model by converting the constraint violation into another objective, which is known as the multiobjective optimization (MOO) constraint handling technique. Then a bias selection modified non-dominated sorting genetic algorithm II (MNSGA-II) is proposed to optimize the MBAPC, in which an archive is designed as an efficient complementary mechanism to provide search bias toward feasible solutions. Finally, the proposed MBAPC model and the MNSGA-II approach are tested on instances from the literature and from generation. We compared the results obtained by MNSGA-II with other MOO algorithms under the MBAPC model, and with the results obtained by single-objective oriented methods under the SBAPC model. The comparison shows the feasibility of the MBAPC model and the advantages of the MNSGA-II algorithm.
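The core of the MOO constraint-handling technique can be sketched generically: score each candidate by (objective value, total constraint violation) and keep the Pareto non-dominated front, so feasibility becomes a second objective to be minimized. This is a minimal sketch of the principle, not the paper's full MNSGA-II with archive and bias selection.

```python
def violation(x, constraints):
    """Total constraint violation: how far each g(x) <= 0 is broken."""
    return sum(max(0.0, g(x)) for g in constraints)

def dominates(a, b):
    """Standard Pareto dominance on (objective, violation) pairs."""
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def moo_rank(pop, f, constraints):
    """Score each candidate as (f(x), violation(x)) and return the
    non-dominated front -- the MOO view of constraint handling."""
    scored = [((f(x), violation(x, constraints)), x) for x in pop]
    return [x for s, x in scored
            if not any(dominates(t, s) for t, _ in scored)]
```

Infeasible candidates with small violations survive alongside feasible ones, which preserves search information that hard rejection strategies would discard.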
NASA Astrophysics Data System (ADS)
Yukita, Mihoko; Ptak, Andrew; Maccarone, Thomas J.; Hornschemeier, Ann E.; Wik, Daniel R.; Pottschmidt, Katja; Antoniou, Vallia; Baganoff, Frederick K.; Lehmer, Bret; Zezas, Andreas; Boyd, Patricia T.; Kennea, Jamie; Page, Kim L.
2016-04-01
Thanks to its better sensitivity and spatial resolution, NuSTAR allows us to investigate the E>10 keV properties of nearby galaxies. We now know that starburst galaxies, containing very young stellar populations, have X-ray spectra which drop quickly above 10 keV. We extend our investigation of hard X-ray properties to an older stellar population system, the bulge of M31. The NuSTAR and Swift simultaneous observations reveal a bright hard source dominating the M31 bulge above 20 keV, which is likely to be a counterpart of Swift J0042.6+4112 previously detected (but not classified) in the Swift BAT All-sky Hard X-ray Survey. This source had been classified as an XRB candidate in various Chandra and XMM-Newton studies; however, since it was not clear that it is the counterpart to the strong Swift J0042.6+4112 source at higher energies, the previous E < 10 keV observations did not generate much attention. The NuSTAR and Swift spectra of this source drop quickly at harder energies as observed in sources in starburst galaxies. The X-ray spectral properties of this source are very similar to those of an accreting pulsar; yet, we do not find a pulsation in the NuSTAR data. The existing deep HST images indicate no high mass donors at the location of this source, further suggesting that this source has an intermediate or low mass companion. The most likely scenario for the nature of this source is an X-ray pulsar with an intermediate/low mass companion similar to the Galactic Her X-1 system. We will also discuss other possibilities in more detail.
NASA Technical Reports Server (NTRS)
Zweben, Monte
1991-01-01
The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocations for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its applications to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
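The constraint-based iterative repair loop can be sketched generically: constraints expose a violation test and a repair action, and the scheduler repeatedly applies a repair for some violated constraint until none remain (or an iteration budget is exhausted). The toy precedence constraint below is an assumption for illustration, not GERRY's constraint encoding.

```python
def iterative_repair(schedule, constraints, max_iters=100):
    """Constraint-based iterative repair sketch: while any constraint is
    violated, apply that constraint's repair to the schedule in place.
    Constraints are (violated, repair) pairs of callables."""
    for _ in range(max_iters):
        broken = [c for c in constraints if c[0](schedule)]
        if not broken:
            return schedule, True
        broken[0][1](schedule)          # repair one violation per pass
    return schedule, False

# Toy precedence constraint: task b must start after task a finishes.
def precedence(a, b, dur):
    def viol(s):
        return s[b] < s[a] + dur[a]
    def fix(s):
        s[b] = s[a] + dur[a]
    return (viol, fix)

dur = {"a": 3, "b": 2, "c": 1}
sched = {"a": 0, "b": 1, "c": 2}        # b and c start too early
cons = [precedence("a", "b", dur), precedence("b", "c", dur)]
sched, ok = iterative_repair(sched, cons)
```

In a full system, preference criteria would also be encoded as constraints, and repair selection would be guided by heuristics rather than taking the first violation.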
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Thus, current studies on the JSSP concentrate mainly on applying different methods to improve the heuristics for optimizing the JSSP. However, there still exist many obstacles to efficient optimization of the JSSP, namely low efficiency and poor reliability, which can easily trap the optimization process into local optima. Therefore, to address this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfying the constraints by considering consistency technology and a constraint spreading algorithm in order to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstrating the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
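A bare-bones ACO loop, of the kind such JSSP studies build on, can be sketched on a toy sequencing problem: ants build an order of jobs probabilistically, biased by pheromone, which evaporates and is deposited along the best sequence found. The successor-cost matrix stands in for JSSP's richer machine/operation model; all parameters here are illustrative assumptions.

```python
import random

def aco_sequence(costs, n_ants=20, iters=50, rho=0.1, seed=3):
    """Minimal ACO sketch: find an order of jobs starting at job 0 that
    minimizes total cost, where costs[i][j] is the cost of running job j
    immediately after job i.  Pheromone tau[i][j] biases ants toward
    successor choices that worked well before."""
    rng = random.Random(seed)
    n = len(costs)
    tau = [[1.0] * n for _ in range(n)]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            tour, cur = [0], 0
            remaining = set(range(1, n))
            while remaining:                      # build one tour
                cand = sorted(remaining)
                w = [tau[cur][j] / (1e-9 + costs[cur][j]) for j in cand]
                cur = rng.choices(cand, weights=w)[0]
                remaining.remove(cur)
                tour.append(cur)
            c = sum(costs[a][b] for a, b in zip(tour, tour[1:]))
            if c < best_cost:
                best, best_cost = tour, c
        # evaporate, then deposit pheromone along the best tour so far
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for a, b in zip(best, best[1:]):
            tau[a][b] += 1.0 / best_cost
    return best, best_cost
```

Constraint-handling extensions of the kind the paper studies would prune or reweight the candidate set `cand` so that ants never build (or rarely build) infeasible sequences.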
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, one of the key problems in a decision support system is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA), and hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and run a high risk of ending in a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.
On Constraints in Assembly Planning
Calton, T.L.; Jones, R.E.; Wilson, R.H.
1998-12-17
Constraints on assembly plans vary depending on product, assembly facility, assembly volume, and many other factors. Assembly costs and other measures to optimize vary just as widely. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their products and production environments. We begin this article by surveying the types of user criteria, both constraints and quality measures, that have been accepted by assembly planning systems to date. The survey is organized along several dimensions, including strategic vs. tactical criteria; manufacturing requirements vs. requirements of the automated planning process itself; and the information needed to assess compliance with each criterion. The latter strongly influences the efficiency of planning. We then focus on constraints. We describe a framework to support a wide variety of user constraints for intuitive and efficient assembly planning. Our framework expresses all constraints on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. Constraints are implemented as simple procedures that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Fast replanning enables an interactive plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to a number of complex assemblies, including one with 472 parts.
Reformulating Constraints for Compilability and Efficiency
NASA Technical Reports Server (NTRS)
Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin
1992-01-01
KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability-- Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency-- If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.
Analysis of Algorithms: Coping with Hard Problems
ERIC Educational Resources Information Center
Kolata, Gina Bari
1974-01-01
Although today's computers can perform as many as one million operations per second, there are many problems that are still too large to be solved in a straightforward manner. Recent work indicates that many approximate solutions are useful and more efficient than exact solutions. (Author/RH)
FATIGUE OF BIOMATERIALS: HARD TISSUES
Arola, D.; Bajaj, D.; Ivancik, J.; Majd, H.; Zhang, D.
2009-01-01
The fatigue and fracture behavior of hard tissues are topics of considerable interest today. This special group of organic materials comprises the highly mineralized and load-bearing tissues of the human body, and includes bone, cementum, dentin and enamel. An understanding of their fatigue behavior and the influence of loading conditions and physiological factors (e.g. aging and disease) on the mechanisms of degradation are essential for achieving lifelong health. But there is much more to this topic than the immediate medical issues. There are many challenges to characterizing the fatigue behavior of hard tissues, much of which is attributed to size constraints and the complexity of their microstructure. The relative importance of the constituents on the type and distribution of defects, rate of coalescence, and their contributions to the initiation and growth of cracks, are formidable topics that have not reached maturity. Hard tissues also provide a medium for learning and a source of inspiration in the design of new microstructures for engineering materials. This article briefly reviews fatigue of hard tissues with shared emphasis on current understanding, the challenges and the unanswered questions. PMID:20563239
NASA Astrophysics Data System (ADS)
Li, Yuzhong
Using a GA to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, is difficult because the search space is large, the constraints are complex, and infeasible solutions are easily produced, which affects the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators: preprocessing, bid insertion, and exchange recombination, together with a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA is better than the SGA in population size and computation. Problems that a traditional branch-and-bound algorithm finds hard to solve, the improved MKGA can solve with better results.
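The feasibility difficulty noted above (bids sharing items must not both win) is often handled with a repair step inside the GA. The sketch below is a toy repair-plus-fitness component for the WDP, not the paper's MKGA operators; the bid data are invented for illustration.

```python
# Each bid is (set of items requested, bid value); illustrative data only.
BIDS = [({"a", "b"}, 10), ({"b", "c"}, 8), ({"c"}, 5), ({"d"}, 3)]

def repair(selected):
    """Greedily drop conflicting bids, keeping higher-value bids first,
    so that every candidate passed back to the GA is feasible."""
    used, feasible = set(), []
    for i in sorted(selected, key=lambda i: -BIDS[i][1]):
        items, _ = BIDS[i]
        if not items & used:        # no item awarded twice
            used |= items
            feasible.append(i)
    return feasible

def fitness(selected):
    """Total revenue of a (feasible) set of winning bids."""
    return sum(BIDS[i][1] for i in selected)

candidate = repair([0, 1, 2, 3])    # bids 0 and 1 conflict on item "b"
print(sorted(candidate), fitness(candidate))
```

A crossover or bid-insertion operator can then freely produce infeasible offspring, with `repair` restoring feasibility before fitness evaluation, which is one common way to keep a GA efficient on this problem.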
A Scheduling Algorithm for Replicated Real-Time Tasks
NASA Technical Reports Server (NTRS)
Yu, Albert C.; Lin, Kwei-Jay
1991-01-01
We present an algorithm for scheduling real-time periodic tasks on a multiprocessor system under fault-tolerant requirement. Our approach incorporates both the redundancy and masking technique and the imprecise computation model. Since the tasks in hard real-time systems have stringent timing constraints, the redundancy and masking technique are more appropriate than the rollback techniques which usually require extra time for error recovery. The imprecise computation model provides flexible functionality by trading off the quality of the result produced by a task with the amount of processing time required to produce it. It therefore permits the performance of a real-time system to degrade gracefully. We evaluate the algorithm by stochastic analysis and Monte Carlo simulations. The results show that the algorithm is resilient under hardware failures.
Program/System Constraints Analysis Report.
1981-03-01
Tactical Air Command to comply with the requirements of CDRL no. 8021. The project entailed the design and development of an instructional system for...constraints identified such as weather, range availability, and air space are "hard" and must be accommodated by the training system. Other constraints such... Air Training Command (ATC) d. Fighter Lead-in Training (FLIT) e. General Dynamics, Fort Worth Concurrent with the documentation review and the
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints. The second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results showed that the HSA can provide very good solutions when compared to those obtained through other approaches.
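A minimal harmony search can be written in a few lines. The sketch below minimizes a simple continuous function rather than the article's redundancy problem, and the parameter values (HMCR, the probability of drawing a value from memory; PAR, the probability of pitch adjustment) are illustrative defaults, not the article's settings.

```python
import random

def harmony_search(f, dim=2, hm_size=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, lo=-5.0, hi=5.0, seed=1):
    """Minimize f over [lo, hi]^dim with a basic harmony search."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                    # take a pitch from memory
                x = rng.choice(memory)[d]
                if rng.random() < par:                 # pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
            else:                                      # random improvisation
                x = rng.uniform(lo, hi)
            new.append(x)
        worst = max(memory, key=f)
        if f(new) < f(worst):                          # replace worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=f)

best = harmony_search(lambda v: v[0] ** 2 + v[1] ** 2)
print(best)
```

System-level constraints, such as the reliability article's cost or weight limits, are typically folded into `f` as penalty terms, which keeps the improvisation loop unchanged.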
Interacting Boson Problems Can Be QMA Hard
NASA Astrophysics Data System (ADS)
Wei, Tzu-Chieh; Mosca, Michele; Nayak, Ashwin
2010-01-01
Computing the ground-state energy of interacting electron problems has recently been shown to be hard for quantum Merlin Arthur (QMA), a quantum analogue of the complexity class NP. Fermionic problems are usually hard, a phenomenon widely attributed to the so-called sign problem. The corresponding bosonic problems are, according to conventional wisdom, tractable. Here, we demonstrate that the complexity of interacting boson problems is also QMA hard. Moreover, the bosonic version of N-representability problem is QMA complete. Consequently, these problems are unlikely to have efficient quantum algorithms.
Cluster and constraint analysis in tetrahedron packings.
Jin, Weiwei; Lu, Peng; Liu, Lufeng; Li, Shuixiang
2015-04-01
The disordered packings of tetrahedra often show no obvious macroscopic orientational or positional order for a wide range of packing densities, and it has been found that the local order in particle clusters is the main order form of tetrahedron packings. Therefore, a cluster analysis is carried out to investigate the local structures and properties of tetrahedron packings in this work. We obtain a cluster distribution of differently sized clusters, and peaks are observed at two special clusters, i.e., dimer and wagon wheel. We then calculate the amounts of dimers and wagon wheels, which are observed to have linear or approximate linear correlations with packing density. Following our previous work, the amount of particles participating in dimers is used as an order metric to evaluate the order degree of the hierarchical packing structure of tetrahedra, and an order map is consequently depicted. Furthermore, a constraint analysis is performed to determine the isostatic or hyperstatic region in the order map. We employ a Monte Carlo algorithm to test jamming and then suggest a new maximally random jammed packing of hard tetrahedra from the order map with a packing density of 0.6337.
Thermodynamic hardness and the maximum hardness principle
NASA Astrophysics Data System (ADS)
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^{-1}(I - A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness defined here and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
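The low-temperature relation quoted above can be written compactly in the abstract's notation, with I the first ionization potential, A the electron affinity, and T the temperature:

```latex
\eta_T \;\propto\; \frac{I - A}{T} \quad \text{(zero fractional charge)},
\qquad
\eta_T \approx 0 \quad \text{(nonzero fractional charge)}.
```

The proportionality symbol is deliberate: the abstract states only the scaling, not the prefactor, which depends on the three-state ensemble model.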
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data is proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow the interactive inputs from users in the optimizing process of feature selection. This study investigates this question as fixing a few user-input features in the finally selected feature subset and formulates these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs well similar to or much better than the existing feature selection algorithms, even with the constraints from both literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
Feature matching method with multigeometric constraints
NASA Astrophysics Data System (ADS)
Xu, Dong; Huang, Qian; Liu, Wenyong; Bessaih, Hadjar; Li, Chidong
2016-11-01
Feature correspondence is one of the essential difficulties in image processing, given that it is applied across a wide range of computer vision tasks. Even though it has been studied for many years, feature correspondence is still far from ideal. This paper proposes a multigeometric-constraint algorithm for finding correspondences between two sets of features. It does so by considering the interior angles and edge lengths of triangles formed by third-order tuples of points. The multigeometric constraints are formulated using matrices representing triangle similarities. The experimental evaluation showed that the multigeometric-constraint algorithm can significantly improve matching precision and is robust to most geometric and photometric transformations, including rotation, scale change, blur, viewpoint change, and JPEG compression, as well as illumination change. The multigeometric-constraint algorithm was applied to object recognition, which includes extra processing and affine transformation. The results showed that this approach works well for recognition.
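The triangle-based geometric check described above can be illustrated directly: two point triples are compared through the interior angles and edge lengths of the triangles they form. The angle tolerance below is an invented threshold, and this is a sketch of the geometric primitive only, not the paper's matrix formulation.

```python
import math

def triangle_features(p, q, r):
    """Return sorted edge lengths and sorted interior angles (radians)
    of the triangle formed by three 2-D points (law of cosines)."""
    a = math.dist(q, r)
    b = math.dist(p, r)
    c = math.dist(p, q)
    angles = [
        math.acos((b * b + c * c - a * a) / (2 * b * c)),
        math.acos((a * a + c * c - b * b) / (2 * a * c)),
        math.acos((a * a + b * b - c * c) / (2 * a * b)),
    ]
    return sorted([a, b, c]), sorted(angles)

def similar(tri1, tri2, angle_tol=0.05):
    """Angle comparison is scale-invariant, so two similar triangles
    match even after the scale changes the abstract mentions."""
    _, ang1 = triangle_features(*tri1)
    _, ang2 = triangle_features(*tri2)
    return all(abs(x - y) < angle_tol for x, y in zip(ang1, ang2))

t1 = [(0, 0), (4, 0), (0, 3)]
t2 = [(0, 0), (8, 0), (0, 6)]   # same shape, twice the scale
print(similar(t1, t2))          # True: interior angles survive scaling
```

Edge-length ratios add a complementary constraint when near-degenerate triangles make angles unstable; robustness to rotation and scale comes from comparing sorted, scale-free quantities rather than raw coordinates.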
A Path Algorithm for Constrained Estimation
Zhou, Hua; Lange, Kenneth
2013-01-01
Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
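The contrast the abstract draws between quadratic and exact penalties can be sketched for a single inequality constraint $g(x) \le 0$, writing $[u]_+ = \max(u, 0)$ for the positive part:

```latex
\min_x \; f(x) + \rho\,[g(x)]_+^{2}
  \quad \text{(quadratic penalty: feasibility only as } \rho \to \infty\text{)},
\\[4pt]
\min_x \; f(x) + \rho\,[g(x)]_+
  \quad \text{(exact penalty: feasibility at a finite } \rho\text{)}.
```

The nondifferentiable kink of $[u]_+$ at zero is what lets the absolute-value penalty pin the solution exactly onto the constraint boundary at a finite $\rho$, the property the path-following algorithm exploits.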
Foundations of support constraint machines.
Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello
2015-02-01
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
Use of Justified Constraints in Coherent Diffractive Imaging
Kim, S.; McNulty, I.; Chen, Y. K.; Putkunz, C. T.; Dunand, D. C.
2011-09-09
We demonstrate the use of physically justified object constraints in x-ray Fresnel coherent diffractive imaging on a sample of nanoporous gold prepared by dealloying. Use of these constraints in the reconstruction algorithm enabled highly reliable imaging of the sample's shape and quantification of the 23- to 52-nm pore structure within it without use of a tight object support constraint.
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Efficient dynamic constraints for animating articulated figures
NASA Technical Reports Server (NTRS)
Kokkevis, E.; Metaxas, D.; Badler, N. I. (Principal Investigator)
1998-01-01
This paper presents an efficient dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A non-trivial extension of Featherstone's O(n) recursive forward dynamics algorithm is derived which allows enforcing one or more constraints on the animated figures. We demonstrate how the constraint force evaluation algorithm we have developed makes it possible to simulate collisions between articulated figures, to compute the results of impulsive forces, to enforce joint limits, to model closed kinematic loops, and to robustly control motion at interactive rates. Particular care has been taken to make the algorithm not only fast, but also easy to implement and use. To better illustrate how the constraint force evaluation algorithm works, we provide pseudocode for its major components. Additionally, we analyze its computational complexity and finally we present examples demonstrating how our system has been used to generate interactive, physically correct complex motion with small user effort.
Network interdiction with budget constraints
Santhi, Nankakishore; Pan, Feng
2009-01-01
Several scenarios exist in the modern interconnected world which call for efficient network interdiction algorithms. Applications are varied, including computer network security, prevention of spreading of Internet worms, policing international smuggling networks, controlling spread of diseases and optimizing the operation of large public energy grids. In this paper we consider some natural network optimization questions related to the budget constrained interdiction problem over general graphs. Many of these questions turn out to be computationally hard to tackle. We present a particularly interesting practical form of the interdiction question which we show to be computationally tractable. A polynomial time algorithm is then presented for this problem.
Parallel-batch scheduling and transportation coordination with waiting time constraint.
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing with a waiting time constraint. The orders located at the different suppliers are transported by some vehicles to a manufacturing facility for further processing. One vehicle can load only one order in one shipment. Each order arriving at the facility must be processed within the limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule to minimize the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a dynamic programming algorithm in pseudopolynomial time is provided to prove its ordinary NP-hardness. An optimal algorithm in polynomial time is presented to solve a special case with equal processing times and equal transportation times for each order.
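The batching rule stated above (a batch's processing time is the largest processing time of the orders it contains) is easy to make concrete. The sketch below evaluates the flow time of a given batching on a single parallel-batch machine; the data and the batching itself are illustrative, and the paper's scheduling algorithms are not reproduced.

```python
def batch_time(processing_times):
    """A batch runs as long as its slowest order."""
    return max(processing_times)

def total_flow_time(batches, transport_times):
    """Flow time of an order = completion time of its batch; a batch can
    start only after its orders have arrived and the machine is free."""
    clock, flow = 0, 0
    for batch in batches:
        start = max(clock, max(transport_times[o] for o in batch))
        clock = start + batch_time([proc[o] for o in batch])
        flow += clock * len(batch)   # all orders in a batch finish together
    return flow

# Illustrative instance: processing times and transport (arrival) times.
proc = {"o1": 3, "o2": 5, "o3": 2}
arrive = {"o1": 1, "o2": 1, "o3": 4}
batches = [["o1", "o2"], ["o3"]]
print(total_flow_time(batches, arrive))
```

Grouping a short order with a long one makes the short order inherit the long processing time, which is exactly the trade-off that makes choosing the batching NP-hard in general.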
Constraint-based interactive assembly planning
Jones, R.E.; Wilson, R.H.; Calton, T.L.
1997-03-01
The constraints on assembly plans vary depending on the product, assembly facility, assembly volume, and many other factors. This paper describes the principles and implementation of a framework that supports a wide variety of user-specified constraints for interactive assembly planning. Constraints from many sources can be expressed on a sequencing level, specifying orders and conditions on part mating operations in a number of ways. All constraints are implemented as filters that either accept or reject assembly operations proposed by the planner. For efficiency, some constraints are supplemented with special-purpose modifications to the planner's algorithms. Replanning is fast enough to enable a natural plan-view-constrain-replan cycle that aids in constraint discovery and documentation. We describe an implementation of the framework in a computer-aided assembly planning system and experiments applying the system to several complex assemblies. 12 refs., 2 figs., 3 tabs.
The Approximability of Learning and Constraint Satisfaction Problems
2010-10-07
Yi Wu, CMU-CS-10-142, School of Computer Science, Carnegie Mellon University. We study the approximability of two classes of NP-hard problems: Constraint Satisfaction Problems (CSPs) and Computational Learning Problems. For CSPs, we mainly study the
Simulation results for the Viterbi decoding algorithm
NASA Technical Reports Server (NTRS)
Batson, B. H.; Moorehead, R. W.; Taqvi, S. Z. H.
1972-01-01
Concepts involved in determining the performance of coded digital communications systems are introduced. The basic concepts of convolutional encoding and decoding are summarized, and hardware implementations of sequential and maximum likelihood decoders are described briefly. Results of parametric studies of the Viterbi decoding algorithm are summarized. Bit error probability is chosen as the measure of performance and is calculated, by using digital computer simulations, for various encoder and decoder parameters. Results are presented for code rates of one-half and one-third, for constraint lengths of 4 to 8, for both hard-decision and soft-decision bit detectors, and for several important systematic and nonsystematic codes. The effect of decoder block length on bit error rate also is considered, so that a more complete estimate of the relationship between performance and decoder complexity can be made.
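The decoding principle studied in the report can be illustrated with a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code with generators 7 and 5 (octal). This is a sketch of the algorithm only, not the report's simulation setup, and the single-bit-error example is invented.

```python
G = (0b111, 0b101)          # generator polynomials (7, 5 octal), K = 3

def encode(bits):
    """Rate-1/2 convolutional encoder; state = last two input bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi: Hamming distance as the branch metric."""
    INF = float("inf")
    metrics = {0: 0, 1: INF, 2: INF, 3: INF}   # start in the all-zero state
    paths = {s: [] for s in metrics}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = {s: INF for s in metrics}
        new_paths = {}
        for state in metrics:
            if metrics[state] == INF:
                continue
            for b in (0, 1):                   # hypothesize each input bit
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") % 2 for g in G]
                nxt = reg >> 1
                m = metrics[state] + sum(x != y for x, y in zip(expect, r))
                if m < new_metrics[nxt]:       # keep the survivor path
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(metrics, key=metrics.get)
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]          # two tail zeros flush the encoder
coded = encode(msg)
coded[3] ^= 1                     # flip one channel bit
print(viterbi(coded) == msg)      # the single error is corrected
```

A soft-decision decoder, one of the variants the study compares, would replace the Hamming branch metric with a distance on quantized channel values; the trellis search itself is unchanged.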
Extensions of output variance constrained controllers to hard constraints
NASA Technical Reports Server (NTRS)
Skelton, R.; Zhu, G.
1989-01-01
Covariance Controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upper bounds on the H infinity, L infinity, and L2 norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples. These results are illustrated for continuous and discrete time systems.
Portfolios of Quantum Algorithms
NASA Astrophysics Data System (ADS)
Maurer, Sebastian M.; Hogg, Tad; Huberman, Bernardo A.
2001-12-01
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
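The portfolio effect described above (trading expected completion time against its variance) can be seen with a toy classical Monte Carlo: two stochastic solvers run in parallel and the portfolio finishes when the first one does. The runtime distributions here are invented purely for illustration, not drawn from any quantum algorithm.

```python
import random
import statistics

def solver_a(rng):
    """Usually fast, occasionally very slow (heavy-tailed runtime)."""
    return rng.expovariate(1.0) if rng.random() < 0.9 else 50 + rng.expovariate(0.1)

def solver_b(rng):
    """Slower on average but predictable."""
    return 5 + rng.expovariate(0.5)

def portfolio(rng):
    """Run both in parallel; completion time is the minimum of the two."""
    return min(solver_a(rng), solver_b(rng))

rng = random.Random(42)
runs = [[f(rng) for _ in range(20000)] for f in (solver_a, solver_b, portfolio)]
for name, times in zip(("A", "B", "A+B"), runs):
    print(name, round(statistics.mean(times), 2), round(statistics.stdev(times), 2))
```

The portfolio's mean and standard deviation both drop below those of the heavy-tailed solver alone, the same qualitative trade-off the paper exploits with mixtures of quantum algorithms.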
Object-oriented algorithmic laboratory for ordering sparse matrices
Kumfert, Gary Karl
2000-05-01
We focus on two known NP-hard problems that have applications in sparse matrix computations: the envelope/wavefront reduction problem and the fill reduction problem. Envelope/wavefront reducing orderings have a wide range of applications including profile and frontal solvers, incomplete factorization preconditioning, graph reordering for cache performance, gene sequencing, and spatial databases. Fill reducing orderings are generally limited to--but an inextricable part of--sparse matrix factorization. Our major contribution to this field is the design of new and improved heuristics for these NP-hard problems and their efficient implementation in a robust, cross-platform, object-oriented software package. In this body of research, we (1) examine current ordering algorithms, analyze their asymptotic complexity, and characterize their behavior in model problems, (2) introduce new and improved algorithms that address deficiencies found in previous heuristics, (3) implement an object-oriented library of these algorithms in a robust, modular fashion without significant loss of efficiency, and (4) extend our algorithms and software to address both generalized and constrained problems. We stress that the major contribution is the algorithms and the implementation; the whole being greater than the sum of its parts. The initial motivation for implementing our algorithms in object-oriented software was to manage the inherent complexity. During our research came the realization that the object-oriented implementation enabled new possibilities: augmented algorithms that would not have been as natural to generalize from a procedural implementation. Some extensions are constructed from a family of related algorithmic components, thereby creating a poly-algorithm that can adapt its strategy to the properties of the specific problem instance dynamically. Other algorithms are tailored for special constraints by aggregating algorithmic components and having them collaboratively
Ordering of hard particles between hard walls
NASA Astrophysics Data System (ADS)
Chrzanowska, A.; Teixeira, P. I. C.; Ehrentraut, H.; Cleaver, D. J.
2001-05-01
The structure of a fluid of hard Gaussian overlap particles of elongation κ = 5, confined between two hard walls, has been calculated from density-functional theory and Monte Carlo simulations. By using the exact expression for the excluded volume kernel (Velasco E and Mederos L 1998 J. Chem. Phys. 109 2361) and solving the appropriate Euler-Lagrange equation entirely numerically, we have been able to extend our theoretical predictions into the nematic phase, which had up till now remained relatively unexplored due to the high computational cost. Simulation reveals a rich adsorption behaviour with increasing bulk density, which is described semi-quantitatively by the theory without any adjustable parameters.
Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem
NASA Astrophysics Data System (ADS)
Tein, Lim Huai; Ramli, Razamin
2014-12-01
Over the years, nurse scheduling has become a prominent problem, aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave. Undesirable work schedules are partly responsible for that working condition. Fundamentally, there is a lack of complementary alignment between the head nurse's responsibilities and the nurses' needs. In particular, because nurse preferences weigh heavily, the sophisticated challenge of nurse scheduling lies in fostering tolerant behavior between both parties during shift assignment in real working scenarios. Inevitably, flexibility in shift assignment is hard to achieve while satisfying diverse nurse requests and upholding imperative ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The restrictions of the basic EA are discussed, and enhancements to the EA operators are suggested so that the EA has the characteristics of a flexible search. This paper considers three types of constraints, namely hard, semi-hard and soft constraints, which can be handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation, as well as flexibility in the search, corresponding to the principles of exploration and exploitation.
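The three-tier constraint handling described above is commonly realized in an EA fitness function: hard constraints reject a schedule outright, while semi-hard and soft violations are penalized with different weights. The weights, the coverage rule, and the preference rule below are all invented for illustration; this is not the paper's formulation.

```python
HARD_PENALTY = float("inf")   # hard violations make a schedule unusable

def fitness(schedule, coverage_needed=2, w_semi=10, w_soft=1):
    """Lower is better. `schedule` maps shift name -> list of assigned nurses."""
    score = 0
    for shift, nurses in schedule.items():
        if len(set(nurses)) != len(nurses):        # hard: no double booking
            return HARD_PENALTY
        if len(nurses) < coverage_needed:          # semi-hard: ward coverage
            score += w_semi * (coverage_needed - len(nurses))
    # Soft: a hypothetical preference, nurse "n1" prefers not to work nights.
    if "n1" in schedule.get("night", []):
        score += w_soft
    return score

ok = {"day": ["n1", "n2"], "night": ["n3", "n4"]}
thin = {"day": ["n1", "n2"], "night": ["n3"]}
bad = {"day": ["n1", "n1"], "night": ["n3", "n4"]}
print(fitness(ok), fitness(thin), fitness(bad))
```

Because semi-hard and soft violations only add weighted penalties, selection can still exploit partially infeasible schedules, while hard violations are excluded from reproduction entirely, which is the balance between exploration and exploitation the paper aims for.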
Session: Hard Rock Penetration
Tennyson, George P. Jr.; Dunn, James C.; Drumheller, Douglas S.; Glowka, David A.; Lysne, Peter
1992-01-01
This session at the Geothermal Energy Program Review X: Geothermal Energy and the Utility Market consisted of five presentations: ''Hard Rock Penetration - Summary'' by George P. Tennyson, Jr.; ''Overview - Hard Rock Penetration'' by James C. Dunn; ''An Overview of Acoustic Telemetry'' by Douglas S. Drumheller; ''Lost Circulation Technology Development Status'' by David A. Glowka; ''Downhole Memory-Logging Tools'' by Peter Lysne.
NASA Technical Reports Server (NTRS)
Hauser, D. L.; Buras, D. F.; Corbin, J. M.
1987-01-01
Rubber-hardness tester modified for use on rigid polyurethane foam. Provides objective basis for evaluation of improvements in foam manufacturing and inspection. Typical acceptance criterion requires minimum hardness reading of 80 on modified tester. With adequate correlation tests, modified tester used to measure indirectly tensile and compressive strengths of foam.
Cugell, D.W.
1992-06-01
Hard metal is a mixture of tungsten carbide and cobalt, to which small amounts of other metals may be added. It is widely used for industrial purposes whenever extreme hardness and high temperature resistance are needed, such as for cutting tools, oil well drilling bits, and jet engine exhaust ports. Cobalt is the component of hard metal that can be a health hazard. Respiratory diseases occur in workers exposed to cobalt--either in the production of hard metal, from machining hard metal parts, or from other sources. Adverse pulmonary reactions include asthma, hypersensitivity pneumonitis, and interstitial fibrosis. A peculiar, almost unique form of lung fibrosis, giant cell interstitial pneumonia, is closely linked with cobalt exposure. 66 references.
Dynamic Constraint Satisfaction with Reasonable Global Constraints
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2003-01-01
Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.
NASA Astrophysics Data System (ADS)
Liang, Xuecheng
Dynamic hardness (Pd) of 22 different pure metals and alloys having a wide range of elastic modulus, static hardness, and crystal structure was measured in a gas pulse system. The indentation contact diameter with an indenting sphere and the radius (r2) of curvature of the indentation were determined by curve fitting of the indentation profile data. r2 measured by the profilometer was compared with that calculated from the Hertz equation in both dynamic and static conditions. The results indicated that the curvature change due to elastic recovery after unloading is approximately proportional to the parameters predicted by the Hertz equation. However, r2 is less than the radius of the indenting sphere in many cases, which contradicts the Hertz analysis. This discrepancy is believed to be due to the difference between the Hertzian and actual stress distributions underneath the indentation. Factors which influence indentation elastic recovery were also discussed. It was found that the Tabor dynamic hardness formula always gives a lower value than that obtained directly from the dynamic hardness definition ΔE/V, because of errors mainly from Tabor's rebound equation and the assumption, made in deriving Tabor's formula, that the dynamic hardness at the beginning of the rebound process (Pr) is equal to the kinetic energy change of an impact sphere over the formed crater volume (Pd). Experimental results also suggested that the dynamic to static hardness ratio of a material is primarily determined by its crystal structure and static hardness. The effects of strain rate and temperature rise on this ratio were discussed. A vacuum rotating-arm apparatus was built to measure Pd at 70, 127, and 381 μm sphere sizes; these results showed that Pd depends strongly on sphere size due to strain rate effects. Pd was also used in place of static hardness to correlate with the abrasion and erosion resistance of metals and alloys. The particle size effects observed in erosion were
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
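The iterative scheme described above is the classical multiplicative expectation maximization (Richardson-Lucy / MLEM) update for Poisson data, whose multiplicative form preserves the positivity constraint automatically. The sketch below is generic: the matrix A, the data y, and the fixed iteration count are illustrative stand-ins, not the RHESSI instrument response or the paper's C-statistic stopping rule.

```python
import numpy as np

def mlem(A, y, n_iter=200, x0=None):
    """Multiplicative EM for y ≈ A @ x with Poisson noise; keeps x >= 0."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    sens = A.sum(axis=0)                       # sensitivity: column sums A^T 1
    for _ in range(n_iter):
        model = A @ x                          # forward projection
        model = np.where(model > 0, model, 1e-12)   # guard divide-by-zero
        # Multiplicative update: x <- x * A^T(y / Ax) / A^T 1
        x *= (A.T @ (y / model)) / np.where(sens > 0, sens, 1.0)
    return x
```

In practice the number of iterations acts as the regularization parameter, which is why a reliable stopping rule matters so much in this application.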
Data Structures and Algorithms.
ERIC Educational Resources Information Center
Wirth, Niklaus
1984-01-01
Built-in data structures are the registers and memory words where binary values are stored; hard-wired algorithms are the fixed rules, embodied in electronic logic circuits, by which stored data are interpreted as instructions to be executed. Various topics related to these two basic elements of every computer program are discussed. (JN)
Constraint Theory and Broken Bond Bending Constraints in Oxide Glasses
NASA Astrophysics Data System (ADS)
Zhang, Min
can understand the rigidity percolation threshold shift from x = 0.20 to x = 0.23, if one assumes a fraction of 20% of chalcogen atoms have their bond angle constraints broken. A simple interpretation is that these chalcogen atoms (with broken bond bending constraints) represent short floppy chain segments connecting the more rigid tetrahedral Ge(Se_{1/2})_4 units. Thus the concept of broken bond bending constraints plays an important role in promoting the glass forming tendency of materials. The extended constraint theory has also found application to the mechanical properties of hydrogenated diamond-like carbon, silicon carbide, and silicon thin films. We have established for the first time a linear relationship between measured hardness and the hardness index, a geometric parameter derived from constraint theory. The slopes of such linear functions for different types of materials are determined by chemical effects that reflect the bonding type and interaction strength among atoms.
Constraint monitoring in TOSCA
NASA Technical Reports Server (NTRS)
Beck, Howard
1992-01-01
The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.
Set Constraints and Logic Programming (Preprint)
2016-02-24
...algorithm which is essentially the same as the normal form algorithm of ... Let X be the set of variables appearing in the original system ... This result is essential in the semantics of CLP(SC). Theorem: Every satisfiable system of set constraints has a regular solution. PROOF ... automata and tree grammars. Technical Report DAIMI FN, Aarhus University, April. T. Frühwirth, E. Shapiro, M. Y. Vardi, and E. Yardeni.
Constraint checking during error recovery
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.; Wong, Johnny S. K.
1993-01-01
The system-level software onboard a spacecraft is responsible for recovery from communication, power, thermal, and computer-health anomalies that may occur. The recovery must occur without disrupting any critical scientific or engineering activity that is executing at the time of the error. Thus, the error-recovery software may have to execute concurrently with the ongoing acquisition of scientific data or with spacecraft maneuvers. This work provides a technique by which the rules that constrain the concurrent execution of these processes can be modeled in a graph. An algorithm is described that uses this model to validate that the constraints hold for all concurrent executions of the error-recovery software with the software that controls the science and engineering activities of the spacecraft. The results are applicable to a variety of control systems with critical constraints on the timing and ordering of the events they control.
ERIC Educational Resources Information Center
Stocker, H. Robert; Hilton, Thomas S. E.
1991-01-01
Suggests strategies that make hard disk organization easy and efficient, such as making, changing, and removing directories; grouping files by subject; naming files effectively; backing up efficiently; and using PATH. (JOW)
Canavan, G.H.
1997-02-01
The inference of the diameter of hard objects is insensitive to radiation efficiency. Deductions of radiation efficiency from observations are very sensitive - possibly overly so. Inferences of the initial velocity and trajectory vary similarly, and hence are comparably sensitive.
Condensation transition in polydisperse hard rods.
Evans, M R; Majumdar, S N; Pagonabarraga, I; Trizac, E
2010-01-07
We study a mass transport model, where spherical particles diffusing on a ring can stochastically exchange volume v, with the constraint of a fixed total volume V = Σ_{i=1}^{N} v_i, N being the total number of particles. The particles, referred to as p-spheres, have a linear size that behaves as v_i^{1/p} and our model thus represents a gas of polydisperse hard rods with variable diameters v_i^{1/p}. We show that our model admits a factorized steady state distribution which provides the size distribution that minimizes the free energy of a polydisperse hard-rod system, under the constraints of fixed N and V. Complementary approaches (explicit construction of the steady state distribution on the one hand; density functional theory on the other hand) completely and consistently specify the behavior of the system. A real space condensation transition is shown to take place for p>1; beyond a critical density a macroscopic aggregate is formed and coexists with a critical fluid phase. Our work establishes the bridge between stochastic mass transport approaches and the optimal polydispersity of hard sphere fluids studied in previous articles.
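The defining feature of such mass transport models is that every local move conserves the total volume V exactly. The toy Monte Carlo sweep below illustrates that structure only; the uniform-redistribution exchange rule is a placeholder, not the specific rates of the p-sphere model.

```python
import random

# Toy stochastic volume-exchange dynamics on a ring: N particles hold
# volumes v[i] >= 0 whose total is conserved by every move. A move picks a
# particle and its ring neighbor and redistributes their combined volume
# uniformly at random between them (illustrative rule only).
def sweep(v, rng):
    n = len(v)
    for _ in range(n):
        i = rng.randrange(n)
        j = (i + 1) % n                     # neighbor on the ring
        total = v[i] + v[j]                 # conserved by construction
        u = rng.random()
        v[i], v[j] = u * total, (1 - u) * total
```

Running many sweeps from a uniform initial condition and histogramming the v[i] is the usual way to probe whether a single site accumulates a macroscopic fraction of V, the signature of the condensation transition.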
The Algorithm Selection Problem
NASA Technical Reports Server (NTRS)
Minton, Steve; Allen, John; Deiss, Ron (Technical Monitor)
1994-01-01
Work on NP-hard problems has shown that many instances of these theoretically computationally difficult problems are quite easy. The field has also shown that choosing the right algorithm for the problem can have a profound effect on the time needed to find a solution. However, to date there has been little work showing how to select the right algorithm for solving any particular problem. The paper refers to this as the algorithm selection problem. It describes some of the aspects that make this problem difficult, as well as proposes a technique for addressing it.
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems.
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-09-25
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is the rate adaptive (RA) based resource allocation with proportional fairness constraints. Since the resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness; the algorithm is based on the principle that the sums of power and the reciprocal of channel-to-noise ratio for each user in different subchannels are equal. Extensive simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems.
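The stated principle, that power plus the reciprocal of channel-to-noise ratio is equal across a user's subchannels, is the classical water-filling condition p_k + 1/cnr_k = μ, with the water level μ fixed by the user's power budget. The sketch below illustrates that rule for a single user; the function name and the iterative channel-dropping loop are generic, not the paper's ACO-SPA implementation.

```python
# Water-filling for one user: each active subchannel k gets power
# p_k = mu - 1/cnr_k, with the common level mu chosen so the powers sum to
# the budget. Subchannels that would receive negative power are dropped and
# mu is recomputed.
def waterfill(cnr, budget):
    inv = [1.0 / c for c in cnr]            # 1/CNR per subchannel
    active = list(range(len(cnr)))
    while True:
        mu = (budget + sum(inv[k] for k in active)) / len(active)
        drop = [k for k in active if mu - inv[k] < 0]
        if not drop:
            break
        active = [k for k in active if k not in drop]
    return [mu - inv[k] if k in active else 0.0 for k in range(len(cnr))]
```

Good subchannels (high CNR, small 1/cnr) receive more power, and weak ones may receive none, which is exactly the behavior the equal-level principle encodes.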
NASA Technical Reports Server (NTRS)
Chan, Hak-Wai; Yan, Tsun-Yee
1989-01-01
Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.
Micromagnetic Modeling of Hard Magnets
NASA Astrophysics Data System (ADS)
Fidler, J.
1997-03-01
The increasing impact of magnetic materials on many modern industries will continue well into the next century. Besides recording materials and soft magnetic devices, hard magnetic materials are also key components in transportation and information technologies, machines, and many other systems. For the better understanding and development of high-performance permanent magnets, a detailed understanding of the magnetization mechanisms leading to an improvement of the coercive field is necessary. Various approaches have been proposed to describe the coercivity of permanent magnets. Besides the micromagnetic approach of the nucleation of reversed domains, the expansion mechanism, or domain wall propagation, other phenomenological approaches taking into account the magnetocrystalline anisotropy energy and the magnetic viscosity have also been used. The hysteresis properties are governed by a combination of the intrinsic properties of the material, such as saturation polarization, magnetic exchange, and magnetocrystalline anisotropy. The other important factors are the microstructural parameters, such as grain size, the orientation of the easy axes of the grains, and the distribution of phases. The intergranular structure between the grains plays a significant role in determining the magnetic properties if the grain diameter is on the nanometer scale. It is intended to show the relationship between the magnetization reversal behavior and the real microstructure of various types of hard magnets, especially rare earth permanent magnets. The theoretical treatment of the magnetization reversal processes is performed in the framework of the continuum theory of micromagnetism. Starting from a real microstructure, characterized by optical and electron microscopic techniques, the influence of the dipolar and exchange interactions between hard magnetic grains has been demonstrated, mainly on Nd-Fe-B magnets. We developed a numerical algorithm on the basis of
Softeners for hardness removal.
Shetty, Rashma; Manjunath, N T; Babu, B T Suresh
2005-10-01
The depletion of water resources, both surface and subsurface, and the deterioration of water quality have led researchers and policy makers to consider possible remedies to make water sources potable and wholesome. There is a need to address the problems of hardness and fluoride in subsurface water on a priority basis. In this direction, bench-scale studies were conducted to evaluate the performance of water softeners. In-depth studies were carried out at University B.D.T College of Engineering, Davangere, Karnataka, to assess the performance of bench-scale softeners of D to H ratio 1:2, 1:3, and 1:4 in removing hardness of varied concentrations from both synthetic and natural water samples. Studies revealed that irrespective of the D to H ratio of the softeners, waters having hardness concentrations up to 1000 mg/l can be treated to the same degree (81.68% and above). The findings of regeneration studies and cost economics are also summarized in this paper.
Niu, Sijie; Chen, Qiang; de Sisternes, Luis; Rubin, Daniel L; Zhang, Weiwei; Liu, Qinghuai
2014-11-01
Automatic segmentation of retinal layers in spectral domain optical coherence tomography (SD-OCT) images plays a vital role in the quantitative assessment of retinal disease, because it provides detailed information which is hard to process manually. A number of algorithms to automatically segment retinal layers have been developed; however, accurate edge detection is challenging. We developed an automatic algorithm for segmenting retinal layers based on dual-gradient and spatial correlation smoothness constraint. The proposed algorithm utilizes a customized edge flow to produce the edge map and a convolution operator to obtain local gradient map in the axial direction. A valid search region is then defined to identify layer boundaries. Finally, a spatial correlation smoothness constraint is applied to remove anomalous points at the layer boundaries. Our approach was tested on two datasets including 10 cubes from 10 healthy eyes and 15 cubes from 6 patients with age-related macular degeneration. A quantitative evaluation of our method was performed on more than 600 images from cubes obtained in five healthy eyes. Experimental results demonstrated that the proposed method can estimate six layer boundaries accurately. Mean absolute boundary positioning differences and mean absolute thickness differences (mean±SD) were 4.43±3.32 μm and 0.22±0.24 μm, respectively.
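The axial local-gradient step described above can be illustrated with a plain central-difference convolution along each A-scan (image column): retinal layer boundaries appear where intensity changes sharply in the axial direction. This is only a generic stand-in; the paper's customized edge flow and kernel are not specified in the abstract.

```python
# Minimal axial gradient map via central differences along axis 0.
# image: list of rows (lists of numbers); boundary rows are left at zero.
def axial_gradient(image):
    rows, cols = len(image), len(image[0])
    grad = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(cols):
            # Central difference: kernel [-1, 0, 1] applied axially.
            grad[r][c] = image[r + 1][c] - image[r - 1][c]
    return grad
```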
NASA Astrophysics Data System (ADS)
Cocco, S.; Monasson, R.
2001-08-01
The computational complexity of solving random 3-Satisfiability (3-SAT) problems is investigated using statistical physics concepts and techniques related to phase transitions, growth processes and (real-space) renormalization flows. 3-SAT is a representative example of hard computational tasks; it consists in knowing whether a set of αN randomly drawn logical constraints involving N Boolean variables can be satisfied altogether or not. Widely used solving procedures, as the Davis-Putnam-Loveland-Logemann (DPLL) algorithm, perform a systematic search for a solution, through a sequence of trials and errors represented by a search tree. The size of the search tree accounts for the computational complexity, i.e. the amount of computational efforts, required to achieve resolution. In the present study, we identify, using theory and numerical experiments, easy (size of the search tree scaling polynomially with N) and hard (exponential scaling) regimes as a function of the ratio α of constraints per variable. The typical complexity is explicitly calculated in the different regimes, in very good agreement with numerical simulations. Our theoretical approach is based on the analysis of the growth of the branches in the search tree under the operation of DPLL. On each branch, the initial 3-SAT problem is dynamically turned into a more generic 2+p-SAT problem, where p and 1 - p are the fractions of constraints involving three and two variables respectively. The growth of each branch is monitored by the dynamical evolution of α and p and is represented by a trajectory in the static phase diagram of the random 2+p-SAT problem. Depending on whether or not the trajectories cross the boundary between satisfiable and unsatisfiable phases, single branches or full trees are generated by DPLL, resulting in easy or hard resolutions. Our picture for the origin of complexity can be applied to other computational problems solved by branch and bound algorithms.
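The DPLL search-tree procedure analyzed above can be stated compactly: simplify under a chosen literal, propagate unit clauses, and branch on a variable when no unit remains. The sketch below encodes clauses as lists of nonzero integers (negative means negated) and illustrates the branching structure only, not an optimized solver or the paper's 2+p-SAT instrumentation.

```python
def simplify(clauses, lit):
    """Assign literal `lit` true; return reduced clauses, or None on conflict."""
    out = []
    for c in clauses:
        if lit in c:
            continue                        # clause satisfied, drop it
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None                     # empty clause: contradiction
        out.append(reduced)
    return out

def dpll(clauses):
    """True iff the clause set is satisfiable."""
    if not clauses:
        return True
    for c in clauses:                       # unit propagation
        if len(c) == 1:
            reduced = simplify(clauses, c[0])
            return reduced is not None and dpll(reduced)
    lit = clauses[0][0]                     # branch: try literal, then negation
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None and dpll(reduced):
            return True
    return False
```

Each recursive call corresponds to a branch of the search tree whose total size, as the text explains, is what grows polynomially in the easy regime and exponentially in the hard one.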
ERIC Educational Resources Information Center
Parrino, Frank M.
2003-01-01
Interviews with school board members and administrators produced a list of suggestions for balancing a budget in hard times. Among these are changing calendars and schedules to reduce heating and cooling costs; sharing personnel; rescheduling some extracurricular activities; and forming cooperative agreements with other districts. (MLF)
ERIC Educational Resources Information Center
Berry, John N., III
2009-01-01
Roberta Stevens and Kent Oliver are campaigning hard for the presidency of the American Library Association (ALA). Stevens is outreach projects and partnerships officer at the Library of Congress. Oliver is executive director of the Stark County District Library in Canton, Ohio. They have debated, discussed, and posted web sites, Facebook pages,…
ERIC Educational Resources Information Center
Sturgeon, Julie
2008-01-01
Acting on information from students who reported seeing a classmate looking at inappropriate material on a school computer, school officials used forensics software to plunge the depths of the PC's hard drive, searching for evidence of improper activity. Images were found in a deleted Internet Explorer cache as well as deleted file space.…
COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS
H. B. HUNT; M. V. MARATHE; R. E. STEARNS
2001-06-01
Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (respectively, SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97]. Keywords: NP-hardness; Approximation Algorithms; PSPACE-hardness; Quantified and Stochastic Constraint Satisfaction Problems.
About some types of constraints in problems of routing
NASA Astrophysics Data System (ADS)
Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.
2016-12-01
Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. The latter include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements tied to the distribution of thermal fields during the cutting process are of great importance when developing algorithms for solving this task. These requirements give rise to some specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during metal sheet thermal cutting. The corresponding algorithm, with its programmatic implementation, is considered. A mathematical model that allows such constraints to be taken into account in other routing problems is also discussed.
Quiet planting in the locked constraints satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2009-01-01
We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and first and second moment considerations; in particular the connection with the reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region instances have with high probability a single satisfying assignment.
Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search.
Rajalakshmi, S; Maguteeswaran, R
2015-01-01
A hybrid computational intelligence algorithm, built by integrating the salient features of two different heuristic techniques, is proposed to solve a multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is a difficult problem: an optimum route must satisfy a variety of necessary constraints in a MANET. The problem is also NP-hard owing to the constantly varying topology of MANETs. Thus a solution technique that takes on the challenges of the QoSR problem is needed. This paper proposes a hybrid algorithm by modifying the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, where the candidates learn from diversified search regions. Thus the CSA acts as the main search procedure, guided by the updating mechanism derived from DE; the result is called tuned CSA (TCSA). Numerical simulations on MANETs demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with some existing techniques in the literature, establishing the superiority of the proposed method.
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort whose outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison with existing evolutionary techniques (GA and PSGA) shows that the proposed algorithm is less complex, computes schedules in less time, and requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied, and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
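The randomized task ordering described above, a topological sort whose outcome may be any precedence-respecting order, can be sketched as follows. This is a minimal illustration; the variable names and uniform tie-breaking are assumptions, not the authors' exact procedure:

```python
import random
from collections import defaultdict

def random_topological_order(tasks, edges):
    """Kahn-style topological sort that picks uniformly among the currently
    ready tasks, so any precedence-respecting order can be produced."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for u, v in edges:          # edge (u, v) means u must precede v
        succ[u].append(v)
        indeg[v] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop(random.randrange(len(ready)))  # random, not FIFO
        order.append(t)
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    if len(order) != len(tasks):
        raise ValueError("precedence graph contains a cycle")
    return order
```

Each call yields one member of the sample of task orderings from which RS then heuristically builds a schedule.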
Moisture influence on near-infrared prediction of wheat hardness
NASA Astrophysics Data System (ADS)
Windham, William R.; Gaines, Charles S.; Leffler, Richard G.
1991-02-01
Recently, near-infrared (NIR) reflectance instrumentation has been used to provide an empirical measure of wheat hardness. This hardness scale is based on the radiation-scattering properties of meal particles at 1680 and 2230 nm. Hard wheats have a larger mean particle size (PS) after grinding than soft wheats; however, wheat kernel moisture content can influence mean PS after grinding. The objective of this study was to determine the sensitivity of NIR wheat hardness measurements to moisture content and to make the hardness score independent of moisture by correcting hardness measurements for the actual moisture content of measured samples. Forty wheat cultivars comprising hard red winter, hard red spring, soft red winter, and soft white winter were used. Wheat kernel subsamples were stored at 20, 40, 60, and 80% relative humidity (RH). After equilibration, samples were ground and the meal analyzed for hardness score (HS) and moisture. HS values were 48, 50, 54, and 65 for 20, 40, 60, and 80% RH, respectively. Differences in HS within each wheat class were the result of a moisture-induced change in the PS of the meal. An algorithm was developed to correct HS to 11% moisture. This correction provides HS values that are nearly independent of moisture content.
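The correction amounts to adjusting each hardness score to a common 11% moisture reference basis. A minimal sketch is shown below; the linear form and the slope value are illustrative assumptions, since the abstract does not give the fitted coefficients:

```python
def correct_hardness(hs, moisture, slope=1.1, reference=11.0):
    """Adjust an NIR hardness score (hs) measured at the given percent
    moisture to the 11% moisture reference basis. The linear model and
    the slope of 1.1 HS units per percent moisture are hypothetical
    placeholders for the paper's fitted correction."""
    return hs - slope * (moisture - reference)
```

Scores measured on wetter meal are revised downward and scores on drier meal upward, so that cultivars equilibrated at different humidities become comparable.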
Inclusive Flavour Tagging Algorithm
NASA Astrophysics Data System (ADS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-10-01
Identifying the flavour of neutral B mesons at production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Optimum structural design with static aeroelastic constraints
NASA Technical Reports Server (NTRS)
Bowman, Keith B; Grandhi, Ramana V.; Eastep, F. E.
1989-01-01
The static aeroelastic performance characteristics (divergence velocity, control effectiveness, and lift effectiveness) are considered in obtaining an optimum-weight structure. A typical swept-wing structure is used, with upper and lower skins, spar and rib thicknesses, and spar-cap and vertical-post cross-sectional areas as the design parameters. Incompressible aerodynamic strip theory is used to derive the constraint formulations and aerodynamic load matrices. A Sequential Unconstrained Minimization Technique (SUMT) algorithm is used to optimize the wing structure to meet the desired performance constraints.
Unemployment: Hard-Core or Hard-Shell?
ERIC Educational Resources Information Center
Lauer, Robert H.
1972-01-01
The term "hard-core" makes the unemployed culpable; the term "hard-shell" shifts the burden to the employer, and the evidence from the suburban plant indicates that a substantial part of the problem must lie there. (DM)
Resource Allocation in Cooperative OFDMA Systems with Fairness Constraint
NASA Astrophysics Data System (ADS)
Li, Hongxing; Luo, Hanwen; Wang, Xinbing; Ding, Ming; Chen, Wen
This letter investigates a subchannel and power allocation (SPA) algorithm which maximizes the throughput of a user under the constraints of total transmit power and fair subchannel occupation among relay nodes. The proposed algorithm reduces computational complexity from exponential to linear in the number of subchannels at the expense of a small performance loss.
Stochastic hard-sphere dynamics for hydrodynamics of nonideal fluids.
Donev, Aleksandar; Alder, Berni J; Garcia, Alejandro L
2008-08-15
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
NASA Astrophysics Data System (ADS)
Adams, Philip; Prozorov, Ruslan
2005-03-01
We present the magnetic response of Type-II superconductivity in the extreme pinning limit, where screening currents within an order of magnitude of the Ginzburg-Landau depairing critical current density develop upon the application of a magnetic field. We show that this "super-hard" limit is well approximated in highly disordered, cold-drawn Nb wire, whose magnetization response is characterized by a cascade of Meissner-like phases, each terminated by a catastrophic collapse of the magnetization. Direct magneto-optic measurements of the flux penetration depth in the virgin magnetization branch are in excellent agreement with the exponential model, in which Jc(B) = Jco exp(-B/Bo), where Jco ≈ 5x10^6 A/cm^2 for Nb. The implications for the fundamental limiting hardness of a superconductor will be discussed.
Genetic-based unit commitment algorithm
Maifeld, T.T.; Sheble, G.B.
1996-08-01
This paper presents a new unit commitment scheduling algorithm. The proposed algorithm consists of a genetic algorithm with domain-specific mutation operators and can easily accommodate any constraint that can be true-costed. Robustness of the proposed algorithm is demonstrated by comparison with a Lagrangian relaxation unit commitment algorithm on three different utilities. Results show that the proposed algorithm finds good unit commitment schedules in a reasonable amount of computation time. An explanation of the true-costing approach is included in the appendix.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem; it has become a typical multi-constraint, multi-objective reticulate optimization decision-making problem under many influencing factors and constraints, and so far little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint, multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, providing a new method for the problem of multi-constraint, multi-objective fault diagnosis.
Periodically kicked hard oscillators.
Cecchi, G. A.; Gonzalez, D. L.; Magnasco, M. O.; Mindlin, G. B.; Piro, O.; Santillan, A. J.
1993-01-01
A model of a hard oscillator with an analytic solution is presented, and its behavior under periodic kicking, for which a closed-form stroboscopic map can be obtained, is studied. It is shown that the general structure of such an oscillator includes four distinct regions; the outer two regions correspond to very small or very large amplitude of the external force and match the corresponding regions in soft oscillators (invertible degree-one and degree-zero circle maps, respectively). There are two new regions for intermediate amplitude of the forcing. Region 3 corresponds to moderately high forcing and is intrinsic to hard oscillators; it is characterized by discontinuous circle maps with a flat segment. Region 2 (moderately low forcing) has a certain resemblance to a similar region in soft oscillators (noninvertible degree-one circle maps); however, the limit set of the dynamics in this region is not a circle but a branched manifold, obtained as the tangent union of a circle and an interval. The topological structure of this object is generated by the finite size of the repelling set and is therefore also intrinsic to hard oscillators.
Mansur, Louis K; Bhattacharya, R; Blau, Peter Julian; Clemons, Art; Eberle, Cliff; Evans, H B; Janke, Christopher James; Jolly, Brian C; Lee, E H; Leonard, Keith J; Trejo, Rosa M; Rivard, John D
2010-01-01
High-energy ion beam surface treatments were applied to a selected group of polymers. Of the six materials in the present study, four were thermoplastics (polycarbonate, polyethylene, polyethylene terephthalate, and polystyrene) and two were thermosets (epoxy and polyimide). The particular epoxy evaluated in this work is one of the resins used in formulating fiber-reinforced composites for military helicopter blades. Measures of mechanical properties of the near-surface regions were obtained by nanoindentation hardness and pin-on-disk wear. Attempts were also made to measure erosion resistance by particle impact. All materials were hardness tested. Pristine materials were very soft, having values in the range of approximately 0.1 to 0.5 GPa. Ion beam treatment increased hardness by up to 50 times compared to untreated materials; for reference, all materials were hardened to values higher than those typical of stainless steels. Wear tests were carried out on three of the materials: PET, PI, and epoxy. On the ion-beam-treated epoxy no wear could be detected, whereas the untreated material showed significant wear.
On Random Betweenness Constraints
NASA Astrophysics Data System (ADS)
Goerdt, Andreas
Ordering constraints are analogous to instances of the satisfiability problem in conjunctive normal form, but instead of a Boolean assignment we consider a linear ordering of the variables in question. A clause becomes true under a linear ordering iff the relative ordering of its variables obeys the constraint considered.
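A betweenness clause of this kind can be checked against a candidate linear ordering in a few lines. This is a minimal sketch; the ternary-clause encoding follows the standard betweenness problem and is an assumption, not taken from the paper:

```python
def satisfies_betweenness(ordering, clause):
    """A betweenness clause (x, y, z) is true under a linear ordering iff
    y lies strictly between x and z in that ordering (either direction)."""
    pos = {v: i for i, v in enumerate(ordering)}
    x, y, z = clause
    return pos[x] < pos[y] < pos[z] or pos[z] < pos[y] < pos[x]
```

A formula of such clauses is satisfied by an ordering exactly when every clause passes this check, mirroring clause evaluation under a Boolean assignment in SAT.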
Creating Positive Task Constraints
ERIC Educational Resources Information Center
Mally, Kristi K.
2006-01-01
Constraints are characteristics of the individual, the task, or the environment that mold and shape movement choices and performances. Constraints can be positive--encouraging proficient movements or negative--discouraging movement or promoting ineffective movements. Physical educators must analyze, evaluate, and determine the effect various…
Credit Constraints in Education
ERIC Educational Resources Information Center
Lochner, Lance; Monge-Naranjo, Alexander
2012-01-01
We review studies of the impact of credit constraints on the accumulation of human capital. Evidence suggests that credit constraints have recently become important for schooling and other aspects of households' behavior. We highlight the importance of early childhood investments, as their response largely determines the impact of credit…
Constraint Reasoning Over Strings
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Golden, Keith; Pang, Wanlin
2003-01-01
This paper discusses an approach to representing and reasoning about constraints over strings. We discuss how many string domains can often be concisely represented using regular languages, and how constraints over strings, and domain operations on sets of strings, can be carried out using this representation.
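The key property behind this representation is that regular languages are closed under the domain operations a constraint solver needs; intersection of two string domains, for instance, follows from the standard product construction on DFAs. A minimal sketch (the dictionary-based DFA encoding is an assumption for illustration, not the paper's implementation):

```python
from collections import deque

def intersect_dfas(d1, d2):
    """Product construction: run both DFAs in lockstep; a pair state accepts
    iff both components accept, so the product recognizes L1 ∩ L2."""
    alphabet = d1["alphabet"] & d2["alphabet"]
    start = (d1["start"], d2["start"])
    delta, accept = {}, set()
    queue, seen = deque([start]), {start}
    while queue:
        q1, q2 = state = queue.popleft()
        if q1 in d1["accept"] and q2 in d2["accept"]:
            accept.add(state)
        for ch in alphabet:
            nxt = (d1["delta"][(q1, ch)], d2["delta"][(q2, ch)])
            delta[(state, ch)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return {"alphabet": alphabet, "start": start, "accept": accept,
            "delta": delta}

def accepts(dfa, s):
    q = dfa["start"]
    for ch in s:
        q = dfa["delta"][(q, ch)]
    return q in dfa["accept"]

# String domain 1: strings over {a, b} ending in 'a'
ends_in_a = {"alphabet": {"a", "b"}, "start": 0, "accept": {1},
             "delta": {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 0}}
# String domain 2: even-length strings over {a, b}
even_len = {"alphabet": {"a", "b"}, "start": 0, "accept": {0},
            "delta": {(0, "a"): 1, (0, "b"): 1, (1, "a"): 0, (1, "b"): 0}}
both = intersect_dfas(ends_in_a, even_len)
```

Because each domain stays a concise automaton rather than an enumerated set of strings, propagation over string constraints remains tractable even when the domains are infinite.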
NASA Technical Reports Server (NTRS)
Stein, J. A.
1982-01-01
A simple tool stakes hard-steel parts--that is, forces one part into a recess on another, deforming the receiving part so that it restrains the inserted one. The tool allows small machine shops to stake hard steel without massive presses. It can be used, for example, to insert a ball and spring into a hard-steel snap-tool body such as that used to turn socket wrenches. Use is not limited to hard steel; the tool can also be used to assemble parts made of softer materials.
Join-Graph Propagation Algorithms
Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina
2010-01-01
The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057
Constructive neural network learning algorithms
Parekh, R.; Yang, Jihoon; Honavar, V.
1996-12-31
Constructive algorithms offer an approach for incremental construction of potentially minimal neural-network architectures for pattern classification tasks. These algorithms obviate the need for an ad hoc a priori choice of the network topology. Constructive algorithm design involves alternately augmenting the existing network topology by adding one or more threshold logic units and training the newly added threshold neuron(s) using a stable variant of the perceptron learning algorithm (e.g., the pocket algorithm, the thermal perceptron, or the barycentric correction procedure). Several constructive algorithms, including tower, pyramid, tiling, upstart, and perceptron cascade, have been proposed for 2-category pattern classification. These algorithms differ in their topological and connectivity constraints as well as the training strategies used for individual neurons.
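Of the stable perceptron variants mentioned, the pocket algorithm is the simplest: run ordinary perceptron updates, but keep a copy ("the pocket") of the best weights seen so far. A minimal sketch for training a single threshold unit, with the data layout and epoch count as illustrative assumptions:

```python
import random

def predict(w, x):
    """Threshold unit: sign of w·x + bias (bias stored as the last weight)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1 if s >= 0 else -1

def pocket_perceptron(data, epochs=500):
    """Pocket algorithm: perceptron updates on randomly drawn samples,
    keeping the best-so-far weights, which is what makes it a stable unit
    trainer for constructive methods.
    data: list of (features, label) pairs with label in {-1, +1}."""
    dim = len(data[0][0])
    w = [0.0] * (dim + 1)                       # last entry is the bias
    pocket, pocket_errors = w[:], len(data) + 1
    for _ in range(epochs):
        x, y = random.choice(data)
        if predict(w, x) != y:                  # classic perceptron update
            w = [wi + y * xi for wi, xi in zip(w, x)] + [w[-1] + y]
        errors = sum(predict(w, xi) != yi for xi, yi in data)
        if errors < pocket_errors:              # pocket the better weights
            pocket, pocket_errors = w[:], errors
    return pocket
```

In a constructive scheme such as tower or tiling, each newly added unit would be trained this way on the residual classification task before the next unit is added.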
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
Multifrequency electrical impedance tomography using spectral constraints.
Malone, Emma; Sato Dos Santos, Gustavo; Holder, David; Arridge, Simon
2014-02-01
Multifrequency electrical impedance tomography (MFEIT) exploits the dependence of tissue impedance on frequency to recover an image of conductivity. MFEIT could provide emergency diagnosis of pathologies such as acute stroke, brain injury and breast cancer. We present a method for performing MFEIT using spectral constraints. Boundary voltage data is employed directly to reconstruct the volume fraction distribution of component tissues using a nonlinear method. Given that the reconstructed parameter is frequency independent, this approach allows for the simultaneous use of all multifrequency data, thus reducing the degrees of freedom of the reconstruction problem. Furthermore, this method allows for the use of frequency difference data in a nonlinear reconstruction algorithm. Results from empirical phantom measurements suggest that our fraction reconstruction method points to a new direction for the development of multifrequency EIT algorithms in the case that the spectral constraints are known, and may provide a unifying framework for static EIT imaging.
Total-variation regularization with bound constraints
Chartrand, Rick; Wohlberg, Brendt
2009-01-01
We present a new algorithm for bound-constrained total-variation (TV) regularization that in comparison with its predecessors is simple, fast, and flexible. We use a splitting approach to decouple TV minimization from enforcing the constraints. Consequently, existing TV solvers can be employed with minimal alteration. This also makes the approach straightforward to generalize to any situation where TV can be applied. We consider deblurring of images with Gaussian or salt-and-pepper noise, as well as Abel inversion of radiographs with Poisson noise. We incorporate previous iterative reweighting algorithms to solve the TV portion.
Sheinberg, Haskell
1986-01-01
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 weight percent boron carbide and the remainder a metal mixture comprising from 70 to 90 percent tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 to 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Sheinberg, H.
1983-07-26
A composition of matter having a Rockwell A hardness of at least 85 is formed from a precursor mixture comprising between 3 and 10 wt % boron carbide and the remainder a metal mixture comprising from 70 to 90% tungsten or molybdenum, with the remainder of the metal mixture comprising nickel and iron or a mixture thereof. The composition has a relatively low density of between 7 and 14 g/cc. The precursor is preferably hot pressed to yield a composition having greater than 100% of theoretical density.
Arching in tapped deposits of hard disks.
Pugnaloni, Luis A; Valluzzi, Marcos G; Valluzzi, Lucas G
2006-05-01
We simulate the tapping of a bed of hard disks in a rectangular box by using a pseudodynamic algorithm. In these simulations, arches are unambiguously defined and we can analyze their properties as a function of the tapping amplitude. We find that an order-disorder transition occurs within a narrow range of tapping amplitudes as has been seen by others. Arches are always present in the system although they exhibit regular shapes in the ordered regime. Interestingly, an increase in the number of arches does not always correspond to a reduction in the packing fraction. This is in contrast with what is found in three-dimensional systems.
Evolutionary Algorithm for Calculating Available Transfer Capability
NASA Astrophysics Data System (ADS)
Šošić, Darko; Škokljev, Ivan
2013-09-01
The paper presents an evolutionary algorithm for calculating available transfer capability (ATC). ATC is a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses. In this paper, MATLAB software is used to determine the ATC between any pair of buses in deregulated power systems without violating system constraints such as thermal, voltage, and stability constraints. The algorithm is applied to the IEEE 5-bus system and the IEEE 30-bus system.
Revisiting the definition of local hardness and hardness kernel.
Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W
2017-05-17
An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.
Fan, Quan-Yong; Yang, Guang-Hong
2017-01-01
State inequality constraints have hardly been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without requiring a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies.
Bech, A. O.; Kipling, M. D.; Heather, J. C.
1962-01-01
In Great Britain there have been no published reports of respiratory disease occurring amongst workers in the hard metal (tungsten carbide) industry. In this paper the clinical and radiological findings in six cases and the pathological findings in one are described. In two cases physiological studies indicated mild alveolar diffusion defects. Histological examination in a fatal case revealed diffuse pulmonary interstitial fibrosis with marked peribronchial and perivascular fibrosis and bronchial epithelial hyperplasia and metaplasia. Radiological surveys revealed the sporadic occurrence and low incidence of the disease. The alterations in respiratory mechanics which occurred in two workers following a day's exposure to dust are described. Airborne dust concentrations are given. The industrial process is outlined and the literature is reviewed. The toxicity of the metals is discussed, and our findings are compared with those reported from Europe and the United States. We are of the opinion that the changes which we would describe as hard metal disease are caused by the inhalation of dust at work and that the component responsible may be cobalt. PMID:13970036
1989-10-31
A static approach to constrained networks has been used to develop a dynamic theory of constraint networks for problem solving. This development resulted in six scientific publications. (KT)
Constraint Embedding Technique for Multibody System Dynamics
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Cheng, Michael K.
2011-01-01
Multibody dynamics play a critical role in simulation testbeds for space missions. There has been a considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthrough on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited only to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra approach to formulating the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system. Thus in essence, the new technique allows conversion of a system with
NASA Astrophysics Data System (ADS)
Nikulin, A.; Levin, V. L.; Shuler, A. E.; Carr, M. J.; West, M. E.
2010-12-01
The Klyuchevskoy Group of volcanoes in Central Kamchatka, Russia is among the largest volcanic features on the planet, yet its position within the Kamchatka subduction zone is hard to explain with a simple tectonic mechanism. Geochemical evidence indicates that lavas of the Klyuchevskoy Group are typical products of subduction-induced flux melting, yet the depth to the subducting Pacific plate beneath this volcanic center (150-200 km, Gorbatov et al., 1997) is much greater than the average depth of subduction associated with arc volcanism (108 +/- 14 km, Tatsumi and Eggins, 1995). We present seismological and geochemical constraints on the upper mantle structure beneath the Klyuchevskoy Group, based on receiver function analysis of data collected by the Partnership in Research and Education (PIRE) project (2006-2010) and geochemical characteristics derived from open databases. In past research we identified a planar dipping seismic feature in the mantle wedge, inclined at 35° and plunging north at a depth of ~110 km, that appears to be sharply bounded. We present improved constraints on this feature using data from 7 additional seismic stations and a modified receiver function migration algorithm. We also present a compilation of geochemical data, which indicates that the Klyuchevskoy Group depends strongly on flux-induced melting to sustain its level of activity. We argue that the observed velocity anomaly in the upper mantle is a source of melts for the Klyuchevskoy Group and that its presence may indicate the need to revisit the tectonic history and structure of Central Kamchatka.
Approximate resolution of hard numbering problems
Bailleux, O.; Chabrier, J.J.
1996-12-31
We present a new method for estimating the number of solutions of constraint satisfaction problems (CSPs). We use a stochastic forward checking algorithm to draw a sample of paths from a search tree. From this sample, we compute two values related to the number of solutions of a CSP instance: first, an unbiased estimate; second, a lower bound that holds with arbitrarily low error probability. We describe applications to the Boolean satisfiability problem and the Queens problem, and give experimental results for both.
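The unbiased-estimate idea can be sketched with a Knuth-style random probe: draw one random root-to-leaf path with forward-style consistency checks, and multiply the branch counts along it. All function names are ours; the n-queens problem (columns as variables, rows as values) serves as the illustration.

```python
import random

def consistent(rows, r):
    # check the new queen (column c, row r) against all earlier queens
    c = len(rows)
    return all(r != r2 and abs(r - r2) != c - c2 for c2, r2 in enumerate(rows))

def probe(n, rng):
    # one random path; the product of branch counts along it is an
    # unbiased estimate of the number of solutions
    rows, weight = [], 1
    for _ in range(n):
        choices = [r for r in range(n) if consistent(rows, r)]
        if not choices:
            return 0          # dead end contributes 0
        weight *= len(choices)
        rows.append(rng.choice(choices))
    return weight

def estimate_solutions(n, samples, seed=0):
    rng = random.Random(seed)
    return sum(probe(n, rng) for _ in range(samples)) / samples
```

Each complete consistent assignment is reached with probability equal to the inverse of its weight, so the sample mean is an unbiased estimator of the solution count; for 4-queens (exactly 2 solutions) the mean converges to 2.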
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem
Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.
2017-01-01
The Nurse Rostering Problem (NRP) is an NP-hard combinatorial scheduling problem that assigns a set of nurses to daily shifts subject to both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic called Directed Bee Colony Optimization using the Modified Nelder-Mead Method for solving the NRP. The authors use a multiobjective mathematical programming model and propose a methodology for the adaptation of Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is successfully applied to this multiobjective scheduling problem; it integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases varying in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849
Programming the gradient projection algorithm
NASA Technical Reports Server (NTRS)
Hargrove, A.
1983-01-01
The gradient projection method of numerical optimization which is applied to problems having linear constraints but nonlinear objective functions is described and analyzed. The algorithm is found to be efficient and thorough for small systems, but requires the addition of auxiliary methods and programming for large scale systems with severe nonlinearities. In order to verify the theoretical results a digital computer is used to simulate the algorithm.
Nonlinear equality constraints in feasible sequential quadratic programming
Lawrence, C.; Tits, A.
1994-12-31
In this talk we show convergence of a feasible sequential quadratic programming algorithm modified to handle smooth nonlinear equality constraints. The modification of the algorithm to incorporate equality constraints is based on a scheme proposed by Mayne and Polak and is implemented in fsqp/cfsqp, an optimization package that generates feasible iterates. Nonlinear equality constraints are treated as ≤-type constraints to be satisfied by all iterates, thus precluding any positive value, and an exact penalty term is added to the objective function to penalize negative values. For example, the problem: minimize f(x) s.t. h(x) = 0, with h(x) a scalar, is replaced by: minimize f(x) - c·h(x) s.t. h(x) ≤ 0. The modified problem is equivalent to the original problem when c is large enough (but finite). Such a value is determined automatically via iterative adjustments.
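As a hedged toy illustration of this reformulation (the example and numbers are ours, not from the talk): take f(x) = x² and h(x) = x - 1, and solve the modified problem minimize x² - c(x - 1) s.t. x - 1 ≤ 0 by crude grid search over the feasible set.

```python
def solve_modified(c, lo=-3.0, hi=1.0, steps=80001):
    # grid search over the feasible set {x : h(x) <= 0} = (-inf, 1]
    best_x, best_val = None, float("inf")
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        val = x * x - c * (x - 1.0)     # penalized objective f(x) - c*h(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x

# The unconstrained minimizer of x^2 - c(x-1) is x = c/2; once c >= 2 it
# hits the boundary x = 1, recovering the equality-constrained solution.
x_small_c = solve_modified(c=1.0)   # c too small: minimizer stays at x = 0.5
x_large_c = solve_modified(c=4.0)   # c large enough (but finite): x = 1
```

This makes the "large enough but finite c" remark concrete: below the threshold the penalized problem drifts off the constraint surface; above it, the two problems share the same minimizer.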
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Derivation of hard deadlines for real-time control systems
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Kim, Hagbae
1992-01-01
The computation-time delay in the feedback controller of a real-time control system may cause failure to update the control input during one or more sampling periods. A dynamic failure is said to occur if this delay exceeds a certain limit called a hard deadline. The authors present a method for calculating the hard deadlines in linear time-invariant control systems. To derive necessary conditions for (asymptotic) system stability, the state difference equation is modified based on an assumed maximum delay and the probability distribution of delays whose magnitudes are less than, or equal to, the assumed maximum delay. Moreover, the allowed state space (derived from given input and state constraints) is used to calculate the hard deadline as a function of time and the system state. The authors consider a one-shot delay model in which a single event causes a dynamic failure.
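A hedged, much-simplified version of the deadline idea (a deterministic constant delay and a scalar plant of our own choosing, not the paper's probabilistic analysis) can be sketched by checking stability of the delay-augmented system: x_{k+1} = a·x_k + b·u_k with delayed feedback u_k = -k_fb·x_{k-d}.

```python
import numpy as np

def spectral_radius_with_delay(a, b, k_fb, d):
    # augmented state [x_k, x_{k-1}, ..., x_{k-d}]
    n = d + 1
    A = np.zeros((n, n))
    A[0, 0] = a
    A[0, d] += -b * k_fb          # delayed feedback term
    for i in range(1, n):
        A[i, i - 1] = 1.0         # shift register of past states
    return max(abs(np.linalg.eigvals(A)))

def hard_deadline(a, b, k_fb, d_max=50):
    # largest delay (in samples) that is still stable; None if even d=0 fails
    deadline = None
    for d in range(d_max + 1):
        if spectral_radius_with_delay(a, b, k_fb, d) < 1.0:
            deadline = d
        else:
            break
    return deadline
```

For an open-loop-unstable plant the tolerable delay is finite, which is the sense in which a "hard deadline" on control-input updates exists.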
Janka hardness using nonstandard specimens
David W. Green; Marshall Begel; William Nelson
2006-01-01
Janka hardness determined on 1.5- by 3.5-in. specimens (2×4s) was found to be equivalent to that determined using the 2- by 2-in. specimen specified in ASTM D 143. Data are presented on the relationship between Janka hardness and the strength of clear wood. Analysis of historical data determined using standard specimens indicated no difference between side hardness...
Constraints on relaxion windows
NASA Astrophysics Data System (ADS)
Choi, Kiwoon; Im, Sang Hui
2016-12-01
We examine the low energy phenomenology of the relaxion solution to the weak scale hierarchy problem. Assuming that the Hubble friction is responsible for dissipation of the relaxion energy, we identify the cosmological relaxion window, the parameter region compatible with a given value of the acceptable number of inflationary e-foldings. We then discuss a variety of observational constraints on the relaxion window, including those from astrophysical and cosmological considerations. We find that the majority of the parameter space with a relaxion mass m_φ ≳ 100 eV or a relaxion decay constant f ≲ 10^7 GeV is excluded by existing constraints. There is an interesting parameter region with m_φ ~ 0.2-10 GeV and f ~ few-200 TeV, which is allowed by existing constraints but can be probed soon by future beam dump experiments, such as the SHiP experiment, or by improved EDM experiments.
Superselection from canonical constraints
NASA Astrophysics Data System (ADS)
Hall, Michael J. W.
2004-08-01
The evolution of both quantum and classical ensembles may be described via the probability density P on configuration space, its canonical conjugate S, and an ensemble Hamiltonian H̃[P, S]. For quantum ensembles this evolution is, of course, equivalent to the Schrödinger equation for the wavefunction, which is linear. However, quite simple constraints on the canonical fields P and S correspond to nonlinear constraints on the wavefunction. Such constraints act to prevent certain superpositions of wavefunctions from being realized, leading to superselection-type rules. Examples leading to superselection for energy, spin direction and 'classicality' are given. The canonical formulation of the equations of motion, in terms of a probability density and its conjugate, provides a universal language for describing classical and quantum ensembles on both continuous and discrete configuration spaces, and is briefly reviewed in an appendix.
Constraint algebra in bigravity
Soloviev, V. O.
2015-07-15
The number of degrees of freedom in bigravity theory is found for a potential of general form and also for the potential proposed by de Rham, Gabadadze, and Tolley (dRGT). This aim is pursued by constructing a Hamiltonian formalism and studying the Poisson algebra of constraints. A general potential leads to a theory featuring four first-class constraints generated by general covariance. The vanishing of the respective Hessian is a crucial property of the dRGT potential, and this leads to the appearance of two additional second-class constraints and, hence, to the exclusion of a superfluous degree of freedom, namely the Boulware-Deser ghost. The use of a method that permits avoiding an explicit expression for the dRGT potential is a distinctive feature of the present study.
Book Review: Constraining Constraints.
ERIC Educational Resources Information Center
Kessen, William; Reznick, J. Steven
1993-01-01
Reviews "The Epigenesis of Mind: Essays on Biology and Cognition" (S. Carey and R. Gelman, editors), a collection of essays that present a hard-scientific vision of cognitive development. Examines the arguments this work articulates and then determines the place it occupies in the analysis of the state of developmental psychology as presented in…
Overview: Hard Rock Penetration
Dunn, J.C.
1992-08-01
The Hard Rock Penetration program is developing technology to reduce the costs of drilling and completing geothermal wells. Current projects include: lost circulation control, rock penetration mechanics, instrumentation, and industry/DOE cost shared projects of the Geothermal Drilling organization. Last year, a number of accomplishments were achieved in each of these areas. A new flow meter being developed to accurately measure drilling fluid outflow was tested extensively during Long Valley drilling. Results show that this meter is rugged, reliable, and can provide useful measurements of small differences in fluid inflow and outflow rates. By providing early indications of fluid gain or loss, improved control of blow-out and lost circulation problems during geothermal drilling can be expected. In the area of downhole tools for lost circulation control, the concept of a downhole injector for injecting a two-component, fast-setting cementitious mud was developed. DOE filed a patent application for this concept during FY 91. The design criteria for a high-temperature potassium, uranium, thorium logging tool featuring a downhole data storage computer were established, and a request for proposals was submitted to tool development companies. The fundamental theory of acoustic telemetry in drill strings was significantly advanced through field experimentation and analysis. A new understanding of energy loss mechanisms was developed.
Numerical prediction of microstructure and hardness in multicycle simulations
Oddy, A.S.; McDill, J.M.J.
1996-06-01
Thermal-microstructural predictions are made and compared to physical simulations of heat-affected zones in multipass and weaved welds. The microstructural prediction algorithm includes reaustenitization kinetics, grain growth, austenite decomposition kinetics, hardness, and tempering. Microstructural simulation of weaved welds requires that the algorithm include transient reaustenitization, austenite decomposition for arbitrary thermal cycles including during reheating, and tempering. Material properties for each of these phenomena are taken from the best available literature. The numerical predictions are compared with the results of physical simulations made at the Metals Technology Laboratory, CANMET, on a Gleeble 1500 simulator. Thermal histories used in the physical simulations included single-pass welds, isothermal tempering, two-cycle, and three-cycle welds. The two- and three-cycle welds include temper-bead and weaved-weld simulations. A recurring theme in the analysis is the significant variation found in the material properties for the same grade of steel. This affected all the material properties used including those governing reaustenitization, austenite grain growth, austenite decomposition, and hardness. Hardness measurements taken from the literature show a variation of ±5 to 30 HV on the same sample. Alloy differences within the allowable range also led to hardness variations of ±30 HV for the heat-affected zone of multipass welds. The predicted hardnesses agree extremely well with those taken from the physical simulations.
Conflict-Aware Scheduling Algorithm
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Borden, Chester
2006-01-01
A conflict-aware scheduling algorithm is being developed to help automate the allocation of NASA's Deep Space Network (DSN) antennas and equipment that are used to communicate with interplanetary scientific spacecraft. The current approach for scheduling DSN ground resources seeks to provide an equitable distribution of tracking services among the multiple scientific missions and is very labor intensive. Due to the large (and increasing) number of mission requests for DSN services, combined with technical and geometric constraints, the DSN is highly oversubscribed. To help automate the process, and reduce the DSN and spaceflight project labor effort required for initiating, maintaining, and negotiating schedules, a new scheduling algorithm is being developed. The scheduling algorithm generates a "conflict-aware" schedule, where all requests are scheduled based on a dynamic priority scheme. The conflict-aware scheduling algorithm allocates all requests for DSN tracking services while identifying and maintaining the conflicts to facilitate collaboration and negotiation between spaceflight missions. This contrasts with traditional "conflict-free" scheduling algorithms that assign tracks that are not in conflict and mark the remainder as unscheduled. In the case where full schedule automation is desired (based on mission/event priorities, fairness, allocation rules, geometric constraints, and ground system capabilities/constraints), a conflict-free schedule can easily be created from the conflict-aware schedule by removing lower priority items that are in conflict.
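The contrast between conflict-aware and conflict-free schedules can be sketched on a toy single-resource model (the data model and request names are ours, not the DSN's): the conflict-aware schedule keeps every request and records its conflicts; a conflict-free schedule is then derived by dropping the lower-priority member of each conflicting pair.

```python
def overlaps(a, b):
    # requests are (name, priority, start, end) tuples on one resource
    return a[2] < b[3] and b[2] < a[3]

def conflict_aware(requests):
    # all requests stay scheduled; conflicts are identified, not resolved
    return {r[0]: [s[0] for s in requests if s is not r and overlaps(r, s)]
            for r in requests}

def conflict_free(requests):
    # derived schedule: keep higher-priority requests, drop conflicting rest
    kept = []
    for r in sorted(requests, key=lambda r: r[1], reverse=True):
        if all(not overlaps(r, k) for k in kept):
            kept.append(r)
    return sorted(k[0] for k in kept)

reqs = [("voyager", 3, 0, 4), ("mro", 2, 3, 6), ("dawn", 1, 8, 9)]
aware = conflict_aware(reqs)   # conflicts kept visible for negotiation
free = conflict_free(reqs)     # lower-priority conflicting item removed
```

Keeping the conflicts visible, rather than silently unscheduling, is what enables the negotiation step the abstract describes.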
Measuring the Hardness of Minerals
ERIC Educational Resources Information Center
Bushby, Jessica
2005-01-01
The author discusses the Mohs hardness scale, a comparative scale for minerals, whereby the softest mineral (talc) is placed at 1 and the hardest mineral (diamond) at 10, with all other minerals ordered in between according to their hardness. The development history of the scale is outlined, as well as a description of how the scale is used…
Generalized arc consistency for global cardinality constraint
Regin, J.C.
1996-12-31
A global cardinality constraint (gcc) is specified in terms of a set of variables X = {x_1, ..., x_p} which take their values in a subset of V = {v_1, ..., v_d}. It constrains the number of times a value v_i ∈ V is assigned to a variable in X to lie in an interval [l_i, c_i]. Cardinality constraints have proved very useful in many real-life problems, such as scheduling, timetabling, or resource allocation. A gcc is more general than a constraint of difference, which requires each interval to be [0, 1]. In this paper, we present an efficient way of implementing generalized arc consistency for a gcc. The algorithm we propose is based on a new theorem of flow theory. Its space complexity is O(|X| × |V|) and its time complexity is O(|X|² × |V|). We also show how this algorithm can be efficiently combined with other filtering techniques.
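A hedged companion sketch of the flow idea (not Régin's filtering algorithm itself, and specialized to upper bounds only, i.e., l_i = 0): such a gcc is satisfiable exactly when every variable can be matched to a value in its domain with each value v used at most c_v times, which can be checked with augmenting paths.

```python
def gcc_satisfiable(domains, caps):
    # domains: variable -> list of allowed values; caps: value -> max uses
    used = {v: [] for v in caps}          # value -> variables assigned to it

    def try_assign(x, seen):
        for v in domains[x]:
            if v in seen:
                continue
            seen.add(v)
            if len(used[v]) < caps[v]:
                used[v].append(x)
                return True
            # capacity full: try to re-route one current occupant elsewhere
            for y in used[v]:
                if try_assign(y, seen):
                    used[v].remove(y)
                    used[v].append(x)
                    return True
        return False

    return all(try_assign(x, set()) for x in domains)

doms = {"x1": ["a", "b"], "x2": ["a"], "x3": ["a", "b"]}
ok = gcc_satisfiable(doms, {"a": 1, "b": 1})    # two value slots, three vars
ok2 = gcc_satisfiable(doms, {"a": 2, "b": 1})   # feasible with a used twice
```

Régin's algorithm goes further: it uses the flow structure to prune, from each domain, every value that appears in no feasible assignment, rather than merely testing satisfiability.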
Quasi-Random Algorithms for Real-Time Spacecraft Motion Planning and Formation Flight
NASA Astrophysics Data System (ADS)
Frazzoli, E.
Many applications of current interest, including on-orbit servicing of large space structures, space-based interferometry, and distributed radar systems, involve several spacecraft maneuvering in close proximity to one another. Often the mission requires that the spacecraft be able to react quickly to changes in the environment, for example to reconfigure the formation to investigate a target of opportunity, or to prevent damage from a failure. In these cases, the spacecraft need to solve complex motion planning problems in real time, minimizing meaningful cost functions, such as time or fuel consumption, and taking into account constraints such as collision and plume impingement avoidance. Such problems are provably hard from a computational point of view, in the sense that any deterministic, complete algorithm will require exponential time to find a feasible solution. Recent advances in the robotics field have provided a new class of algorithms based on randomization, which provide computational tractability by relaxing the completeness requirement to probabilistic completeness (i.e., the solution will be found by such algorithms with arbitrarily high probability in polynomial time). Randomized algorithms have been developed and successfully applied by the author and other researchers to real-time motion planning problems involving autonomous air vehicles and spacecraft attitude motion. In this paper we present a new class of quasi-random algorithms which, combining optimal orbital maneuvers and deterministic sampling strategies, are able to provide extremely fast and efficient planners. Moreover, these planners are able to guarantee the safety of the space system, that is, the satisfaction of collision and plume impingement avoidance constraints, even in the face of finite computation times (i.e., when the planner has to be pre-empted). Formation reconfiguration examples will illustrate the effectiveness of the methods, and a discussion of the results will
De Bruijn Superwalk with Multiplicities Problem is NP-hard
2013-01-01
De Bruijn Superwalk with Multiplicities Problem is the problem of finding a walk in the de Bruijn graph that contains several given walks as subwalks and passes through each edge exactly the predefined number of times (equal to the multiplicity of this edge). This problem was stated in a talk by Paul Medvedev and Michael Brudno at the first RECOMB Satellite Conference on Open Problems in Algorithmic Biology in August 2012. In this paper we show that this problem is NP-hard. Combined with the results of previous works, this means that all known models for genome assembly are NP-hard. PMID:23734822
NASA Technical Reports Server (NTRS)
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Model Based Filtered Backprojection Algorithm: A Tutorial
2014-01-01
Purpose People have been wondering for a long time whether a filtered backprojection (FBP) algorithm is able to incorporate measurement noise in image reconstruction. The purpose of this tutorial is to develop such an FBP algorithm that is able to minimize an objective function with an embedded noise model. Methods An objective function is first set up to model measurement noise and to enforce some constraints so that the resultant image has some pre-specified properties. An iterative algorithm is used to minimize the objective function, and then the result of the iterative algorithm is converted into the Fourier domain, which in turn leads to an FBP algorithm. The model based FBP algorithm is almost the same as the conventional FBP algorithm, except for the filtering step. Results The model based FBP algorithm has been applied to low-dose x-ray CT, nuclear medicine, and real-time MRI applications. Compared with the conventional FBP algorithm, the model based FBP algorithm is more effective in reducing noise. Even though an iterative algorithm can achieve the same noise-reducing performance, the model based FBP algorithm is much more computationally efficient. Conclusions The model based FBP algorithm is an efficient and effective image reconstruction tool. In many applications, it can replace the state-of-the-art iterative algorithms, which usually have a heavy computational cost. The model based FBP algorithm is linear and it has advantages over a nonlinear iterative algorithm in parametric image reconstruction and noise analysis. PMID:25574421
Trajectory optimization in the presence of constraints
NASA Astrophysics Data System (ADS)
McQuade, Timothy E.
1989-06-01
In many aerospace problems, it is necessary to determine vehicle trajectories that satisfy constraints. Typically two types of constraints are of interest. First, it may be desirable to satisfy a set of boundary conditions. Second, it may be necessary to limit the motion of the vehicle so that physical limits and hardware limits are not exceeded. In addition to these requirements, it may be necessary to optimize some measure of vehicle performance. In this thesis, the square root sweep method is used to solve a discrete-time linear quadratic optimal control problem. The optimal control problem arises from a Mayer form continuous-time nonlinear optimization problem. A method for solving the optimal control problem is derived. Called the square root sweep algorithm, the solution consists of a set of backward recursions for a set of square root parameters. The square root sweep algorithm is shown to be capable of treating Mayer form optimization problems. Heuristics for obtaining solutions are discussed. The square root sweep algorithm is used to solve several example optimization problems.
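For context, the standard discrete-time LQ backward recursion that the square root sweep reparameterizes can be sketched as follows. This is the plain Riccati form, not the square-root variant the thesis develops, and the example system is ours: minimize the sum of x'Qx + u'Ru subject to x_{k+1} = A x_k + B u_k.

```python
import numpy as np

def backward_sweep(A, B, Q, R, N):
    # backward recursion for the cost-to-go matrix P and feedback gains K
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return P, gains[::-1]        # gains reordered forward in time

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P, gains = backward_sweep(A, B, Q, R, 200)
```

The square-root formulation instead propagates a factor of P, which improves numerical conditioning; the recursion structure (a backward sweep producing time-varying gains) is the same.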
Baryon Spectrum Analysis using Covariant Constraint Dynamics
NASA Astrophysics Data System (ADS)
Whitney, Joshua; Crater, Horace
2012-03-01
The energy spectrum of the baryons is determined by treating each of them as a three-body system with the interacting forces coming from a set of two-body potentials that depend on both the distance between the quarks and the spin and orbital angular momentum coupling terms. The Two Body Dirac equations of constraint dynamics derived by Crater and Van Alstine, matched with the quasipotential formalism of Todorov as the underlying two-body formalism, are used, as well as the three-body constraint formalism of Sazdjian, to integrate the three two-body equations into a single relativistically covariant three-body equation for the bound state energies. The results are analyzed and compared to experiment using a best fit method and several different algorithms, including a gradient approach and a Monte Carlo method. Results for all well-known baryons are presented and compared to experiment, with good accuracy.
Beta Backscatter Measures the Hardness of Rubber
NASA Technical Reports Server (NTRS)
Morrissey, E. T.; Roje, F. N.
1986-01-01
Nondestructive testing method determines hardness, on Shore scale, of room-temperature-vulcanizing silicone rubber. Measures backscattered beta particles; backscattered radiation count directly proportional to Shore hardness. Test set calibrated with specimen, Shore hardness known from mechanical durometer test. Specimen of unknown hardness tested, and radiation count recorded. Count compared with known sample to find Shore hardness of unknown.
Thin coatings and films hardness evaluation
NASA Astrophysics Data System (ADS)
Matyunin, V. M.; Marchenkov, A. Yu; Demidov, A. N.; Karimbekov, M. A.
2016-10-01
Existing methods for evaluating the hardness of thin coatings and films, based on indentation with a pyramidal indenter at various scale levels, are reviewed. The impact of the scale factor on hardness values is analyzed. Several existing hardness evaluation methods are verified experimentally with respect to the substrate hardness value and the “coating - substrate” composite hardness value.
Retinal image analysis based on mixture models to detect hard exudates.
Sánchez, Clara I; García, María; Mayo, Agustín; López, María I; Hornero, Roberto
2009-08-01
Diabetic Retinopathy is one of the leading causes of blindness in developed countries. Hard exudates have been found to be one of the most prevalent earliest clinical signs of retinopathy. Thus, automatic detection of hard exudates from retinal images is clinically significant. In this study, an automatic method to detect hard exudates is proposed. The algorithm is based on mixture models to dynamically threshold the images in order to separate exudates from background. A postprocessing technique, based on edge detection, is applied to distinguish hard exudates from cotton wool spots and other artefacts. We prospectively assessed the algorithm performance using a database of 80 retinal images with variable colour, brightness, and quality. The algorithm obtained a sensitivity of 90.2% and a positive predictive value of 96.8% using a lesion-based criterion. The image-based classification accuracy is also evaluated obtaining a sensitivity of 100% and a specificity of 90%.
Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach
NASA Technical Reports Server (NTRS)
Chien, S.; Gratch, J.
1994-01-01
One drawback to using constraint propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
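The relaxation loop described here can be sketched as follows (the solver, problem encoding, and utility weights are ours): constraints carry a utility, and the least important remaining constraint is dropped whenever the current set is unsatisfiable.

```python
from itertools import product

def solve(domains, constraints):
    # tiny exhaustive CSP solver: one satisfying assignment, or None
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assign = dict(zip(names, values))
        if all(pred(assign) for _, pred in constraints):
            return assign
    return None

def relax_until_satisfiable(domains, constraints):
    cs = sorted(constraints, key=lambda c: c[0], reverse=True)  # by utility
    while True:
        solution = solve(domains, cs)
        if solution is not None:
            return solution, [u for u, _ in cs]   # kept utilities
        cs.pop()   # relax the least important remaining constraint

domains = {"x": [1, 2, 3], "y": [1, 2, 3]}
constraints = [
    (10, lambda a: a["x"] + a["y"] == 4),   # utility 10: most important
    (5,  lambda a: a["x"] > a["y"]),
    (1,  lambda a: a["x"] == a["y"]),       # unsatisfiable with the above
]
sol, kept = relax_until_satisfiable(domains, constraints)
```

Dropping by ascending utility is one simple heuristic; the paper's scheduler uses domain heuristics to choose what to relax, but the loop structure is the same.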
Fault-Tolerant, Radiation-Hard DSP
NASA Technical Reports Server (NTRS)
Czajkowski, David
2011-01-01
Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to performance through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and in some cases significant speed reduction. The benefits of COTS high
General heuristics algorithms for solving capacitated arc routing problem
NASA Astrophysics Data System (ADS)
Fadzli, Mohammad; Najwa, Nurul; Masran, Hafiz
2015-05-01
In this paper, we try to determine the near-optimum solution for the capacitated arc routing problem (CARP). The NP-hard CARP is a special graph-theoretic problem that arises from street services such as residential waste collection and road maintenance. The purpose of the CARP model and its solution techniques is to find the optimum (or near-optimum) routing cost for a fleet of vehicles involved in the operation. In other words, finding a minimum-cost routing is compulsory in order to reduce the overall operation cost related to the vehicles. In this article, we provide a combination of various heuristic algorithms to solve a real case of CARP in waste collection and benchmark instances. These heuristics work as a central engine in finding initial or near-optimum solutions in the search space without violating the pre-set constraints. The results clearly show that these heuristic algorithms can provide good initial solutions in both real-life and benchmark instances.
A Framework for Dynamic Constraint Reasoning Using Procedural Constraints
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy D.
1999-01-01
Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
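As a rough illustration of the procedural-constraint idea, consider a procedure that enforces x + y = z over real-valued interval domains by narrowing bounds whenever a variable's domain changes. The (lo, hi) tuple representation and the function name are our assumptions for the sketch, not the framework's API:

```python
def narrow_sum(x, y, z):
    """Procedural constraint enforcing x + y = z over interval domains.

    Each domain is a (lo, hi) pair; the procedure returns narrowed domains,
    which is the kind of operation a procedural-constraint framework would
    invoke during propagation over a dynamic network.
    """
    xl, xh = x; yl, yh = y; zl, zh = z
    # z is bounded by x + y, and each addend by the other two variables.
    zl, zh = max(zl, xl + yl), min(zh, xh + yh)
    xl, xh = max(xl, zl - yh), min(xh, zh - yl)
    yl, yh = max(yl, zl - xh), min(yh, zh - xl)
    if xl > xh or yl > yh or zl > zh:
        raise ValueError("inconsistent constraint network")
    return (xl, xh), (yl, yh), (zl, zh)
```

An empty interval signals inconsistency, which is what lets the planner reject an invalid plan candidate.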
Hiding quiet solutions in random constraint satisfaction problems
Zdeborova, Lenka; Krzakala, Florent
2008-01-01
We study constraint satisfaction problems on the so-called planted random ensemble. We show that for a certain class of problems, e.g., graph coloring, many of the properties of the usual random ensemble are quantitatively identical in the planted random ensemble. We study the structural phase transitions and the easy-hard-easy pattern in the average computational complexity. We also discuss the finite temperature phase diagram, finding a close connection with the liquid-glass-solid phenomenology.
Control Loop Processor (CLP)-Platform for Hard Real-Time Space Applications
NASA Astrophysics Data System (ADS)
Ruiz, Marco; Fern, Jean-Brieuc; de la vallee Poussin, Henri
2016-08-01
Nowadays, the field of tight control loops, characterized by hard real-time constraints (loop frequency > 1 kHz) in conjunction with complex algorithmic needs, lacks a microprocessor that makes a software-programmable approach economically and technically viable. The control of electro-mechanical actuators is an example of a target application. A dedicated space-hardened microprocessor, called the Control Loop Processor (CLP), is currently under development, in close collaboration with ESA TEC/ED, and integrates several key features ensuring fully deterministic behaviour as well as embedded robustness/anomaly management in conjunction with vectorial IEEE-754 floating-point operations. A wide range of interfaces has also been selected to cover current and future space-oriented communication interfaces. A software development environment, integrating a C compiler and a code generator taking Simulink files as source, is currently being developed on the LLVM framework. The assembler and disassembler, based on the LLVM engine, are currently available and have been used for processor validation and for a motor control demonstration.
NASA Astrophysics Data System (ADS)
Jackson, C. S.; Hattab, M. W.; Huerta, G.
2014-12-01
Emergent constraints are observable quantities that provide some physical basis for testing or predicting how a climate model will respond to greenhouse gas forcing. Very few such constraints have been identified for the multi-model CMIP archive. Here we explore the question of whether constraints that apply to a single model, a perturbed parameter ensemble (PPE) of the Community Atmosphere Model (CAM3.1), can be applied to predicting the climate sensitivities of models within the CMIP archive. In particular, we construct our predictive patterns from multivariate EOFs of the CAM3.1 ensemble control climate. Multiple regression statistical models were created that do an excellent job of predicting CAM3.1 sensitivity to greenhouse gas forcing. However, these same patterns fail spectacularly to predict the sensitivities of models within the CMIP archive. We attribute this failure to several factors. The first, and perhaps the most important, is that the structures affecting climate sensitivity in CAM3.1 have a unique signature in the space of our multivariate EOF patterns that is unlike any other climate model. That is to say, we should not expect CAM3.1 to represent the way other models within the CMIP archive respond to greenhouse gas forcing. The second, perhaps related, reason is that the CAM3.1 PPE does a poor job of spanning the range of climates and responses found within the CMIP archive. We shall discuss the implications of these results for the prospect of finding emergent constraints within the CMIP archive. We will also discuss what this may mean for establishing uncertainties in climate projections.
Identifying Regions Based on Flexible User Defined Constraints.
Folch, David C; Spielman, Seth E
2014-01-01
The identification of regions is both a computational and conceptual challenge. Even with growing computational power, regionalization algorithms must rely on heuristic approaches in order to find solutions. Therefore, the constraints and evaluation criteria that define a region must be translated into an algorithm that can efficiently and effectively navigate the solution space to find the best solution. One limitation of many existing regionalization algorithms is a requirement that the number of regions be selected a priori. The max-p algorithm, introduced in Duque et al. (2012), does not have this requirement, and thus the number of regions is an output of, not an input to, the algorithm. In this paper we extend the max-p algorithm to allow for greater flexibility in the constraints available to define a feasible region, placing the focus squarely on the multidimensional characteristics of a region. We also modify technical aspects of the algorithm to provide greater flexibility in its ability to search the solution space. Using synthetic spatial and attribute data, we show the algorithm's broad ability to identify regions in maps of varying complexity. We also conduct a large-scale computational experiment to identify parameter settings that result in the greatest solution accuracy under various scenarios. The rules of thumb identified from the experiment produce maps that correctly assign areas to their "true" region with 94% average accuracy, with nearly 50% of the simulations reaching 100% accuracy.
Convergence, adaptation, and constraint.
Losos, Jonathan B
2011-07-01
Convergent evolution of similar phenotypic features in similar environmental contexts has long been taken as evidence of adaptation. Nonetheless, recent conceptual and empirical developments in many fields have led to a proliferation of ideas about the relationship between convergence and adaptation. Despite criticism from some systematically minded biologists, I reaffirm that convergence in taxa occupying similar selective environments often is the result of natural selection. However, convergent evolution of a trait in a particular environment can occur for reasons other than selection on that trait in that environment, and species can respond to similar selective pressures by evolving nonconvergent adaptations. For these reasons, studies of convergence should be coupled with other methods-such as direct measurements of selection or investigations of the functional correlates of trait evolution-to test hypotheses of adaptation. The independent acquisition of similar phenotypes by the same genetic or developmental pathway has been suggested as evidence of constraints on adaptation, a view widely repeated as genomic studies have documented phenotypic convergence resulting from change in the same genes, sometimes even by the same mutation. Contrary to some claims, convergence by changes in the same genes is not necessarily evidence of constraint, but rather suggests hypotheses that can test the relative roles of constraint and selection in directing phenotypic evolution. © 2011 The Author(s). Evolution© 2011 The Society for the Study of Evolution.
Diffusion LMS for Multitask Problems With Local Linear Equality Constraints
NASA Astrophysics Data System (ADS)
Nassif, Roula; Richard, Cedric; Ferrari, Andre; Sayed, Ali H.
2017-10-01
We consider distributed multitask learning problems over a network of agents where each agent is interested in estimating its own parameter vector, also called task, and where the tasks at neighboring agents are related according to a set of linear equality constraints. Each agent possesses its own convex cost function of its parameter vector and a set of linear equality constraints involving its own parameter vector and the parameter vectors of its neighboring agents. We propose an adaptive stochastic algorithm based on the projection gradient method and diffusion strategies in order to allow the network to optimize the individual costs subject to all constraints. Although the derivation is carried out for linear equality constraints, the technique can be applied to other forms of convex constraints. We conduct a detailed mean-square-error analysis of the proposed algorithm and derive closed-form expressions to predict its learning behavior. We provide simulations to illustrate the theoretical findings. Finally, the algorithm is employed for solving two problems in a distributed manner: a minimum-cost flow problem over a network and a space-time varying field reconstruction problem.
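A minimal sketch of the projection-gradient idea, assuming the simplest possible network: two agents with quadratic local costs and the single linear equality constraint x1 = x2 (the paper's algorithm operates over general graphs, stochastic gradients, and arbitrary linear equality constraints):

```python
def projected_gradient(a, b, mu=0.1, iters=500):
    """Alternate gradient steps on local costs with projection onto the
    constraint set.

    Minimizes (x1 - a)^2 + (x2 - b)^2 subject to x1 = x2; projecting onto
    the subspace {x1 = x2} amounts to replacing both entries with their
    average. The fixed point is the constrained optimum (a + b) / 2.
    """
    x1 = x2 = 0.0
    for _ in range(iters):
        # Local (deterministic) gradient steps at each agent.
        x1 -= mu * 2 * (x1 - a)
        x2 -= mu * 2 * (x2 - b)
        # Projection onto the linear equality constraint x1 = x2.
        x1 = x2 = (x1 + x2) / 2
    return x1
```

In the distributed algorithm, each agent would perform its gradient step with local data and the projection would be implemented through exchanges with its neighbors only.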
Constraint-based soft tissue simulation for virtual surgical training.
Tang, Wen; Wan, Tao Ruan
2014-11-01
Most surgical simulators employ a linear elastic model to simulate soft tissue material properties because of its computational efficiency and simplicity. However, soft tissues often have elaborate nonlinear material characteristics. Most prominently, soft tissues are soft and compliant at small strains, but after initial deformations they are very resistant to further deformation even under large forces. This material characteristic, referred to as nonlinear material incompliance, is computationally expensive and numerically difficult to simulate. This paper presents a constraint-based finite-element algorithm to simulate nonlinear incompliant tissue materials efficiently for interactive simulation applications such as virtual surgery. First, the proposed algorithm models the material stiffness behavior of soft tissues with a set of 3-D strain-limit constraints on deformation strain tensors. By enforcing a large number of geometric constraints to achieve the material stiffness, the algorithm reduces the task of solving stiff equations of motion with a general numerical solver to iteratively resolving a set of constraints with a nonlinear Gauss-Seidel iterative process. Second, as a Gauss-Seidel method processes constraints individually, a multiresolution hierarchy structure is used to speed up the global convergence of the large constrained system, accelerating the computation significantly and making interactive simulation possible at a high level of detail. Finally, this paper also presents a simple-to-build data acquisition system to validate simulation results with ex vivo tissue measurements. An interactive virtual reality-based simulation system is also demonstrated.
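The Gauss-Seidel treatment of individual constraints can be illustrated on a toy 1-D analogue: a chain of points whose segments may not stretch past a limit, resolved one constraint at a time in repeated sweeps. The setup (1-D positions, symmetric corrections) is our simplification of the paper's 3-D strain-tensor constraints:

```python
def gauss_seidel_limits(points, rest, max_stretch, sweeps=50):
    """Gauss-Seidel resolution of per-segment stretch-limit constraints.

    `points` holds 1-D positions along a chain; each segment may not
    stretch beyond rest * max_stretch. Each violated constraint is
    resolved individually by moving its two endpoints symmetrically, and
    the sweep is repeated until the system settles, mirroring how
    strain-limit constraints replace the solution of stiff equations of
    motion.
    """
    pts = list(points)
    limit = rest * max_stretch
    for _ in range(sweeps):
        for i in range(len(pts) - 1):
            d = pts[i + 1] - pts[i]
            if abs(d) > limit:
                corr = (abs(d) - limit) * (1 if d > 0 else -1) / 2
                pts[i] += corr
                pts[i + 1] -= corr
    return pts
```

Because each constraint resolution can re-violate a neighbor, convergence requires repeated sweeps; this is the global-convergence cost that the paper's multiresolution hierarchy is designed to reduce.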
An Aggregate Constraint Method for Inequality-constrained Least Squares Problems
NASA Astrophysics Data System (ADS)
Peng, Junhuan; Zhang, Hongping; Shong, Suli; Guo, Chunxi
2006-03-01
The inequality-constrained least squares (ICLS) problem can be solved by the simplex algorithm of quadratic programming. The ICLS problem may also be reformulated as a Bayesian problem and solved using the Bayesian principle. This paper proposes using the aggregate constraint method of non-linear programming to solve the ICLS problem by converting many inequality constraints into one equality constraint, which is then handled by a basic augmented Lagrangian algorithm for equality-constrained non-linear programming problems. Since the new approach finds the active constraints, we can derive the approximate algorithm-dependent statistical properties of the solution. As a result, some conclusions about the superiority of the estimator can be approximately made. Two simulated examples are given to show how to compute the approximate statistical properties and to show that reasonable inequality constraints can improve the results of a geodetic network with an ill-conditioned normal matrix.
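One standard way to aggregate many inequalities g_i(x) <= 0 into a single smooth condition is the log-sum-exp (Kreisselmeier-Steinhauser style) aggregate function; we use it here purely as an illustration, since the abstract does not specify which aggregate function the paper adopts:

```python
import math

def aggregate(gs, rho=100.0):
    """Smooth aggregate of inequality constraint values g_i(x) <= 0.

    G = (1/rho) * ln(sum_i exp(rho * g_i)) is a smooth upper bound on
    max_i g_i that tightens as rho grows, so the single condition
    G(x) <= 0 (approximately) replaces all m inequalities and can be
    handed to an equality/penalty-based solver.
    """
    m = max(gs)  # shift by the max for numerical stability
    return m + math.log(sum(math.exp(rho * (g - m)) for g in gs)) / rho
```

Because G is dominated by the largest g_i, the constraints active at the solution are exactly the ones that control the aggregate, which is consistent with the paper's remark that the approach identifies the active constraints.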
Nanoindentation hardness of mineralized tissues.
Oyen, Michelle L
2006-01-01
A series elastic and plastic deformation model [Sakai, M., 1999. The Meyer hardness: a measure for plasticity? Journal of Materials Research 14(9), 3630-3639] is used to deconvolute the resistance to plastic deformation from the plane strain modulus and contact hardness parameters obtained in a nanoindentation test. Different functional dependencies of contact hardness on the plane strain modulus are examined. Plastic deformation resistance values are computed from the modulus and contact hardness for engineering materials and mineralized tissues. Elastic modulus and plastic deformation resistance parameters are used to calculate elastic and plastic deformation components, and to examine the partitioning of indentation deformation between elastic and plastic. Both the numerical values of plastic deformation resistance and the direct computation of deformation partitioning reveal the intermediate mechanical responses of mineralized composites when compared with homogeneous engineering materials.
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Structure Constraints in a Constraint-Based Planner
NASA Technical Reports Server (NTRS)
Pang, Wan-Lin; Golden, Keith
2004-01-01
In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.
A novel constraint for thermodynamically designing DNA sequences.
Zhang, Qiang; Wang, Bin; Wei, Xiaopeng; Zhou, Changjun
2013-01-01
Biotechnological and biomolecular advances have introduced novel uses for DNA such as DNA computing, storage, and encryption. For these applications, DNA sequence design requires maximal desired (and minimal undesired) hybridizations, the formation of a new double-stranded DNA from two single DNA strands. Here, we propose a novel constraint for designing DNA sequences based on thermodynamic properties. Existing constraints for DNA design are based on the Hamming distance, a constraint that does not address the thermodynamic properties of the DNA sequence. Using a unique, improved genetic algorithm, we designed DNA sequence sets which satisfy different distance constraints and employ a free energy gap based on minimum free energy (MFE) to gauge DNA sequences against the set thermodynamic properties. When compared to the best constraints of the Hamming distance, our method yielded better thermodynamic qualities. We then used our improved genetic algorithm to obtain lower-bound DNA sequence sets. Here, we discuss the effects of the novel constraint parameters on the free energy gap.
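The classical Hamming-distance constraint that the paper improves upon is easy to state in code; checking a candidate set amounts to verifying a minimum pairwise distance (the paper's thermodynamic constraint would replace this with a free-energy-gap criterion computed from an MFE model):

```python
def hamming(s, t):
    """Hamming distance between two equal-length DNA sequences."""
    return sum(a != b for a, b in zip(s, t))

def satisfies_distance(codewords, d_min):
    """Classical combinatorial design constraint: every pair of sequences
    in the set must be at Hamming distance >= d_min."""
    return all(hamming(s, t) >= d_min
               for i, s in enumerate(codewords)
               for t in codewords[i + 1:])
```

The limitation the paper targets is visible here: two sequences can be far apart in Hamming distance yet still hybridize strongly, because the distance ignores base-pairing thermodynamics.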
Teaching Database Design with Constraint-Based Tutors
ERIC Educational Resources Information Center
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
Computerized Classification Testing under Practical Constraints with a Polytomous Model.
ERIC Educational Resources Information Center
Lau, C. Allen; Wang, Tianyou
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
NASA Astrophysics Data System (ADS)
Schmidt, Greg; Witham, Brandon; Valore, Jason; Holland, Ben; Dalton, Jason
2012-06-01
Military, police, and industrial surveillance operations could benefit from having sensors deployed in configurations that maximize collection capability. We describe a surveillance planning approach that optimizes sensor placements to collect information about targets of interest by using information from predictive geospatial analytics, the physical environment, and surveillance constraints. We designed a tool that accounts for multiple sensor aspects (collection footprints, groupings, and characteristics); multiple optimization objectives (surveillance requirements and predicted threats); and multiple constraints (sensing, physical environment including terrain, and geographic surveillance constraints). The tool uses a discrete grid model to keep track of geographic sensing objectives and constraints, and from these, estimate probabilities for collection containment and detection. We devised an evolutionary algorithm and polynomial time approximation schemes (PTAS) to optimize the tool variables above to generate the positions and aspect for a network of sensors. We also designed algorithms to coordinate a mixture of sensors with different competing objectives, competing constraints, couplings, and proximity constraints.
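On a discrete grid model like the one described, a natural baseline for placing k sensors is greedy maximum coverage; the data layout below (a map from candidate site to the set of covered cells) is our assumption, and this greedy step is the kind of building block an evolutionary algorithm or PTAS would refine:

```python
def greedy_placement(coverage, k):
    """Greedy maximum-coverage sensor placement on a discrete grid.

    `coverage` maps each candidate sensor site to the set of grid cells
    it observes. Greedily pick the k sites that each add the most
    still-uncovered cells; for this submodular objective the greedy
    choice carries the classical (1 - 1/e) approximation guarantee.
    """
    chosen, covered = [], set()
    for _ in range(k):
        site = max(coverage,
                   key=lambda s: (len(coverage[s] - covered)
                                  if s not in chosen else -1))
        chosen.append(site)
        covered |= coverage[site]
    return chosen, covered
```

Real placements would weight cells by threat predictions and prune sites violating terrain or proximity constraints before the greedy selection runs.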
A Framework for Optimal Control Allocation with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc
2010-01-01
Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
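The core allocation step, before any structural-load feedback is added, can be sketched for a single moment axis via the minimum-norm pseudo-inverse followed by clipping to position limits; the one-axis setup and function signature are our simplifications, not the paper's framework:

```python
def allocate(b, m, limits):
    """Minimum-norm control allocation for a single moment axis.

    Solves B u = m, with B a 1 x n row of control effectiveness values,
    via the pseudo-inverse u = B^T (B B^T)^{-1} m, then clips each
    effector command to its position limit. A structural-load constraint,
    as proposed in the paper, would impose further bounds derived from
    real-time load feedback.
    """
    bbt = sum(bi * bi for bi in b)
    u = [bi * m / bbt for bi in b]
    return [max(-l, min(l, ui)) for ui, l in zip(u, limits)]
```

Note that naive clipping after the fact can leave the commanded moment unmet; optimal allocators instead fold the limits into the optimization itself, which is what makes adding load constraints to that formulation natural.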
Noise reduction in adaptive-optics imagery with the use of support constraints
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Roggemann, Michael C.
1995-02-01
The use of support constraints for noise reduction in images obtained with telescopes that use adaptive optics for atmospheric correction is discussed. Noise covariances are derived for this type of data, including the effects of photon noise and CCD read noise. The effectiveness of support constraints in achieving noise reduction is discussed in terms of these noise properties and in terms of the types of algorithms used to enforce the support constraint. Both a convex-projections and a cost-function minimization algorithm are used to enforce the support constraints, and it is shown with computer simulations and field data that the cost-function algorithm produces artifacts in the reconstructions. The convex-projections algorithm produced mean-square-error decreases in the image domain of approximately 10% for high light levels but essentially no error decrease for low light levels. We emphasize images that are well resolved by the telescope and adaptive-optics system.
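The convex-projections idea can be sketched on a 1-D signal by alternating two projections: onto the support constraint (zero outside the known object support) and onto a data-consistency ball around the measurement. Both the vector representation and the choice of a simple Euclidean ball are our assumptions for illustration:

```python
import math

def pocs(measured, support, eps, iters=20):
    """Alternating convex projections: support constraint plus a
    data-consistency ball of radius eps around the measured image.

    Both sets are convex, so when their intersection is nonempty the
    iteration converges to a point in it. Noise carried by pixels
    outside the support is removed outright, which is the mechanism
    behind the reported mean-square-error reduction.
    """
    x = list(measured)
    for _ in range(iters):
        # P1: zero everything outside the known support.
        x = [v if s else 0.0 for v, s in zip(x, support)]
        # P2: pull back toward the measurement if we moved further than eps.
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, measured)))
        if d > eps:
            t = eps / d
            x = [b + t * (a - b) for a, b in zip(x, measured)]
    return x
```

Noise inside the support is untouched by P1, which is consistent with the paper's finding that the benefit shrinks when the read-noise-dominated (low light) regime spreads error across the whole support.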
Constraints influencing sports wheelchair propulsion performance and injury risk
2013-01-01
The Paralympic Games are the pinnacle of sport for many athletes with a disability. A potential issue for many wheelchair athletes is how to train hard to maximise performance while also reducing the risk of injuries, particularly to the shoulder due to the accumulation of stress placed on this joint during activities of daily living, training and competition. The overall purpose of this narrative review was to use the constraints-led approach of dynamical systems theory to examine how various constraints acting upon the wheelchair-user interface may alter hand rim wheelchair performance during sporting activities, and to a lesser extent, their injury risk. As we found no studies involving Paralympic athletes that have directly utilised the dynamical systems approach to interpret their data, we have used this approach to select some potential constraints and discussed how they may alter wheelchair performance and/or injury risk. Organism constraints examined included player classifications, wheelchair setup, training and intrinsic injury risk factors. Task constraints examined the influence of velocity and types of locomotion (court sports vs racing) in wheelchair propulsion, while environmental constraints focused on forces that tend to oppose motion such as friction and surface inclination. Finally, the ecological validity of the research studies assessing wheelchair propulsion was critiqued prior to recommendations for practice and future research being given. PMID:23557065
Applying a New Parallelized Version of PSO Algorithm for Electrical Power Transmission
NASA Astrophysics Data System (ADS)
Zemzami, M.; Makhloufi, A.; Elhami, N.; Elhami, A.; Itmi, M.; Hmina, N.
2017-06-01
In this paper, the optimization of an electric power transmission material is presented, giving specific consideration to material configuration and characteristics. The nature of electric power transmission networks makes them hard to manage, creating a need for optimization. The optimization problem considered in this paper is to improve the performance and reliability of the electricity pylon: the objective is to maximize resistance to load while reducing material usage and cost. For this purpose, we suggest a new version of the PSO algorithm that improves its performance by introducing parallelization combined with the concept of evolutionary neighborhoods. According to the experimental results, the proposed method is effective and outperforms basic PSO in terms of solution quality, accuracy, constraint handling, and computation time.
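The "basic PSO" baseline that the parallelized variant is measured against can be sketched as a short serial loop; the inertia and acceleration coefficients below are conventional textbook values, not the paper's settings:

```python
import random

def pso(f, dim, n=20, iters=200, seed=1):
    """Minimal serial particle swarm optimization.

    Each particle tracks its personal best; the swarm shares a global
    best. The paper's variant parallelizes this loop and adds
    evolutionary neighborhoods on top of it.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration (assumed values)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval
```

The per-particle updates are independent within an iteration, which is the structure that makes the swarm a natural target for parallelization.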
Towards Fast, Scalable Hard Particle Monte Carlo Simulations on GPUs
NASA Astrophysics Data System (ADS)
Anderson, Joshua A.; Irrgang, M. Eric; Glaser, Jens; Harper, Eric S.; Engel, Michael; Glotzer, Sharon C.
2014-03-01
Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. We discuss the implementation of Monte Carlo for arbitrary hard shapes in HOOMD-blue, a GPU-accelerated particle simulation tool, to enable million particle simulations in a field where thousands is the norm. In this talk, we discuss our progress on basic parallel algorithms, optimizations that maximize GPU performance, and communication patterns for scaling to multiple GPUs. Research applications include colloidal assembly and other uses in materials design, biological aggregation, and operations research.
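The serial trial-move kernel at the heart of hard-particle Monte Carlo is simple: propose a random displacement and accept it only if it creates no overlap (for hard shapes the Boltzmann factor is 0 or 1, so no energy evaluation is needed). This 2-D hard-disk sketch is ours, not HOOMD-blue code:

```python
import random

def hard_disk_mc(pos, radius, box, steps, delta=0.1, seed=0):
    """Serial Metropolis Monte Carlo for hard disks in a box.

    Proposes single-particle moves and rejects any that overlap another
    disk. GPU implementations parallelize exactly this trial-move kernel
    across non-interacting spatial cells.
    """
    rng = random.Random(seed)
    pos = [p[:] for p in pos]
    d2 = (2 * radius) ** 2  # minimum allowed squared center distance
    for _ in range(steps):
        i = rng.randrange(len(pos))
        nx = min(max(pos[i][0] + rng.uniform(-delta, delta), radius), box - radius)
        ny = min(max(pos[i][1] + rng.uniform(-delta, delta), radius), box - radius)
        if all(j == i or (pos[j][0] - nx) ** 2 + (pos[j][1] - ny) ** 2 >= d2
               for j in range(len(pos))):
            pos[i] = [nx, ny]
    return pos
```

The statistical-sampling dependence between successive moves is what makes the method "inherently serial"; parallel schemes must partition space so that simultaneously moved particles cannot interact.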
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines the First Suite Heuristic Algorithm (FSHA) and the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that HAFSA is a fast and efficient algorithm for winner determination. Compared with the Ant Colony Optimization algorithm, it performs well and has broad application prospects.
A Constraint-Based Planner for Data Production
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Golden, Keith
2005-01-01
This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. The algorithm performs backtracking at two nested levels: the outer backtracking follows the structure of the planning graph to select planner subgoals and actions to achieve them, and the inner backtracking searches inside a subproblem associated with a selected action to find action parameter values. We show this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.
Adaptive laser link reconfiguration using constraint propagation
NASA Technical Reports Server (NTRS)
Crone, M. S.; Julich, P. M.; Cook, L. M.
1993-01-01
This paper describes Harris AI research performed on the Adaptive Link Reconfiguration (ALR) study for Rome Lab, and focuses on the application of constraint propagation to the problem of link reconfiguration for the proposed space-based Strategic Defense System (SDS) Brilliant Pebbles (BP) communications system. According to the concept of operations at the time of the study, laser communications will exist between BPs and to ground entry points. Long-term links typical of RF transmission will not exist. This study addressed an initial implementation of BPs based on the Global Protection Against Limited Strikes (GPALS) SDI mission. The number of satellites and rings studied was representative of this problem. An orbital dynamics program was used to generate line-of-sight data for the modeled architecture. This was input into a discrete event simulation implemented in the Harris-developed COnstraint Propagation Expert System (COPES) Shell, developed initially on the Rome Lab BM/C3 study. Using a model of the network and several heuristics, the COPES shell was used to develop the Heuristic Adaptive Link Ordering (HALO) algorithm to rank and order potential laser links according to probability of communication. A reduced set of links based on this ranking would then be used by a routing algorithm to select the next hop. This paper includes an overview of constraint propagation as an Artificial Intelligence technique and its embodiment in the COPES shell. It describes the design and implementation of both the simulation of the GPALS BP network and the HALO algorithm in COPES. This is described using Data Flow Diagrams, State Transition Diagrams, and Structured English PDL. It describes a laser communications model and the heuristics involved in rank-ordering the potential communication links. The generation of simulation data is described along with its interface via COPES to the Harris-developed View Net graphical tool for visual analysis of communications
WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations
NASA Astrophysics Data System (ADS)
Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi
We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.
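The Dijkstra routine that both the delay-constrained CWST branch and the baseline DMHT comparison rest on is the standard shortest-path computation; a compact sketch with an adjacency-list graph representation of our choosing:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from `src` by Dijkstra's algorithm.

    `graph` maps each node to a list of (neighbor, cost) pairs. In a
    delay-constrained multicast setting, the per-destination distances
    computed here bound the delay of the path chosen for that
    destination.
    """
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

A Steiner-style tree instead biases route selection toward reusing already-selected links, which is what improves link utilization at the cost of longer (possibly delay-violating) paths.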
ERIC Educational Resources Information Center
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
Asteroseismic constraints for Gaia
NASA Astrophysics Data System (ADS)
Creevey, O. L.; Thévenin, F.
2012-12-01
Distances from the Gaia mission will no doubt improve our understanding of stellar physics by providing an excellent constraint on the luminosity of the star. However, it is also clear that high-precision stellar properties from, for example, asteroseismology will provide a needed input constraint for calibrating the methods that Gaia will use, e.g. stellar models or GSP_Phot. For solar-like stars (F, G, K IV/V), asteroseismic data deliver at least two very important quantities: (1) the average large frequency separation <Δν> and (2) the frequency corresponding to the maximum of the modulated-amplitude spectrum, ν_max. Both of these quantities are related directly to stellar parameters (radius and mass) and in particular to their combinations (gravity and density). We show how the precision in <Δν>, ν_max, and the atmospheric parameters T_eff and [Fe/H] affects the determination of gravity (log g) for a sample of well-known stars. We find that log g can be determined to better than 0.02 dex accuracy for our sample while considering precisions in the data expected for V ~ 12 stars from Kepler data. We also derive masses and radii which are accurate to within 1σ of the accepted values. This study validates the subsequent use of all of the available asteroseismic data on solar-like stars from the Kepler field (>500 IV/V stars) to provide a very important constraint for Gaia calibration of GSP_Phot through the use of log g. We note that while we concentrate on IV/V stars, both the CoRoT and Kepler fields contain asteroseismic data on thousands of giant stars which will also provide useful calibration measures.
Practical Cleanroom Operations Constraints
NASA Technical Reports Server (NTRS)
Hughes, David; Ginyard, Amani
2007-01-01
This viewgraph presentation reviews the GSFC cleanroom facility, i.e., the Spacecraft Systems Development and Integration Facility (SSDIF), with particular interest in its use during the development of the Wide Field Camera 3 (WFC3). The SSDIF is described and a diagram of the SSDIF is shown. A Constraint Table was created for consistency within the Contamination Control Team; this table is shown. Another table shows the activities that were allowed during integration for a given WFC3 condition and activity location. Three decision trees are shown for different phases of the work: (1) hardware relocation, (2) hardware work, and (3) contamination control operations.
Approximate learning algorithm in Boltzmann machines.
Yasuda, Muneki; Tanaka, Kazuyuki
2009-11-01
Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is an NP-hard problem, so in general approximate methods must be used to construct practical learning algorithms. In this letter, we propose new and practical learning algorithms for Boltzmann machines based on the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean-field methods. Finally, we show the validity of our algorithm in numerical experiments.
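The learning gradient the letter approximates is the moment-matching condition ΔW_ij ∝ ⟨s_i s_j⟩_data − ⟨s_i s_j⟩_model. A minimal sketch for a tiny ±1 Boltzmann machine, computing the model moments by brute-force enumeration (the intractable step that the paper replaces with belief propagation plus the linear response approximation), might look like:

```python
import itertools
import math

def model_moments(W, b):
    """Exact <s_i> and <s_i s_j> of a +/-1 Boltzmann machine by
    enumeration over all 2^n states -- tractable only for tiny n."""
    n = len(b)
    Z = 0.0
    m = [0.0] * n
    C = [[0.0] * n for _ in range(n)]
    for s in itertools.product([-1, 1], repeat=n):
        E = -sum(b[i] * s[i] for i in range(n)) \
            - sum(W[i][j] * s[i] * s[j]
                  for i in range(n) for j in range(i + 1, n))
        w = math.exp(-E)
        Z += w
        for i in range(n):
            m[i] += w * s[i]
            for j in range(n):
                C[i][j] += w * s[i] * s[j]
    m = [x / Z for x in m]
    C = [[x / Z for x in row] for row in C]
    return m, C

def learning_step(W, b, data, lr=0.1):
    """One gradient-ascent step on the log-likelihood:
    dW_ij = <s_i s_j>_data - <s_i s_j>_model (and likewise for b)."""
    n = len(b)
    m_mod, C_mod = model_moments(W, b)
    m_dat = [sum(s[i] for s in data) / len(data) for i in range(n)]
    C_dat = [[sum(s[i] * s[j] for s in data) / len(data)
              for j in range(n)] for i in range(n)]
    for i in range(n):
        b[i] += lr * (m_dat[i] - m_mod[i])
        for j in range(i + 1, n):
            W[i][j] += lr * (C_dat[i][j] - C_mod[i][j])
    return W, b
```

Replacing `model_moments` with an approximate inference routine is exactly where the BP/linear-response machinery of the paper would slot in.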
Practical engineering of hard spin-glass instances
NASA Astrophysics Data System (ADS)
Marshall, Jeffrey; Martin-Mayor, Victor; Hen, Itay
2016-05-01
Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since then become a coveted, albeit elusive goal. Recent studies have shown that the so far inconclusive results regarding a quantum enhancement may have been partly due to the benchmark problems used being unsuitable. In particular, these problems had inherently too simple a structure, allowing both traditional resources and quantum annealers to solve them with no special effort. The need has therefore arisen for the generation of harder benchmarks which would hopefully possess the discriminative power to separate classical scaling of performance with size from quantum scaling. We introduce here a practical technique for the engineering of extremely hard spin-glass Ising-type problem instances that does not require 'cherry picking' from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm that solves it. We demonstrate the genuine thermal hardness of our generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
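The core idea, treating instance generation itself as an optimization problem, can be illustrated with a toy hill climb over the couplings of a small Ising instance. The hardness proxy (failure rate of greedy single-spin-flip restarts) and all parameters below are illustrative assumptions, not the authors' heuristic:

```python
import itertools
import random

def ising_energy(J, s, n):
    return sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def ground_energy(J, n):
    """Brute-force ground-state energy (tractable only for small n)."""
    return min(ising_energy(J, s, n)
               for s in itertools.product([-1, 1], repeat=n))

def hardness(J, n, restarts=30, steps=50, rng=None):
    """Proxy hardness: fraction of greedy single-spin-flip restarts
    that fail to reach the ground state."""
    rng = rng or random.Random(0)
    E0 = ground_energy(J, n)
    fails = 0
    for _ in range(restarts):
        s = [rng.choice([-1, 1]) for _ in range(n)]
        for _ in range(steps):
            i = rng.randrange(n)
            field = sum(J[min(i, j)][max(i, j)] * s[j]
                        for j in range(n) if j != i)
            s[i] = -1 if field > 0 else 1   # greedy: anti-align with field
        if ising_energy(J, s, n) > E0 + 1e-9:
            fails += 1
    return fails / restarts

def harden(n=6, iters=200, seed=1):
    """Hill-climb in instance space: mutate one coupling at a time and
    keep the change whenever the proxy hardness does not decrease."""
    rng = random.Random(seed)
    J = [[rng.choice([-1, 1]) if j > i else 0 for j in range(n)]
         for i in range(n)]
    h = hardness(J, n)
    for _ in range(iters):
        i = rng.randrange(n - 1)
        j = rng.randrange(i + 1, n)
        old = J[i][j]
        J[i][j] = rng.choice([-1, 0, 1])
        h2 = hardness(J, n)
        if h2 >= h:
            h = h2
        else:
            J[i][j] = old
    return J, h
```

The paper's method works at scales where the ground state cannot be enumerated; this sketch only conveys the meta-optimization structure.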
TRACON Aircraft Arrival Planning and Optimization Through Spatial Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Bergh, Christopher P.; Krzeczowski, Kenneth J.; Davis, Thomas J.; Denery, Dallas G. (Technical Monitor)
1995-01-01
A new aircraft arrival planning and optimization algorithm has been incorporated into the Final Approach Spacing Tool (FAST) in the Center-TRACON Automation System (CTAS) developed at NASA-Ames Research Center. FAST simulations have been conducted over three years involving full-proficiency, level five air traffic controllers from around the United States. From these simulations an algorithm, called Spatial Constraint Satisfaction, has been designed, coded, and tested, and will soon begin field evaluation at the Dallas-Fort Worth and Denver International airport facilities. The purpose of this new design is to show that the generation of efficient and conflict-free aircraft arrival plans at the runway does not guarantee an operationally acceptable arrival plan upstream from the runway; information encompassing the entire arrival airspace must be used in order to create an acceptable aircraft arrival plan. This new design retains previously available functions but adds necessary representations of controller preferences and workload and operationally required amounts of extra separation, and integrates aircraft conflict resolution. As a result, the Spatial Constraint Satisfaction algorithm produces an optimized aircraft arrival plan that is more acceptable in terms of arrival procedures and air traffic controller workload. This paper discusses the current Air Traffic Control arrival planning procedures, previous work in this field, the design of the Spatial Constraint Satisfaction algorithm, and the results of recent evaluations of the algorithm.
Multivariable adaptive optimization of a continuous bioreactor with a constraint
Chang, Y.K.
1987-01-01
A single-variable on-line adaptive optimization algorithm using a bilevel forgetting factor was developed, along with a modified version to handle a quality constraint. Both algorithms were tested in simulation studies on a continuous bakers' yeast culture for optimization speed and accuracy, reoptimization capability, and long-term operational stability. The algorithms were then extended to multivariable on-line adaptive optimization and tested in simulated optimization studies with and without a constraint on the residual ethanol concentration. The dilution rate (D) and the temperature (T) were manipulated to maximize the cellular productivity (DX). It took about 80 hours to optimize the culture, and the attained steady state was very close to the optimum. When tested with a large step change in the feed substrate concentration, it took 60 to 80 hours to drive and maintain the cellular productivity close to the new optimum value. Long-term operational stability was also tested. The multivariable algorithm was experimentally applied to an actual bakers' yeast culture; only unconstrained optimization was carried out. The optimization required 50 to 90 hours. The attained steady state was D = 0.301 1/hr, T = 32.8 C, and DX = 1.500 g/l/hr. A fast inferential optimization algorithm based on one of the fast-responding off-gas measurements, the carbon dioxide evolution rate (CER), was proposed. In simulation and experimental studies this new algorithm was 2 to 3 times faster.
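The dissertation's bilevel forgetting-factor scheme is not specified in the abstract, but the underlying adaptive-estimation machinery, recursive least squares with an exponential forgetting factor, can be sketched for a scalar parameter (all names and values below are illustrative):

```python
def rls_forgetting(samples, lam=0.9):
    """Scalar recursive least squares with forgetting factor lam:
    theta tracks a drifting parameter in y = theta * x + noise.
    A bilevel scheme, as in the abstract, would switch lam between a
    small 'tracking' value and a large 'converged' value."""
    theta, P = 0.0, 1000.0              # initial estimate and covariance
    for x, y in samples:
        k = P * x / (lam + x * P * x)   # gain
        theta += k * (y - theta * x)    # innovation update
        P = (P - k * x * P) / lam       # covariance inflated by forgetting
    return theta

# the parameter jumps from 2.0 to 5.0 halfway through; forgetting keeps
# the gain bounded away from zero so the estimate can follow the jump
data = [(1.0, 2.0)] * 20 + [(1.0, 5.0)] * 20
```

With lam = 1 (no forgetting) the gain decays to zero and the estimator would freeze near the old optimum, which is why on-line reoptimization needs lam < 1.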
On handling ephemeral resource constraints in evolutionary search.
Allmendinger, Richard; Knowles, Joshua
2013-01-01
We consider optimization problems where the set of solutions available for evaluation at any given time t during optimization is some subset of the feasible space. This model is appropriate to describe many closed-loop optimization settings (i.e., where physical processes or experiments are used to evaluate solutions) where, due to resource limitations, it may be impossible to evaluate particular solutions at particular times (despite the solutions being part of the feasible space). We call the constraints determining which solutions are non-evaluable ephemeral resource constraints (ERCs). In this paper, we investigate two specific types of ERC: one encodes periodic resource availabilities, the other models commitment constraints that make the evaluable part of the space a function of the evaluations conducted earlier. In an experimental study, both types of constraint are seen to impact the performance of an evolutionary algorithm significantly. To deal with the effects of the ERCs, we propose and test five different constraint-handling policies (adapted from those used to handle standard constraints), using a number of different test functions including a fitness landscape from a real closed-loop problem. We show that knowing information about the type of resource constraint in advance may be sufficient to select an effective policy for dealing with it, even when advance knowledge of the fitness landscape is limited.
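A toy version of the setting, assuming a (1+1) EA on OneMax with a periodic resource availability and a simple repair policy (both invented here for illustration; the paper tests five different policies), might look like:

```python
import random

def evaluable(x, t, period=10, duty=5):
    """Illustrative ephemeral resource constraint: solutions with bit 0
    set can only be evaluated during the first `duty` ticks of each
    period (e.g. an instrument that is online only part of the time)."""
    return x[0] == 0 or (t % period) < duty

def onemax_ea(n=20, budget=500, seed=0):
    """(1+1) EA on OneMax under the ERC above, with a 'repair' policy:
    non-evaluable offspring are made evaluable by clearing the
    constrained bit instead of being discarded."""
    rng = random.Random(seed)
    parent = [0] * n
    f_parent = 0
    for t in range(budget):
        # standard bitwise mutation with rate 1/n
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if not evaluable(child, t):
            child[0] = 0            # repair instead of wasting the tick
        f = sum(child)
        if f >= f_parent:
            parent, f_parent = child, f
    return f_parent
```

Alternative policies from the same family would be waiting (skip the tick), penalizing, or forcing re-sampling; the repair line is the only place the policy choice enters.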
Algorithm-Independent Framework for Verifying Integer Constraints
2007-11-02
TAL [7], Crary and Weirich [5]'s resource bound certification, Wang and Appel [16]'s safe garbage collection, and an attempt at making the whole PCC... K. Crary and S. Weirich. Resource bound certification. In Proc. 27th Annual ACM SIGPLAN-SIGACT Symp. on Principles of Programming Languages. ACM Press, 2000.
An Implicit Enumeration Algorithm with Binary-Valued Constraints.
1986-03-01
problems is the National Basketball Association (NBA) scheduling problem developed by Bean (1980), as discussed in detail in the Appendix. ... APPENDIX: The NBA Scheduling Problem. §A.1 Formulation ... §6.2.3 NBA Scheduling Problem: The last set of testing problems involves the NBA scheduling problem. A detailed description of
Unraveling Quantum Annealers using Classical Hardness
NASA Astrophysics Data System (ADS)
Martin-Mayor, Victor; Hen, Itay
2015-10-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip.
Symbolic Constraint Maintenance Grid
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
Version 3.1 of Symbolic Constraint Maintenance Grid (SCMG) is a software system that provides a general conceptual framework for utilizing pre-existing programming techniques to perform symbolic transformations of data. SCMG also provides a language (and an associated communication method and protocol) for representing constraints on the original non-symbolic data. SCMG provides a facility for exchanging information between numeric and symbolic components without knowing the details of the components themselves. In essence, it integrates symbolic software tools (for diagnosis, prognosis, and planning) with non-artificial-intelligence software. SCMG executes a process of symbolic summarization and monitoring of continuous time series data that are being abstractly represented as symbolic templates of information exchange. This summarization process enables such symbolic-reasoning computing systems as artificial-intelligence planning systems to evaluate the significance and effects of channels of data more efficiently than would otherwise be possible. As a result of the increased efficiency in representation, reasoning software can monitor more channels and is thus able to perform monitoring and control functions more effectively.
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares the merits and overcomes the weaknesses of both the decomposition and search approaches.
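For contrast with the structure-exploiting omega-CDBT (whose details the abstract does not give), the plain chronological backtracking it builds on can be sketched for a binary CSP as:

```python
def consistent(var, value, assignment, constraints):
    """Check value for var against every constraint whose other
    endpoint is already assigned. constraints: {(x, y): predicate}."""
    for (a, b), pred in constraints.items():
        if a == var and b in assignment and not pred(value, assignment[b]):
            return False
        if b == var and a in assignment and not pred(assignment[a], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    """Chronological backtracking: extend the partial assignment one
    variable at a time, undoing the choice on failure. Returns a full
    assignment dict, or None if the CSP is unsatisfiable."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]     # undo and try the next value
    return None
```

Decomposition methods and omega-CDBT improve on this worst-case-exponential search by exploiting the shape of the constraint graph (e.g. bounded tree width), which the sketch above ignores entirely.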
Multilevel Algorithms for Nonlinear Optimization
1994-06-01
NASA Contractor Report 194940; ICASE Report No. 94-53; AD-A284 318. Multilevel Algorithms for Nonlinear Optimization. Natalia Alexandrov, ICASE, Mail Stop 132C. ABSTRACT: Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that
Evaluation of Open-Source Hard Real Time Software Packages
NASA Technical Reports Server (NTRS)
Mattei, Nicholas S.
2004-01-01
Reliable software is, at times, hard to find. No piece of software can be guaranteed to work in every situation that may arise during its use here at Glenn Research Center or in space. The job of the Software Assurance (SA) group in the Risk Management Office is to rigorously test the software in an effort to ensure it matches the contract specifications. In some cases the SA team also researches new alternatives for selected software packages. This testing and research is an integral part of the department of Safety and Mission Assurance. Real-time operation, in reference to a computer system, is a particular style of handling the timing and manner in which inputs and outputs are processed. A real-time system executes these commands and the appropriate processing within a defined timing constraint. Within this definition there are two further classifications of real-time systems: hard and soft. A soft real-time system is one in which, if the particular timing constraints are not rigidly met, there will be no critical results. On the other hand, a hard real-time system is one in which, if the timing constraints are not met, the results could be catastrophic. An example of a soft real-time system is a DVD decoder: if a particular piece of data from the input is not decoded and displayed on the screen at exactly the correct moment, nothing critical will come of it; the user may not even notice it. However, a hard real-time system is needed to control the timing of fuel injections or steering on the Space Shuttle; a delay of even a fraction of a second could be catastrophic in such a complex system. The current real-time system employed by most NASA projects is Wind River's VxWorks operating system. This is a proprietary operating system that can be configured to work with many of NASA's needs and it provides very accurate and reliable hard real-time performance. The downside is that, since it is a proprietary operating system, it is also costly to implement. The prospect of
Hard Work and Hard Data: Getting Our Message Out.
ERIC Educational Resources Information Center
Glau, Gregory R.
Unless questions about student performance and student retention can be answered and unless educators are proactive in finding and publicizing such information, basic writing programs cannot determine if what they are doing is working. Hard data, especially from underrepresented groups, is needed to support these programs. At Arizona State…
Structure and rigidity in maximally random jammed packings of hard particles
NASA Astrophysics Data System (ADS)
Atkinson, Steven Donald
Packings of hard particles have served as a powerful yet simple model for a wide variety of physical systems. One particularly interesting subset of these packings are so-called maximally random jammed (MRJ) packings, which constitute the most disordered packings that exist subject to the constraint of jamming (mechanical stability). In this dissertation, we first exploit recently discovered sequential linear programming (SLP) techniques to uncover previously unknown possibilities for MRJ packings of two- and three-dimensional hard disks and spheres, respectively. We then turn our focus away from the limit of jamming and identify structural signatures, accompanying various compression processes towards jammed states, that are indicative of an incipient rigid structure. In Chapter 2, we utilize the Torquato-Jiao SLP algorithm to construct MRJ packings of equal-sized spheres in three dimensions that differ substantially and qualitatively from previous putative MRJ states. We turn to two dimensions in Chapter 3 and establish the existence of highly disordered, jammed packings of equal-sized disks that were previously thought not to exist. We discuss the implications that these findings have for our understanding of disorder in packing problems. In Chapter 4, we utilize a novel SLP algorithm we call the "pop test" to scrutinize the conjectured link between jamming and hyperuniformity. We uncover deficiencies in standard protocols' ability to construct truly jammed states, accompanied by a correlated deficiency in exact hyperuniform behavior, suggesting that precise jamming is a particularly subtle matter in probing this connection. In Chapter 5, we consider the direct correlation function as a means of identifying static signatures of jamming as we compress packings towards both ordered and disordered jammed states, with particular attention paid to the growing suppression of long-ranged density fluctuations.
Magnetic levitation for hard superconductors
Kordyuk, A.A.
1998-01-01
An approach for calculating the interaction between a hard superconductor and a permanent magnet in the field-cooled case is proposed. Exact solutions were obtained for a point magnetic dipole over a flat, ideally hard superconductor. We have shown that such an approach is adaptable to a wide practical range of melt-textured high-temperature superconductor systems with magnetic levitation. In this case, the energy losses can be calculated from the alternating magnetic field distribution on the superconducting sample surface. © 1998 American Institute of Physics.
Future hard disk drive systems
NASA Astrophysics Data System (ADS)
Wood, Roger
2009-03-01
This paper briefly reviews the evolution of today's hard disk drive with the additional intention of orienting the reader to the overall mechanical and electrical architecture. The modern hard disk drive is a miracle of storage capacity and function together with remarkable economy of design. This paper presents a personal view of future customer requirements and the anticipated design evolution of the components. There are critical decisions and great challenges ahead for the key technologies of heads, media, head-disk interface, mechanics, and electronics.
A global approach to kinematic path planning to robots with holonomic and nonholonomic constraints
NASA Technical Reports Server (NTRS)
Divelbiss, Adam; Seereeram, Sanjeev; Wen, John T.
1993-01-01
Robots in applications may be subject to holonomic or nonholonomic constraints. Examples of holonomic constraints include a manipulator constrained through the contact with the environment, e.g., inserting a part, turning a crank, etc., and multiple manipulators constrained through a common payload. Examples of nonholonomic constraints include no-slip constraints on mobile robot wheels, local normal rotation constraints for soft finger and rolling contacts in grasping, and conservation of angular momentum of in-orbit space robots. The above examples all involve equality constraints; in applications, there are usually additional inequality constraints such as robot joint limits, self collision and environment collision avoidance constraints, steering angle constraints in mobile robots, etc. The problem of finding a kinematically feasible path that satisfies a given set of holonomic and nonholonomic constraints, of both equality and inequality types is addressed. The path planning problem is first posed as a finite time nonlinear control problem. This problem is subsequently transformed to a static root finding problem in an augmented space which can then be iteratively solved. The algorithm has shown promising results in planning feasible paths for redundant arms satisfying Cartesian path following and goal endpoint specifications, and mobile vehicles with multiple trailers. In contrast to local approaches, this algorithm is less prone to problems such as singularities and local minima.
Toward a theory of constraints.
Breunlin, D C
1999-06-01
Grounded in the cybernetic concept of negative explanation, the theory of constraints examines how human systems are kept from solving problems. To identify constraints, therapists must know where to look for them and what to look for. The theory proposes that constraints exist among the levels of a biopsychosocial system, which include biology, person, relationship, family, community, and society. The six metaframeworks of organization, sequences, mind, development, gender, and culture assist constraint identification. Combining the levels and metaframeworks creates a web of constraints, the complexity of which determines how difficult it will be to solve a given problem. The theory of constraints offers an integrative and pragmatic approach to therapy while simultaneously honoring the complexity of human systems.
Dynamic indentation hardness of materials
NASA Astrophysics Data System (ADS)
Koeppel, Brian James
Indentation hardness is one of the simplest and most commonly used measures for quickly characterizing material response under static loads. Hardness may mean resistance to cutting to a machinist, resistance to wear to a tribologist, or a measure of flow stress to a design engineer. In this simple technique, a predetermined force is applied to an indenter for 5-30 seconds causing it to penetrate a specimen. By measuring the load and the indentation size, a hardness value is determined. However, the rate of deformation during indenter penetration is of the order of 10⁻⁴ s⁻¹. In most practical applications, such as high speed machining or impact, material deforms at strain rates in excess of 10³-10⁵ s⁻¹. At such high rates, it is well established that the plastic behavior of materials is considerably different from their static counterpart. For example, materials exhibit an increase in their yield stress, flow stress, fracture stress, and fracture toughness at high strain rates. Hence, the use of static hardness as an indicator of material response under dynamic loads may not be appropriate. Accordingly, a simple dynamic indentation hardness tester is developed for characterizing materials at strain rates similar to those encountered in realistic situations. The experimental technique uses elastic stress wave propagation phenomena in a slender rod. The technique is designed to deliver a single indentation load of 100-200 μs duration. Similar to static measurements, the dynamic hardness is determined from the measured load and indentation size. Hardness measurements on a range of metals have revealed that the dynamic hardness is consistently greater than the static hardness. The increase in hardness is strongly dependent on the crystal structure of the material. The observed trends in hardness are also found to be consistent with the yield and flow stresses of these materials under uniaxial compression. Therefore, it is suggested that the
Placement with Symmetry Constraints for Analog IC Layout Design Based on Tree Representation
NASA Astrophysics Data System (ADS)
Hirakawa, Natsumi; Fujiyoshi, Kunihiro
Symmetry constraints require that given cells be placed symmetrically in the design of analog ICs. We use an O-tree to represent placements and propose a decoding algorithm which can obtain one of the minimum placements satisfying the constraints. The decoding algorithm uses linear programming, which is too time-consuming. Therefore, we propose a graph-based method to recognize when no placement exists that satisfies both the given symmetry and O-tree constraints, and apply this method before resorting to linear programming. The effectiveness of the proposed method was shown by computational experiments.
ϑ-SHAKE: An extension to SHAKE for the explicit treatment of angular constraints
NASA Astrophysics Data System (ADS)
Gonnet, Pedro; Walther, Jens H.; Koumoutsakos, Petros
2009-03-01
This paper presents ϑ-SHAKE, an extension to SHAKE, an algorithm for the resolution of holonomic constraints in molecular dynamics simulations, which allows for the explicit treatment of angular constraints. We show that this treatment is more efficient than the use of fictitious bonds, significantly reducing the overlap between the individual constraints and thus accelerating convergence. The new algorithm is compared with SHAKE, M-SHAKE, the matrix-based approach described by Ciccotti and Ryckaert and P-SHAKE for rigid water and octane.
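The core of any SHAKE-style scheme is the iterative enforcement of holonomic constraints until all of them are satisfied to a tolerance. As a rough illustration of that idea only (not the ϑ-SHAKE algorithm itself, which resolves constraints via Lagrange multipliers against reference positions), the following sketch repeatedly projects bonded pairs back to their target distances; the function name and the equal-mass, 2D setting are illustrative assumptions:

```python
def shake(positions, bonds, tol=1e-10, max_sweeps=500):
    """SHAKE-style iterative constraint projection in 2D (equal masses):
    sweep over the distance constraints, moving both endpoints of each
    bond symmetrically until every bond length matches its target."""
    pos = [list(p) for p in positions]
    for _ in range(max_sweeps):
        worst = 0.0
        for i, j, d0 in bonds:
            dx = [pos[j][k] - pos[i][k] for k in range(2)]
            d = (dx[0] ** 2 + dx[1] ** 2) ** 0.5
            err = d - d0
            worst = max(worst, abs(err))
            for k in range(2):
                # Split the correction evenly between the two endpoints.
                corr = 0.5 * err * dx[k] / d
                pos[i][k] += corr
                pos[j][k] -= corr
        if worst < tol:
            break
    return pos
```

Because each projection disturbs neighboring constraints, the sweep must be iterated; the paper's point is that treating angles explicitly (rather than via fictitious bonds) reduces exactly this kind of overlap between constraints.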
A Stochastic Approach to Diffeomorphic Point Set Registration with Landmark Constraints.
Kolesov, Ivan; Lee, Jehoon; Sharp, Gregory; Vela, Patricio; Tannenbaum, Allen
2016-02-01
This work presents a deformable point set registration algorithm that seeks an optimal set of radial basis functions to describe the registration. A novel, global optimization approach is introduced composed of simulated annealing with a particle filter based generator function to perform the registration. It is shown how constraints can be incorporated into this framework. A constraint on the deformation is enforced whose role is to ensure physically meaningful fields (i.e., invertible). Further, examples in which landmark constraints serve to guide the registration are shown. Results on 2D and 3D data demonstrate the algorithm's robustness to noise and missing information.
Relative constraints and evolution
NASA Astrophysics Data System (ADS)
Ochoa, Juan G. Diaz
2014-03-01
Several mathematical models of evolving systems assume that changes in the micro-states are constrained to the search for an optimal value in a local or global objective function. However, the concept of evolution requires continuous change in the environment and species, making it difficult to define absolute optimal values in objective functions. In this paper, we define constraints that are not absolute but relative to local micro-states, introducing a rupture in the invariance of the phase space of the system. This conceptual basis is useful for defining alternative mathematical models of biological (or, in general, complex) evolving systems. We illustrate this concept with a modified Ising model, which can be useful for understanding and modeling problems like the somatic evolution of cancer.
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Wood, David; Sorensen, Stephen E.
1996-12-01
This paper discusses automated scheduling as it applies to complex domains such as factories, transportation, and communications systems. The window-constrained-packing problem is introduced as an ideal model of the scheduling trade-offs. Specific algorithms are compared in terms of simplicity, speed, and accuracy. In particular, dispatch, look-ahead, and genetic algorithms are statistically compared on randomly generated job sets. The conclusion is that dispatch methods are fast and fairly accurate, while modern algorithms, such as genetic algorithms and simulated annealing, have excessive run times and are too complex to be practical.
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Neural constraints on learning.
Sadtler, Patrick T; Quick, Kristin M; Golub, Matthew D; Chase, Steven M; Ryu, Stephen I; Tyler-Kabara, Elizabeth C; Yu, Byron M; Batista, Aaron P
2014-08-28
Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already
Neural constraints on learning
Sadtler, Patrick T.; Quick, Kristin M.; Golub, Matthew D.; Chase, Steven M.; Ryu, Stephen I.; Tyler-Kabara, Elizabeth C.; Yu, Byron M.; Batista, Aaron P.
2014-01-01
Motor, sensory, and cognitive learning require networks of neurons to generate new activity patterns. Because some behaviors are easier to learn than others [1,2], we wondered if some neural activity patterns are easier to generate than others. We asked whether the existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define the constraint. We employed a closed-loop intracortical brain-computer interface (BCI) learning paradigm in which Rhesus monkeys controlled a computer cursor by modulating neural activity patterns in primary motor cortex. Using the BCI paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. These patterns comprise a low-dimensional space (termed the intrinsic manifold, or IM) within the high-dimensional neural firing rate space. They presumably reflect constraints imposed by the underlying neural circuitry. We found that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the IM. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the IM. This result suggests that the existing structure of a network can shape learning. On the timescale of hours, it appears to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess [3,4]. PMID:25164754
Self-organization and clustering algorithms
NASA Technical Reports Server (NTRS)
Bezdek, James C.
1991-01-01
Kohonen's feature maps approach to clustering is often likened to the k or c-means clustering algorithms. Here, the author identifies some similarities and differences between the hard and fuzzy c-Means (HCM/FCM) or ISODATA algorithms and Kohonen's self-organizing approach. The author concludes that some differences are significant, but at the same time there may be some important unknown relationships between the two methodologies. Several avenues of research are proposed.
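The hard c-means (HCM) algorithm that the author contrasts with Kohonen's self-organizing maps alternates between hard nearest-centroid assignments and centroid updates. A minimal sketch of that alternation (the function name, random initialization, and Euclidean distance are illustrative choices, not details from the paper):

```python
import random

def hard_c_means(points, c, iters=100, seed=0):
    """Minimal hard c-means sketch: alternate hard nearest-centroid
    assignment with centroid recomputation until assignments stabilize."""
    rng = random.Random(seed)
    centroids = rng.sample(points, c)
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for p in points:
            # Hard (0/1) membership: each point belongs to exactly one cluster.
            j = min(range(c),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(p, centroids[k])))
            clusters[j].append(p)
        # Update step: move each centroid to its cluster mean.
        new = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centroids[k]
               for k, cl in enumerate(clusters)]
        if new == centroids:  # assignments (and hence means) have stabilized
            break
        centroids = new
    return centroids
```

Fuzzy c-means (FCM) replaces the 0/1 membership above with graded memberships, which is the main axis of comparison with Kohonen's approach in the paper.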
Numerical prediction of microstructure and hardness in multicycle simulations
NASA Astrophysics Data System (ADS)
Oddy, A. S.; McDill, J. M. J.
1996-06-01
Thermal-microstructural predictions are made and compared to physical simulations of heat-affected zones in multipass and weaved welds. The microstructural prediction algorithm includes reaustenitization kinetics, grain growth, austenite decomposition kinetics, hardness, and tempering. Microstructural simulation of weaved welds requires that the algorithm include transient reaustenitization, austenite decomposition for arbitrary thermal cycles including during reheating, and tempering. Material properties for each of these phenomena are taken from the best available literature. The numerical predictions are compared with the results of physical simulations made at the Metals Technology Laboratory, CANMET, on a Gleeble 1500 simulator. Thermal histories used in the physical simulations included single-pass welds, isothermal tempering, two-cycle, and three-cycle welds. The two- and three-cycle welds include temper-bead and weaved-weld simulations. A recurring theme in the analysis is the significant variation found in the material properties for the same grade of steel. This affected all the material properties used including those governing reaustenitization, austenite grain growth, austenite decomposition, and hardness. Hardness measurements taken from the literature show a variation of ±5 to 30 HV on the same sample. Alloy differences within the allowable range also led to hardness variations of ±30 HV for the heat-affected zone of multipass welds. The predicted hardnesses agree extremely well with those taken from the physical simulations. Some differences due to problems with the austenite decomposition properties were noted in that bainite formation was predicted to occur somewhat more rapidly than was found experimentally. Reaustenitization values predicted during the rapid excursions to intercritical temperatures were also in good qualitative agreement with those measured experimentally.
ERIC Educational Resources Information Center
Atwell, Nancie
2003-01-01
Writers thrive when they are motivated to work hard, have regular opportunities to practice and reflect, and benefit from a knowledgeable teacher who knows writing. Student feedback to lessons during writing workshop helped guide Nancie Atwell in her quest to provide the richest and most efficient path to better writing.
Playing the Numbers: Hard Choices
ERIC Educational Resources Information Center
Doyle, William R.
2009-01-01
Stateline.org recently called this recession the worst in 50 years for state budgets. As has been the case in past economic downturns, higher education looks to be particularly hard hit. Funds from the American Recovery and Relief Act may have postponed some of the difficulty for many colleges and universities, but the outlook for public higher…
Metrics for Hard Goods Merchandising.
ERIC Educational Resources Information Center
Cooper, Gloria S., Ed.; Magisos, Joel H., Ed.
Designed to meet the job-related metric measurement needs of students interested in hard goods merchandising, this instructional package is one of five for the marketing and distribution cluster, part of a set of 55 packages for metric instruction in different occupations. The package is intended for students who already know the occupational…
2017-02-02
NASA Mars Reconnaissance Orbiter observed a small portion of a dark crater floor in the Tyrrhena Terra region of Mars. This is largely ancient hard bedrock that has been cratered by numerous impacts over the eons. http://photojournal.jpl.nasa.gov/catalog/PIA11179
Highly parallel consistent labeling algorithm suitable for optoelectronic implementation.
Marsden, G C; Kiamilev, F; Esener, S; Lee, S H
1991-01-10
Constraint satisfaction problems require a search through a large set of possibilities. Consistent labeling is a method by which search spaces can be drastically reduced. We present a highly parallel consistent labeling algorithm, which achieves strong k-consistency for any value k and which can include higher-order constraints. The algorithm uses vector outer product, matrix summation, and matrix intersection operations. These operations require local computation with global communication and, therefore, are well suited to an optoelectronic implementation.
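As a loose illustration of the consistency filtering the abstract describes, the sketch below enforces pairwise (arc) consistency by repeatedly pruning labels that have no compatible partner. The data layout and function names are assumptions for illustration; the paper's algorithm achieves the stronger property of k-consistency using parallel matrix operations rather than this sequential loop:

```python
def arc_consistent(domains, compat):
    """Prune label sets to pairwise (arc) consistency: a label for u
    survives only if some label of v is compatible with it.
    domains: {var: set_of_labels}; compat: {(u, v): predicate(a, b)}."""
    changed = True
    while changed:
        changed = False
        for (u, v), ok in compat.items():
            # Keep a label for u only if v offers at least one support.
            supported = {a for a in domains[u]
                         if any(ok(a, b) for b in domains[v])}
            if supported != domains[u]:
                domains[u] = supported
                changed = True
    return domains
```

Even this weak form of consistency can eliminate large parts of the search space before any backtracking begins, which is the motivation for computing stronger consistency in parallel.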
Naeem, Muhammad; Pareek, Udit; Lee, Daniel C.; Anpalagan, Alagan
2013-01-01
Due to the rapid increase in the usage and demand of wireless sensor networks (WSN), the limited frequency spectrum available for WSN applications will be extremely crowded in the near future. More sensor devices also mean more recharging/replacement of batteries, which will cause significant impact on the global carbon footprint. In this paper, we propose a relay-assisted cognitive radio sensor network (CRSN) that allocates communication resources in an environmentally friendly manner. We use shared band amplify and forward relaying for cooperative communication in the proposed CRSN. We present a multi-objective optimization architecture for resource allocation in a green cooperative cognitive radio sensor network (GC-CRSN). The proposed multi-objective framework jointly performs relay assignment and power allocation in GC-CRSN, while optimizing two conflicting objectives. The first objective is to maximize the total throughput, and the second objective is to minimize the total transmission power of CRSN. The proposed relay assignment and power allocation problem is a non-convex mixed-integer non-linear optimization problem (NC-MINLP), which is generally non-deterministic polynomial-time (NP)-hard. We introduce a hybrid heuristic algorithm for this problem. The hybrid heuristic includes an estimation-of-distribution algorithm (EDA) for performing power allocation and iterative greedy schemes for constraint satisfaction and relay assignment. We analyze the throughput and power consumption tradeoff in GC-CRSN. A detailed analysis of the performance of the proposed algorithm is presented with the simulation results. PMID:23584119
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-01-01
Integration of production planning and scheduling is a class of problems commonly found in manufacturing industry. This class of problems, associated with precedence constraints, has been previously modeled and optimized by the authors; it requires a multidimensional optimization performed simultaneously: what to make, how many to make, where to make, and in what order. It is a combinatorial, NP-hard problem, for which no polynomial-time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of a Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution, a GA with new features in chromosome encoding, crossover, mutation, selection, and algorithm structure is developed herein. With the proposed structure, the proposed GA is able to "learn" from its experience. Robustness of the proposed GA is demonstrated on a complex numerical example in which its performance is compared with those of three commercial optimization solvers.
A Procedure for Empirical Initialization of Adaptive Testing Algorithms.
ERIC Educational Resources Information Center
van der Linden, Wim J.
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
Simulation of Hard Shadows on Large Spherical Terrains
NASA Astrophysics Data System (ADS)
Aslandere, Turgay; Flatken, Markus; Gerndt, Andreas
2016-12-01
Real-time rendering of high precision shadows from digital terrain models is a challenging task, especially when interactivity is targeted and level-of-detail data structures are utilized to tackle huge amounts of data. In this paper, we present a real-time rendering approach for the computation of hard shadows using large-scale digital terrain data obtained by satellite imagery. Our approach is based on an extended horizon mapping algorithm that avoids costly pre-computations and ensures high accuracy. This algorithm is further developed to handle large data. The proposed algorithms take the surface curvature of large spherical bodies into account during the computation. The performance issues are discussed and the results are presented. The generated images can be exploited in 3D research and aerospace-related areas.
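Horizon mapping decides shadowing by comparing each terrain sample's horizon angle with the sun's elevation. A minimal 1D, flat-terrain sketch of that test (ignoring the spherical-body curvature handled by the paper's algorithm; the function name, the sun-to-the-left convention, and the brute-force scan are illustrative assumptions):

```python
import math

def in_shadow(heights, dx, sun_elevation):
    """Horizon-mapping sketch on a 1D height profile (sun to the left):
    a sample is shadowed when the steepest elevation angle toward any
    sample on its sunward side exceeds the sun's elevation angle."""
    shadowed = []
    for i in range(len(heights)):
        # Horizon angle: steepest angle to any sample between here and the sun.
        horizon = max((math.atan2(heights[j] - heights[i], (i - j) * dx)
                       for j in range(i)), default=-math.pi / 2)
        shadowed.append(horizon > sun_elevation)
    return shadowed
```

A precomputed horizon map stores these angles per sample so the per-frame test reduces to a single comparison; the paper's contribution is avoiding that costly precomputation while handling curved, large-scale terrain.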
Optimal filter design subject to output sidelobe constraints
NASA Technical Reports Server (NTRS)
Fortmann, T. E.; Athans, M.
1972-01-01
The design of filters for detection and estimation in radar and communications systems is considered, with inequality constraints on the maximum output sidelobe levels. A constrained optimization problem in Hilbert space is formulated, incorporating the sidelobe constraints via a partial ordering of continuous functions. Generalized versions (in Hilbert space) of the Kuhn-Tucker and Duality Theorems allow the reduction of this problem to an unconstrained one in the dual space of regular Borel measures. A convergent algorithm is presented for computational solution of the dual problem.
Improvement of MEM-deconvolution by an additional constraint
NASA Astrophysics Data System (ADS)
Reiter, J.; Pfleiderer, J.
1986-09-01
An attempt is made to improve existing versions of the maximum entropy method (MEM) and their understanding. Additional constraints are discussed, especially the T-statistic which can significantly reduce the correlation between residuals and model. An implementation of the T constraint into MEM requires a new numerical algorithm, which is made to work most efficiently on modern vector-processing computers. The entropy functional is derived from simple mathematical assumptions. The new MEM version is tested with radio data of NGC 6946 and optical data from M 87.
Treatment of inequality constraints in power system state estimation
Clements, K.A.; Davis, P.W.; Frey, K.D.
1995-05-01
A new formulation of the power system state estimation problem and a new solution technique are presented. The formulation allows for inequality constraints such as Var limits on generators and transformer tap ratio limits. In addition, unmeasured loads can be modeled as unknown but bounded quantities. The solution technique is an interior point method that uses logarithmic barrier functions to treat the inequality constraints. The authors describe computational issues arising in the implementation of the algorithm. Numerical results are given for systems ranging in size from six to 118 buses.
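The logarithmic-barrier treatment of inequality constraints can be illustrated on a one-dimensional toy problem: minimize the barrier-augmented objective for a decreasing sequence of barrier weights, so the iterates approach the constrained optimum from the strict interior. This is a generic sketch of the barrier idea only, not the paper's state estimation algorithm; the scalar variable, convex objective, bracketing interval, and ternary-search inner solver are all assumptions for illustration:

```python
import math

def barrier_minimize_1d(f, g, lo, hi, mu0=1.0, shrink=0.5, rounds=30, iters=80):
    """Minimize f(x) subject to g(x) <= 0 on [lo, hi] via a logarithmic
    barrier: for decreasing weights mu, minimize the convex barrier
    objective f(x) - mu*log(-g(x)) by ternary search."""
    x = (lo + hi) / 2
    mu = mu0
    for _ in range(rounds):
        def B(t):
            # Barrier objective: grows without bound as g(t) -> 0 from below.
            return f(t) - mu * math.log(-g(t))
        a, b = lo, hi
        for _ in range(iters):
            m1, m2 = a + (b - a) / 3, b - (b - a) / 3
            if B(m1) < B(m2):
                b = m2  # minimizer lies left of m2
            else:
                a = m1  # minimizer lies right of m1
        x = (a + b) / 2
        mu *= shrink  # tighten the barrier
    return x

# Toy problem: minimize (x - 3)^2 subject to x - 1 <= 0; the optimum is x = 1.
x_star = barrier_minimize_1d(lambda x: (x - 3) ** 2,
                             lambda x: x - 1,
                             lo=-5.0, hi=1.0 - 1e-9)
```

As mu shrinks, the barrier term's influence fades and the minimizer of the augmented objective converges to the constrained optimum on the boundary, which is the mechanism interior point methods exploit at scale.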
Hard processes in hadronic interactions
Satz, H. |; Wang, X.N.
1995-07-01
Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason, and in fact its original trigger, is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes to be considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour it leads to at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time to work on it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley.
Improving the Held and Karp Approach with Constraint Programming
NASA Astrophysics Data System (ADS)
Benchimol, Pascal; Régin, Jean-Charles; Rousseau, Louis-Martin; Rueher, Michel; van Hoeve, Willem-Jan
Held and Karp proposed, in the early 1970s, a relaxation for the Traveling Salesman Problem (TSP) as well as a branch-and-bound procedure that can solve small to modest-size instances to optimality [4, 5]. It has been shown that the Held-Karp relaxation produces very tight bounds in practice, and this relaxation is therefore applied in TSP solvers such as Concorde [1]. In this short paper we show that the Held-Karp approach can benefit from well-known techniques in Constraint Programming (CP) such as domain filtering and constraint propagation. Namely, we show that filtering algorithms developed for the weighted spanning tree constraint [3, 8] can be adapted to the context of the Held and Karp procedure. In addition to the adaptation of existing algorithms, we introduce a special-purpose filtering algorithm based on the underlying mechanisms used in Prim's algorithm [7]. Finally, we explore two different branching schemes to close the integrality gap. Our initial experimental results indicate that the addition of the CP techniques to the Held-Karp method can be very effective.
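The Held-Karp relaxation is built on 1-trees: a minimum spanning tree over all cities but one, plus the two cheapest edges at the remaining city, lower-bounds the length of any tour (every tour is a 1-tree). A minimal sketch of that bound, without the Lagrangian node-penalty updates or the CP filtering the paper adds (the function name and the choice of node 0 as the special node are illustrative):

```python
def one_tree_bound(dist):
    """Held-Karp 1-tree lower bound for the symmetric TSP: the weight of a
    minimum spanning tree on nodes {1..n-1} plus the two cheapest edges
    incident to node 0 lower-bounds every tour. Assumes n >= 3."""
    n = len(dist)
    # Prim's algorithm on the subgraph that excludes node 0.
    cost = 0.0
    best = {v: dist[1][v] for v in range(2, n)}  # cheapest link into the tree
    while best:
        v = min(best, key=best.get)
        cost += best.pop(v)
        for u in best:
            best[u] = min(best[u], dist[v][u])
    # Reconnect node 0 with its two cheapest incident edges.
    e = sorted(dist[0][v] for v in range(1, n))
    return cost + e[0] + e[1]
```

In the full Held-Karp procedure, node penalties are iteratively adjusted to push the 1-tree toward degree-2 at every node, tightening this bound; the paper's filtering algorithms prune edges that cannot appear in any improving 1-tree.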
Right ventricle segmentation with probability product kernel constraints.
Nambakhsh, Cyrus M S; Peters, Terry M; Islam, Ali; Ayed, Ismail Ben
2013-01-01
We propose a fast algorithm for 3D segmentation of the right ventricle (RV) in MRI using shape and appearance constraints based on probability product kernels (PPK). The proposed constraints remove the need for large, manually-segmented training sets and costly pose estimation (or registration) procedures, as is the case with existing algorithms. We report comprehensive experiments, which demonstrate that the proposed algorithm (i) requires only a single subject for training; and (ii) yields a performance that is not significantly affected by the choice of the training data. Our PPK constraints are non-linear (high-order) functionals, which are not directly amenable to standard optimizers. We split the problem into several surrogate-functional optimizations, each solved via an efficient convex relaxation that is amenable to parallel implementations. We further introduce a scale variable that we optimize with fast fixed-point computations, thereby achieving pose invariance in real time. Our parallelized implementation on a graphics processing unit (GPU) demonstrates that the proposed algorithm can yield a real-time solution for typical cardiac MRI volumes, with a speed-up of more than 20 times compared to the CPU version. We report comprehensive experimental validation over 400 volumes acquired from 20 subjects, and demonstrate that the obtained 3D surfaces correlate with independent manual delineations.
NASA Astrophysics Data System (ADS)
Cutbill, Adam; Wang, G. Gary
2016-01-01
Constraints are necessary in optimization problems to steer optimization algorithms away from solutions which are not feasible or practical. However, redundant constraints are often added, which needlessly complicate the problem's description. This article introduces a probabilistic method to identify redundant inequality constraints for black-box optimization problems. The method uses Jaccard similarity to find item groups where the occurrence of a single item implies the occurrence of all other items in the group. The remaining groups are then mined with association analysis. Furthermore, unnecessary constraints are classified as redundant owing to co-occurrence, implication or covering. These classifications are presented as rules (in readable text), to indicate the relationships among constraints. The algorithm is applied to mathematical problems and to the engineering design of a pressure vessel. It was found that the rules are informative and correct, based on the available samples. Limitations of the proposed methods are also discussed.
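A minimal sketch of the co-occurrence step described above: compute Jaccard similarity over the sets of samples at which each constraint is active, and group constraints whose similarity is 1.0 (the occurrence of one implies the occurrence of the others). The constraint names and activity sets are hypothetical, and the full method's association-analysis stage is omitted:

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| of two sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical activity records: for each constraint, the set of sample
# indices at which that constraint was violated (active).
activity = {
    "g1": {0, 2, 5, 7},
    "g2": {0, 2, 5, 7},   # always active together with g1
    "g3": {1, 3},
}

# Group constraints whose activity sets have Jaccard similarity 1.0.
groups = []
for name, hits in activity.items():
    for g in groups:
        if jaccard(activity[g[0]], hits) == 1.0:
            g.append(name)
            break
    else:
        groups.append([name])
print(groups)  # → [['g1', 'g2'], ['g3']]
```

Within each such group, all but one constraint is a candidate for removal as redundant by co-occurrence.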
Adaptive nonlinear optimization of the signal-to-noise ratio of an array subject to a constraint.
NASA Technical Reports Server (NTRS)
Winkler, L. P.; Schwartz, M.
1972-01-01
Investigation of a stochastic projected gradient algorithm which can be used to find a constrained optimum point for a concave or convex objective function subject to nonlinear constraints. When the constraint consists of only one linear equation, convergence to the constrained optimum value is proven, and bounds are obtained for the rate of convergence of the algorithm to the constrained optimum value. This algorithm is then applied to the nonlinear problem of automatically making an array of detectors form a beam in a desired direction in space when unknown interfering noise is present so as to maximize the signal-to-noise ratio subject to a constraint on the supergain ratio.
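The constrained case described here (a concave objective with a single linear equality) can be sketched with a deterministic projected-gradient loop; this toy is not the authors' stochastic algorithm, and the objective and constraint below are invented for the example:

```python
def project(x, a, b):
    """Orthogonal projection of x onto the hyperplane a·x = b."""
    s = (sum(ai * xi for ai, xi in zip(a, x)) - b) / sum(ai * ai for ai in a)
    return [xi - s * ai for ai, xi in zip(a, x)]

def projected_gradient(grad, x0, a, b, step=0.1, iters=200):
    """Gradient ascent followed by projection back onto the constraint."""
    x = project(x0, a, b)
    for _ in range(iters):
        g = grad(x)
        x = project([xi + step * gi for xi, gi in zip(x, g)], a, b)
    return x

# Maximize the concave f(x, y) = -(x^2 + y^2) subject to x + y = 2;
# the constrained optimum is (1, 1).
grad = lambda x: [-2 * x[0], -2 * x[1]]
x = projected_gradient(grad, [3.0, -1.0], a=[1.0, 1.0], b=2.0)
print(x)
```

In the stochastic setting of the paper, the exact gradient is replaced by a noisy estimate and the step size is decreased over time to obtain convergence.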
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.; Boris, J. P.; Oran, E. S.; Chandrasekhar, I.; Nagumo, M.
1989-12-01
We present a new modification of the SHAKE algorithm, MSHAKE, that maintains fixed distances in molecular dynamics simulations of polyatomic molecules. The MSHAKE algorithm, which is applied by modifying the leapfrog algorithm to include forces of constraint, computes an initial estimate of constraint forces, then iteratively corrects the constraint forces required to maintain the fixed distances. Thus MSHAKE should always converge more rapidly than SHAKE. Further, the explicit determination of the constraint forces at each timestep makes MSHAKE convenient for use in molecular dynamics simulations where bond stress is a significant dynamical quantity.
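MSHAKE itself is not reproduced here, but the underlying SHAKE-style iteration (iteratively correcting positions along the bond vector until the fixed distance is restored) can be sketched for a single two-particle constraint; the masses, tolerance, and sample coordinates below are illustrative:

```python
import math

def shake_pair(r1, r2, L, m1=1.0, m2=1.0, tol=1e-10, max_iter=100):
    """Iteratively correct two particle positions so their distance is L.

    SHAKE-style update for one rigid bond: displace the pair along the
    current bond vector, weighted by inverse masses, until the constraint
    |r1 - r2| = L is satisfied to within tol.
    """
    for _ in range(max_iter):
        d = [a - b for a, b in zip(r1, r2)]
        dist2 = sum(x * x for x in d)
        diff = dist2 - L * L
        if abs(diff) < tol:
            break
        # Lagrange-multiplier-style correction along the bond vector.
        g = diff / (2.0 * dist2 * (1.0 / m1 + 1.0 / m2))
        r1 = [a - g * x / m1 for a, x in zip(r1, d)]
        r2 = [b + g * x / m2 for b, x in zip(r2, d)]
    return r1, r2

# After an unconstrained integration step the bond has stretched:
r1, r2 = shake_pair([0.0, 0.0, 0.0], [1.3, 0.1, 0.0], L=1.0)
bond = math.dist(r1, r2)
print(bond)  # ≈ 1.0
```

MSHAKE's refinement is to seed this iteration with an explicit initial estimate of the constraint forces inside the leapfrog step, so fewer corrective iterations are needed.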
Low dose hard x-ray contact microscopy assisted by a photoelectric conversion layer
Gomella, Andrew; Martin, Eric W.; Lynch, Susanna K.; Wen, Han; Morgan, Nicole Y.
2013-04-15
Hard x-ray contact microscopy provides images of dense samples at resolutions of tens of nanometers. However, the required beam intensity can only be delivered by synchrotron sources. We report on the use of a gold photoelectric conversion layer to lower the exposure dose by a factor of 40 to 50, allowing hard x-ray contact microscopy to be performed with a compact x-ray tube. We demonstrate the method in imaging the transmission pattern of a type of hard x-ray grating that cannot be fitted into conventional x-ray microscopes due to its size and shape. Generally the method is easy to implement and can record images of samples in the hard x-ray region over a large area in a single exposure, without some of the geometric constraints associated with x-ray microscopes based on zone-plate or other magnifying optics.
On-line hardness assessment of cold-rolled motor lamination steels
NASA Astrophysics Data System (ADS)
Soghomonian, Z. S.; Beckley, P.; Moses, A. J.
1995-08-01
It is important to be aware of variations in the mechanical hardness of strip emerging from the strand anneal lines, which are used to anneal continuous steel strip during production. Lamination stamping, punching, and handling characteristics can depend on the hardness of the materials. A novel technique for the nondestructive determination of the hardness of nonoriented electrical steels has been developed. This technique exploits structure-sensitive magnetic parameters, which are measured continuously on-line in real time. The magnetic data produced are then processed through appropriate algorithms to provide an evaluation of the material's mechanical hardness. Variation of hardness along the length of coils can then be readily examined.
Nanomechanics of hard films on compliant substrates.
Reedy, Earl David, Jr.; Emerson, John Allen; Bahr, David F.; Moody, Neville Reid; Zhou, Xiao Wang; Hales, Lucas; Adams, David Price; Yeager, John; Nguyen, Thao D.; Corona, Edmundo; Kennedy, Marian S.; Cordill, Megan J.
2009-09-01
Development of flexible thin film systems for biomedical, homeland security and environmental sensing applications has increased dramatically in recent years [1,2,3,4]. These systems typically combine traditional semiconductor technology with new flexible substrates, allowing for both the high electron mobility of semiconductors and the flexibility of polymers. The devices have the ability to be easily integrated into components and show promise for advanced design concepts, ranging from innovative microelectronics to MEMS and NEMS devices. These devices often contain layers of thin polymer, ceramic and metallic films where differing properties can lead to large residual stresses [5]. As long as the films remain substrate-bonded, they may deform far beyond their freestanding counterparts. Once debonded, the substrate constraint disappears, leading to film failure: compressive stresses can lead to wrinkling, delamination, and buckling [6,7,8], while tensile stresses can lead to film fracture and decohesion [9,10,11]. In all cases, performance depends on film adhesion. Experimentally it is difficult to measure adhesion. It is often studied using tape [12], pull off [13,14,15], and peel tests [16,17]. More recent techniques for measuring adhesion include scratch testing [18,19,20,21], four point bending [22,23,24], indentation [25,26,27], spontaneous blisters [28,29] and stressed overlayers [7,26,30,31,32,33]. Nevertheless, sample design and test techniques must be tailored for each system. There is a large body of elastic thin film fracture and elastic contact mechanics solutions for elastic films on rigid substrates in the published literature [5,7,34,35,36]. More recent work has extended these solutions to films on compliant substrates and shows that increasing compliance markedly changes fracture energies compared with rigid elastic solution results [37,38]. However, the introduction of inelastic substrate response significantly complicates the problem [10,39,40]. As
Exploring fragment spaces under multiple physicochemical constraints
NASA Astrophysics Data System (ADS)
Pärn, Juri; Degen, Jörg; Rarey, Matthias
2007-06-01
We present a new algorithm for the enumeration of chemical fragment spaces under constraints. Fragment spaces consist of a set of molecular fragments and a set of rules that specifies how fragments can be combined. Although fragment spaces typically cover an infinite number of molecules, they can be enumerated provided that a physicochemical profile of the requested compounds is given. By using min-max ranges for a number of corresponding properties, our algorithm is able to enumerate all molecules that obey these properties. To speed up the calculation, the given ranges are used directly during the build-up process to guide the selection of fragments. Furthermore, a topology-based fragment filter is used to skip most of the redundant fragment combinations. We applied the algorithm to 40 different target classes. For each of these, we generated tailored fragment spaces from sets of known inhibitors and additionally derived ranges for several physicochemical properties. We characterized the target-specific fragment spaces and were able to enumerate the complete chemical subspaces for most of the targets.
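The build-up pruning idea can be sketched with a toy fragment space; the fragment weights, the single weight-range property, and the trivial combination rule are invented for illustration, and the paper's topology-based redundancy filter is omitted (so symmetric orderings are counted separately):

```python
# Hypothetical fragment library: (name, molecular weight). The rule set is
# reduced to "any fragment may be appended", and the only profile property
# is a min-max range on total molecular weight.
fragments = [("A", 60.0), ("B", 95.0), ("C", 120.0)]
MW_MIN, MW_MAX = 150.0, 250.0

def enumerate_compounds(prefix=(), weight=0.0):
    """Depth-first enumeration of fragment sequences within the range.

    The max bound is checked during build-up, so whole subtrees whose
    partial weight already exceeds MW_MAX are never explored.
    """
    results = []
    if MW_MIN <= weight <= MW_MAX:
        results.append(prefix)
    for name, w in fragments:
        if weight + w <= MW_MAX:          # range used to guide selection
            results += enumerate_compounds(prefix + (name,), weight + w)
    return results

hits = enumerate_compounds()
print(len(hits))
```

Because weights only grow as fragments are added, an upper range bound allows exact pruning; lower bounds can only filter finished compounds, which is why range-guided build-up pays off most for tight upper limits.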
Fixed Costs and Hours Constraints
ERIC Educational Resources Information Center
Johnson, William R.
2011-01-01
Hours constraints are typically identified by worker responses to questions asking whether they would prefer a job with more hours and more pay or fewer hours and less pay. Because jobs with different hours but the same rate of pay may be infeasible when there are fixed costs of employment or mandatory overtime premia, the constraint in those…
Market segmentation using perceived constraints
Jinhee Jun; Gerard Kyle; Andrew Mowen
2008-01-01
We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...
Thermodynamic Constraints Improve Metabolic Networks.
Krumholz, Elias W; Libourel, Igor G L
2017-08-08
In pursuit of establishing a realistic metabolic phenotypic space, the reversibility of reactions is thermodynamically constrained in modern metabolic networks. The reversibility constraints follow from heuristic thermodynamic poise approximations that take anticipated cellular metabolite concentration ranges into account. Because constraints reduce the feasible space, draft metabolic network reconstructions may need more extensive reconciliation, and a larger number of genes may become essential. Notwithstanding ubiquitous application, the effect of reversibility constraints on the predictive capabilities of metabolic networks has not been investigated in detail. Instead, work has focused on the implementation and validation of the thermodynamic poise calculation itself. With the advance of fast linear programming-based network reconciliation, the effects of reversibility constraints on network reconciliation and gene essentiality predictions have become feasible to study and are the subject of this work. Networks with thermodynamically informed reversibility constraints yielded better gene essentiality predictions than networks constrained with randomly shuffled constraints. Unconstrained networks predicted gene essentiality as accurately as thermodynamically constrained networks, but predicted substantially fewer essential genes. Networks that were reconciled with sequence similarity data and strongly enforced reversibility constraints outperformed all other networks. We conclude that metabolic network analysis confirmed the validity of the thermodynamic constraints, and that thermodynamic poise information is actionable during network reconciliation.
Credit Constraints for Higher Education
ERIC Educational Resources Information Center
Solis, Alex
2012-01-01
This paper exploits a natural experiment that produces exogenous variation on credit access to determine the effect on college enrollment. The paper assess how important are credit constraints to explain the gap in college enrollment by family income, and what would be the gap if credit constraints are eliminated. Progress in college and dropout…
Hard x-ray imaging polarimeter for PolariS
NASA Astrophysics Data System (ADS)
Hayashida, Kiyoshi; Kim, Juyong; Sadamoto, Masaaki; Yoshinaga, Keigo; Gunji, Shuichi; Mihara, Tatehiro; Kishimoto, Yuji; Kubo, Hidetoshi; Mizuno, Tsunefumi; Takahashi, Hiromitsu; Dotani, Tadayasu; Yonetoku, Daisuke; Nakamori, Takeshi; Yoneyama, Tomokage; Ikeyama, Yuki; Kamitsukasa, Fumiyoshi
2016-07-01
Hard X-ray imaging polarimeters are being developed for the X-ray/γ-ray polarimetry satellite PolariS. The imaging polarimeter is of the scattering type, in which anisotropy in the direction of Compton scattering is employed to measure hard X-ray (10-80 keV) polarization, and it is installed on the focal planes of hard X-ray telescopes. We have updated the design of the model so as to cover larger solid angles of scattering direction. We also examine the event selection algorithm to optimize the detection efficiency of recoiled electrons in plastic scintillators. We succeed in improving the efficiency by a factor of about 3-4 over the previous algorithm and criteria for 18-30 keV incidence. For 23 keV X-ray incidence, the recoiled electron energy is about 1 keV. We measured the efficiency to detect recoiled electrons in this case and found it to be about half of the theoretical limit. An improvement in this efficiency directly leads to an improvement in the detection efficiency; in other words, there is still room for improvement. We examine various processes in the detector and estimate that the major loss is primarily that of scintillation light in a plastic scintillator pillar with a very small cross section (2.68 mm squared) and a long length (40 mm). Nevertheless, the current model provides an MDP of 6% for 10 mCrab sources, which are the targets of PolariS.
Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids
Donev, A; Alder, B J; Garcia, A L
2008-02-26
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
Solution of the NP-hard total tardiness minimization problem in scheduling theory
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2007-06-01
The classical NP-hard (in the ordinary sense) problem of scheduling jobs in order to minimize the total tardiness for a single machine, 1||ΣT_j, is considered. An NP-hard instance of the problem is completely analyzed. A procedure for partitioning the initial set of jobs into subsets is proposed. Algorithms are constructed for finding an optimal schedule depending on the number of subsets. The complexity of the algorithms is O(n²Σp_j), where n is the number of jobs and p_j is the processing time of the j-th job (j = 1, 2, …, n).
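For concreteness, the objective 1||ΣT_j can be stated in code; the exponential brute force below is only a reference point for small instances, not the partitioning algorithm of the paper, and the job data are invented:

```python
import itertools

def total_tardiness(order, p, d):
    """Sum of tardiness T_j = max(0, C_j - d_j) for a given job order."""
    t, total = 0, 0
    for j in order:
        t += p[j]                      # completion time C_j
        total += max(0, t - d[j])      # tardiness of job j
    return total

def optimal_schedule(p, d):
    """Exact minimizer of 1||ΣT_j by enumeration (exponential; small n only)."""
    return min(itertools.permutations(range(len(p))),
               key=lambda order: total_tardiness(order, p, d))

p = [4, 2, 6, 3]   # processing times
d = [5, 3, 8, 6]   # due dates
best = optimal_schedule(p, d)
print(best, total_tardiness(best, p, d))
```

The pseudo-polynomial O(n²Σp_j) bound quoted above comes from dynamic programming over completion times, which the enumeration here deliberately does not attempt.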
NASA Astrophysics Data System (ADS)
Berkowitz, Max; Parr, Robert G.
1988-02-01
Hardness and softness kernels η(r,r') and s(r,r') are defined for the ground state of an atomic or molecular electronic system, and the previously defined local hardness and softness η(r) and s(r) and global hardness and softness η and S are obtained from them. The physical meaning of s(r), as a charge capacitance, is discussed (following Huheey and Politzer), and two alternative "hardness" indices are identified and briefly discussed.
Ultrasonic material hardness depth measurement
Good, M.S.; Schuster, G.J.; Skorpik, J.R.
1997-07-08
The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part. 12 figs.
Transpecific microsatellites for hard pines.
Shepherd, M.; Cross, M.; Maguire, L.; Dieters, J.; Williams, G.; Henry, J.
2002-04-01
Microsatellites are difficult to recover from large plant genomes, so cross-specific utilisation is an important source of markers. Fifty microsatellites were tested for cross-specific amplification and polymorphism in two New World hard pine species, slash pine (Pinus elliottii var. elliottii) and Caribbean pine (P. caribaea var. hondurensis). Twenty-nine (58%) markers amplified in both hard pine species, and 23 of these 29 were polymorphic. Soft pine (subgenus Strobus) microsatellite markers did amplify, but none were polymorphic. Pinus elliottii var. elliottii and P. caribaea var. hondurensis showed mutational changes in the flanking regions and the repeat motif that were informative for Pinus spp. phylogenetic relationships. Most allele length variation could be attributed to variability in repeat unit number. There was no evidence for ascertainment bias.
Velocity and energy distributions in microcanonical ensembles of hard spheres
NASA Astrophysics Data System (ADS)
Scalas, Enrico; Gabriel, Adrian T.; Martin, Edgar; Germano, Guido
2015-08-01
In a microcanonical ensemble (constant NVE, hard reflecting walls) and in a molecular dynamics ensemble (constant NVEPG, periodic boundary conditions) with a number N of smooth elastic hard spheres in a d-dimensional volume V having a total energy E, a total momentum P, and an overall center-of-mass position G, the individual velocity components, velocity moduli, and energies have transformed beta distributions with different arguments and shape parameters depending on d, N, E, the boundary conditions, and possible symmetries in the initial conditions. This can be shown by marginalizing the joint distribution of individual energies, which is a symmetric Dirichlet distribution. In the thermodynamic limit the beta distributions converge to gamma distributions with different arguments and shape or scale parameters, corresponding respectively to the Gaussian, i.e., Maxwell-Boltzmann, Maxwell, and Boltzmann or Boltzmann-Gibbs distribution. These analytical results agree with molecular dynamics and Monte Carlo simulations with different numbers of hard disks or spheres and hard reflecting walls or periodic boundary conditions. The agreement is perfect with our Monte Carlo algorithm, which acts only on velocities independently of positions, with the collision versor sampled uniformly on a unit half sphere in d dimensions, while slight deviations appear with our molecular dynamics simulations for the smallest values of N.
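The velocity-only Monte Carlo move described above, a hard-sphere exchange of the velocity components along a randomly sampled collision versor, can be sketched as follows (equal masses are assumed, and the sample velocities are invented; momentum and kinetic energy are conserved exactly by construction):

```python
import math, random

def collide(v1, v2, sigma):
    """Elastic hard-sphere collision of equal-mass particles: exchange the
    velocity components along the collision versor sigma (a unit vector)."""
    proj = sum((a - b) * s for a, b, s in zip(v1, v2, sigma))
    v1p = [a - proj * s for a, s in zip(v1, sigma)]
    v2p = [b + proj * s for b, s in zip(v2, sigma)]
    return v1p, v2p

def random_versor(d=3):
    """Unit vector sampled isotropically via normalized Gaussian components."""
    g = [random.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in g))
    return [x / n for x in g]

random.seed(1)
v1, v2 = [1.0, 0.0, 0.0], [-0.5, 0.2, 0.0]
v1p, v2p = collide(v1, v2, random_versor())

# Conserved quantities before and after the move:
before = sum(x * x for x in v1) + sum(x * x for x in v2)
after = sum(x * x for x in v1p) + sum(x * x for x in v2p)
print(before, after)
```

Repeating this move over random pairs drives the velocities toward the stationary distributions discussed in the abstract without ever touching positions.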
Microwave assisted hard rock cutting
Lindroth, David P.; Morrell, Roger J.; Blair, James R.
1991-01-01
An apparatus for the sequential fracturing and cutting of a subsurface volume of hard rock (102) in the strata (101) of a mining environment (100) by subjecting the volume of rock to a beam (25) of microwave energy to fracture the subsurface volume of rock by differential expansion, and then bringing the cutting edge (52) of a piece of conventional mining machinery (50) into contact with the fractured rock (102).
NASA Astrophysics Data System (ADS)
Giorgi, Marco
2005-06-01
For the next generation of High Energy Physics (HEP) experiments, silicon microstrip detectors that work in harsh radiation environments with excellent performance are necessary. Irradiation causes bulk and surface damage that modifies the electrical properties of the detector. Solutions such as AC-coupled strips, overhanging metal contacts, <100> crystal lattice orientation, low-resistivity n-bulk, and oxygenated substrates are studied for rad-hard detectors. The paper presents an overview of these technologies.
Sahoo, Pradyumna Kumar; Mandal, Palash Kumar; Ghosh, Saradindu
2014-01-01
Schwannomas are benign encapsulated perineural tumors. The head and neck region is the most common site. Intraoral origin is seen in only 1% of cases, tongue being the most common site; its location in the palate is rare. We report a case of hard-palate schwannoma with bony erosion which was immunohistochemically confirmed. The tumor was excised completely intraorally. After two months of follow-up, the defect was found to be completely covered with palatal mucosa. PMID:25298716
Williams, Ruth
2010-09-29
Skills for Health has launched a set of resources to help healthcare employers tackle hard-to-fill entry-level vacancies and provide sustainable employment for local unemployed people. The Sector Employability Toolkit aims to reduce recruitment and retention costs for entry-level posts, prepare people for employment through pre-job training programmes, and support employers to develop local partnerships to gain access to wider pools of candidates and funding streams.
Tests of Radar Rainfall Retrieval Algorithms
NASA Technical Reports Server (NTRS)
Durden, Stephen L.
1999-01-01
The NASA/JPL Airborne Rain Mapping Radar (ARMAR) operates at 14 GHz. ARMAR flew on the NASA DC-8 aircraft during the Tropical Ocean Global Atmosphere (TOGA) Coupled Ocean Atmosphere Response Experiment (COARE), collecting data in oceanic mesoscale convective systems, similar to those now being observed by the Tropical Rainfall Measuring Mission (TRMM) satellite, which includes a 14-GHz precipitation radar. Several algorithms for retrieving rain rate from downward-looking radars are in existence. These can be categorized as deterministic and stochastic. Deterministic algorithms use the path integrated attenuation (PIA), measured by the surface reference technique, as a constraint. One deterministic algorithm corrects the attenuation-rainfall (k-R) relation, while another corrects the reflectivity-rainfall (Z-R) relation. Stochastic algorithms apply an Extended Kalman Filter to the reflectivity profile. One employs radar reflectivity only; the other additionally uses the PIA. We find that the stochastic algorithm with PIA is the most robust algorithm with regard to incorrect assumptions about the drop-size distribution (DSD). The deterministic algorithm that uses the PIA to adjust the Z-R relation is also fairly robust and produces rain rates similar to the stochastic algorithm that uses the PIA. The deterministic algorithm that adjusts only the k-R relation and the stochastic radar-only algorithm are more sensitive to assumptions about the DSD. It is likely that they underestimate convective rainfall, especially if the DSD is erroneously assumed to be appropriate for stratiform rain conditions. The underestimation is illustrated in the diagram. The algorithm labeled D IS initially assumes a DSD that is appropriate for stratiform rain, while the rain is most likely convective. The PIA constraint causes the k-R relation to be adjusted, resulting in a much lower rain rate than the other algorithms. Additional information is contained in the original.
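The Z-R adjustment discussed above operates on power-law relations of the form Z = aR^b. A sketch of the inversion, using the classic Marshall-Palmer coefficients and, for contrast, a commonly used convective pair (both choices here are textbook defaults, not values from this study):

```python
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert a power-law Z-R relation Z = a * R**b.

    Z is in mm^6/m^3 and R in mm/h; a=200, b=1.6 are the classic
    Marshall-Palmer coefficients for stratiform rain. A deterministic
    PIA-based correction would adjust a (or b) per profile instead of
    fixing them a priori.
    """
    z = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)

# The same echo read with stratiform vs convective coefficients differs,
# illustrating the DSD sensitivity discussed above:
print(rain_rate_from_dbz(40.0))                  # Marshall-Palmer
print(rain_rate_from_dbz(40.0, a=300.0, b=1.4))  # a common convective choice
```

An algorithm that wrongly assumes a stratiform DSD in convective rain is applying the wrong (a, b) pair along the whole profile, which is exactly the bias the PIA constraint is used to correct.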
A Hybrid Constraint Representation and Reasoning Framework
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wan-Lin
2003-01-01
This paper introduces JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint reasoner with a run-time software environment. Attachments in JNET are constraints over arbitrary Java objects, which are defined using Java code, at runtime, with no changes to the JNET source code.
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
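A bare-bones sketch of the ABC search loop on an unconstrained toy objective; it keeps the employed/scout phases and greedy selection but omits the onlooker's fitness-proportional selection and all of the paper's portfolio constraints (the bounds, objective, and parameters below are invented):

```python
import random

def abc_minimize(f, lb, ub, n_bees=10, iters=60, limit=10, seed=0):
    """Bare-bones artificial bee colony for box-constrained minimization.

    Each bee perturbs one coordinate of its food source toward a random
    peer; a source that fails to improve `limit` times in a row is
    abandoned and replaced by a scout's random source.
    """
    rng = random.Random(seed)
    dim = len(lb)
    new_src = lambda: [rng.uniform(lb[k], ub[k]) for k in range(dim)]
    src = [new_src() for _ in range(n_bees)]
    fit = [f(x) for x in src]
    trials = [0] * n_bees
    for _ in range(iters):
        for i in range(n_bees):
            j = rng.randrange(n_bees)
            k = rng.randrange(dim)
            cand = src[i][:]
            cand[k] += rng.uniform(-1.0, 1.0) * (src[i][k] - src[j][k])
            cand[k] = min(max(cand[k], lb[k]), ub[k])   # keep inside the box
            fc = f(cand)
            if fc < fit[i]:                             # greedy selection
                src[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
                if trials[i] > limit:                   # scout phase
                    src[i] = new_src()
                    fit[i] = f(src[i])
                    trials[i] = 0
    best = min(range(n_bees), key=fit.__getitem__)
    return src[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = abc_minimize(sphere, lb=[-5.0, -5.0], ub=[5.0, 5.0])
print(x, fx)
```

The modification in the paper (MABC) adds repair/handling steps so that candidate solutions also satisfy cardinality and quantity constraints, which this unconstrained sketch does not attempt.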
Asynchronous Event-Driven Particle Algorithms
Donev, A
2007-08-30
We present, in a unifying way, the main components of three asynchronous event-driven algorithms for simulating physical systems of interacting particles. The first example, hard-particle molecular dynamics (MD), is well-known. We also present a recently-developed diffusion kinetic Monte Carlo (DKMC) algorithm, as well as a novel stochastic molecular-dynamics algorithm that builds on the Direct Simulation Monte Carlo (DSMC). We explain how to effectively combine event-driven and classical time-driven handling, and discuss some promises and challenges for event-driven simulation of realistic physical systems.
Infrared Kuiper Belt Constraints
Teplitz, V.L.; Stern, S.A.; Anderson, J.D.; Rosenbaum, D.; Scalise, R.J.; Wentzler, P.
1999-05-01
We compute the temperature and IR signal of particles of radius a and albedo α at heliocentric distance R, taking into account the emissivity effect, and give an interpolating formula for the result. We compare with analyses of COBE DIRBE data by others (including recent detection of the cosmic IR background) for various values of heliocentric distance R, particle radius a, and particle albedo α. We then apply these results to a recently developed picture of the Kuiper belt as a two-sector disk with a nearby, low-density sector (40 < R < 50-90 AU) and a more distant sector with a higher density. We consider the case in which passage through a molecular cloud essentially cleans the solar system of dust. We apply a simple model of dust production by comet collisions and removal by the Poynting-Robertson effect to find limits on total and dust masses in the near and far sectors as a function of time since such a passage. Finally, we compare Kuiper belt IR spectra for various parameter values. Results of this work include: (1) numerical limits on Kuiper belt dust as a function of (R, a, α) on the basis of four alternative sets of constraints, including those following from the recent discovery of the cosmic IR background by Hauser et al.; (2) application to the two-sector Kuiper belt model, finding mass limits and spectrum shape for different values of relevant parameters, including dependence on time elapsed since last passage through a molecular cloud cleared the outer solar system of dust; and (3) potential use of spectral information to determine the time since the last passage of the Sun through a giant molecular cloud. © 1999 The American Astronomical Society.
Evolutionary constraints or opportunities?
Sharov, Alexei A.
2014-01-01
Natural selection is traditionally viewed as a leading factor of evolution, whereas variation is assumed to be random and non-directional. Any order in variation is attributed to epigenetic or developmental constraints that can hinder the action of natural selection. In contrast I consider the positive role of epigenetic mechanisms in evolution because they provide organisms with opportunities for rapid adaptive change. Because the term “constraint” has negative connotations, I use the term “regulated variation” to emphasize the adaptive nature of phenotypic variation, which helps populations and species to survive and evolve in changing environments. The capacity to produce regulated variation is a phenotypic property, which is not described in the genome. Instead, the genome acts as a switchboard, where mostly random mutations switch “on” or “off” preexisting functional capacities of organism components. Thus, there are two channels of heredity: informational (genomic) and structure-functional (phenotypic). Functional capacities of organisms most likely emerged in a chain of modifications and combinations of more simple ancestral functions. The role of DNA has been to keep records of these changes (without describing the result) so that they can be reproduced in the following generations. Evolutionary opportunities include adjustments of individual functions, multitasking, connection between various components of an organism, and interaction between organisms. The adaptive nature of regulated variation can be explained by the differential success of lineages in macro-evolution. Lineages with more advantageous patterns of regulated variation are likely to produce more species and secure more resources (i.e., long-term lineage selection). PMID:24769155
Why Are Drugs So Hard to Quit?
MedlinePlus Videos and Cool Tools
... Quitting drugs is hard because addiction is a brain disease. Your brain is like a control tower that sends out ... and choices. Addiction changes the signals in your brain and makes it hard to feel OK without ...
NASA Astrophysics Data System (ADS)
Gusev, M. I.
2016-10-01
We study penalty function type methods for computing the reachable sets of nonlinear control systems with state constraints. The state constraints are given by a finite system of smooth inequalities. The proposed methods are based on removing the state constraints by replacing the original system with an auxiliary system without constraints. This auxiliary system is obtained by modifying the set of velocities of the original system around the boundary of the constraints. The right-hand side of the system depends on a penalty parameter. We prove that the reachable sets of the auxiliary system approximate, in the Hausdorff metric, the reachable set of the original system with state constraints as the penalty parameter tends to zero (respectively, infinity), and we give estimates of the rate of convergence. Numerical algorithms for computing the reachable sets, based on Pontryagin's maximum principle, are also considered.
Enhancement of coupled multichannel images using sparsity constraints.
Ramakrishnan, Naveen; Ertin, Emre; Moses, Randolph L
2010-08-01
We consider the problem of joint enhancement of multichannel images with pixel based constraints on the multichannel data. Previous work by Cetin and Karl introduced nonquadratic regularization methods for SAR image enhancement using sparsity enforcing penalty terms. We formulate an optimization problem that jointly enhances complex-valued multichannel images while preserving the cross-channel information, which we include as constraints tying the multichannel images together. We pose this problem as a joint optimization problem with constraints. We first reformulate it as an equivalent (unconstrained) dual problem and develop a numerically-efficient method for solving it. We develop the Dual Descent method, which has low complexity, for solving the joint optimization problem. The algorithm is applied to both an interferometric synthetic aperture radar (IFSAR) problem, in which the relative phase between two complex-valued images indicate height, and to a synthetic multimodal medical image example.
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
Hierarchical motion estimation with smoothness constraints and postprocessing
NASA Astrophysics Data System (ADS)
Xie, Kan; Van Eycken, Luc; Oosterlinck, Andre J.
1996-01-01
How to acquire accurate and reliable motion parameters from an image sequence is a knotty problem for many applications in image processing, image recognition, and video coding, especially when scenes involve moving objects with various shapes and sizes as well as very fast and complicated motion. In this paper, an improved pel-based motion estimation (ME) algorithm with smoothness constraints is presented, which is based on the investigation and comparison of different existing pel-based ME (or optical flow) algorithms. Then, in order to cope with various moving objects and their complex motion, a hierarchical ME algorithm with smoothness constraints and postprocessing is proposed. The experimental results show that the motion parameters obtained by the hierarchical ME algorithm are quite credible and seem to be close to the real physical motion fields if the luminance intensity changes are due to the motion of objects. The hierarchical ME algorithm still provides approximate and smooth vector fields even for scenes that involve some motion-irrelevant intensity changes or blurring caused by violent motion.
Russian Doll Search for solving Constraint Optimization problems
Verfaillie, G.; Lemaitre, M.
1996-12-31
Although the Constraint Satisfaction framework has been extended to deal with Constraint Optimization problems, optimization is far more complex than satisfaction. One of the causes of the inefficiency of complete tree search methods, like Depth First Branch and Bound, lies in the poor quality of the lower bound on the global valuation of a partial assignment, even when using Forward Checking techniques. In this paper, we introduce the Russian Doll Search algorithm, which replaces one search by n successive searches on nested subproblems (n being the number of problem variables), records the results of each search, and uses them later, when solving larger subproblems, to improve the lower bound on the global valuation of any partial assignment. On small random problems and on large real scheduling problems, this algorithm yields surprisingly good results, which improve greatly as the problems become more constrained and the bandwidth of the variable ordering used diminishes.
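The nested-search idea described in this abstract can be sketched in Python on a toy weighted CSP. This is an illustrative sketch, not the authors' implementation: the problem encoding (pairwise nonnegative soft-constraint costs via a `cost` function) and all names are assumptions made for the example.

```python
def russian_doll_search(domains, cost):
    """Russian Doll Search sketch: solve n nested subproblems, smallest first,
    recording each optimum to strengthen the branch-and-bound lower bound.

    domains: list of candidate-value lists, one per variable.
    cost(j, vj, i, vi): nonnegative soft-constraint cost between two
    assigned variables. Returns the optimal total cost over all variables.
    """
    n = len(domains)
    rds = [0] * (n + 1)  # rds[i] = recorded optimum of subproblem on vars i..n-1

    def dfbb(i, assignment, acc, best):
        # Depth-first branch and bound over variables i..n-1.
        if i == n:
            return min(best, acc)
        for v in domains[i]:
            # Cost of v against the variables assigned so far.
            local = sum(cost(j, assignment[j], i, v) for j in assignment)
            # RDS lower bound: accumulated cost + recorded optimum of the
            # nested subproblem on the remaining variables.
            if acc + local + rds[i + 1] < best:
                assignment[i] = v
                best = dfbb(i + 1, assignment, acc + local, best)
                del assignment[i]
        return best

    for i in range(n - 1, -1, -1):  # n successive searches on nested subproblems
        rds[i] = dfbb(i, {}, 0, float("inf"))
    return rds[0]
```

Because costs are nonnegative, the optimum of the nested subproblem is a valid lower bound on the cost still to be incurred, which is exactly the bound-strengthening role the abstract attributes to the recorded searches.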
NP-hardness of decoding quantum error-correction codes
Hsieh, Min-Hsiu; Le Gall, Francois
2011-05-15
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Parameterized Complexity of k-Anonymity: Hardness and Tractability
NASA Astrophysics Data System (ADS)
Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri
The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has been recently proposed is the k-anonymity, where the rows of a table are partitioned in clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
Applicability of Dynamic Facilitation Theory to Binary Hard Disk Systems
NASA Astrophysics Data System (ADS)
Isobe, Masaharu; Keys, Aaron S.; Chandler, David; Garrahan, Juan P.
2016-09-01
We numerically investigate the applicability of dynamic facilitation (DF) theory for glass-forming binary hard disk systems where supercompression is controlled by pressure. By using novel efficient algorithms for hard disks, we are able to generate equilibrium supercompressed states in an additive nonequimolar binary mixture, where microcrystallization and size segregation do not emerge at high average packing fractions. Above an onset pressure where collective heterogeneous relaxation sets in, we find that relaxation times are well described by a "parabolic law" with pressure. We identify excitations, or soft spots, that give rise to structural relaxation and find that they are spatially localized, their average concentration decays exponentially with pressure, and their associated energy scale is logarithmic in the excitation size. These observations are consistent with the predictions of DF generalized to systems controlled by pressure rather than temperature.
Automated formulation of constraint satisfaction problems
Sabin, M.; Freuder, E.C.
1996-12-31
A wide variety of problems can be represented as constraint satisfaction problems (CSPs), and once so represented can be solved by a variety of effective algorithms. However, as with other powerful, general AI problem-solving methods, we must still address the task of moving from a natural statement of the problem to a formulation of the problem as a CSP. This research addresses the task of automating this problem formulation process, using logic puzzles as a testbed. Beyond problem formulation per se, we address the issues of effective problem formulation, i.e., finding formulations that support more efficient solution, as well as incremental problem formulations that support reasoning from partial information and are congenial to human thought processes.
Geographic Constraints on Social Network Groups
González, Marta C.; Barabási, Albert-László; Christakis, Nicholas A.
2011-01-01
Social groups are fundamental building blocks of human societies. While our social interactions have always been constrained by geography, it has been impossible, due to practical difficulties, to evaluate the nature of this restriction on social group structure. We construct a social network of individuals whose most frequent geographical locations are also known. We also classify the individuals into groups according to a community detection algorithm. We study the variation of geographical span for social groups of varying sizes, and explore the relationship between topological positions and geographic positions of their members. We find that small social groups are geographically very tight, but become much more clumped when the group size exceeds about 30 members. Also, we find no correlation between the topological positions and geographic positions of individuals within network communities. These results suggest that spreading processes face distinct structural and spatial constraints. PMID:21483665
Resource allocation using constraint propagation
NASA Technical Reports Server (NTRS)
Rogers, John S.
1990-01-01
The concept of constraint propagation was discussed. Performance increases are possible with careful application of these constraint mechanisms. The degree of performance increase is related to the interdependence of the different activities' resource usage. Although this method of applying constraints to activities and resources is often beneficial, it is clearly no panacea for the computational woes experienced by dynamic resource allocation and scheduling problems. A combined effort toward execution optimization in all areas of the system during development, together with selection of the appropriate development environment, is still the best method of producing an efficient system.
Warren G. Harding and the Press.
ERIC Educational Resources Information Center
Whitaker, W. Richard
There are many parallels between the Richard M. Nixon administration and Warren G. Harding's term: both Republicans, both touched by scandal, and both having a unique relationship with the press. But in Harding's case the relationship was a positive one. One of Harding's first official acts as president was to restore the regular White House news…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hard hats. 56.15002 Section 56.15002 Mineral... HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Personal Protection § 56.15002 Hard hats. All persons shall wear suitable hard hats when in or around a mine or plant where falling...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Hard hats. 57.15002 Section 57.15002 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND NONMETAL MINE SAFETY AND... Underground § 57.15002 Hard hats. All persons shall wear suitable hard hats when in or around a mine or...
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATIONS Germination Tests in the Administration of the Act § 201.57 Hard seeds. Seeds which remain hard at..., are to be counted as “hard seed.” If at the end of the germination period provided for legumes, okra... percentage of germination. For flatpea, continue the swollen seed in test for 14 days when germinating at...
Computing group cardinality constraint solutions for logistic regression problems.
Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M
2017-01-01
We derive an algorithm to directly solve logistic regression subject to a group cardinality constraint (group sparsity) and use it to classify intra-subject MRI sequences (e.g. cine MRIs) of healthy versus diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. Avoiding weighting features, we propose to directly solve the group cardinality constrained logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcing the group sparsity constraint. The minimum of the smooth and convex logistic regression problem is determined via gradient descent, while we derive a closed form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients who received reconstructive surgery of Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significantly higher classification accuracy than alternative solutions to this model, i.e., ones relaxing group cardinality constraints.
Grefenstette, J.J.
1994-12-31
Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.
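The abstract's description (a population of candidate solutions evolving through competition and controlled variation) can be sketched as a minimal genetic algorithm. All parameter values, operator choices (tournament selection, one-point crossover, bit-flip mutation), and names here are illustrative assumptions, not drawn from the cited work.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, seed=0):
    """Minimal GA over bit strings: competition via tournament selection,
    controlled variation via one-point crossover and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Competition: the fitter of two random individuals survives.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)      # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Controlled variation: flip each bit with small probability.
            child = [(1 - g) if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

For example, with `fitness=sum` (the "OneMax" toy problem) the population quickly converges toward the all-ones string.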
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Applying Motion Constraints Based on Test Data
NASA Technical Reports Server (NTRS)
Burlone, Michael
2014-01-01
MSC ADAMS is simulation software used to analyze multibody dynamics. Using user subroutines, it is possible to apply motion constraints to the rigid bodies so that they match the motion profile collected from test data. This presentation describes the process of taking test data and passing it to ADAMS using user subroutines, using the Morpheus free-flight 4 test as an example of motion data used for this purpose. Morpheus is a prototype lander vehicle built by NASA that serves as a test bed for various experimental technologies (see backup slides for details). MSC ADAMS is used to play back telemetry data (vehicle orientation and position) from each test as the inputs to a 6-DoF general motion constraint (details in backup slides). The playback simulations allow engineers to examine and analyze the flight trajectory as well as observe vehicle motion from any angle and at any playback speed. This facilitates the development of robust and stable control algorithms, increasing reliability and reducing development costs of this developmental engine. The simulation also incorporates a 3D model of the artificial hazard field, allowing engineers to visualize and measure performance of the developmental autonomous landing and hazard avoidance technology. ADAMS is a multibody dynamics solver: it uses forces, constraints, and mass properties to numerically integrate equations of motion. The ADAMS solver asks the motion subroutine for position, velocity, and acceleration values at various time steps, and those values must be continuous over the whole time domain. Each degree of freedom in the telemetry data can be examined separately; however, linear interpolation of the telemetry data is invalid, since it introduces discontinuities in velocity and acceleration.
Efficient Algorithms for Langevin and DPD Dynamics.
Goga, N; Rzepiela, A J; de Vries, A H; Marrink, S J; Berendsen, H J C
2012-10-09
In this article, we present several algorithms for stochastic dynamics, including Langevin dynamics and different variants of Dissipative Particle Dynamics (DPD), applicable to systems with or without constraints. The algorithms are based on the impulsive application of friction and noise, thus avoiding the computational complexity of algorithms that apply continuous friction and noise. Simulation results on thermostat strength and diffusion properties for ideal gas, coarse-grained (MARTINI) water, and constrained atomic (SPC/E) water systems are discussed. We show that the measured thermal relaxation rates agree well with theoretical predictions. The influence of various parameters on the diffusion coefficient is discussed.
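The impulsive friction-and-noise idea can be sketched for a 1D harmonic oscillator: take a deterministic velocity-Verlet step, then apply a single velocity impulse whose friction fraction and matching noise amplitude satisfy fluctuation-dissipation. This is a sketch of the general idea under assumed parameter names and values, not the specific algorithms of the cited article.

```python
import math
import random

def langevin_impulse_avg_ke(steps=20000, dt=0.05, gamma=1.0, kT=1.0,
                            mass=1.0, k_spring=1.0, seed=1):
    """1D harmonic oscillator: velocity-Verlet step + impulsive Langevin
    thermostat. Returns the time-averaged kinetic energy (target: kT/2)."""
    rng = random.Random(seed)
    x, v = 1.0, 0.0
    # Fraction of velocity removed per impulse over a step of length dt.
    f = 1.0 - math.exp(-gamma * dt)
    # Noise amplitude chosen so the Maxwell velocity distribution is
    # stationary under v -> (1 - f) v + sigma * xi  (fluctuation-dissipation).
    sigma = math.sqrt(f * (2.0 - f) * kT / mass)
    a = -k_spring * x / mass
    ke_sum = 0.0
    for _ in range(steps):
        # Deterministic velocity-Verlet step.
        v += 0.5 * dt * a
        x += dt * v
        a = -k_spring * x / mass
        v += 0.5 * dt * a
        # Impulsive application of friction and noise.
        v = (1.0 - f) * v + sigma * rng.gauss(0.0, 1.0)
        ke_sum += 0.5 * mass * v * v
    return ke_sum / steps
```

With kT = 1, the long-run average kinetic energy should settle near 0.5, which is a quick consistency check on the impulse amplitudes.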
Refined genetic algorithm -- Economic dispatch example
Sheble, G.B.; Brittig, K.
1995-02-01
A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation, and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
Self-accelerating massive gravity: Hidden constraints and characteristics
NASA Astrophysics Data System (ADS)
Motloch, Pavel; Hu, Wayne; Motohashi, Hayato
2016-05-01
Self-accelerating backgrounds in massive gravity provide an arena to explore the Cauchy problem for derivatively coupled fields that obey complex constraints which reduce the phase space degrees of freedom. We present here an algorithm based on the Kronecker form of a matrix pencil that finds all hidden constraints, for example those associated with derivatives of the equations of motion, and characteristic curves for any 1 +1 dimensional system of linear partial differential equations. With the Regge-Wheeler-Zerilli decomposition of metric perturbations into angular momentum and parity states, this technique applies to fully 3 +1 dimensional perturbations of massive gravity around any spherically symmetric self-accelerating background. Five spin modes of the massive graviton propagate once the constraints are imposed: two spin-2 modes with luminal characteristics present in the massless theory as well as two spin-1 modes and one spin-0 mode. Although the new modes all possess the same—typically spacelike—characteristic curves, the spin-1 modes are parabolic while the spin-0 modes are hyperbolic. The joint system, which remains coupled by nonderivative terms, cannot be solved as a simple Cauchy problem from a single noncharacteristic surface. We also illustrate the generality of the algorithm with other cases where derivative constraints reduce the number of propagating degrees of freedom or order of the equations.
Rao, R.; Buescher, K.L.; Hanagandi, V.
1995-12-31
In the optimal plant location and sizing problem, it is desired to optimize a cost function involving plant sizes, locations, and production schedules in the face of supply-demand and plant capacity constraints. We use simulated annealing (SA) and a genetic algorithm (GA) to solve this problem. We compare these techniques with respect to computational expense, constraint handling capabilities, and the quality of the solution obtained in general. Simulated annealing is a combinatorial stochastic optimization technique which has been shown to be effective in obtaining fast suboptimal solutions for computationally hard problems. The technique is especially attractive since solutions are obtained in polynomial time for problems where an exhaustive search for the global optimum would require exponential time. We propose a synergy between the cluster analysis technique, popular in classical stochastic global optimization, and the GA to accomplish global optimization. This synergy minimizes redundant searches around local optima and enhances the capability of the GA to explore new areas in the search space.
NOSEP: Nonoverlapping Sequence Pattern Mining With Gap Constraints.
Wu, Youxi; Tong, Yao; Zhu, Xingquan; Wu, Xindong
2017-09-28
Sequence pattern mining aims to discover frequent subsequences as patterns in a single sequence or a sequence database. By combining gap constraints (or flexible wildcards), users can specify special characteristics of the patterns and discover meaningful subsequences suitable for their own application domains, such as finding gene transcription sites from DNA sequences or discovering patterns for time series data classification. Due to the inherent complexity of sequence patterns, including the exponential candidate space with respect to pattern letters and gap constraints, to date, existing sequence pattern mining methods are either incomplete or do not support the Apriori property because the support ratio of a pattern may be greater than that of its subpatterns. Most importantly, patterns discovered by these methods are either too restrictive or too general and cannot represent underlying meaningful knowledge in the sequences. In this paper, we focus on a nonoverlapping sequence pattern (NOSEP) mining task with gap constraints, where an NOSEP allows sequence letters to be flexibly and maximally utilized for pattern discovery. A new Apriori-based NOSEP mining algorithm is proposed. NOSEP is a complete pattern mining algorithm, which uses a specially designed data structure, Nettree, to calculate the exact occurrence of a pattern in the sequence. Experimental results and comparisons on biology DNA sequences, time series data, and Gazelle datasets demonstrate the efficiency of the proposed algorithm and the uniqueness of NOSEPs compared to other methods.
Neutrino constraints on inelastic dark matter after CDMS II results
NASA Astrophysics Data System (ADS)
Shu, Jing; Yin, Peng-Fei; Zhu, Shou-Hua
2010-06-01
We discuss the neutrino constraints from solar and terrestrial dark matter (DM) annihilations in the inelastic dark matter (iDM) scenario after the recent CDMS II results. To reconcile the DAMA/LIBRA data with constraints from all other direct experiments, the iDM needs to be light (mχ<100GeV) and have a large DM-nucleon cross section (σn˜10-4pb in the spin-independent (SI) scattering and σn˜10pb in the spin-dependent (SD) scattering). The dominant contribution to the iDM capture in the Sun is from scattering off Fe/Al in the SI/SD case. Current bounds from Super-Kamiokande exclude the hard DM annihilation channels, such as W+W-, ZZ, tt¯, and τ+τ-. For soft channels such as bb¯ and cc¯, the limits are loose, but could be tested or further constrained by future IceCube plus DeepCore. For neutrino constraints from the DM annihilation in the Earth, due to the weaker gravitational effect of the Earth and inelastic capture condition, the constraint exists only for small mass splitting δ<40keV and mχ˜(10,50)GeV even in the τ+τ- channel.
Weighted constraints in generative linguistics.
Pater, Joe
2009-08-01
Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) or as ranks (OT). Weighted constraints have advantages for the construction of accounts of language learning and other cognitive processes, partly because they allow for the adaptation of connectionist and statistical models. HG has been little studied in generative linguistics, however, largely due to influential claims that weighted constraints make incorrect predictions about the typology of natural languages, predictions that are not shared by the more popular OT. This paper makes the case that HG is in fact a promising framework for typological research, and reviews and extends the existing arguments for weighted over ranked constraints.
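The weighted-versus-ranked distinction the abstract describes can be made concrete in a few lines. This is an illustrative sketch, not from the paper: the constraint names, candidates, and violation counts are hypothetical, and the OT comparison is modeled as lexicographic ordering of violation profiles.

```python
# Sketch: how Harmonic Grammar (weights) and Optimality Theory (ranks)
# can pick different winners for the same violation profiles.

def hg_winner(candidates, weights):
    """HG: the winner maximizes harmony = -sum(weight * violations)."""
    def harmony(violations):
        return -sum(w * v for w, v in zip(weights, violations))
    return max(candidates, key=lambda c: harmony(candidates[c]))

def ot_winner(candidates, ranking):
    """OT: compare violation vectors lexicographically by constraint rank."""
    def profile(violations):
        return tuple(violations[i] for i in ranking)
    return min(candidates, key=lambda c: profile(candidates[c]))

# Hypothetical violation counts on constraints [C1, C2, C3].
candidates = {"cand_a": [0, 2, 0], "cand_b": [1, 0, 0]}

# OT with C1 ranked highest: one C1 violation is fatal -> cand_a wins.
print(ot_winner(candidates, ranking=[0, 1, 2]))   # cand_a
# HG with equal weights: two C2 violations outweigh one C1 -> cand_b wins,
# a "gang effect" possible under weighting but not under strict ranking.
print(hg_winner(candidates, weights=[1.0, 1.0, 1.0]))   # cand_b
```

The divergence on this toy input is exactly the typological question the paper addresses: weighting permits cumulative constraint interaction that ranking forbids.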
Constraint-based stereo matching
NASA Technical Reports Server (NTRS)
Kuan, D. T.
1987-01-01
The major difficulty in stereo vision is the correspondence problem that requires matching features in two stereo images. Researchers describe a constraint-based stereo matching technique using local geometric constraints among edge segments to limit the search space and to resolve matching ambiguity. Edge segments are used as image features for stereo matching. Epipolar constraint and individual edge properties are used to determine possible initial matches between edge segments in a stereo image pair. Local edge geometric attributes such as continuity, junction structure, and edge neighborhood relations are used as constraints to guide the stereo matching process. The result is a locally consistent set of edge segment correspondences between stereo images. These locally consistent matches are used to generate higher-level hypotheses on extended edge segments and junctions to form more global contexts to achieve global consistency.
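The first stage described above, generating initial match hypotheses from the epipolar constraint and individual edge properties, can be sketched as follows. This is our illustration, not the paper's implementation: it assumes rectified images (epipolar lines are image rows), and the segment format, thresholds, and coordinates are hypothetical.

```python
# Sketch: initial stereo match hypotheses from the epipolar constraint
# plus a simple edge property (orientation similarity).

def initial_matches(left_edges, right_edges,
                    max_disparity=30, max_orient_diff=10.0):
    """Each edge: (row, col, orientation_deg). Returns (i, j, disparity) pairs."""
    matches = []
    for i, (rl, cl, ol) in enumerate(left_edges):
        for j, (rr, cr, orr) in enumerate(right_edges):
            same_epipolar = (rl == rr)                   # epipolar constraint
            disparity = cl - cr                          # positive in standard geometry
            similar = abs(ol - orr) <= max_orient_diff   # individual edge property
            if same_epipolar and 0 <= disparity <= max_disparity and similar:
                matches.append((i, j, disparity))
    return matches

left = [(5, 40, 90.0), (5, 70, 0.0)]
right = [(5, 35, 88.0), (6, 60, 0.0)]
print(initial_matches(left, right))   # [(0, 0, 5)]: only one pair survives
```

In the paper's pipeline, ambiguities that survive this stage are then resolved by the local geometric constraints (continuity, junction structure, neighborhood relations).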
The hard problem of cooperation.
Eriksson, Kimmo; Strimling, Pontus
2012-01-01
Based on individual variation in cooperative inclinations, we define the "hard problem of cooperation" as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior.
NASA Astrophysics Data System (ADS)
Gras, Vincent; Luong, Michel; Amadon, Alexis; Boulant, Nicolas
2015-12-01
In Magnetic Resonance Imaging at ultra-high field, kT-points radiofrequency pulses combined with parallel transmission are a promising technique to mitigate the B1 field inhomogeneity in 3D imaging applications. The optimization of the corresponding k-space trajectory for its slice-selective counterpart, i.e. the spokes method, has been shown in various studies to be very valuable but also dependent on the hardware and specific absorption rate constraints. Due to the larger number of degrees of freedom than for spokes excitations, joint design techniques based on the fine discretization (gridding) of the parameter space become hardly tractable for kT-points pulses. In this article, we thus investigate the simultaneous optimization of the 3D blipped k-space trajectory and of the kT-points RF pulses, using a magnitude least squares cost-function, with explicit constraints and in the large flip angle regime. A second-order active-set algorithm is employed due to its demonstrated success and robustness in similar problems. An analysis of global optimality and of the structure of the returned trajectories is proposed. The improvement provided by the k-space trajectory optimization is validated experimentally by measuring the flip angle on a spherical water phantom at 7T and via Quantum Process Tomography.
Thermopile detector radiation hard readout
NASA Astrophysics Data System (ADS)
Gaalema, Stephen; Van Duyne, Stephen; Gates, James L.; Foote, Marc C.
2010-08-01
The NASA Jupiter Europa Orbiter (JEO) conceptual payload contains a thermal instrument with six different spectral bands ranging from 8μm to 100μm. The thermal instrument is based on multiple linear arrays of thermopile detectors that are intrinsically radiation hard; however, the thermopile CMOS readout needs to be hardened to tolerate the radiation sources of the JEO mission. Black Forest Engineering is developing a thermopile readout to tolerate the JEO mission radiation sources. The thermal instrument and ROIC process/design techniques are described to meet the JEO mission requirements.
Radiation hard electronics for LHC
NASA Astrophysics Data System (ADS)
Raymond, M.; Millmore, M.; Hall, G.; Sachdeva, R.; French, M.; Nygård, E.; Yoshioka, K.
1995-02-01
A CMOS front end electronics chain is being developed by the RD20 collaboration for microstrip detector readout at LHC. It is based on a preamplifier and CR-RC filter, an analogue pipeline, and an analogue signal processor. Amplifiers and transistor test structures have been constructed and evaluated in detail using a Harris 1.2 μm radiation hardened CMOS process. Progress with larger scale elements, including 32 channel front end chips, is described. A radiation hard 128 channel chip, with a 40 MHz analogue multiplexer, is to be submitted for fabrication in July 1994, and will form the basis of the readout of the tracking system of the CMS experiment.
Radiation Hardness Assurance (RHA) Guideline
NASA Technical Reports Server (NTRS)
Campola, Michael J.
2016-01-01
Radiation Hardness Assurance (RHA) consists of all activities undertaken to ensure that the electronics and materials of a space system perform to their design specifications after exposure to the mission space environment. The subset of interest for NEPP and the REAG is EEE parts. It is important to recognize that all of these undertakings form a feedback loop and require constant iteration and updating throughout the mission life. More detail can be found in the reference materials on applicable test data for part usage.
Hard Scattering Studies at Jlab
Harutyun Avagyan; Peter Bosted; Volker Burkert; Latifa Elouadrhiri
2005-09-01
We present current activities and future prospects for studies of hard scattering processes using the CLAS detector and the CEBAF polarized electron beam. Kinematic dependences of single and double spin asymmetries have been measured in a wide kinematic range at CLAS with polarized NH₃ and unpolarized liquid hydrogen targets. It has been shown that the data are consistent with factorization, and the observed target and beam asymmetries are in good agreement with measurements performed at higher energies, suggesting that the high-energy description of the semi-inclusive DIS process can be extended to the moderate energies of the JLab measurements.
Efficient multiple-way graph partitioning algorithms
Dasdan, A.; Aykanat, C.
1995-12-01
Graph partitioning deals with evenly dividing a graph into two or more parts such that the total weight of edges interconnecting these parts, i.e., the cutsize, is minimized. Graph partitioning has important applications in VLSI layout, mapping, and sparse Gaussian elimination. Since the graph partitioning problem is NP-hard, one must resort to polynomial-time heuristics to obtain a good, and hopefully near-optimal, solution. Kernighan and Lin (KL) proposed a 2-way partitioning algorithm. Fiduccia and Mattheyses (FM) introduced a faster version of the KL algorithm. Sanchis (FMS) generalized the FM algorithm to multiple-way partitioning. Simulated Annealing (SA) is one of the most successful approaches that are not KL-based.
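The KL/FM family of heuristics mentioned above shares one core move: relocate the node with the best cut-reduction gain, lock it, and keep the best partition seen during the pass. The sketch below is ours, not from the paper; it uses unit edge weights and naive O(E) gain recomputation, whereas the real FM algorithm uses a bucket gain structure to run in linear time.

```python
# Sketch of a single Fiduccia-Mattheyses-style refinement pass for
# 2-way graph partitioning (illustrative only, unit edge weights).

def cutsize(edges, side):
    return sum(1 for u, v in edges if side[u] != side[v])

def gain(node, edges, side):
    """Gain of moving `node`: external edges minus internal edges."""
    g = 0
    for u, v in edges:
        if node in (u, v):
            other = v if u == node else u
            g += 1 if side[other] != side[node] else -1
    return g

def fm_pass(n, edges, side, max_imbalance=1):
    """One pass: move the best-gain unlocked node, keep the best prefix seen."""
    best_side, best_cut = dict(side), cutsize(edges, side)
    locked = set()
    for _ in range(n):
        part0 = sum(1 for x in range(n) if side[x] == 0)
        candidates = []
        for u in range(n):
            if u in locked:
                continue
            new_part0 = part0 - 1 if side[u] == 0 else part0 + 1
            if abs(2 * new_part0 - n) <= max_imbalance:   # balance constraint
                candidates.append((gain(u, edges, side), u))
        if not candidates:
            break
        _, u = max(candidates)
        side[u] ^= 1               # move and lock, even on negative gain,
        locked.add(u)              # which lets the pass escape local minima
        cut = cutsize(edges, side)
        if cut < best_cut:
            best_cut, best_side = cut, dict(side)
    side.update(best_side)         # roll back to the best point of the pass
    return best_cut

edges = [(0, 1), (1, 2), (2, 3)]   # a path graph
side = {0: 0, 1: 1, 2: 0, 3: 1}    # initial cutsize = 3
print(fm_pass(4, edges, side, max_imbalance=2))   # -> 1
```

Moving nodes even at negative gain, while remembering the best prefix, is the feature that distinguishes KL/FM passes from plain greedy descent.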
A Hybrid Causal Search Algorithm for Latent Variable Models
Ogarrio, Juan Miguel; Spirtes, Peter; Ramsey, Joe
2017-01-01
Existing score-based causal model search algorithms such as GES (and a speeded up version, FGS) are asymptotically correct, fast, and reliable, but make the unrealistic assumption that the true causal graph does not contain any unmeasured confounders. There are several constraint-based causal search algorithms (e.g RFCI, FCI, or FCI+) that are asymptotically correct without assuming that there are no unmeasured confounders, but often perform poorly on small samples. We describe a combined score and constraint-based algorithm, GFCI, that we prove is asymptotically correct. On synthetic data, GFCI is only slightly slower than RFCI but more accurate than FCI, RFCI and FCI+. PMID:28239434
Thermodynamic constraints for biochemical networks.
Beard, Daniel A; Babson, Eric; Curtis, Edward; Qian, Hong
2004-06-07
The constraint-based approach to analysis of biochemical systems has emerged as a useful tool for rational metabolic engineering. Flux balance analysis (FBA) is based on the constraint of mass conservation; energy balance analysis (EBA) is based on non-equilibrium thermodynamics. The power of these approaches lies in the fact that the constraints are based on physical laws, and do not make use of unknown parameters. Here, we show that the network structure (i.e. the stoichiometric matrix) alone provides a system of constraints on the fluxes in a biochemical network which are feasible according to both mass balance and the laws of thermodynamics. A realistic example shows that these constraints can be sufficient for deriving unambiguous, biologically meaningful results. The thermodynamic constraints are obtained by comparing the sign pattern of the flux vector to the sign patterns of the cycles of the internal cycle space, via a connection between stoichiometric network theory (SNT) and the mathematical theory of oriented matroids.
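The mass-balance constraint underlying FBA, and the kind of internal cycle that the thermodynamic sign-pattern test rules out, can be shown on a toy network. This is our minimal sketch, not the paper's realistic example; the three-reaction network below is hypothetical.

```python
# Sketch: the FBA mass-balance constraint S v = 0, where S is the
# stoichiometric matrix (rows = metabolites, columns = reactions).
import numpy as np

# Hypothetical network: A -> B (v0), B -> C (v1), C -> A (v2).
S = np.array([
    [-1,  0,  1],   # metabolite A
    [ 1, -1,  0],   # metabolite B
    [ 0,  1, -1],   # metabolite C
])

def mass_balanced(S, v, tol=1e-9):
    """Steady state requires production = consumption for every metabolite."""
    return bool(np.all(np.abs(S @ v) < tol))

v = np.array([2.0, 2.0, 2.0])
print(mass_balanced(S, v))        # True: v satisfies mass balance...
# ...but v circulates around a pure internal cycle: its sign pattern
# (+, +, +) matches that of the cycle below, which thermodynamics (EBA)
# forbids, since a net flux around a loop would need no driving force.
cycle = np.array([1.0, 1.0, 1.0])  # spans the internal cycle space of S
print(np.allclose(S @ cycle, 0))   # True
```

The paper's contribution is precisely that this second, thermodynamic check can be made from the stoichiometric matrix alone, without kinetic parameters.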
A timeline algorithm for astronomy missions
NASA Technical Reports Server (NTRS)
Moore, J. E.; Guffin, O. T.
1975-01-01
An algorithm is presented for generating viewing timelines for orbital astronomy missions of the pointing (nonsurvey/scan) type. The algorithm establishes a target sequence from a list of candidate targets in a way which maximizes total viewing time. Two special cases are treated. One concerns dim targets which, due to lighting constraints, are scheduled only during the antipolar portion of each orbit. They normally require long observation times extending over several revolutions. A minimum slew heuristic is employed to select the sequence of dim targets. The other case deals with bright, or short duration, targets, which have less restrictive lighting constraints and are scheduled during the portion of each orbit when dim targets cannot be viewed. Since this process moves much more rapidly than the dim path, an enumeration algorithm is used to select the sequence that maximizes total viewing time.
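The minimum-slew heuristic used for the dim-target sequence can be sketched as a nearest-neighbor walk over pointing directions. This is an illustration under our own assumptions, not the mission algorithm: targets are unit vectors, slew cost is the angular separation, and the names and coordinates are hypothetical.

```python
# Sketch: greedy minimum-slew target sequencing. From the current pointing,
# always choose the unvisited target with the smallest angular slew.
import math

def angle(u, v):
    """Angular separation between two unit vectors, in radians."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(max(-1.0, min(1.0, dot)))   # clamp for rounding safety

def min_slew_sequence(start, targets):
    seq, current, remaining = [], start, dict(targets)
    while remaining:
        name = min(remaining, key=lambda t: angle(current, remaining[t]))
        seq.append(name)
        current = remaining.pop(name)
    return seq

targets = {
    "T1": (1.0, 0.0, 0.0),
    "T2": (0.0, 1.0, 0.0),
    "T3": (0.7071, 0.7071, 0.0),
}
print(min_slew_sequence((1.0, 0.0, 0.0), targets))  # ['T1', 'T3', 'T2']
```

For the bright, short-duration targets the abstract instead describes an enumeration over candidate sequences, which is affordable because that scheduling window is short.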
ERIC Educational Resources Information Center
Végh, Ladislav
2016-01-01
The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…
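One of the array algorithms the abstract names, mirroring, i.e. in-place reversal, is typical of what novices trace element by element; a minimal version (our example, not from the article):

```python
# Sketch: in-place array mirroring via two converging indices.
def mirror(a):
    i, j = 0, len(a) - 1
    while i < j:
        a[i], a[j] = a[j], a[i]   # swap the outermost remaining pair
        i, j = i + 1, j - 1
    return a

print(mirror([1, 2, 3, 4, 5]))   # [5, 4, 3, 2, 1]
```

Visualizing how `i` and `j` move and how element values change at each swap is exactly the dynamic behavior the article argues is hard for beginners to follow.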
Hardness correlation for uranium and its alloys
Humphreys, D L; Romig, Jr, A D
1983-03-01
The hardness of 16 different uranium-titanium (U-Ti) alloys was measured on six different hardness scales (Rockwell A, B, C, and D, Knoop, and Vickers). The alloys contained between 0.75 and 2.0 wt % Ti. All of the alloys were solutionized (850 °C, 1 h) and ice-water quenched to produce a supersaturated martensitic phase. A range of hardnesses was obtained by aging the samples for various times and temperatures. The correlation of the various hardness scales was shown to be virtually identical to the hardness-scale correlation for steels. For more-accurate conversion from one hardness scale to another, least-squares-curve fits were determined for the various hardness-scale correlations. 34 figures, 5 tables.
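A least-squares scale-conversion fit of the kind described can be sketched with `numpy.polyfit`. The (Vickers, Rockwell C) pairs below are illustrative values in the typical steel-correlation range, not the paper's uranium-alloy data.

```python
# Sketch: least-squares curve fit for converting between hardness scales.
import numpy as np

hv  = np.array([300, 350, 400, 450, 500, 550, 600])            # Vickers
hrc = np.array([29.8, 35.5, 40.8, 45.3, 49.1, 52.3, 55.2])     # Rockwell C

coeffs = np.polyfit(hv, hrc, deg=2)    # quadratic fit HV -> HRC
predict = np.poly1d(coeffs)
print(round(float(predict(425)), 1))   # interpolated HRC at HV 425
```

A low-degree polynomial fit like this captures the mild curvature of hardness-scale correlations while smoothing measurement scatter; the paper tabulates such fits for each pair of scales.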
Applying Soft Arc Consistency to Distributed Constraint Optimization Problems
NASA Astrophysics Data System (ADS)
Matsui, Toshihiro; Silaghi, Marius C.; Hirayama, Katsutoshi; Yokoo, Makoto; Matsuo, Hiroshi
The Distributed Constraint Optimization Problem (DCOP) is a fundamental framework of multi-agent systems. With DCOPs a multi-agent system is represented as a set of variables and a set of constraints/cost functions. Distributed task scheduling and distributed resource allocation can be formalized as DCOPs. In this paper, we propose an efficient method that applies directed soft arc consistency to a DCOP. In particular, we focus on DCOP solvers that employ pseudo-trees. A pseudo-tree is a graph structure for a constraint network that represents a partial ordering of variables. Some pseudo-tree-based search algorithms perform optimistic searches using explicit/implicit backtracking in parallel. However, for cost functions taking a wide range of cost values, such exact algorithms require many search iterations. Therefore additional improvements are necessary to reduce the number of search iterations. A previous study used a dynamic programming-based preprocessing technique that estimates the lower bound values of costs. However, there are opportunities for further improvements of efficiency. In addition, modifications of the search algorithm are necessary to use the estimated lower bounds. The proposed method applies soft arc consistency (soft AC) enforcement to DCOP. In the proposed method, directed soft AC is performed based on a pseudo-tree in a bottom up manner. Using the directed soft AC, the global lower bound value of cost functions is passed up to the root node of the pseudo-tree. It also totally reduces values of binary cost functions. As a result, the original problem is converted to an equivalent problem. The equivalent problem is efficiently solved using common search algorithms. Therefore, no major modifications are necessary in search algorithms. The performance of the proposed method is evaluated by experimentation. The results show that it is more efficient than previous methods.
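The two elementary soft arc consistency operations the method builds on, projecting cost out of a binary cost function into a unary one, and out of unary costs into a global lower bound, can be sketched on a toy problem. The cost tables below are hypothetical, and this centralized sketch omits the distributed, pseudo-tree-directed bookkeeping of the paper.

```python
# Sketch: basic soft AC projections that extract a global lower bound
# while keeping the weighted constraint problem equivalent.

def project_binary_to_unary(binary, unary_x, values_x, values_y):
    """Move min_y binary(x, y) into unary_x[x] for each value x."""
    for x in values_x:
        alpha = min(binary[(x, y)] for y in values_y)
        unary_x[x] += alpha
        for y in values_y:
            binary[(x, y)] -= alpha   # subtract so total costs are unchanged

def project_unary_to_lb(unary, values):
    """Move min_x unary[x] into the zero-arity cost (a global lower bound)."""
    alpha = min(unary[x] for x in values)
    for x in values:
        unary[x] -= alpha
    return alpha

binary = {(0, 0): 3, (0, 1): 5, (1, 0): 4, (1, 1): 2}
unary_x = {0: 1, 1: 0}
project_binary_to_unary(binary, unary_x, [0, 1], [0, 1])
lb = project_unary_to_lb(unary_x, [0, 1])
print(lb)   # 2: every complete assignment now provably costs at least 2
```

In the paper, these projections are directed bottom-up along the pseudo-tree, so the accumulated zero-arity cost reaches the root as a lower bound that prunes the search without modifying the search algorithm itself.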
2013-01-01
Quantum algorithms based on Temperley-Lieb algebras and approximations of tensor networks yield additive approximations to the Tutte polynomial evaluated at certain roots of unity, including an additive approximation to the Tutte polynomial evaluated at 1; these offer advantages over classical algorithms for such additive approximations.
Hard and Soft Safety Verifications
NASA Technical Reports Server (NTRS)
Wetherholt, Jon; Anderson, Brenda
2012-01-01
The purpose of this paper is to examine the differences between, and the effects of, hard and soft safety verifications. Initially, the terminology should be defined and clarified. A hard safety verification is a datum that demonstrates how a safety control is enacted. An example of this is relief valve testing. A soft safety verification is something usually described as nice to have, but not necessary to prove safe operation. An example of a soft verification is the loss of the Solid Rocket Booster (SRB) casings from Shuttle flight STS-4. When the main parachutes failed, the casings impacted the water and sank. In the nose cap of the SRBs, video cameras recorded the release of the parachutes to determine safe operation and to provide information for potential anomaly resolution. Generally, examination of the casings and nozzles contributed to understanding of the newly developed boosters and their operation. Safety verification of SRB operation was demonstrated by examination of the casings and nozzle for erosion or wear. Loss of the SRBs and associated data did not delay the launch of the next Shuttle flight.
MM Algorithms for Geometric and Signomial Programming.
Lange, Kenneth; Zhou, Hua
2014-02-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
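The MM mechanism the abstract describes, using the arithmetic-geometric mean inequality to build a surrogate with separated variables, can be shown on a small unconstrained geometric program. The worked example below is ours, not taken from the paper: for f(x, y) = x + y + 1/(xy) over x, y > 0, AM-GM gives 1/(xy) ≤ (x_t/(2 y_t)) x⁻² + (y_t/(2 x_t)) y⁻² with equality at the current iterate (x_t, y_t), so each MM update reduces to two one-dimensional minimizations with closed-form solutions.

```python
# Sketch: MM iteration for the geometric program
#   minimize f(x, y) = x + y + 1/(x*y),  x, y > 0  (minimum 3 at x = y = 1).

def f(x, y):
    return x + y + 1.0 / (x * y)

def mm_step(x, y):
    # Surrogate g(u, v) = u + v + (x/(2y)) u**-2 + (y/(2x)) v**-2 majorizes f
    # and separates variables; minimizing each term gives the closed forms:
    #   d/du [u + (x/(2y)) u**-2] = 0  ->  u = (x / y) ** (1/3)
    return (x / y) ** (1.0 / 3.0), (y / x) ** (1.0 / 3.0)

x, y = 2.0, 0.5
for _ in range(60):
    assert f(*mm_step(x, y)) <= f(x, y) + 1e-12   # MM descent property
    x, y = mm_step(x, y)
print(round(x, 4), round(y, 4), round(f(x, y), 4))   # 1.0 1.0 3.0
```

The descent property checked in the loop is the generic MM guarantee: minimizing a surrogate that majorizes f and touches it at the current iterate can never increase f.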
Decentralized Patrolling Under Constraints in Dynamic Environments.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-12-22
We investigate a decentralized patrolling problem for dynamic environments where information is distributed alongside threats. In this problem, agents obtain information at a location, but may suffer attacks from the threat at that location. In a decentralized fashion, each agent patrols in a designated area of the environment and interacts with a limited number of agents. Therefore, the goal of these agents is to coordinate to gather as much information as possible while limiting the damage incurred. Hence, we model this class of problem as a transition-decoupled partially observable Markov decision process with health constraints. Furthermore, we propose scalable decentralized online algorithms based on Monte Carlo tree search and a factored belief vector. We empirically evaluate our algorithms on decentralized patrolling problems and benchmark them against the state-of-the-art online planning solver. The results show that our approach outperforms the state-of-the-art by more than 56% on six-agent patrolling problems and can scale up to 24 agents in reasonable time.
On the hardness of offline multi-objective optimization.
Teytaud, Olivier
2007-01-01
It has been empirically established that multi-objective evolutionary algorithms do not scale well with the number of conflicting objectives. This paper shows that the convergence rate of all comparison-based multi-objective algorithms, for the Hausdorff distance, is not much better than the convergence rate of random search under certain conditions: the number of objectives must be very moderate, the objectives must be conflicting, and the computational cost must be well modeled as lower-bounded by the number of comparisons. Our conclusions are: (i) the number of conflicting objectives is relevant; (ii) comparison with random search is a relevant criterion for multi-objective optimization; (iii) optimization with more than 3 objectives is very hard. Furthermore, we provide some insight into crossover operators.
A modified multilevel scheme for internal and external constraints in virtual environments.
Arikatla, Venkata S; De, Suvranu
2013-01-01
Multigrid algorithms are gaining popularity in virtual reality simulations as they have a theoretically optimal performance that scales linearly with the number of degrees of freedom of the simulation system. We propose a multilevel approach that combines the efficiency of the multigrid algorithms with the ability to resolve multi-body constraints during interactive simulations. First, we develop a single level modified block Gauss-Seidel (MBGS) smoother that can incorporate constraints. This is subsequently incorporated in a standard multigrid V-cycle with corrections for constraints to form the modified multigrid V-cycle (MMgV). Numerical results show that the solver can resolve constraints while achieving the theoretical performance of multigrid schemes.
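The baseline that the modified block Gauss-Seidel (MBGS) smoother extends is the classical Gauss-Seidel sweep; a minimal version (our sketch, shown on a small diagonally dominant toy system and without the paper's constraint handling):

```python
# Sketch: one Gauss-Seidel smoothing sweep for A x = b, using each
# updated component immediately within the same sweep.
import numpy as np

def gauss_seidel_sweep(A, b, x):
    n = len(b)
    for i in range(n):
        s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution, row i
        x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
for _ in range(25):
    gauss_seidel_sweep(A, b, x)
print(np.allclose(A @ x, b))   # True on this small well-conditioned system
```

In a multigrid V-cycle this sweep serves only as a smoother, damping high-frequency error before restriction to a coarser grid; the paper's modification additionally resolves multi-body constraints within each sweep.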
Voltage breakdown follower avoids hard thermal constraints in a Geiger mode avalanche photodiode.
Viterbini, M; Nozzoli, S; Poli, M; Adriani, A; Nozzoli, F; Ottaviano, A; Ponzo, S
1996-09-20
A novel approach to single-photon detection by means of an avalanche photodiode is described, and preliminary results obtained with a prototype implementation are reported. The electronic circuit (breakdown voltage follower) avoids the complex temperature controls typically used with these devices, thus reducing system complexity and cost. Data obtained without any thermoregulation show the same behavior as systems thermoregulated to within a few hundredths of a degree Celsius.