Public channel cryptography: chaos synchronization and Hilbert's tenth problem.
Kanter, Ido; Kopelowitz, Evi; Kinzel, Wolfgang
2008-08-22
The synchronization process of two mutually delayed coupled deterministic chaotic maps is demonstrated both analytically and numerically. The synchronization is preserved when the mutually transmitted signals are concealed by two commutative private filters, a convolution of the truncated time-delayed output signals or some powers of the delayed output signals. The task of a passive attacker is mapped onto Hilbert's tenth problem, solving a set of nonlinear Diophantine equations, which was proven to be in the class of NP-complete problems [problems that are both NP (verifiable in nondeterministic polynomial time) and NP-hard (any NP problem can be translated into this problem)]. This bridge between nonlinear dynamics and NP-complete problems opens a horizon for new types of secure public-channel protocols.
Quantum speedup in solving the maximal-clique problem
NASA Astrophysics Data System (ADS)
Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang
2018-03-01
The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to O(√(2^n)) and O(n^2), respectively. With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
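For reference, the classical exhaustive search that the quantum algorithm quadratically improves upon can be sketched as follows (a minimal illustration of the O(2^n) baseline, not the authors' implementation):

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Find a maximum clique by exhaustive search over all subsets (O(2^n) time)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Scan subsets largest-first so the first clique found is maximum.
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            if all(b in adj[a] for a, b in combinations(subset, 2)):
                return list(subset)
    return []
```

On the two-vertex, one-edge instance solved in the NMR experiment, `max_clique([1, 2], [(1, 2)])` returns the whole graph, `[1, 2]`.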
NASA Astrophysics Data System (ADS)
Traversa, Fabio L.; Di Ventra, Massimiliano
2017-02-01
We introduce a class of digital machines, which we name digital memcomputing machines (DMMs), able to solve a wide range of problems, including non-deterministic polynomial (NP) ones, with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory, which leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) or with standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We also provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question related to constraints of poly-resource resolvability.
Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei
2013-10-01
The minimum spanning tree (MST) problem is to find a minimum-weight connected subgraph containing all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, having numerous real-life applications. Moreover, in previous studies, DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems with multi-lateral path solutions, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem in the proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
Exploiting Quantum Resonance to Solve Combinatorial Problems
NASA Technical Reports Server (NTRS)
Zak, Michail; Fijany, Amir
2006-01-01
Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.
Solving SAT Problem Based on Hybrid Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan
The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of it, the SAT problem is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which makes up for their disadvantages, improves the efficiency of the algorithm, and avoids the stagnation phenomenon. The experimental results show that the hybrid algorithm is efficient in solving the SAT problem.
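The reduction above (SAT as minimization of the number of unsatisfied clauses) and the hill-climbing component can be sketched as follows; the differential-evolution stage is omitted, so this is an illustrative fragment under the DIMACS-style signed-literal encoding, not the authors' algorithm:

```python
import random

def unsat_count(clauses, assign):
    """Objective function: number of unsatisfied clauses (0 means satisfied)."""
    return sum(
        not any((lit > 0) == assign[abs(lit)] for lit in clause)
        for clause in clauses
    )

def hill_climb_sat(clauses, n_vars, restarts=20, max_flips=100, seed=0):
    """Best-improvement hill climbing with random restarts on the objective."""
    rng = random.Random(seed)
    for _ in range(restarts):
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            cost = unsat_count(clauses, assign)
            if cost == 0:
                return assign
            # Flip the variable giving the best decrease in the objective.
            best_v, best_c = None, cost
            for v in assign:
                assign[v] = not assign[v]
                c = unsat_count(clauses, assign)
                assign[v] = not assign[v]
                if c < best_c:
                    best_v, best_c = v, c
            if best_v is None:
                break  # local optimum: restart (the DE stage would act here)
            assign[best_v] = not assign[best_v]
    return None
```

In the hybrid scheme described above, the global differential-evolution search would replace the random restarts sketched here.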
The Computational Complexity of the Kakuro Puzzle, Revisited
NASA Astrophysics Data System (ADS)
Ruepp, Oliver; Holzer, Markus
We present a new proof of NP-completeness for the problem of solving instances of the Japanese pencil puzzle Kakuro (also known as Cross Sum). While the NP-completeness of Kakuro puzzles has been shown before [T. Seta. The complexity of CROSS SUM. IPSJ SIG Notes, AL-84:51-58, 2002], there are still two interesting aspects to our proof: we show NP-completeness for a new variant of Kakuro that has not been investigated before, thus improving the aforementioned result. Moreover, some parts of the proof have been generated automatically, using an interesting technique involving SAT solvers.
Parallel computation with molecular-motor-propelled agents in nanofabricated networks.
Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V
2016-03-08
The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
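The benchmark instance {2, 5, 9} can be checked with a brute-force subset-sum routine; this classical sketch only illustrates the problem that the motor-propelled agents explore in parallel:

```python
from itertools import combinations

def subset_sum(values, target):
    """Exhaustively check all 2^n subsets; return a solving subset or None."""
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            if sum(subset) == target:
                return subset
    return None
```

For {2, 5, 9}, the achievable sums are exactly 0, 2, 5, 7, 9, 11, 14 and 16; e.g. `subset_sum((2, 5, 9), 14)` yields `(5, 9)`, while a target of 13 yields `None`.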
Ultrafast adiabatic quantum algorithm for the NP-complete exact cover problem
Wang, Hefeng; Wu, Lian-Ao
2016-01-01
An adiabatic quantum algorithm may lose quantumness such as quantum coherence entirely in its long runtime, and consequently the expected quantum speedup of the algorithm does not show up. Here we present a general ultrafast adiabatic quantum algorithm. We show that by applying a sequence of fast random or regular signals during evolution, the runtime can be reduced substantially, whereas advantages of the adiabatic algorithm remain intact. We also propose a randomized Trotter formula and show that the driving Hamiltonian and the proposed sequence of fast signals can be implemented simultaneously. We illustrate the algorithm by solving the NP-complete 3-bit exact cover problem (EC3), where NP stands for nondeterministic polynomial time, and put forward an approach to implementing the problem with trapped ions. PMID:26923834
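Taking EC3 in its usual form from the adiabatic-algorithm literature (each clause lists three bit positions, of which exactly one must equal 1), a classical brute-force check can be sketched as follows; this is only a baseline illustrating the problem, not the adiabatic algorithm:

```python
from itertools import product

def solve_ec3(n_bits, clauses):
    """Brute-force EC3: find a bit string in which every clause (i, j, k)
    has exactly one of its three bits set to 1. Exhaustive scan is O(2^n)."""
    for bits in product((0, 1), repeat=n_bits):
        if all(bits[i] + bits[j] + bits[k] == 1 for i, j, k in clauses):
            return bits
    return None
```

The adiabatic approach instead encodes the clause-violation count as a problem Hamiltonian whose ground state is the satisfying assignment.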
A restricted Steiner tree problem is solved by Geometric Method II
NASA Astrophysics Data System (ADS)
Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu
2013-03-01
The minimum Steiner tree problem has a wide application background, such as transportation systems, communication networks, pipeline design and VLSI, etc. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to consider restricted special cases. In this paper, we first put forward a restricted Steiner tree problem, in which the fixed vertices lie on the same side of a line L and we seek a vertex on L such that the length of the tree is minimal. By the definition and the complexity of the Steiner tree problem, we know that the complexity of this problem is also NP-complete. In Part I, we considered the restricted Steiner tree problem with two fixed vertices. Naturally, we now consider the restricted Steiner tree problem with three fixed vertices, and we again use the geometric method to solve this problem.
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, there is the need to prove a hypothesis about a certain property of an object that can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure); however, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to “efficiently” solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), although parametric. The only requirement is sufficient computational power, which is controlled by a parameter of the algorithm. Nevertheless, it is proved here that the probability of requiring a large value of this parameter to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711
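For comparison, a standard backtracking 3-coloring search (exponential in the worst case, in contrast to the parametric polynomial-time method described above) can be sketched as:

```python
def three_color(n, edges):
    """Backtracking search for a proper 3-coloring of an n-vertex graph;
    returns a color list or None. A generic baseline, not the paper's method."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colors = [None] * n

    def assign(v):
        if v == n:
            return True
        for c in range(3):
            # A color is usable only if no already-colored neighbor has it.
            if all(colors[u] != c for u in adj[v]):
                colors[v] = c
                if assign(v + 1):
                    return True
        colors[v] = None
        return False

    return colors if assign(0) else None
```

Exhausting this search does certify non-colorability, but only at exponential cost; the point of the paper is to obtain such certificates efficiently.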
NASA Astrophysics Data System (ADS)
Kobak, B. V.; Zhukovskiy, A. G.; Kuzin, A. P.
2018-05-01
This paper considers one of the classical NP-complete problems, an inhomogeneous minimax problem. When solving such a problem at large scale, it is difficult to obtain an exact solution, so we propose obtaining a near-optimal solution in an acceptable time. Among the wide range of genetic algorithm models, we choose the modified Goldberg model, which the authors have previously used successfully in solving NP-complete problems. The classical Goldberg model uses a single-point crossover and a single-point mutation, which somewhat decreases the accuracy of the obtained results. In this article we propose using a full two-point crossover with the various mutations researched previously. In addition, the work studied the crossover probability needed to obtain more accurate results. Results of the computational experiment showed that the higher the probability of a crossover, the higher the quality of both the average results and the best solutions, and that the larger the number of individuals and the number of repetitions, the closer both the average results and the best solutions are to the optimum. The paper shows how the use of a full two-point crossover increases the accuracy of solving an inhomogeneous minimax problem; the time for obtaining the solution increases but remains polynomial.
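The two-point crossover discussed above can be sketched generically as follows (the exact operator details in the paper may differ):

```python
import random

def two_point_crossover(parent_a, parent_b, rng):
    """Full two-point crossover: choose two cut points and swap the
    middle segment between the parents, producing two children."""
    n = len(parent_a)
    i, j = sorted(rng.sample(range(n + 1), 2))
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b
```

Both children have the parents' length, and together they conserve the parents' genes, which is why raising the crossover probability explores more recombinations without losing material.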
Identification and Addressing Reduction-Related Misconceptions
ERIC Educational Resources Information Center
Gal-Ezer, Judith; Trakhtenbrot, Mark
2016-01-01
Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract…
Scheduling in the Face of Uncertain Resource Consumption and Utility
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Frank, Jeremy; Dearden, Richard
2003-01-01
We discuss the problem of scheduling tasks that consume a resource with known capacity, where the tasks have varying utility. We consider problems in which the resource consumption and utility of each activity are described by probability distributions. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We first show that while some of these problems are NP-complete, others are only NP-hard. We then describe various heuristic search algorithms to solve these problems, and their drawbacks. Finally, we present empirical results that characterize the behavior of these heuristics over a variety of problem classes.
Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems
Fonseca Guerra, Gabriel A.; Furber, Steve B.
2017-01-01
Constraint satisfaction problems (CSPs) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart. PMID:29311791
AI techniques for a space application scheduling problem
NASA Technical Reports Server (NTRS)
Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.
1991-01-01
Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors that make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented, as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment so as to produce minimum impact on the other instruments and maximize target observation times. The SOLSTICE instrument will fly on board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (EOS).
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
Solving search problems by strongly simulating quantum circuits
Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.
2013-01-01
Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585
On the Complexity of Delaying an Adversary’s Project
2005-01-01
We present interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete, or NP-hard.
Solving the Swath Segment Selection Problem
NASA Technical Reports Server (NTRS)
Knight, Russell; Smith, Benjamin
2006-01-01
Several artificial-intelligence search techniques have been tested as means of solving the swath segment selection problem (SSSP) -- a real-world problem that is not only of interest in its own right, but is also useful as a test bed for search techniques in general. In simplest terms, the SSSP is the problem of scheduling the observation times of an airborne or spaceborne synthetic-aperture radar (SAR) system to effect the maximum coverage of a specified area (denoted the target), given a schedule of downlinks (opportunities for radio transmission of SAR scan data to a ground station), given the limit on the quantity of SAR scan data that can be stored in an onboard memory between downlink opportunities, and given the limit on the achievable downlink data rate. The SSSP is NP-complete (short for "nondeterministic polynomial time complete" -- characteristic of a class of intractable problems that can be solved only by use of computers capable of making guesses and then checking the guesses in polynomial time).
Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei
2017-12-01
As a promising approach to computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum execution time of the last-finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm to solve the task scheduling problem by basic DNA molecular operations. We design flexible-length DNA strands to represent elements of the allocation matrix, take appropriate biological experiment operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n^2) time complexity. Copyright © 2017. Published by Elsevier B.V.
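The optimization target, minimizing the completion time of the last-finishing individual (the makespan), can be illustrated with a classical brute-force sketch; this is not the DNA algorithm itself, only the problem it solves:

```python
from itertools import product

def min_makespan(jobs, m):
    """Exhaustively try all m^n ways of assigning n jobs to m individuals
    and return the minimum possible makespan (load of the busiest one)."""
    best = float("inf")
    for assignment in product(range(m), repeat=len(jobs)):
        loads = [0] * m
        for job, individual in zip(jobs, assignment):
            loads[individual] += job
        best = min(best, max(loads))
    return best
```

Even this tiny sketch shows why the problem is hard classically: the search space grows as m^n, which is what the parallel molecular encoding sidesteps.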
An optical solution for the traveling salesman problem.
Haist, Tobias; Osten, Wolfgang
2007-08-06
We introduce an optical method based on white light interferometry in order to solve the well-known NP-complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non-polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of photons available for solving the problem. It will turn out that this number of photons is proportional to N^N for a traveling salesman problem with N cities, and that for large numbers of cities the method in practice is therefore limited by the signal-to-noise ratio. The proposed method is meant purely as a gedankenexperiment.
Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian
2015-10-23
The unbalanced assignment problem (UAP) is to optimally assign n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important non-deterministic polynomial (NP) complete problem in operations management and applied mathematics, having numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation.
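As an illustration of the problem being solved, here is a brute-force sketch under one common UAP formulation, in which each of the m individuals must receive at least one job; the constraint details assumed here may differ from the paper's:

```python
from itertools import product

def min_cost_uap(cost):
    """cost[j][i] is the cost of giving job j to individual i (n jobs,
    m individuals, m < n allowed). Exhaustive O(m^n) scan over assignments
    in which every individual receives at least one job."""
    n, m = len(cost), len(cost[0])
    best = float("inf")
    for a in product(range(m), repeat=n):
        if len(set(a)) == m:  # every individual is used at least once
            best = min(best, sum(cost[j][a[j]] for j in range(n)))
    return best
```

The coverage constraint is what makes the instance nontrivial: without it, each job would simply go to its cheapest individual.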
NASA Astrophysics Data System (ADS)
Shakeri, Nadim; Jalili, Saeed; Ahmadi, Vahid; Rasoulzadeh Zali, Aref; Goliaei, Sama
2015-01-01
The problem of finding a Hamiltonian path in a graph, or deciding whether a graph has a Hamiltonian path, is NP-complete; no method is known that solves it using a polynomial amount of time and space. In this paper, we propose a two-dimensional (2-D) optical architecture based on optoelectronic devices such as micro-ring resonators, optical circulators and a MEMS-based mirror (MEMS-M) to solve the Hamiltonian path problem for undirected graphs in linear time. It uses a heuristic algorithm and employs n+1 different wavelengths of a light ray to check whether a Hamiltonian path exists on a graph with n vertices; if a Hamiltonian path exists, it reports the path. The device complexity of the proposed architecture is O(n^2).
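For contrast with the linear-time optical proposal, the standard classical backtracking search for a Hamiltonian path runs in exponential time in the worst case; a minimal sketch:

```python
def hamiltonian_path(n, edges):
    """Backtracking search for a Hamiltonian path in an undirected graph
    on vertices 0..n-1; returns the path as a list, or None."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def extend(path, seen):
        if len(path) == n:
            return path
        for v in adj[path[-1]]:
            if v not in seen:
                result = extend(path + [v], seen | {v})
                if result:
                    return result
        return None

    # Try every vertex as a starting point.
    for start in range(n):
        found = extend([start], {start})
        if found:
            return found
    return None
```

The optical architecture trades this exponential time for a physical resource cost, encoding candidate paths in n+1 wavelengths.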
Efficient solution for finding Hamilton cycles in undirected graphs.
Alhalabi, Wadee; Kitanneh, Omar; Alharbi, Amira; Balfakih, Zain; Sarirete, Akila
2016-01-01
The Hamilton cycle problem is closely related to a series of famous problems and puzzles (the traveling salesman problem, the Icosian game) and, because it is NP-complete, it has been extensively studied with different algorithms; the most efficient algorithm is not known. In this paper, a necessary condition for an arbitrary undirected graph to have a Hamilton cycle is proposed. Based on this condition, a mathematical solution for this problem is developed and several proofs and an algorithmic approach are introduced. The algorithm is successfully implemented on many Hamiltonian and non-Hamiltonian graphs. This provides a new effective approach to a problem that is fundamental in graph theory and can influence the manner in which existing applications are used and improved.
Replicating the benefits of Deutschian closed timelike curves without breaking causality
NASA Astrophysics Data System (ADS)
Yuan, Xiao; Assad, Syed M.; Thompson, Jayne; Haw, Jing Yan; Vedral, Vlatko; Ralph, Timothy C.; Lam, Ping Koy; Weedbrook, Christian; Gu, Mile
2015-11-01
In general relativity, closed timelike curves can break causality with remarkable and unsettling consequences. At the classical level, they induce causal paradoxes disturbing enough to motivate conjectures that explicitly prevent their existence. At the quantum level such problems can be resolved through the Deutschian formalism, however this induces radical benefits—from cloning unknown quantum states to solving problems intractable to quantum computers. Instinctively, one expects these benefits to vanish if causality is respected. Here we show that in harnessing entanglement, we can efficiently solve NP-complete problems and clone arbitrary quantum states—even when all time-travelling systems are completely isolated from the past. Thus, the many defining benefits of Deutschian closed timelike curves can still be harnessed, even when causality is preserved. Our results unveil a subtle interplay between entanglement and general relativity, and significantly improve the potential of probing the radical effects that may exist at the interface between relativity and quantum theory.
Solving a Hamiltonian Path Problem with a bacterial computer
Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T
2009-01-01
Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP-complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP-complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three-node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three-node directed graph.
This proof-of-concept experiment demonstrates that bacterial computing is a new way to address NP-complete problems using the inherent advantages of genetic systems. The results of our experiments also validate synthetic biology as a valuable approach to biological engineering. We designed and constructed basic parts, devices, and systems using synthetic biology principles of standardization and abstraction. PMID:19630940
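The shuffle-and-check search that the bacterial population performs in parallel can be sketched digitally in a few lines. This is only an illustrative brute-force analogue, not the paper's genetic construct; the three-node graph below is an assumed example.

```python
from itertools import permutations

def hamiltonian_paths(nodes, edges):
    """Return every ordering of `nodes` that traverses only directed edges,
    i.e. every Hamiltonian path (exhaustive, so only viable for tiny graphs)."""
    found = []
    for order in permutations(nodes):
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            found.append(order)
    return found

# Hypothetical three-node directed graph.
nodes = [1, 2, 3]
edges = {(1, 2), (2, 3), (1, 3)}
print(hamiltonian_paths(nodes, edges))  # [(1, 2, 3)]
```

The bacteria explore the same permutation space, but each clone tests one shuffled ordering and reports success by phenotype rather than by return value.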
Intrinsic optimization using stochastic nanomagnets
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-01-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets. PMID:28295053
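A software stand-in for the stochastic ±1 units can make the annealing picture concrete. The sketch below anneals a generalized Ising energy E(s) = -Σᵢ hᵢsᵢ - Σᵢ<ⱼ Jᵢⱼsᵢsⱼ with a Metropolis rule; the cooling schedule, coupling matrix, and all parameters are illustrative assumptions, and J is taken to be symmetric.

```python
import math
import random

def anneal_ising(J, h, steps=20000, T0=2.0, seed=1):
    """Anneal randomly flipping +/-1 units toward low-energy states of a
    generalized Ising model (J must be symmetric). Returns the best state seen."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(state):
        e = -sum(h[i] * state[i] for i in range(n))
        e -= sum(J[i][j] * state[i] * state[j]
                 for i in range(n) for j in range(i + 1, n))
        return e

    cur_e = energy(s)
    best, best_e = s[:], cur_e
    for t in range(steps):
        T = max(T0 * (1 - t / steps), 1e-3)  # linear cooling, floored
        i = rng.randrange(n)
        field = h[i] + sum(J[i][j] * s[j] for j in range(n) if j != i)
        dE = 2 * s[i] * field                # energy cost of flipping s_i
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i] = -s[i]
            cur_e += dE
            if cur_e < best_e:
                best, best_e = s[:], cur_e
    return best, best_e
```

The hardware described above performs this relaxation intrinsically at GHz rates; the code only mimics the statistics.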
Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem
NASA Astrophysics Data System (ADS)
Skakov, E. S.; Malysh, V. N.
2018-03-01
The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem, and an evolutionary metaheuristic is chosen to perform this meta-optimization. Thus, the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment demonstrating the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows
Wang, Di; Kleinberg, Robert D.
2009-01-01
Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
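For small n, the exact minimum that the bounds C2, C3, C4, … approximate from below can be computed by exhaustive enumeration. A minimal sketch (the example matrix is made up):

```python
from itertools import product

def qubo_min(Q):
    """Exhaustively minimize sum_ij Q[i][j] * x_i * x_j over x in {0,1}^n.
    Exponential in n; the flow-based bounds above are the polynomial-time tool."""
    n = len(Q)
    best_x, best_v = None, float("inf")
    for x in product((0, 1), repeat=n):
        v = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

# f(x) = -x0 - x1 + 2*x0*x1: minimized by setting exactly one variable to 1.
print(qubo_min([[-1, 2], [0, -1]]))  # ((0, 1), -1)
```

The persistencies described in the abstract are exactly statements about which coordinates agree across all minimizers of this enumeration.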
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continually evolve to capture real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable solution methods are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem. The modification is done using a probability transition matrix mechanism, while a Pareto-optimal approach (MPSO) handles the multiple objectives. The results of MPSO are better than those of plain PSO because the MPSO solution set has a higher probability of containing the optimal solution and lies closer to it.
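The makespan objective evaluated for each particle follows the standard flow-shop completion-time recursion. A small sketch with assumed processing times (the limited-wait and hybrid aspects of the actual problem are omitted):

```python
def makespan(p, order):
    """Permutation flow shop makespan. p[m][j] is the time of job j on machine m.
    Recursion: C[j][m] = max(C[j-1][m], C[j][m-1]) + p[m][job], i.e. a job starts
    on machine m once m is free and the job has left machine m-1."""
    n_mach = len(p)
    C = [[0] * n_mach for _ in order]
    for j, job in enumerate(order):
        for m in range(n_mach):
            done_prev_job = C[j - 1][m] if j > 0 else 0
            done_prev_mach = C[j][m - 1] if m > 0 else 0
            C[j][m] = max(done_prev_job, done_prev_mach) + p[m][job]
    return C[-1][-1]

# Two machines, two jobs (illustrative times): the job order matters.
p = [[3, 2],   # machine 0: job 0 takes 3, job 1 takes 2
     [2, 1]]   # machine 1: job 0 takes 2, job 1 takes 1
print(makespan(p, [0, 1]), makespan(p, [1, 0]))  # 6 7
```

A discrete PSO searches over the `order` permutations, scoring each with recursions like this one for every objective.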
NASA Astrophysics Data System (ADS)
Zhang, Xingong; Yin, Yunqiang; Wu, Chin-Chia
2017-01-01
There is a situation found in many manufacturing systems, such as steel rolling mills, fire fighting or single-server cycle-queues, where a job that is processed later consumes more time than that same job when processed earlier. Machine maintenance can counteract this worsening of processing conditions: after a maintenance activity, the machine is restored. The maintenance duration is a positive and non-decreasing differentiable convex function of the total processing times of the jobs between maintenance activities. Motivated by this observation, the makespan and the total completion time minimization problems in the scheduling of jobs with non-decreasing rates of job processing time on a single machine are considered in this article. It is shown that both the makespan and the total completion time minimization problems are NP-hard in the strong sense when the number of maintenance activities is arbitrary, while the makespan minimization problem is NP-hard in the ordinary sense when the number of maintenance activities is fixed. If the deterioration rates of the jobs are identical and the maintenance duration is a linear function of the total processing times of the jobs between maintenance activities, then this article shows that the group balance principle is satisfied for the makespan minimization problem. Furthermore, two polynomial-time algorithms are presented for solving the makespan problem and the total completion time problem under identical deterioration rates, respectively.
Discovering Motifs in Biological Sequences Using the Micron Automata Processor.
Roy, Indranil; Aluru, Srinivas
2016-01-01
Finding approximately conserved sequences, called motifs, across multiple DNA or protein sequences is an important problem in computational biology. In this paper, we consider the (l, d) motif search problem of identifying one or more motifs of length l present in at least q of the n given sequences, with each occurrence differing from the motif in at most d substitutions. The problem is known to be NP-complete, and the largest solved instance reported to date is (26, 11). We propose a novel algorithm for the (l, d) motif search problem using streaming execution over a large set of non-deterministic finite automata (NFA). This solution is designed to take advantage of the micron automata processor, a new technology close to deployment that can simultaneously execute multiple NFA in parallel. We demonstrate the capability for solving much larger instances of the (l, d) motif search problem using the resources available within a single automata processor board, by estimating run-times for problem instances (39, 18) and (40, 17). The paper serves as a useful guide to solving problems using this new accelerator technology.
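Verifying a single candidate motif is cheap; the hardness comes from the 4^l candidate space that the automata sweep in parallel. A hedged sketch of the verification step (the sequences below are made up):

```python
def occurs_within_d(motif, seq, d):
    """True if some length-l window of `seq` differs from `motif`
    in at most d positions (Hamming mismatches)."""
    l = len(motif)
    return any(
        sum(a != b for a, b in zip(motif, seq[i:i + l])) <= d
        for i in range(len(seq) - l + 1)
    )

def is_ld_motif(motif, seqs, q, d):
    """(l, d) motif test: motif occurs, with <= d mismatches, in >= q sequences."""
    return sum(occurs_within_d(motif, s, d) for s in seqs) >= q

print(is_ld_motif("ACG", ["TTACGTT", "ACGTTTT", "TTTTTTT"], q=2, d=0))  # True
```

Each NFA on the automata processor effectively runs `occurs_within_d` for one candidate against the streamed input.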
Grover Search and the No-Signaling Principle
NASA Astrophysics Data System (ADS)
Bao, Ning; Bouland, Adam; Jordan, Stephen P.
2016-09-01
Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.
Robust optimization with transiently chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Sumi, R.; Molnár, B.; Ercsey-Ravasz, M.
2014-05-01
Efficiently solving hard optimization problems has been a strong motivation for progress in analog computing. In a recent study we presented a continuous-time dynamical system for solving the NP-complete Boolean satisfiability (SAT) problem, with a one-to-one correspondence between its stable attractors and the SAT solutions. While physical implementations could offer great efficiency, the transiently chaotic dynamics raises the question of operability in the presence of noise, unavoidable on analog devices. Here we show that the probability of finding solutions is robust to noise intensities well above those present on real hardware. We also developed a cellular neural network model realizable with analog circuits, which tolerates even larger noise intensities. These methods represent an opportunity for robust and efficient physical implementations.
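The attractor picture rests on a landscape whose value vanishes exactly on SAT solutions. A minimal digital analogue of that "violation energy", with a brute-force search standing in for the continuous-time dynamics (clause encoding follows the common DIMACS convention; the example formula is assumed):

```python
from itertools import product

def violations(clauses, assign):
    """Number of unsatisfied clauses; zero exactly at SAT solutions.
    A clause is a list of nonzero ints: v means x_v, -v means NOT x_v."""
    def sat(clause):
        return any(assign[abs(v)] == (v > 0) for v in clause)
    return sum(not sat(c) for c in clauses)

def brute_sat(clauses, n):
    """Exhaustive search over assignments; the analog system instead flows
    through this landscape toward its zero-violation attractors."""
    for bits in product((False, True), repeat=n):
        assign = dict(enumerate(bits, start=1))
        if violations(clauses, assign) == 0:
            return assign
    return None

print(brute_sat([[1, 2], [-1, 2]], 2))  # {1: False, 2: True}
```

Noise robustness in the paper's sense means the dynamics still reaches a zero of this count despite perturbations along the way.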
Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko
2013-06-18
Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
Identification and addressing reduction-related misconceptions
NASA Astrophysics Data System (ADS)
Gal-Ezer, Judith; Trakhtenbrot, Mark
2016-07-01
Reduction is one of the key techniques used for problem-solving in computer science. In particular, in the theory of computation and complexity (TCC), mapping and polynomial reductions are used for analysis of decidability and computational complexity of problems, including the core concept of NP-completeness. Reduction is a highly abstract technique that involves revealing close non-trivial connections between problems that often seem to have nothing in common. As a result, proper understanding and application of reduction is a serious challenge for students and a source of numerous misconceptions. The main contribution of this paper is detection of such misconceptions, analysis of their roots, and proposing a way to address them in an undergraduate TCC course. Our observations suggest that the main source of the misconceptions is the false intuitive rule "the bigger is a set/problem, the harder it is to solve". Accordingly, we developed a series of exercises for proactive prevention of these misconceptions.
Solving Set Cover with Pairs Problem using Quantum Annealing
NASA Astrophysics Data System (ADS)
Cao, Yudong; Jiang, Shuxian; Perouli, Debbie; Kais, Sabre
2016-09-01
Here we consider using quantum annealing to solve Set Cover with Pairs (SCP), an NP-hard combinatorial optimization problem that plays an important role in networking, computational biology, and biochemistry. We show an explicit construction of Ising Hamiltonians whose ground states encode the solution of SCP instances. We numerically simulate the time-dependent Schrödinger equation in order to test the performance of quantum annealing for random instances and compare with that of simulated annealing. We also discuss explicit embedding strategies for realizing our Hamiltonian construction on the D-Wave-type restricted Ising Hamiltonian based on Chimera graphs. Our embedding on the Chimera graph preserves the structure of the original SCP instance, and in particular, the embeddings for general complete bipartite graphs and logical disjunctions may be of broader use than the specific problem we deal with.
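In Set Cover with Pairs, an element is covered only when *both* members of one of its covering pairs are selected, which is the pairwise structure the Ising encoding exploits. A brute-force reference solver under that standard formulation (the instance data are made up):

```python
from itertools import combinations

def scp_min_cover(objects, cover_pairs, universe):
    """Smallest S subset of objects such that every element e has some pair
    {a, b} in cover_pairs[e] with both a and b in S. Exponential-time reference."""
    for k in range(1, len(objects) + 1):
        for S in combinations(objects, k):
            chosen = set(S)
            if all(any(pair <= chosen for pair in cover_pairs[e])
                   for e in universe):
                return chosen
    return None

objects = ["a", "b", "c"]
cover_pairs = {1: {frozenset({"a", "b"})},
               2: {frozenset({"b", "c"}), frozenset({"a", "b"})}}
print(scp_min_cover(objects, cover_pairs, [1, 2]))  # {'a', 'b'}
```

The annealer's ground state encodes the same minimal selection as spin configurations on the Chimera graph.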
Yang, S; Wang, D
2000-01-01
This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its weights of connections and biases of units based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed.
SAT Encoding of Unification in EL
NASA Astrophysics Data System (ADS)
Baader, Franz; Morawska, Barbara
Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. In a recent paper, we have shown that unification in EL is NP-complete, and thus of a complexity that is considerably lower than in other Description Logics of comparably restricted expressive power. In this paper, we introduce a new NP-algorithm for solving unification problems in EL, which is based on a reduction to satisfiability in propositional logic (SAT). The advantage of this new algorithm is, on the one hand, that it allows us to employ highly optimized state-of-the-art SAT solvers when implementing an EL-unification algorithm. On the other hand, this reduction provides us with a proof of the fact that EL-unification is in NP that is much simpler than the one given in our previous paper on EL-unification.
Probabilistic Analysis of Algorithms for NP-Complete Problems
1989-09-29
NASA Astrophysics Data System (ADS)
Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.
2011-08-01
This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.
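A toy skeleton of the FAP is graph coloring: transceivers that interfere must receive different frequencies, and the objective is to use as few frequencies as possible. The hyper-heuristic above searches far beyond this, but a greedy baseline shows the structure (the interference graph is assumed):

```python
def greedy_frequency_assignment(adj):
    """Greedy coloring of an interference graph: adj[v] is the set of
    transceivers that interfere with v; each vertex gets the smallest
    frequency index unused by its already-assigned neighbours."""
    freq = {}
    for v in sorted(adj):
        used = {freq[u] for u in adj[v] if u in freq}
        f = 0
        while f in used:
            f += 1
        freq[v] = f
    return freq

# Three mutually interfering transceivers need three distinct frequencies.
print(greedy_frequency_assignment({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}))
```

Real FAP instances add separation constraints and thousands of transceivers, which is why metaheuristic search replaces this greedy pass.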
Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs
NASA Astrophysics Data System (ADS)
Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes
We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
An Efficient Rank Based Approach for Closest String and Closest Substring
2012-01-01
This paper aims to present a new genetic approach that uses rank distance for solving two known NP-hard problems, and to compare rank distance with other distance measures for strings. The two NP-hard problems we are trying to solve are closest string and closest substring. For each problem we build a genetic algorithm and we describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance, on real DNA sequences. Our experiments show that the genetic algorithms based on rank distance have the best results. PMID:22675483
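The closest string objective is easy to state even though optimizing it is NP-hard: find a string minimizing the maximum distance to all inputs. The sketch below uses classical Hamming distance, the baseline the paper compares against (the paper's own fitness uses rank distance); the DNA strings are made up.

```python
from itertools import product

def closest_string_hamming(strings, alphabet="ACGT"):
    """Exhaustive closest string under Hamming distance: return a string
    minimizing the maximum distance (radius) to all inputs. Exponential in
    length, hence the genetic algorithms used in practice."""
    l = len(strings[0])

    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))

    best, best_r = None, l + 1
    for cand in product(alphabet, repeat=l):
        c = "".join(cand)
        r = max(dist(c, s) for s in strings)
        if r < best_r:
            best, best_r = c, r
    return best, best_r

print(closest_string_hamming(["AA", "AT", "AC"]))  # ('AA', 1)
```

A genetic algorithm replaces the exhaustive loop with crossover and mutation over candidate strings, scoring each by the same radius under its chosen distance.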
Problem solving during artificial selection of self-replicating loops
NASA Astrophysics Data System (ADS)
Chou, Hui-Hsien; Reggia, James A.
1998-05-01
Past cellular automata models of self-replication have generally done only one thing: replicate themselves. However, it has recently been demonstrated that such self-replicating structures can be programmed to also carry out a task during the replication process. Past models of this sort have been limited in that the “program” involved is copied unchanged from parent to child, so that each generation of replicants is executing exactly the same program on exactly the same data. Here we take a different approach in which each replicant receives a distinct partial solution that is modified during replication. Under artificial selection, replicants with promising solutions proliferate while those with failed solutions are lost. We show that this approach can be applied successfully to solve an NP-complete problem, the satisfiability problem. Bounds are given on the cellular space size and time needed to solve a given problem, and simulations demonstrate that this approach works effectively. These and other recent results raise the possibility of evolving self-replicating structures that have a simulated metabolism or that carry out useful tasks.
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems, so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed Hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
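For orientation, an exact ATSP solver in its most naive factorial-time form; the thesis algorithms instead use branch-and-bound over assignment-problem relaxations to reach thousands of cities. The cost matrix is illustrative, and note the asymmetry: c[i][j] need not equal c[j][i].

```python
from itertools import permutations

def atsp_exact(c):
    """Exact asymmetric TSP by enumerating all tours starting at city 0.
    Returns (cost, tour). O(n!) time, so only for very small n."""
    n = len(c)
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(c[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best

# Directed 3-cycle: going "with" the cycle costs 1 per hop, against it costs 5.
c = [[0, 1, 5],
     [5, 0, 1],
     [1, 5, 0]]
print(atsp_exact(c))  # (3, (0, 1, 2, 0))
```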
The Convergence of Intelligences
NASA Astrophysics Data System (ADS)
Diederich, Joachim
Minsky (1985) argued that an extraterrestrial intelligence may be similar to ours despite very different origins. ``Problem-solving'' offers evolutionary advantages, and individuals who are part of a technical civilisation should have this capacity. On earth, the principles of problem-solving are the same for humans, some primates and machines based on Artificial Intelligence (AI) techniques. Intelligent systems use ``goals'' and ``sub-goals'' for problem-solving, with memories and representations of ``objects'' and ``sub-objects'' as well as knowledge of relations such as ``cause'' or ``difference.'' Some of these objects are generic and cannot easily be divided into parts. We must, therefore, assume that these objects and relations are universal, and a general property of intelligence. Minsky's arguments from 1985 are extended here. The last decade has seen the development of a general learning theory (``computational learning theory'' (CLT) or ``statistical learning theory'') which equally applies to humans, animals and machines. It is argued that basic learning laws will also apply to an evolved alien intelligence, and this includes limitations of what can be learned efficiently. An example from CLT is that the general learning problem for neural networks is intractable, i.e. it cannot be solved efficiently for all instances (it is ``NP-complete''). It is the objective of this paper to show that evolved intelligences will be constrained by general learning laws and will use task-decomposition for problem-solving. Since learning and problem-solving are core features of intelligence, it can be said that intelligences converge despite very different origins.
Dominant takeover regimes for genetic algorithms
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed forms, the result relaxes the previously held requirements for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
Amir, Ofra; Amir, Dor; Shahar, Yuval; Hart, Yuval; Gal, Kobi
2018-01-01
Demonstrability, the extent to which group members can recognize a correct solution to a problem, has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors, the difficulty of solving a problem and the difficulty of verifying the correctness of a solution, on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-Complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.
A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles
Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique
2015-01-01
The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n² columns, n² rows, and n² subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus own. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods. PMID:26078751
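The constraints the hybrid exploits are just three families of alldifferent requirements, over rows, columns, and subgrids. A checker for those families on an n² × n² grid (the 4×4 example, n = 2, is made up; 0 marks an empty cell):

```python
def sudoku_ok(grid, n):
    """Check the three alldifferent constraint families of an n^2 x n^2 Sudoku:
    rows, columns, and the n x n subgrids. Empty cells (0) are ignored."""
    N = n * n

    def alldiff(cells):
        vals = [v for v in cells if v]
        return len(vals) == len(set(vals))

    rows = grid
    cols = [[grid[r][c] for r in range(N)] for c in range(N)]
    boxes = [[grid[br * n + i][bc * n + j] for i in range(n) for j in range(n)]
             for br in range(n) for bc in range(n)]
    return all(alldiff(unit) for unit in rows + cols + boxes)
```

Constraint-programming solvers go further than this check: the alldifferent propagator prunes candidate values from cell domains, which is the filtering the tabu search benefits from.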
Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda
2017-01-01
Cloud computing infrastructure is suitable for meeting the computational needs of large sets of tasks. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods were developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min, and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
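Two of the six heuristics can be sketched compactly. The following illustrative Python (hypothetical data, not from the paper) schedules tasks from an expected-time-to-compute (ETC) matrix with MCT and Min-min and reports the makespan:

```python
def mct(etc):
    """Minimum Completion Time: assign each task (in arrival order) to the
    machine that completes it earliest, given current machine loads."""
    loads = [0.0] * len(etc[0])
    for task in etc:
        m = min(range(len(loads)), key=lambda j: loads[j] + task[j])
        loads[m] += task[m]
    return max(loads)                 # makespan

def min_min(etc):
    """Min-min: repeatedly pick the unscheduled task with the smallest
    minimum completion time and assign it to that machine."""
    loads = [0.0] * len(etc[0])
    remaining = list(range(len(etc)))
    while remaining:
        best = None                   # (completion_time, task, machine)
        for t in remaining:
            for j in range(len(loads)):
                ct = loads[j] + etc[t][j]
                if best is None or ct < best[0]:
                    best = (ct, t, j)
        _, t, j = best
        loads[j] += etc[t][j]
        remaining.remove(t)
    return max(loads)
```

Here `etc[t][j]` is the expected time of task `t` on machine `j`; Min-min tends to pack short tasks first, often yielding a smaller makespan than MCT on the same instance.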
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to achieve better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) is applied as the local search in the proposed memetic algorithm. The proposed method is compared to an existing memetic-based approach in which Learning Automata is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
Aono, Masashi; Gunji, Yukio-Pegio
2003-10-01
Emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing intended to elicit the emergent properties of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing provides the NP-complete problem to be solved by the slime mold computer. We discuss the possibility of solving the problem without enumerating all possible results and without an explicit prescription for solution-seeking. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computations. A computing system based on the exhaustive absence of a super-system may produce something more than filling the vacancy.
Statistical physics of hard combinatorial optimization: Vertex cover problem
NASA Astrophysics Data System (ADS)
Zhao, Jin-Hua; Zhou, Hai-Jun
2014-07-01
Typical-case computational complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computational complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physics methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. A reader unfamiliar with the field should nevertheless be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to other optimization problems.
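As a classical point of comparison for the statistical-physics treatment, vertex cover can be attacked exactly by exhaustive search on small instances, or approximately by the textbook maximal-matching 2-approximation. A minimal sketch (not from the paper):

```python
from itertools import combinations

def is_cover(edges, subset):
    """True if every edge has at least one endpoint in `subset`."""
    s = set(subset)
    return all(u in s or v in s for u, v in edges)

def min_vertex_cover(nodes, edges):
    """Exact minimum vertex cover by exhaustive search; only feasible for
    small instances, reflecting the problem's NP-completeness."""
    for k in range(len(nodes) + 1):
        for subset in combinations(nodes, k):
            if is_cover(edges, subset):
                return set(subset)

def matching_2_approx(edges):
    """Greedy maximal matching: taking both endpoints of each matched edge
    gives a cover at most twice the optimum size."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover
```

Message-passing methods aim to reach near-optimal covers on large sparse graphs where the exhaustive search above is hopeless.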
NASA Astrophysics Data System (ADS)
Iswari, T.; Asih, A. M. S.
2018-04-01
In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also produce the greatest cost. It is therefore important to keep transportation costs as low as possible. Reducing transportation cost can be done in several ways; one of them is optimizing the routing of vehicles, which leads to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, each vehicle has its own capacity, and the total demand served by a vehicle must not exceed its capacity. CVRP belongs to the class of NP-hard problems, which makes exact algorithms highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches to solve CVRP: a Genetic Algorithm and Particle Swarm Optimization. We compare the results of both algorithms and examine the performance of each. The results show that both algorithms perform well in solving CVRP but still leave room for improvement. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
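Before metaheuristics are applied, CVRP solutions are often seeded with a constructive heuristic. The sketch below is illustrative only (not the paper's GA or PSO; all data are hypothetical): it builds capacity-feasible routes with a nearest-neighbor rule.

```python
import math

def nearest_neighbor_cvrp(depot, customers, demands, capacity):
    """Greedy CVRP construction: from the depot, repeatedly visit the
    nearest unserved customer that still fits in the vehicle; start a new
    route when none fits. Customers are (x, y) tuples."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    unserved = set(customers)
    routes = []
    while unserved:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unserved if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, c))
            route.append(nxt)
            load += demands[nxt]
            unserved.discard(nxt)
            pos = nxt
        routes.append(route)
    return routes
```

A GA or PSO would then improve such routes globally; the greedy tour only guarantees capacity feasibility, not distance optimality.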
Performance comparison of some evolutionary algorithms on job shop scheduling problems
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job Shop Scheduling, viewed as a state-space search problem, belongs to the NP-hard category due to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to evaluate and compare the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares the merits of both decomposition and search approaches while overcoming their weaknesses.
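A generic (non-decomposition) CSP search of the kind omega-CDBT improves upon can be sketched as plain backtracking over a constraint graph. In this illustrative example, which is not the paper's algorithm, the binary constraints are inequalities along the graph's edges, i.e. graph coloring:

```python
def backtrack(variables, domains, neighbors, assignment=None):
    """Plain backtracking over a binary CSP whose constraints are
    inequality along edges of a constraint graph (graph coloring).
    Returns a complete assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # consult only constraint-graph neighbors, as structure-aware
        # search methods do
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtrack(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None
```

Decomposition methods exploit the same graph structure globally (e.g. via tree decompositions) rather than edge by edge during search.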
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for "smart grid" applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
Satisfiability modulo theory and binary puzzle
NASA Astrophysics Data System (ADS)
Utomo, Putranto
2017-06-01
The binary puzzle is a sudoku-like puzzle with values in each cell taken from the set {0, 1}. We look at the mathematical theory behind it. A solved binary puzzle is an n × n binary array, with n even, that satisfies the following conditions: (1) no three consecutive ones and no three consecutive zeros in any row or column; (2) every row and column is balanced, that is, the numbers of ones and zeros are equal in each row and in each column; (3) every two rows and every two columns are distinct. The binary puzzle has been proven to be an NP-complete problem [5]. Research concerning the satisfiability of formulas with respect to some background theory is called satisfiability modulo theories (SMT). An SMT solver is an extension of a satisfiability (SAT) solver. SMT can be used for solving various problems in mathematics and industry, such as formal verification and operations research [1, 7]. In this paper we apply SMT to solve binary puzzles. In addition, we run experiments on puzzles of different sizes and with different numbers of blanks. We also compare with two other approaches, namely a SAT solver and exhaustive search.
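The three conditions defining a solved binary puzzle translate directly into a checker, the kind of polynomial-time verification that an SMT or SAT encoding must capture. A minimal sketch (not the paper's encoding):

```python
def is_solved_binary_puzzle(grid):
    """Check the three conditions of a solved n x n binary puzzle (n even):
    no three consecutive equal values in any row or column, balanced rows
    and columns, and pairwise-distinct rows and columns."""
    n = len(grid)
    rows = [tuple(r) for r in grid]
    cols = [tuple(grid[i][j] for i in range(n)) for j in range(n)]
    for line in rows + cols:
        if any(line[i] == line[i + 1] == line[i + 2] for i in range(n - 2)):
            return False                  # three consecutive equal values
        if sum(line) != n // 2:
            return False                  # not balanced
    # condition (3): all rows distinct and all columns distinct
    return len(set(rows)) == n and len(set(cols)) == n
```

An SMT solver receives these same conditions as constraints over the blank cells and searches for an assignment, whereas the checker above only verifies a completed grid.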
Solving TSP problem with improved genetic algorithm
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Zhang, Lijun; Wang, Xiaojing; Qiao, Liying
2018-05-01
The TSP is a classic NP-hard problem. The vehicle routing problem (VRP) and city pipeline optimization can be reduced to the TSP, so efficient methods for solving the TSP are important. The genetic algorithm (GA) is one of the ideal methods for solving it, but the standard genetic algorithm has some limitations. Improving the selection operator of the genetic algorithm and introducing an elite retention strategy ensure the quality of selection. In the mutation operation, adaptive selection of the mutation scheme improves the quality of the search results. After the chromosomes have evolved, a one-way reverse-evolution operation is added, which gives offspring more opportunities to inherit high-quality parental genes and improves the algorithm's ability to find the optimal solution.
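The elite-retention idea, copying the best chromosome unchanged into the next generation so solution quality never degrades, can be sketched in a minimal GA for the TSP. This is an illustrative sketch, not the paper's algorithm: the operator choices (ordered crossover, swap mutation) and all parameters are this sketch's assumptions.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ga_tsp(dist, pop_size=30, generations=200, seed=0):
    """A minimal GA for the TSP with elitist retention: the best tour is
    copied intact into the next generation, so the best fitness in the
    population is monotonically non-increasing."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        nxt = [pop[0][:]]                              # elite survives intact
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # pick from better half
            i, j = sorted(rng.sample(range(n), 2))
            child = [None] * n
            child[i:j] = a[i:j]                        # ordered crossover (OX)
            fill = [c for c in b if c not in child]
            for k in range(n):
                if child[k] is None:
                    child[k] = fill.pop(0)
            if rng.random() < 0.2:                     # swap mutation
                x, y = rng.sample(range(n), 2)
                child[x], child[y] = child[y], child[x]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_length(t, dist))
```

On a tiny instance, e.g. four cities at the corners of a unit square, the GA recovers the perimeter tour of length 4.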
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
Analog Approach to Constraint Satisfaction Enabled by Spin Orbit Torque Magnetic Tunnel Junctions.
Wijesinghe, Parami; Liyanagedera, Chamika; Roy, Kaushik
2018-05-02
Boolean satisfiability (k-SAT) is an NP-complete (k ≥ 3) problem that constitutes one of the hardest classes of constraint satisfaction problems. In this work, we provide a proof-of-concept hardware-based analog k-SAT solver built using Magnetic Tunnel Junctions (MTJs). The inherent physics of MTJs, enhanced by device-level modifications, is harnessed here to emulate the intricate dynamics of an analog satisfiability (SAT) solver. In the presence of thermal noise, the MTJ-based system can successfully solve Boolean satisfiability problems. Most importantly, our results show that the proposed MTJ-based hardware SAT solver is capable of finding a solution to a significant fraction (at least 85%) of hard 3-SAT problems within a time that has a polynomial relationship with the number of variables (<50).
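A software analogue of such a stochastic solver is WalkSAT-style local search, where injected randomness plays the role that thermal noise plays in the MTJ system. A minimal sketch (not the authors' device model; parameters are illustrative):

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=1):
    """Stochastic local search for SAT: start from a random assignment and
    repeatedly flip a variable in a random unsatisfied clause. Literal v
    means 'variable v is true'; -v means 'variable v is false'."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    def sat(cl):
        return any(assign[abs(l)] == (l > 0) for l in cl)
    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not sat(cl)]
        if not unsat:
            return assign                  # all clauses satisfied
        cl = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(cl))        # noisy move (the "thermal" step)
        else:
            def breaks(v):                 # greedy move: flip the variable
                assign[v] = not assign[v]  # that leaves fewest clauses unsat
                b = sum(1 for c in clauses if not sat(c))
                assign[v] = not assign[v]
                return b
            v = min((abs(l) for l in cl), key=breaks)
        assign[v] = not assign[v]
    return None                            # gave up within the flip budget
```

Like the analog device, this solver is incomplete: it can certify satisfiability by exhibiting an assignment but cannot prove unsatisfiability.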
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is proposed: the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves creating a surface with smell trails and iterating the agents to resolve a path. The algorithm can be applied under different computational constraints to path-based problems, and its implementation can be treated as a shortest-path problem over a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and extensions of the algorithm can address shortest-path problems more broadly.
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Singh, Surya Prakash
2017-11-01
The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that efficient design of the DCFLP reduces the manufacturing cost of products by maintaining minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity-score-based two-phase heuristic approach to solve the DCFLP optimally, considering multiple products over multiple periods to be manufactured in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can optimally solve the DCFLP in reasonable time.
Standardization of 237Np by the CIEMAT/NIST LSC tracer method
Gunther
2000-03-01
The standardization of 237Np presents some difficulties: several groups of alpha, beta and gamma radiation, chemical problems with the daughter nuclide 233Pa, an incomplete radioactive equilibrium after sample preparation, high conversion of some gamma transitions. To solve the chemical problems, a sample composition involving the Ultima Gold AB scintillator and a high concentration of HCl is used. Standardization by the CIEMAT/NIST method and by pulse shape discrimination is described. The results agree within 0.1% with those obtained by two other methods.
Distance Constraint Satisfaction Problems
NASA Astrophysics Data System (ADS)
Bodirsky, Manuel; Dalmau, Victor; Martin, Barnaby; Pinsker, Michael
We study the complexity of constraint satisfaction problems for templates Γ that are first-order definable in (Z; succ), the integers with the successor relation. Assuming a widely believed conjecture from finite domain constraint satisfaction (we require the tractability conjecture by Bulatov, Jeavons and Krokhin in the special case of transitive finite templates), we provide a full classification for the case that Γ is locally finite (i.e., the Gaifman graph of Γ has finite degree). We show that one of the following is true: The structure Γ is homomorphically equivalent to a structure with a certain majority polymorphism (which we call modular median) and CSP(Γ) can be solved in polynomial time, or Γ is homomorphically equivalent to a finite transitive structure, or CSP(Γ) is NP-complete.
G.A.M.E.: GPU-accelerated mixture elucidator.
Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J
2017-09-15
GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures for practical procedures.
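The pseudo-polynomial DP at the heart of such an approach is structurally a coin-change-style recurrence over integer-scaled masses; its runtime grows linearly with the scaled target, i.e. exponentially in the number of decimal digits retained, which is exactly the bottleneck GPU parallelization attacks. A toy sketch with hypothetical integer masses (not G.A.M.E.'s actual algorithm):

```python
def count_decompositions(target, masses):
    """Pseudo-polynomial DP: the number of multisets of building-block
    masses summing exactly to `target` (all masses scaled to integers,
    e.g. by 10^d to keep d decimal digits)."""
    ways = [0] * (target + 1)
    ways[0] = 1
    for m in masses:                       # process each block mass once so
        for t in range(m, target + 1):     # multisets are counted, not sequences
            ways[t] += ways[t - m]
    return ways[target]
```

Keeping one more decimal digit multiplies `target` (and hence the table size and runtime) by ten, which is why five digits is already a demanding setting for the DP.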
On the complexity and approximability of some Euclidean optimal summing problems
NASA Astrophysics Data System (ADS)
Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.
2016-10-01
The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
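For intuition, the optimal-summing task over a handful of planar points can be solved by brute force over all subsets, with the exponential cost that the hardness results predict for unstructured search. A small illustrative sketch (maximization variant, dimension 2):

```python
from itertools import combinations
import math

def max_norm_subset(points):
    """Exhaustive search for the nonempty subset of 2-D points whose
    vector sum has the largest Euclidean norm; exponential in the number
    of points."""
    best, best_norm = (), 0.0
    for k in range(1, len(points) + 1):
        for subset in combinations(points, k):
            s = [sum(c) for c in zip(*subset)]   # componentwise sum
            norm = math.hypot(*s)
            if norm > best_norm:
                best, best_norm = subset, norm
    return best, best_norm
```

The paper's pseudopolynomial results correspond to replacing this enumeration with dynamic programming over integer coordinates in fixed dimension.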
A New Approach for Solving the Generalized Traveling Salesman Problem
NASA Astrophysics Data System (ADS)
Pop, P. C.; Matei, O.; Sabo, C.
The generalized traveling salesman problem (GTSP) is an extension of the classical traveling salesman problem. The GTSP is known to be NP-hard and has many interesting applications. In this paper we present a local-global approach for the generalized traveling salesman problem. Based on this approach we describe a novel hybrid metaheuristic algorithm for solving the problem using genetic algorithms. Computational results are reported for Euclidean TSPlib instances and compared with existing ones. The obtained results indicate that our hybrid algorithm is an appropriate method to explore the search space of this complex problem and leads to good solutions in a reasonable amount of time.
Quantum Computing: Solving Complex Problems
DiVincenzo, David
2018-05-22
One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was introduced by Candes, Romberg, and Tao, and by Donoho in 2006. Using the a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.
Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P
2017-01-01
The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to shifts each day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic called Directed Bee Colony Optimization using the Modified Nelder-Mead Method for solving the NRP. The authors use a multiobjective mathematical programming model and propose a methodology for the adaptation of Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving the multiobjective problem of optimizing scheduling problems, and integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows
NASA Astrophysics Data System (ADS)
Jittamai, Phongchai
This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products in an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves, and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades, moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are widely used in logistics and scheduling, are incorporated in this study. The distribution of multiple products in an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and formulated mathematically. The main focus of this dissertation is the investigation of the operating issues and problem complexity of single-source pipeline problems, along with a solution methodology to compute an input schedule that yields the minimum total violation of due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedules for the pipeline problem; it runs in O(T·E) time. The dissertation also extends the study to examine operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used for single-source pipelines is introduced; it also runs in O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that the reversed-flow algorithms provide good solutions in comparison with the optimal solutions: only 25% of the problems tested were more than 30% above the optimal values, and approximately 40% of the tested problems were solved optimally by the algorithms.
Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Matsaini; Santosa, Budi
2018-04-01
The Container Stowage Problem (CSP) is the problem of arranging containers on ships while considering rules such as total weight, the weight of each stack, destination, equilibrium, and the placement of containers on the vessel. It is a combinatorial problem that is hard to solve by enumeration; it is NP-hard, so metaheuristics are preferred for finding solutions. The objective is to minimize the amount of shifting so that the unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with additional steps: stack position change rules, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases, and the results were compared to Bee Swarm Optimization (BSO) and a heuristic method. PSO achieved a mean gap of 0.87% and a time gap of 60 seconds relative to the heuristic, while BSO achieved a mean gap of 2.98% and a time gap of 459.6 seconds.
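The objective being minimized, the number of shifts, can be made concrete with a small counter: a container must be re-handled whenever it sits above one bound for an earlier port. This is an illustrative sketch (the destination numbering and the simple counting rule are this sketch's assumptions, not the paper's full model):

```python
def count_shifts(stacks):
    """Count forced re-handles. Each stack lists container destinations
    top to bottom; smaller numbers are unloaded at earlier ports. A
    container must be shifted if any container below it leaves earlier."""
    shifts = 0
    for stack in stacks:
        for i, dest in enumerate(stack):
            if any(below < dest for below in stack[i + 1:]):
                shifts += 1
    return shifts
```

A stowage metaheuristic such as the paper's PSO searches over container placements to drive this count toward zero.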
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial-time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
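The graph-reachability computation used for the third problem rests on standard breadth-first search over a directed graph; a minimal sketch (the aviation-specific modeling is not reproduced here):

```python
from collections import deque

def reachable(adj, source):
    """Standard BFS reachability: the set of nodes reachable from
    `source` in a directed graph given as an adjacency dict."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen
```

Reachability runs in time linear in the number of nodes and edges, which is why, unlike the two NP-hard problems above, the arrival-scheduling formulation admits an efficient exact solution.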
Optical solver of combinatorial problems: nanotechnological approach.
Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor
2013-09-01
We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising avenue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponentially sized masks with exponential space complexity, produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.
Neural networks for vertical microcode compaction
NASA Astrophysics Data System (ADS)
Chu, Pong P.
1992-09-01
Neural networks provide an alternative way to solve complex optimization problems. Instead of performing a program of instructions sequentially as in a traditional computer, a neural network explores many competing hypotheses simultaneously using its massively parallel net. The paper shows how to use the neural network approach to perform vertical microcode compaction for a microprogrammed control unit. The compaction procedure includes two basic steps: the first determines the compatibility classes, and the second selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize this problem, then define an `energy function' and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.
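The second step the abstract describes, selecting a minimal subset of compatibility classes that covers all control signals, is the classical minimum set cover problem. As a non-neural baseline, the standard greedy approximation looks like this (the signal and class data are made up for illustration):

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for the NP-complete minimum set cover:
    repeatedly pick the subset covering the most uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover

# Control signals 1..5 and candidate compatibility classes.
signals = {1, 2, 3, 4, 5}
classes = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}, {5}]
cover = greedy_set_cover(signals, classes)
```

Here the greedy pass picks {1, 2, 3} and then {4, 5}, covering all signals with two classes.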
USDA-ARS?s Scientific Manuscript database
The Arthropod Borne Animal Diseases Unit (ABADRU) mission is to solve major endemic, emerging, and exotic arthropod-borne disease problems in livestock. The ABADRU has four 5-year project plans under two ARS National Research Programs; Animal Health NP103 and Veterinary, Medical, and Urban Entomolog...
Fast optimization algorithms and the cosmological constant
NASA Astrophysics Data System (ADS)
Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad
2017-11-01
Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (nondeterministic polynomial time). The number of elementary operations (quantum gates) needed to solve this problem by brute-force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute-force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
Computing quantum discord is NP-complete
NASA Astrophysics Data System (ADS)
Huang, Yichen
2014-03-01
We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.
A meta-heuristic method for solving scheduling problem: crow search algorithm
NASA Astrophysics Data System (ADS)
Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi
2018-04-01
Scheduling is one of the most important processes in industry, both in manufacturing and in services. The scheduling process is the process of selecting resources to perform operations on tasks. Resources can be machines or people; tasks can be jobs or operations. The selection of the optimum sequence of jobs from a permutation is an essential issue in every piece of research on scheduling problems: the optimum sequence is the optimum solution of the scheduling problem. Scheduling becomes an NP-hard problem once the number of jobs in the sequence exceeds what an exact algorithm can process in reasonable time. In order to obtain optimum results, a method is needed that can solve complex scheduling problems in acceptable time. Meta-heuristics are the methods usually used to solve scheduling problems. The recently published method called the Crow Search Algorithm (CSA) is adopted in this research to solve a scheduling problem. CSA is an evolutionary meta-heuristic based on the behavior of flocks of crows. The results of CSA for solving the scheduling problem are compared with other algorithms. From the comparison, it is found that CSA has better performance in terms of optimum solution and computation time than the other algorithms.
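The core of the crow search algorithm is compact enough to sketch. The version below is the usual continuous formulation (flight length `fl`, awareness probability `ap`); applying it to scheduling requires a permutation encoding on top, which the paper would supply. All parameter values and the test function are illustrative assumptions:

```python
import random

def crow_search(f, dim, n_crows=20, iters=300, fl=2.0, ap=0.1, seed=3):
    """Minimal crow search sketch: each crow chases a random crow's
    memorized food cache, unless that crow notices (probability ap)
    and the follower ends up at a random position instead."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_crows)]
    mem = [p[:] for p in pos]            # each crow's best-known position
    mem_val = [f(p) for p in pos]
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)   # crow i tails crow j
            if rng.random() >= ap:
                # Move toward crow j's memory by a random flight length.
                r = rng.random()
                pos[i] = [pos[i][d] + r * fl * (mem[j][d] - pos[i][d])
                          for d in range(dim)]
            else:
                # Crow j noticed: crow i moves to a random position.
                pos[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            val = f(pos[i])
            if val < mem_val[i]:
                mem[i], mem_val[i] = pos[i][:], val
    best = min(range(n_crows), key=lambda i: mem_val[i])
    return mem[best], mem_val[best]

best, best_val = crow_search(lambda x: sum(v * v for v in x), dim=2)
```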
Particle Filter with State Permutations for Solving Image Jigsaw Puzzles
Yang, Xingwei; Adluru, Nagesh; Latecki, Longin Jan
2016-01-01
We deal with an image jigsaw puzzle problem, which is defined as reconstructing an image from a set of square and non-overlapping image patches. It is known that a general instance of this problem is NP-complete, and it is also challenging for humans, since in the considered setting the original image is not given. Recently a graphical model has been proposed to solve this and related problems. The target label probability function is then maximized using loopy belief propagation. We also formulate the problem as maximizing a label probability function and use exactly the same pairwise potentials. Our main contribution is a novel inference approach in the sampling framework of Particle Filter (PF). Usually in the PF framework it is assumed that the observations arrive sequentially, e.g., the observations are naturally ordered by their time stamps in the tracking scenario. Based on this assumption, the posterior density over the corresponding hidden states is estimated. In the jigsaw puzzle problem all observations (puzzle pieces) are given at once without any particular order. Therefore, we relax the assumption of having ordered observations and extend the PF framework to estimate the posterior density by exploring different orders of observations and selecting the most informative permutations of observations. This significantly broadens the scope of applications of the PF inference. Our experimental results demonstrate that the proposed inference framework significantly outperforms the loopy belief propagation in solving the image jigsaw puzzle problem. In particular, the extended PF inference triples the accuracy of the label assignment compared to that using loopy belief propagation. PMID:27795660
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm: the first is limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions when compared to those obtained through other approaches.
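The improvisation loop at the heart of harmony search can be sketched directly; `hmcr` is the memory-considering rate and `par` the pitch-adjusting rate. This is a generic continuous version with made-up parameters, not the redundancy-allocation encoding of the article:

```python
import random

def harmony_search(f, dim, hms=10, iters=500, hmcr=0.9, par=0.3,
                   bw=0.05, lo=-5.0, hi=5.0, seed=7):
    """Minimal harmony search sketch: new solutions mix values drawn
    from the harmony memory (rate hmcr), pitch-adjusted by +-bw
    (rate par), or drawn at random; the worst memory slot is replaced."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    values = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                x = memory[rng.randrange(hms)][d]     # recall from memory
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)         # pitch adjustment
            else:
                x = rng.uniform(lo, hi)               # random improvisation
            new.append(x)
        val = f(new)
        worst = max(range(hms), key=lambda i: values[i])
        if val < values[worst]:
            memory[worst], values[worst] = new, val
    best = min(range(hms), key=lambda i: values[i])
    return memory[best], values[best]

best, best_val = harmony_search(lambda x: sum(v * v for v in x), dim=2)
```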
Image flows and one-liner graphical image representation.
Makhervaks, Vadim; Barequet, Gill; Bruckstein, Alfred
2002-10-01
This paper introduces a novel graphical image representation consisting of a single curve, the one-liner. The first step of the algorithm involves the detection and ranking of image edges. A new edge exploration technique is used to perform both tasks simultaneously. This process is based on image flows. It uses a gradient vector field and a new operator to explore image edges. Estimation of the derivatives of the image is performed by using local Taylor expansions in conjunction with a weighted least-squares method. This process finds all the possible image edges without any pruning, and collects information that allows the edges found to be prioritized. This enables the most important edges to be selected to form a skeleton of the representation sought. The next step connects the selected edges into one continuous curve, the one-liner. It orders the selected edges and determines the curves connecting them. These two problems are solved separately. Since the abstract graph setting of the first problem is NP-complete, we reduce it to a variant of the traveling salesman problem and compute an approximate solution to it. We solve the second problem by using Dijkstra's shortest-path algorithm. The full software implementation for the entire one-liner determination process is available.
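The second subproblem, connecting the ordered edges by shortest paths, uses Dijkstra's algorithm, which is standard enough to show concretely (the toy graph here is an illustrative assumption):

```python
import heapq

def dijkstra(graph, source, target):
    """Dijkstra's shortest-path algorithm over a weighted digraph,
    where graph maps node -> [(neighbour, weight), ...]."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path by walking predecessors back to the source.
    path, u = [], target
    while u != source:
        path.append(u)
        u = prev[u]
    path.append(source)
    return path[::-1], dist[target]

g = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0), ("d", 5.0)],
     "c": [("d", 1.0)], "d": []}
path, cost = dijkstra(g, "a", "d")
```

Here the cheapest route a -> b -> c -> d (cost 3.0) beats the direct but heavier edges.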
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
ScaffoldScaffolder: solving contig orientation via bidirected to directed graph reduction.
Bodily, Paul M; Fujimoto, M Stanley; Snell, Quinn; Ventura, Dan; Clement, Mark J
2016-01-01
The contig orientation problem, which we formally define as the MAX-DIR problem, has at times been addressed cursorily and at times using various heuristics. In setting forth a linear-time reduction from the MAX-CUT problem to the MAX-DIR problem, we prove the latter is NP-complete. We compare the relative performance of a novel greedy approach with several other heuristic solutions. Our results suggest that our greedy heuristic algorithm not only works well but also outperforms the other algorithms due to the nature of scaffold graphs. Our results also demonstrate a novel method for identifying inverted repeats and inversion variants, both of which contradict the basic single-orientation assumption. Such inversions have previously been noted as being difficult to detect and are directly involved in the genetic mechanisms of several diseases. http://bioresearch.byu.edu/scaffoldscaffolder. paulmbodily@gmail.com Supplementary data are available at Bioinformatics online.
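Since the paper proves MAX-DIR as hard as MAX-CUT, the flavor of the greedy heuristics involved can be shown on MAX-CUT itself. The sketch below is plain single-vertex local search, not the paper's scaffold-graph heuristic, and the example graph is made up:

```python
def maxcut_local_search(n, edges):
    """Greedy local search for MAX-CUT: start with all vertices on one
    side and flip any vertex whose move increases the cut, until none does.
    edges is a list of (u, v, weight) triples."""
    side = [0] * n

    def gain(v):
        # Change in cut weight if v switches sides.
        g = 0
        for a, b, w in edges:
            if v in (a, b):
                other = b if a == v else a
                g += w if side[other] == side[v] else -w
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1
                improved = True
    cut = sum(w for a, b, w in edges if side[a] != side[b])
    return side, cut

# Square with unit edges plus one diagonal: the diagonal creates a
# triangle, so not all five edges can be cut; the optimum cut is 4.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1), (0, 2, 1)]
side, cut = maxcut_local_search(4, edges)
```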
Mulder, Samuel A; Wunsch, Donald C
2003-01-01
The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
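As a self-contained illustration of the local optimization layer, here is a greedy nearest-neighbour construction refined by 2-opt, a simpler stand-in for the much stronger Lin-Kernighan moves the paper uses; the point set is an illustrative assumption:

```python
import math

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nn_2opt(pts):
    """Nearest-neighbour tour construction followed by 2-opt
    edge exchanges until no exchange shortens the tour."""
    n = len(pts)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:
                    continue
                # Reverse the segment if swapping edges (a,b),(c,d)
                # for (a,c),(b,d) shortens the tour.
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b])
                        + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                    improved = True
    return tour

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour = nn_2opt(pts)
```

On this 2x3 grid the greedy tour ends with a long closing edge, and a single 2-opt reversal recovers the optimal perimeter of length 6.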
NASA Astrophysics Data System (ADS)
Prasetyo, H.; Alfatsani, M. A.; Fauza, G.
2018-05-01
The main issue in the vehicle routing problem (VRP) is finding the shortest route of product distribution from the depot to outlets so as to minimize the total cost of distribution. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of the VRP that accommodates vehicle capacity and the distribution period. Since CCVRPTW is NP-hard, it requires an efficient and effective algorithm. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve the CCVRPTW. The algorithm was coded in MATLAB. Using numerical tests, optimum algorithm parameters were set, and the algorithm was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search yielded a lower total distribution cost than the heuristic method. Moreover, the developed algorithm successfully improved on the performance of standard BRKGA.
An improved genetic algorithm and its application in the TSP problem
NASA Astrophysics Data System (ADS)
Li, Zheng; Qin, Jinlei
2011-12-01
The concept and research status of genetic algorithms are introduced in detail. On this basis, the simple genetic algorithm and an improved algorithm are described and applied to an example TSP, where the advantage of genetic algorithms in solving NP-hard problems is clearly shown. In addition, based on the partially matched crossover operator, the crossover method is improved into an extended crossover operator to increase efficiency when solving the TSP. In the extended crossover method, crossover can be performed between random positions of two random individuals, unrestricted by chromosome position. Finally, a nine-city TSP is solved using the improved genetic algorithm with extended crossover; the solution process is much more efficient and the optimal solution is found much faster.
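The partially matched crossover the improvement builds on can be sketched as follows; the parent tours and cut points are arbitrary illustrative choices:

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially matched crossover (PMX): copy the segment
    parent1[cut1:cut2] into the child, then fill the remaining slots
    from parent2, chasing the segment mapping so no city repeats."""
    n = len(parent1)
    child = [None] * n
    child[cut1:cut2] = parent1[cut1:cut2]
    mapping = {parent1[i]: parent2[i] for i in range(cut1, cut2)}
    for i in list(range(0, cut1)) + list(range(cut2, n)):
        gene = parent2[i]
        while gene in child[cut1:cut2]:
            gene = mapping[gene]      # follow the mapping out of the segment
        child[i] = gene
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [9, 3, 7, 8, 2, 6, 5, 1, 4]
child = pmx(p1, p2, 3, 7)
```

The child keeps cities 4..7 in parent 1's positions and inherits the rest from parent 2, yielding the valid tour [9, 3, 2, 4, 5, 6, 7, 1, 8].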
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Chih; Su, Ling-Huey
2018-02-01
This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and the other is a single machine. A job is processed either on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs so as to minimize the makespan. The problem is NP-hard, since the two-parallel-machine problem was proved to be NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer programming (MIP) model is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; their solution quality is also reported.
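A minimal simulated annealing sketch for the underlying NP-hard two-parallel-machine makespan problem (the paper's hybrid setting adds a flowshop on top); job times, cooling schedule, and seed are illustrative assumptions:

```python
import math
import random

def sa_two_machines(times, iters=5000, t0=10.0, cooling=0.999, seed=5):
    """Simulated annealing for two parallel machines: assign each job
    to machine 0 or 1 and minimize the larger machine load."""
    rng = random.Random(seed)
    assign = [rng.randrange(2) for _ in times]

    def makespan(a):
        load = [0, 0]
        for t, m in zip(times, a):
            load[m] += t
        return max(load)

    cur = makespan(assign)
    best = cur
    temp = t0
    for _ in range(iters):
        j = rng.randrange(len(times))
        assign[j] ^= 1                     # move one job to the other machine
        new = makespan(assign)
        # Accept improvements always, worse moves with Boltzmann probability.
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            best = min(best, cur)
        else:
            assign[j] ^= 1                 # reject: undo the move
        temp *= cooling
    return best

times = [3, 7, 4, 6, 2, 5, 8, 1]           # total 36, so the ideal split is 18
best = sa_two_machines(times)
```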
Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem
Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh
2014-01-01
This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359
Multiple-variable neighbourhood search for the single-machine total weighted tardiness problem
NASA Astrophysics Data System (ADS)
Chung, Tsui-Ping; Fu, Qunjie; Liao, Ching-Jong; Liu, Yi-Ting
2017-07-01
The single-machine total weighted tardiness (SMTWT) problem is a typical discrete combinatorial optimization problem in the scheduling literature. This problem has been proved to be NP-hard and thus provides a challenging area for metaheuristics, especially the variable neighbourhood search algorithm. In this article, a multiple variable neighbourhood search (m-VNS) algorithm with multiple neighbourhood structures is proposed to solve the problem. Special mechanisms named matching and strengthening operations are employed in the algorithm, which has an auto-revising local search procedure to explore the solution space beyond local optimality. Two aspects, searching direction and searching depth, are considered, and neighbourhood structures are systematically exchanged. Experimental results show that the proposed m-VNS algorithm outperforms all the compared algorithms in solving the SMTWT problem.
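The SMTWT objective and a single swap-neighbourhood step, the kind of building block a VNS exchanges among several structures, can be sketched directly; the job data are made up:

```python
import itertools

def total_weighted_tardiness(jobs, order):
    """SMTWT objective: jobs is a list of (processing_time, due_date,
    weight) triples; returns sum over jobs of w_j * max(0, C_j - d_j)."""
    t = 0
    twt = 0
    for j in order:
        p, d, w = jobs[j]
        t += p                       # completion time C_j
        twt += w * max(0, t - d)
    return twt

def best_neighbour(jobs, order):
    """One step of a swap-neighbourhood search: return the best
    sequence obtained by swapping a single pair of positions."""
    best, best_val = order, total_weighted_tardiness(jobs, order)
    for i, k in itertools.combinations(range(len(order)), 2):
        cand = order[:]
        cand[i], cand[k] = cand[k], cand[i]
        val = total_weighted_tardiness(jobs, cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

jobs = [(3, 4, 2), (2, 2, 1), (4, 10, 3), (1, 3, 4)]   # (p, d, w)
order = [0, 1, 2, 3]
improved, val = best_neighbour(jobs, order)
```

Here the initial sequence costs 31 and the best single swap (moving the short, heavily weighted job 3 forward) drops it to 12; a full VNS would keep alternating such neighbourhoods until none improves.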
Li, Guo; Lv, Fei; Guan, Xu
2014-01-01
This paper investigates a collaborative scheduling model in an assembly system, wherein multiple suppliers have to deliver their components to multiple manufacturers under the operation of a Supply-Hub. We first develop two different scenarios to examine the impact of the Supply-Hub: one in which suppliers and manufacturers make their decisions separately, and another in which the Supply-Hub makes joint decisions with collaborative scheduling. The results show that our scheduling model with the Supply-Hub is an NP-complete problem; therefore, we propose an auto-adapted differential evolution algorithm to solve it. Moreover, we illustrate that the performance of collaborative scheduling by the Supply-Hub is superior to the separate decisions made by each manufacturer and supplier. Furthermore, we also show that the proposed algorithm has good convergence and reliability, and can be applied to more complicated supply chain environments.
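The classic DE/rand/1/bin scheme that such an auto-adapted differential evolution builds on can be sketched as follows (fixed F and cr here, whereas the paper self-adapts its parameters; the sphere test function is illustrative):

```python
import random

def differential_evolution(f, dim, np_=15, iters=200, F=0.6, cr=0.9, seed=2):
    """Classic DE/rand/1/bin: mutate with the scaled difference of two
    random vectors, binomially cross over with the target vector, and
    keep whichever of trial and target scores better."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(np_)]
    vals = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(np_):
            a, b, c = rng.sample([k for k in range(np_) if k != i], 3)
            jrand = rng.randrange(dim)      # ensure at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < cr or d == jrand) else pop[i][d]
                     for d in range(dim)]
            tv = f(trial)
            if tv <= vals[i]:               # greedy one-to-one selection
                pop[i], vals[i] = trial, tv
    best = min(range(np_), key=lambda i: vals[i])
    return pop[best], vals[best]

best, best_val = differential_evolution(lambda x: sum(v * v for v in x), dim=3)
```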
Satisfiability of logic programming based on radial basis function neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged
2014-07-10
In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic with exactly three variables in each clause. We used the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum squared error function is used to measure the activity of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
QSPIN: A High Level Java API for Quantum Computing Experimentation
NASA Technical Reports Server (NTRS)
Barth, Tim
2017-01-01
QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.
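A QUBO instance of the kind such solver objects target can be stated and solved exactly at tiny sizes; this brute-force check (in Python rather than QSPIN's Java, and with a made-up instance) is the kind of ground truth one would compare an annealer's answer against:

```python
import itertools

def solve_qubo(Q):
    """Exhaustive QUBO solver for small instances: minimize x^T Q x
    over binary vectors x, with Q given as a dict {(i, j): weight}."""
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        e = sum(w * bits[i] * bits[j] for (i, j), w in Q.items())
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Max-cut of a triangle encoded as a QUBO: each edge (i, j) contributes
# -(x_i + x_j - 2 x_i x_j), so minimizing energy maximizes the cut.
Q = {(0, 0): -2, (1, 1): -2, (2, 2): -2,
     (0, 1): 2, (1, 2): 2, (0, 2): 2}
x, energy = solve_qubo(Q)
```

The minimum energy is -2, matching the triangle's maximum cut of two edges (one vertex separated from the other two).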
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The Original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search space, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP complete problem of test scheduling for Network-on-Chip (NoC). Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
Subquantum information and computation
NASA Astrophysics Data System (ADS)
Valentini, Antony
2002-08-01
It is argued that immense physical resources -- for nonlocal communication, espionage, and exponentially-fast computation -- are hidden from us by quantum noise, and that this noise is not fundamental but merely a property of an equilibrium state in which the universe happens to be at the present time. It is suggested that `non-quantum' or nonequilibrium matter might exist today in the form of relic particles from the early universe. We describe how such matter could be detected and put to practical use. Nonequilibrium matter could be used to send instantaneous signals, to violate the uncertainty principle, to distinguish non-orthogonal quantum states without disturbing them, to eavesdrop on quantum key distribution, and to outpace quantum computation (solving NP-complete problems in polynomial time).
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
The Quadratic Assignment Problem (QAP) is classified as an NP-hard problem. It has been used to model many problems in several areas such as operational research, combinatorial data analysis, parallel and distributed computing, and optimization problems such as graph partitioning and the Travelling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics, and metaheuristic approaches to solve the QAP. The QAP is widely applied to the facility layout problem (FLP). In this paper we use the QAP to model a university facility layout problem, where 8 facilities need to be assigned to 8 locations. Hence we have modeled a QAP with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the minimum product of flows and distances is obtained. Flow is the movement from one facility to another, whereas distance is the distance between the location of one facility and the locations of the other facilities. The objective of the QAP here is to obtain the minimum total walking (flow) of lecturers from one destination to another (distance).
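For n ≤ 10 the QAP is still small enough to solve exactly by enumeration, which makes a useful baseline for ACO results; the 3-facility flow and distance matrices below are illustrative, not the paper's campus data:

```python
import itertools

def qap_cost(flow, dist, assign):
    """QAP objective: assign[f] is the location of facility f;
    cost = sum over all facility pairs of flow * distance."""
    n = len(assign)
    return sum(flow[a][b] * dist[assign[a]][assign[b]]
               for a in range(n) for b in range(n))

def qap_brute_force(flow, dist):
    """Exact solution by enumerating all n! assignments;
    feasible only for small n, such as the paper's n = 8."""
    n = len(flow)
    return min(itertools.permutations(range(n)),
               key=lambda p: qap_cost(flow, dist, p))

# Toy instance: 3 facilities, 3 locations.
flow = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]      # lecturer movements
dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]      # walking distances
best = qap_brute_force(flow, dist)
cost = qap_cost(flow, dist, best)
```

The optimum pairs the heaviest flow (5) with the shortest distance (1), giving a total cost of 38; an ACO would build assignments probabilistically and be checked against such exact values on small instances.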
A quantum annealing architecture with all-to-all connectivity from local interactions.
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-10-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems.
Physics-Aware Informative Coverage Planning for Autonomous Vehicles
2014-06-01
environment and find the optimal path connecting fixed nodes, which is equivalent to solving the Traveling Salesman Problem (TSP). While TSP is an NP... intended for application to USV harbor patrolling, it is applicable to many different domains. The problem of traveling over an area and gathering... environment. I. INTRODUCTION There are many applications that need persistent monitoring of a given area, requiring repeated travel over the area to...
NASA Astrophysics Data System (ADS)
Pei, Jun; Liu, Xinbao; Pardalos, Panos M.; Fan, Wenjuan; Wang, Ling; Yang, Shanlin
2016-03-01
Motivated by applications in manufacturing industry, we consider a supply chain scheduling problem, where each job is characterised by non-identical sizes, different release times and unequal processing times. The objective is to minimise the makespan by making batching and sequencing decisions. The problem is formalised as a mixed integer programming model and proved to be strongly NP-hard. Some structural properties are presented for both the general case and a special case. Based on these properties, a lower bound is derived, and a novel two-phase heuristic (TP-H) is developed to solve the problem, which is guaranteed to achieve a worst-case performance ratio of ?. Computational experiments with a set of random instances of different sizes are conducted to evaluate the proposed approach TP-H, which is superior to two other heuristics proposed in the literature. Furthermore, the experimental results indicate that TP-H can effectively and efficiently solve large-size problems in a reasonable time.
NASA Astrophysics Data System (ADS)
Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III
2018-04-01
NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as a weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
Strategies to initiate and control the nucleation behavior of bimetallic nanoparticles.
Krishnan, Gopi; de Graaf, Sytze; Ten Brink, Gert H; Persson, Per O Å; Kooi, Bart J; Palasantzas, George
2017-06-22
In this work we report strategies to nucleate bimetallic nanoparticles (NPs) made by gas phase synthesis of elements showing difficulty in homogeneous nucleation. It is shown that the nucleation problem in bimetallic NP synthesis can be solved via the following pathways: (i) selecting an element which can itself nucleate and act as a nucleation center for the synthesis of bimetallic NPs; (ii) introducing H2 or CH4 as an impurity/trace gas to initiate nucleation during the synthesis of bimetallic NPs. The latter can solve the problem if none of the elements in a bimetallic NP can initiate nucleation. We illustrate the abovementioned strategies for the case of Mg based bimetallic NPs, which are interesting as hydrogen storage materials and exhibit both nucleation and oxidation issues even under ultra-high vacuum conditions. In particular, it is shown that adding H2 in small proportions favors the formation of a solid solution/alloy structure even in the case of immiscible Mg and Ti, where normally phase separation occurs during synthesis. In addition, we illustrate the possibility of improving the nucleation rate, and controlling the structure and size distribution of bimetallic NPs using H2/CH4 as a reactive/nucleating gas. This is shown to be associated with the dimer bond energies of the various formed species and the vapor pressures of the metals, which are key factors for NP nucleation.
Algorithmics - Is There Hope for a Unified Theory?
NASA Astrophysics Data System (ADS)
Hromkovič, Juraj
Computer science was born with the formal definition of the notion of an algorithm. This definition provides clear limits of automatization, separating problems into algorithmically solvable problems and algorithmically unsolvable ones. The second big bang of computer science was the development of the concept of computational complexity. People recognized that problems that do not admit efficient algorithms are not solvable in practice. The search for a reasonable, clear and robust definition of the class of practically solvable algorithmic tasks started with the notion of the class {P} and of {NP}-completeness. In spite of the fact that this robust concept is still fundamental for judging the hardness of computational problems, a variety of approaches has been developed for solving instances of {NP}-hard problems in many applications. Our short, 40-year attempt to fix the fuzzy border between the practically solvable problems and the practically unsolvable ones is partly reminiscent of the never-ending search for the definition of "life" in biology or for the definitions of matter and energy in physics. Can the search for the formal notion of "practical solvability" also become a never-ending story, or is there hope for getting a well-accepted, robust definition of it? Hopefully, it is not surprising that we are not able to answer this question in this invited talk. But dealing with this question is of crucial importance, because only through enormous effort do scientists get a better and better feeling of what the fundamental notions of science, like life and energy, mean. In the flow of numerous technical results, we must not forget the fact that most of the essential revolutionary contributions to science were made by defining new concepts and notions.
NASA Astrophysics Data System (ADS)
Kuncoro, K. S.; Junaedi, I.; Dwijanto
2018-03-01
This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach using a computer-aided program, and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was a mixed method with a sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects on each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with a computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject failed to meet almost all the indicators of problem-solving; this subject could not precisely construct the initial completion table, so the completion phase with Polya's steps was constrained.
A Benders based rolling horizon algorithm for a dynamic facility location problem
Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.
2016-06-28
This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm's efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.
Approximate solution of the p-median minimization problem
NASA Astrophysics Data System (ADS)
Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.
2016-09-01
A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
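The gradient algorithm described here, a discrete analog of steepest descent, can be sketched as a swap-based local search over median sets: from the current set of p medians, examine all single swaps and move to an improving neighbor until none exists. The toy distance matrix below is an illustrative assumption, not data from the paper:

```python
import itertools

def pmedian_cost(dist, medians):
    # Total cost: each client is served by its nearest chosen median.
    return sum(min(row[m] for m in medians) for row in dist)

def steepest_descent(dist, p):
    n = len(dist)
    current = list(range(p))            # arbitrary initial medians
    best = pmedian_cost(dist, current)
    improved = True
    while improved:
        improved = False
        # Scan all single-swap neighbors, accepting any improvement over the running best.
        for out_m, in_m in itertools.product(current, range(n)):
            if in_m in current:
                continue
            cand = [in_m if m == out_m else m for m in current]
            c = pmedian_cost(dist, cand)
            if c < best:
                best, current, improved = c, cand, True
    return current, best

# Toy symmetric distance matrix (an illustrative assumption).
D = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 5],
     [4, 3, 5, 0]]
print(steepest_descent(D, 2))  # → ([2, 1], 5)
```

As the paper's a priori bounds suggest, such a descent only guarantees a local optimum; its quality depends on the cost matrix.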
Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice
2007-12-01
bounds. For the openstacks, TPP, and pipesworld domains, our results were qualitatively different: most instances in these domains were either easy ... between our results in these two sets of domains. For most instances in the openstacks domain we found no k values that elicited a “yes” answer in
Computational Study for Planar Connected Dominating Set Problem
NASA Astrophysics Data System (ADS)
Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua
The connected dominating set (CDS) problem is a well studied NP-hard problem with many important applications. Dorn et al. [ESA2005, LNCS3669, pp95-106] introduce a new technique to generate 2^{O(sqrt{n})} time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm is yet to be evaluated. We perform a computational study for such an evaluation. The results show that the size of instances that can be solved by the algorithm mainly depends on the branchwidth of the instances, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in practical time and memory space. This suggests that branch-decomposition based algorithms can be practical for the planar CDS problem.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all the available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straight away through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
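A minimal real-coded GA for unconstrained function optimization might look as follows; the parameter values here (population size, crossover and mutation probabilities, mutation scale) are illustrative defaults, exactly the kind of settings such a preprocessor would be expected to tune for a given problem:

```python
import random

def genetic_minimize(f, bounds, pop_size=30, generations=80,
                     crossover_p=0.9, mutation_p=0.2, seed=1):
    """Minimal real-coded GA: binary tournament selection, blend
    crossover, Gaussian mutation, clipping to the search space."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if f(a) < f(b) else b   # lower objective wins

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            child = (p1 + p2) / 2 if rng.random() < crossover_p else p1  # blend crossover
            if rng.random() < mutation_p:
                child += rng.gauss(0, 0.1 * (hi - lo))                   # Gaussian mutation
            nxt.append(min(max(child, lo), hi))                          # clip to bounds
        pop = nxt
    return min(pop, key=f)

best = genetic_minimize(lambda x: (x - 3) ** 2 + 1, bounds=(-10, 10))
print(round(best, 2))  # close to 3, the minimizer
```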
Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio
2011-11-01
We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and a minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual objective optimization problem which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. Furthermore, an enhanced greedy algorithm is proposed to improve the performance of the basic greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both algorithms.
Computing Role Assignments of Proper Interval Graphs in Polynomial Time
NASA Astrophysics Data System (ADS)
Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël
A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.
Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.
Majumdar, Angshul
2013-06-01
In this paper we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l(0)-norm and the rank of a matrix; however both are NP hard penalties. The previous studies used the convex l(1)-norm as a surrogate for the l(0)-norm and the non-convex Schatten-q norm (0 < q ≤ 1).
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that the HAFSA is a fast and efficient algorithm for winner determination. Compared with an Ant Colony Optimization algorithm, it shows good performance and broad application prospects.
Lexicographic goal programming and assessment tools for a combinatorial production problem.
DOT National Transportation Integrated Search
2008-01-01
NP-complete combinatorial problems often necessitate the use of near-optimal solution techniques including heuristics and metaheuristics. The addition of multiple optimization criteria can further complicate comparison of these solution technique...
NASA Astrophysics Data System (ADS)
Wu, Fei; Shao, Shihai; Tang, Youxi
2016-10-01
We consider simultaneous multicast downlink transmit and receive operations on the same frequency band, also known as full-duplex links, between an access point and mobile users. The problem of minimizing the total power of multicast transmit beamforming is considered from the viewpoint of ensuring sufficient suppression of the near-field line-of-sight self-interference and guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver of the multicast groups. Based on earlier results for multicast group beamforming, the joint problem is easily shown to be NP-hard. A semidefinite relaxation (SDR) technique with a linear-program power adjustment method is proposed to solve the NP-hard problem. Simulations show that the proposed method is feasible even when the local receive antenna in the near field and the mobile user in the far field are in the same direction.
A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems
NASA Astrophysics Data System (ADS)
Thammano, Arit; Teekeng, Wannaporn
2015-05-01
The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
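Standard roulette wheel selection, which the paper's fuzzy variant builds on, can be sketched as follows; the fuzzy version replaces the raw fitness values with fuzzy membership grades, a step omitted here:

```python
import random

def roulette_select(population, fitness, rng=random):
    """Pick one individual with probability proportional to its fitness
    (higher fitness -> larger slice of the wheel)."""
    total = sum(fitness)
    r = rng.uniform(0, total)
    acc = 0.0
    for ind, fit in zip(population, fitness):
        acc += fit
        if r <= acc:
            return ind
    return population[-1]   # guard against floating-point rounding

random.seed(0)
pop = ["s1", "s2", "s3"]
fit = [1.0, 1.0, 8.0]       # s3 should be chosen ~80% of the time
picks = [roulette_select(pop, fit) for _ in range(1000)]
print(picks.count("s3") / 1000)
```

In a job-shop GA, `population` would hold encoded schedules and `fitness` a makespan-derived score.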
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
The TSP-approach to approximate solving the m-Cycles Cover Problem
NASA Astrophysics Data System (ADS)
Gimadi, Edward Kh.; Rykov, Ivan; Tsidulko, Oxana
2016-10-01
In the m-Cycles Cover problem it is required to find a collection of m vertex-disjoint cycles that covers all vertices of the graph such that the total weight of edges in the cover is minimum (or maximum). The problem is a generalization of the Traveling Salesman Problem and is strongly NP-hard. We discuss a TSP-approach that gives polynomial-time approximate solutions for this problem: it transforms an approximation TSP algorithm into an approximation m-CCP algorithm. In this paper we present a number of successful transformations with proven performance guarantees for the obtained solutions.
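The core of such a transformation can be sketched as follows: run any TSP approximation to obtain a Hamiltonian tour, then cut it into m consecutive segments and close each into a cycle. The equal-size split below is an illustrative simplification; the paper's transformations choose cut points so as to preserve the performance guarantees:

```python
def tour_to_m_cycles(tour, m):
    """Split one Hamiltonian tour into m vertex-disjoint cycles by
    cutting it into m consecutive segments; each segment, closed by an
    edge from its last vertex back to its first, is one cycle."""
    n = len(tour)
    size, rem = divmod(n, m)
    cycles, start = [], 0
    for i in range(m):
        end = start + size + (1 if i < rem else 0)  # spread the remainder
        cycles.append(tour[start:end])
        start = end
    return cycles

print(tour_to_m_cycles([0, 1, 2, 3, 4, 5, 6], 3))  # → [[0, 1, 2], [3, 4], [5, 6]]
```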
Parallel-Batch Scheduling and Transportation Coordination with Waiting Time Constraint
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing, with a waiting time constraint. The orders located at the different suppliers are transported by some vehicles to a manufacturing facility for further processing. One vehicle can load only one order in one shipment. Each order arriving at the facility must be processed within the limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule to minimize the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a dynamic programming algorithm in pseudopolynomial time is provided to establish its ordinary NP-hardness. An optimal algorithm in polynomial time is presented to solve a special case with equal processing times and equal transportation times for each order. PMID:24883385
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
NASA Astrophysics Data System (ADS)
Lin, Geng; Guan, Jian; Feng, Huibin
2018-06-01
The positive influence dominating set problem is a variant of the minimum dominating set problem and has many applications in social networks. It is NP-hard and has received increasing attention. Various methods have been proposed to solve the positive influence dominating set problem. However, most of the existing work focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear program (ILP), and propose an ILP based memetic algorithm (ILPMA) for solving the problem. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with up to 36,692 nodes. The results show that ILPMA significantly improves the solution quality, and is robust.
Honey bee-inspired algorithms for SNP haplotype reconstruction problem
NASA Astrophysics Data System (ADS)
PourkamaliAnaraki, Maryam; Sadeghi, Mehdi
2016-03-01
Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been considerable interest in this field because haplotypes have been shown to contain promising data for disease association research. It has been proved that haplotype reconstruction in the Minimum Error Correction model is an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed in order to solve this problem in reasonable time. In this paper, we focus on various evolutionary clustering techniques and try to find an efficient technique for solving the haplotype reconstruction problem. Our experiments suggest that clustering methods relying on the behaviour of honey bee colonies in nature, specifically the bees algorithm and artificial bee colony methods, result in more efficient solutions. An application program of the methods is available at the following link. http://www.bioinf.cs.ipm.ir/software/haprs/
Solving optimization problems by the public goods game
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2017-09-01
We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are each provided with a random solution to the given problem. Agents then interact by playing the Public Goods Game, using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method can compute exact solutions, as well as suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.
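A much-simplified imitation dynamics in the spirit of this method might look as follows; the actual contribution/payoff mechanics of the Public Goods Game are omitted, leaving only the imitate-and-perturb core, and the 4-city instance is an illustrative assumption:

```python
import random

def tour_len(tour, dist):
    # Length of the closed tour, including the edge back to the start.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def pgg_style_search(dist, agents=20, rounds=200, seed=0):
    """Imitation dynamics sketch: agents hold candidate tours; in each
    round the agent with the longer tour copies the shorter one, then
    applies a random 2-opt perturbation to explore."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(agents)]
    for _ in range(rounds):
        i, j = rng.sample(range(agents), 2)
        if tour_len(pop[i], dist) > tour_len(pop[j], dist):
            i, j = j, i                 # make i the fitter (richer) agent
        pop[j] = pop[i][:]              # poorer agent imitates the richer one
        a, b = sorted(rng.sample(range(n), 2))
        pop[j][a:b + 1] = reversed(pop[j][a:b + 1])   # 2-opt perturbation
    return min(pop, key=lambda t: tour_len(t, dist))

# Four cities on a unit square (illustrative); the optimal tour length is 4.
D = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
best = pgg_style_search(D)
print(tour_len(best, D))  # → 4
```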
A neural network approach to job-shop scheduling.
Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E
1991-01-01
A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, namely job-shop scheduling. In contrast to most neural approaches to combinatorial optimization based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.
The Cyclic Nature of Problem Solving: An Emergent Multidimensional Problem-Solving Framework
ERIC Educational Resources Information Center
Carlson, Marilyn P.; Bloom, Irene
2005-01-01
This paper describes the problem-solving behaviors of 12 mathematicians as they completed four mathematical tasks. The emergent problem-solving framework draws on the large body of research, as grounded by and modified in response to our close observations of these mathematicians. The resulting "Multidimensional Problem-Solving Framework" has four…
End-to-End Network QoS via Scheduling of Flexible Resource Reservation Requests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, S.; Katramatos, D.; Yu, D.
2011-11-14
Modern data-intensive applications move vast amounts of data between multiple locations around the world. To enable predictable and reliable data transfer, next generation networks allow such applications to reserve network resources for exclusive use. In this paper, we solve an important problem (called SMR3) to accommodate multiple and concurrent network reservation requests between a pair of end-sites. Given the varying availability of bandwidth within the network, our goal is to accommodate as many reservation requests as possible while minimizing the total time needed to complete the data transfers. We first prove that SMR3 is an NP-hard problem. Then we solve it by developing a polynomial-time heuristic, called RRA. The RRA algorithm hinges on an efficient mechanism to accommodate a large number of requests by minimizing bandwidth wastage. Finally, via numerical results, we show that RRA constructs schedules that accommodate a significantly larger number of requests compared to other, seemingly efficient, heuristics.
Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster.
Fan, Hangyu; Wang, Huandong; Li, Yong
2018-01-23
Decentralized clustering in modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which prevent a failure at a single point from bringing down the entire system. Recently, toolkits such as Akka have been commonly used to easily build such clusters. However, clusters of this kind, which use Gossip as their membership protocol and rely on link-failure detection, cannot deal with the scenario where a node stochastically drops packets and corrupts the member status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by application-layer data, that solve these two subproblems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm has good performance.
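As a sketch of the second subproblem, a greedy heuristic for finding a large clique in the thresholded connectivity graph might look as follows; since max clique is NP-complete, this is only an approximation, and it is not the paper's estimation model:

```python
def greedy_clique(adj):
    """Greedy max-clique heuristic on an adjacency-set dict: repeatedly
    add the candidate with the most remaining candidate neighbors, then
    restrict candidates to that node's neighborhood."""
    clique = []
    candidates = set(adj)
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.append(v)
        candidates &= adj[v]   # keep only nodes adjacent to the whole clique
    return sorted(clique)

# Nodes 0-3 are fully interconnected (healthy); node 4 only sees node 0.
adj = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(greedy_clique(adj))  # → [0, 1, 2, 3]
```

In the healthy-node-sensing setting, the resulting clique would be interpreted as the set of nodes with mutually reliable links.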
NASA Astrophysics Data System (ADS)
Kassa, Semu Mitiku; Tsegay, Teklay Hailay
2017-08-01
Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of hierarchy. Such problems are common in management, engineering designs and in decision making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying fuzzy goal programming approach and by reformulating the fractional constraints to equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrated that the proposed algorithm is very much promising and it can also be used to solve larger-sized as well as n-level problems of similar structure.
Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network
Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan
2014-01-01
Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods for the target coverage problem in traditional sensor networks, which assume a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented separately for the single-sensor single-target, multisensor single-target, and single-sensor multitarget problems. For the multisensor multitarget problem, which is NP-complete, the candidate orientations are those to which a sensor can rotate so as to cover each target falling within its FoV disk; applying a genetic algorithm over these candidates yields an approximately minimum subset of sensors that covers all targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset. PMID:25136667
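A rough illustration of a score of this kind follows; the function names and the linear weighting of angle and distance are assumptions for the sketch, not the paper's exact definition of expected coverage:

```python
import math

def expected_coverage(sensor, orientation, target, radius):
    """Illustrative score in [0, 1]: zero when the target lies outside the
    sensor's FoV disk, otherwise larger when the target is both closer and
    better aligned with the sensor's current orientation."""
    dx, dy = target[0] - sensor[0], target[1] - sensor[1]
    dist = math.hypot(dx, dy)
    if dist > radius:
        return 0.0
    # Deflection between target bearing and orientation, wrapped into [0, pi].
    deflection = abs((math.atan2(dy, dx) - orientation + math.pi) % (2 * math.pi) - math.pi)
    angle_term = 1.0 - deflection / math.pi
    dist_term = 1.0 - dist / radius
    return angle_term * dist_term

# A target straight ahead at 1/5 of the disk radius scores 1.0 * 0.8.
print(expected_coverage((0, 0), 0.0, (1, 0), 5.0))  # 0.8
```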
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is known to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on the recently proposed metaheuristic chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. We have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
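For context, the simplest baseline for this scheduling task is greedy list scheduling, sketched below (this is a generic baseline, not TMSCRO, and inter-node communication costs are ignored for brevity):

```python
from collections import deque

def list_schedule(deps, cost):
    """Greedy list scheduling of a DAG on heterogeneous nodes.
    deps: task -> set of predecessor tasks.
    cost: task -> {node: execution time of the task on that node}."""
    indeg = {t: len(p) for t, p in deps.items()}
    succ = {t: [] for t in deps}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    node_free = {n: 0.0 for n in next(iter(cost.values()))}
    finish, assignment = {}, {}
    while ready:
        t = ready.popleft()
        earliest = max((finish[p] for p in deps[t]), default=0.0)
        # Assign t to the node on which it would finish soonest.
        best = min(node_free, key=lambda n: max(node_free[n], earliest) + cost[t][n])
        start = max(node_free[best], earliest)
        finish[t] = start + cost[t][best]
        node_free[best] = finish[t]
        assignment[t] = best
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return assignment, max(finish.values())

deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
cost = {"a": {0: 2, 1: 3}, "b": {0: 1, 1: 1}, "c": {0: 2, 1: 1}, "d": {0: 1, 1: 2}}
assignment, makespan = list_schedule(deps, cost)
print(makespan)  # 4.0
```

Metaheuristics such as CRO aim to beat this kind of greedy choice by exploring many candidate orderings and assignments.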
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendant information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
From Constraints to Resolution Rules Part II : chains, braids, confluence and T&E
NASA Astrophysics Data System (ADS)
Berthier, Denis
In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory, and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some Resolution Theories based on braids and show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing, and we comment on this result in the Sudoku case.
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Numerical experiments on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
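The key idea of a priority-based chromosome is that each gene assigns a priority to a node, and a path is decoded by always moving to the unvisited neighbor of highest priority. A simplified decoding in this spirit (the graph and priority values are made up for illustration):

```python
def decode_priority_path(priorities, adj, source, sink):
    """Decode a priority-based chromosome into a path: from the current node,
    always move to the unvisited neighbor with the highest priority."""
    path, visited, current = [source], {source}, source
    while current != sink:
        candidates = [n for n in adj[current] if n not in visited]
        if not candidates:
            return None  # dead end: this chromosome decodes to no feasible path
        current = max(candidates, key=lambda n: priorities[n])
        visited.add(current)
        path.append(current)
    return path

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}
priorities = {1: 4, 2: 1, 3: 3, 4: 2}  # one gene per node
print(decode_priority_path(priorities, adj, 1, 4))  # [1, 3, 4]
```

Crossover and mutation then operate on the priority vectors, and every chromosome decodes to at most one path, which is what makes the encoding convenient for GAs.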
Monkey search algorithm for ECE components partitioning
NASA Astrophysics Data System (ADS)
Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.
2018-05-01
The paper considers one of the important design problems – the partitioning of electronic computer equipment (ECE) components (blocks). It belongs to the NP-hard class of problems and has a combinatorial and logical nature. In the paper, the partitioning problem is formulated as a partition of a graph into parts. To solve the given problem, the authors suggest a bioinspired approach based on a monkey search algorithm. Based on the developed software, computational experiments were carried out that show the algorithm's efficiency, as well as its recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. 
Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.
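As a flavor of such a mapping (a generic textbook penalty, not either of the report's two specific mappings), an "exactly one of these n Boolean variables is true" constraint becomes the QUBO penalty (Σᵢ xᵢ − 1)², which for binary xᵢ expands to −Σᵢ xᵢ + 2 Σ_{i<j} xᵢxⱼ + 1:

```python
from itertools import combinations

def exactly_one_qubo(n):
    """QUBO coefficients for the penalty (sum_i x_i - 1)^2 over binary x_i.
    Returns an upper-triangular coefficient dict Q and a constant offset;
    the penalty is zero exactly on one-hot assignments, positive elsewhere."""
    Q = {(i, i): -1.0 for i in range(n)}  # x_i^2 == x_i for binary variables
    for i, j in combinations(range(n), 2):
        Q[(i, j)] = 2.0
    return Q, 1.0

def qubo_energy(Q, offset, x):
    return offset + sum(c * x[i] * x[j] for (i, j), c in Q.items())

Q, off = exactly_one_qubo(3)
print(qubo_energy(Q, off, (0, 1, 0)))  # 0.0  (feasible one-hot state)
print(qubo_energy(Q, off, (1, 1, 0)))  # 1.0  (penalized)
```

Summing such penalties over all constraints yields the single quadratic objective a quantum annealer minimizes.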
Computing with motile bio-agents
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Burrage, Kevin; Nicolau, Dan V.
2007-12-01
We describe a model of computation of the parallel type, which we call 'computing with bio-agents', based on the concept that motions of biological objects such as bacteria or protein molecular motors in confined spaces can be regarded as computations. We begin with the observation that the geometric nature of the physical structures in which model biological objects move modulates the motions of the latter. Consequently, by changing the geometry, one can control the characteristic trajectories of the objects; on the basis of this, we argue that such systems are computing devices. We investigate the computing power of mobile bio-agent systems and show that they are computationally universal in the sense that they are capable of computing any Boolean function in parallel. We argue also that using appropriate conditions, bio-agent systems can solve NP-complete problems in probabilistic polynomial time.
Teaching Problem Solving without Modeling through "Thinking Aloud Pair Problem Solving."
ERIC Educational Resources Information Center
Pestel, Beverly C.
1993-01-01
Reviews research relevant to the problem of unsatisfactory student problem-solving abilities and suggests a teaching strategy that addresses the issue. Author explains how she uses teaching aloud problem solving (TAPS) in college chemistry and presents evaluation data. Among the findings are that the TAPS class got fewer problems completely right,…
Genetic algorithm parameters tuning for resource-constrained project scheduling problem
NASA Astrophysics Data System (ADS)
Tian, Xingke; Yuan, Shengrui
2018-04-01
The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a certain optimization goal, such as the shortest duration, the smallest cost, or resource balance, it is required to arrange the start and finish of all tasks while satisfying project timing constraints and resource constraints. In theory, the problem is NP-hard, and its model variants are abundant; many combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of the RCPSP. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the main parameters of the genetic algorithm: parameters are generally chosen empirically, which cannot guarantee that they are optimal. In this paper, we address this blind selection of parameters in solving the RCPSP. We perform a sampling analysis, establish a surrogate model, and ultimately solve for the optimal parameters.
NASA Astrophysics Data System (ADS)
Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán
2016-07-01
We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability problem, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically hardest to solve.
The proposed quantum algorithm can be straightforwardly extended to the generalized version of the Exact Satisfiability known as Occupation problem. The general version of the algorithm is presented and analyzed.
NASA Astrophysics Data System (ADS)
Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders
The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving it. The method divides the problem into two phases, node mapping and link mapping, both of which are NP-hard, and proposes a node mapping algorithm and a link mapping algorithm for them respectively. The node mapping algorithm adopts a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
Enhancing memory and imagination improves problem solving among individuals with depression.
McFarland, Craig P; Primosch, Mark; Maxson, Chelsey M; Stewart, Brandon T
2017-08-01
Recent work has revealed links between memory, imagination, and problem solving, and suggests that increasing access to detailed memories can lead to improved imagination and problem-solving performance. Depression is often associated with overgeneral memory and imagination, along with problem-solving deficits. In this study, we tested the hypothesis that an interview designed to elicit detailed recollections would enhance imagination and problem solving among both depressed and nondepressed participants. In a within-subjects design, participants completed a control interview or an episodic specificity induction prior to completing memory, imagination, and problem-solving tasks. Results revealed that compared to the control interview, the episodic specificity induction fostered increased detail generation in memory and imagination and more relevant steps on the problem-solving task among depressed and nondepressed participants. This study builds on previous work by demonstrating that a brief interview can enhance problem solving among individuals with depression and supports the notion that episodic memory plays a key role in problem solving. It should be noted, however, that the results of the interview are relatively short-lived.
Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control
NASA Astrophysics Data System (ADS)
Kamyar, Reza
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. 
Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.
Quiñones, Victoria; Jurska, Justyna; Fener, Eileen; Miranda, Regina
2016-01-01
Objective: Research suggests that being unable to generate solutions to problems in times of distress may contribute to suicidal thoughts and behavior, and that depression is associated with problem-solving deficits. This study examined active and passive problem solving as moderators of the association between depressive symptoms and future suicidal ideation (SI) among suicide attempters and non-attempters. Method: Young adults (n = 324, 73% female, mean age = 19, SD = 2.22) with (n = 78) and without (n = 246) a suicide attempt history completed a problem-solving task, self-report measures of hopelessness, depression, and SI at baseline, and also completed a self-report measure of SI at 6-month follow-up. Results: Passive problem solving was higher among suicide attempters but did not moderate the association between depressive symptoms and future SI. Among attempters, active problem solving buffered against depressive symptoms in predicting future SI. Conclusions: Suicide prevention should foster active problem solving, especially among suicide attempters. PMID:25760651
Physician Extender (Sr. NP or PA) | Center for Cancer Research
Be part of our mission to solve the most important, challenging and neglected problems in modern cancer research and patient care. The National Cancer Institute’s Center for Cancer Research (CCR) is a world-leading cancer research organization working toward scientific breakthroughs at medicine’s cutting edge. Our scientists can’t do it alone. It takes an extraordinary team
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
Minimizing distortion and internal forces in truss structures by simulated annealing
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
1989-01-01
Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e_M of dimension NMEMB (the number of members) and joint errors e_J of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem: when only the member length errors are considered, minimizing d^2_rms is equivalent to the quadratic assignment problem, a well-known NP-complete problem in the operations research literature. Hence minimizing d^2_rms is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d^2_rms. The plausibility of this technique rests on its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two-and-three-interchange heuristic provides less variability in the final objective function values but runs much more slowly.
Simulated annealing produced the best objective function values for every starting configuration and was faster than the two and three interchange heuristic.
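The two-interchange annealing loop described above has the following generic shape; this is a sketch with an arbitrary toy cost standing in for d^2_rms, not the authors' truss model or cooling schedule:

```python
import math
import random

def anneal(cost, start, temp=1.0, cooling=0.999, steps=5000, seed=0):
    """Two-interchange simulated annealing over permutations: swap two
    positions; accept worsening swaps with probability exp(-delta/temp)."""
    rng = random.Random(seed)
    current = list(start)
    current_cost = cost(current)
    best, best_cost = list(current), current_cost
    for _ in range(steps):
        i, j = rng.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]
        delta = cost(current) - current_cost
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current_cost += delta
            if current_cost < best_cost:
                best, best_cost = list(current), current_cost
        else:
            current[i], current[j] = current[j], current[i]  # undo the swap
        temp *= cooling
    return best, best_cost

# Toy stand-in for the objective: cost 0 when item i sits at position i.
def toy_cost(p):
    return sum((p[i] - i) ** 2 for i in range(len(p)))

start = [5, 3, 1, 4, 0, 2]
best, best_cost = anneal(toy_cost, start)
```

The geometric cooling schedule mirrors the freezing analogy in the abstract: early on, worsening swaps are accepted freely; as the temperature drops, the search settles into a low-cost configuration.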
Optimal shortening of uniform covering arrays
Rangel-Valdez, Nelson; Avila-George, Himer; Carrizalez-Turrubiates, Oscar
2017-01-01
Software test suites based on the concept of interaction testing are very useful for testing software components in an economical way. Test suites of this kind may be created using mathematical objects called covering arrays. A covering array, denoted by CA(N; t, k, v), is an N × k array over Z_v = {0,…,v−1} with the property that every N × t sub-array covers all t-tuples of Z_v^t at least once. Covering arrays can be used to test systems in which failures occur as a result of interactions among components or subsystems. They are often used in areas such as hardware Trojan detection, software testing, and network design. Because system testing is expensive, it is critical to reduce the amount of testing required. This paper addresses the Optimal Shortening of Covering ARrays (OSCAR) problem, an optimization problem whose objective is to construct, from an existing covering array matrix of uniform level, an array with dimensions (N − δ) × (k − Δ) such that the number of missing t-tuples is minimized. Two applications of the OSCAR problem are (a) to produce smaller covering arrays from larger ones and (b) to obtain quasi-covering arrays (covering arrays in which the number of missing t-tuples is small) to be used as input to a meta-heuristic algorithm that produces covering arrays. In addition, it is proven that the OSCAR problem is NP-complete, and twelve different algorithms are proposed to solve it. An experiment was performed on 62 problem instances, and the results demonstrate the effectiveness of solving the OSCAR problem to facilitate the construction of new covering arrays. PMID:29267343
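The quantity the OSCAR problem minimizes — the number of missing t-tuples after shortening — can be checked directly with a straightforward counter (exponential only in t):

```python
from itertools import combinations

def missing_tuples(array, t, v):
    """Count t-tuples over {0,...,v-1} not covered by the N x k array;
    zero if and only if the array is a covering array of strength t."""
    k = len(array[0])
    missing = 0
    for cols in combinations(range(k), t):
        covered = {tuple(row[c] for c in cols) for row in array}
        missing += v ** t - len(covered)
    return missing

ca = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]]  # a CA(4; 2, 3, 2)
print(missing_tuples(ca, 2, 2))       # 0: every column pair covers all 4 pairs
print(missing_tuples(ca[:-1], 2, 2))  # 3: deleting one row (delta = 1) loses 3 pairs
```

The second call illustrates the shortening trade-off: removing a single row from this array already leaves one pair uncovered in each of the three column pairs.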
Optimal recombination in genetic algorithms for flowshop scheduling problems
NASA Astrophysics Data System (ADS)
Kovalenko, Julia
2016-10-01
The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted to the classical flowshop scheduling problem and to the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.
NASA Astrophysics Data System (ADS)
Kreinovich, Vladik; Longpre, Luc; Starks, Scott A.; Xiang, Gang; Beck, Jan; Kandathi, Raj; Nayak, Asis; Ferson, Scott; Hajagos, Janos
2007-02-01
In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t)--e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit (DL). We must, therefore, modify the existing statistical algorithms to process such interval data. Such a modification is also necessary to process data from statistical databases, where, in order to maintain privacy, we only keep interval ranges instead of the actual numeric data (e.g., a salary range instead of the actual salary). Most resulting computational problems are NP-hard--which means, crudely speaking, that in general, no computationally efficient algorithm can solve all particular cases of the corresponding problem. In this paper, we overview practical situations in which computationally efficient algorithms exist: e.g., situations when measurements are very accurate, or when all the measurements are done with one (or few) instruments. As a case study, we consider a practical problem from bioinformatics: to discover the genetic difference between the cancer cells and the healthy cells, we must process the measurements results and find the concentrations c and h of a given gene in cancer and in healthy cells. This is a particular case of a general situation in which, to estimate states or parameters which are not directly accessible by measurements, we must solve a system of equations in which coefficients are only known with interval uncertainty. We show that in general, this problem is NP-hard, and we describe new efficient algorithms for solving this problem in practically important situations.
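The contrast between easy and NP-hard cases is visible already for the mean: because the sample mean is monotone in each x_i, its exact range under interval uncertainty follows from the interval endpoints alone, whereas the upper bound of the variance is NP-hard to compute in general. A minimal sketch of the easy case:

```python
def interval_mean(intervals):
    """Exact bounds on the sample mean when each x_i is only known to lie
    in [lo_i, hi_i]; the mean is monotone in every x_i, so plugging in the
    endpoints is optimal."""
    n = len(intervals)
    return (sum(lo for lo, _ in intervals) / n,
            sum(hi for _, hi in intervals) / n)

# A below-detection-limit reading contributes the interval [0, DL].
readings = [(0.0, 0.5), (1.2, 1.2), (0.8, 0.8)]
lo, hi = interval_mean(readings)
print(lo, hi)  # roughly 0.667 and 0.833
```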
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA) and hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and run a high risk of converging to a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving the MDVSP.
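The core AIS machinery is a clonal-selection loop: keep the fittest antibodies (candidate schedules), clone them, hypermutate the clones, and re-select. An illustrative sketch only, on an invented single-depot, single-vehicle toy route (the paper's algorithm additionally handles depot assignment and multiple vehicles):

```python
import random

def route_cost(perm, dist):
    tour = [0] + list(perm) + [0]          # depot is index 0
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def clonal_selection(dist, customers, iters=200, pop=10, clones=5, seed=1):
    rng = random.Random(seed)
    # antibodies = random visiting orders of the customers
    ab = [rng.sample(customers, len(customers)) for _ in range(pop)]
    for _ in range(iters):
        ab.sort(key=lambda p: route_cost(p, dist))    # highest affinity first
        new = ab[: pop // 2]                          # keep the elite half
        for p in list(new):                           # clone and hypermutate
            for _ in range(clones):
                c = p[:]
                i, j = rng.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]               # swap mutation
                new.append(c)
        ab = sorted(new, key=lambda p: route_cost(p, dist))[:pop]
    return ab[0], route_cost(ab[0], dist)
```

With four customers at unit spacing from the depot, the loop quickly recovers an in-order route of cost 8; population size, clone count and mutation are illustrative defaults, not the paper's tuned parameters.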
Nguyen, Cathina T; Fairclough, Diane L; Noll, Robert B
2016-01-01
Problem-solving skills training is an intervention designed to teach coping skills that has been shown to decrease negative affectivity (depressive symptoms, negative mood, and post-traumatic stress symptoms) in mothers of children with cancer. The objective of this study was to see whether mothers of children recently diagnosed with autism spectrum disorder would be receptive to receiving problem-solving skills training (feasibility trial). Participants were recruited from a local outpatient developmental clinic that is part of a university department of pediatrics. Participants were to receive eight 1-h sessions of problem-solving skills training and were asked to complete assessments prior to beginning problem-solving skills training (T1), immediately after intervention (T2), and 3 months after T2 (T3). Outcome measures assessed problem-solving skills and negative affectivity (i.e. distress). In total, 30 mothers were approached and 24 agreed to participate (80.0%). Of them, 17 mothers completed problem-solving skills training (retention rate: 70.8%). Mothers of children with autism spectrum disorder who completed problem-solving skills training had significant decreases in negative affectivity and increases in problem-solving skills. A comparison to mothers of children with cancer shows that mothers of children with autism spectrum disorder displayed similar levels of depressive symptoms but less negative mood and fewer symptoms of post-traumatic stress. Data suggest that problem-solving skills training may be an effective way to alleviate distress in mothers of children recently diagnosed with autism spectrum disorder. Data also suggest that mothers of children with autism spectrum disorder were moderately receptive to receiving problem-solving skills training. Implications are that problem-solving skills training may be beneficial to parents of children with autism spectrum disorder; modifications to improve retention rates are suggested. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
Social problem-solving interventions in medium secure settings for women.
Long, C G; Fulton, B; Dolley, O; Hollin, C R
2011-10-01
Problem-solving interventions are a feature of overall medium secure treatment programmes. However, despite the relevance of such treatment to personality disorder, there are few descriptions of such interventions for women. Beneficial effects for women who completed social problem-solving group treatment were evident on a number of psychometric assessments. A treatment non-completion rate of one-third raises questions of both acceptability and timing of cognitive behavioural interventions.
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Chen, C. L.
1989-01-01
Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, which consists of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processor and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. The minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by the heuristic algorithm with simulated annealing. These proposed optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and the validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
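The portfolio effect can be illustrated with a classical toy model (my simplification, not the paper's quantum analysis): if each run of a stochastic search succeeds at each step with probability p, its run time is geometric, and the minimum over k independent copies is again geometric with success probability 1-(1-p)^k, so both mean and variance drop at the price of k-fold work per step.

```python
# Mean and variance of run time under a geometric success model.
def geometric_stats(p):
    mean = 1.0 / p
    var = (1.0 - p) / p ** 2
    return mean, var

def portfolio_stats(p, k):
    # min of k i.i.d. geometric(p) times is geometric(1 - (1-p)^k)
    return geometric_stats(1.0 - (1.0 - p) ** k)

m1, v1 = geometric_stats(0.01)       # single run: mean 100 steps
m4, v4 = portfolio_stats(0.01, 4)    # 4-algorithm portfolio: mean ~25.4 steps
```

The trade-off the paper optimizes is exactly this: composition lowers both expected time and its uncertainty, which matters when completion-time risk, not just mean time, is the cost.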
Optimal matching for prostate brachytherapy seed localization with dimension reduction.
Lee, Junghoon; Labat, Christian; Jain, Ameet K; Song, Danny Y; Burdette, Everette C; Fichtinger, Gabor; Prince, Jerry L
2009-01-01
In prostate brachytherapy, x-ray fluoroscopy has been used for intra-operative dosimetry to provide qualitative assessment of implant quality. More recent developments have made possible 3D localization of the implanted radioactive seeds. This is usually modeled as an assignment problem and solved by resolving the correspondence of seeds. It is, however, NP-hard, and the problem is even harder in practice due to the significant number of hidden seeds. In this paper, we propose an algorithm that can find an optimal solution from multiple projection images with hidden seeds. It solves an equivalent problem with reduced dimensional complexity, thus allowing us to find an optimal solution in polynomial time. Simulation results show the robustness of the algorithm. It was validated on 5 phantom and 18 patient datasets, successfully localizing the seeds with a detection rate of ≥ 97.6% and a reconstruction error of ≤ 1.2 mm. This is considered to be clinically excellent performance.
Temporal Constraint Reasoning With Preferences
NASA Technical Reports Server (NTRS)
Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca
2001-01-01
A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
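The tractable core behind the shortest-path connection is the Simple Temporal Network: a constraint lo ≤ t_v - t_u ≤ hi becomes two weighted edges, and single-source shortest paths give the tightest consistent event times (or detect inconsistency as a negative cycle). A sketch using Bellman-Ford from a reference event t_0 = 0; the preference layer of the paper then selects among these feasible times, e.g. "as small as possible" picks an extreme point.

```python
def stn_latest_times(n, constraints):
    # constraints: list of (u, v, lo, hi) meaning lo <= t_v - t_u <= hi.
    # Encode as edge u->v with weight hi and v->u with weight -lo; shortest
    # distances from node 0 are the latest feasible times relative to t_0 = 0.
    edges = []
    for u, v, lo, hi in constraints:
        edges.append((u, v, hi))
        edges.append((v, u, -lo))
    d = [float("inf")] * n
    d[0] = 0.0
    for _ in range(n - 1):                 # Bellman-Ford relaxation rounds
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:                  # negative cycle => inconsistent STN
        if d[u] + w < d[v]:
            return None
    return d
```

For events 0, 1, 2 with delays [1,3], [2,4] and an end-to-end window [0,5], the latest times are 0, 3, 5; contradictory constraints return None.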
Two-Stage orders sequencing system for mixed-model assembly
NASA Astrophysics Data System (ADS)
Zemczak, M.; Skolud, B.; Krenczyk, D.
2015-11-01
In the paper, the authors focus on the NP-hard problem of orders sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is an assembly line in an automotive industry company, on which a few different models of products, each in a certain number of versions, are assembled on shared resources set in a line. Such a production type is usually described as mixed-model production, and it arose from the necessity of manufacturing customized products on the basis of very specific orders from single clients. Producers are nowadays obliged to let each client specify a large number of the features of the product they are willing to buy, as competition in the automotive market is intense. Due to the previously mentioned NP-hard nature of the problem, only satisfactory solutions are sought in the given time period, as no method of finding the optimal solution has yet been found. Most researchers who applied approximate methods (e.g. evolutionary algorithms) to sequencing problems dropped the research after the testing phase, as they were not able to obtain reproducible results and had difficulty assessing the quality of the obtained solutions. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of determined rules implemented in a computer environment. The system itself works in two stages. The first is connected with the determination of a place in the storage buffer to which certain production orders should be sent. In the second stage, precise sets of sequences are determined and evaluated for certain parts of the storage buffer under certain criteria.
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic, upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in substantially more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
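The fixed-state idea can be sketched as follows: restrict each hypothetical ancestor to one of the observed sequences, and a Sankoff-style post-order dynamic program then finds the cheapest assignment exactly for a given tree. In this sketch, Hamming distance on equal-length toy sequences stands in for the alignment-based edge costs of the real method, and the tree dict is assumed to list the root first.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fixed_state_cost(tree, leaf_seq):
    # tree: dict internal node -> list of children (root listed first);
    # leaf_seq: leaf node -> observed sequence.
    states = sorted(set(leaf_seq.values()))      # the fixed candidate pool

    def dp(node):
        # table[state] = min subtree cost if 'node' is assigned 'state'
        if node in leaf_seq:
            return {s: (0 if s == leaf_seq[node] else float("inf"))
                    for s in states}
        child_tables = [dp(c) for c in tree[node]]
        return {s: sum(min(ct[t] + hamming(s, t) for t in states)
                       for ct in child_tables)
                for s in states}

    root = next(iter(tree))
    return min(dp(root).values())
```

For a star tree over leaves AAA, AAT, ATT, the best fixed-state ancestor is AAT with total cost 2; the paper's search additionally varies which states enter the candidate pool.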
Problem solving for depressed suicide attempters and depressed individuals without suicide attempt.
Roskar, Saska; Zorko, Maja; Bucik, Valentin; Marusic, Andrej
2007-12-01
Next to feelings of hopelessness, certain cognitive features such as problem solving deficiency, attentional bias and reduced future positive thinking are involved in the development and maintenance of suicidal behavior. The aim of this study was to examine feelings of hopelessness and problem solving ability in depressed suicide attempters and depressed individuals without a suicide attempt and to see whether these features change over time. Three groups of participants, depressed suicide attempters (N=23), a psychiatric control group (N=27) and healthy volunteers (N=27), completed measures of hopelessness and executive planning and problem solving abilities. The two clinical groups completed all measures shortly after admission and then again 7 weeks later, whereas the non-clinical control group completed measures at baseline only. Both clinical groups displayed a higher level of hopelessness and poorer problem solving ability when compared to non-clinical volunteers. However, no differences were found between the two clinical groups. In neither of the clinical groups was improvement in problem solving ability observed between baseline and retesting, despite the lowering of feelings of hopelessness. The diagnoses in the psychiatric control group were only obtained by the psychiatrist and not checked by further documentation or questionnaires. Furthermore, we did not control for personality traits which might influence cognitive functioning. Since feelings of hopelessness decreased over time and problem solving ability nevertheless remained stable, it is important that treatment not only focuses on mood improvement of depressed suicidal and depressed non-suicidal individuals but also on teaching problem solving techniques.
Teaching Effective Problem Solving Strategies for Interns
ERIC Educational Resources Information Center
Warren, Louis L.
2005-01-01
This qualitative study investigates what problem solving strategies interns learn from their clinical teachers during their internships. Twenty-four interns who completed their internship in the elementary grades shared what problem solving strategies had the greatest impact upon them in learning how to deal with problems during their internship.…
Generating effective project scheduling heuristics by abstraction and reconstitution
NASA Technical Reports Server (NTRS)
Janakiraman, Bhaskar; Prieditis, Armand
1992-01-01
A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion relations, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration. This objective arises in nearly every large construction project--from software to hardware to buildings. Because such project scheduling problems are NP-hard, they are typically solved by branch-and-bound algorithms. In these algorithms, lower-bound duration estimates (admissible heuristics) are used to improve efficiency. One way to obtain an admissible heuristic is to remove (abstract) all resources and mutual exclusion constraints and then obtain the minimal project duration for the abstracted problem; this minimal duration is the admissible heuristic. Although such abstracted problems can be solved efficiently, they yield inaccurate admissible heuristics precisely because those constraints that are central to solving the original problem are abstracted. This paper describes a method to reconstitute the abstracted constraints back into the solution to the abstracted problem while maintaining efficiency, thereby generating better admissible heuristics. Our results suggest that reconstitution can make good admissible heuristics even better.
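The abstraction step described above has a compact concrete form: dropping resource and mutual-exclusion constraints leaves only precedences, whose minimal project duration is the critical-path length, an admissible lower bound for branch-and-bound. A sketch on an invented instance; the paper's reconstitution refinements are omitted.

```python
def critical_path(dur, succ):
    # Longest duration-weighted path through the precedence DAG.
    # dur: job -> integer duration; succ: job -> list of successor jobs.
    memo = {}
    def longest(v):
        if v not in memo:
            memo[v] = dur[v] + max((longest(s) for s in succ.get(v, [])),
                                   default=0)
        return memo[v]
    return max(longest(v) for v in dur)
```

With jobs a (3) and b (2) both preceding c (4), the bound is 7, even though resource limits in the original problem may force a longer schedule; that gap is exactly what reconstitution tries to close.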
Uzbekova, D G
2015-01-01
The article describes the scientific activity of the outstanding pharmacologist, Academician N.P. Kravkov (1865-1924), on studying the dynamics of the vascular system in experiment. Using the method of isolated animal organs, N.P. Kravkov discovered self-maintained periodic contractions of vessels independent of the central nervous system and not associated with cardiac contractions. On isolated animal organs (heart, kidneys, spleen, womb, pancreas and others) specialists of the laboratory of N.P. Kravkov studied vascular reactions and sensitivity of vascular zones to administration of pharmacological agents in normal conditions and on various experimental "pathological" models. For studying the physiology and pharmacology of coronary vessels irrespective of the cardiac contractions masking changes in their lumen, N.P. Kravkov suggested his original method of cardiac arrest by means of administration of strophanthin, followed by passing solutions of various pharmacological substances through the vessels of the unfunctioning heart. N.P. Kravkov and his followers studied alterations in vascular tonicity on isolated organs from cadavers of people who had died of various diseases: tuberculosis, typhoid fever and epidemic typhus, scarlet fever, measles, diphtheria, pneumonia, etc. The scientist believed that studying the functional state of vessels on post-mortem material would make it possible to more precisely and accurately solve the problem of intravital alterations thereof. N.P. Kravkov's works on the physiology and pathology of the vascular system served as the basis for the developing clinical discipline of angiology.
Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster
Fan, Hangyu; Wang, Huandong; Li, Yong
2018-01-01
Decentralized clustering of modern information technology has been widely adopted in various fields in recent years. One of the main reasons is high availability and failure tolerance, which prevent a failure at a single point from bringing down the entire system. Recently, toolkits such as Akka have been commonly used to easily build such clusters. However, clusters of this kind, which use Gossip as their membership-managing protocol and a link-failure-detection mechanism to detect link failures, cannot deal with the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating the link quality and finding a maximum clique (NP-complete) in the connectivity graph. We then propose an algorithm that consists of two models driven by data from the application layer to solve these two problems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm has good performance. PMID:29360792
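The maximum-clique step is NP-complete in general, but for cluster-sized graphs an exact search is practical. A sketch of Bron-Kerbosch (without pivoting) over an adjacency-set graph; the link-quality model that produces the graph is the paper's other component and is not shown.

```python
def max_clique(adj):
    # adj: node -> set of neighbouring nodes (healthy links).
    best = []
    def bk(r, p, x):
        # r: current clique, p: candidates, x: already-explored nodes
        nonlocal best
        if not p and not x:
            if len(r) > len(best):
                best = list(r)
            return
        for v in list(p):
            bk(r + [v], p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    bk([], set(adj), set())
    return best
```

On a triangle with one pendant node, the healthy core {0, 1, 2} is recovered; in the node-health setting the clique members are the nodes whose mutual links all look good.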
A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.
Gallardo, José E
2012-01-01
The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, Artificial Intelligence and especially Bioinformatics. Due to its NP-hardness, we cannot expect to solve this problem efficiently using conventional exact techniques. This paper presents a heuristic to tackle the problem based on using, at different levels, a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better-quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but the required execution times may increase considerably.
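For contrast with the paper's method, here is plain deterministic Beam Search for the shortest common supersequence: a state records how far each input string has been consumed, each extension appends one symbol, and only the most advanced `beam` states survive each round. The paper's variant selects beam members probabilistically and nests this procedure at several levels; neither refinement is shown.

```python
def scs_beam(strings, beam=5):
    goal = tuple(len(s) for s in strings)
    frontier = {tuple(0 for _ in strings): ""}   # positions -> supersequence
    while True:
        nxt = {}
        for pos, sup in frontier.items():
            # candidate next symbols: whatever some unfinished string needs
            for ch in {s[i] for s, i in zip(strings, pos) if i < len(s)}:
                new = tuple(i + (1 if i < len(s) and s[i] == ch else 0)
                            for s, i in zip(strings, pos))
                if new not in nxt:
                    nxt[new] = sup + ch
        if goal in nxt:
            return nxt[goal]
        # prune: keep the 'beam' states that consumed the most characters
        frontier = dict(sorted(nxt.items(), key=lambda kv: -sum(kv[0]))[:beam])
```

Like any beam search this is not guaranteed optimal: a narrow beam can discard the state that leads to the true shortest supersequence, which is the gap the probabilistic multilevel scheme attacks.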
Vandermorris, Susan; Sheldon, Signy; Winocur, Gordon; Moscovitch, Morris
2013-11-01
The relationship of higher order problem solving to basic neuropsychological processes likely depends on the type of problems to be solved. Well-defined problems (e.g., completing a series of errands) may rely primarily on executive functions. Conversely, ill-defined problems (e.g., navigating socially awkward situations) may, in addition, rely on medial temporal lobe (MTL) mediated episodic memory processes. Healthy young (N = 18; M = 19; SD = 1.3) and old (N = 18; M = 73; SD = 5.0) adults completed a battery of neuropsychological tests of executive and episodic memory function, and experimental tests of problem solving. Correlation analyses and age group comparisons demonstrated differential contributions of executive and autobiographical episodic memory function to well-defined and ill-defined problem solving and evidence for an episodic simulation mechanism underlying ill-defined problem solving efficacy. Findings are consistent with the emerging idea that MTL-mediated episodic simulation processes support the effective solution of ill-defined problems, over and above the contribution of frontally mediated executive functions. Implications for the development of intervention strategies that target preservation of functional independence in older adults are discussed.
Altschuler, M D; Kassaee, A
1997-02-01
To match corresponding seed images in different radiographs so that the 3D seed locations can be triangulated automatically and without ambiguity requires (at least) three radiographs taken from different perspectives, and an algorithm that finds the proper permutations of the seed-image indices. Matching corresponding images in only two radiographs introduces inherent ambiguities which can be resolved only with the use of non-positional information obtained with intensive human effort. Matching images in three or more radiographs is an 'NP (nondeterministic polynomial time)-complete' problem. Although the matching problem is fundamental, current methods for three-radiograph seed-image matching use 'local' (seed-by-seed) methods that may lead to incorrect matchings. We describe a permutation-sampling method which not only gives good 'global' (full permutation) matches for the NP-complete three-radiograph seed-matching problem, but also determines the reliability of the radiographic data themselves, namely, whether the patient moved in the interval between radiographic perspectives.
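The "global" idea can be caricatured in a few lines: score an entire matching at once and keep the best of many sampled permutations, rather than committing seed by seed. Everything below is a stand-in; the cost is a toy 1D coordinate mismatch, whereas the real cost is the 3D triangulation residual across three films.

```python
import random

def match_cost(perm, film_a, film_b):
    # total mismatch when seed image i on film A is paired with perm[i] on B
    return sum(abs(film_a[i] - film_b[perm[i]]) for i in range(len(perm)))

def sample_matching(film_a, film_b, samples=2000, seed=0):
    rng = random.Random(seed)
    n = len(film_a)
    best = list(range(n))
    best_cost = match_cost(best, film_a, film_b)
    for _ in range(samples):                 # sample full permutations
        p = rng.sample(range(n), n)
        c = match_cost(p, film_a, film_b)
        if c < best_cost:
            best, best_cost = p, c
    return best, best_cost
```

A consistently high best cost over many samples is the kind of signal the paper uses diagnostically: if no global matching fits well, the data themselves (e.g. patient motion between films) are suspect.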
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
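The tabu-search component that replaces BBO's mutation can be sketched directly: swap two facility locations, forbid reversing a recent swap for a fixed tenure, allow tabu moves only when they beat the best solution found (aspiration). Flow and distance matrices below are toy data, and the tenure/iteration counts are illustrative, not the paper's settings.

```python
def qap_cost(perm, flow, dist):
    # perm[i] = location assigned to facility i
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_search(perm, flow, dist, iters=100, tenure=5):
    best, best_cost = perm[:], qap_cost(perm, flow, dist)
    cur, tabu, n = perm[:], {}, len(perm)
    for it in range(iters):
        move, move_cost = None, float("inf")
        for i in range(n):
            for j in range(i + 1, n):       # full swap neighbourhood
                cand = cur[:]
                cand[i], cand[j] = cand[j], cand[i]
                c = qap_cost(cand, flow, dist)
                allowed = tabu.get((i, j), -1) < it or c < best_cost
                if allowed and c < move_cost:
                    move, move_cost = (i, j), c
        if move is None:
            break
        i, j = move
        cur[i], cur[j] = cur[j], cur[i]
        tabu[(i, j)] = it + tenure          # forbid re-swapping for a while
        if move_cost < best_cost:
            best, best_cost = cur[:], move_cost
    return best, best_cost
```

Accepting the best non-tabu move even when it worsens the current solution is what lets the search climb out of the local optima that plain mutation tends to fall back into.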
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is itself an NP-hard problem. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and the middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion value and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
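The deterministic building block shared by all the compared heuristics is the permutation flow-shop makespan for fixed processing times; the middle-interval approach, for instance, fixes each uncertain time at its interval midpoint and solves that single scenario. A sketch of both pieces on toy data:

```python
def makespan(order, p):
    # p[j][m] = processing time of job j on machine m; jobs visit machines
    # 0..M-1 in order, with unlimited buffers between machines.
    machines = len(p[0])
    done = [0.0] * machines            # completion time of last job per machine
    for j in order:
        for m in range(machines):
            prev = done[m - 1] if m else 0.0   # this job's finish on machine m-1
            done[m] = max(done[m], prev) + p[j][m]
    return done[-1]

def midpoint_scenario(intervals):
    # intervals[j][m] = (lo, hi): reduce the uncertain instance to one
    # deterministic scenario by fixing every time at its midpoint
    return [[(lo + hi) / 2.0 for lo, hi in job] for job in intervals]
```

Evaluating the *regret* of a candidate order then needs the worst-case scenario for that order, which is the part the paper shows to be hard and replaces with a greedy procedure and lower bounds.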
The checkpoint ordering problem
Hungerländer, P.
2017-01-01
We suggest a new variant of a row layout problem: find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is of both theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the COP and its special cases. The general version of the COP with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the COP. Our computational experiments indicate that the COP is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the lengths of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
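A brute-force baseline makes the objective concrete and is useful for sanity-checking a DP or ILP on tiny instances: lay the departments along a line in some order, measure each department's centre-to-checkpoint distance, and minimise the weighted sum over all orderings. This assumes distances are taken to department centres, which is one common convention; it is exponential, so toy sizes only.

```python
from itertools import permutations

def cop_bruteforce(lengths, weights, checkpoint):
    # Single checkpoint at coordinate 'checkpoint'; departments packed
    # left-to-right from 0 in the chosen order, no gaps.
    n = len(lengths)
    best = float("inf")
    for order in permutations(range(n)):
        pos, cost = 0.0, 0.0
        for d in order:
            centre = pos + lengths[d] / 2.0
            cost += weights[d] * abs(centre - checkpoint)
            pos += lengths[d]
        best = min(best, cost)
    return best
```

With two departments of length 2, weights 3 and 1, and the checkpoint at coordinate 1, the heavy department is placed first (centre on the checkpoint) for a total cost of 2.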
Fischer, Axel R; Lan, Nham Thi Phuong; Wiedemann, Cornelia; Heide, Petra; Werner, Peter; Schmidt, Arndt W; Theumer, Gabriele; Knölker, Hans-Joachim
2010-04-23
A new method for determining the endocrine disrupting substance 4-nonylphenol (technical grade=mixture of isomers, 4-NP) from water samples has been developed by using 4-(2,6-dimethylhept-3-yl)phenol (4-sec-NP) as model compound. This branched monoalkylphenol is shown to serve as internal standard (IS) for the determination of technical 4-nonylphenol. To the best of our knowledge, 4-(2,6-dimethylhept-3-yl)phenol (racemic mixture) is a newly synthesized 4-nonylphenol isomer and has not been described elsewhere. Recoveries have been determined by analyzing spiked water samples from distilled water, river water and wastewater. Following acetylation, the compounds were enriched via solid phase extraction (SPE). Analyses of the compounds were performed by capillary column gas chromatography/mass spectrometry (GC/MS), operating in selected ion-monitoring (SIM) mode. The recoveries of technical 4-NP using either the newly prepared 4-sec-NP or 4-n-nonylphenol (4-n-NP) as IS have been compared. 4-sec-NP showed slightly better results. However, in the first series of experiments using wastewater, the yields for the derivatization of the two standard compounds were remarkably different. The yield for derivatization of 4-n-NP was approximately 20%, probably due to the difficult matrix of the wastewater. In contrast, the yield for the derivatization of 4-sec-NP was considerably higher (approximately 63%). This problem can be solved by increasing the concentration of the reagent used for derivatization. For better control of the clean-up process, we recommend application of 4-sec-NP as internal standard, at least in water samples with complex matrices (e.g., high content of hydroxylated compounds). Copyright 2010 Elsevier B.V. All rights reserved.
Exact solutions for species tree inference from discordant gene trees.
Chang, Wen-Chieh; Górecki, Paweł; Eulenstein, Oliver
2013-10-01
Phylogenetic analysis has to overcome the grand challenge of inferring accurate species trees from evolutionary histories of gene families (gene trees) that are discordant with the species tree along whose branches they have evolved. Two well-studied approaches to cope with this challenge are to solve either biologically informed gene tree parsimony (GTP) problems under gene duplication, gene loss, and deep coalescence, or the classic RF supertree problem that does not rely on any biological model. Despite the potential of these problems to infer credible species trees, they are NP-hard. Therefore, these problems are addressed by heuristics that typically lack any provable accuracy and precision. We describe fast dynamic programming algorithms that solve the GTP problems and the RF supertree problem exactly, and demonstrate that our algorithms can solve instances with data sets consisting of as many as 22 taxa. Extensions of our algorithms can also report the number of all optimal species trees, as well as the trees themselves. To better assess the quality of the resulting species trees that best fit the given gene trees, we also compute the worst-case species trees, their numbers, and the optimization score for each of the computational problems. Finally, we demonstrate the performance of our exact algorithms using empirical and simulated data sets, and analyze the quality of heuristic solutions for the studied problems by contrasting them with our exact solutions.
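The model-free RF objective mentioned above is built on the Robinson-Foulds distance between trees. A minimal sketch of an RF-style distance for rooted trees, computed as the symmetric difference of clade sets, follows; this is a simplification of the unrooted, split-based definition, and the tree shapes and leaf names are invented for illustration:

```python
def clades(tree):
    """Collect the non-trivial clades (frozensets of leaf names) of a
    rooted tree given as nested tuples of strings."""
    out = set()
    def leaves(t):
        if isinstance(t, str):          # a leaf: trivial clade, not recorded
            return frozenset([t])
        s = frozenset().union(*(leaves(c) for c in t))
        out.add(s)
        return s
    leaves(tree)
    return out

def rf_distance(t1, t2):
    """Symmetric-difference (Robinson-Foulds style) distance on clade sets."""
    return len(clades(t1) ^ clades(t2))

# Two five-taxon trees that disagree only on the placement of B and C.
t1 = ((('A', 'B'), 'C'), ('D', 'E'))
t2 = ((('A', 'C'), 'B'), ('D', 'E'))
d = rf_distance(t1, t2)
```

Finding the supertree minimizing the summed RF distance to all input gene trees is the NP-hard step that the paper's exact dynamic programming targets.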
Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju
2017-01-01
Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore unknown regions and TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. An experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications. PMID:28403224
An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud
NASA Astrophysics Data System (ADS)
Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.
2017-08-01
Cloud computing provides an effective way to dynamically provide numerous resources to meet customer demands. A major challenging problem for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable the cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to the clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be tackled using heuristic algorithms. In this paper, Ant Colony Optimization based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple cloud provider environment; the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to the users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes the cost, the response time, and the number of migrations.
Wade, Shari L; Walz, Nicolay C; Carey, JoAnne; McMullen, Kendra M; Cass, Jennifer; Mark, Erin; Yeates, Keith Owen
2012-11-01
To examine the results of a randomized clinical trial (RCT) of Teen Online Problem Solving (TOPS), an online problem solving therapy model, in increasing problem-solving skills and decreasing depressive symptoms and global distress for caregivers of adolescents with traumatic brain injury (TBI). Families of adolescents aged 11-18 who sustained a moderate to severe TBI between 3 and 19 months earlier were recruited from hospital trauma registries. Participants were assigned to receive a web-based, problem-solving intervention (TOPS, n = 20), or access to online resources pertaining to TBI (Internet Resource Comparison; IRC; n = 21). Parent report of problem solving skills, depressive symptoms, global distress, utilization, and satisfaction were assessed pre- and posttreatment. Groups were compared on follow-up scores after controlling for pretreatment levels. Family income was examined as a potential moderator of treatment efficacy. Improvement in problem solving was examined as a mediator of reductions in depression and distress. Forty-one participants provided consent and completed baseline assessments, with follow-up assessments completed on 35 participants (16 TOPS and 19 IRC). Parents in both groups reported a high level of satisfaction with both interventions. Improvements in problem solving skills and depression were moderated by family income, with caregivers of lower income in TOPS reporting greater improvements. Increases in problem solving partially mediated reductions in global distress. Findings suggest that TOPS may be effective in improving problem solving skills and reducing depressive symptoms for certain subsets of caregivers in families of adolescents with TBI.
Newsvendor problem under complete uncertainty: a case of innovative products.
Gaspars-Wieloch, Helena
2017-01-01
The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, the NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because there are no data available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of the Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.
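The Hurwicz ingredient of the proposed hybrid can be illustrated in isolation. The sketch below applies the classical Hurwicz criterion (optimism coefficient alpha) to newsvendor order quantities under scenario demands with unknown probabilities; the paper's actual rule also incorporates Bayes-style averaging and payoff dispersion, which are omitted here, and the prices and demands are invented:

```python
def newsvendor_payoffs(q, demands, price, cost):
    """Profit of ordering q units under each demand scenario (no salvage value)."""
    return [price * min(q, d) - cost * q for d in demands]

def hurwicz_order(quantities, demands, price, cost, alpha):
    """Pick the order quantity maximizing alpha*best-case + (1-alpha)*worst-case."""
    def score(q):
        p = newsvendor_payoffs(q, demands, price, cost)
        return alpha * max(p) + (1 - alpha) * min(p)
    return max(quantities, key=score)

demands = [10, 30, 50]          # possible demand scenarios, probabilities unknown
price, cost = 5.0, 3.0
# A pure pessimist (alpha=0) maximizes the worst case; a pure optimist
# (alpha=1) maximizes the best case.
q_pessimist = hurwicz_order(demands, demands, price, cost, alpha=0.0)
q_optimist = hurwicz_order(demands, demands, price, cost, alpha=1.0)
```

Intermediate alpha values interpolate between the cautious small order and the aggressive large one, which is exactly the attitude-to-risk lever the abstract describes.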
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-06-01
The present study examines 5- to 8-year-old children's relation reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's relational thinking. Developmental differences in problem-solving performance were largely due to deficiencies in engaging the processing strategies that are hypothesized to facilitate problem-solving performance. Feedback designed to highlight the relations between objects within the matrix improved 5- and 6-year-olds' problem-solving performance, as well as their use of appropriate processing strategies. Furthermore, children who engaged the processing strategies early on in the task were more likely to solve subsequent problems in later phases. These findings suggest that encoding relations, integrating rules, completing the model, and generalizing strategies across tasks are critical processing components that underlie relational thinking. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, provided they can be found fast enough and are sufficiently accurate for the purpose. In this paper we have performed an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
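A stripped-down evolutionary search for a single-vehicle route (a TSP-like special case of the VRP) illustrates the idea. This sketch uses truncation selection and swap mutation only, with no crossover, so it is far simpler than the genetic algorithms studied in the paper; the instance is an invented four-customer example:

```python
import random

def route_length(route, dist):
    """Total length of a closed tour visiting every customer once."""
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def evolve_route(dist, pop_size=30, generations=200, seed=0):
    """Swap-mutation evolutionary search for a short closed tour."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        survivors = pop[:pop_size // 2]            # truncation selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)         # swap two positions
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda r: route_length(r, dist))

# Four customers at the corners of a unit square: the optimal closed
# tour is the perimeter, with length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
best = evolve_route(dist)
```

Real VRP encodings additionally split the permutation into per-vehicle routes and respect capacity constraints; the selection-and-mutation loop above stays the same in spirit.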
NASA Astrophysics Data System (ADS)
Miranowicz, Adam; Bartkiewicz, Karol; Lambert, Neill; Chen, Yueh-Nan; Nori, Franco
2015-12-01
If a single-mode nonclassical light is combined with the vacuum on a beam splitter, then the output state is entangled. As proposed in [Phys. Rev. Lett. 94, 173602 (2005), 10.1103/PhysRevLett.94.173602], by measuring this output-state entanglement for a balanced lossless beam splitter, one can quantify the input-state nonclassicality. These measures of nonclassicality (referred to as entanglement potentials) can be based, in principle, on various entanglement measures, leading to the negativity (NP) and concurrence (CP) potentials, and the potential for the relative entropy of entanglement (REEP). We search for the maximal relative nonclassicality, which can be achieved by comparing two entanglement measures for (i) arbitrary two-qubit states and (ii) those which can be generated from a photon-number qubit via a balanced lossless beam splitter, where the qubit basis states are the vacuum and single-photon states. Surprisingly, we find that the maximal relative nonclassicality, measured by the REEP for a given value of the NP, can be increased (if NP <0.527 ) by using either a tunable beam splitter or by amplitude damping of the output state of the balanced beam splitter. We also show that the maximal relative nonclassicality, measured by the NP for a given value of the REEP, can be increased by phase damping (dephasing). Note that the entanglement itself is not increased by these losses (since they act locally), but the possible ratios of different measures are affected. Moreover, we show that partially dephased states can be more nonclassical than both pure states and completely dephased states, by comparing the NP for a given value of the REEP. Thus, one can conclude that not all standard entanglement measures can be used as entanglement potentials. Alternatively, one can infer that a single balanced lossless beam splitter is not always transferring the whole nonclassicality of its input state into the entanglement of its output modes. 
The application of a lossy beam splitter can solve this problem, at least for the cases analyzed in this paper.
Sequential Quadratic Programming Algorithms for Optimization
1989-08-01
Sequential quadratic programming (SQP) seems to be regarded as the best choice for the solution of small, dense problems. ... For the step along d, a bound of the form ||p|| < ... is obtained; the inequalities in this scanned excerpt are illegible. ... Nevertheless, many of these problems are considered hard to solve. Moreover, for some of these problems the assumptions made in Chapter 2 to establish the
Viola, Adrienne; Taggi-Pinto, Alison; Sahler, Olle Jane Z; Alderfer, Melissa A; Devine, Katie A
2018-05-01
Some adolescents with cancer report distress and unmet needs. Guided by the disability-stress-coping model, we evaluated associations among problem-solving skills, parent-adolescent cancer-related communication, parent-adolescent dyadic functioning, and distress in adolescents with cancer. Thirty-nine adolescent-parent dyads completed measures of these constructs. Adolescents were 14-20 years old on treatment or within 1 year of completing treatment. Better problem-solving skills were correlated with lower adolescent distress (r = -0.70, P < 0.001). Adolescent-reported cancer-related communication problems and dyadic functioning were not significantly related to adolescent distress (rs < 0.18). Future work should examine use of problem-solving interventions to decrease distress for adolescents with cancer. © 2018 Wiley Periodicals, Inc.
Teaching Semantic Tableaux Method for Propositional Classical Logic with a CAS
ERIC Educational Resources Information Center
Aguilera-Venegas, Gabriel; Galán-García, José Luis; Galán-García, María Ángeles; Rodríguez-Cielos, Pedro
2015-01-01
Automated theorem proving (ATP) for Propositional Classical Logic is an algorithm to check the validity of a formula. It is a very well-known problem which is decidable but co-NP-complete. There are many algorithms for this problem. In this paper, an educationally oriented implementation of Semantic Tableaux method is described. The program has…
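For contrast with the tableaux method described above, validity can also be checked by brute-force truth-table enumeration. The sketch below is that naive baseline, not the paper's implementation; its exponential cost in the number of variables is what the co-NP-completeness result formalizes:

```python
from itertools import product

def eval_formula(f, assignment):
    """Evaluate a formula (nested tuples) under a {variable: bool} assignment."""
    op = f[0]
    if op == 'var':
        return assignment[f[1]]
    if op == 'not':
        return not eval_formula(f[1], assignment)
    if op == 'and':
        return eval_formula(f[1], assignment) and eval_formula(f[2], assignment)
    if op == 'or':
        return eval_formula(f[1], assignment) or eval_formula(f[2], assignment)
    if op == 'implies':
        return (not eval_formula(f[1], assignment)) or eval_formula(f[2], assignment)
    raise ValueError(op)

def variables(f):
    """Set of variable names occurring in the formula."""
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(g) for g in f[1:]))

def is_valid(f):
    """True iff f holds under every assignment (checks 2^n rows)."""
    vs = sorted(variables(f))
    return all(eval_formula(f, dict(zip(vs, row)))
               for row in product([False, True], repeat=len(vs)))

p, q = ('var', 'p'), ('var', 'q')
peirce = ('implies', ('implies', ('implies', p, q), p), p)   # Peirce's law
```

A tableaux prover avoids enumerating all rows by decomposing the negated formula and closing branches on contradictions, but in the worst case it too takes exponential time.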
Dyer, Joseph-Omer; Hudon, Anne; Montpetit-Tourangeau, Katherine; Charlin, Bernard; Mamede, Sílvia; van Gog, Tamara
2015-03-07
Example-based learning using worked examples can foster clinical reasoning. Worked examples are instructional tools that learners can use to study the steps needed to solve a problem. Studying worked examples paired with completion examples promotes acquisition of problem-solving skills more than studying worked examples alone. Completion examples are worked examples in which some of the solution steps remain unsolved for learners to complete. Providing learners engaged in example-based learning with self-explanation prompts has been shown to foster increased meaningful learning compared to providing no self-explanation prompts. Concept mapping and concept map study are other instructional activities known to promote meaningful learning. This study compares the effects of self-explaining, completing a concept map and studying a concept map on conceptual knowledge and problem-solving skills among novice learners engaged in example-based learning. Ninety-one physiotherapy students were randomized into three conditions. They performed a pre-test and a post-test to evaluate their gains in conceptual knowledge and problem-solving skills (transfer performance) in intervention selection. They studied three pairs of worked/completion examples in a digital learning environment. Worked examples consisted of a written reasoning process for selecting an optimal physiotherapy intervention for a patient. The completion examples were partially worked out, with the last few problem-solving steps left blank for students to complete. The students then had to engage in additional self-explanation, concept map completion or model concept map study in order to synthesize and deepen their knowledge of the key concepts and problem-solving steps. Pre-test performance did not differ among conditions. 
Post-test conceptual knowledge was higher (P < .001) in the concept map study condition (68.8 ± 21.8%) compared to the concept map completion (52.8 ± 17.0%) and self-explanation (52.2 ± 21.7%) conditions. Post-test problem-solving performance was higher (P < .05) in the self-explanation (63.2 ± 16.0%) condition compared to the concept map study (53.3 ± 16.4%) and concept map completion (51.0 ± 13.6%) conditions. Students in the self-explanation condition also invested less mental effort in the post-test. Studying model concept maps led to greater conceptual knowledge, whereas self-explanation led to higher transfer performance. Self-explanation and concept map study can be combined with worked example and completion example strategies to foster intervention selection.
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness on one or many machines and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
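One polynomially solvable special case of the lateness objective, the kind of approximating instance such a scheme can map to, is the single-machine problem 1 || L_max without release dates, which the earliest-due-date (EDD) rule solves exactly. A minimal sketch with an invented three-job instance:

```python
def max_lateness(order, proc, due):
    """Maximum lateness L_max = max_j (C_j - d_j) for a given job order."""
    t, worst = 0, float('-inf')
    for j in order:
        t += proc[j]                 # completion time C_j
        worst = max(worst, t - due[j])
    return worst

def edd_order(due):
    """Earliest-due-date rule: optimal for 1 || L_max
    (single machine, no release dates)."""
    return sorted(range(len(due)), key=lambda j: due[j])

proc = [3, 1, 4]    # processing times
due = [9, 2, 6]     # due dates
order = edd_order(due)
```

With release dates added (1 | r_j | L_max) the problem becomes NP-hard, which is precisely the gap the instance-metric approximation scheme is designed to bridge.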
Worry and problem-solving skills and beliefs in primary school children.
Parkinson, Monika; Creswell, Cathy
2011-03-01
To examine the association between worry and problem-solving skills and beliefs (confidence and perceived control) in primary school children. Children (8-11 years) were screened using the Penn State Worry Questionnaire for Children. High (N= 27) and low (N= 30) scorers completed measures of anxiety, problem-solving skills (generating alternative solutions to problems, planfulness, and effectiveness of solutions) and problem-solving beliefs (confidence and perceived control). High and low worry groups differed significantly on measures of anxiety and problem-solving beliefs (confidence and control) but not on problem-solving skills. Consistent with findings with adults, worry in children was associated with cognitive distortions, not skills deficits. Interventions for worried children may benefit from a focus on increasing positive problem-solving beliefs. ©2010 The British Psychological Society.
Hasegawa, Akira; Nishimura, Haruki; Mastuda, Yuko; Kunisato, Yoshihiko; Morimoto, Hiroshi; Adachi, Masaki
This study examined the relationship between trait rumination and the effectiveness of problem solving strategies as assessed by the Means-Ends Problem-Solving Test (MEPS) in a nonclinical population. The present study extended previous studies in terms of using two instructions in the MEPS: the second-person, actual strategy instructions, which has been utilized in previous studies on rumination, and the third-person, ideal-strategy instructions, which is considered more suitable for assessing the effectiveness of problem solving strategies. We also replicated the association between rumination and each dimension of the Social Problem-Solving Inventory-Revised Short Version (SPSI-R:S). Japanese undergraduate students ( N = 223) completed the Beck Depression Inventory-Second Edition, Ruminative Responses Scale (RRS), MEPS, and SPSI-R:S. One half of the sample completed the MEPS with the second-person, actual strategy instructions. The other participants completed the MEPS with the third-person, ideal-strategy instructions. The results showed that neither total RRS score, nor its subscale scores were significantly correlated with MEPS scores under either of the two instructions. These findings taken together with previous findings indicate that in nonclinical populations, trait rumination is not related to the effectiveness of problem solving strategies, but that state rumination while responding to the MEPS deteriorates the quality of strategies. The correlations between RRS and SPSI-R:S scores indicated that trait rumination in general, and its brooding subcomponent in particular are parts of cognitive and behavioral responses that attempt to avoid negative environmental and negative private events. Results also showed that reflection is a part of active problem solving.
Fung, Wenson; Swanson, H Lee
2017-07-01
The purpose of this study was to assess whether the differential effects of working memory (WM) components (the central executive, phonological loop, and visual-spatial sketchpad) on math word problem-solving accuracy in children (N = 413, ages 6-10) are completely mediated by reading, calculation, and fluid intelligence. The results indicated that all three WM components predicted word problem solving in the nonmediated model, but only the storage component of WM yielded a significant direct path to word problem-solving accuracy in the fully mediated model. Fluid intelligence was found to moderate the relationship between WM and word problem solving, whereas reading, calculation, and related skills (naming speed, domain-specific knowledge) completely mediated the influence of the executive system on problem-solving accuracy. Our results are consistent with findings suggesting that storage eliminates the predictive contribution of executive WM to various measures (Colom, Rebollo, Abad, & Shih, Memory & Cognition, 34: 158-171, 2006). The findings suggest that the storage component of WM, rather than the executive component, has a direct path to higher-order processing in children.
Impulsivity as a mediator in the relationship between problem solving and suicidal ideation.
Gonzalez, Vivian M; Neander, Lucía L
2018-03-15
This study examined whether three facets of impulsivity previously shown to be associated with suicidal ideation and attempts (negative urgency, lack of premeditation, and lack of perseverance) help to account for the established association between problem solving deficits and suicidal ideation. Emerging adult college student drinkers with a history of at least passive suicidal ideation (N = 387) completed measures of problem solving, impulsivity, and suicidal ideation. A path analysis was conducted to examine the mediating role of impulsivity variables in the association between problem solving (rational problem solving, positive and negative problem orientation, and avoidance style) and suicidal ideation. Direct and indirect associations through impulsivity, particularly negative urgency, were found between problem solving and severity of suicidal ideation. Interventions aimed at teaching problem solving skills, as well as self-efficacy and optimism for solving life problems, may help to reduce impulsivity and suicidal ideation. © 2018 Wiley Periodicals, Inc.
Social interaction as a heuristic for combinatorial optimization problems
NASA Astrophysics Data System (ADS)
Fontanari, José F.
2010-11-01
We investigate the performance of a variant of Axelrod's model for dissemination of culture—the Adaptive Culture Heuristic (ACH)—on solving an NP-Complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean Binary Perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^(1/4) so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
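The lattice dynamics can be sketched in simplified form. The version below substitutes an invented toy cost function (Hamming distance to a hidden target string) for the paper's perceptron classification cost, and uses a plain copy-one-differing-bit update rather than the similarity-weighted interaction of the original model; it is a structural illustration only:

```python
import random

def ach(cost, F, L, steps=20000, seed=1):
    """Simplified Adaptive Culture Heuristic on an L x L periodic lattice.

    Each agent holds a candidate bitstring of length F.  A randomly
    picked agent compares itself with a random nearest neighbor and, if
    the neighbor's string is no worse, copies one randomly chosen
    differing bit from it, so low-cost strings spread across the lattice.
    """
    rng = random.Random(seed)
    agents = [[rng.randrange(2) for _ in range(F)] for _ in range(L * L)]
    def neighbors(i):
        r, c = divmod(i, L)
        return [((r - 1) % L) * L + c, ((r + 1) % L) * L + c,
                r * L + (c - 1) % L, r * L + (c + 1) % L]
    for _ in range(steps):
        i = rng.randrange(L * L)
        j = rng.choice(neighbors(i))
        if cost(agents[j]) <= cost(agents[i]):
            diffs = [k for k in range(F) if agents[i][k] != agents[j][k]]
            if diffs:
                k = rng.choice(diffs)
                agents[i][k] = agents[j][k]
    return min(agents, key=cost)

target = [1, 0, 1, 1, 0, 0, 1, 0]               # hidden optimum (toy cost only)
cost = lambda s: sum(a != b for a, b in zip(s, target))
best = ach(cost, F=len(target), L=4, steps=20000)
```

Because there is no mutation, the dynamics can only recombine bits already present in the initial population, which is one reason the success probability in the paper depends on the agent count N.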
Solving multiconstraint assignment problems using learning automata.
Horn, Geir; Oommen, B John
2010-02-01
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. 
This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
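A minimal example of the variable-structure idea is the classical linear reward-inaction (L_RI) automaton, sketched here for two actions in a stationary environment. This is a textbook update rule, not the paper's mapping algorithms, and the reward probabilities are invented:

```python
import random

def lri_automaton(reward_prob, rate=0.1, steps=1000, seed=0):
    """Linear reward-inaction (L_RI) automaton with two actions.

    On a rewarded action the automaton shifts probability mass toward
    that action; on a penalty it leaves its probabilities unchanged
    (the "inaction" half of the scheme).
    """
    rng = random.Random(seed)
    p = [0.5, 0.5]
    for _ in range(steps):
        a = 0 if rng.random() < p[0] else 1
        if rng.random() < reward_prob[a]:       # environment rewards action a
            p[a] += rate * (1 - p[a])
            p[1 - a] = 1 - p[a]
    return p

# Action 0 is always rewarded, action 1 never: the automaton should
# concentrate nearly all its probability mass on action 0.
p = lri_automaton(reward_prob=[1.0, 0.0])
```

In the paper's setting each process holds such an automaton whose actions are candidate nodes (or partner processes), and the reward signal comes from the quality of the resulting partition.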
Differential geometric treewidth estimation in adiabatic quantum computation
NASA Astrophysics Data System (ADS)
Wang, Chi; Jonckheere, Edmond; Brun, Todd
2016-10-01
The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.
Connected Component Model for Multi-Object Tracking.
He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan
2016-08-01
In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
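The equivalence-partitioning step described above can be sketched with a standard union-find structure: observations linked by an association constraint end up in the same connected component, and each component is an independent subproblem. The constraint list below is an invented toy example:

```python
def find(parent, x):
    """Find with path halving for a union-find forest."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def connected_components(n, constraints):
    """Partition observations 0..n-1 into independent subproblems.

    Two observations that share an association constraint must be
    solved together; the equivalence classes are the connected
    components of the constraint graph.
    """
    parent = list(range(n))
    for a, b in constraints:
        ra, rb = find(parent, a), find(parent, b)
        parent[ra] = rb
    groups = {}
    for x in range(n):
        groups.setdefault(find(parent, x), []).append(x)
    return sorted(groups.values())

# Observations 0-1-2 are linked by constraints, 3-4 form a second
# component, and 5 is isolated and can be solved on its own.
parts = connected_components(6, [(0, 1), (1, 2), (3, 4)])
```

Solving each component separately is what lets the global MDA solution be assembled from small independent association subproblems.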
Semiclassical approach to finite-temperature quantum annealing with trapped ions
NASA Astrophysics Data System (ADS)
Raventós, David; Graß, Tobias; Juliá-Díaz, Bruno; Lewenstein, Maciej
2018-05-01
Recently it has been demonstrated that an ensemble of trapped ions may serve as a quantum annealer for the number-partitioning problem [Nat. Commun. 7, 11524 (2016), 10.1038/ncomms11524]. This hard computational problem may be addressed by employing a tunable spin-glass architecture. Following the proposal of the trapped-ion annealer, we study here its robustness against thermal effects; that is, we investigate the role played by thermal phonons. For an efficient description of the system, we use a semiclassical approach and benchmark it against the exact quantum evolution. The aim is to better understand and characterize how the quantum device approaches a solution of an otherwise difficult-to-solve NP-hard problem.
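The number-partitioning cost function used in such annealers has a simple Ising form: assigning each number a spin s_i = ±1, the energy E = (Σ_i s_i n_i)² vanishes exactly when the two spin groups sum to the same value. A classical brute-force baseline over all spin configurations (feasible only for small N, which is the point of seeking quantum speedup) illustrates it on an invented instance:

```python
from itertools import product

def partition_energy(spins, numbers):
    """Ising energy E = (sum_i s_i * n_i)^2; E == 0 iff a perfect split exists."""
    return sum(s * n for s, n in zip(spins, numbers)) ** 2

def ground_state(numbers):
    """Exhaustive search over all 2^N spin configurations (classical baseline)."""
    return min(product([-1, 1], repeat=len(numbers)),
               key=lambda spins: partition_energy(spins, numbers))

numbers = [4, 5, 6, 7, 8]       # 4 + 5 + 6 = 7 + 8, so a perfect partition exists
best = ground_state(numbers)
```

The annealer prepares this energy function in its tunable spin-glass couplings and relies on adiabatic evolution, rather than enumeration, to reach the ground state.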
Approximate ground states of the random-field Potts model from graph cuts
NASA Astrophysics Data System (ADS)
Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay
2018-05-01
While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
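As a toy illustration of the binary-embedding idea described above, here is a 1D sketch in which each binary "keep current label vs. switch to alpha" subproblem on a chain is solved exactly by dynamic programming, the one-dimensional analogue of a graph cut. The energy function, parameters, and all names are illustrative assumptions, not the paper's algorithm or code.

```python
def energy(labels, fields, J):
    """Toy random-field Potts chain energy: -sum_i h_i[s_i] + J * #(unequal neighbours)."""
    e = -sum(fields[i][s] for i, s in enumerate(labels))
    return e + J * sum(labels[i] != labels[i + 1] for i in range(len(labels) - 1))

def expansion_move(labels, alpha, fields, J):
    """Solve the binary 'keep current label vs switch to alpha' subproblem exactly
    by DP along the chain -- the 1D stand-in for the graph-cut step."""
    n = len(labels)
    INF = float('inf')
    dp = [[INF, INF] for _ in range(n)]       # dp[i][b]: b=0 keep, b=1 take alpha
    choice = [[0, 0] for _ in range(n)]
    for b in (0, 1):
        dp[0][b] = -fields[0][alpha if b else labels[0]]
    for i in range(1, n):
        for b in (0, 1):
            s = alpha if b else labels[i]
            for pb in (0, 1):
                ps = alpha if pb else labels[i - 1]
                cand = dp[i - 1][pb] - fields[i][s] + J * (s != ps)
                if cand < dp[i][b]:
                    dp[i][b], choice[i][b] = cand, pb
    b = 0 if dp[n - 1][0] <= dp[n - 1][1] else 1
    out = [0] * n
    for i in range(n - 1, -1, -1):            # backtrack the optimal binary move
        out[i] = alpha if b else labels[i]
        if i:
            b = choice[i][b]
    return out

def approx_ground_state(fields, q, J, sweeps=5):
    """Expansion-style heuristic: sweep over Potts labels, accepting exact
    binary moves that do not increase the energy."""
    labels = [0] * len(fields)
    for _ in range(sweeps):
        for alpha in range(q):
            cand = expansion_move(labels, alpha, fields, J)
            if energy(cand, fields, J) <= energy(labels, fields, J):
                labels = cand
    return labels

# Fields strongly favour label i at site i; weak coupling J
fields = [[5, 0, 0], [0, 5, 0], [0, 0, 5]]
labels = approx_ground_state(fields, q=3, J=0.1)
```

On this easy instance the heuristic recovers the exact ground state; in general it only guarantees a local optimum with respect to the binary moves.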
BiCluE - Exact and heuristic algorithms for weighted bi-cluster editing of biomedical data
2013-01-01
Background The explosion of biological data has dramatically transformed today's biology research. The biggest challenge to biologists and bioinformaticians is the integration and analysis of large quantities of data to provide meaningful insights. One major problem is the combined analysis of data of different types. Bi-cluster editing, a special case of clustering that partitions two different types of data simultaneously, can be applied in several biomedical scenarios. However, the underlying algorithmic problem is NP-hard. Results Here we contribute BiCluE, a software package designed to solve the weighted bi-cluster editing problem. It implements (1) an exact algorithm based on fixed-parameter tractability and (2) a polynomial-time greedy heuristic based on solving the hardest part, edge deletions, first. We evaluated its performance on artificial graphs. We then applied our implementation to real-world biomedical data, in this case GWAS data. BiCluE works on any kind of data that can be modeled as a (weighted or unweighted) bipartite graph. Conclusions To our knowledge, this is the first software package solving the weighted bi-cluster editing problem. BiCluE as well as the supplementary results are available online at http://biclue.mpi-inf.mpg.de. PMID:24565035
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well-known that dynamic programming algorithms can utilize tree decompositions to solve some NP-hard problems on graphs, where the complexity is polynomial in the number of nodes and edges in the graph but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
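The simplest instance of the decomposition-based dynamic programming described above is maximum weighted independent set on a tree, i.e. a width-1 decomposition. This sketch is illustrative only and is unrelated to the INDDGO code itself.

```python
def max_weight_independent_set(tree, weights, root=0):
    """Max-weight independent set on a tree by DP over (include, exclude) states --
    the treewidth-1 special case of the decomposition DP discussed above."""
    def dp(v, parent):
        incl, excl = weights[v], 0
        for u in tree[v]:
            if u == parent:
                continue
            ci, ce = dp(u, v)
            incl += ce            # if v is in the set, children must be out
            excl += max(ci, ce)   # if v is out, children choose freely
        return incl, excl
    return max(dp(root, -1))

# Path 0-1-2-3 with weights 1, 4, 5, 1: optimum is {0, 2} with weight 6
tree = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
best = max_weight_independent_set(tree, {0: 1, 1: 4, 2: 5, 3: 1})
```

On general graphs the same two-state table generalizes to one entry per assignment of a decomposition bag, which is where the exponential dependence on width comes from.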
Solving Tommy's Writing Problems.
ERIC Educational Resources Information Center
Burdman, Debra
1986-01-01
The article describes an approach by which word processing helps to solve some of the writing problems of learning disabled students. Aspects considered include prewriting, drafting, revising, and completing the story. (CL)
Finding long chains in kidney exchange using the traveling salesman problem.
Anderson, Ross; Ashlagi, Itai; Gamarnik, David; Roth, Alvin E
2015-01-20
As of May 2014 there were more than 100,000 patients on the waiting list for a kidney transplant from a deceased donor. Although the preferred treatment is a kidney transplant, every year there are fewer donors than new patients, so the wait for a transplant continues to grow. To address this shortage, kidney paired donation (KPD) programs allow patients with living but biologically incompatible donors to exchange donors through cycles or chains initiated by altruistic (nondirected) donors, thereby increasing the supply of kidneys in the system. In many KPD programs a centralized algorithm determines which exchanges will take place to maximize the total number of transplants performed. This optimization problem has proven challenging both in theory, because it is NP-hard, and in practice, because the algorithms previously used were unable to optimally search over all long chains. We give two new algorithms that use integer programming to optimally solve this problem, one of which is inspired by the techniques used to solve the traveling salesman problem. These algorithms provide the tools needed to find optimal solutions in practice.
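The cycle side of the KPD optimization can be illustrated in miniature: enumerate short cycles in the compatibility digraph, then pick a vertex-disjoint subset covering the most pairs. The brute-force selection below stands in for the paper's integer programming formulations; the graph encoding and names are hypothetical.

```python
from itertools import combinations, permutations

def short_cycles(arcs, n, max_len=3):
    """All directed cycles of length <= max_len in the compatibility graph.
    Arc (u, v): the donor of pair u is compatible with the patient of pair v."""
    arcset = set(arcs)
    cycles = []
    for k in range(2, max_len + 1):
        for nodes in permutations(range(n), k):
            if nodes[0] != min(nodes):   # canonical rotation, avoid duplicates
                continue
            if all((nodes[i], nodes[(i + 1) % k]) in arcset for i in range(k)):
                cycles.append(nodes)
    return cycles

def max_transplants(arcs, n, max_len=3):
    """Brute-force stand-in for the cycle-formulation ILP: choose vertex-disjoint
    cycles maximizing the number of covered pairs (one transplant each)."""
    cycles = short_cycles(arcs, n, max_len)
    best = 0
    for r in range(len(cycles) + 1):
        for chosen in combinations(cycles, r):
            used = [v for c in chosen for v in c]
            if len(used) == len(set(used)):   # vertex-disjointness
                best = max(best, len(used))
    return best

# 4 pairs: a 2-cycle (0,1) and a 3-cycle (1,2,3) overlap at pair 1
arcs = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 1)]
best = max_transplants(arcs, 4)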
Surveillance of a 2D Plane Area with 3D Deployed Cameras
Fu, Yi-Ge; Zhou, Jie; Deng, Lei
2014-01-01
As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods, most of which make very simple assumptions, have been proposed to solve it. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution requirements. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results show the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
Cross-Layer Algorithms for QoS Enhancement in Wireless Multimedia Sensor Networks
NASA Astrophysics Data System (ADS)
Saxena, Navrati; Roy, Abhishek; Shin, Jitae
Many emerging applications, such as advanced telemedicine and surveillance systems, demand that sensors deliver multimedia content with a precise level of QoS. Minimizing energy in sensor networks has been a much-explored research area, but guaranteeing QoS over sensor networks remains an open issue. In this letter we propose a cross-layer approach, combining the network and MAC layers, for QoS enhancement in wireless multimedia sensor networks. In the network layer, a statistical estimate of sensory QoS parameters is performed and a near-optimal genetic algorithmic solution is proposed to solve the NP-complete QoS-routing problem. The objective of the proposed MAC algorithm, on the other hand, is to perform QoS-based packet classification and automatic adaptation of the contention window. Simulation results demonstrate that the proposed protocol provides lower delay and better throughput, at the cost of reasonable energy consumption, compared with other existing sensory QoS protocols.
Barakat, Lamia P.; Daniel, Lauren C.; Smith, Kelsey; Robinson, M. Renée; Patterson, Chavis A.
2013-01-01
Children with sickle cell disease (SCD) are at risk for poor health-related quality of life (HRQOL). The current analysis sought to explore parent problem-solving abilities/skills as a moderator between SCD complications and HRQOL to evaluate applicability to pediatric SCD. At baseline, 83 children ages 6–12 years and their primary caregiver completed measures of the child HRQOL. Primary caregivers also completed a measure of social problem-solving. A SCD complications score was computed from medical record review. Parent problem-solving abilities significantly moderated the association of SCD complications with child self-report psychosocial HRQOL (p = .006). SCD complications had a direct effect on parent proxy physical and psychosocial child HRQOL. Enhancing parent problem-solving abilities may be one approach to improve HRQOL for children with high SCD complications; however, modification of parent perceptions of HRQOL may require direct intervention to improve knowledge and skills involved in disease management. PMID:24222378
Scheduling in the Face of Uncertain Resource Consumption and Utility
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Dearden, Richard
2003-01-01
We discuss the problem of scheduling tasks that consume uncertain amounts of a resource with known capacity and where the tasks have uncertain utility. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We show that the problems are NP-complete, and present some results that characterize the behavior of some simple heuristics over a variety of problem classes.
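The setting above, uncertain consumption against a known capacity with uncertain utility, can be illustrated with a Monte Carlo estimate of a fixed schedule's expected utility. The uniform noise model and the abort-on-overflow execution rule are assumptions made for this sketch, not the paper's model.

```python
import random

def expected_utility(schedule, capacity, trials=20000, seed=1):
    """Monte Carlo estimate of a schedule's expected utility: execute tasks in
    order, each drawing a random resource consumption and utility; a task that
    would exceed the remaining capacity aborts the rest of the schedule."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        remaining, utility = capacity, 0.0
        for mean_use, mean_util in schedule:
            use = rng.uniform(0.5 * mean_use, 1.5 * mean_use)
            if use > remaining:
                break                     # resource exhausted
            remaining -= use
            utility += rng.uniform(0.5 * mean_util, 1.5 * mean_util)
        total += utility
    return total / trials

def exceeds_bound(schedule, capacity, bound):
    """Check a schedule against a lower bound on expected utility."""
    return expected_utility(schedule, capacity) >= bound

light = [(1.0, 10.0)]    # always fits within capacity 2.0
heavy = [(10.0, 100.0)]  # never fits within capacity 4.0
eu = expected_utility(light, capacity=2.0)
```

A scheduler could wrap `exceeds_bound` in a search over task orders; the hardness result says no polynomial algorithm is expected to do this optimally in general.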
Problem Order Implications for Learning
ERIC Educational Resources Information Center
Li, Nan; Cohen, William W.; Koedinger, Kenneth R.
2013-01-01
The order of problems presented to students is an important variable that affects learning effectiveness. Previous studies have shown that solving problems in a blocked order, in which all problems of one type are completed before the student is switched to the next problem type, results in less effective performance than does solving the problems…
ERIC Educational Resources Information Center
Kelly, William E.
2005-01-01
This study explored the relationship between night-sky watching and self-reported cognitive variables: need for cognition and social problem-solving. University students (N = 140) completed the Noctcaelador Inventory, the Need for Cognition Scale, and the Social Problem Solving Inventory. The results indicated that an interest in the night-sky was…
ERIC Educational Resources Information Center
Hoffman, Bobby
2010-01-01
This study investigated the role of self-efficacy beliefs, mathematics anxiety, and working memory capacity in problem-solving accuracy, response time, and efficiency (the ratio of problem-solving accuracy to response time). Pre-service teachers completed a mathematics anxiety inventory measuring cognitive and affective dispositions for…
ERIC Educational Resources Information Center
Goode, Natassia; Beckmann, Jens F.
2010-01-01
This study investigates the relationships between structural knowledge, control performance and fluid intelligence in a complex problem solving (CPS) task. 75 participants received either complete, partial or no information regarding the underlying structure of a complex problem solving task, and controlled the task to reach specific goals.…
Amoeba-Inspired Heuristic Search Dynamics for Exploring Chemical Reaction Paths.
Aono, Masashi; Wakabayashi, Masamitsu
2015-09-01
We propose a nature-inspired model for simulating chemical reactions in a computationally resource-saving manner. The model was developed by extending our previously proposed heuristic search algorithm, "AmoebaSAT" [Aono et al. 2013], which was inspired by the spatiotemporal dynamics of a single-celled amoeboid organism that exhibits sophisticated computing capabilities in adapting to its environment efficiently [Zhu et al. 2013]. AmoebaSAT solves the satisfiability problem, an NP-complete combinatorial optimization problem [Garey and Johnson 1979], and finds a constraint-satisfying solution dramatically faster than one of the fastest known conventional stochastic local search methods [Iwama and Tamaki 2004] on a class of randomly generated problem instances [ http://www.cs.ubc.ca/~hoos/5/benchm.html ]. In cases where the problem has more than one solution, AmoebaSAT exhibits dynamic transition behavior among the various solutions. Inheriting these features of AmoebaSAT, we formulate "AmoebaChem," which explores a variety of metastable molecules in which several constraints determined by input atoms are satisfied, and which generates dynamic transition processes among the metastable molecules. AmoebaChem and its developed forms will be applied to the study of the origins of life, to discover reaction paths by which expected or unexpected organic compounds may be formed via unknown unstable intermediates, and to estimate the likelihood of each of the discovered paths.
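AmoebaSAT itself is not reproduced here, but a generic WalkSAT-style stochastic local search conveys the class of heuristic SAT solvers being compared; all names and parameters below are illustrative.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """WalkSAT-style local search. Literals are nonzero ints whose sign gives
    polarity; returns a satisfying assignment (var -> bool) or None."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))    # random-walk move
        else:                                # greedy move: flip the variable
            def broken(v):                   # that leaves fewest unsat clauses
                assign[v] = not assign[v]
                b = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return b
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = walksat([[1, 2], [-1, 3], [-2, -3]], 3)
```

AmoebaSAT replaces the random flip policy with amoeba-inspired bounceback dynamics, but the overall loop, flip variables until no clause is violated, is of this form.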
Towards a theory of automated elliptic mesh generation
NASA Technical Reports Server (NTRS)
Cordova, J. Q.
1992-01-01
The theory of elliptic mesh generation is reviewed and the fundamental problem of constructing computational space is discussed. It is argued that the construction of computational space is an NP-Complete problem and therefore requires a nonstandard approach for its solution. This leads to the development of graph-theoretic, combinatorial optimization and integer programming algorithms. Methods for the construction of two dimensional computational space are presented.
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Motkova, A. V.
2018-01-01
A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
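The solution criterion can be made concrete in code: weight each cluster's within-sum of squared distances by its cardinality, with one center given and the other taken as the cluster mean. The brute-force search below is exponential and only for tiny instances; the paper's contribution is precisely a polynomial-time 2-approximation that avoids this enumeration.

```python
from itertools import combinations

def criterion(cluster1, cluster2, c1):
    """Sum over both clusters of cardinality-weighted sums of squared
    distances to the cluster center; c1 is given, the second center
    is the mean of cluster2."""
    def sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    c2 = tuple(sum(xs) / len(cluster2) for xs in zip(*cluster2))
    return (len(cluster1) * sum(sq(p, c1) for p in cluster1)
            + len(cluster2) * sum(sq(p, c2) for p in cluster2))

def exact_partition(points, c1, m):
    """Brute force over all size-m choices for the first cluster."""
    best, best_val = None, float('inf')
    for idx in combinations(range(len(points)), m):
        s1 = [points[i] for i in idx]
        s2 = [points[i] for i in range(len(points)) if i not in idx]
        val = criterion(s1, s2, c1)
        if val < best_val:
            best, best_val = set(idx), val
    return best, best_val

# Two tight groups; the given center c1 sits on the first group
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
best, val = exact_partition(points, c1=(0.0, 0.0), m=2)
```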
Self-Affirmation Improves Problem-Solving under Stress
Creswell, J. David; Dutcher, Janine M.; Klein, William M. P.; Harris, Peter R.; Levine, John M.
2013-01-01
High levels of acute and chronic stress are known to impair problem-solving and creativity on a broad range of tasks. Despite this evidence, we know little about protective factors for mitigating the deleterious effects of stress on problem-solving. Building on previous research showing that self-affirmation can buffer stress, we tested whether an experimental manipulation of self-affirmation improves problem-solving performance in chronically stressed participants. Eighty undergraduates indicated their perceived chronic stress over the previous month and were randomly assigned to either a self-affirmation or control condition. They then completed 30 difficult remote associate problem-solving items under time pressure in front of an evaluator. Results showed that self-affirmation improved problem-solving performance in underperforming chronically stressed individuals. This research suggests a novel means for boosting problem-solving under stress and may have important implications for understanding how self-affirmation boosts academic achievement in school settings. PMID:23658751
The semantic system is involved in mathematical problem solving.
Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng
2018-02-01
Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Duquette, Lise
1999-01-01
Examines the role of metacognition, particularly problem solving strategies, in how second language students learn in a multimedia environment, studying problem solving strategies used by students completing exercises in Mydlarski and Paramskas' program, Vi-Conte. Presents recommendations for training teachers, noting that the flexibility of…
Computational complexity in entanglement transformations
NASA Astrophysics Data System (ADS)
Chitambar, Eric A.
In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm.
Our results are obtained by encoding well-studied computational problems such as polynomial identity testing and tensor rank into questions of entanglement transformation. In this way, entanglement theory provides a physical manifestation of some of the most puzzling and abstract classical computation questions.
NASA Astrophysics Data System (ADS)
Kunze, Herb; La Torre, Davide; Lin, Jianyi
2017-01-01
We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing minimization of the 1-norm instead of the 0-norm. Optimization problems involving the 0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the 1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
Achieving Crossed Strong Barrier Coverage in Wireless Sensor Network.
Han, Ruisong; Yang, Wei; Zhang, Li
2018-02-10
Barrier coverage has been widely used to detect intrusions in wireless sensor networks (WSNs). It can fulfill the monitoring task while extending the lifetime of the network. Though barrier coverage in WSNs has been intensively studied in recent years, previous research failed to consider the problem of intrusion in transversal directions. If an intruder knows the deployment configuration of sensor nodes, then there is a high probability that it may traverse the whole target region from particular directions, without being detected. In this paper, we introduce the concept of crossed barrier coverage that can overcome this defect. We prove that the problem of finding the maximum number of crossed barriers is NP-hard and integer linear programming (ILP) is used to formulate the optimization problem. The branch-and-bound algorithm is adopted to determine the maximum number of crossed barriers. In addition, we also propose a multi-round shortest path algorithm (MSPA) to solve the optimization problem, which works heuristically to guarantee efficiency while maintaining near-optimal solutions. Several conventional algorithms for finding the maximum number of disjoint strong barriers are also modified to solve the crossed barrier problem and for the purpose of comparison. Extensive simulation studies demonstrate the effectiveness of MSPA.
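A simplified reading of the multi-round shortest path idea is: repeatedly find a shortest left-to-right path through the coverage graph and remove its sensor nodes, counting node-disjoint barriers. This greedy sketch is a stand-in under assumed data structures, not the paper's MSPA.

```python
from collections import deque

def disjoint_barriers(adj, source, sink):
    """Count node-disjoint source-sink barriers greedily: BFS a shortest path,
    remove its internal (sensor) nodes, repeat until disconnected."""
    removed, count = set(), 0
    while True:
        prev = {source: None}
        queue = deque([source])
        while queue:                        # BFS for a shortest path
            u = queue.popleft()
            if u == sink:
                break
            for v in adj.get(u, []):
                if v not in prev and v not in removed:
                    prev[v] = u
                    queue.append(v)
        if sink not in prev:
            return count
        node = prev[sink]
        while node != source:               # remove internal nodes of the path
            removed.add(node)
            node = prev[node]
        count += 1

# Two sensor "rows", each a barrier from left boundary L to right boundary R
adj = {'L': ['a1', 'b1'], 'a1': ['a2'], 'a2': ['R'], 'b1': ['b2'], 'b2': ['R']}
n = disjoint_barriers(adj, 'L', 'R')
```

Greedy removal is not optimal in general, which is why the paper resorts to ILP and branch-and-bound for exact counts.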
Xu, Andrew Wei
2010-09-01
In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance [Formula: see text]. This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes, represented by adequate subgraphs, allows us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty of the circular case; this difficulty was underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm, ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it can also provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu .
Perceptual support promotes strategy generation: Evidence from equation solving.
Alibali, Martha W; Crooks, Noelle M; McNeil, Nicole M
2017-08-30
Over time, children shift from using less optimal strategies for solving mathematics problems to using better ones. But why do children generate new strategies? We argue that they do so when they begin to encode problems more accurately; therefore, we hypothesized that perceptual support for correct encoding would foster strategy generation. Fourth-grade students solved mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __) in a pre-test. They were then randomly assigned to one of three perceptual support conditions or to a Control condition. Participants in all conditions completed three mathematical equivalence problems with feedback about correctness. Participants in the experimental conditions received perceptual support (i.e., highlighting in red ink) for accurately encoding the equal sign, the right side of the equation, or the numbers that could be added to obtain the correct solution. Following this intervention, participants completed a problem-solving post-test. Among participants who solved the problems incorrectly at pre-test, those who received perceptual support for correctly encoding the equal sign were more likely to generate new, correct strategies for solving the problems than were those who received feedback only. Thus, perceptual support for accurate encoding of a key problem feature promoted generation of new, correct strategies. Statement of Contribution What is already known on this subject? With age and experience, children shift to using more effective strategies for solving math problems. Problem encoding also improves with age and experience. What the present study adds? Support for encoding the equal sign led children to generate correct strategies for solving equations. Improvements in problem encoding are one source of new strategies. © 2017 The British Psychological Society.
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with returns of non-defective merchandise. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
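The simulated annealing half of a hybrid scheme like HGSAA can be sketched on a toy uncapacitated facility location instance; the neighbourhood (toggle one facility), the cooling schedule, and all names are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def total_cost(open_set, fixed, assign_cost):
    """Uncapacitated facility location: opening costs plus each customer's
    cheapest open facility."""
    if not open_set:
        return float('inf')
    return (sum(fixed[f] for f in open_set)
            + sum(min(row[f] for f in open_set) for row in assign_cost))

def anneal(fixed, assign_cost, t0=10.0, cooling=0.995, steps=4000, seed=0):
    """Simulated annealing with a toggle-one-facility neighbourhood and
    geometric cooling; tracks the best state ever visited."""
    rng = random.Random(seed)
    n = len(fixed)
    state = set(range(n))                     # start with every facility open
    cost, t = total_cost(state, fixed, assign_cost), t0
    best, best_cost = set(state), cost
    for _ in range(steps):
        cand = state ^ {rng.randrange(n)}     # toggle a random facility
        c = total_cost(cand, fixed, assign_cost)
        # Metropolis rule: always accept improvements, sometimes accept uphill
        if c < cost or rng.random() < math.exp(min(0.0, (cost - c) / t)):
            state, cost = cand, c
            if cost < best_cost:
                best, best_cost = set(state), cost
        t *= cooling
    return best, best_cost

fixed = [2.0, 2.0, 50.0]                      # opening costs for 3 facilities
assign = [[0, 10, 10], [0, 10, 10], [10, 0, 10], [10, 0, 10]]  # 4 customers
best, best_cost = anneal(fixed, assign)
```

In the hybrid algorithm, a genetic layer would maintain a population of such open/closed bitstrings and use annealing-style acceptance inside its local refinement.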
Learning and interactivity in solving a transformation problem.
Guthrie, Lisa G; Vallée-Tourangeau, Frédéric; Vallée-Tourangeau, Gaëlle; Howard, Chelsea
2015-07-01
Outside the psychologist's laboratory, thinking proceeds on the basis of a great deal of interaction with artefacts that are recruited to augment problem-solving skills. The role of interactivity in problem solving was investigated using a river-crossing problem. In Experiment 1A, participants completed the same problem twice, once in a low interactivity condition, and once in a high interactivity condition (with order counterbalanced across participants). Learning, as gauged in terms of latency to completion, was much more pronounced when the high interactivity condition was experienced second. When participants first completed the task in the high interactivity condition, transfer to the low interactivity condition during the second attempt was limited; Experiment 1B replicated this pattern of results. Participants thus showed greater facility to transfer their experience of completing the problem from a low to a high interactivity condition. Experiment 2 was designed to determine the amount of learning in a low and high interactivity condition; in this experiment participants completed the problem twice, but level of interactivity was manipulated between subjects. Learning was evident in both the low and high interactivity groups, but latency per move was significantly faster in the high interactivity group, in both presentations. So-called problem isomorphs instantiated in different task ecologies draw upon different skills and abilities; a distributed cognition analysis may provide a fruitful perspective on learning and transfer.
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods, so current studies on the JSSP concentrate mainly on improving heuristics for optimizing it. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing-time-tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfying the constraints through consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the reliability and efficiency of the proposed method through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
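The ACO loop described in the abstract above (ants building schedules guided by pheromone, followed by evaporation and deposit) can be sketched in a few lines. This is not the paper's constraint-handling variant; it is a minimal illustration on a hypothetical single-machine sequencing problem with made-up processing times.

```python
import random

# Toy ACO sketch: ants build job sequences for one machine, minimizing total
# completion time. tau[i][j] is the desirability of scheduling j right after i.
proc = [4, 2, 7, 3, 5]          # hypothetical processing time of each job
n = len(proc)
tau = [[1.0] * n for _ in range(n)]
rho, deposit = 0.1, 10.0        # evaporation rate, pheromone deposit constant

def build_sequence(rng):
    seq, last = [], None
    remaining = set(range(n))
    while remaining:
        cand = list(remaining)
        # selection probability ~ pheromone * heuristic (prefer shorter jobs)
        w = [(tau[last][j] if last is not None else 1.0) / proc[j] for j in cand]
        j = rng.choices(cand, weights=w)[0]
        seq.append(j); remaining.remove(j); last = j
    return seq

def total_completion_time(seq):
    t = done = 0
    for j in seq:
        t += proc[j]
        done += t
    return done

rng = random.Random(0)
best, best_cost = None, float("inf")
for _ in range(100):                                 # iterations
    for s in [build_sequence(rng) for _ in range(10)]:  # 10 ants
        c = total_completion_time(s)
        if c < best_cost:
            best, best_cost = s, c
    # evaporate everywhere, then deposit along the best sequence found so far
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1 - rho)
    for a, b in zip(best, best[1:]):
        tau[a][b] += deposit / best_cost
print(best, best_cost)
```

For this objective the optimum is the shortest-processing-time order, so the colony should converge to a cost of 51 here; real JSSP instances require the full disjunctive-graph machinery the abstract refers to.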
NASA Astrophysics Data System (ADS)
Main, June Dewey; Budd Rowe, Mary
This study investigated the relationship of locus-of-control orientations and task structure to the science problem-solving performance of 100 same-sex, sixth-grade student pairs. Pairs performed a four-variable problem-solving task, racing cylinders down a ramp in a series of trials to determine the 3 fastest of 18 different cylinders. The task was completed in one of two treatment conditions: the structured condition with moderate cuing and the unstructured condition with minimal cuing. Pairs completed an after-task assessment, predicting the results of proposed cylinder races, to measure the ability to understand and apply task concepts. Overall conclusions were: (1) There was no relationship between locus-of-control orientation and effectiveness of problem-solving strategy; (2) internality was significantly related to higher accuracy on task solutions and on after-task predictions; (3) there was no significant relationship between task structure and effectiveness of problem-solving strategy; (4) solutions to the task were more accurate in the unstructured task condition; (5) internality related to more accurate solutions in the unstructured task condition.
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.
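The exactly-k variant introduced above can be checked by brute force on a tiny graph: enumerate every candidate s-side of the required size and keep the cheapest resulting cut. The graph below is hypothetical, and the enumeration is exponential in n, consistent with the NP-completeness result.

```python
from itertools import combinations

# Brute-force minimum s-t cut with exactly k vertices on the s side.
def min_st_cut_exactly_k(n, edges, s, t, k):
    others = [v for v in range(n) if v not in (s, t)]
    best = None
    for S in combinations(others, k - 1):       # s plus k-1 more vertices
        side = set(S) | {s}
        cost = sum(w for a, b, w in edges if (a in side) != (b in side))
        if best is None or cost < best:
            best = cost
    return best

# hypothetical weighted graph on 4 vertices
edges = [(0, 1, 2), (0, 2, 3), (1, 3, 4), (2, 3, 1), (1, 2, 5)]
# unconstrained-style k=1 cut vs. the cut forced to exactly 2 s-side vertices
print(min_st_cut_exactly_k(4, edges, 0, 3, 1),
      min_st_cut_exactly_k(4, edges, 0, 3, 2))
```

On this instance the size constraint changes the answer: isolating s costs 5, while forcing a second vertex onto the s side raises the cheapest cut to 8.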
Quiñones, Victoria; Jurska, Justyna; Fener, Eileen; Miranda, Regina
2015-04-01
Research suggests that being unable to generate solutions to problems in times of distress may contribute to suicidal thoughts and behavior, and that depression is associated with problem-solving deficits. This study examined active and passive problem solving as moderators of the association between depressive symptoms and future suicidal ideation among suicide attempters and nonattempters. Young adults (n = 324, 73% female, mean age = 19, standard deviation = 2.22) with (n = 78) and without (n = 246) a suicide attempt history completed a problem-solving task, self-report measures of hopelessness, depression, and suicidal ideation at baseline, and a self-report measure of suicidal ideation at 6-month follow-up. Passive problem solving was higher among suicide attempters but did not moderate the association between depressive symptoms and future suicidal ideation. Among attempters, active problem solving buffered against depressive symptoms in predicting future suicidal ideation. Suicide prevention should foster active problem solving, especially among suicide attempters. © 2015 Wiley Periodicals, Inc.
New Hardness Results for Diophantine Approximation
NASA Astrophysics Data System (ADS)
Eisenbrand, Friedrich; Rothvoß, Thomas
We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ and a denominator bound N ∈ ℕ₊. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ₊ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.
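A brute-force sketch of the decision version stated above: given a rational vector α, an error bound ε, and a denominator bound N, scan all denominators Q ≤ N. The scan takes time proportional to N, i.e., exponential in the binary encoding of the input, which is what the hardness results rule out improving in the worst case.

```python
from fractions import Fraction

# Return the smallest denominator Q in [1, N] bringing every Q*alpha_i within
# eps of an integer, or None. Exact rational arithmetic avoids float error.
def good_denominator(alpha, eps, N):
    for Q in range(1, N + 1):
        if all(abs(Q * a - round(Q * a)) <= eps for a in alpha):
            return Q
    return None

# 355/113 is a famous approximation of pi, so Q = 113 is the answer here.
alpha = [Fraction(31415926535897932, 10**16)]  # rational stand-in for pi
print(good_denominator(alpha, Fraction(1, 1000), 200))
```

The earlier convergents 22/7 and 333/106 miss an integer by about 0.009, so 113 really is the first denominator meeting the 1/1000 bound.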
ERIC Educational Resources Information Center
Pollak, Ave
This guide is intended for use in presenting a three-session course designed to develop the problem-solving skills required of persons employed in the manufacturing and service industries. The course is structured so that, upon its completion, students will be able to accomplish the following: describe and analyze problems encountered at work;…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Raka; Chakrabarti, Chandana, E-mail: chandana.chakrabarti@saha.ac.in
2005-08-01
A thaumatin-like antifungal protein, NP24-I, has been isolated from ripe tomato fruits. It was crystallized by the vapour-diffusion method and data were collected to 2.45 Å. The structure was solved by molecular replacement. NP24 is a 24 kDa (207-amino-acid) antifungal thaumatin-like protein (TLP) found in tomato fruits. An isoform of the protein, NP24-I, is reported to play a possible role in ripening of the fruit in addition to its antifungal properties. The protein has been isolated, purified and crystallized by the hanging-drop vapour-diffusion method. The crystals belong to the tetragonal space group P4₃, with unit-cell parameters a = b = 61.01, c = 62.90 Å and one molecule per asymmetric unit. X-ray diffraction data were processed to a resolution of 2.45 Å and the structure was solved by molecular replacement.
Calvete, Esther
2007-05-01
This study examined whether justification of violence beliefs and social problem solving mediated between maltreatment experiences and aggressive and delinquent behavior in adolescents. Data were collected on 191 maltreated and 546 nonmaltreated adolescents (ages 14 to 17 years), who completed measures of justification of violence beliefs, social problem-solving dimensions (problem orientation, and impulsivity/carelessness style), and psychological problems. Findings indicated that maltreated adolescents' higher levels of delinquent and aggressive behavior were partially accounted for by justification of violence beliefs, and that their higher levels of depressive symptoms were partially mediated by a more negative orientation to social problem-solving. Comparisons between boys and girls indicated that the model linking maltreatment, cognitive variables, and psychological problems was invariant.
Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin
2015-09-01
The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.
Rejection Sensitivity and Depression: Indirect Effects Through Problem Solving.
Kraines, Morganne A; Wells, Tony T
2017-01-01
Rejection sensitivity (RS) and deficits in social problem solving are risk factors for depression. Despite their relationship to depression and the potential connection between them, no studies have examined RS and social problem solving together in the context of depression. As such, we examined RS, five facets of social problem solving, and symptoms of depression in a young adult sample. A total of 180 participants completed measures of RS, social problem solving, and depressive symptoms. We used bootstrapping to examine the indirect effect of RS on depressive symptoms through problem solving. RS was positively associated with depressive symptoms. A negative problem orientation, impulsive/careless style, and avoidance style of social problem solving were positively associated with depressive symptoms, and a positive problem orientation was negatively associated with depressive symptoms. RS demonstrated an indirect effect on depressive symptoms through two social problem-solving facets: the tendency to view problems as threats to one's well-being and an avoidance problem-solving style characterized by procrastination, passivity, or overdependence on others. These results are consistent with prior research that found a positive association between RS and depression symptoms, but this is the first study to implicate specific problem-solving deficits in the relationship between RS and depression. Our results suggest that depressive symptoms in high RS individuals may result from viewing problems as threats and taking an avoidant, rather than proactive, approach to dealing with problems. These findings may have implications for problem-solving interventions for rejection sensitive individuals.
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
NASA Astrophysics Data System (ADS)
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems, the assignment problem and the scheduling problem. In this paper, we propose a hybrid metaheuristics-based clustered holonic multiagent model to solve the FJSP. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search into promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach comes from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
NASA Astrophysics Data System (ADS)
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
The flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine, providing an alternate processing facility, the problem becomes a flexible flow shop problem (FFSP). FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial-time hard) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving FFSP is rather tricky and time-consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for the study because they are not only recent meta-heuristics but also require no tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped in local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by genetic algorithms) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
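A minimal sketch of the parameter-free JAYA update rule mentioned above, applied to the sphere test function rather than the FFSP: each solution moves toward the population's best member and away from its worst, with no algorithm-specific parameters to tune. Population size, iteration count, and bounds are arbitrary choices for the demo.

```python
import random

# Bare-bones JAYA for continuous minimization (Rao's update rule):
#   v_j = x_j + r1*(best_j - |x_j|) - r2*(worst_j - |x_j|)
def jaya(f, dim=3, pop=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        scores = [f(x) for x in X]
        best = X[scores.index(min(scores))]
        worst = X[scores.index(max(scores))]
        for i, x in enumerate(X):
            cand = []
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = x[j] + r1 * (best[j] - abs(x[j])) - r2 * (worst[j] - abs(x[j]))
                cand.append(min(hi, max(lo, v)))   # clip to the search bounds
            if f(cand) < f(x):                     # greedy acceptance
                X[i] = cand
    return min(X, key=f)

sphere = lambda x: sum(v * v for v in x)
best = jaya(sphere)
print(sphere(best))
```

The greedy acceptance makes each individual's score monotonically non-increasing, which is also why, as the abstract notes, the basic algorithm loses diversity and benefits from an added mutation step.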
Dubow, E F; Tisak, J
1989-12-01
This study investigated the relation between stressful life events and adjustment in elementary school children, with particular emphasis on the potential main and stress-buffering effects of social support and social problem-solving skills. Third through fifth graders (N = 361) completed social support and social problem-solving measures. Their parents provided ratings of stress in the child's environment and ratings of the child's behavioral adjustment. Teachers provided ratings of the children's behavioral and academic adjustment. Hierarchical multiple regressions revealed significant stress-buffering effects for social support and problem-solving skills on teacher-rated behavior problems, that is, higher levels of social support and problem-solving skills moderated the relation between stressful life events and behavior problems. A similar stress-buffering effect was found for problem-solving skills on grade-point average and parent-rated behavior problems. In terms of children's competent behaviors, analyses supported a main effect model of social support and problem-solving. Possible processes accounting for the main and stress-buffering effects are discussed.
McMurran, Mary; Huband, Nick; Duggan, Conor
2008-06-01
In the treatment of offenders with personality disorders, one matter that requires attention is the rate of treatment non-completion. This is important as it has cost-efficiency and negative outcome implications. We compared the characteristics of those who participated in a personality disorder treatment programme, divided into three groups: Group 1, treatment completers (N = 21); Group 2, those expelled for rule breaking (N = 16); and Group 3, those removed because they were not engaging in treatment (N = 19). We hypothesized that, compared with the other two groups, Group 2 would score higher on the impulsive/careless style scale, and that those in Group 3 would score higher on the avoidant style scale of the social problem-solving inventory-revised (SPSI-R). Further, we hypothesized that high anxiety would be associated with treatment non-completion in both groups. These differences were not found. However, when both groups of non-completers were combined for comparison, completers were shown to score significantly higher on SPSI-R rational problem solving and significantly lower on SPSI-R impulsive/careless style. Findings suggest that teaching impulsive people a rational approach to social problem solving may reduce their level of non-completion.
NASA Astrophysics Data System (ADS)
Amir, Amihood; Gotthilf, Zvi; Shalom, B. Riva
The Longest Common Subsequence (LCS) of two strings A and B is a well-studied problem having a wide range of applications. When each symbol of the input strings is assigned a positive weight the problem becomes the Heaviest Common Subsequence (HCS) problem. In this paper we consider a different version of weighted LCS on Position Weight Matrices (PWM). The Position Weight Matrix was introduced as a tool to handle a set of sequences that are not identical, yet have many local similarities. Such a weighted sequence is a 'statistical image' of this set, where we are given the probability of every symbol's occurrence at every text location. We consider two possible definitions of LCS on PWM. For the first, we solve the weighted LCS problem of z sequences in time O(zn^(z+1)). For the second, we prove NP-hardness and provide an approximation algorithm.
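The Heaviest Common Subsequence mentioned above is the standard LCS dynamic program with each matched symbol contributing its weight instead of 1. A small sketch with a hypothetical weight table:

```python
# Heaviest Common Subsequence: max-weight common subsequence of two strings,
# where weight[s] is the (positive) weight of symbol s.
def heaviest_common_subsequence(a, b, weight):
    m, n = len(a), len(b)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            best = max(dp[i - 1][j], dp[i][j - 1])   # skip a[i-1] or b[j-1]
            if a[i - 1] == b[j - 1]:
                best = max(best, dp[i - 1][j - 1] + weight[a[i - 1]])
            dp[i][j] = best
    return dp[m][n]

w = {"A": 1.0, "C": 5.0, "G": 1.0, "T": 1.0}
# With unit weights the LCS of "ACGT" and "CAGT" has length 3 ("AGT" or "CGT");
# weighting C heavily makes "CGT" (total 7.0) the heaviest choice.
print(heaviest_common_subsequence("ACGT", "CAGT", w))
```

The PWM variants in the paper replace this per-symbol weight with per-position symbol probabilities, which is where the O(zn^(z+1)) algorithm and the NP-hardness result come in.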
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.
NASA Astrophysics Data System (ADS)
Adams, Wendy Kristine
The purpose of my research was to produce a problem solving evaluation tool for physics. To do this it was necessary to gain a thorough understanding of how students solve problems. Although physics educators highly value problem solving and have put extensive effort into understanding successful problem solving, there is currently no efficient way to evaluate problem solving skill. Attempts have been made in the past; however, knowledge of the principles required to solve the subject problem is so absolutely critical that it completely overshadows any other skills students may use when solving a problem. The work presented here is unique because the evaluation tool removes the requirement that the student already have a grasp of physics concepts. It is also unique because I evaluated a wide range of people on a wide range of tasks, an important design feature that helps the component skills emerge more clearly. This dissertation includes an extensive literature review of problem solving in physics, math, education and cognitive science as well as descriptions of studies involving student use of interactive computer simulations, the design and validation of a beliefs about physics survey and finally the design of the problem solving evaluation tool. I have successfully developed and validated a problem solving evaluation tool that identifies 44 separate assets (skills) necessary for solving problems. Rigorous validation studies, including work with an independent interviewer, show these assets identified by this content-free evaluation tool are the same assets that students use to solve problems in mechanics and quantum mechanics. Understanding this set of component assets will help teachers and researchers address problem solving within the classroom.
NASA Astrophysics Data System (ADS)
Mezentsev, Yu A.; Baranova, N. V.
2018-05-01
A universal economic-mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. Its universality allows both production components, including limitations on the ways of converting raw materials and components into sold goods, and resource and logical restrictions on input and output material flows to be taken into account at the system level. The presented model and the control problems it generates are developed within a unified framework that can express logical conditions of any complexity and define the corresponding formal optimization problems. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming problems are shown to belong to the class NP. An approximate polynomial algorithm for solving the posed mixed-programming optimization problems of realistic dimension and high computational complexity is proposed. Results of testing the algorithm on problems over a wide range of dimensions are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quaglioni, S.
2016-09-22
A 2011 DOE-NP Early Career Award (ECA) under Field Work Proposal (FWP) SCW1158 supported the project “Solving the Long-Standing Problem of Low-Energy Nuclear Reactions at the Highest Microscopic Level” in the five-year period from June 15, 2011 to June 14, 2016. This project, led by PI S. Quaglioni, aimed at developing a comprehensive and computationally efficient framework to arrive at a unified description of structural properties and reactions of light nuclei in terms of constituent protons and neutrons interacting through nucleon-nucleon (NN) and three-nucleon (3N) forces. Specifically, the project had three main goals: 1) arriving at accurate predictions for fusion reactions that power stars and Earth-based fusion facilities; 2) realizing a comprehensive description of clustering and continuum effects in exotic nuclei, including light Borromean systems; and 3) achieving fundamental understanding of the role of the 3N force in nuclear reactions and nuclei at the drip line.
Constructive Metacognitive Activity Shift in Mathematical Problem Solving
ERIC Educational Resources Information Center
Hastuti, Intan Dwi; Nusantara, Toto; Subanji; Susanto, Hery
2016-01-01
This study aims to describe the constructive metacognitive activity shift of eleventh graders in solving a mathematical problem. Subjects in this study were 10 students in grade 11 of SMAN 1 Malang. They were divided into 4 groups. Three types of metacognitive activity undertaken by students when completing mathematical problem are awareness,…
Material Mediation: Tools and Representations Supporting Collaborative Problem-Solving Discourse
ERIC Educational Resources Information Center
Katic, Elvira K.; Hmelo-Silver, Cindy E.; Weber, Keith H.
2009-01-01
This study investigates how a variety of resources mediated collaborative problem solving for a group of preservice teachers. The participants in this study completed mathematical, combinatorial tasks and then watched a video of a sixth grader as he exhibited sophisticated reasoning to recognize the isomorphic structure of these problems. The…
Sheriff, Kelli A; Boon, Richard T
2014-08-01
The purpose of this study was to examine the effects of computer-based graphic organizers, using Kidspiration 3© software, to solve one-step word problems. Participants included three students with mild intellectual disability enrolled in a functional academic skills curriculum in a self-contained classroom. A multiple probe single-subject research design (Horner & Baer, 1978) was used to evaluate the effectiveness of computer-based graphic organizers to solving mathematical one-step word problems. During the baseline phase, the students completed a teacher-generated worksheet that consisted of nine functional word problems in a traditional format using a pencil, paper, and a calculator. In the intervention and maintenance phases, the students were instructed to complete the word problems using a computer-based graphic organizer. Results indicated that all three of the students improved in their ability to solve the one-step word problems using computer-based graphic organizers compared to traditional instructional practices. Limitations of the study and recommendations for future research directions are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
An Integrated Method Based on PSO and EDA for the Max-Cut Problem.
Lin, Geng; Guan, Jian
2016-01-01
The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best-performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
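Not the paper's PSO-EDA, but the kind of fast 1-flip local search it applies as a subroutine can be sketched directly: repeatedly move any vertex whose flip increases the cut weight until no single flip improves it. Graph and weights below are made up.

```python
import random

# 1-flip local search for weighted max-cut: side[v] in {0, 1} assigns v to a
# partition; the cut is the total weight of edges crossing the partition.
def local_search_maxcut(n, edges, seed=0):
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]

    def gain(v):
        # change in cut weight if v switches sides
        g = 0
        for a, b, w in edges:
            if v in (a, b):
                u = b if v == a else a
                g += w if side[u] == side[v] else -w
        return g

    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] ^= 1
                improved = True
    cut = sum(w for a, b, w in edges if side[a] != side[b])
    return cut, side

# hypothetical weighted graph on 4 vertices; its maximum cut has weight 9
edges = [(0, 1, 3), (1, 2, 1), (2, 3, 2), (3, 0, 2), (0, 2, 4)]
cut, side = local_search_maxcut(4, edges)
print(cut)
```

On instances this small every 1-flip local optimum happens to be globally optimal; on the 800-to-20,000-vertex benchmarks in the abstract, local search alone stalls, which is what the PSO-EDA layers on top are for.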
Guo, Hao; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on these problems in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, solution quality, and computing stability. The proposed model is very useful for helping managers make the right decisions in an e-supply chain environment. PMID:24489489
Seidenstücker, Axel; Plettl, Alfred; Ziemann, Paul
2013-01-01
The basic idea of using hexagonally ordered arrays of Au nanoparticles (NP) on top of a given substrate as a mask for subsequent anisotropic etching, in order to fabricate correspondingly ordered arrays of nanopillars, meets two serious obstacles: the position of the NP may change during the etching process, so that the primary pattern of the mask deteriorates or is completely lost, and the NP are significantly eroded during etching, so that the achievable pillar height is strongly restricted. The present work presents approaches for getting around both problems. For this purpose, arrays of Au NPs (starting diameter 12 nm) are deposited on top of silica substrates by applying diblock copolymer micelle nanolithography (BCML). It is demonstrated that evaporated octadecyltrimethoxysilane (OTMS) layers stabilize the NP positions, which allows for an increase of their size up to 50 nm by an electroless photochemical process. In this way, ordered arrays of silica nanopillars are obtained with maximum heights of 270 nm and aspect ratios of 5:1. Alternatively, the NP positions can be fixed by a short etching step with negligible mask erosion followed by cycles of growing and reactive ion etching (RIE). In that case, each cycle is started by photochemically re-growing the Au NP mask, thereby completely compensating for the erosion due to the previous cycle. As a result of this mask-repair method, arrays of silica nanopillars with heights up to 680 nm and aspect ratios of 10:1 are fabricated. Based on the given recipes, the approach can be applied to a variety of materials like silicon, silicon oxide, and silicon nitride. PMID:24367758
Diagnosing the pathophysiologic mechanisms of nocturnal polyuria.
Goessaert, An-Sofie; Krott, Louise; Hoebeke, Piet; Vande Walle, Johan; Everaert, Karel
2015-02-01
Diagnosis of nocturnal polyuria (NP) is based on a bladder diary. Addition of a renal function profile (RFP) for analysis of concentrating and solute-conserving capacity allows differentiation of NP pathophysiology and could facilitate individualized treatment. To map circadian rhythms of water and solute diuresis by comparing participants with and without NP. This prospective observational study was carried out in Ghent University Hospital between 2011 and 2013. Participants with and without NP completed a 72-h bladder diary. RFP, free water clearance (FWC), and creatinine, solute, sodium, and urea clearance were measured for all participants. The study participants were divided into those with (n=77) and those without (n=35) NP. The mean age was 57 yr (SD 16 yr) and 41% of the participants were female. Compared to participants without NP, the NP group exhibited a higher diuresis rate throughout the night (p=0.015); higher FWC (p=0.013) and lower osmolality (p=0.030) at the start of the night; and persistently higher sodium clearance during the night (p<0.001). The pathophysiologic mechanism of NP was identified as water diuresis alone in 22%, sodium diuresis alone in 19%, and a combination of water and sodium diuresis in 47% of the NP group. RFP measurement in first-line NP screening to discriminate between water and solute diuresis as pathophysiologic mechanisms complements the bladder diary and could facilitate optimal individualized treatment of patients with NP. We evaluated eight urine samples collected over 24h to detect the underlying problem in NP. We found that NP can be attributed to water or sodium diuresis or a combination of both. This urinalysis can be used to adapt treatment according to the underlying mechanism in patients with bothersome consequences of NP, such as nocturia and urinary incontinence. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
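The renal function profile described above separates water from solute diuresis using standard clearance formulas (osmolar clearance C_osm = U_osm·V/P_osm and free water clearance FWC = V − C_osm). A sketch with hypothetical sample values, not data from the study:

```python
# Standard physiology formulas; V is urine flow (mL/min), osmolality in mOsm/kg.
def osmolar_clearance(u_osm, v, p_osm):
    return u_osm * v / p_osm            # C_osm = U_osm * V / P_osm

def free_water_clearance(u_osm, v, p_osm):
    return v - osmolar_clearance(u_osm, v, p_osm)   # FWC = V - C_osm

# hypothetical dilute nocturnal sample: positive FWC points to water diuresis
v, u_osm, p_osm = 2.0, 150.0, 290.0
fwc = free_water_clearance(u_osm, v, p_osm)
print(round(fwc, 3))
```

Here the urine is more dilute than plasma (150 vs. 290 mOsm/kg), so FWC is positive, the water-diuresis pattern the study associates with higher FWC at the start of the night.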
NASA Astrophysics Data System (ADS)
Castagnoli, Giuseppe
2018-03-01
The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver, from whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, in which the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one in which she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.
On the Hardness of Subset Sum Problem from Different Intervals
NASA Astrophysics Data System (ADS)
Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke
The subset sum problem, often called the knapsack problem, is known to be NP-hard, and several cryptosystems are based on it. Assuming an oracle for the shortest vector problem on lattices, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals, and we analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is security analysis when the data size of public keys is reduced.
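As a point of reference for the abstract above (this is not the low-density attack, and all names here are illustrative), a minimal pseudo-polynomial dynamic program shows why weight magnitude matters: its running time grows with the target value, so hardness resides in instances whose weights are drawn from large intervals.

```python
def subset_sum(weights, target):
    """Pseudo-polynomial DP: return a sublist of `weights` summing to
    `target`, or None. Runs in O(n * target) time, so it is only
    practical when the weights (and hence the target) are small."""
    # parent[s] = (previous_sum, index_of_weight_used) to reach sum s
    parent = {0: None}
    for i, w in enumerate(weights):
        # iterate over a snapshot so each weight is used at most once
        for s in list(parent):
            t = s + w
            if t <= target and t not in parent:
                parent[t] = (s, i)
    if target not in parent:
        return None
    subset, s = [], target
    while parent[s] is not None:  # walk back through the recorded choices
        prev, i = parent[s]
        subset.append(weights[i])
        s = prev
    return subset
```

For example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` returns a subset summing to 9, while `subset_sum([2, 4, 6], 5)` returns None; cryptographic instances defeat this approach by using weights hundreds of bits long.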
An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming
NASA Astrophysics Data System (ADS)
Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu
In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general, we propose a dynamic programming based algorithm and prove that it is optimal in special cases where correlation exists only between p immediately adjacent observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, Daisy W.; Borek, Dominika; Luthra, Priya
During viral RNA synthesis, Ebola virus (EBOV) nucleoprotein (NP) alternates between an RNA-template-bound form and a template-free form to provide the viral polymerase access to the RNA template. In addition, newly synthesized NP must be prevented from indiscriminately binding to noncognate RNAs. Here, we investigate the molecular bases for these critical processes. We identify an intrinsically disordered peptide derived from EBOV VP35 (NPBP, residues 20–48) that binds NP with high affinity and specificity, inhibits NP oligomerization, and releases RNA from NP-RNA complexes in vitro. The structure of the NPBP/ΔNP NTD complex, solved to 3.7 Å resolution, reveals how the NPBP peptide occludes a large surface area that is important for NP-NP and NP-RNA interactions and for viral RNA synthesis. Together, our results identify a highly conserved viral interface that is important for EBOV replication and can be targeted for therapeutic development.
An annealed chaotic maximum neural network for bipartite subgraph problem.
Wang, Jiahai; Tang, Zheng; Wang, Ronglong
2004-04-01
In this paper, based on the maximum neural network, we propose a new parallel algorithm for the bipartite subgraph problem that helps the maximum neural network escape from local minima by incorporating transient chaotic neurodynamics. The goal of the bipartite subgraph problem, which is NP-complete, is to remove the minimum number of edges in a given graph such that the remaining graph is bipartite. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without the burden of parameter tuning. However, the model tends to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, our new parallel algorithm introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is governed by gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm thus has the advantages of both the maximum neural network and chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that it finds optimum or near-optimum solutions for the bipartite subgraph problem, outperforming the best existing parallel algorithms.
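The bipartite subgraph objective above (delete as few edges as possible so the remainder is bipartite) can be made concrete with a far simpler baseline than the chaotic neural network: a greedy local search that 2-colors the vertices and flips any vertex with more same-color than cross-color neighbours. This is a generic sketch for intuition, not the authors' algorithm, and like their maximum neural model it can stop at local optima.

```python
def greedy_bipartite_subgraph(n, edges):
    """Greedy local search for the bipartite subgraph problem.
    Vertices 0..n-1 get a side in {0, 1}; a vertex flips sides whenever
    more of its incident edges are same-side than cross-side. Each flip
    strictly increases the number of cross edges, so the loop terminates.
    The same-side edges that remain are the ones that must be removed."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(1 for a, b in edges
                       if v in (a, b) and side[a] == side[b])
            cross = sum(1 for a, b in edges
                        if v in (a, b) and side[a] != side[b])
            if same > cross:
                side[v] ^= 1  # flip v to the other side
                improved = True
    removed = [(a, b) for a, b in edges if side[a] == side[b]]
    return side, removed
```

On a triangle, for example, the search correctly leaves exactly one edge to remove, which is optimal for that instance.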
Bell, Kathryn M; Higgins, Lorrin
2015-04-16
The purpose of the current study was to examine the joint influences of experiential avoidance and social problem solving on the link between childhood emotional abuse (CEA) and intimate partner violence (IPV). Experiential avoidance following CEA may interfere with a person's ability to effectively problem solve in social situations, increasing risk for conflict and interpersonal violence. As part of a larger study, 232 women recruited from the community completed measures assessing childhood emotional, physical, and sexual abuse, experiential avoidance, maladaptive social problem solving, and IPV perpetration and victimization. Final trimmed models indicated that CEA was indirectly associated with IPV victimization and perpetration via experiential avoidance and Negative Problem Orientation (NPO) and Impulsivity/Carelessness Style (ICS) social problem solving strategies. Though CEA was related to an Avoidance Style (AS) social problem solving strategy, this strategy was not significantly associated with IPV victimization or perpetration. Experiential avoidance had both a direct and indirect effect, via NPO and ICS social problem solving, on IPV victimization and perpetration. Findings suggest that CEA may lead some women to avoid unwanted internal experiences, which may adversely impact their ability to effectively problem solve in social situations and increase IPV risk.
Transformational and derivational strategies in analogical problem solving.
Schelhorn, Sven-Eric; Griego, Jacqueline; Schmid, Ute
2007-03-01
Analogical problem solving is mostly described as transfer of a source solution to a target problem based on the structural correspondences (mapping) between source and target. Derivational analogy (Carbonell, in Machine Learning: An Artificial Intelligence Approach, Morgan Kaufmann, Los Altos, 1986) proposes an alternative view: a target problem is solved by replaying a remembered problem-solving episode. Thus, the experience with the source problem is used to guide the search for the target solution by applying the same solution technique rather than by transferring the complete solution. We report an empirical study using the path-finding problems presented in Novick and Hmelo (J Exp Psychol Learn Mem Cogn 20:1296-1321, 1994) as material. We show that both transformational and derivational analogy are problem-solving strategies realized by human problem solvers. Which strategy is evoked in a given problem-solving context depends on the constraints guiding object-to-object mapping between source and target problem. Specifically, if constraints facilitating mapping are available, subjects are more likely to employ a transformational strategy; otherwise they are more likely to use a derivational strategy.
Identifying non-elliptical entity mentions in a coordinated NP with ellipses.
Chae, Jeongmin; Jung, Younghee; Lee, Taemin; Jung, Soonyoung; Huh, Chan; Kim, Gilhan; Kim, Hyeoncheol; Oh, Heungbum
2014-02-01
Named entities in the biomedical domain are often written using a Noun Phrase (NP) along with a coordinating conjunction such as 'and' or 'or'. In addition, repeated words among named entity mentions are frequently omitted. This often makes named entities difficult to identify. Although various Named Entity Recognition (NER) methods have tried to solve this problem, these methods can only deal with relatively simple elliptical patterns in coordinated NPs. We propose a new NER method for identifying non-elliptical entity mentions with simple or complex ellipses using linguistic rules and an entity mention dictionary. The GENIA and CRAFT corpora were used to evaluate the performance of the proposed system. The GENIA corpus was used to evaluate the performance of the system according to the quality of the dictionary. The GENIA corpus comprises 3434 non-elliptical entity mentions in 1585 coordinated NPs with ellipses. The system achieves 92.11% precision, 95.20% recall, and a 93.63% F-score in the identification of non-elliptical entity mentions in coordinated NPs. The accuracy of the system in resolving simple and complex ellipses is 94.54% and 91.95%, respectively. The CRAFT corpus was used to evaluate the performance of the system under realistic conditions. The system achieved 78.47% precision, 67.10% recall, and a 72.34% F-score in coordinated NPs. The performance evaluations of the system show that it efficiently solves the problem caused by ellipses and improves NER performance. The algorithm is implemented in PHP and the code can be downloaded from https://code.google.com/p/medtextmining/. Copyright © 2013. Published by Elsevier Inc.
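The paper's rules and dictionary are not reproduced here, but the ellipsis problem it tackles can be illustrated with a toy rule for one frequent pattern, 'X- and Y-Z' (as in 'alpha- and beta-catenin'). The function name and regex are illustrative assumptions; the actual system handles far more complex elliptical patterns.

```python
import re

def expand_coordination(phrase):
    """Resolve a simple elliptical coordination of the form
    'X- and Y-Z' into the two non-elliptical mentions 'X-Z' and 'Y-Z'
    (e.g. 'alpha- and beta-catenin' -> 'alpha-catenin', 'beta-catenin').
    Phrases that do not match the pattern are returned unchanged."""
    m = re.match(r'^(\w+)- (and|or) (\w+)-(\w+)$', phrase)
    if not m:
        return [phrase]
    left, _, right, head = m.groups()
    return [f'{left}-{head}', f'{right}-{head}']
```

A dictionary-backed system would then validate each expanded candidate against known entity mentions before accepting it.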
Nanomedicine: Problem Solving to Treat Cancer
ERIC Educational Resources Information Center
Hemling, Melissa A.; Sammel, Lauren M.; Zenner, Greta; Payne, Amy C.; Crone, Wendy C.
2006-01-01
Many traditional classroom science and technology activities often ask students to complete prepackaged labs that ensure that everyone arrives at the same "scientifically accurate" solution or theory, which ignores the important problem-solving and creative aspects of scientific research and technological design. Students rarely have the…
Boonen, Anton J. H.; de Koning, Björn B.; Jolles, Jelle; van der Schoot, Menno
2016-01-01
Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from a RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption under 80 sixth grade students who were classified as successful and less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that ask for both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers had a low performance on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we concluded that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME. PMID:26925012
Minimizing the Sum of Completion Times with Resource-Dependent Times
NASA Astrophysics Data System (ADS)
Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe
2008-10-01
We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria. The first criterion is the sum of completion times and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, until this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as laptop computers, phones, and GPS units, in order to prolong their battery duration.
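For context (this is textbook background, not the paper's resource-allocation model), the sum of completion times on a single machine with fixed processing times is minimized by the shortest-processing-time-first (SPT) rule, which is the classical baseline the abstract above generalizes:

```python
def spt_schedule(processing_times):
    """Shortest-Processing-Time-first: sequence jobs by nondecreasing
    processing time, which minimizes the (unweighted) sum of completion
    times on a single machine. Returns the job order (as indices into
    the input list) and the resulting sum of completion times."""
    order = sorted(range(len(processing_times)),
                   key=lambda j: processing_times[j])
    t, total = 0, 0
    for j in order:
        t += processing_times[j]   # completion time of job j
        total += t
    return order, total
```

With processing times [3, 1, 2], SPT schedules the jobs as 1, 2, 0 with completion times 1, 3, 6, for a total of 10; any other order gives a larger total.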
ERIC Educational Resources Information Center
van Gog, Tamara; Paas, Fred; Merrienboer, Jeroen J. G.; Witte, Puk
2005-01-01
This study investigated the amounts of problem-solving process information ("action," "why," "how," and "metacognitive") elicited by means of concurrent, retrospective, and cued retrospective reporting. In a within-participants design, 26 participants completed electrical circuit troubleshooting tasks under different reporting conditions. The…
Flippin' Fluid Mechanics--Comparison Using Two Groups
ERIC Educational Resources Information Center
Webster, Donald R.; Majerich, David M.; Madden, Amanda G.
2016-01-01
A flipped classroom approach was implemented in an undergraduate fluid mechanics course. Students watched short, online video lectures before class, participated in active in-class problem solving sessions (in pairs), and completed individualized online quizzes weekly. In-class activities were designed to develop problem-solving skills and teach…
Complex network problems in physics, computer science and biology
NASA Astrophysics Data System (ADS)
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground-state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses.
We extend this model to the study of a specific case of a spin glass on the Bethe lattice at zero temperature and then apply this formalism to the K-SAT problem defined in Chapter 1. The phase transitions that physicists study often correspond to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions specific to the problems discussed in Chapter 1, as well as known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods, which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems using methods similar to the message-passing techniques discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As has been highlighted by John Hopfield, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. Many algorithms have been developed to extract information from networks. In Chapter 5 we apply polynomial-time algorithms such as minimum spanning tree to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms such as min-cut/max-flow and Dijkstra's algorithm for understanding key properties of these networks.
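As a sketch of the kind of polynomial-time network construction mentioned for Chapter 5 (generic, with illustrative edge data; not the thesis code), Kruskal's minimum-spanning-tree algorithm with union-find runs in near-linear time after sorting, in contrast to the NP-hard problems discussed earlier:

```python
def kruskal_mst(n, edges):
    """Kruskal's MST algorithm. `edges` is a list of (weight, u, v)
    tuples over vertices 0..n-1; returns the edges of a minimum
    spanning tree (assuming the graph is connected)."""
    parent = list(range(n))

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # consider edges cheapest-first
        ru, rv = find(u), find(v)
        if ru != rv:                   # keep the edge iff it joins two trees
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

In a gene-network setting the edge weights would be pairwise dissimilarities between expression profiles, so the MST keeps the strongest associations while remaining cycle-free.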
Open shop scheduling problem to minimize total weighted completion time
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian
2017-01-01
A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
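The single-machine ancestor of the WSPTB heuristic above is the weighted-shortest-processing-time (WSPT) rule, also known as Smith's rule: sequence jobs by nondecreasing p_j/w_j, which is exactly optimal on one machine. A minimal sketch (the job fields `p` and `w` are illustrative; the paper's block variant for open shops is not reproduced here):

```python
def wspt_order(jobs):
    """Smith's rule: sort jobs by processing time over weight.
    Each job is a dict with processing time 'p' and weight 'w'."""
    return sorted(jobs, key=lambda j: j['p'] / j['w'])

def total_weighted_completion(sequence):
    """Total weighted completion time of a job sequence on one machine."""
    t, total = 0, 0
    for j in sequence:
        t += j['p']            # completion time of this job
        total += j['w'] * t
    return total
```

For jobs [{'p': 2, 'w': 1}, {'p': 1, 'w': 2}], the given order costs 8 while the WSPT order costs 5, illustrating why the ratio ordering is the natural building block for open-shop heuristics.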
Site partitioning for distributed redundant disk arrays
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1992-01-01
Distributed redundant disk arrays can be used in a distributed computing system or database system to provide recovery in the presence of temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-complete, and we propose two heuristic algorithms for finding approximate solutions.
Spatial visualization in physics problem solving.
Kozhevnikov, Maria; Motes, Michael A; Hegarty, Mary
2007-07-08
Three studies were conducted to examine the relation of spatial visualization to solving kinematics problems that involved either predicting the two-dimensional motion of an object, translating from one frame of reference to another, or interpreting kinematics graphs. In Study 1, 60 physics-naïve students were administered kinematics problems and spatial visualization ability tests. In Study 2, 17 (8 high- and 9 low-spatial ability) additional students completed think-aloud protocols while they solved the kinematics problems. In Study 3, the eye movements of 15 (9 high- and 6 low-spatial ability) students were recorded while the students solved kinematics problems. In contrast to high-spatial students, most low-spatial students did not combine two motion vectors, were unable to switch frames of reference, and tended to interpret graphs literally. The results of the study suggest an important relationship between spatial visualization ability and solving kinematics problems with multiple spatial parameters. 2007 Cognitive Science Society, Inc.
Leung, Daisy W; Borek, Dominika; Luthra, Priya; Binning, Jennifer M; Anantpadma, Manu; Liu, Gai; Harvey, Ian B; Su, Zhaoming; Endlich-Frazier, Ariel; Pan, Juanli; Shabman, Reed S; Chiu, Wah; Davey, Robert A; Otwinowski, Zbyszek; Basler, Christopher F; Amarasinghe, Gaya K
2015-04-21
During viral RNA synthesis, Ebola virus (EBOV) nucleoprotein (NP) alternates between an RNA-template-bound form and a template-free form to provide the viral polymerase access to the RNA template. In addition, newly synthesized NP must be prevented from indiscriminately binding to noncognate RNAs. Here, we investigate the molecular bases for these critical processes. We identify an intrinsically disordered peptide derived from EBOV VP35 (NPBP, residues 20-48) that binds NP with high affinity and specificity, inhibits NP oligomerization, and releases RNA from NP-RNA complexes in vitro. The structure of the NPBP/ΔNPNTD complex, solved to 3.7 Å resolution, reveals how NPBP peptide occludes a large surface area that is important for NP-NP and NP-RNA interactions and for viral RNA synthesis. Together, our results identify a highly conserved viral interface that is important for EBOV replication and can be targeted for therapeutic development. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Takeda, Mitsuhiro; Chang, Chung-ke; Ikeya, Teppei; Güntert, Peter; Chang, Yuan-hsiang; Hsu, Yen-lan; Huang, Tai-huang; Kainosho, Masatsune
2008-07-18
The C-terminal domain (CTD) of the severe acute respiratory syndrome coronavirus (SARS-CoV) nucleocapsid protein (NP) contains a potential RNA-binding region in its N-terminal portion and also serves as a dimerization domain by forming a homodimer with a molecular mass of 28 kDa. So far, the structure determination of the SARS-CoV NP CTD in solution has been impeded by the poor quality of NMR spectra, especially for aromatic resonances. We have recently developed the stereo-array isotope labeling (SAIL) method to overcome the size problem of NMR structure determination by utilizing a protein exclusively composed of stereo- and regio-specifically isotope-labeled amino acids. Here, we employed the SAIL method to determine the high-quality solution structure of the SARS-CoV NP CTD by NMR. The SAIL protein yielded less crowded and better resolved spectra than uniform (13)C and (15)N labeling, and enabled the homodimeric solution structure of this protein to be determined. The NMR structure is almost identical with the previously solved crystal structure, except for a disordered putative RNA-binding domain at the N-terminus. Studies of the chemical shift perturbations caused by the binding of single-stranded DNA and mutational analyses have identified the disordered region at the N-termini as the prime site for nucleic acid binding. In addition, residues in the beta-sheet region also showed significant perturbations. Mapping of the locations of these residues onto the helical model observed in the crystal revealed that these two regions are parts of the interior lining of the positively charged helical groove, supporting the hypothesis that the helical oligomer may form in solution.
Abdollahi, Abbas; Talib, Mansor Abu; Yaacob, Siti Nor; Ismail, Zanariah
2016-04-01
Recent evidence suggests that suicidal ideation has increased among Malaysian college students over the past two decades; therefore, it is essential to increase our knowledge concerning the etiology of suicidal ideation among Malaysian college students. This study was conducted to examine the relationships between problem-solving skills, hopelessness, and suicidal ideation among Malaysian college students. The participants included 500 undergraduate students from two Malaysian public universities who completed the self-report questionnaires. Structural equation modeling estimated that college students with poor problem-solving confidence, external personal control of emotion, and avoiding style were more likely to report suicidal ideation. Hopelessness partially mediated the relationship between problem-solving skills and suicidal ideation. These findings reinforce the importance of poor problem-solving skills and hopelessness as risk factors for suicidal ideation among college students.
NASA Astrophysics Data System (ADS)
Gong, Weiwei; Zhou, Xu
2017-06-01
In computer science, the Boolean satisfiability problem (SAT) is the problem of determining whether there exists an interpretation that satisfies a given Boolean formula. SAT was one of the first problems proven to be NP-complete, and it is fundamental to artificial intelligence, algorithm design, and hardware design. This paper reviews the main SAT-solver algorithms of recent years, including serial SAT algorithms, parallel SAT algorithms, SAT algorithms based on GPUs, and SAT algorithms based on FPGAs. The development of SAT solving is analyzed comprehensively in this paper. Finally, several possible directions for the development of the SAT problem are proposed.
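As a concrete reference point for the serial algorithms surveyed above, here is a minimal DPLL-style backtracking solver, deliberately without the unit propagation, clause learning, or parallel/GPU/FPGA refinements such a review covers. Clauses use the common convention of nonzero integers, negative meaning negated.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL-style SAT solver. `clauses` is a list of lists of
    nonzero ints (e.g. [[1, 2], [-1]] means (x1 or x2) and (not x1)).
    Returns a satisfying {variable: bool} assignment, or None if UNSAT."""
    assignment = dict(assignment or {})
    # Simplify the formula under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                   # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                 # every clause satisfied
    var = abs(simplified[0][0])           # branch on the first free variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

For instance, on (x1 or x2) and (not x1) and (not x2 or x3) the solver must set x1 false and x2, x3 true, while (x1) and (not x1) is reported unsatisfiable.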
A parallel-machine scheduling problem with two competing agents
NASA Astrophysics Data System (ADS)
Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya
2017-06-01
Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.
ERIC Educational Resources Information Center
Capraro, Mary Margaret; An, Song A.; Ma, Tingting; Rangel-Chavez, A. Fabiola; Harbaugh, Adam
2012-01-01
Open-ended problems have been regarded as powerful tools for teaching mathematics. This study examined the problem solving of eight mathematics/science middle-school teachers. A semi-structured interview was conducted with the teachers (PTs) after they completed an open-ended triangle task with four unique solutions. Of particular emphasis was how the PTs used a…
Mathematical Ability Relies on Knowledge, Too
ERIC Educational Resources Information Center
Sweller, John; Clark, Richard E.; Kirschner, Paul A.
2011-01-01
Recent "reform" curricula both ignore the absence of supporting data and completely misunderstand the role of problem solving in cognition. If, the argument goes, teachers are not really teaching people mathematics but rather are teaching them some form of general problem solving, then mathematical content can be reduced in importance. According…
Assessment for Intervention: A Problem-Solving Approach
ERIC Educational Resources Information Center
Brown-Chidsey, Rachel, Ed.
2005-01-01
This cutting-edge volume offers a complete primer on conducting problem-solving based assessments in school or clinical settings. Presented are an effective framework and up-to-date tools for identifying and remediating the many environmental factors that may contribute to a student's academic, emotional, or behavioral difficulties, and for…
Using Algorithms in Solving Synapse Transmission Problems.
ERIC Educational Resources Information Center
Stencel, John E.
1992-01-01
Explains how a simple three-step algorithm can aid college students in solving synapse transmission problems. Reports that not all of the students completely understood the algorithm. However, many learn a simple working model of synaptic transmission and understand why an impulse will pass across a synapse quantitatively. Students also see…
NASA Astrophysics Data System (ADS)
Reinscheid, Uwe M.
2009-01-01
The absolute configurations of two estrogenic nonylphenols were determined in solution. Neither nonylphenol, NP35 nor NP112, could be crystallized, so only solution-state methods can directly answer the question of absolute configuration. The conclusion based on experimental and calculated optical rotation and VCD data for the nonylphenol NP35 was independently confirmed by another study using a camphanoyl derivative and X-ray analysis of the obtained crystals. In the case of NP112, the experimental rotation data are inconclusive. However, the comparison between experimental and calculated VCD data allowed the determination of the absolute configuration.
Physical activity problem-solving inventory for adolescents: development and initial validation.
Thompson, Debbe; Bhatt, Riddhi; Watson, Kathy
2013-08-01
Youth encounter physical activity barriers, often called problems. The purpose of problem solving is to generate solutions to overcome the barriers. Enhancing problem-solving ability may enable youth to be more physically active. Therefore, a method for reliably assessing physical activity problem-solving ability is needed. The purpose of this research was to report the development and initial validation of the physical activity problem-solving inventory for adolescents (PAPSIA). Qualitative and quantitative procedures were used. The social problem-solving inventory for adolescents guided the development of the PAPSIA scale. Youth (14- to 17-year-olds) were recruited using standard procedures, such as distributing flyers in the community and to organizations likely to be attended by adolescents. Cognitive interviews were conducted in person. Adolescents completed pen and paper versions of the questionnaire and/or scales assessing social desirability, self-reported physical activity, and physical activity self-efficacy. An expert panel review, cognitive interviews, and a pilot study (n = 129) established content validity. Construct, concurrent, and predictive validity were also established (n = 520 youth). PAPSIA is a promising measure for assessing youth physical activity problem-solving ability. Future research will assess its validity with objectively measured physical activity.
Extremal Optimization for Quadratic Unconstrained Binary Problems
NASA Astrophysics Data System (ADS)
Boettcher, S.
We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean presentation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring small differences in couplings effectively while sufficiently aligning with the strong external fields. However, we also find a simple solution to that problem, which indicates that those external fields tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses without fields (or with very small ones). We explore the impact of the weight distributions of the QUBO formulations in the operations research literature and analyze their meaning in a spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as D-Wave.
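The Boolean-to-spin-glass transformation described above is standard; the following is a minimal sketch (not the paper's τ-EO implementation) of the substitution x_i = (1 + s_i)/2, which produces exactly the per-site external fields the abstract discusses:

```python
import numpy as np

def qubo_to_ising(Q):
    """Map a QUBO (minimize x^T Q x over x in {0,1}^n) to an Ising model
    (minimize s^T J s + h.s + c over s in {-1,+1}^n) via x_i = (1+s_i)/2."""
    Q = np.asarray(Q, dtype=float)
    Q = (Q + Q.T) / 2.0                # symmetrize; x^T Q x is unchanged
    J = Q / 4.0                        # pairwise couplings
    np.fill_diagonal(J, 0.0)           # s_i^2 = 1, so the diagonal folds into the constant
    h = Q.sum(axis=1) / 2.0            # per-site external field = half the row sum
    c = (Q.sum() + np.trace(Q)) / 4.0  # constant energy offset
    return J, h, c

def ising_energy(J, h, c, s):
    s = np.asarray(s, dtype=float)
    return float(s @ J @ s + h @ s + c)
```

Because each field h_i accumulates an entire row of Q while each coupling J_ij is a single entry divided by four, the fields are typically much larger than the couplings, which is the two-scale structure the abstract refers to.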
Li, Chih-Ying; Waid-Ebbs, Julia; Velozo, Craig A.; Heaton, Shelley C.
2016-01-01
Primary Objective Social problem solving deficits characterize individuals with traumatic brain injury (TBI). Poor social problem solving interferes with daily functioning and productive lifestyles. Therefore, it is of vital importance to use the appropriate instrument to identify deficits in social problem solving for individuals with TBI. This study investigates factor structure and item-level psychometrics of the Social Problem Solving Inventory-Revised Short Form (SPSI-R:S), for adults with moderate and severe TBI. Research Design Secondary analysis of 90 adults with moderate and severe TBI who completed the SPSI-R:S. Methods and Procedures An exploratory factor analysis (EFA), principal components analysis (PCA) and Rasch analysis examined the factor structure and item-level psychometrics of the SPSI-R:S. Main Outcomes and Results The EFA showed three dominant factors, with positively worded items represented as the most definite factor. The other two factors are negative problem solving orientation and skills; and negative problem solving emotion. Rasch analyses confirmed the three factors are each unidimensional constructs. Conclusions The total score interpretability of the SPSI-R:S may be challenging due to the multidimensional structure of the total measure. Instead, we propose using three separate SPSI-R:S subscores to measure social problem solving for the TBI population. PMID:26052731
Li, Chih-Ying; Waid-Ebbs, Julia; Velozo, Craig A; Heaton, Shelley C
2016-01-01
Social problem-solving deficits characterise individuals with traumatic brain injury (TBI), and poor social problem solving interferes with daily functioning and productive lifestyles. Therefore, it is of vital importance to use the appropriate instrument to identify deficits in social problem solving for individuals with TBI. This study investigates factor structure and item-level psychometrics of the Social Problem Solving Inventory-Revised: Short Form (SPSI-R:S), for adults with moderate and severe TBI. Secondary analysis of 90 adults with moderate and severe TBI who completed the SPSI-R:S was performed. An exploratory factor analysis (EFA), principal components analysis (PCA) and Rasch analysis examined the factor structure and item-level psychometrics of the SPSI-R:S. The EFA showed three dominant factors, with positively worded items represented as the most definite factor. The other two factors are negative problem-solving orientation and skills; and negative problem-solving emotion. Rasch analyses confirmed the three factors are each unidimensional constructs. It was concluded that the total score interpretability of the SPSI-R:S may be challenging due to the multidimensional structure of the total measure. Instead, we propose using three separate SPSI-R:S subscores to measure social problem solving for the TBI population.
Pourhassan, Mojgan; Neumann, Frank
2018-06-22
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing
2014-01-01
Electron cryomicroscopy is becoming a major experimental technique for solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plane-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the secondary structures detected in the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates a dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.
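The matching the abstract describes can be illustrated with a naive enumeration. This sketch is not DP-TOSS itself, and the cost function is hypothetical; it only shows the O(N!·2^N) search space that the paper's dynamic programming and constrained K-shortest path approach prune:

```python
from itertools import permutations, product

def best_topology(seq_helices, sticks, cost):
    """Exhaustively match sequence helices to detected sticks, trying both
    traversal directions per stick, and return the minimum-cost topology.
    This is the naive O(N! * 2^N) search that DP-TOSS avoids."""
    best, best_cost = None, float('inf')
    for perm in permutations(sticks, len(seq_helices)):
        for dirs in product((0, 1), repeat=len(seq_helices)):
            c = sum(cost(h, s, d) for h, s, d in zip(seq_helices, perm, dirs))
            if c < best_cost:
                best, best_cost = list(zip(perm, dirs)), c
    return best, best_cost

# Hypothetical cost: penalize the length mismatch between a sequence helix
# and a detected stick (a real score would use geometry and skeleton paths).
length_mismatch = lambda helix_len, stick_len, direction: abs(helix_len - stick_len)
```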
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
NASA Astrophysics Data System (ADS)
Narwadi, Teguh; Subiyanto
2017-03-01
The Travelling Salesman Problem (TSP) is one of the best-known NP-hard problems, which means that no exact algorithm is known to solve it in polynomial time. This paper presents a new application of a genetic algorithm combined with a local search technique to solve the TSP. For the local search technique, an iterative hill climbing method has been used. The system is implemented on the Android OS, because Android is now widely used around the world and is a mobile platform. It is also integrated with the Google API to obtain the geographical locations and distances of the cities and to display the route. We conducted experiments to test the behavior of the application. To test the effectiveness of the hybrid genetic algorithm (HGA), we compare it with a simple GA on 5 samples of cities in Central Java, Indonesia, with different numbers of cities. The experiments show that the average HGA solution is better than that of the simple GA in 5 tests out of 5 (100%). The results show that the hybrid genetic algorithm outperforms the genetic algorithm, especially on problems of higher complexity.
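A minimal sketch of a hybrid GA of this kind, assuming order crossover, elitist selection, and a segment-reversal (2-opt style) iterative hill climb as the local search; the paper's exact operators and parameters may differ:

```python
import random

def tour_len(tour, d):
    """Total length of a closed tour under distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb(tour, d, iters=200):
    """Iterative hill climbing: keep a random segment reversal (2-opt move)
    only when it shortens the tour."""
    best, best_len = tour[:], tour_len(tour, d)
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        cand_len = tour_len(cand, d)
        if cand_len < best_len:
            best, best_len = cand, cand_len
    return best

def order_crossover(p1, p2):
    """OX: copy a random slice from parent 1, fill the rest in parent 2's order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    rest = [city for city in p2 if city not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def hybrid_ga(d, pop_size=20, generations=20):
    """Simple GA with elitist selection; offspring are improved by local search."""
    n = len(d)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_len(t, d))
        elite = pop[:pop_size // 2]
        kids = [order_crossover(*random.sample(elite, 2))
                for _ in range(pop_size - len(elite))]
        pop = elite + [hill_climb(kid, d) for kid in kids]
    return min(pop, key=lambda t: tour_len(t, d))
```

Running the local search only on offspring, as here, is one common design choice; it keeps elite tours intact while steadily polishing new candidates.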
Contribution of problem-solving skills to fear of recurrence in breast cancer survivors.
Akechi, Tatuo; Momino, Kanae; Yamashita, Toshinari; Fujita, Takashi; Hayashi, Hironori; Tsunoda, Nobuyuki; Iwata, Hiroji
2014-05-01
Although fear of recurrence is a major concern among breast cancer survivors after surgery, no standard strategies exist that alleviate their distress. This study examined the association of patients' problem-solving skills and fear of recurrence and psychological distress among breast cancer survivors. Randomly selected, ambulatory, female patients with breast cancer participated in this study. They were asked to complete the Concerns about Recurrence Scale (CARS) and the Hospital Anxiety and Depression Scale. Multiple regression analyses were used to examine their associations. Data were obtained from 317 patients. Patients' problem-solving skills were significantly associated with all subscales of fear of recurrence and overall worries measured by the CARS. In addition, patients' problem-solving skills were significantly associated with both their anxiety and depression. Our findings warrant clinical trials to investigate effectiveness of psychosocial intervention program, including enhancing patients' problem-solving skills and reducing fear of recurrence among breast cancer survivors.
Rees, Joanna; Langdon, Peter E
2016-07-01
The purpose of this study was to investigate the relationship between depression, hopelessness, problem-solving ability and self-harming behaviours amongst people with mild intellectual disabilities (IDs). Thirty-six people with mild IDs (77.9% women, Mage = 31.77, SD = 10.73, MIQ = 62.65, SD = 5.74) who had a history of self-harm were recruited. Participants were asked to complete measures of depression, hopelessness and problem-solving ability. Cutting was most frequently observed, and depression was prevalent amongst the sample. There was a significant positive relationship between depression and hopelessness, while there was no significant relationship between self-harm and depression or hopelessness. Problem-solving ability explained 15% of the variance in self-harm scores. Problem-solving ability appears to be associated with self-harming behaviours in people with mild IDs. © 2015 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Eisenberg, Mike; Spitzer, Kathy
1998-01-01
Explains the Big6 approach to information problem-solving based on exercises that were developed for college or upper high school students that can be completed during class sessions. Two of the exercises relate to personal information problems, and one relates Big6 skill areas to course assignments. (LRW)
Crooks, Noelle M.; Alibali, Martha W.
2013-01-01
This study investigated whether activating elements of prior knowledge can influence how problem solvers encode and solve simple mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __). Past work has shown that such problems are difficult for elementary school students (McNeil and Alibali, 2000). One possible reason is that children's experiences in math classes may encourage them to think about equations in ways that are ultimately detrimental. Specifically, children learn a set of patterns that are potentially problematic (McNeil and Alibali, 2005a): the perceptual pattern that all equations follow an “operations = answer” format, the conceptual pattern that the equal sign means “calculate the total”, and the procedural pattern that the correct way to solve an equation is to perform all of the given operations on all of the given numbers. Upon viewing an equivalence problem, knowledge of these patterns may be reactivated, leading to incorrect problem solving. We hypothesized that these patterns may negatively affect problem solving by influencing what people encode about a problem. To test this hypothesis in children would require strengthening their misconceptions, and this could be detrimental to their mathematical development. Therefore, we tested this hypothesis in undergraduate participants. Participants completed either control tasks or tasks that activated their knowledge of the three patterns, and were then asked to reconstruct and solve a set of equivalence problems. Participants in the knowledge activation condition encoded the problems less well than control participants. They also made more errors in solving the problems, and their errors resembled the errors children make when solving equivalence problems. Moreover, encoding performance mediated the effect of knowledge activation on equivalence problem solving. Thus, one way in which experience may affect equivalence problem solving is by influencing what students encode about the equations. 
PMID:24324454
NASA Astrophysics Data System (ADS)
Amallynda, I.; Santosa, B.
2017-11-01
This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different speeds arranged in series with a single assembly machine. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs of different sizes. We also consider the multi-objective problem (MOP) of simultaneously minimizing mean flow time and the number of tardy products. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Since this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. The various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are implemented in Matlab. Our computational experiments indicate that the proposed problem and the four proposed algorithms can be implemented and used to solve moderately sized instances, giving efficient solutions that are close to optimal in most cases.
Completed Beltrami-Michell formulation for analyzing mixed boundary value problems in elasticity
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Kaljevic, Igor; Hopkins, Dale A.; Saigal, Sunil
1995-01-01
In elasticity, the method of forces, wherein stress parameters are considered as the primary unknowns, is known as the Beltrami-Michell formulation (BMF). The existing BMF can only solve stress boundary value problems; it cannot handle the more prevalent displacement or mixed boundary value problems of elasticity. Therefore, this formulation, which has restricted application, could not become a true alternative to Navier's displacement method, which can solve all three types of boundary value problems. The restrictions in the BMF have been alleviated by augmenting the classical formulation with a novel set of conditions identified as the boundary compatibility conditions. This new method, which completes the classical force formulation, has been termed the completed Beltrami-Michell formulation (CBMF). The CBMF can solve general elasticity problems with stress, displacement, and mixed boundary conditions in terms of stresses as the primary unknowns. The CBMF is derived from the stationary condition of the variational functional of the integrated force method. In the CBMF, stresses for kinematically stable structures can be obtained without any reference to the displacements either in the field or on the boundary. This paper presents the CBMF and its derivation from the variational functional of the integrated force method. Several examples are presented to demonstrate the applicability of the completed formulation for analyzing mixed boundary value problems under thermomechanical loads. Selected example problems include a cylindrical shell wherein membrane and bending responses are coupled, and a composite circular plate.
Thai Grade 10 and 11 Students' Conceptual Understanding and Ability to Solve Stoichiometry Problems
ERIC Educational Resources Information Center
Dahsah, Chanyah; Coll, Richard K.
2007-01-01
Stoichiometry and related concepts are an important part of student learning in chemistry. In this interpretive-based inquiry, we investigated Thai Grade 10 and 11 students' conceptual understanding and ability to solve numerical problems for stoichiometry-related concepts. Ninety-seven participants completed a purpose-designed survey instrument…
ERIC Educational Resources Information Center
Erdamar, Gurcu; Alpan, Gulgun
2013-01-01
This study aims to examine the development of preservice teachers' epistemological beliefs and problem solving skills in the process of teaching practice. Participants of this descriptive study were senior students from Gazi University's Faculty of Vocational Education ("n" = 189). They completed the Epistemological Belief Scale and…
Unpaid Child Support: The Abuse of American Values.
ERIC Educational Resources Information Center
Kobayashi, Futoshi
Noting that fewer than half the single mothers in the United States receive complete and regular child support payments, this paper discusses reasons for unpaid child support, examines whether stricter enforcement of child support obligations will help solve the overall problem, and proposes another option for solving the problem of unpaid child…
Concordancers and Dictionaries as Problem-Solving Tools for ESL Academic Writing
ERIC Educational Resources Information Center
Yoon, Choongil
2016-01-01
The present study investigated how 6 Korean ESL graduate students in Canada used a suite of freely available reference resources, consisting of Web-based corpus tools, Google search engines, and dictionaries, for solving linguistic problems while completing an authentic academic writing assignment in English. Using a mixed methods design, the…
Michael Eisenberg and Robert Berkowitz's Big6[TM] Information Problem-Solving Model.
ERIC Educational Resources Information Center
Carey, James O.
2003-01-01
Reviews the Big6 information problem-solving model. Highlights include benefits and dangers of the simplicity of the model; theories of instruction; testing of the model; the model as a process for completing research projects; and advice for school library media specialists considering use of the model. (LRW)
Hasegawa, Akira; Hattori, Yosuke; Nishimura, Haruki; Tanno, Yoshihiko
2015-06-01
The main purpose of this study was to examine whether depressive rumination and social problem solving are prospectively associated with depressive symptoms. Nonclinical university students (N = 161, 64 men, 97 women; M age = 19.7 yr., SD = 3.6, range = 18-61) recruited from three universities in Japan completed the Beck Depression Inventory-Second Edition (BDI-II), the Ruminative Responses Scale, Social Problem-Solving Inventory-Revised Short Version (SPSI-R:S), and the Means-Ends Problem-Solving Procedure at baseline, and the BDI-II again at 6 mo. later. A stepwise multiple regression analysis with the BDI-II and all subscales of the rumination and social problem solving measures as independent variables indicated that only the BDI-II scores and the Impulsivity/carelessness style subscale of the SPSI-R:S at Time 1 were significantly associated with BDI-II scores at Time 2 (β = 0.73, 0.12, respectively; independent variables accounted for 58.8% of the variance). These findings suggest that in Japan an impulsive and careless problem-solving style was prospectively associated with depressive symptomatology 6 mo. later, as contrasted with previous findings of a cycle of rumination and avoidance problem-solving style.
Computing smallest intervention strategies for multiple metabolic networks in a boolean model.
Lu, Wei; Tamura, Takeyuki; Song, Jiangning; Akutsu, Tatsuya
2015-02-01
This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbation advance in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available, but that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as bad and good bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online.
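For the Boolean model, producibility and the MKMN condition can be illustrated with a brute-force sketch. The paper itself develops ILP formulations; the reaction-set layout below (reaction id mapped to substrate and product lists) is a hypothetical encoding for illustration only:

```python
from itertools import combinations

def producible(targets, sources, reactions, knocked_out=frozenset()):
    """Boolean model: iterate to a fixed point; a compound becomes available
    once some surviving reaction has all of its substrates available."""
    avail = set(sources)
    changed = True
    while changed:
        changed = False
        for rid, (subs, prods) in reactions.items():
            if rid in knocked_out or not set(subs) <= avail:
                continue
            if not set(prods) <= avail:
                avail |= set(prods)
                changed = True
    return set(targets) <= avail

def min_knockout(sources, targets, net1, net2):
    """Smallest reaction set whose removal makes the targets non-producible
    in net1 while leaving them producible in net2 (brute force)."""
    rids = sorted(set(net1) | set(net2))
    for k in range(len(rids) + 1):
        for ko in combinations(rids, k):
            ko = frozenset(ko)
            if (not producible(targets, sources, net1, ko)
                    and producible(targets, sources, net2, ko)):
                return ko
    return None
```

The exponential loop over candidate knockouts is exactly why the NP-completeness of MKMN pushes the paper toward ILP solvers for realistic network sizes.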
Processes involved in solving mathematical problems
NASA Astrophysics Data System (ADS)
Shahrill, Masitah; Putri, Ratu Ilma Indra; Zulkardi; Prahmana, Rully Charitas Indra
2018-04-01
This study examines one of the instructional practices features utilized within the Year 8 mathematics lessons in Brunei Darussalam. The codes from the TIMSS 1999 Video Study were applied and strictly followed, and from the 183 mathematics problems recorded, there were 95 problems with a solution presented during the public segments of the video-recorded lesson sequences of the four sampled teachers. The analyses involved firstly, identifying the processes related to mathematical problem statements, and secondly, examining the different processes used in solving the mathematical problems for each problem publicly completed during the lessons. The findings revealed that for three of the teachers, their problem statements coded as `using procedures' ranged from 64% to 83%, while the remaining teacher had 40% of his problem statements coded as `making connections.' The processes used when solving the problems were mainly `using procedures', and none of the problems were coded as `giving results only'. Furthermore, all four teachers made use of making the relevant connections in solving the problems given to their respective students.
Cognitive Predictors of Everyday Problem Solving across the Lifespan.
Chen, Xi; Hertzog, Christopher; Park, Denise C
2017-01-01
An important aspect of successful aging is maintaining the ability to solve everyday problems encountered in daily life. The limited evidence today suggests that everyday problem solving ability increases from young adulthood to middle age, but decreases in older age. The present study examined age differences in the relative contributions of fluid and crystallized abilities to solving problems on the Everyday Problems Test (EPT). We hypothesized that due to diminishing fluid resources available with advanced age, crystallized knowledge would become increasingly important in predicting everyday problem solving with greater age. Two hundred and twenty-one healthy adults from the Dallas Lifespan Brain Study, aged 24-93 years, completed a cognitive battery that included measures of fluid ability (i.e., processing speed, working memory, inductive reasoning) and crystallized ability (i.e., multiple measures of vocabulary). These measures were used to predict performance on EPT. Everyday problem solving showed an increase in performance from young to early middle age, with performance beginning to decrease at about age of 50 years. As hypothesized, fluid ability was the primary predictor of performance on everyday problem solving for young adults, but with increasing age, crystallized ability became the dominant predictor. This study provides evidence that everyday problem solving ability differs with age, and, more importantly, that the processes underlying it differ with age as well. The findings indicate that older adults increasingly rely on knowledge to support everyday problem solving, whereas young adults rely almost exclusively on fluid intelligence. © 2017 S. Karger AG, Basel.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
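The classical cost function that the adiabatic algorithm anneals for set partitioning can be sketched by exhaustive search over spin configurations; this is not the quantum algorithm itself, only the O(2^n) classical search whose hardness motivates it:

```python
from itertools import product

def best_partition(nums):
    """Exhaustively minimize the set-partition cost |sum_i s_i * a_i| over spin
    assignments s_i in {-1, +1} -- the classical cost function whose ground
    state the adiabatic algorithm seeks."""
    best_s, best_cost = None, float('inf')
    for s in product((-1, 1), repeat=len(nums)):
        cost = abs(sum(si * a for si, a in zip(s, nums)))
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s, best_cost
```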
Minimum Interference Planar Geometric Topology in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Nguyen, Trac N.; Huynh, Dung T.
The approach of using topology control to reduce interference in wireless sensor networks has attracted the attention of several researchers. There are at least two definitions of interference in the literature: in a wireless sensor network, the interference at a node may be caused by an edge that is transmitting data [15], or it may occur because the node itself is within the transmission range of another [3], [1], [6]. In this paper we show that the problem of assigning power to nodes in the plane to yield a planar geometric graph whose nodes have bounded interference is NP-complete under both interference definitions. Our results provide a rigorous proof for a theorem in [15] whose original proof is unconvincing. They also address one of the open issues raised in [6], where Halldórsson and Tokuyama were concerned with the receiver model of node interference and derived an O(√Δ) upper bound for the maximum node interference of a wireless ad hoc network in the plane (Δ is the maximum interference of the so-called uniform radius network). The question as to whether this problem is NP-complete in the 2-dimensional case was left open.
Courses timetabling problem by minimizing the number of less preferable time slots
NASA Astrophysics Data System (ADS)
Oktavia, M.; Aman, A.; Bakhtiar, T.
2017-01-01
In an organization with a large number of resources, timetabling is one of the most important elements of management strategy and one of the most prone to errors. Constructing a perfect timetable is quite a task, so the aid of operations research or management strategy approaches is essential. Timetabling in educational institutions can roughly be categorized into school timetabling, course timetabling, and examination timetabling, which differ from each other in the entities involved, such as the type of events, the kind of institution, and the type and relative influence of constraints. The education timetabling problem is generally a complex combinatorial problem consisting of NP-complete sub-problems, and the requested timetable must fulfill a set of hard and soft constraints of various types. In this paper we consider a university course timetabling problem whose objective is to minimize the number of less preferable time slots. By less preferable time slots we mean those in the early morning (07.00-07.50 AM) or the late afternoon (17.00-17.50 PM), which fall outside working hours; those during the lunch break (12.00-12.50 PM); those on Wednesday 10.00-11.50 AM, which coincide with the Department Meeting; and those on Saturday, which should in fact be a day off. Timetables with a number of activities scheduled in these time slots are nevertheless commonly encountered. The course timetabling for Educational Program of General Competence (PPKU) students in the odd semester at Bogor Agricultural University (IPB) has been modelled in the framework of integer linear programming. We solved the optimization problem heuristically by categorizing all the groups into seven clusters.
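The objective can be illustrated with a tiny brute-force analogue of the ILP model; the penalized-slot set and the course/clash data below are hypothetical, chosen only to show the structure of the objective:

```python
from itertools import product

# Hypothetical less-preferable slots (early morning, late afternoon, weekend).
PENALIZED = {'Mon 07:00', 'Fri 17:00', 'Sat 09:00'}

def timetable(courses, slots, clashes):
    """Assign one slot per course so that clashing course pairs get different
    slots, minimizing how many assignments land in penalized slots
    (a brute-force analogue of the ILP objective)."""
    best, best_pen = None, float('inf')
    for assign in product(slots, repeat=len(courses)):
        if any(assign[i] == assign[j] for i, j in clashes):
            continue
        pen = sum(1 for s in assign if s in PENALIZED)
        if pen < best_pen:
            best, best_pen = dict(zip(courses, assign)), pen
    return best, best_pen
```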
Analysis of mathematical problem-solving ability based on metacognition on problem-based learning
NASA Astrophysics Data System (ADS)
Mulyono; Hadiyanti, R.
2018-03-01
Problem-solving is a primary purpose of the mathematics curriculum, and problem-solving ability is influenced by beliefs and metacognition. Metacognition, as a superordinate capability, can direct and regulate cognition, motivation, and thereby problem-solving processes. This study aims to (1) test and analyze the quality of problem-based learning and (2) investigate problem-solving ability based on metacognition. This research uses a mixed-method design. The subjects were Class XI Mathematics and Science students at Kesatrian 2 Senior High School, Semarang, classified into tacit use, aware use, strategic use, and reflective use levels. Data were collected with a scale, interviews, and tests, and processed with a proportion test, t-test, and paired-samples t-test. The results show that students at the tacit use level were able to complete all the given problems but did not understand what strategy was used or why. Students at the aware use level were able to solve the problems and to build new knowledge through problem-solving up to the indicators of understanding the problem and determining the strategies used, although not always correctly. Students at the strategic use level could apply and adopt a wide variety of appropriate strategies to solve the problems and achieved the indicator of re-examining the process and outcome. No student at the reflective use level was found in this study. Based on these results, further study of the identification of metacognition in problem-solving is suggested, with a larger sample, so that the characteristics of each metacognition level become clearer. Teachers also need deep knowledge of students' metacognitive activity and its relationship with mathematical problem-solving.
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem is solvable in polynomial time. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g., satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations. This means that the algorithm is restricted to orbits that are described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.
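The jump in difficulty from S = 2 to S > 2 can be made concrete by counting complete association hypotheses. The formula below is an illustrative simplification (it assumes exactly n observations per scan and permutation-style matching, which the paper does not state): the hypothesis space grows as (n!)^(S-1), tractable assignment structure at S = 2 but a combinatorial explosion beyond it.

```python
from math import factorial

def association_count(n, S):
    """Number of complete association hypotheses for n observations per
    scan over S scans, assuming each later scan is matched to the first
    by a permutation: (n!)**(S-1). For S = 2 the best hypothesis can be
    found in polynomial time (e.g. the Hungarian algorithm); for S > 2
    the problem is NP-hard, as the abstract notes."""
    return factorial(n) ** (S - 1)
```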
A mediational model of self-esteem and social problem-solving in anorexia nervosa.
Paterson, Gillian; Power, Kevin; Collin, Paula; Greirson, David; Yellowlees, Alex; Park, Katy
2011-01-01
Poor problem-solving and low self-esteem are frequently cited as significant factors in the development and maintenance of anorexia nervosa. The current study examines the multi-dimensional elements of these measures and postulates a model whereby self-esteem mediates the relationship between social problem-solving and anorexic pathology, and considers the implications of this pathway. Fifty-five inpatients with a diagnosis of anorexia nervosa and 50 non-clinical controls completed three standardised multi-dimensional questionnaires pertaining to social problem-solving, self-esteem and eating pathology. Significant differences were found between clinical and non-clinical samples on all measures. Within the clinical group, the elements of social problem-solving most significant to anorexic pathology were positive problem orientation, negative problem orientation and avoidance. The components of self-esteem most significant to anorexic pathology were eating, weight and shape concern but not eating restraint. The mediational model was upheld, with social problem-solving impacting on anorexic pathology through the existence of low self-esteem. Problem orientation, that is, the cognitive processes of social problem-solving, appears to be more significant than problem-solving methods in individuals with anorexia nervosa. Negative perceptions of eating, weight and shape appear to impact on low self-esteem but level of restriction does not. Finally, results indicate that self-esteem is a significant factor in the development and execution of positive or negative social problem-solving in individuals with anorexia nervosa by mediating the relationship between those two variables. Copyright © 2010 John Wiley & Sons, Ltd and Eating Disorders Association.
Minimizing the Diameter of a Network Using Shortcut Edges
NASA Astrophysics Data System (ADS)
Demaine, Erik D.; Zadimoghaddam, Morteza
We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ɛ)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 − ɛ)-approximation for the single-source version must use Ω(k log n) shortcut edges assuming P ≠ NP.
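To make the problem concrete, here is a minimal sketch of the naive greedy baseline: repeatedly add the single non-edge that most reduces the diameter. This is an assumed illustrative heuristic, not the paper's constant-factor approximation algorithm, and it recomputes the diameter by BFS from every vertex, so it only scales to small graphs.

```python
from collections import deque
from itertools import combinations

def bfs_ecc(adj, src):
    """Eccentricity of src (longest shortest path) via BFS; unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(adj):
    return max(bfs_ecc(adj, v) for v in adj)

def greedy_shortcuts(adj, k):
    """Add up to k shortcut edges, each time choosing the non-edge that
    yields the largest diameter reduction. Returns the augmented graph."""
    adj = {v: set(ns) for v, ns in adj.items()}  # copy, don't mutate input
    for _ in range(k):
        best_edge, best_d = None, diameter(adj)
        for u, v in combinations(adj, 2):
            if v in adj[u]:
                continue
            adj[u].add(v); adj[v].add(u)        # tentatively add the edge
            d = diameter(adj)
            if d < best_d:
                best_d, best_edge = d, (u, v)
            adj[u].remove(v); adj[v].remove(u)  # undo
        if best_edge is None:
            break
        u, v = best_edge
        adj[u].add(v); adj[v].add(u)
    return adj
```

On a 5-vertex path (diameter 4), one greedy shortcut joins the endpoints and halves the diameter to 2.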
Job shop scheduling problem with late work criterion
NASA Astrophysics Data System (ADS)
Piroozfard, Hamed; Wong, Kuan Yew
2015-05-01
Scheduling is considered a key task in many industries, such as project-based scheduling, crew scheduling, flight scheduling, machine scheduling, etc. In the machine scheduling area, job shop scheduling problems are considered important and highly complex, and they are characterized as NP-hard. The job shop scheduling problem with late work criterion and non-preemptive jobs is addressed in this paper. The late work criterion is a fairly new objective function; it is a qualitative measure concerned with the late parts of the jobs, unlike classical objective functions that are quantitative measures. In this work, simulated annealing is presented to solve the scheduling problem. In addition, an operation-based representation is used to encode the solution, and a neighbourhood search structure is employed to search for new solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm were compared with those of a conventional genetic algorithm, and a conclusion was drawn based on the algorithm and problem.
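The two ingredients named in the abstract, the late work objective and a swap-based neighbourhood searched by simulated annealing, can be sketched on a deliberately simplified single-machine toy (the paper's actual setting is the full job shop with an operation-based encoding). The jobs, durations, and due dates below are invented for illustration.

```python
import math
import random

# job: (processing time, due date) -- hypothetical data
JOBS = {"A": (3, 4), "B": (2, 3), "C": (4, 8)}

def late_work(seq):
    """Total late work: the portion of each job processed after its due date."""
    t, total = 0, 0
    for job in seq:
        p, d = JOBS[job]
        t += p
        total += min(p, max(0, t - d))  # at most p time units of a job can be late
    return total

def anneal(seq, temp=5.0, cooling=0.95, steps=200, seed=1):
    """Simulated annealing with a swap neighbourhood over job sequences."""
    random.seed(seed)
    cur = list(seq)
    best = cur[:]
    for _ in range(steps):
        i, k = random.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[k] = cand[k], cand[i]          # neighbourhood move: swap two jobs
        delta = late_work(cand) - late_work(cur)
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cur = cand                               # accept improving / some worsening moves
        if late_work(cur) < late_work(best):
            best = cur[:]
        temp *= cooling                              # geometric cooling schedule
    return best
```

Starting from the worst ordering, the annealer reliably reaches the optimum (total late work 2 for this instance, e.g. sequence B, A, C).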
Wade, Shari L.; Stancin, Terry; Kirkwood, Michael; Brown, Tanya Maines; McMullen, Kendra M.; Taylor, H. Gerry
2013-01-01
Objective To test the efficacy of Counselor-Assisted Problem Solving (CAPS) versus an internet resources comparison (IRC) condition in reducing behavior problems in adolescents following traumatic brain injury (TBI). Design Randomized clinical trial with interviewers naïve to treatment condition. Setting Three large tertiary children's hospitals and two general hospitals with pediatric commitment. Participants 132 children ages 12-17 years hospitalized during the previous 6 months for moderate to severe TBI. Interventions Participants in CAPS (n = 65) completed 8-12 online modules providing training in problem solving, communication skills, and self-regulation and subsequent synchronous videoconferences with a therapist. Participants in the IRC group (n = 67) received links to internet resources about pediatric TBI. Main Outcome Measures Child Behavior Checklist (CBCL) administered before and after completion of treatment (i.e., approximately six months after treatment initiation). Results Post hoc analysis of covariance (ANCOVA), controlling for pre-treatment scores, was used to examine group differences in behavior problems in the entire sample and among older (n=59) and younger adolescents (n=53). Among older but not younger adolescents, CAPS resulted in greater improvements on multiple dimensions of externalizing behavior problems than did IRC. Conclusion Online problem-solving therapy may be effective in reducing behavior problems in older adolescent survivors of moderate-severe TBI. PMID:23640543
Special Operations Research Topics 2014
2014-01-01
problems. I encourage SOF personnel to contribute their experiences and ideas to the SOF community by submitting your completed research on these... and rapid problem solving. While this can be very beneficial in high-stress, time-sensitive situations, it may not be conducive to the development... a perception that everything is important and all problems must be quickly solved. Not only does this imply that slowing down to think is a waste
Problem-solving skills and hardiness as protective factors against stress in Iranian nurses.
Abdollahi, Abbas; Talib, Mansor Abu; Yaacob, Siti Nor; Ismail, Zanariah
2014-02-01
Nursing is a stressful occupation, even when compared with other health professions; therefore, it is necessary to advance our knowledge about the protective factors that can help reduce stress among nurses. The present study sought to investigate the associations among problem-solving skills and hardiness with perceived stress in nurses. The participants, 252 nurses from six private hospitals in Tehran, completed the Personal Views Survey, the Perceived Stress Scale, and the Problem-Solving Inventory. Structural Equation Modeling (SEM) was used to analyse the data and answer the research hypotheses. As expected, greater hardiness was associated with low levels of perceived stress, and nurses low in perceived stress were more likely to be considered approachable, have a style that relied on their own sense of internal personal control, and demonstrate effective problem-solving confidence. These findings reinforce the importance of hardiness and problem-solving skills as protective factors against perceived stress among nurses, and could be important in training future nurses so that hardiness ability and problem-solving skills can be imparted, allowing nurses to have more ability to control their perceived stress.
Markovian Search Games in Heterogeneous Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Christopher H
2009-01-01
We consider how to search for a mobile evader in a large heterogeneous region when sensors are used for detection. Sensors are modeled using probability of detection. Due to environmental effects, this probability will not be constant over the entire region. We map this problem to a graph search problem and, even though deterministic graph search is NP-complete, we derive a tractable, optimal, probabilistic search strategy. We do this by defining the problem as a differential game played on a Markov chain. We prove that this strategy is optimal in the sense of Nash. Simulations of an example problem illustrate our approach and verify our claims.
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.
1971-01-01
A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.
Maddoux, John; Symes, Lene; McFarlane, Judith; Koci, Anne; Gilroy, Heidi; Fredland, Nina
2014-01-01
The environmental stress of intimate partner violence is common and often results in mental health problems of depression, anxiety, and PTSD for women and behavioral dysfunctions for their children. Problem-solving skills can serve to mitigate or accentuate the environmental stress of violence and associated impact on mental health. To better understand the relationship between problem-solving skills and mental health of abused women with children, a cross-sectional predictive analysis of 285 abused women who used justice or shelter services was completed. The women were asked about social problem-solving, and mental health symptoms of depression, anxiety, and PTSD as well as behavioral functioning of their children. Higher negative problem-solving scores were associated with significantly (P < 0.001) greater odds of having clinically significant levels of PTSD, anxiety, depression, and somatization for the woman and significantly (P < 0.001) greater odds of her child having borderline or clinically significant levels of both internalizing and externalizing behaviors. A predominately negative problem-solving approach was strongly associated with poorer outcomes for both mothers and children in the aftermath of the environmental stress of abuse. Interventions addressing problem-solving ability may be beneficial in increasing abused women's abilities to navigate the daily stressors of life following abuse.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to address the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel with its maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into the likelihood probability and an MRF into the prior probability. Because exact MAP estimation is NP-hard, we convert it into a minimum energy optimization problem and employ graph cuts to find the solution. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks second among the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
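The MAP-as-energy-minimization step can be illustrated on a toy: a data term (how well a label fits the observation) plus an MRF smoothness term (penalty when neighbouring labels disagree). The iterated conditional modes (ICM) pass below on a 1-D signal is a stand-in sketch, not the paper's graph-cut solver or its sparse-representation likelihood; all parameters here are invented.

```python
def icm(obs, labels=(0, 1), lam=1.0, iters=5):
    """Minimize  sum_i (obs[i]-x[i])**2 + lam * sum_i [x[i] != x[i+1]]
    by coordinate-wise (ICM) label updates on a 1-D signal.

    obs: observed intensities; labels: allowed label values;
    lam: smoothness weight (the MRF prior strength)."""
    # initialize each site with the label closest to its observation
    x = [min(labels, key=lambda l: (o - l) ** 2) for o in obs]
    for _ in range(iters):
        for i in range(len(x)):
            def energy(l):
                e = (obs[i] - l) ** 2                     # data term
                if i > 0:
                    e += lam * (l != x[i - 1])            # left neighbour
                if i < len(x) - 1:
                    e += lam * (l != x[i + 1])            # right neighbour
                return e
            x[i] = min(labels, key=energy)                # greedy local update
    return x
```

With enough smoothness weight, an isolated noisy voxel is relabelled to agree with its neighbours, which is exactly the regularizing role the MRF prior plays in the segmentation.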
ERIC Educational Resources Information Center
Hambrick, David Z.; Libarkin, Julie C.; Petcovic, Heather L.; Baker, Kathleen M.; Elkins, Joe; Callahan, Caitlin N.; Turner, Sheldon P.; Rench, Tara A.; LaDue, Nicole D.
2012-01-01
Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco…
Viewing or Visualising Which Concept Map Strategy Works Best on Problem-Solving Performance?
ERIC Educational Resources Information Center
Lee, Youngmin; Nelson, David W.
2005-01-01
The purpose of this study was to investigate the effects of two types of maps (generative vs. completed) and the amount of prior knowledge (high vs. low) on well-structured and ill-structured problem-solving performance. Forty-four undergraduates who were registered in an introductory instructional technology course participated in the study.…
ERIC Educational Resources Information Center
Beal, Carole R.; Rosenblum, L. Penny
2018-01-01
Introduction: The authors examined a tablet computer application (iPad app) for its effectiveness in helping students studying prealgebra to solve mathematical word problems. Methods: Forty-three visually impaired students (that is, those who are blind or have low vision) completed eight alternating mathematics units presented using their…
Cognitive Predictors of Everyday Problem Solving across the Lifespan
Chen, Xi; Hertzog, Christopher; Park, Denise C.
2017-01-01
Background An important aspect of successful aging is maintaining the ability to solve everyday problems encountered in daily life. The limited evidence today suggests that everyday problem solving ability increases from young adulthood to middle age, but decreases in older age. Objectives The present study examined age differences in the relative contributions of fluid and crystallized abilities to solving problems on the Everyday Problems Test (EPT; [1]). We hypothesized that due to diminishing fluid resources available with advanced age, crystallized knowledge would become increasingly important in predicting everyday problem solving with greater age. Method Two hundred and twenty-one healthy adults from the Dallas Lifespan Brain Study, aged 24–93 years, completed a cognitive battery that included measures of fluid ability (i.e., processing speed, working memory, inductive reasoning) and crystallized ability (i.e., multiple measures of vocabulary). These measures were used to predict performance on the Everyday Problems Test. Results Everyday problem solving showed an increase in performance from young to early middle age, with performance beginning to decrease at about age of fifty. As hypothesized, fluid ability was the primary predictor of performance on everyday problem solving for young adults, but with increasing age, crystallized ability became the dominant predictor. Conclusion This study provides evidence that everyday problem solving ability differs with age, and, more importantly, that the processes underlying it differ with age as well. The findings indicate that older adults increasingly rely on knowledge to support everyday problem solving, whereas young adults rely almost exclusively on fluid intelligence. PMID:28273664
Time to Completion of Web-Based Physics Problems with Tutoring
Warnakulasooriya, Rasil; Palazzo, David J; Pritchard, David E
2007-01-01
We studied students performing a complex learning task, that of solving multipart physics problems with interactive tutoring on the web. We extracted the rate of completion and fraction completed as a function of time on task by retrospectively analyzing the log of student–tutor interactions. There was a spontaneous division of students into three groups, the central (and largest) group (about 65% of the students) being those who solved the problem in real time after multiple interactions with the tutorial program (primarily receiving feedback to submitted wrong answers and requesting hints). This group displayed a sigmoidal fraction-completed curve as a function of logarithmic time. The sigmoidal shape is qualitatively flatter for problems that do not include hints and wrong-answer responses. We argue that the group of students who respond quickly (about 10% of the students) is obtaining the answer from some outside source. The third group (about 25% of the students) represents those who interrupt their solution, presumably to work offline or to obtain outside help. PMID:17725054
An improved stochastic fractal search algorithm for 3D protein structure prediction.
Zhou, Changjun; Sun, Chuan; Wang, Bin; Wang, Xiaojun
2018-05-03
Protein structure prediction (PSP) is a significant area for biological information research, disease treatment, and drug development, among others. In this paper, three-dimensional structures of proteins are predicted from the known amino acid sequences, and the structure prediction problem is transformed into a typical NP problem by an AB off-lattice model. This work applies a novel improved Stochastic Fractal Search algorithm (ISFS) to solve the problem. The Stochastic Fractal Search algorithm (SFS) is an effective evolutionary algorithm that performs well in exploring the search space but sometimes falls into local minima. To address this weakness, Lévy flight and internal feedback information are introduced in ISFS. In the experimental process, simulations are conducted with the ISFS algorithm on Fibonacci sequences and real peptide sequences. Experimental results show that ISFS performs more efficiently and robustly in terms of finding the global minimum and avoiding getting stuck in local minima.
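The Lévy-flight ingredient can be sketched in a few lines. Heavy-tailed steps mix many small local moves with occasional long jumps, which is what lets the search escape local minima. The step generator below uses Mantegna's algorithm, a common way to draw approximate Lévy steps; whether ISFS uses this exact construction is an assumption, and `levy_perturb` is a hypothetical helper, not the paper's update rule.

```python
import math
import random

def levy_step(beta=1.5):
    """One approximate Lévy-flight step length (Mantegna's algorithm).

    beta in (1, 2] controls the tail heaviness of the step distribution."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)   # heavy-tailed: mostly small, rarely huge

def levy_perturb(x, scale=0.01):
    """Perturb a candidate solution vector x (e.g. a conformation's
    angle variables) with independent scaled Lévy steps."""
    return [xi + scale * levy_step() for xi in x]
```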
NASA Astrophysics Data System (ADS)
Aurora, Tarlok
2005-04-01
In a calculus-based introductory physics course, students were assigned to write out the statements of word problems (along with any accompanying diagrams), analyze them, identify the important concepts and equations, and try to solve the end-of-chapter homework problems. They were required to bring their written assignments to class until the chapter was completed in lecture; these were quickly checked at the beginning of class. In addition, re-doing selected solved examples in the textbook was assigned as homework. Where possible, students were asked to look for similarities between the solved examples and the end-of-chapter problems, or occasionally these were brought to the students' attention. It was observed that many students were able to solve several of the solved examples on the test even though the instructor had not solved them in class. This was seen as an improvement over previous years, and it made the students more responsible for their learning. Another benefit was that it alleviated the problems previously created by many students not bringing their textbooks to class, and it allowed more time for problem solving and discussion in class.
A review on simple assembly line balancing type-e problem
NASA Astrophysics Data System (ADS)
Jusop, M.; Rashid, M. F. F. Ab
2015-12-01
Simple assembly line balancing (SALB) is an attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the simple assembly line balancing problem of Type E (SALB-E), since it is a general and complex problem. The SALB-E problem is the variant of SALB that considers the number of workstations and the cycle time simultaneously, with the purpose of maximising line efficiency. This paper reviews previous work done to optimise the SALB-E problem. It also reviews the Genetic Algorithm approaches that have been used to optimise SALB-E. From the review, it was found that none of the existing works is concerned with resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.
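The core feasibility rules of SALB (precedence satisfied, station load within the cycle time) can be shown with a small station-oriented greedy heuristic. This is an illustrative baseline with a fixed cycle time and a longest-task-first priority rule, not the SALB-E setting of the review (which varies cycle time and station count together) and not any specific algorithm from the surveyed papers; the task data are invented.

```python
def balance(tasks, precedence, cycle_time):
    """Greedy station-oriented line balancing.

    tasks: {name: duration}; precedence: {name: set of predecessor names}.
    Opens stations one at a time and fills each with the longest ready task
    that still fits, respecting precedence. Returns a list of stations."""
    done, stations = set(), []
    while len(done) < len(tasks):
        load, station = 0, []
        while True:
            ready = [t for t in tasks
                     if t not in done
                     and precedence.get(t, set()) <= done   # all predecessors assigned
                     and load + tasks[t] <= cycle_time]     # fits in remaining capacity
            if not ready:
                break
            task = max(ready, key=tasks.__getitem__)        # longest-task-first rule
            station.append(task)
            done.add(task)
            load += tasks[task]
        if not station:
            raise ValueError("a task exceeds the cycle time or precedence is cyclic")
        stations.append(station)
    return stations
```

Line efficiency for a solution is then total task time divided by (number of stations × cycle time), the quantity SALB-E maximises.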
Worry, beliefs about worry and problem solving in young children.
Wilson, Charlotte; Hughes, Claire
2011-10-01
Childhood worry is common, and yet little is known about why some children develop pathological worry and others do not. Two theories of adult worry that are particularly relevant to children are Davey's problem-solving model, in which perseverative worry occurs as a result of thwarted problem-solving attempts, and Wells' metacognitive model, in which positive and negative beliefs about worry interact to produce pathological worry. The present study aimed to test the hypotheses that levels of worry in young children are associated with poor or avoidant solution generation for social problems, and with poor problem-solving confidence. It also aimed to explore beliefs about worry in this age group, and to examine their relationships with worry, anxiety and age. Fifty-seven young children (6-10 years) responded to open-ended questions about social problem-solving situations and beliefs about worry, and completed measures of worry, anxiety and problem-solving confidence. Children with higher levels of worry and anxiety reported using more avoidant solutions in social problem situations, and children's low confidence in problem solving was associated with high levels of worry. Children as young as 6 years old reported both positive and negative beliefs about worry, but neither was associated with age, gender, or level of anxiety or worry. Results indicate similarities between adults and children in the relationships between problem-solving variables and worry, but not in the relationships between beliefs about worry and worry. This may be due to developmental factors, or may be the result of measurement issues.
Data Understanding Applied to Optimization
NASA Technical Reports Server (NTRS)
Buntine, Wray; Shilman, Michael
1998-01-01
The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that such problems can only be resolved by increasingly smart problem-specific knowledge, possibly for use in some general-purpose algorithms. Visualization and data analysis offer an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.
AI tools in computer based problem solving
NASA Technical Reports Server (NTRS)
Beane, Arthur J.
1988-01-01
The use of computers to solve value-oriented, deterministic, algorithmic problems has evolved a structured life-cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low-cost, high-performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin merging both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples of both development and deployment of applications involving a blending of AI and traditional techniques are given.
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-04-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch starts; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean temperature datasets and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme.
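The "maximum leaf nodes" side of the MLMS model can be illustrated with a simple greedy heuristic: grow a spanning tree so that as few nodes as possible become internal (relay) nodes, leaving many leaves. This sketch is an assumed illustration of the underlying maximum-leaf spanning tree idea, not the paper's scalable greedy algorithm, and the graph is invented.

```python
def greedy_max_leaf_tree(adj):
    """Greedy maximum-leaf spanning tree heuristic.

    adj: {node: set of neighbours} for a connected undirected graph.
    Repeatedly expands the tree node that can attach the most new
    neighbours at once, so few nodes end up internal. Returns
    (tree edges, set of leaf nodes)."""
    root = max(adj, key=lambda v: len(adj[v]))      # start at the highest-degree node
    in_tree, edges = {root}, []
    while len(in_tree) < len(adj):
        # tree node with the most neighbours still outside the tree
        u = max(in_tree, key=lambda v: len(set(adj[v]) - in_tree))
        new = set(adj[u]) - in_tree
        if not new:
            break                                   # graph is disconnected
        for v in new:                               # attach all of them to u
            edges.append((u, v))
            in_tree.add(v)
    internal = {u for u, _ in edges}
    leaves = in_tree - internal
    return edges, leaves
```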
Determination of Algorithm Parallelism in NP Complete Problems for Distributed Architectures
1990-03-05
[Garbled OCR fragment from the thesis appendices: stack abstract-data-type declarations, e.g. OpenStack(S_NODE **TopPtr), FlushStack(S_NODE **TopPtr), PushOnStack(S_NODE **TopPtr, ITEM *NewItemPtr).]
A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks
Gil, Joon-Min; Han, Youn-Hee
2011-01-01
As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387
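The greedy baseline described in the abstract, pick the sensor that covers the most still-uncovered targets until all targets are covered, can be sketched directly. The function below is an assumed minimal version of that baseline for a single cover set (the full MSCD problem additionally schedules many disjoint cover sets over time and models directional sensing angles); sensor and target identifiers are invented.

```python
def greedy_cover(targets, sensors):
    """Greedy set cover: choose sensors until every target is covered.

    targets: iterable of target ids; sensors: {sensor_id: set of targets
    it covers}. Returns the chosen sensor ids, or None if the targets
    cannot all be covered."""
    uncovered, chosen = set(targets), []
    while uncovered:
        # sensor covering the most still-uncovered targets
        best = max(sensors, key=lambda s: len(sensors[s] & uncovered), default=None)
        if best is None or not sensors[best] & uncovered:
            return None                      # no sensor helps: coverage impossible
        chosen.append(best)
        uncovered -= sensors[best]
    return chosen
```

In a lifetime-maximization scheme, repeated calls like this (excluding exhausted sensors) would yield successive cover sets; the paper's genetic algorithm searches for better collections of cover sets than this one-at-a-time greedy construction.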
NASA Astrophysics Data System (ADS)
Steen-Eibensteiner, Janice Lee
2006-07-01
A strong science knowledge base and problem solving skills have always been highly valued for employment in the science industry. Skills currently needed for employment include being able to problem solve (Overtoom, 2000). Academia also recognizes the need for effectively teaching students to apply problem solving skills in clinical settings. This thesis investigates how students solve complex science problems in an academic setting in order to inform the development of problem solving skills for the workplace. Students' use of problem solving skills in the form of learned concepts and procedural knowledge was studied as students completed a problem that might come up in real life. Students were taking a community college sophomore biology course, Human Anatomy & Physiology II. The problem topic was negative feedback inhibition of the thyroid and parathyroid glands. The research questions answered were (1) How well do community college students use a complex of conceptual knowledge when solving a complex science problem? (2) What conceptual knowledge are community college students using correctly, incorrectly, or not using when solving a complex science problem? (3) What problem solving procedural knowledge are community college students using successfully, unsuccessfully, or not using when solving a complex science problem? From the whole class the high academic level participants performed at a mean of 72% correct on chapter test questions which was a low average to fair grade of C-. The middle and low academic participants both failed (F) the test questions (37% and 30% respectively); 29% (9/31) of the students show only a fair performance while 71% (22/31) fail. From the subset sample population of 2 students each from the high, middle, and low academic levels selected from the whole class 35% (8/23) of the concepts were used effectively, 22% (5/23) marginally, and 43% (10/23) poorly. 
Only one concept was used incorrectly, by 3/6 of the students, and was identified as a misconception. One of 21 (5%) problem-solving pathway characteristics was used effectively, 7 (33%) marginally, and 13 (62%) poorly. Very few (0 to 4) problem-solving pathway characteristics were used unsuccessfully; most were simply not used.
Better approximation guarantees for job-shop scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, L.A.; Paterson, M.; Srinivasan, A.
1997-06-01
Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.
Maurex, Liselotte; Lekander, Mats; Nilsonne, Asa; Andersson, Eva E; Asberg, Marie; Ohman, Arne
2010-09-01
The primary aim of this study was to compare the retrieval of autobiographical memory and the social problem-solving performance of individuals with borderline personality disorder (BPD) and a history of suicide attempts, with and without concurrent diagnoses of depression and/or post-traumatic stress disorder (PTSD), to that of controls. Additionally, the relationships between autobiographical memory, social problem-solving skills, and various clinical characteristics were examined in the BPD group. Individuals with BPD who had made at least two suicide attempts were compared to controls with regard to specificity of autobiographical memory and social problem-solving skills. Autobiographical memory specificity and social problem-solving skills were further studied in the BPD group by comparing depressed participants to non-depressed participants; autobiographical memory specificity was also studied by comparing participants with and without PTSD. A total of 47 women with a diagnosis of BPD and 30 controls completed the Autobiographical Memory Test, assessing memory specificity, and the Means-Ends Problem-Solving procedure, measuring social problem-solving skills. The prevalence of suicidal/self-injurious behaviour, and exposure to violence, was also assessed in the BPD group. Compared to controls, participants with BPD showed reduced specificity of autobiographical memory, irrespective of concurrent depression, previous depression, or concurrent PTSD. The depressed BPD group displayed poor problem-solving skills. Further, an association between unspecific memory and poor problem-solving was found in the BPD group. Our results confirmed that reduced specificity of autobiographical memory is an important characteristic of BPD individuals with a history of suicide attempts, independent of depression or PTSD. Reduced specificity of autobiographical memory was further related to poor social problem-solving capacity in the BPD group.
NASA Astrophysics Data System (ADS)
Jafari, Hamed; Salmasi, Nasser
2015-09-01
The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to nurses in order to satisfy the hospital's demand during the planning horizon while considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off by considering several important factors, such as the hospital's policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon, in one of the largest hospitals in Iran, Milad Hospital. Due to the shortage of available nurses, the minimum total number of required nurses is determined first. Then, a mathematical programming model is proposed to solve the problem optimally. Since the proposed research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations at Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49% compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules provided by the head nurses.
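The SA loop described in the abstract (an initial feasible solution, neighborhood moves, and temperature-controlled acceptance) follows the standard simulated-annealing skeleton. A minimal generic sketch is shown below; the toy shift-balancing objective, the geometric cooling schedule, and all parameter values are illustrative assumptions, not the paper's model:

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=100.0, cooling=0.95, steps=2000, seed=0):
    """Generic simulated-annealing skeleton (illustrative, not the paper's SA)."""
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        cand_cost = cost(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_cost <= current_cost or rng.random() < math.exp((current_cost - cand_cost) / t):
            current, current_cost = cand, cand_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        t *= cooling  # geometric cooling schedule (an assumption)
    return best, best_cost

# Toy example: assign 6 shifts to 3 nurses, minimizing workload imbalance.
shifts, nurses = 6, 3

def cost(assign):
    loads = [assign.count(n) for n in range(nurses)]
    return max(loads) - min(loads)  # imbalance penalty

def neighbor(assign, rng):
    a = list(assign)
    a[rng.randrange(shifts)] = rng.randrange(nurses)  # reassign one shift
    return tuple(a)

best, best_cost = simulated_annealing(tuple(0 for _ in range(shifts)), neighbor, cost)
print(best, best_cost)
```

A real NSP objective would replace `cost` with preference and regulation penalties, and `neighbor` with the paper's specialized neighborhood structures.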
EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.
Hadinia, M; Jafari, R; Soleimani, M
2016-06-01
This paper presents the application of the hybrid finite element-element-free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete-electrode model. The finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques; however, the FE technique suffers from meshing difficulties, and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to combine the advantages of the FE and EFG methods: the complete-electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is also applied to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
Investigating the role of future thinking in social problem solving.
Noreen, Saima; Whyte, Katherine E; Dritschel, Barbara
2015-03-01
There is well-established evidence that both rumination and depressed mood negatively impact the ability to solve social problems. A preliminary stage of the social problem solving process may be the process of catapulting oneself forward in time to think about the consequences of a problem before attempting to solve it. The aim of the present study was to examine how thinking about the consequences of a social problem being resolved or unresolved prior to solving it influences the solution of the problem as a function of levels of rumination and dysphoric mood. Eighty-six participants initially completed the Beck Depression Inventory-II (BDI-II) and the Ruminative Response Scale (RRS). They were then presented with six social problems and generated consequences for half of the problems being resolved and half of the problems remaining unresolved. Participants then solved some of the problems and, following a delay, were asked to recall all of the consequences previously generated. Participants reporting higher levels of depressed mood and rumination were less effective at generating problem solutions. Specifically, those reporting higher levels of rumination produced less effective solutions for social problems for which they had previously generated unresolved rather than resolved consequences. We also found that individuals higher in rumination, irrespective of depressed mood, recalled more of the unresolved consequences in a subsequent memory test. As participants did not solve problems for scenarios where no consequences were generated, no baseline measure of problem solving was obtained. Our results suggest that thinking about the consequences of a problem remaining unresolved may impair the generation of effective solutions in individuals with higher levels of rumination. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Sousa, Fernando Cardoso; Monteiro, Ileana Pardal; Pellissier, René
2014-01-01
This article presents the development of a small-world network using an adapted version of the large-group problem-solving method "Future Search." Two management classes in a higher education setting were selected and required to plan a project. The students completed a survey focused on the frequency of communications before and after…
ERIC Educational Resources Information Center
Tai, Robert H.; Loehr, John F.; Brigham, Frederick J.
2006-01-01
This pilot study investigated the capacity of eye-gaze tracking to identify differences in problem-solving behaviours within a group of individuals who possessed varying degrees of knowledge and expertise in three disciplines of science (biology, chemistry and physics). The six participants, all pre-service science teachers, completed an 18-item…
ERIC Educational Resources Information Center
Wai, Nu Nu; Hirakawa, Yukiko
2001-01-01
Studied the participation and performance of upper secondary school teachers in Japan through surveys completed by 360 Geography teachers. Findings suggest that the importance of developing problem-solving skills is widely recognized among these teachers. Implementing training in such skills is much more difficult. Developing effective teaching…
ERIC Educational Resources Information Center
Ovington, Linda A.; Saliba, Anthony J.; Goldring, Jeremy
2016-01-01
This article reports the development of a brief self-report measure of dispositional insight problem solving, the Dispositional Insight Scale (DIS). From a representative Australian database, 1,069 adults (536 women and 533 men) completed an online questionnaire. An exploratory and confirmatory factor analysis revealed a 5-item scale, with all…
The Complex Route to Success: Complex Problem-Solving Skills in the Prediction of University Success
ERIC Educational Resources Information Center
Stadler, Matthias J.; Becker, Nicolas; Greiff, Samuel; Spinath, Frank M.
2016-01-01
Successful completion of a university degree is a complex matter. Based on considerations regarding the demands of acquiring a university degree, the aim of this paper was to investigate the utility of complex problem-solving (CPS) skills in the prediction of objective and subjective university success (SUS). The key finding of this study was that…
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The method is based on integrating the concepts of systems analysis with the techniques of computer graphics. The concepts behind the method are very general in nature; this paper, however, illustrates the application of the method to solving gas well completion design problems. Use of the method will save time and improve the efficiency of such design and analysis work. The method can be extended to other design and analysis aspects of oil and gas wells.
On the Critical Behaviour, Crossover Point and Complexity of the Exact Cover Problem
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Shumow, Daniel; Koga, Dennis (Technical Monitor)
2003-01-01
Research into quantum algorithms for NP-complete problems has rekindled interest in the detailed study of a broad class of combinatorial problems. A recent paper applied the quantum adiabatic evolution algorithm to the Exact Cover problem for 3-sets (EC3) and provided empirical evidence that the algorithm was polynomial. In this paper we provide a detailed study of the characteristics of the Exact Cover problem. We present the annealing approximation applied to EC3, which gives an over-estimate of the phase transition point, and we also identify the phase transition point empirically. We also study the complexity of two classical algorithms on this problem: Davis-Putnam and simulated annealing. For these algorithms, EC3 is significantly easier than 3-SAT.
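For concreteness, an EC3 instance asks whether some subcollection of given 3-sets covers the universe with every element appearing exactly once. The exponential-time brute-force sketch below only illustrates the problem statement; it is unrelated to the annealing or Davis-Putnam algorithms the paper studies, and the instance at the bottom is invented for illustration:

```python
from itertools import combinations

def exact_cover_3sets(universe, triples):
    """Brute-force EC3: find disjoint 3-sets covering `universe` exactly, or None."""
    universe = frozenset(universe)
    if len(universe) % 3:
        return None
    k = len(universe) // 3  # an exact cover must use exactly |U|/3 triples
    for choice in combinations(triples, k):
        covered = [x for t in choice for x in t]
        # Exact cover: no element repeated, and every element hit.
        if len(covered) == len(universe) and set(covered) == universe:
            return list(choice)
    return None

cover = exact_cover_3sets(range(6), [(0, 1, 2), (2, 3, 4), (3, 4, 5), (1, 2, 3)])
print(cover)  # [(0, 1, 2), (3, 4, 5)]
```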
Burton, Catherine L; Strauss, Esther; Hultsch, David F; Hunter, Michael A
2009-09-01
The purpose of the present study was to investigate whether inconsistency in reaction time (RT) is predictive of older adults' ability to solve everyday problems. A sample of 304 community dwelling non-demented older adults, ranging in age from 62 to 92, completed a measure of everyday problem solving, the Everyday Problems Test (EPT). Inconsistency in latencies across trials was assessed on four RT tasks. Performance on the EPT was found to vary according to age and cognitive status. Both mean latencies and inconsistency were significantly associated with EPT performance, such that slower and more inconsistent RTs were associated with poorer everyday problem solving abilities. Even after accounting for age, education, and mean level of performance, inconsistency in reaction time continued to account for a significant proportion of the variance in EPT scores. These findings suggest that indicators of inconsistency in RT may be of functional relevance.
NASA Astrophysics Data System (ADS)
Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.
2014-04-01
The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no efficient algorithm to reach the optimal solution of the problem. To minimize the holding, delay, and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions and uses an improved heuristic, called the iterated swap procedure, to improve the initial solutions. We consider a make-to-order production approach in which some sequences between jobs are treated as tabu, based on a maximum allowable setup cost. The results are compared to some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.
Modification of Prim’s algorithm on complete broadcasting graph
NASA Astrophysics Data System (ADS)
Dairina; Arif, Salmawaty; Munzir, Said; Halfiani, Vera; Ramli, Marwan
2017-09-01
Broadcasting is the dissemination of information from one object to another through communication between two objects in a network. Broadcasting among n objects can be accomplished with n - 1 communications and a minimum time of ⌈log₂ n⌉ units. In this paper, broadcasting on weighted graphs is considered, and the minimum weight of a complete broadcasting graph is determined. A broadcasting graph is said to be complete if every pair of vertices is connected; thus, determining the minimum weight of a complete broadcasting graph is equivalent to determining the minimum spanning tree of a complete graph. Kruskal's and Prim's algorithms are used to determine the minimum weight of a complete broadcasting graph regardless of the minimum time ⌈log₂ n⌉, and a modified Prim's algorithm is developed for the problem with the minimum-time constraint ⌈log₂ n⌉. As an example case, the training-of-trainers problem is solved using these algorithms.
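The core reduction above, finding a minimum spanning tree of a complete weighted graph, can be sketched with plain Prim's algorithm. This is the textbook version, without the broadcast-time modification the paper develops, and the 3-vertex weight matrix is an invented example:

```python
import heapq

def prim_mst(n, weight):
    """Prim's algorithm on a complete graph with vertices 0..n-1.
    `weight[u][v]` is the edge weight; returns (total weight, tree edges)."""
    visited = [False] * n
    edges = []
    total = 0
    heap = [(0, 0, -1)]  # (edge weight, vertex, parent); start from vertex 0
    while heap:
        w, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue  # a cheaper edge already reached u
        visited[u] = True
        if parent >= 0:
            total += w
            edges.append((parent, u))
        for v in range(n):
            if not visited[v] and v != u:
                heapq.heappush(heap, (weight[u][v], v, u))
    return total, edges

w = [[0, 1, 4],
     [1, 0, 2],
     [4, 2, 0]]
total, tree = prim_mst(3, w)
print(total, tree)  # 3 [(0, 1), (1, 2)]
```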
Communication and complexity in a GRN-based multicellular system for graph colouring.
Buck, Moritz; Nehaniv, Chrystopher L
2008-01-01
Artificial Genetic Regulatory Networks (GRNs) are interesting control models owing to their simplicity and versatility. They can be easily implemented, evolved and modified, and their similarity to their biological counterparts makes them interesting for simulations of life-like systems as well. These aspects suggest they may be ideal control systems for distributed computing in diverse situations, but to be usable for such applications the computational power and evolvability of GRNs need to be studied. In this research we propose a simple distributed system implementing GRNs to solve the well-known NP-complete graph colouring problem. Every node (cell) of the graph to be coloured is controlled by an instance of the same GRN. All the cells communicate directly with their immediate neighbours in the graph so as to set up a good colouring. The quality of this colouring directs the evolution of the GRNs using a genetic algorithm. We then observe the quality of the colouring for two different graphs according to different communication protocols and the number of different proteins in the cell (a measure for the possible complexity of a GRN). These two factors, the main scalability issues that any computational paradigm raises, are then discussed.
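As a point of reference for the graph colouring task the GRN cells tackle, the classical sequential greedy heuristic is sketched below. This is a standard baseline, not the paper's GRN-based distributed scheme, and the 4-cycle example is invented:

```python
def greedy_colouring(adj):
    """Greedy graph colouring: each vertex takes the smallest colour not used
    by its already-coloured neighbours. `adj` maps a vertex to its neighbours."""
    colour = {}
    for v in adj:  # insertion order; degree orderings often do better
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# A 4-cycle is 2-colourable.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
col = greedy_colouring(adj)
print(col)  # {0: 0, 1: 1, 2: 0, 3: 1}
```

The distributed GRN approach differs in that each vertex decides its colour locally from neighbour messages, with no global ordering.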
How do Rumination and Social Problem Solving Intensify Depression? A Longitudinal Study.
Hasegawa, Akira; Kunisato, Yoshihiko; Morimoto, Hiroshi; Nishimura, Haruki; Matsuda, Yuko
2018-01-01
In order to examine how rumination and social problem solving intensify depression, the present study investigated longitudinal associations among each dimension of rumination and social problem solving and evaluated aspects of these constructs that predicted subsequent depression. A three-wave longitudinal study, with an interval of 4 weeks between waves, was conducted. Japanese university students completed the Beck Depression Inventory-Second Edition, Ruminative Responses Scale, Social Problem-Solving Inventory-Revised Short Version, and Interpersonal Stress Event Scale on three occasions 4 weeks apart (n = 284 at Time 1, 198 at Time 2, 165 at Time 3). Linear mixed models were analyzed to test whether each variable predicted subsequent depression, rumination, and each dimension of social problem solving. Rumination and negative problem orientation demonstrated a mutually enhancing relationship. Because these two variables were not associated with interpersonal conflict during the subsequent 4 weeks, rumination and negative problem orientation appear to strengthen each other without environmental change. Rumination and impulsivity/carelessness style were associated with subsequent depressive symptoms, after controlling for the effect of initial depression. Because rumination and impulsivity/carelessness style were not concurrently and longitudinally associated with each other, rumination and impulsive/careless problem solving style appear to be independent processes that serve to intensify depression.
McCann, Terence V; Cotton, Sue M; Lubman, Dan I
2017-08-01
Caring for young people with first-episode psychosis is difficult and demanding, and has detrimental effects on carers' well-being, with few evidence-based resources available to assist carers to deal with the problems they are confronted with in this situation. We aimed to examine if completion of a self-directed problem-solving bibliotherapy by first-time carers of young people with first-episode psychosis improved their social problem solving compared with carers who only received treatment as usual. A randomized controlled trial was carried out through two early intervention psychosis services in Melbourne, Australia. A sample of 124 carers were randomized to problem-solving bibliotherapy or treatment as usual. Participants were assessed at baseline, 6- and 16-week follow-up. Intent-to-treat analyses were used and showed that recipients of bibliotherapy had greater social problem-solving abilities than those receiving treatment as usual, and these effects were maintained at both follow-up time points. Our findings affirm that bibliotherapy, as a low-cost complement to treatment as usual for carers, had some effects in improving their problem-solving skills when addressing problems related to the care and support of young people with first-episode psychosis. © 2015 The Authors. Early Intervention in Psychiatry published by Wiley Publishing Asia Pty Ltd.
Protein local structure alignment under the discrete Fréchet distance.
Zhu, Binhai
2007-12-01
Protein structure alignment is a fundamental problem in computational and structural biology. While there have been many experimental/heuristic methods and empirical results, very few results are known regarding the algorithmic/complexity aspects of the problem, especially for protein local structure alignment. A well-known measure characterizing the similarity of two polygonal chains is the Fréchet distance, and a related discrete Fréchet distance has recently been used in protein-related research. In this paper, following the recent work of Jiang et al., we investigate the protein local structural alignment problem under the bounded discrete Fréchet distance. Given m proteins (or protein backbones, which are 3D polygonal chains), each of length O(n), our main results are summarized as follows: * If the number of proteins, m, is part of the input, then the problem is NP-complete; moreover, under the bounded discrete Fréchet distance it is NP-hard to approximate the maximum size of a common local structure within a factor of n^(1-ε). These results hold both when all the proteins are static and when translation/rotation are allowed. * If the number of proteins, m, is a constant, then there is a polynomial-time solution for the problem.
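The discrete Fréchet distance between two point sequences can be computed with the classic O(|P||Q|) dynamic program of Eiter and Mannila. The sketch below illustrates the distance measure itself, not the paper's alignment algorithm, and the two flat chains at the bottom are an invented example:

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal chains (point sequences),
    via the Eiter-Mannila dynamic program over prefix pairs."""
    def d(p, q):
        return math.dist(p, q)

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j) = coupling distance between prefixes P[:i+1] and Q[:j+1]
        if i == 0 and j == 0:
            return d(P[0], Q[0])
        if i == 0:
            return max(c(0, j - 1), d(P[0], Q[j]))
        if j == 0:
            return max(c(i - 1, 0), d(P[i], Q[0]))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(P[i], Q[j]))

    return c(len(P) - 1, len(Q) - 1)

P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0: the chains stay one unit apart
```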
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-01-01
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is rate adaptive (RA) based resource allocation with proportional fairness constraints. Since resource allocation is an NP-hard, non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. First, the ant colony optimization algorithm solves the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that, for each user, the sum of the power and the reciprocal of the channel-to-noise ratio is equal across subchannels. Extensive simulation results support the approach: in contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
Dixon-Gordon, Katherine L; Whalen, Diana J; Scott, Lori N; Cummins, Nicole D; Stepp, Stephanie D
2016-06-01
The transaction of adolescents' expressed negative affect and parental interpersonal emotion regulation is theoretically implicated in the development of borderline personality disorder (BPD). Although problem solving and support/validation are interpersonal strategies that foster emotion regulation, little is known about whether these strategies are associated with less BPD severity among adolescents. Adolescent girls (age 16; N = 74) and their mothers completed a conflict discussion task, and maternal problem solving, support/validation, and girls' negative affect were coded. Girls' BPD symptoms were assessed at four time points. A 3-way interaction of girls' negative affect, problem solving, and support/validation indicated that girls' negative affect was only associated with BPD severity in the context of low maternal support/validation and high maternal problem solving. These variables did not predict changes in BPD symptoms over time. Although high negative affect is a risk for BPD severity in adolescent girls, maternal interpersonal emotion regulation strategies moderate this link. Whereas maternal problem solving coupled with low support/validation is associated with a stronger negative affect-BPD relation, maternal problem solving paired with high support/validation is associated with an attenuated relationship.
Dense Subgraphs with Restrictions and Applications to Gene Annotation Graphs
NASA Astrophysics Data System (ADS)
Saha, Barna; Hoch, Allison; Khuller, Samir; Raschid, Louiqa; Zhang, Xiao-Ning
In this paper, we focus on finding complex annotation patterns representing novel and interesting hypotheses from gene annotation data. We define a generalization of the densest subgraph problem by adding an additional distance restriction (defined by a separate metric) to the nodes of the subgraph. We show that while this generalization makes the problem NP-hard for arbitrary metrics, when the metric comes from the distance metric of a tree, or an interval graph, the problem can be solved optimally in polynomial time. We also show that the densest subgraph problem with a specified subset of vertices that have to be included in the solution can be solved optimally in polynomial time. In addition, we consider other extensions when not just one solution needs to be found, but we wish to list all subgraphs of almost maximum density as well. We apply this method to a dataset of genes and their annotations obtained from The Arabidopsis Information Resource (TAIR). A user evaluation confirms that the patterns found in the distance restricted densest subgraph for a dataset of photomorphogenesis genes are indeed validated in the literature; a control dataset validates that these are not random patterns. Interestingly, the complex annotation patterns potentially lead to new and as yet unknown hypotheses. We perform experiments to determine the properties of the dense subgraphs, as we vary parameters, including the number of genes and the distance.
An evolutionary strategy based on partial imitation for solving optimization problems
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2016-12-01
In this work we introduce an evolutionary strategy for solving combinatorial optimization tasks, i.e. problems characterized by a discrete search space. In particular, we focus on the Traveling Salesman Problem (TSP), a famous NP-hard problem whose search space grows exponentially with the number of cities. Solutions of the TSP can be encoded as arrays of cities and evaluated by a fitness computed according to a cost function (e.g. the length of a path). Our method is based on the evolution of an agent population by means of an imitative mechanism we call 'partial imitation'. In particular, agents receive a random solution and then, interacting among themselves, may imitate the solutions of agents with a higher fitness. Since the imitation mechanism is only partial, agents copy only one entry (randomly chosen) of another array (i.e. solution). In doing so, the population converges towards a shared solution, behaving like a spin system undergoing a cooling process, i.e. driven towards an ordered phase. We highlight that the adopted 'partial imitation' mechanism allows the population to generate new solutions over time, before reaching the final equilibrium. Results of numerical simulations show that our method is able to find, in a finite time, both optimal and suboptimal solutions, depending on the size of the considered search space.
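A minimal sketch of the partial-imitation dynamics is given below, assuming one concrete interpretation of "copy one entry": the imitating agent places the copied city at that position by a swap, which keeps its tour a valid permutation. The population size, number of rounds, and the swap-repair rule are all illustrative assumptions, not the paper's specification:

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def partial_imitation(pts, agents=30, rounds=3000, seed=1):
    """Population of agents, each holding a candidate TSP tour; in every round
    a random pair meets and the less fit agent copies one city position from
    the fitter one (repaired by a swap). Returns the best tour found."""
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(agents)]
    for _ in range(rounds):
        a, b = rng.randrange(agents), rng.randrange(agents)
        if tour_length(pop[a], pts) <= tour_length(pop[b], pts):
            a, b = b, a  # agent a (worse) imitates the fitter agent b
        i = rng.randrange(n)
        city = pop[b][i]
        j = pop[a].index(city)
        pop[a][i], pop[a][j] = pop[a][j], pop[a][i]  # swap keeps a valid permutation
    return min(pop, key=lambda t: tour_length(t, pts))

# Four corners of a unit square: the optimal tour (the perimeter) has length 4.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
best = partial_imitation(pts)
print(round(tour_length(best, pts), 2))
```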
White, Worawan; Grant, Joan S; Pryor, Erica R; Keltner, Norman L; Vance, David E; Raper, James L
2012-01-01
Social support, stigma, and social problem solving may be mediators of the relationship between sign and symptom severity and depressive symptoms in people living with HIV (PLWH). However, no published studies have examined these individual variables as mediators in PLWH. This cross-sectional, correlational study of 150 PLWH examined whether social support, stigma, and social problem solving were mediators of the relationship between HIV-related sign and symptom severity and depressive symptoms. Participants completed self-report questionnaires during their visits at two HIV outpatient clinics in the Southeastern United States. Using multiple regression analyses as a part of mediation testing, social support, stigma, and social problem solving were found to be partial mediators of the relationship between sign and symptom severity and depressive symptoms, considered individually and as a set.
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
NASA Astrophysics Data System (ADS)
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, the application of multi-objective dynamic scheduling problems and simple priority rules is widely studied. Although simple priority rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform well because they are derived from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objectives are the minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it, by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. The experimental results show that the composite dispatching rules formed by genetic programming outperform the others in minimizing mean flow time and mean tardiness.
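To make the notion of a dispatching rule concrete, the toy single-machine sketch below compares two simple priority rules on mean flow time. It is far simpler than the paper's flexible flow line with sequence-dependent setups, and the job data are invented; shortest-processing-time-first (SPT) is a classical rule known to minimize mean flow time on one machine:

```python
def mean_flow_time(jobs, key):
    """Dispatch jobs (processing times) on one machine in priority order given
    by `key`; return the mean flow time (all jobs released at time 0)."""
    t, flows = 0, []
    for p in sorted(jobs, key=key):
        t += p            # the machine finishes this job at time t
        flows.append(t)   # flow time = completion time with release at 0
    return sum(flows) / len(flows)

jobs = [4, 1, 3, 2]
spt = mean_flow_time(jobs, key=lambda p: p)   # shortest processing time first
fifo = mean_flow_time(jobs, key=lambda p: 0)  # stable sort: keep listed order
print(spt, fifo)  # 5.0 6.75
```

Composite rules of the kind the paper evolves would replace `key` with a learned expression combining processing time, setup time, and due-date information.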
The Ebola Virus VP30-NP Interaction Is a Regulator of Viral RNA Synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchdoerfer, Robert N.; Moyer, Crystal L.; Abelson, Dafna M.
Filoviruses are capable of causing deadly hemorrhagic fevers. All nonsegmented negative-sense RNA-virus nucleocapsids are composed of a nucleoprotein (NP), a phosphoprotein (VP35) and a polymerase (L). However, the VP30 RNA-synthesis co-factor is unique to the filoviruses. The assembly, structure, and function of the filovirus RNA replication complex remain unclear. Here, we have characterized the interactions of Ebola, Sudan and Marburg virus VP30 with NP using in vitro biochemistry, structural biology and cell-based mini-replicon assays. We have found that the VP30 C-terminal domain interacts with a short peptide in the C-terminal region of NP. Further, we have solved crystal structures of the VP30-NP complex for both Ebola and Marburg viruses. These structures reveal that a conserved, proline-rich NP peptide binds a shallow hydrophobic cleft on the VP30 C-terminal domain. Structure-guided Ebola virus VP30 mutants have altered affinities for the NP peptide. Correlation of these VP30-NP affinities with the activity for each of these mutants in a cell-based mini-replicon assay suggests that the VP30-NP interaction plays both essential and inhibitory roles in Ebola virus RNA synthesis.
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
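A classic illustration of the bounded-search-tree technique that such surveys cover is the O(2^k) branching algorithm for Vertex Cover (a standard textbook example, not code from this survey): every cover must contain an endpoint of each edge, so branching on the two endpoints of an uncovered edge yields a search tree of depth at most k.

```python
def vertex_cover(edges, k):
    """Return True if the graph has a vertex cover of size <= k.

    Classic O(2^k) bounded search tree: pick any uncovered edge (u, v);
    any cover must contain u or v, so branch on both choices.
    """
    if not edges:
        return True          # every edge is covered
    if k == 0:
        return False         # edges remain but no budget left
    u, v = edges[0]
    # Branch 1: put u in the cover; drop all edges incident to u.
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v in the cover; drop all edges incident to v.
    rest_v = [e for e in edges if v not in e]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

# A 4-cycle needs 2 vertices to cover all edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover(cycle, 1))  # False
print(vertex_cover(cycle, 2))  # True
```

The running time depends exponentially only on the parameter k, not on the graph size — the hallmark of fixed-parameter tractability the survey describes.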
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
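The implicit enumeration method that ALPS applies to binary programs can be sketched as follows — a minimal toy illustration, not ALPS source code, assuming nonnegative constraint coefficients so a violated partial assignment can be pruned immediately:

```python
def solve_binary(c, A, b):
    """Maximize c.x subject to A.x <= b with x binary, by implicit
    enumeration: fix variables one at a time, pruning a branch when the
    partial objective plus an optimistic bound cannot beat the incumbent,
    or when a constraint is already violated (assumes A >= 0)."""
    n = len(c)
    best = [float("-inf"), None]  # [best objective, best assignment]

    def recurse(i, x, obj, used):
        # Optimistic bound: take every remaining positive coefficient.
        bound = obj + sum(cj for cj in c[i:] if cj > 0)
        if bound <= best[0]:
            return                # cannot beat the incumbent: prune
        if i == n:
            best[0], best[1] = obj, x[:]
            return
        for xi in (1, 0):         # try setting the variable first
            new_used = [u + A[r][i] * xi for r, u in enumerate(used)]
            if all(u <= br for u, br in zip(new_used, b)):
                x.append(xi)
                recurse(i + 1, x, obj + c[i] * xi, new_used)
                x.pop()

    recurse(0, [], 0, [0] * len(b))
    return best[0], best[1]

# Small knapsack-style instance: maximize 5x0 + 4x1 + 3x2
# subject to 2x0 + 3x1 + x2 <= 4.
value, x = solve_binary([5, 4, 3], [[2, 3, 1]], [4])
print(value, x)  # 8 [1, 0, 1]
```

The "implicit" part is that most of the 2^n assignments are never visited: the bound and feasibility tests discard them wholesale.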
Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems
NASA Astrophysics Data System (ADS)
Wang, Shaowei; Ji, Xiaoyong
Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search (GLS) algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the particular characteristics of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms that of other multiuser detectors in all cases discussed. The computation time is polynomial in the number of users.
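The overall shape of such a genetic local search — an evolution-strategy shell that perturbs a locally optimal parent and re-optimizes each child — can be sketched on a toy OMD-style objective. This is a hedged illustration, not the authors' algorithm: the instance numbers are invented, and the local move is a plain single-bit flip rather than the paper's specialized iterated local search.

```python
import random

def omd_objective(b, y, H):
    """Standard OMD likelihood metric 2*b.y - b.H.b (to be maximized);
    y plays the role of matched-filter outputs, H of signature correlations."""
    K = len(b)
    quad = sum(b[i] * H[i][j] * b[j] for i in range(K) for j in range(K))
    return 2 * sum(bi * yi for bi, yi in zip(b, y)) - quad

def local_search(b, y, H):
    """Flip single bits greedily until no flip improves the objective."""
    b = list(b)
    best = omd_objective(b, y, H)
    improved = True
    while improved:
        improved = False
        for i in range(len(b)):
            b[i] = -b[i]
            f = omd_objective(b, y, H)
            if f > best:
                best, improved = f, True
            else:
                b[i] = -b[i]  # undo: the flip did not help
    return b, best

def genetic_local_search(y, H, generations=20, children=8, seed=0):
    """Evolution-strategy shell around local search: mutate a locally
    optimal parent, re-optimize each child, keep the best solution found."""
    rng = random.Random(seed)
    K = len(y)
    parent, best = local_search([rng.choice((-1, 1)) for _ in range(K)], y, H)
    for _ in range(generations):
        for _ in range(children):
            mutant = [-bi if rng.random() < 0.3 else bi for bi in parent]
            child, f = local_search(mutant, y, H)
            if f > best:
                parent, best = child, f
    return parent, best

# Tiny hypothetical 4-user instance, for illustration only.
y = [3.1, -2.0, 2.5, -1.2]
H = [[4, 1, 0, 1], [1, 4, 1, 0], [0, 1, 4, 1], [1, 0, 1, 4]]
b, f = genetic_local_search(y, H)
```

The key property the abstract highlights is preserved: every candidate the outer loop evaluates is itself a local optimum, so the evolution strategy only ever searches the space of locally optimal solutions.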
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai
2014-08-01
This article investigates the open-shop scheduling problem under the optimality criterion of minimising the sum of quadratic completion times. For this NP-hard problem, the asymptotic optimality of the shortest-processing-time-block (SPTB) heuristic is proven in the limit. Moreover, three different improvements, namely the job-insert scheme, tabu search and a genetic algorithm, are introduced to enhance the quality of the original solution generated by the SPTB heuristic. At the end of the article, a series of numerical experiments demonstrates the convergence of the heuristic, the performance of the improvements and the effectiveness of the quadratic objective.
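The intuition behind shortest-processing-time ordering for the quadratic objective can already be seen on a single machine (a simplified illustration; the paper's SPTB heuristic extends the idea to open-shop blocks): an adjacent-interchange argument shows that running shorter jobs first minimises the sum of squared completion times.

```python
from itertools import permutations

def sum_sq_completion(proc_times):
    """Sum of squared completion times for jobs run in the given order
    on a single machine."""
    t = total = 0
    for p in proc_times:
        t += p            # completion time of this job
        total += t * t
    return total

jobs = [5, 1, 3, 2]
spt = sorted(jobs)  # shortest processing time first: [1, 2, 3, 5]
print(sum_sq_completion(spt))                         # 1 + 9 + 36 + 121 = 167
print(sum_sq_completion(sorted(jobs, reverse=True)))  # 25 + 64 + 100 + 121 = 310
```

Swapping any adjacent pair with the longer job first leaves later completion times unchanged but inflates the earlier squared term, which is why the SPT order is optimal for this single-machine special case.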
ERIC Educational Resources Information Center
Masson, J. D.; Dagnan, D.; Evans, J.
2010-01-01
Background: There is a need for validated, standardised tools for the assessment of executive functions in adults with intellectual disabilities (ID). This study examines the validity of a test of planning and problem solving (Tower of London) with adults with ID. Method: Participants completed an adapted version of the Tower of London (ToL) while…
ERIC Educational Resources Information Center
Oliver, Renee; Williams, Robert L.
2006-01-01
Three contingency conditions were applied to the math performance of 4th and 5th graders: bonus credit for accurately solving math problems, bonus credit for completing math problems, and no bonus credit for accurately answering or completing math problems. Mixed ANOVAs were used in tracking the performance of high, medium, and low performers…
Boda, Dezső; Gillespie, Dirk
2012-03-13
We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
Computing Smallest Intervention Strategies for Multiple Metabolic Networks in a Boolean Model
Lu, Wei; Song, Jiangning; Akutsu, Tatsuya
2015-01-01
Abstract This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbation advance in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as harmful and beneficial bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online. PMID:25684199
Development and Validation of the Diabetes Adolescent Problem Solving Questionnaire
Mulvaney, Shelagh A.; Jaser, Sarah S.; Rothman, Russell L.; Russell, William; Pittel, Eric J.; Lybarger, Cindy; Wallston, Kenneth A.
2014-01-01
Objective Problem solving is a critical diabetes self-management skill. Because of a lack of clinically feasible measures, our aim was to develop and validate a self-report self-management problem solving questionnaire for adolescents with type 1 diabetes (T1D). Methods A multidisciplinary team of diabetes experts generated questionnaire items that addressed diabetes self-management problem solving. Iterative feedback from parents and adolescents resulted in 27 items. Adolescents from two studies (N=156) aged 13–17 were recruited through a pediatric diabetes clinic and completed measures through an online survey. Glycemic control was measured by HbA1c recorded in the medical record. Results Empirical elimination of items using Principal Components Analyses resulted in a 13-item unidimensional measure, the Diabetes Adolescent Problem Solving Questionnaire (DAPSQ), that explained 57% of the variance. The DAPSQ demonstrated internal consistency (Cronbach’s alpha = 0.92) and was correlated with diabetes self-management (r=0.53, p<.001), self-efficacy (r=0.54, p<.001), and glycemic control (r= −0.24, p<.01). Conclusion The DAPSQ is a brief instrument for assessment of diabetes self-management problem solving in youth with T1D and is associated with better self-management behaviors and glycemic control. Practice Implications The DAPSQ is a clinically feasible self-report measure that can provide valuable information regarding level of self-management problem solving and guide patient education. PMID:25063715
Development and validation of the diabetes adolescent problem solving questionnaire.
Mulvaney, Shelagh A; Jaser, Sarah S; Rothman, Russell L; Russell, William E; Pittel, Eric J; Lybarger, Cindy; Wallston, Kenneth A
2014-10-01
Problem solving is a critical diabetes self-management skill. Because of a lack of clinically feasible measures, our aim was to develop and validate a self-report self-management problem solving questionnaire for adolescents with type 1 diabetes (T1D). A multidisciplinary team of diabetes experts generated questionnaire items that addressed diabetes self-management problem solving. Iterative feedback from parents and adolescents resulted in 27 items. Adolescents from two studies (N=156) aged 13-17 were recruited through a pediatric diabetes clinic and completed measures through an online survey. Glycemic control was measured by HbA1c recorded in the medical record. Empirical elimination of items using principal components analyses resulted in a 13-item unidimensional measure, the diabetes adolescent problem solving questionnaire (DAPSQ) that explained 56% of the variance. The DAPSQ demonstrated internal consistency (Cronbach's alpha=0.92) and was correlated with diabetes self-management (r=0.53, p<.001), self-efficacy (r=0.54, p<.001), and glycemic control (r=-0.24, p<.01). The DAPSQ is a brief instrument for assessment of diabetes self-management problem solving in youth with T1D and is associated with better self-management behaviors and glycemic control. The DAPSQ is a clinically feasible self-report measure that can provide valuable information regarding level of self-management problem solving and guide patient education. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mujiasih; Waluya, S. B.; Kartono; Mariani
2018-03-01
Skill in solving geometry problems depends heavily on competence in geometric reasoning. As teacher candidates, State Islamic University (UIN) students need this geometric reasoning competence. When geometric reasoning in solving geometry problems has developed well, students are expected to be able to write their ideas communicatively for the reader. A student's mathematical communication ability can therefore serve as a marker of the growth of their geometric reasoning. Thus, the growth of geometric reasoning in solving analytic geometry problems should be characterized by growth in mathematical communication ability, particularly written work that is complete, correct, and sequential. Using a qualitative approach, this article reports a study that explored the question: can the growth of geometric reasoning in solving analytic geometry problems be characterized by the growth of mathematical communication ability? The main activities in this research were: (1) the lecturer trained the students on analytic geometry problems that were not routine or algorithmic but instead required high-level reasoning and were divergent/open-ended; (2) students were asked to work the problems independently, in detail, completely, in order, and correctly; (3) student answers were then marked at each stage; (4) six students were selected as research subjects; (5) the subjects were interviewed and the researchers conducted triangulation. The results of this research were: (1) mathematics education students at UIN Semarang had adequate mathematical communication ability; (2) this mathematical communication ability can serve as a marker of geometric reasoning in problem solving; and (3) the geometric reasoning of the UIN students had grown into a category that tends to be good.
Single product lot-sizing on unrelated parallel machines with non-decreasing processing times
NASA Astrophysics Data System (ADS)
Eremeev, A.; Kovalyov, M.; Kuznetsov, P.
2018-01-01
We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and lots have to be assigned to unrelated parallel machines for processing. In one version of the problem the maximum machine completion time is to be minimized; in another, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small, and therefore, they are not considered. We derive optimal polynomial time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial time approximation scheme. An application of the problem in energy efficient processor scheduling is considered.
An episodic specificity induction enhances means-end problem solving in young and older adults.
Madore, Kevin P; Schacter, Daniel L
2014-12-01
Episodic memory plays an important role not only in remembering past experiences, but also in constructing simulations of future experiences and solving means-end social problems. We recently found that an episodic specificity induction-brief training in recollecting details of past experiences-enhances performance of young and older adults on memory and imagination tasks. Here we tested the hypothesis that this specificity induction would also positively impact a means-end problem-solving task on which age-related changes have been linked to impaired episodic memory. Young and older adults received the specificity induction or a control induction before completing a means-end problem-solving task, as well as memory and imagination tasks. Consistent with previous findings, older adults provided fewer relevant steps on problem solving than did young adults, and their responses also contained fewer internal (i.e., episodic) details across the 3 tasks. There was no difference in the number of other (e.g., irrelevant) steps on problem solving or external (i.e., semantic) details generated on the 3 tasks as a function of age. Critically, the specificity induction increased the number of relevant steps and internal details (but not other steps or external details) that both young and older adults generated in problem solving compared with the control induction, as well as the number of internal details (but not external details) generated for memory and imagination. Our findings support the idea that episodic retrieval processes are involved in means-end problem solving, extend the range of tasks on which a specificity induction targets these processes, and show that the problem-solving performance of older adults can benefit from a specificity induction as much as that of young adults. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
An episodic specificity induction enhances means-end problem solving in young and older adults
Madore, Kevin P.; Schacter, Daniel L.
2014-01-01
Episodic memory plays an important role not only in remembering past experiences, but also in constructing simulations of future experiences and solving means-end social problems. We recently found that an episodic specificity induction- brief training in recollecting details of past experiences- enhances performance of young and older adults on memory and imagination tasks. Here we tested the hypothesis that this specificity induction would also positively impact a means-end problem solving task on which age-related changes have been linked to impaired episodic memory. Young and older adults received the specificity induction or a control induction before completing a means-end problem solving task as well as memory and imagination tasks. Consistent with previous findings, older adults provided fewer relevant steps on problem solving than did young adults, and their responses also contained fewer internal (i.e., episodic) details across the three tasks. There was no difference in the number of other (e.g., irrelevant) steps on problem solving or external (i.e., semantic) details generated on the three tasks as a function of age. Critically, the specificity induction increased the number of relevant steps and internal details (but not other steps or external details) that both young and older adults generated in problem solving compared with the control induction, as well as the number of internal details (but not external details) generated for memory and imagination. Our findings support the idea that episodic retrieval processes are involved in means-end problem solving, extend the range of tasks on which a specificity induction targets these processes, and show that the problem solving performance of older adults can benefit from a specificity induction as much as that of young adults. PMID:25365688
Problem-solving style and adaptation in breast cancer survivors: a prospective analysis.
Heppner, P Paul; Armer, Jane M; Mallinckrodt, Brent
2009-06-01
Emotional care of the breast cancer patient is not well understood; this lack of understanding carries a high cost both to the patient and to the health care system. This study examined the role of problem-solving style as a predictor of emotional distress, adjustment to breast cancer, and physical function immediately post-surgery and 12 months later. The sample consisted of 121 women diagnosed with breast cancer and undergoing surgery as a primary treatment. The survivors completed a measure of problem-solving style and three outcome measures immediately post-surgery and again 1 year later. There was a 95.6% retention rate at 1 year. Multiple hierarchical regressions revealed that, after controlling for patient demographics and stage of cancer, problem-solving style (particularly personal control) was associated with emotional distress, adjustment to chronic illness, and physical function immediately following surgical intervention. In addition, a more positive problem-solving style was associated with less emotional distress, but not with better adaptation to chronic illness or physical functioning, 12 months later; personal control was again the single best predictor of emotional distress, accounting for an additional 10% of the variance in this outcome. Post-surgery assessment may help identify those in need of problem-solving training to improve these outcomes at 1 year. Future studies need to determine the impact of interventions tailored to levels of problem-solving style in cancer survivors over time. The role of problem-solving style in breast cancer survivors deserves attention, as it is associated with emotional distress both immediately and one year after medical intervention. Problem-solving style should be evaluated early, and interventions established for those most at risk for emotional distress.
Multigrid solution strategies for adaptive meshing problems
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1995-01-01
This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
Jenkins, Mark C; Stevens, Laura; O'Brien, Celia; Parker, Carolyn; Miska, Katrzyna; Konjufca, Vjollca
2018-02-14
The purpose of this study was to determine if conjugating a recombinant Eimeria maxima protein, namely EmaxIMP1, to 20 nm polystyrene nanoparticles (NP) could improve the level of protective immunity against E. maxima challenge infection. Recombinant EmaxIMP1 was expressed in Escherichia coli as a poly-His fusion protein, purified by NiNTA chromatography, and conjugated to 20 nm polystyrene NP (NP-EmaxIMP1). NP-EmaxIMP1 or control non-recombinant protein (NP-NR) was delivered per os to newly-hatched broiler chicks with subsequent booster immunizations at 3 and 21 days of age. In battery cage studies (n = 4), chickens immunized with NP-EmaxIMP1 displayed complete protection, as measured by weight gain (WG), against E. maxima challenge compared to chickens immunized with NP-NR. WG in the NP-EmaxIMP1-immunized groups was identical to WG in chickens that were not infected with E. maxima. In floor pen studies (n = 2), chickens immunized with NP-EmaxIMP1 displayed partial protection, as measured by WG, against E. maxima challenge compared to chickens immunized with NP-NR. In order to understand the basis for immune stimulation, newly-hatched chicks were inoculated per os with NP-EmaxIMP1 or NP-NR protein, and the small intestine, bursa, and spleen were examined for NP localization at 1 h and 6 h post-inoculation. Within 1 h, both NP-EmaxIMP1 and NP-NR were observed in all 3 tissues. An increase in the level of NP-EmaxIMP1 and NP-NR was observed in all tissues at 6 h post-inoculation. These data indicate that 20 nm NP-EmaxIMP1 and NP-NR reached deeper tissues within hours of oral inoculation and that NP-EmaxIMP1 elicited complete to partial immunity against E. maxima challenge infection. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Fujisawa, Keiko K.; Yamagata, Shinji; Ozaki, Koken; Ando, Juko
2012-01-01
This study investigated the association between negative parenting (NP) and conduct problems (CP) in 6-year-old twins, taking into account the severity of hyperactivity/inattention problems (HIAP). Analyses of the data from 1,677 pairs of twins and their parents revealed that the shared environmental covariance between NP and CP was moderated by…
A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
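The cascade strategy described here — several optimizers run in a specified sequence, with pseudorandom perturbation of the design variables between stages — can be sketched as follows. This is a minimal illustration on a toy quadratic objective with two simple improving-move optimizers standing in for the paper's ten nonlinear programming codes; none of it is the authors' implementation.

```python
import random

def coordinate_descent(f, x, step=0.1, sweeps=200):
    """Axis-by-axis fixed-step search; accepts only improving moves."""
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:i] + [x[i] + d] + x[i + 1:]
                if f(trial) < f(x):
                    x = trial
    return x

def random_search(f, x, scale=0.5, iters=500, rng=None):
    """Gaussian trial moves; accepts only improving moves."""
    rng = rng or random.Random(0)
    x = list(x)
    for _ in range(iters):
        trial = [xi + rng.gauss(0, scale) for xi in x]
        if f(trial) < f(x):
            x = trial
    return x

def cascade(f, x0, optimizers, rng=None):
    """Cascade strategy: run each optimizer in sequence, applying a small
    pseudorandom perturbation to the design variables between stages so
    the next optimizer starts from a slightly shifted point."""
    rng = rng or random.Random(1)
    x = list(x0)
    for k, opt in enumerate(optimizers):
        x = opt(f, x)
        if k < len(optimizers) - 1:  # jitter between stages only
            x = [xi + rng.uniform(-0.05, 0.05) for xi in x]
    return x

# Toy stand-in for the aircraft/engine design objectives in the paper.
quadratic = lambda v: sum(vi * vi for vi in v)
x = cascade(quadratic, [3.0, -2.0],
            [random_search, coordinate_descent, random_search])
```

The inter-stage perturbation is what lets a later optimizer escape the point where an earlier one stalled, which is the paper's remedy for problems no single optimizer could solve.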
Risk factors for readmission after neonatal cardiac surgery.
Mackie, Andrew S; Gauvreau, Kimberlee; Newburger, Jane W; Mayer, John E; Erickson, Lars C
2004-12-01
Repeat hospitalizations place a significant burden on health care resources. Factors predisposing infants to unplanned hospital readmission after congenital heart surgery are unknown. This is a single-center, case-control study. Cases were rehospitalized or died within 30 days of discharge following an arterial switch operation (ASO) or Norwood procedure (NP) between 1992 and 2002. Controls underwent an ASO or NP between 1992 and 2002, and were neither readmitted nor died within 30 days of discharge. Patients and controls were matched by gender, year of birth, and procedure. Potential risk factors examined included indices of medical status at the time of discharge, determinants of access to health care, and provider characteristics. Forty-eight patients were readmitted; 19 of 498 (3.8%) following an ASO and 29 of 254 (11.4%) after a NP (p < 0.001). Six infants died within 30 days of discharge; 1 after an ASO and 5 after a NP. In multivariate analysis, predictors of readmission or death were: residual hemodynamic problem(s) (odds ratio [OR] 4.10 [1.18, 14.3], p = 0.026); an intensive care unit stay greater than 7 days (OR 5.17 [1.12, 23.9] p = 0.035) (ASO); residual hemodynamic problem(s) (OR 5.84 [1.98, 17.2], p = 0.001); and establishment of full oral intake less than 2 days before discharge (OR 5.83 [1.83, 18.6], p = 0.003) (NP). Combining both groups, living in a low income Zip Code (< 30,000 dollars/annum) was associated with a lower likelihood of readmission (OR 0.25 [0.07, 0.85], p = 0.027). Residual hemodynamic problem(s) predispose to hospital readmission after the ASO and NP. Low socioeconomic status may reduce the likelihood of readmission even when problems arise.
Simulated annealing algorithm for solving chambering student-case assignment problem
NASA Astrophysics Data System (ADS)
Ghazali, Saadiah; Abdul-Rahman, Syariza
2015-12-01
Project assignment problems are popular practical problems that frequently arise nowadays. Solving them becomes more challenging as preferences, real-world constraints, and problem size grow. This study focuses on solving a chambering student-case assignment problem, classified as a project assignment problem, using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because such methods can return a good solution in reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. In this setting, law graduates must undergo chambering before they are qualified to become legal counsel, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this article presents a preliminary study of the proposed project assignment problem. The objective of the study is to minimize the total completion time for all students in solving the given cases. The study employs a minimum-cost greedy heuristic to construct a feasible initial solution; the search then proceeds with a simulated annealing algorithm to further improve solution quality. Analysis of the obtained results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. Hence, this research demonstrates the advantages of solving the project assignment problem with metaheuristic techniques.
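A generic simulated annealing loop of the kind used here can be sketched on a toy student-case instance. This is a hedged illustration, not the paper's model: the case workloads are invented, and the cost is a simple max-load proxy for completion time rather than the paper's preference-laden objective.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.995,
                        steps=2000, rng=None):
    """Generic SA: always accept improvements; accept worse moves with
    probability exp(-delta/T); cool T geometrically; track the best seen."""
    rng = rng or random.Random(42)
    x, fx = x0, cost(x0)
    best, fbest = list(x0), fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Hypothetical instance: 8 cases with given workloads, 3 students.
case_hours = [5, 3, 8, 2, 7, 4, 1, 6]
n_students = 3

def cost(assign):
    # Illustrative cost: the most loaded student's total hours, a proxy
    # for completion time (the paper's exact model is richer).
    loads = [0] * n_students
    for case, student in enumerate(assign):
        loads[student] += case_hours[case]
    return max(loads)

def neighbor(assign, rng):
    y = list(assign)                          # reassign one random case
    y[rng.randrange(len(y))] = rng.randrange(n_students)
    return y

best, value = simulated_annealing(cost, neighbor, [0] * len(case_hours))
# A perfectly balanced split exists for this instance:
# {8,3,1}, {7,5}, {6,4,2} -> 12 hours per student.
```

In the paper's pipeline, the greedy minimum-cost heuristic would supply `x0` instead of the all-zeros start used here.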
Learning optimal quantum models is NP-hard
NASA Astrophysics Data System (ADS)
Stark, Cyril J.
2018-02-01
Physical modeling translates measured data into a physical model. Physical modeling is a major objective in physics and is generally regarded as a creative process. How good are computers at solving this task? Here, we show that in the absence of physical heuristics, the inference of optimal quantum models cannot be computed efficiently (unless P=NP ). This result illuminates rigorous limits to the extent to which computers can be used to further our understanding of nature.
Taboo search algorithm for item assignment in synchronized zone automated order picking system
NASA Astrophysics Data System (ADS)
Wu, Yingying; Wu, Yaohua
2014-07-01
The idle time, which is part of the order fulfillment time, is determined by the number of items in each zone; the item assignment method therefore affects picking efficiency. Previous studies, however, focus only on balancing the number of kinds of items between zones, not the number of items and the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective can be simplified from minimizing idle time to minimizing the idle factor. On this basis, a model of the item assignment problem in a synchronized zone automated order picking system is built. The model is a relaxation of the parallel machine scheduling problem, which has been proven NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is to minimize the greatest idle factor among zones using a 2-exchange algorithm. Finally, a simulation using data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The results verify the model and show that the algorithm steadily reduces idle time, by 45.63% on average. This research proposes an approach to measuring idle time in synchronized zone automated order picking systems; the approach can improve picking efficiency significantly and can serve as a theoretical basis for optimizing such systems.
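The tabu search pattern behind such an algorithm can be sketched on a toy zone-balancing instance. This is an illustrative stand-in: it minimizes the greatest zone load rather than the paper's idle factor, uses single-item moves instead of full 2-exchanges, and assumes distinct item values so they can serve as tabu keys.

```python
import random

def tabu_balance(zones, iters=200, tenure=5, rng=None):
    """Tabu-search sketch: repeatedly move an item out of the most loaded
    zone into the least loaded one, forbidding recently moved items for
    `tenure` iterations, and remember the best configuration seen."""
    rng = rng or random.Random(7)
    zones = [list(z) for z in zones]
    tabu = {}  # item -> first iteration at which moving it is allowed again
    best_zones = [list(z) for z in zones]
    best = max(sum(z) for z in zones)
    for it in range(iters):
        loads = [sum(z) for z in zones]
        hi = loads.index(max(loads))
        lo = loads.index(min(loads))
        admissible = [x for x in zones[hi] if tabu.get(x, 0) <= it]
        if hi == lo or not admissible:
            continue
        item = rng.choice(admissible)      # move a non-tabu item hi -> lo
        zones[hi].remove(item)
        zones[lo].append(item)
        tabu[item] = it + tenure
        m = max(sum(z) for z in zones)
        if m < best:
            best = m
            best_zones = [list(z) for z in zones]
    return best_zones, best

# Hypothetical assignment: badly unbalanced start, 27 item-units in total.
zones, peak = tabu_balance([[9, 7, 5, 3], [2], [1]])
# Perfect balance for this instance is 9 units per zone.
```

The tabu tenure is what stops the search from cycling an item back and forth between the same two zones, letting it explore past local minima that a plain greedy exchange would get stuck in.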
Algorithms for Automatic Alignment of Arrays
NASA Technical Reports Server (NTRS)
Chatterjee, Siddhartha; Gilbert, John R.; Oliker, Leonid; Schreiber, Robert; Sheffler, Thomas J.
1996-01-01
Aggregate data objects (such as arrays) are distributed across the processor memories when compiling a data-parallel language for a distributed-memory machine. The mapping determines the amount of communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: an alignment that maps all the objects to an abstract template, followed by a distribution that maps the template to the processors. This paper describes algorithms for solving the various facets of the alignment problem: axis and stride alignment, static and mobile offset alignment, and replication labeling. We show that optimal axis and stride alignment is NP-complete for general program graphs, and give a heuristic method that can explore the space of possible solutions in a number of ways. We show that some of these strategies can give better solutions than a simple greedy approach proposed earlier. We also show how local graph contractions can reduce the size of the problem significantly without changing the best solution. This allows more complex and effective heuristics to be used. We show how to model the static offset alignment problem using linear programming, and we show that loop-dependent mobile offset alignment is sometimes necessary for optimum performance. We describe an algorithm for determining mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself or can be used to improve performance. We describe an algorithm based on network flow that replicates objects so as to minimize the total amount of broadcast communication in replication.
Ridout, Nathan; Matharu, Munveen; Sanders, Elizabeth; Wallis, Deborah J
2015-08-30
The primary aim was to examine the influence of subclinical disordered eating on autobiographical memory specificity (AMS) and social problem solving (SPS). A further aim was to establish if AMS mediated the relationship between eating psychopathology and SPS. A non-clinical sample of 52 females completed the autobiographical memory test (AMT), where they were asked to retrieve specific memories of events from their past in response to cue words, and the means-end problem-solving task (MEPS), where they were asked to generate means of solving a series of social problems. Participants also completed the Eating Disorders Inventory (EDI) and Hospital Anxiety and Depression Scale. After controlling for mood, high scores on the EDI subscales, particularly Drive-for-Thinness, were associated with the retrieval of fewer specific and a greater proportion of categorical memories on the AMT and with the generation of fewer and less effective means on the MEPS. Memory specificity fully mediated the relationship between eating psychopathology and SPS. These findings have implications for individuals exhibiting high levels of disordered eating, as poor AMS and SPS are likely to impact negatively on their psychological wellbeing and everyday social functioning and could represent a risk factor for the development of clinically significant eating disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Expecting innovation: psychoactive drug primes and the generation of creative solutions.
Hicks, Joshua A; Pedersen, Sarah L; Friedman, Ronald S; McCarthy, Denis M
2011-08-01
Many individuals expect that alcohol and drug consumption will enhance creativity. The present studies tested whether substance related primes would influence creative performance for individuals who possessed creativity-related substance expectancies. Participants (n = 566) were briefly exposed to stimuli related to psychoactive substances (alcohol, for Study 1, Sample 1, and Study 2; and marijuana, for Study 1, Sample 2) or neutral stimuli. Participants in Study 1 then completed a creative problem-solving task, while participants in Study 2 completed a divergent thinking task or a task unrelated to creative problem solving. The results of Study 1 revealed that exposure to the experimental stimuli enhanced performance on the creative problem-solving task for those who expected the corresponding substance would trigger creative functioning. In a conceptual replication, Study 2 showed that participants exposed to alcohol cues performed better on a divergent thinking task if they expected alcohol to enhance creativity. It is important to note that this same interaction did not influence performance on measures unrelated to creative problem solving, suggesting that the activation of creativity-related expectancies influenced creative performance, specifically. These findings highlight the importance of assessing expectancies when examining pharmacological effects of alcohol and marijuana. Future directions and implications for substance-related interventions are discussed. (c) 2011 APA, all rights reserved.
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system, and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided, taking into consideration machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the assignment of operators to cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization, and a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated by the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem
NASA Astrophysics Data System (ADS)
Jolai, Fariborz; Assadipour, Ghazal
Crew scheduling is one of the important problems of the airline industry. It aims to assign crew members to a set of flights so that all flights are covered while, in a robust schedule, the total cost, delays, and imbalance in crew utilization are minimized. As the problem is NP-hard and the objectives conflict with one another, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated (Pareto-optimal) solutions, enabling them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate its performance, three metrics are suggested, and the diversity and convergence of the achieved Pareto front are appraised. Finally, CellDE is compared with PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
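The non-dominated (Pareto-optimal) set that CellDE hands to the decision maker is defined by the dominance relation; a minimal sketch of extracting such a front from a list of objective vectors, with all objectives minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep exactly the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In the crew-scheduling setting, the tuple entries would correspond to total cost, delay, and utilization imbalance.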
A 16-bit Coherent Ising Machine for One-Dimensional Ring and Cubic Graph Problems
NASA Astrophysics Data System (ADS)
Takata, Kenta; Marandi, Alireza; Hamerly, Ryan; Haribara, Yoshitaka; Maruo, Daiki; Tamate, Shuhei; Sakaguchi, Hiromasa; Utsunomiya, Shoko; Yamamoto, Yoshihisa
2016-09-01
Many tasks in modern life, such as planning efficient travel, image processing, and optimizing integrated circuit design, are modeled as complex combinatorial optimization problems with binary variables. Such problems can be mapped to finding a ground state of the Ising Hamiltonian, and various physical systems have therefore been studied to emulate and solve this Ising problem. Recently, networks of mutually injected optical oscillators, called coherent Ising machines, have been developed as promising solvers for the problem, benefiting from programmability, scalability, and room-temperature operation. Here, we report a 16-bit coherent Ising machine based on a network of time-division-multiplexed femtosecond degenerate optical parametric oscillators. The system experimentally achieves success rates above 99.6% for one-dimensional Ising rings and nondeterministic polynomial-time (NP) hard instances. The experimental and numerical results indicate that gradual pumping of the network, combined with the multiple spectral and temporal modes of the femtosecond pulses, can improve the computational performance of the Ising machine, offering a new path for tackling larger and more complex instances.
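The mapping the abstract relies on, a binary problem expressed as the ground state of an Ising Hamiltonian H = J Σ_i s_i s_{i+1} on a ring, can be checked exhaustively at small sizes; the machine's 16 spins are already near the practical limit for this kind of brute force.

```python
def ising_ring_ground_state(n, J=1.0):
    """Exhaustively find a ground state of H = J * sum_i s_i s_{i+1}
    on an n-spin ring (antiferromagnetic for J > 0), spins s_i = +/-1."""
    best_e, best_s = float("inf"), None
    for bits in range(2 ** n):
        s = [1 if (bits >> i) & 1 else -1 for i in range(n)]
        e = J * sum(s[i] * s[(i + 1) % n] for i in range(n))
        if e < best_e:
            best_e, best_s = e, s
    return best_e, best_s
```

For an even antiferromagnetic ring the ground state alternates spins, which is the easy benchmark instance; the NP-hard cubic-graph instances in the paper have no such closed-form answer, which is why a physical solver is interesting.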
2011-01-01
Background Consumer use of herbal and natural products (H/NP) is increasing, yet physicians are often unprepared to provide guidance due to lack of educational training. This knowledge deficit may place consumers at risk of clinical complications. We wished to evaluate the impact that a natural medicine clinical decision tool has on faculty attitudes, practice experiences, and needs with respect to H/NP. Methods All physicians and clinical staff (nurse practitioners, physicians assistants) (n = 532) in departments of Pediatrics, Family and Community Medicine, and Internal Medicine at our medical center were invited to complete 2 electronic surveys. The first survey was completed immediately before access to a H/NP clinical-decision tool was obtained; the second survey was completed the following year. Results Responses were obtained from 89 of 532 practitioners (16.7%) on the first survey and 87 of 535 (16.3%) clinicians on the second survey. Attitudes towards H/NP varied with gender, age, time in practice, and training. At baseline, before having an evidence-based resource available, nearly half the respondents indicated that they rarely or never ask about H/NP when taking a patient medication history. The majority of these respondents (81%) indicated that they would like to learn more about H/NP, but 72% admitted difficulty finding evidence-based information. After implementing the H/NP tool, 63% of database-user respondents indicated that they now ask patients about H/NP when taking a drug history. Compared to results from the baseline survey, respondents who used the database indicated that the tool significantly increased their ability to find reliable H/NP information (P < 0.0001), boosted their knowledge of H/NP (p < 0.0001), and increased their confidence in providing accurate H/NP answers to patients and colleagues (P < 0.0001). 
Conclusions Our results demonstrate healthcare provider knowledge and confidence with H/NP can be improved without costly and time-consuming formal H/NP curricula. Yet, it will be challenging to make providers aware of such resources. PMID:22011398
Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT
Nguyen, Thu L. N.; Shin, Yoan
2016-01-01
Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT), which offers automatically discoverable services, and localization accuracy is a key measure of the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank Euclidean distance matrix completion problem with known nodes. The task is to find the sensor locations by recovering the missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxed optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained by our scheme has lower complexity and can perform better when used as an initial guess for an iterative local search in another, higher-precision localization scheme. Simulation results show the effectiveness of our approach. PMID:27213378
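A closely related building block, recovering coordinates from a complete squared Euclidean distance matrix, is classical multidimensional scaling via double centering. The paper's contribution is handling missing entries; the complete-matrix case below only shows the low-rank structure being exploited, and is a sketch rather than the authors' Newton-type method.

```python
import numpy as np

def classical_mds(D2, dim=2):
    """Recover point coordinates (up to rotation/translation) from a
    complete squared Euclidean distance matrix by double centering."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ D2 @ J                 # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]       # top-`dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

The Gram matrix B has rank at most `dim`, which is the "dimension small compared to the number of points" property the abstract mentions; with missing entries that rank constraint is what makes completion possible at all.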
Calculus Problem Solving Behavior of Mathematic Education Students
NASA Astrophysics Data System (ADS)
Rizal, M.; Mansyur, J.
2017-04-01
The purpose of this study is to describe the problem-solving behaviour of mathematics education students. It proceeded in several stages: (1) selecting subjects from first-semester mathematics education students, one each with high, medium, and low mathematical competence; (2) giving two mathematical problems with different characteristics: in the first problem (M1) the statement does not point toward a solution, while in the second (M2) it does; (3) exploring problem-solving behaviour following Polya's steps (Rizal, 2011) through think-aloud protocols and in-depth interviews. The data were analysed as suggested by Miles and Huberman (1994), after first establishing credibility through time triangulation, i.e., presenting equivalent problem contexts at different times. The results describe the behaviour of the high-competence problem solver (ST). In understanding M1, ST tends to attend to an image first, reads the text piecemeal and repeatedly, then as a whole, and focuses on sentences containing equations, numbers, or symbols; as a result, not all information is taken in. When understanding M2, ST can link information from the problem held in working memory to information in long-term memory. ST plans solutions to M1 and M2 using formulas based on similar problems encountered before. When carrying out the plan, ST completes M1 as planned, but not all of it correctly; in contrast, ST solves M2 according to plan quickly and correctly. ST checks the results of M1 and M2 by rereading the work for algorithmic correctness and reasonableness.
Furthermore, the medium- and low-competence solvers (SS and SR) understand M1 and M2 much as ST does, but both read the questions incompletely and so fail to attend to what the problems ask. SS and SR create and execute a plan for M2 as ST does, but for M1 they cannot, and only keep rereading the problem statement. When checking the M2 task, SS and SR retrace their work against the formula they used.
The effect of problem structure on problem-solving: an fMRI study of word versus number problems.
Newman, Sharlene D; Willoughby, Gregory; Pruce, Benjamin
2011-09-02
It has long been thought that word problems are more difficult to solve than number/equation problems. However, recent findings have begun to bring this broadly believed idea into question. The current study examined the processing differences between these two types of problems. The behavioral results presented here failed to show an overwhelming advantage for number problems. In fact, there were more errors for the number problems than the word problems. The neuroimaging results reported demonstrate that there is significant overlap in the processing of what, on the surface, appears to be completely different problems that elicit different problem-solving strategies. Word and number problems rely on a general network responsible for problem-solving that includes the superior posterior parietal cortex, the horizontal segment of the intraparietal sulcus which is hypothesized to be involved in problem representation and calculation as well as the regions that have been linked to executive aspects of working memory such as the pre-SMA and basal ganglia. While overlap was observed, significant differences were also found primarily in language processing regions such as Broca's and Wernicke's areas for the word problems and the horizontal segment of the intraparietal sulcus for the number problems. Copyright © 2011 Elsevier B.V. All rights reserved.
Camp, Joanne S; Karmiloff-Smith, Annette; Thomas, Michael S C; Farran, Emily K
2016-12-01
Individuals with neurodevelopmental disorders like Williams syndrome and Down syndrome exhibit executive function impairments on experimental tasks (Lanfranchi, Jerman, Dal Pont, Alberti, & Vianello, 2010; Menghini, Addona, Costanzo, & Vicari, 2010), but the way that they use executive functioning for problem solving in everyday life has not hitherto been explored. The study aim is to understand cross-syndrome characteristics of everyday executive functioning and problem solving. Parents/carers of individuals with Williams syndrome (n = 47) or Down syndrome (n = 31) of a similar chronological age (m = 17 years 4 months and 18 years, respectively) as well as those of a group of younger typically developing children (n = 34; m = 8 years 3 months) completed two questionnaires: the Behavior Rating Inventory of Executive Function (BRIEF; Gioia, Isquith, Guy, & Kenworthy, 2000) and a novel Problem-Solving Questionnaire. The rated likelihood of reaching a solution in a problem solving situation was lower for both syndromic groups than the typical group, and lower still for the Williams syndrome group than the Down syndrome group. The proportion of group members meeting the criterion for clinical significance on the BRIEF was also highest for the Williams syndrome group. While changing response, avoiding losing focus and maintaining perseverance were important for problem-solving success in all groups, asking for help and avoiding becoming emotional were also important for the Down syndrome and Williams syndrome groups respectively. Keeping possessions in order was a relative strength amongst BRIEF scales for the Down syndrome group. Results suggest that individuals with Down syndrome tend to use compensatory strategies for problem solving (asking for help and, potentially, keeping items well ordered), while for individuals with Williams syndrome, emotional reactions disrupt their problem-solving skills.
This paper highlights the importance of identifying syndrome-specific problem-solving strengths and difficulties to improve effective functioning in everyday life. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zani, Carlos L; Carroll, Anthony R
2017-06-23
The discovery of novel and/or new bioactive natural products from biota sources is often confounded by the reisolation of known natural products. Dereplication strategies that involve the analysis of NMR and MS spectroscopic data to infer structural features present in purified natural products, in combination with database searches of these substructures, provide an efficient method to rapidly identify known natural products. Unfortunately, this strategy has been hampered by the lack of publicly available and comprehensive natural product databases and open-source cheminformatics tools. A new platform, DEREP-NP, has been developed to help solve this problem. DEREP-NP uses the open-source cheminformatics program DataWarrior to generate a database containing counts of 65 structural fragments present in 229,358 natural product structures derived from plants, animals, and microorganisms, published before 2013 and freely available in the nonproprietary Universal Natural Products Database (UNPD). By counting the number of times one or more of these structural features occurs in an unknown compound, as deduced from the analysis of its NMR (1H, HSQC, and/or HMBC) and/or MS data, matching structures carrying the same numeric combination of searched structural features can be retrieved from the database. Confirmation that the matching structure is the same compound can then be verified through literature comparison of spectroscopic data. This methodology can be applied both to purified natural products and to fractions containing a small number of individual compounds, such as those often generated as screening libraries. The utility of DEREP-NP has been verified through the analysis of spectra derived from compounds (and fractions containing two or three compounds) isolated from plant, marine invertebrate, and fungal sources. DEREP-NP is freely available at https://github.com/clzani/DEREP-NP and will help to streamline the natural product discovery process.
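The lookup DEREP-NP performs can be pictured as indexing known structures by their fragment-count vector and retrieving exact matches. The three-fragment vectors and compound names below are hypothetical; the real database counts 65 fragment types over 229,358 structures.

```python
def build_index(db):
    """Index compound names by their fragment-count vector.
    db maps name -> tuple of per-fragment counts."""
    index = {}
    for name, counts in db.items():
        index.setdefault(counts, []).append(name)
    return index

def dereplicate(index, observed):
    """Return known compounds whose fragment counts match the counts
    deduced from NMR/MS data of the unknown; empty list means the
    compound may be new (or absent from the database)."""
    return index.get(tuple(observed), [])
```

A hit still requires confirmation against literature spectroscopic data, as the abstract notes, since distinct structures can share a count vector.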
Sweat, Noah W; Bates, Larry W; Hendricks, Peter S
2016-01-01
Developing methods for improving creativity is of broad interest. Classic psychedelics may enhance creativity; however, the underlying mechanisms of action are unknown. This study was designed to assess whether a relationship exists between naturalistic classic psychedelic use and heightened creative problem-solving ability and if so, whether this is mediated by lifetime mystical experience. Participants (N = 68) completed a survey battery assessing lifetime mystical experience and circumstances surrounding the most memorable experience. They were then administered a functional fixedness task in which faster completion times indicate greater creative problem-solving ability. Participants reporting classic psychedelic use concurrent with mystical experience (n = 11) exhibited significantly faster times on the functional fixedness task (Cohen's d = -.87; large effect) and significantly greater lifetime mystical experience (Cohen's d = .93; large effect) than participants not reporting classic psychedelic use concurrent with mystical experience. However, lifetime mystical experience was unrelated to completion times on the functional fixedness task (standardized β = -.06), and was therefore not a significant mediator. Classic psychedelic use may increase creativity independent of its effects on mystical experience. Maximizing the likelihood of mystical experience may need not be a goal of psychedelic interventions designed to boost creativity.
Multilevel decomposition of complete vehicle configuration in a parallel computing environment
NASA Technical Reports Server (NTRS)
Bhatt, Vinay; Ragsdell, K. M.
1989-01-01
This research summarizes various approaches to multilevel decomposition to solve large structural problems. A linear decomposition scheme based on the Sobieski algorithm is selected as a vehicle for automated synthesis of a complete vehicle configuration in a parallel processing environment. The research is in a developmental state. Preliminary numerical results are presented for several example problems.
Stochastic Local Search for Core Membership Checking in Hedonic Games
NASA Astrophysics Data System (ADS)
Keinänen, Helena
Hedonic games have emerged as an important tool in economics and show promise as a useful formalism to model multi-agent coalition formation in AI as well as group formation in social networks. We consider a coNP-complete problem of core membership checking in hedonic coalition formation games. No previous algorithms to tackle the problem have been presented. In this work, we overcome this by developing two stochastic local search algorithms for core membership checking in hedonic games. We demonstrate the usefulness of the algorithms by showing experimentally that they find solutions efficiently, particularly for large agent societies.
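Core membership checking asks whether some coalition blocks a given partition, i.e., whether all its members strictly prefer it to their current coalitions. A stochastic search in the spirit of the paper, written for additively separable preferences (an assumed preference model for illustration; the paper's representation and algorithms may differ):

```python
import random

def utility(agent, coalition, v):
    # additively separable preferences: sum of pairwise values
    return sum(v[agent][b] for b in coalition if b != agent)

def find_blocking_coalition(v, partition, samples=2000, seed=1):
    """Randomized search for a coalition whose members all strictly
    prefer it to their current coalitions (a witness that the given
    partition is NOT in the core). Returns None if no block is found."""
    n = len(v)
    current = {}
    for block in partition:
        for a in block:
            current[a] = utility(a, block, v)
    rng = random.Random(seed)
    agents = list(range(n))
    for _ in range(samples):
        S = rng.sample(agents, rng.randint(2, n))
        if all(utility(a, S, v) > current[a] for a in S):
            return sorted(S)
    return None
```

Failing to find a block is only evidence, not proof, of core membership, which matches the coNP-completeness of the verification problem: certifying "no blocking coalition exists" is the hard direction.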
The use of MACSYMA for solving elliptic boundary value problems
NASA Technical Reports Server (NTRS)
Thejll, Peter; Gilbert, Robert P.
1990-01-01
A boundary method is presented for the solution of elliptic boundary value problems. An approach based on the use of complete systems of solutions is emphasized. The discussion is limited to the Dirichlet problem, even though the present method can possibly be adapted to treat other boundary value problems.
NASA Astrophysics Data System (ADS)
Ghezavati, V. R.; Beigi, M.
2016-12-01
During the last decade, stringent environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among the established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) with a homogeneous fleet is considered in the design of a multi-echelon, capacitated reverse logistics network, a setting that arises in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers, and disposal centers. We present a new bi-objective mathematical program (BOMP) for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in the GAMS software for producing Pareto-optimal solutions to a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method can solve small instances to optimality within reasonable computing times, while for medium-to-large problems the proposed NSGA-II performs better.
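The ɛ-constraint benchmark the authors implement in GAMS keeps one objective and turns the other into a constraint f2(x) ≤ ɛ, sweeping ɛ to trace the Pareto frontier. A brute-force sketch over an explicit solution list (the NSGA-II side is omitted; this only illustrates the benchmark method):

```python
def epsilon_constraint(solutions, f1, f2, eps_values):
    """Trace Pareto points of a bi-objective minimization problem by
    minimizing f1 subject to f2(x) <= eps, for a sweep of eps values."""
    front = set()
    for eps in eps_values:
        feasible = [x for x in solutions if f2(x) <= eps]
        if feasible:
            best = min(feasible, key=f1)
            front.add((f1(best), f2(best)))
    return sorted(front)
```

In the real model the inner minimization is a mixed-integer program solved exactly, which is why the method scales only to small instances, exactly the trade-off the abstract reports.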
Teaching children with autism to explain how: A case for problem solving?
Frampton, Sarah E; Alice Shillingsburg, M
2018-04-01
Few studies have applied Skinner's (1953) conceptualization of problem solving to teach socially significant behaviors to individuals with developmental disabilities. The current study used a multiple probe design across behavior (sets) to evaluate the effects of problem-solving strategy training (PSST) on the target behavior of explaining how to complete familiar activities. During baseline, none of the three participants with autism spectrum disorder (ASD) could respond to the problems presented to them (i.e., explain how to do the activities). Tact training of the actions in each activity alone was ineffective; however, all participants demonstrated independent explaining-how following PSST. Further, following PSST with Set 1, tact training alone was sufficient for at least one scenario in sets 2 and 3 for all 3 participants. Results have implications for generative responding for individuals with ASD and further the discussion regarding the role of problem solving in complex verbal behavior. © 2018 Society for the Experimental Analysis of Behavior.
1985-12-01
The Office of Scientific Research and the Air Force Space Division are sponsoring research on the development of a high-speed DFT processor. [Portions of this abstract, including a figure describing initialization of the arithmetic circuitry through a master/slave interface, are garbled in the source.] Since the TSP is an NP-complete problem, many mathematicians, operations researchers, computer scientists, and the like have proposed heuristics.
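The TSP heuristics the report alludes to include simple constructive rules such as nearest neighbor: always visit the closest unvisited city. A sketch of that classic heuristic (one of many; the report's own choice is not recoverable from the garbled text):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city.
    Fast, but with no optimality guarantee (TSP is NP-hard)."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

On adversarial instances nearest neighbor can be far from optimal, which is why such constructions are usually followed by improvement moves like 2-opt.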
Van Liew, Charles; Gluhm, Shea; Goldstein, Jody; Cronan, Terry A; Corey-Bloom, Jody
2013-01-01
Huntington's disease (HD) is a genetic, neurodegenerative disorder characterized by motor, cognitive, and psychiatric dysfunction. In HD, the inability to solve problems successfully affects not only disease coping, but also interpersonal relationships, judgment, and independent living. The aim of the present study was to examine social problem-solving (SPS) in well-characterized HD and at-risk (AR) individuals and to examine its unique and conjoint effects with motor, cognitive, and psychiatric states on functional ratings. Sixty-three participants, 31 HD and 32 gene-positive AR, were included in the study. Participants completed the Social Problem-Solving Inventory-Revised: Long (SPSI-R:L), a 52-item, reliable, standardized measure of SPS. Items are aggregated under five scales (Positive, Negative, and Rational Problem-Solving; Impulsivity/Carelessness and Avoidance Styles). Participants also completed the Unified Huntington's Disease Rating Scale functional, behavioral, and cognitive assessments, as well as additional neuropsychological examinations and the Symptom Checklist-90-Revised (SCL-90R). A structural equation model was used to examine the effects of motor, cognitive, psychiatric, and SPS states on functionality. The multifactor structural model fit well descriptively. Cognitive and motor states uniquely and significantly predicted function in HD; however, neither psychiatric nor SPS states did. SPS was, however, significantly related to motor, cognitive, and psychiatric states, suggesting that it may bridge the correlative gap between psychiatric and cognitive states in HD. SPS may be worth assessing in conjunction with the standard gamut of clinical assessments in HD. Suggestions for future research and implications for patients, families, caregivers, and clinicians are discussed.
A multiobjective hybrid genetic algorithm for the capacitated multipoint network design problem.
Lo, C C; Chang, W H
2000-01-01
The capacitated multipoint network design problem (CMNDP) is NP-complete. In this paper, a hybrid genetic algorithm for the CMNDP is proposed. The multiobjective hybrid genetic algorithm (MOHGA) differs from other genetic algorithms (GAs) mainly in its selection procedure. The concept of subpopulations is used in the MOHGA: four subpopulations are generated according to the elitism reservation strategy, the shifting Prüfer vector, stochastic universal sampling, and a completely random method, respectively. Mixing these four subpopulations produces the next-generation population. The MOHGA can effectively search the feasible solution space owing to its population diversity. The MOHGA has been applied to the CMNDP. By examining computational and analytical results, we find that the MOHGA locates most nondominated solutions and is much more effective and efficient than other multiobjective GAs.
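One of the four subpopulation generators, stochastic universal sampling, places n equally spaced pointers on the cumulative-fitness wheel so that selection matches expected proportions with minimal spread. A standard implementation (the MOHGA's exact parameterization is not given in the abstract):

```python
import random

def stochastic_universal_sampling(fitness, n_select, seed=42):
    """SUS: n_select equally spaced pointers over the cumulative fitness
    wheel; lower variance than n_select independent roulette spins."""
    rng = random.Random(seed)
    total = sum(fitness)
    step = total / n_select
    start = rng.uniform(0, step)          # single random offset
    pointers = [start + i * step for i in range(n_select)]
    chosen, cum, i = [], 0.0, 0
    for p in pointers:
        while cum + fitness[i] < p:       # advance to the slice holding p
            cum += fitness[i]
            i += 1
        chosen.append(i)
    return chosen
```

Because a single random offset positions every pointer, an individual holding half the total fitness receives half the selection slots almost deterministically, unlike repeated roulette-wheel draws.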
Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju
2004-10-01
Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.
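The baseline the paper builds on, stochastic simulated annealing, is a Metropolis accept/reject rule under a decreasing temperature. The sketch below shows only that SSA core; the paper's contribution, adding decaying chaotic dynamics from a noisy chaotic neural network on top of such a scheme, is not reproduced here.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=2.0, cooling=0.995,
                        steps=4000, seed=0):
    """Plain stochastic simulated annealing: accept downhill moves always,
    uphill moves with probability exp(-delta / t), with geometric cooling."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, best_f = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best, best_f = x, fx
        t *= cooling
    return best, best_f
```

The slow-cooling convergence guarantee the abstract cites for SSA is what chaotic simulated annealing gives up by being deterministic, and what the hybrid tries to recover.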
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
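One of the reductions the paper unifies, maximum independent set to maximum clique via the complement graph, is easy to state in code; brute force keeps the example self-contained (the IEA-PTS itself is far more elaborate and scales to the DIMACS graphs, which this sketch cannot).

```python
from itertools import combinations

def max_clique_bruteforce(n, edges):
    """Exhaustive maximum-clique search (fine for tiny graphs only)."""
    adj = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):
        for nodes in combinations(range(n), k):
            if all(frozenset(p) in adj for p in combinations(nodes, 2)):
                return set(nodes)
    return set()

def max_independent_set(n, edges):
    """MIS(G) equals the maximum clique of the complement graph:
    one of the unified reductions used in the paper."""
    present = {frozenset(e) for e in edges}
    comp = [(u, v) for u, v in combinations(range(n), 2)
            if frozenset((u, v)) not in present]
    return max_clique_bruteforce(n, comp)
```

Minimum vertex cover follows for free, since it is the complement of a maximum independent set within the vertex set.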
Berry, Jack W.; Elliott, Timothy R.; Grant, Joan S.; Edwards, Gary; Fine, Philip R.
2012-01-01
Objective To examine whether an individualized problem-solving intervention provided to family caregivers of persons with severe disabilities provides benefits to both caregivers and their care recipients. Design Family caregivers were randomly assigned to an education-only control group or a problem-solving training (PST) intervention group. Participants received monthly contacts for 1 year. Participants Family caregivers (129 women, 18 men) and their care recipients (81 women, 66 men) consented to participate. Main Outcome Measures Caregivers completed the Social Problem-Solving Inventory–Revised, the Center for Epidemiological Studies-Depression scale, the Satisfaction with Life scale, and a measure of health complaints at baseline and in 3 additional assessments throughout the year. Care recipient depression was assessed with a short form of the Hamilton Depression Scale. Results Latent growth modeling was used to analyze data from the dyads. Caregivers who received PST reported a significant decrease in depression over time, and they also displayed gains in constructive problem-solving abilities and decreases in dysfunctional problem-solving abilities. Care recipients displayed significant decreases in depression over time, and these decreases were significantly associated with decreases in caregiver depression in response to training. Conclusions PST significantly improved the problem-solving skills of community-residing caregivers and also lessened their depressive symptoms. Care recipients in the PST group also had reductions in depression over time, and it appears that decreases in caregiver depression may account for this effect. PMID:22686549
Quantum proofs can be verified using only single-qubit measurements
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Nagaj, Daniel; Schuch, Norbert
2016-02-01
Quantum Merlin Arthur (QMA) is the class of problems which, though potentially hard to solve, have a quantum solution that can be verified efficiently using a quantum computer. It thus forms a natural quantum version of the classical complexity class NP (and its probabilistic variant MA, Merlin-Arthur games), where the verifier has only classical computational resources. In this paper, we study what happens when we restrict the quantum resources of the verifier to the bare minimum: individual measurements on single qubits received as they come, one by one. We find that despite this grave restriction, it is still possible to soundly verify any problem in QMA for the verifier with the minimum quantum resources possible, without using any quantum memory or multiqubit operations. We provide two independent proofs of this fact, based on measurement-based quantum computation and the local Hamiltonian problem. The former construction also applies to QMA1, i.e., QMA with one-sided error.
Kiesewetter, Jan; Ebersbach, René; Görlitz, Anja; Holzer, Matthias; Fischer, Martin R; Schmidmaier, Ralf
2013-01-01
Problem-solving in terms of clinical reasoning is regarded as a key competence of medical doctors. Little is known about the general cognitive actions underlying the problem-solving strategies of medical students. In this study, a theory-based model was used and adapted in order to investigate the cognitive actions in which medical students engage when dealing with a case, and how patterns of these actions are related to the correct solution. Twenty-three medical students worked on three cases in clinical nephrology using the think-aloud method. The transcribed recordings were coded using a theory-based model consisting of eight different cognitive actions. The coded data were analysed using time sequences in a graphical representation software. Furthermore, the relationship between the coded data and the accuracy of diagnosis was investigated with inferential statistical methods. The observation of all main actions in a case elaboration, including evaluation, representation, and integration, was considered a complete model and was found in the majority of cases (56%). This pattern was significantly related to the accuracy of the case solution (φ = 0.55; p<.001). Extent of prior knowledge was related neither to the complete model nor to the correct solution. The proposed model is suitable for empirically verifying the cognitive actions of problem-solving among medical students. The cognitive actions evaluation, representation, and integration are crucial for the complete model and therefore for the accuracy of the solution. The educational implication that may be drawn from this study is to foster students' reasoning by focusing on higher-level reasoning.
Problem-solving ability and comorbid personality disorders in depressed outpatients.
Harley, Rebecca; Petersen, Timothy; Scalia, Margaret; Papakostas, George I; Farabaugh, Amy; Fava, Maurizio
2006-01-01
Major depressive disorder (MDD) is associated with poor problem-solving abilities. In addition, certain personality disorders (PDs) that are common among patients with MDD are also associated with limited problem-solving skills. Attempts to understand the relationship between PDs and problem solving can be complicated by the presence of acute MDD. Our objective in this study was to investigate the relationships between PDs, problem-solving skills, and response to treatment among outpatients with MDD. We enrolled 312 outpatients with MDD in an open, fixed-dose, 8-week fluoxetine trial. PD diagnoses were ascertained via structured clinical interview before and after fluoxetine treatment. Subjects completed the Problem-Solving Inventory (PSI) at both time points. We used analyses of covariance (ANCOVAs) to assess relationships between PD diagnoses and PSI scores prior to treatment. Subjects were divided into three groups: those with PD diagnoses that remained stable after fluoxetine treatment (N=91), those who no longer met PD criteria after fluoxetine treatment (N=119), and those who did not meet criteria for a PD at any time point in the study (N=95). We used multiple chi-square analyses to compare rates of MDD response and remission between the three PD groups. ANCOVA was also used to compare posttreatment PSI scores between PD groups. Prior to fluoxetine treatment, patients with avoidant, dependent, narcissistic, and borderline PDs reported significantly worse problem-solving ability than did patients without any PDs. Only dependent PD remained associated with poorer baseline problem-solving reports after the effects of baseline depression severity were controlled. Patients with stable PD diagnoses had significantly lower rates of MDD remission. Across PD groups, problem solving improved as MDD improved.
No significant differences in posttreatment problem-solving were found between PD groups after controlling for baseline depression severity, baseline PSI score, and response to treatment. Treatment with fluoxetine is less likely to lead to remission of MDD in patients with stable PDs. More study is needed to investigate causal links between PDs, problem solving, and MDD treatment response. Published 2006 Wiley-Liss, Inc.
2010-06-01
…membership such as clothes, signs, art, architecture, logos, landmarks, and flags that people can… …States during complex contingency operations depends on a "whole of nation" approach to solving complex problems. Psychological sense of community (PSOC) theory provides the link that explains how…
NASA Astrophysics Data System (ADS)
Bai, Danyu
2015-08-01
This paper discusses the flow shop scheduling problem to minimise the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation is focused on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale is sufficiently large. To further enhance the quality of the original solutions, the improvement scheme is provided for these algorithms. A new lower bound with performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results of the single-machine TQCT problem with release dates are also obtained for the deduction of the main theorem.
Facility Layout Problems Using Bays: A Survey
NASA Astrophysics Data System (ADS)
Davoudpour, Hamid; Jaafari, Amir Ardestani; Farahani, Leila Najafabadi
2010-06-01
Layout design is one of the most important activities performed by industrial engineers. Most of these problems are NP-hard. In a basic layout design, each cell is represented by a rectilinear, but not necessarily convex, polygon. The set of fully packed adjacent polygons is known as a block layout (Asef-Vaziri and Laporte 2007). Block layouts are divided into slicing-tree and bay layouts. In a bay layout, departments are located in vertical columns or horizontal rows, called bays. Bay layouts are used in real-world settings, especially in contexts such as semiconductor fabrication and aisles. There are several reviews of facility layout; however, none of them focuses on bay layout. The literature analysis given here is not limited to specific considerations about bay layout design. We present a state-of-the-art review of bay layout, considering issues such as the objectives used, solution techniques, and integration methods.
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration and can help obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial candidate correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by a simulated annealing algorithm to obtain the final set of correct correspondences. Finally, we present a novel set partitioning method that transforms the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
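The correspondence-initialisation step can be illustrated with a hedged sketch: mutual nearest-neighbour matching in descriptor space, where the plain vectors stand in for FPFH descriptors (the paper's graph matching and annealing stages are not reproduced here):

```python
# Hedged sketch of correspondence initialisation: keep a pair (i, j) only
# when point i's nearest descriptor in cloud B is j AND j's nearest in
# cloud A is i. The tuples below stand in for FPFH feature vectors.
def mutual_matches(feats_a, feats_b):
    def nn(query, feats):
        # index of the nearest descriptor by squared Euclidean distance
        return min(range(len(feats)),
                   key=lambda j: sum((x - y) ** 2
                                     for x, y in zip(query, feats[j])))
    return [(i, nn(feats_a[i], feats_b))
            for i in range(len(feats_a))
            if nn(feats_b[nn(feats_a[i], feats_b)], feats_a) == i]
```

Mutual consistency prunes many spurious pairs up front, which is why such a filter is a common prelude to the kind of global correspondence optimisation the abstract describes.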
Single molecule sequencing-guided scaffolding and correction of draft assemblies.
Zhu, Shenglong; Chen, Danny Z; Emrich, Scott J
2017-12-06
Although single molecule sequencing is still improving, the lengths of the generated sequences are an inherent advantage in genome assembly. Prior work that utilizes long reads for genome assembly has mostly focused on correcting sequencing errors and improving the contiguity of de novo assemblies. We propose a disassembling-reassembling approach for both correcting structural errors in the draft assembly and scaffolding a target assembly based on error-corrected single molecule sequences. To achieve this goal, we formulate a maximum alternating path cover problem. We prove that this problem is NP-hard, and solve it with a 2-approximation algorithm. Our experimental results show that our approach can improve the structural correctness of target assemblies at the cost of some contiguity, even with smaller amounts of long reads. In addition, our reassembling process can also serve as a competitive scaffolder relative to well-established assembly benchmarks.
A bi-objective model for robust yard allocation scheduling for outbound containers
NASA Astrophysics Data System (ADS)
Liu, Changchun; Zhang, Canrong; Zheng, Li
2017-01-01
This article examines the yard allocation problem for outbound containers, with consideration of uncertainty factors, mainly including the arrival and operation time of calling vessels. Based on the time buffer inserting method, a bi-objective model is constructed to minimize the total operational cost and to maximize the robustness of fighting against the uncertainty. Due to the NP-hardness of the constructed model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n-steps ahead with the uncertainty factors set as their respective expected values; in the second stage, based on the solutions obtained in the first stage and with consideration of uncertainty factors, a neighbourhood search heuristic is employed to generate robust solutions that can fight better against the fluctuation of uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.
Foresight beyond the very next event: four-year-olds can link past and deferred future episodes
Redshaw, Jonathan; Suddendorf, Thomas
2013-01-01
Previous experiments have demonstrated that by 4 years of age children can use information from a past episode to solve a problem for the very next future episode. However, it remained unclear whether 4-year-olds can similarly use such information to solve a problem for a more removed future episode that is not of immediate concern. In the current study we introduced 4-year-olds to problems in one room before taking them to another room and distracting them for 15 min. The children were then offered a choice of items to place into a bucket that was to be taken back to the first room when a 5-min sand-timer had completed a cycle. Across two conceptually distinct domains, the children placed the item that could solve the deferred future problem above chance level. This result demonstrates that by 48 months many children can recall a problem from the past and act in the present to solve that problem for a deferred future episode. We discuss implications for theories about the nature of episodic foresight. PMID:23847575
NASA Astrophysics Data System (ADS)
Kosasih, U.; Wahyudin, W.; Prabawanto, S.
2017-09-01
This study aims to understand how learners look back at their problem-solving ideas. This research is based on a qualitative approach with a case study design. Participants in this study were xx students of a Junior High School who were studying the material of congruence and similarity. The supporting instruments in this research are a test and an interview sheet. The data obtained were analyzed by coding and constant comparison. The analysis finds that there are three ways in which the students review the idea of problem solving: 1) comparing answers with the solution steps exemplified by learning resources; 2) examining the logical relationship between the solution and the problem; and 3) confirming against their prior knowledge. This happens because most students learn in a mechanistic way. This study concludes that how students validate the ideas obtained from problem solving is influenced by teacher explanations, learning resources, and prior knowledge. Therefore, teacher explanations and learning resources contribute to the success or failure of students in solving problems.
Problem-based learning on quantitative analytical chemistry course
NASA Astrophysics Data System (ADS)
Fitri, Noor
2017-12-01
This research applies the problem-based learning method to quantitative analytical chemistry, the so-called "Analytical Chemistry II" course, especially in relation to essential oil analysis. The learning outcomes of this course include understanding of lectures, the skill of applying course materials, and the ability to identify, formulate, and solve chemical analysis problems. The role of study groups is quite important in improving students' learning ability and in completing independent and group tasks. Thus, students are not only aware of the basic concepts of Analytical Chemistry II but are also able to understand and apply the analytical concepts that have been studied to solve given analytical chemistry problems, and have the attitude and ability to work together to solve those problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course has been proven to improve students' knowledge, skills, abilities, and attitudes. Students are not only skilled at solving problems in analytical chemistry, especially essential oil analysis in accordance with the local genius of the Chemistry Department, Universitas Islam Indonesia, but are also skilled at working with computer programs and able to understand materials and problems in English.
Student Interaction with Campus Help-Givers: Mapping the Network's Efficacy.
ERIC Educational Resources Information Center
Huebner, Lois A.; And Others
Procedures to map the broad outline of student interaction with various help-giving persons and campus agencies were investigated. A sample of 633 undergraduate students completed an 8-part problem-solving questionnaire that identified current problems, problems that previously existed, the 5 most important problems, improvement rates for the most…
Wu, Jiang; Qin, Yufei; Zhou, Quan; Xu, Zhenming
2009-05-30
Electrostatic separation is an effective and environmentally friendly method for recycling metals and nonmetals from crushed printed circuit board (PCB) wastes. However, it still confronts some problems caused by nonconductive powder (NP). First, the NP is fine and liable to aggregate, which leads to an increase in middling products and a loss of metals. Second, the stability of the separation process is influenced by NP. Finally, some NP accumulates on the surfaces of the corona and electrostatic electrodes during the process. These problems lead to inefficient separation. In the present research, the impacts of NP on electrostatic separation are investigated. The experimental results show that the separation is notably influenced when the NP content exceeds 10%. With increasing NP content, the middling products sharply increase from 1.4 g to 4.3 g (a 207.1% increase), while the conductive products decrease from 24.0 g to 19.1 g (a 20.4% decrease), and the separation process becomes more unstable.
An outer approximation method for the road network design problem
Asadi Bagloee, Saeed; Sarvi, Majid
2018-01-01
Best investment in road infrastructure, or network design, is perceived as a fundamental benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as the bilevel Discrete Network Design Problem (DNDP), which is computationally NP-hard. We address this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. This results in a mixed-integer nonlinear programming (MINLP) problem, which is then solved using the Outer Approximation (OA) algorithm. (ii) We further relax the multi-commodity UE-TAP to a single-commodity MILP problem; that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method proves highly efficient in solving the DNDP for the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (the termination criterion), global optimum solutions are quickly reached in most cases; otherwise, good solutions (close to the global optimum) are found in early iterations. Comparative analysis on the Gao and Sioux-Falls networks shows that, for such a non-exact method, the global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integrating the objective function among the constraints also provides the capability to tackle the multi-objective (or multi-criteria) DNDP. PMID:29590111
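The combinatorial outer layer of the DNDP can be illustrated with a hedged brute-force sketch; `evaluate` is a hypothetical stand-in for the UE-TAP-based network performance evaluation, which the paper handles by relaxation rather than enumeration:

```python
from itertools import combinations

# Hedged sketch of the discrete outer layer of a network design problem:
# choose the subset of candidate projects within budget that minimises a
# caller-supplied network cost. `evaluate` is a placeholder for the
# equilibrium traffic assignment; brute force is viable only for tiny n.
def best_projects(costs, budget, evaluate):
    best_set, best_val = (), float("inf")
    n = len(costs)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(costs[i] for i in subset) <= budget:
                val = evaluate(subset)       # network cost under this build
                if val < best_val:
                    best_set, best_val = subset, val
    return best_set, best_val
```

Since the subset count grows as 2^n, exact enumeration is exactly what the paper's OA-based relaxation is designed to avoid on networks the size of Winnipeg.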
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning in a complex system is not a simple reasoning decision-making problem; it has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix constrains the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, from which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system.
Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for solving the problem of multi-constraint, multi-objective fault diagnosis and reasoning of complex systems.
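The pseudo-random-proportional rule mentioned above can be sketched generically (a hedged illustration; `scores` stands in for the pheromone-times-heuristic values and `q0` for the usual exploitation probability, both assumptions here rather than the paper's exact formulation):

```python
import random

# Hedged sketch of a pseudo-random-proportional selection rule from ant
# colony optimisation: with probability q0 exploit the best-scoring next
# node; otherwise sample a node proportionally to its score.
def choose_next(scores, q0=0.9, rng=random):
    """scores: dict mapping candidate node -> pheromone*heuristic value."""
    nodes = list(scores)
    if rng.random() < q0:
        return max(nodes, key=scores.get)      # greedy exploitation
    total = sum(scores.values())
    r = rng.uniform(0, total)                  # roulette-wheel exploration
    acc = 0.0
    for n in nodes:
        acc += scores[n]
        if r <= acc:
            return n
    return nodes[-1]
```

The balance parameter q0 is what lets such algorithms trade exploitation against exploration, the same tension the abstract's modified rule manages across multiple objectives.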
7 CFR 1955.115 - Sales steps for nonprogram (NP) property (housing).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 14 2010-01-01 2009-01-01 true Sales steps for nonprogram (NP) property (housing... Inventory Property Rural Housing (rh) Real Property § 1955.115 Sales steps for nonprogram (NP) property... following steps after repairs, if economically feasible, are completed. The appraisal will be updated to...
Paterson, Gillian; Power, Kevin; Yellowlees, Alex; Park, Katy; Taylor, Louise
2007-01-01
Research examining the cognitive and behavioural determinants of anorexia is currently lacking. This has implications for the success of treatment programmes for anorexics, particularly given the high reported dropout rates. This study examines two-dimensional self-esteem (comprising self-competence and self-liking) and social problem-solving in an anorexic population, and predicts that self-esteem will mediate the relationship between problem-solving and eating pathology by facilitating or inhibiting the use of faulty or effective strategies. Twenty-seven anorexic inpatients and 62 controls completed measures of social problem solving and two-dimensional self-esteem. Anorexics scored significantly higher than the non-clinical group on measures of eating pathology, negative problem orientation, impulsivity/carelessness, and avoidance, and significantly lower on positive problem orientation and both self-esteem components. In the clinical sample, disordered eating correlated significantly with self-competence, negative problem orientation, and avoidance. Associations between disordered eating and problem solving lost significance when self-esteem was controlled, in the clinical group only. Self-competence was found to be the main predictor of eating pathology in the clinical sample, while self-liking, impulsivity, and negative and positive problem orientation were the main predictors in the non-clinical sample. The findings support the two-dimensional self-esteem theory, with self-competence being relevant only to the anorexic population, and support the hypothesis that self-esteem mediates the relationship between disordered eating and problem-solving ability in an anorexic sample. Treatment implications include support for programmes emphasising increased self-appraisal and self-efficacy. 2006 John Wiley & Sons, Ltd and Eating Disorders Association
Efficient RNA structure comparison algorithms.
Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason
2017-12-01
The recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures, with a stricter similarity definition and objective, and propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
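The suffix-array binary search underlying the substructure search can be sketched on plain strings (a hedged illustration; the paper's [Formula: see text] representation is abstracted away):

```python
# Hedged sketch: substring search by binary search over a suffix array,
# the mechanism the abstract's substructure search relies on. The naive
# O(n^2 log n) construction is fine for illustration.
def build_suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])

def contains(s, sa, pattern):
    """True if `pattern` occurs in `s`, using binary search over `sa`."""
    lo, hi = 0, len(sa)
    while lo < hi:                       # find first suffix >= pattern
        mid = (lo + hi) // 2
        if s[sa[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:].startswith(pattern)
```

Because every occurrence of a pattern is a prefix of some suffix, all matches sit in one contiguous block of the sorted suffix array, which is what makes O(m log n)-style search possible.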
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
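The information-theoretic core of test selection can be sketched in a hedged form: with a uniform prior over remaining fault candidates, the most informative test is the one whose pass/fail outcome splits the candidates most evenly (the paper's Lagrangian and multiple-fault machinery is not reproduced):

```python
# Hedged sketch of greedy test selection for fault isolation: under a
# uniform prior, a test's information gain is maximised when its pass/fail
# outcome splits the remaining candidate faults most evenly.
def next_test(candidates, tests):
    """candidates: set of faults still possible.
    tests: dict mapping test name -> set of faults that test detects."""
    def balance(detected):
        k = len(candidates & detected)
        return min(k, len(candidates) - k)   # larger = more balanced split
    return max(tests, key=lambda t: balance(tests[t]))
```

Iterating this choice and pruning the candidate set after each outcome yields a diagnostic tree; the paper's contribution is making that construction tractable when several faults may be present at once.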
New optimization model for routing and spectrum assignment with nodes insecurity
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-04-01
By adopting orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, and establish a new optimization model to minimize the maximum index of the used frequency slots, which is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and the genetic algorithm is then designed to look for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation, and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.
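The "minimise the maximum used frequency-slot index" objective can be illustrated with a hedged first-fit sketch on a single link (the paper's routing, security constraints, and genetic operators are omitted):

```python
# Hedged sketch: first-fit spectrum assignment on one link, the classic
# heuristic for keeping the highest used frequency-slot index low. A
# request needs `width` contiguous slots; `used` holds occupied indices.
def first_fit(used, width, total_slots):
    for start in range(total_slots - width + 1):
        block = range(start, start + width)
        if all(s not in used for s in block):   # contiguity constraint
            return list(block)
    return None                                 # request is blocked
```

By always packing requests at the lowest feasible indices, first fit directly serves the max-index objective the model formalises; the paper's genetic framework then optimises over orderings and routes that plain first fit cannot.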
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
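A toy endpoint of the QCMDO-to-QUBO mapping may clarify the idea: fold a linear constraint into the objective as a quadratic penalty and minimise over binary assignments by enumeration, the role an AQO would play at scale (the problem instance and penalty weight below are illustrative assumptions):

```python
from itertools import product

# Hedged toy of the QUBO endpoint of the mapping: minimise
# x0 + x1 - 2*x0*x1 subject to x0 + x1 = 1, with the constraint folded in
# as a quadratic penalty P*(x0 + x1 - 1)**2. Brute-force enumeration below
# plays the role an adiabatic quantum optimizer would play at scale.
def solve_qubo(energy, n):
    return min(product((0, 1), repeat=n), key=energy)

P = 10  # penalty weight; must dominate the unconstrained objective range
E = lambda x: x[0] + x[1] - 2 * x[0] * x[1] + P * (x[0] + x[1] - 1) ** 2
```

Both feasible assignments (0, 1) and (1, 0) attain energy 1, while the infeasible ones are pushed up by the penalty; choosing P large enough to enforce the constraint without flattening the spectrum is the practical art of such embeddings.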
Conjecturing via analogical reasoning constructs ordinary students into like gifted student
NASA Astrophysics Data System (ADS)
Supratman; Ratnaningsih, N.; Ryane, S.
2017-12-01
The purpose of this study is to reveal, based on Piaget's theory, how the knowledge of ordinary students in the classroom can develop to resemble that of gifted students. To this end, students were given an open classical analogy problem, and the researchers examined how students conjecture via analogical reasoning in problem solving. Of the 32 students, assessed through a think-aloud method followed by interviews, 25 conjectured via analogical reasoning. All 25 showed nearly the same character in problem solving/knowledge construction. One of these students was therefore selected for an analysis of the thinking process during problem solving/knowledge construction based on Piaget's theory. According to Piaget's theory, in the development of the same knowledge, gifted students and ordinary students have similar structures in the final equilibrium. They begin processing with assimilation and accommodation of the problem, strategies, and relationships.
Adham, Manal T; Bentley, Peter J
2016-08-01
This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and demonstrate its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Mental Models and Cooperative Problem Solving with Expert Systems,
1984-09-01
...the user's conceptual understanding of the basic principles of the system's problem-solving processes. An experimental study is described that strongly... design principles that lead to the optimal user engineering of future expert systems. The central theory discussed below is that the nature of the...
Optimization of aerodynamic form of projectile for solving the problem of shooting range increasing
NASA Astrophysics Data System (ADS)
Lipanov, Alexey M.; Korolev, Stanislav A.; Rusyak, Ivan G.
2017-10-01
The article is devoted to the development of methods for solving the problem of external ballistics using a more complete system of equations of motion that takes into account rotation and oscillation about the center of mass, and using aerodynamic coefficients of forces and moments calculated by modeling the hydrodynamics of the flow around the projectile. The developed methods allow studying the basic ways of increasing the shooting range of artillery.
Enterprise Management Network Architecture Distributed Knowledge Base Support
1990-11-01
Advantages: potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways: first, it can be more reliable... does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition... problem-solving power amounts to the conceptual size of a single action taken by an agent, visible to the other agents in the system. If the grain is coarse
A learning approach to the bandwidth multicolouring problem
NASA Astrophysics Data System (ADS)
Akbari Torkestani, Javad
2016-05-01
In this article, we consider a generalisation of the vertex colouring problem known as the bandwidth multicolouring problem (BMCP), in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and those assigned to its neighbours is at least a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability arbitrarily close to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ) optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can likewise be made by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms, and the results show the efficiency of the proposed algorithm in terms of colour set size and running time.
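For concreteness, the BMCP constraint structure can be exercised with a plain first-fit greedy rule. This is only a sketch of the problem, not the article's learning-automata algorithms, and all names (including the `self_gap` separation between a vertex's own colours) are illustrative:

```python
def bandwidth_multicolor(vertices, edges, demand, self_gap=1):
    """First-fit greedy for bandwidth multicolouring.
    edges: {(u, v): gap} meaning any colour of u and any colour of v
    must differ by at least gap; each vertex v receives demand[v]
    colours, pairwise separated by at least self_gap."""
    colors = {v: [] for v in vertices}
    gap = {}
    for (u, v), g in edges.items():
        gap[(u, v)] = gap[(v, u)] = g
    for v in vertices:
        while len(colors[v]) < demand[v]:
            c = 0
            while True:
                ok = all(abs(c - c2) >= self_gap for c2 in colors[v])
                for u in vertices:
                    if (v, u) in gap:
                        ok = ok and all(abs(c - c2) >= gap[(v, u)]
                                        for c2 in colors[u])
                if ok:
                    colors[v].append(c)
                    break
                c += 1
    return colors
```

On two adjacent vertices with an edge gap of 2, the rule produces colours 0 and 2; the span of the colour set is the quantity the article's algorithms try to shrink.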
Chamberlin, Scott A; Moore, Alan D; Parks, Kelly
2017-09-01
Student affect plays a considerable role in mathematical problem solving performance, yet is rarely formally assessed. In this manuscript, an instrument and its properties are discussed to give educational psychologists the opportunity to assess student affect. The study was conducted to norm the CAIMPS (instrument) with gifted students. In so doing, educational psychologists are informed of the process and the instrument's properties. The sample comprised 160 middle-grade (7 and 8) students, identified as gifted, in the United States. After completing one of four model-eliciting activities (MEAs), all participants completed the CAIMPS (Chamberlin Affective Instrument for Mathematical Problem Solving). Data were analysed using confirmatory factor analysis to ascertain the number of factors in the instrument. The normed fit index (0.6939), non-normed fit index (0.8072), and root mean square error approximation (.076) were at or near the acceptable levels. Alpha levels for factors were also robust (.637-.923). Data suggest that the instrument was a good fit for use with mathematics students in middle grades when solving problems. Perhaps the most impressive characteristic of the instrument was that the four factors, AVI (anxiety, value, and interest), SS (self-efficacy and self-esteem), ASP (aspiration), and ANX (anxiety), did not correlate highly with one another, which defies previous hypotheses in educational psychology. © 2017 The British Psychological Society.
Nasiri, Saeideh; Kordi, Masoumeh; Gharavi, Morteza Modares
2015-01-01
Background: Self-esteem is a determinant factor of mental health. Individuals with low self-esteem have depression, and low self-esteem is one of the main symptoms of depression. The aim of this study was to compare the effects of problem-solving skills and relaxation on self-esteem scores in women with postpartum depression. Materials and Methods: This clinical trial was performed on 80 women. Sampling was done in Mashhad health centers from December 2009 to June 2010. Women were randomly assigned to problem-solving skills (n = 26), relaxation (n = 26), and control (n = 28) groups. Interventions were implemented for 6 weeks and the subjects again completed the Eysenck self-esteem scale 9 weeks after delivery. Data analysis was done with descriptive statistics, the Kruskal–Wallis test, and analysis of variance (ANOVA) in SPSS software. Results: The findings showed that after the intervention the mean self-esteem score was 117.9 ± 9.7 in the problem-solving group, 117.0 ± 11.8 in the relaxation group, and 113.5 ± 10.4 in the control group; there were significant differences between the relaxation and problem-solving groups, and also between the intervention groups and the control group. Conclusions: According to the results, problem-solving skills and relaxation can be used to prevent and recover from postpartum depression. PMID:25709699
NASA Astrophysics Data System (ADS)
Huda, Nizlel; Sutawidjaja, Akbar; Subanji; Rahardjo, Swasono
2018-04-01
Metacognitive activity is very important in mathematical problem solving. Metacognitive activity consists of metacognitive awareness, metacognitive evaluation, and metacognitive regulation. This study aimed to reveal the errors of metacognitive evaluation in students' metacognitive failure in solving mathematical problems. The 20 students taken as research subjects were grouped into three groups: the first group was students who experienced one metacognitive failure, the second group was students who experienced two metacognitive failures, and the third group was students who experienced three metacognitive failures. One person was taken from each group as a research subject. The research data were collected from worksheets completed using a think-aloud protocol, followed by interviews with the research subjects based on the results of their work. The finding of this study was that students who experienced metacognitive failure in solving mathematical problems tend to err in metacognitive evaluation when considering the effectiveness and limitations of their thinking and the effectiveness of their chosen solution strategy.
Geary, D C; Frensch, P A; Wiley, J G
1993-06-01
Thirty-six younger adults (10 male, 26 female; ages 18 to 38 years) and 36 older adults (14 male, 22 female; ages 61 to 80 years) completed simple and complex paper-and-pencil subtraction tests and solved a series of simple and complex computer-presented subtraction problems. For the computer task, strategies and solution times were recorded on a trial-by-trial basis. Older Ss used a developmentally more mature mix of problem-solving strategies to solve both simple and complex subtraction problems. Analyses of component scores derived from the solution times suggest that the older Ss are slower at number encoding and number production but faster at executing the borrow procedure. In contrast, groups did not appear to differ in the speed of subtraction fact retrieval. Results from a computational simulation are consistent with the interpretation that older adults' advantage for strategy choices and for the speed of executing the borrow procedure might result from more practice solving subtraction problems.
Multiple representations and free-body diagrams: Do students benefit from using them?
NASA Astrophysics Data System (ADS)
Rosengrant, David R.
2007-12-01
Introductory physics students have difficulties understanding concepts and solving problems. When they solve problems, they use surface features of the problems to find an equation to calculate a numerical answer, often not understanding the physics in the problem. How do we help students approach problem solving in an expert manner? A possible answer is to help them learn to represent knowledge in multiple ways and then use these different representations for conceptual understanding and problem solving. This solution follows from research in cognitive science and in physics education. However, there are no studies in physics that investigate whether students who learn to use multiple representations are in fact better problem solvers. This study focuses on one specific representation used in physics--a free-body diagram. A free-body diagram is a graphical representation of forces exerted on an object of interest by other objects. I used the free-body diagram to investigate five main questions: (1) If students are in a course where they consistently use free-body diagrams to construct and test concepts in mechanics, electricity and magnetism and to solve problems in class and in homework, will they draw free-body diagrams on their own when solving exam problems? (2) Are students who use free-body diagrams to solve problems more successful than those who do not? (3) Why do students draw free-body diagrams when solving problems? (4) Are students consistent in constructing diagrams for different concepts in physics and are they consistent in the quality of their diagrams? (5) What are possible relationships between features of a problem and how likely a student will draw a free-body diagram to help them solve the problem? I utilized a mixed-methods approach to answer these questions. Questions 1, 2, 4 and 5 required a quantitative approach while question 3 required a qualitative approach, a case study. 
When I completed my study, I found that if students are in an environment which fosters the use of representations for problem solving and for concept development, then the majority of students will consistently construct helpful free-body diagrams and use them on their own to solve problems. Additionally, those that construct correct free-body diagrams are significantly more likely to successfully solve the problem. Finally, those students that are high achieving tend to use diagrams more and for more reasons than students who have low course grades. These findings will have major impacts on how introductory physics instructors run their classes and how curricula are designed. These results favor a problem solving strategy that is rich with representations.
Nguyen, Ngan; Mulla, Ali; Nelson, Andrew J; Wilson, Timothy D
2014-01-01
The present study explored the problem-solving strategies of high- and low-spatial visualization ability learners on a novel spatial anatomy task to determine whether differences in strategies contribute to differences in task performance. The results of this study provide further insights into the processing commonalities and differences among learners beyond the classification of spatial visualization ability alone, and help elucidate what, if anything, high- and low-spatial visualization ability learners do differently while solving spatial anatomy task problems. Forty-two students completed a standardized measure of spatial visualization ability, a novel spatial anatomy task, and a questionnaire involving personal self-analysis of the processes and strategies used while performing the spatial anatomy task. Strategy reports revealed that there were different ways students approached answering the spatial anatomy task problems. However, chi-square test analyses established that differences in problem-solving strategies did not contribute to differences in task performance. Therefore, underlying spatial visualization ability is the main source of variation in spatial anatomy task performance, irrespective of strategy. In addition to scoring higher and spending less time on the anatomy task, participants with high spatial visualization ability were also more accurate when solving the task problems. © 2013 American Association of Anatomists.
Qualitative Understanding of Magnetism at Three Levels of Expertise
NASA Astrophysics Data System (ADS)
Stefani, Francesco; Marshall, Jill
2010-03-01
This work set out to investigate the state of qualitative understanding of magnetism at various stages of expertise, and what approaches to problem-solving are used across the spectrum of expertise. We studied three groups: 10 novices, 10 experts-in-training, and 11 experts. Data collection involved structured interviews during which participants solved a series of non-standard problems designed to test for conceptual understanding of magnetism. The interviews were analyzed using a grounded theory approach. None of the novices and only a few of the experts-in-training showed a strong understanding of inductance, magnetic energy, and magnetic pressure; and for the most part they tended not to approach problems visually. Novices frequently described gist memories of demonstrations, textbook problems, and rules (heuristics). However, these fragmentary mental models were not complete enough to allow them to reason productively. Experts-in-training were able to solve problems that the novices were not able to solve, many times simply because they had greater recall of the material, and therefore more confidence in their facts. Much of their thinking was concrete, based on mentally manipulating objects. The experts solved most of the problems in ways that were both effective and efficient. Part of the efficiency derived from their ability to visualize and thus reason in terms of field lines.
Chalmers, Charlotte; Leathem, Janet; Bennett, Simon; McNaughton, Harry; Mahawish, Karim
2017-11-26
To investigate the efficacy of problem solving therapy for reducing the emotional distress experienced by younger stroke survivors. A non-randomized waitlist controlled design was used to compare outcome measures for the treatment group and a waitlist control group at baseline and post-waitlist/post-therapy. After the waitlist group received problem solving therapy, an analysis was completed on the pooled outcome measures at baseline, post-treatment, and three-month follow-up. Changes on outcome measures between baseline and post-treatment were not significantly different between the treatment group (n = 13) and the waitlist control group (n = 16) (between-subject design). The pooled data (n = 28) indicated that receiving problem solving therapy significantly reduced participants' levels of depression and anxiety and increased quality of life from baseline to follow-up (within-subject design); however, methodological limitations, such as the lack of a control group, reduce the validity of this finding. The between-subject results suggest that there was no significant difference between those who received problem solving therapy and a waitlist control group between baseline and post-waitlist/post-therapy. The within-subject design suggests that problem solving therapy may be beneficial for younger stroke survivors when they are given some time to learn and implement the skills into their day-to-day life. However, additional research with a control group is required to investigate this further. This study provides limited evidence for the provision of support groups for younger stroke survivors post stroke; however, it remains unclear what type of support this should be. Implications for Rehabilitation: Problem solving therapy is no more effective for reducing post-stroke distress than a waitlist control group. Problem solving therapy may be perceived as helpful and enjoyable by younger stroke survivors.
Younger stroke survivors may use the skills learnt from problem solving therapy to solve problems in their day-to-day lives. Younger stroke survivors may benefit from age-appropriate psychological support; however, future research is needed to determine what type of support this should be.
Vieira, J; Cunha, M C
2011-01-01
This article describes a method of solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (named complicating constraints) that makes the solution of the model rather complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by directly solving the complete model in a single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This potential gain in efficiency of the two-step solution approach can be extremely important for work in progress, and it can be particularly useful for cases where the computation time would be a critical factor for obtaining an optimized solution in due time.
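The two-step idea can be mimicked on a one-variable toy problem: first solve the model without the complicating (here, nonconvex) term, then restore it and warm-start from the step-one solution. The objective, constants, and solver below are invented for illustration and are not the article's water-resources model:

```python
def grad_descent(grad, x0, lr, steps=500):
    """Plain gradient descent; stands in for a real NLP solver."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Step 1: drop the complicating term and solve the easy relaxed model
# min (x - 3)^2, giving a good starting point.
x_relaxed = grad_descent(lambda x: 2 * (x - 3), x0=0.0, lr=0.1)

# Step 2: restore the complicating term 5*(x^2 - 4)^2 and solve the
# complete model min (x - 3)^2 + 5*(x^2 - 4)^2, warm-started at the
# step-1 solution (derivative of the added term is 20*x*(x^2 - 4)).
def grad_full(x):
    return 2 * (x - 3) + 20 * x * (x * x - 4)

x_full = grad_descent(grad_full, x0=x_relaxed, lr=0.005)
# x_full settles near x = 2, where the complicating term vanishes.
```

Starting step 2 from the relaxed solution rather than from scratch is the mechanism the article credits for the reduced computation time.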
Sahler, Olle Jane Z.; Sherman, Sandra A.; Fairclough, Diane L.; Butler, Robert W.; Katz, Ernest R.; Dolgin, Michael J.; Varni, James W.; Noll, Robert B.; Phipps, Sean
2009-01-01
Objectives To evaluate the feasibility and efficacy of a handheld personal digital assistant (PDA)-based supplement for maternal Problem-Solving Skills Training (PSST) and to explore Spanish-speaking mothers’ experiences with it. Methods Mothers (n = 197) of children with newly diagnosed cancer were randomized to traditional PSST or PSST + PDA 8-week programs. Participants completed the Social Problem-Solving Inventory-Revised, Beck Depression Inventory-II, Profile of Mood States, and Impact of Event Scale-Revised pre-, post-treatment, and 3 months after completion of the intervention. Mothers also rated optimism, logic, and confidence in the intervention and technology. Results Both groups demonstrated significant positive change over time on all psychosocial measures. No between-group differences emerged. Despite technological “glitches,” mothers expressed moderately high optimism, appreciation for logic, and confidence in both interventions and rated the PDA-based program favorably. Technology appealed to all Spanish-speaking mothers, with younger mothers showing greater proficiency. Conclusions Well-designed, supported technology holds promise for enhancing psychological interventions. PMID:19091804
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.
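As a simpler, self-contained illustration of how removal times enter the makespan on identical parallel machines, here is a longest-processing-time (LPT) list rule; this is an illustrative stand-in, not the paper's two-phase procedure based on release dates and delivery times:

```python
import heapq

def lpt_makespan(jobs, m):
    """jobs: list of (processing, removal). Each job occupies its machine
    for processing + removal time. Assign jobs longest-total-first to the
    currently least-loaded of the m machines; return the makespan."""
    loads = [0] * m
    heapq.heapify(loads)  # min-heap of machine completion times
    for p, r in sorted(jobs, key=lambda j: j[0] + j[1], reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p + r)
    return max(loads)
```

On two machines with jobs (3,1), (2,2), (2,0), (1,1), the rule balances the totals 4, 4, 2, 2 into loads of 6 and 6, so the makespan is 6.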
Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures. PMID:25610911
Optimizing Restriction Site Placement for Synthetic Genomes
NASA Astrophysics Data System (ADS)
Montes, Pablo; Memelli, Heraldo; Ward, Charles; Kim, Joondong; Mitchell, Joseph S. B.; Skiena, Steven
Restriction enzymes are the workhorses of molecular biology. We introduce a new problem that arises in the course of our project to design virus variants to serve as potential vaccines: we wish to modify virus-length genomes to introduce large numbers of unique restriction enzyme recognition sites while preserving wild-type function by substitution of synonymous codons. We show that the resulting problem is NP-Complete, give an exponential-time algorithm, and propose effective heuristics, which we show give excellent results for five sample viral genomes. Our resulting modified genomes have several times more unique restriction sites and reduce the maximum gap between adjacent sites by three to nine-fold.
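A tiny sketch of the core move, substituting a synonymous codon so that a recognition site appears without changing the encoded protein. The EcoRI site and the listed codon synonym pairs are standard genetics facts; the input sequence and function names are made up for illustration:

```python
ECORI = "GAATTC"  # EcoRI recognition site

SYNONYMS = {        # a small subset of synonymous codon alternatives
    "GAG": ["GAA"],  # Glu
    "TTT": ["TTC"],  # Phe
}

def find_sites(seq, site=ECORI):
    """Positions where `site` occurs in `seq`."""
    return [i for i in range(len(seq) - len(site) + 1)
            if seq[i:i + len(site)] == site]

def try_synonymous_site(seq, site=ECORI):
    """Try replacing one codon (reading frame 0) by a synonym so that an
    extra `site` appears; return the modified sequence, or None."""
    for i in range(0, len(seq) - 2, 3):
        for alt in SYNONYMS.get(seq[i:i + 3], []):
            cand = seq[:i] + alt + seq[i + 3:]
            if len(find_sites(cand, site)) > len(find_sites(seq, site)):
                return cand
    return None

modified = try_synonymous_site("GAGTTCAAA")  # Glu-Phe-Lys, no site yet
```

Swapping GAG for its synonym GAA yields GAATTCAAA, which now starts with an EcoRI site while encoding the same Glu-Phe-Lys peptide; the paper's hard optimization problem is choosing many such swaps genome-wide so the sites are unique and evenly spaced.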
Super flame-retardant lightweight rime-like carbon-phenolic nanofoam
Cheng, Haiming; Hong, Changqing; Zhang, Xinghong; Xue, Huafei; Meng, Songhe; Han, Jiecai
2016-01-01
The desire for lightweight nanoporous materials with high-performance thermal insulation and efficient anti-ablation resistance for energy conservation and thermal protection/insulation has greatly motivated research and development recently. The main challenge in synthesizing such lightweight materials is how to balance low thermal conductivity against flame retardancy. Herein, we propose a new concept of lightweight “rime-like” structured carbon-phenolic nanocomposites to solve this problem, where the 3D chopped network-structured carbon fiber (NCF) monoliths are incorporated with nanoporous phenolic aerogel to retain structural and functional integrity. The nanometer-scaled porous phenolic (NP) was synthesized through polymerization-induced phase separation and ambient pressure drying using phenolic resin (PR) solution as the reaction source, ethylene glycol (EG) as the solvent, and hexamethylenetetramine (HMTA) as the catalyst. We demonstrate that the as-prepared NCF-NP nanocomposite exhibits a low density of 0.25–0.35 g/cm³, a low thermal conductivity of 0.125 W m⁻¹ K⁻¹, and outstanding flame retardancy exceeding 2000 °C in an arc-jet wind tunnel simulation environment. Our results show that the synthesis strategy is a promising approach for producing nanocomposites with an excellent high-temperature heat-blocking property. PMID:27629114
On the Prevention of Juvenile Crime
ERIC Educational Resources Information Center
Lelekov, V. A.; Kosheleva, E. V.
2008-01-01
Crimes committed by juveniles are among the most urgent social problems. Juvenile crime is as prevalent as crime itself is, and it has not been solved completely in any society and cannot be solved through law enforcement measures alone. In this article, the authors discuss the dynamics and structure of juvenile crime in Russia and present data…
The Role of Content Knowledge in Ill-Structured Problem Solving for High School Physics Students
NASA Astrophysics Data System (ADS)
Milbourne, Jeff; Wiebe, Eric
2018-02-01
While Physics Education Research has a rich tradition of problem-solving scholarship, most of the work has focused on more traditional, well-defined problems. Less work has been done with ill-structured problems, problems that are better aligned with the engineering and design-based scenarios promoted by the Next Generation Science Standards. This study explored the relationship between physics content knowledge and ill-structured problem solving for two groups of high school students with different levels of content knowledge. Both groups of students completed an ill-structured problem set, using a talk-aloud procedure to narrate their thought process as they worked. Analysis of the data focused on identifying students' solution pathways, as well as the obstacles that prevented them from reaching "reasonable" solutions. Students with more content knowledge were more successful reaching reasonable solutions for each of the problems, experiencing fewer obstacles. These students also employed a greater variety of solution pathways than those with less content knowledge. Results suggest that a student's solution pathway choice may depend on how she perceives the problem.
Tyagi, Shachi; Perera, Subashan; Clarkson, Becky D.; Tadic, Stasa D; Resnick, Neil M
2016-01-01
Purpose Nocturia is common and bothersome in older adults especially those who are also incontinent. Since nocturnal polyuria (NP) is a major contributor, we examined factors associated with NP in this population to identify those possibly amenable to intervention. Method We analyzed baseline data from two previously-completed studies of urge urinary incontinence (UUI). The studies involved 284 women (mean 72.9 ±7.9 years) who also completed 3-day voiding diaries. Participants with nocturnal polyuria index (NPi) of > 33% were categorized as having NP (NPi= nocturnal urinary volume/24-hour urine volume). Associations between NP and various demographic, clinical, and sleep-related parameters were determined. Results Fifty-five percent of the participants had NP. Multivariable regression analysis revealed that age, body mass index (BMI), use of angiotensin-converting-enzyme inhibitor (ACE-I)/angiotensin receptor blocker (ARB), time spent in bed, and duration of first uninterrupted sleep (DUS) were independent correlates of NP. Participants with larger nocturnal excretion reported shorter DUS and worse sleep quality despite spending similar time in bed. Conclusion BMI, use of ACE-I/ARB, time in bed and DUS are independently associated with NP in older women with UUI, and are potentially modifiable. These findings also confirm the association between sleep and NP. Further studies should explore whether interventions to reduce NP and/or increase DUS help to improve sleep quality in this population and thereby reduce or eliminate the need for sedative hypnotics. PMID:27678299
Materials Data on NpP (SG:225) by Materials Project
Kristin Persson
2015-02-09
Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrodinger equation. For more information, see https://materialsproject.org/docs/calculations
Extended Islands of Tractability for Parsimony Haplotyping
NASA Astrophysics Data System (ADS)
Fleischer, Rudolf; Guo, Jiong; Niedermeier, Rolf; Uhlmann, Johannes; Wang, Yihui; Weller, Mathias; Wu, Xi
Parsimony haplotyping is the problem of finding a smallest size set of haplotypes that can explain a given set of genotypes. The problem is NP-hard, and many heuristic and approximation algorithms as well as polynomial-time solvable special cases have been discovered. We propose improved fixed-parameter tractability results with respect to the parameter "size of the target haplotype set" k by presenting an O*(k^(4k))-time algorithm. This also applies to the practically important constrained case, where we can only use haplotypes from a given set. Furthermore, we show that the problem becomes polynomial-time solvable if the given set of genotypes is complete, i.e., contains all possible genotypes that can be explained by the set of haplotypes.
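The underlying feasibility check is standard: a pair of binary haplotypes explains a genotype when they agree with it at homozygous sites and differ at heterozygous sites. A brute-force sketch for tiny instances (the 0/1/2 site encoding is an assumption; the paper's O*(k^(4k)) algorithm is far more refined than this enumeration):

```python
from itertools import combinations, product

def explains(h1, h2, g):
    """Check whether the haplotype pair (h1, h2) explains genotype g.
    Sites: 0 or 1 = homozygous (both haplotypes equal that value),
    2 = heterozygous (the haplotypes differ at that site)."""
    for a, b, s in zip(h1, h2, g):
        if s == 2:
            if a == b:
                return False
        elif not (a == b == s):
            return False
    return True

def min_haplotype_set(genotypes):
    """Exhaustive parsimony haplotyping for toy instances: the smallest set H
    such that every genotype is explained by some pair drawn from H."""
    n = len(genotypes[0])
    all_h = list(product([0, 1], repeat=n))
    for k in range(1, len(all_h) + 1):
        for H in combinations(all_h, k):
            if all(any(explains(h1, h2, g) for h1 in H for h2 in H)
                   for g in genotypes):
                return H
    return None

# Two 2-site genotypes, each heterozygous at a different site.
H = min_haplotype_set([(2, 0), (0, 2)])
print(len(H))  # → 3
```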
NASA Astrophysics Data System (ADS)
Gomberoff, Andrés; Muñoz, Víctor; Romagnoli, Pierre Paul
2014-02-01
Divorced individuals face complex situations when they have children with different ex-partners, or even more so when their new partners have children of their own. In such cases, and when kids spend every other weekend with each parent, a practical problem emerges: is it possible to arrange custody so that every couple has either all of the kids together or no kids at all? We show that in general it is not possible, but that the number of couples for which it holds can be maximized. The problem turns out to be equivalent to finding the ground state of a spin glass system, which in turn is equivalent to the weighted max-cut problem in graph theory, and hence is NP-complete.
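The reduction target can be made concrete with an exhaustive weighted max-cut solver for a tiny graph (this illustrates the problem the custody question maps onto, not the spin-glass mapping itself; the graph below is a made-up example):

```python
from itertools import product

def max_cut(weights):
    """Exhaustive weighted max-cut for a tiny graph.
    weights: dict mapping an edge (u, v) to its weight.
    Returns (best_cut_value, assignment), where assignment maps vertex -> 0/1 side."""
    vertices = sorted({v for e in weights for v in e})
    best_val, best_assign = -1, None
    for bits in product([0, 1], repeat=len(vertices)):
        side = dict(zip(vertices, bits))
        # An edge contributes its weight only if its endpoints lie on different sides.
        val = sum(w for (u, v), w in weights.items() if side[u] != side[v])
        if val > best_val:
            best_val, best_assign = val, side
    return best_val, best_assign

# Unit-weight triangle: at most 2 of its 3 edges can cross any cut.
val, _ = max_cut({("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 1})
print(val)  # → 2
```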
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods.
Adversarial Geospatial Abduction Problems
2011-01-01
…which is new, shows that #GCD is #P-complete and, moreover, that there is no fully polynomial random approximation scheme for #GCD unless NP equals the… We use L∗ to form a new set of constraints to find a δ-core optimal explanation. We now present these δ-core constraints.
Communication-Efficient Arbitration Models for Low-Resolution Data Flow Computing
1988-12-01
The partitioning phase can be formally described as the Graph Partitioning Problem, which is NP-complete (Garey & Johnson): given a graph G = (V, E) with weights w(v) for each v ∈ V…
Miranda, Caitlin; Rentería, Miguel Arce; Fuentes, Armando; Coulehan, Kelly; Arentoft, Alyssa; Byrd, Desiree; Rosario, Ana; Monzones, Jennifer; Morgello, Susan; Mindt, Monica Rivera
2016-01-01
Objective Given the disproportionate impact of neurologic disorders such as HIV on racial/ethnic minorities, neuropsychologists are increasingly evaluating individuals of diverse linguistic backgrounds. This study compares the utility of two brief and one comprehensive language measure to account for variation in English neuropsychological performance within a bilingual population. Method Sixty-two HIV+ English/Spanish bilingual Latino adults completed three language measures in English and Spanish: Self-Reported Language Ability; Verbal Fluency (FAS/PMR); and the Woodcock Munoz Language Survey-Revised (WMLS-R). All participants also completed an English language neuropsychological (NP) battery. Results It was hypothesized that the comprehensive English/Spanish WMLS-R language dominance index (LDI) would be significantly correlated with NP performance, as well as the best predictor of NP performance over and above the two brief language measures. Contrary to our hypothesis, the WMLS-R LDI was not significantly correlated to NP performance, whereas the easily administered Verbal Fluency and Self-Report LDIs were each correlated with global NP performance and multiple NP domains. After accounting for Verbal Fluency and Self-Report LDI in a multivariate regression predicting NP performance, the WMLS-R LDI did not provide a unique contribution to the model. Conclusions These findings suggest that the more comprehensive WMLS-R does not improve understanding of the effects of language on NP performance in an HIV+ bilingual Latino population. PMID:26934820
Solving the nanostructure problem: exemplified on metallic alloy nanoparticles
NASA Astrophysics Data System (ADS)
Petkov, Valeri; Prasai, Binay; Ren, Yang; Shan, Shiyao; Luo, Jin; Joseph, Pharrah; Zhong, Chuan-Jian
2014-08-01
With current technology moving rapidly toward smaller scales, nanometer-size materials, hereafter called nanometer-size particles (NPs), are being produced in increasing numbers and explored for various useful applications ranging from photonics and catalysis to detoxification of wastewater and cancer therapy. Nature also is a prolific producer of useful NPs. Evidence can be found in ores on the ocean floor, minerals and soils on land, and in the human body, which, when water is excluded, is mostly made of proteins that are 6-10 nm in size and globular in shape. Precise knowledge of the 3D atomic-scale structure, that is, how atoms are arranged in space, is a crucial prerequisite for understanding and so gaining more control over the properties of any material, including NPs. In the case of bulk materials such knowledge is fairly easy to obtain by Bragg diffraction experiments. Determining the 3D atomic-scale structure of NPs is, however, still problematic, spelling trouble for science and technology at the nanoscale. Here we explore this so-called "nanostructure problem" from a practical point of view, arguing that it can be solved when both its technical aspect (the inapplicability of Bragg diffraction to NPs) and its fundamental aspect (the incompatibility of traditional crystallography with NPs) are addressed properly. As evidence we present a successful and broadly applicable 6-step approach to determining the 3D atomic-scale structure of NPs based on a suitable combination of a few experimental and computational techniques. This approach is exemplified on 5 nm PdxNi100-x particles (x = 26, 56 and 88) explored for catalytic applications.
Furthermore, we show how, once an NP atomic structure is determined precisely, a strategy for improving NP structure-dependent properties of particular interest to science and technology can be designed rationally, and not subjectively as is frequently done now. Electronic supplementary information (ESI) available: XRD patterns, TEM and 3D structure modeling results. See DOI: 10.1039/c4nr01633e
Improving the learning of clinical reasoning through computer-based cognitive representation.
Wu, Bian; Wang, Minhong; Johnson, Janice M; Grotzer, Tina A
2014-01-01
Objective Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Methods Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. Results A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' report of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores with the 4-week period. The cognitive representation approach was found to provide more formative assessment. Conclusions The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.
NASA Astrophysics Data System (ADS)
Ismail; Suwarsono, St.; Lukito, A.
2018-01-01
Critical thinking is one of the most important skills of the 21st century, alongside other learning skills such as creative thinking, communication, and collaboration. This motivated the researchers to study critical thinking skills in junior high school students. The purpose of this study is to describe the critical thinking skills of junior high school female students with high mathematical ability in solving contextual and formal mathematical problems. A qualitative approach was used. The subject of the study was a female eighth-grade junior high school student. The student's critical thinking skills were elicited through in-depth problem-based interviews using interview guidelines: the subject was given a written task and time to complete it. The results show that the critical thinking skills of female junior high school students with high mathematical ability are as follows. At the stage of understanding the problem, interpretation skills are used, with sub-indicators: categorization, decoding, and clarifying meaning. At the stage of planning the problem-solving strategy, analytical skills are used, with sub-indicators: idea checking, argument identification, and argument analysis, together with evaluation skills, with sub-indicator: assessing the argument. In the implementation phase of problem solving, inference skills are used, with sub-indicators: drawing conclusions and problem solving, along with explanatory skills, with sub-indicators: problem presentation, justification of procedures, and argument articulation. At the re-checking stage, self-regulation skills are employed, with sub-indicators: self-correction and self-study.
An Examination of High School Students' Online Engagement in Mathematics Problems
ERIC Educational Resources Information Center
Lim, Woong; Son, Ji-Won; Gregson, Susan; Kim, Jihye
2018-01-01
This article examines high school students' engagement in a set of trigonometry problems. Students completed this task independently in an online environment with access to Internet search engines, online textbooks, and YouTube videos. The findings imply that students have the resourcefulness to solve procedure-based mathematics problems in an…
Executive Functions Underlying Multiplicative Reasoning: Problem Type Matters
ERIC Educational Resources Information Center
Agostino, Alba; Johnson, Janice; Pascual-Leone, Juan
2010-01-01
We investigated the extent to which inhibition, updating, shifting, and mental-attentional capacity ("M"-capacity) contribute to children's ability to solve multiplication word problems. A total of 155 children in Grades 3-6 (8- to 13-year-olds) completed a set of multiplication word problems at two levels of difficulty: one-step and multiple-step…
Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J
2009-06-01
As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
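The objective being optimized, a linear node-compatibility term plus a quadratic edge-compatibility term, can be sketched by brute force on tiny graphs. The scores below are hand-picked placeholders, whereas the paper's point is to learn such compatibility functions from labeled example matches:

```python
from itertools import permutations

def best_matching(node_score, edge_bonus, edges1, edges2, n):
    """Brute-force graph matching on tiny graphs: maximize a linear
    node-compatibility term plus a quadratic edge-compatibility term over
    all bijections (the quadratic assignment objective, solved exhaustively)."""
    und2 = set(edges2) | {(b, a) for a, b in edges2}   # treat graph 2 as undirected
    best_score, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(node_score[i][perm[i]] for i in range(n))
        # Reward mapping an edge of graph 1 onto an edge of graph 2.
        score += sum(edge_bonus for u, v in edges1 if (perm[u], perm[v]) in und2)
        if score > best_score:
            best_score, best_perm = score, perm
    return best_perm

# Placeholder compatibility scores that favor the identity correspondence.
node_score = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(best_matching(node_score, 1, [(0, 1)], [(0, 1)], 3))  # → (0, 1, 2)
```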
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equally sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. It is NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be further extended to incorporate local search operators, which should include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another avenue for future research.
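A minimal sequential GA for 2-way balanced min-cut shows the encoding and operators; the island model and fuzzy-controlled migration that the paper contributes are omitted here, and all parameter values are arbitrary:

```python
import random

def ga_min_cut(edges, n, pop_size=40, generations=200, seed=0):
    """Minimal sequential GA for 2-way balanced min-cut (illustration only:
    the paper's island model and fuzzy migration control are omitted).
    A chromosome is a bitstring assigning each of n vertices to one of two blocks."""
    rng = random.Random(seed)

    def cost(ch):
        cut = sum(1 for u, v in edges if ch[u] != ch[v])
        imbalance = abs(sum(ch) - n / 2)          # penalize unequal block sizes
        return cut + 2 * imbalance

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        nxt = [best[:]]                           # elitism: keep the best so far
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 2), key=cost)    # tournament selection
            p2 = min(rng.sample(pop, 2), key=cost)
            cx = rng.randrange(1, n)                  # one-point crossover
            child = p1[:cx] + p2[cx:]
            if rng.random() < 0.2:                    # point mutation
                i = rng.randrange(n)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
        best = min(pop, key=cost)
    return best

# Two triangles joined by a single edge: the best balanced cut severs that edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
best = ga_min_cut(edges, 6)
print(sum(1 for u, v in edges if best[u] != best[v]))
```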
A hybrid heuristic for the multiple choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd
2013-08-01
In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
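The problem structure (exactly one item per class, several capacity dimensions) can be illustrated with an exhaustive solver on a toy instance; the paper's hybrid heuristic targets instances far beyond this scale, and the numbers below are invented:

```python
from itertools import product

def mckp_exact(classes, capacities):
    """Exhaustive solver for tiny multiple-choice multidimensional knapsack
    instances: pick exactly one item per class, respect every capacity
    dimension, maximize total profit. Each item is (profit, [weight per dim])."""
    best_profit, best_pick = None, None
    for pick in product(*classes):                 # one item from each class
        weights = [sum(it[1][d] for it in pick) for d in range(len(capacities))]
        if all(w <= c for w, c in zip(weights, capacities)):
            profit = sum(it[0] for it in pick)
            if best_profit is None or profit > best_profit:
                best_profit, best_pick = profit, pick
    return best_profit, best_pick

# Two classes, two resource dimensions, capacities (5, 5).
classes = [
    [(6, [4, 1]), (4, [2, 2])],   # class 1: choose exactly one item
    [(5, [3, 3]), (3, [1, 1])],   # class 2: choose exactly one item
]
print(mckp_exact(classes, (5, 5))[0])  # → 9
```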
Lee, Myung Kyung; Park, Bu Kyung
2018-01-01
Objectives This study examined the effect of flipped learning in comparison to traditional learning in a surgical nursing practicum. Methods The subjects of this study were 102 nursing students in their third year of university who were scheduled to complete a clinical nursing practicum in an operating room or surgical unit. Participants were randomly assigned to either a flipped learning group (n = 51) or a traditional learning group (n = 51) for the 1-week, 45-hour clinical nursing practicum. The flipped-learning group completed independent e-learning lessons on surgical nursing and received a brief orientation prior to the commencement of the practicum, while the traditional-learning group received a face-to-face orientation and on-site instruction. After the completion of the practicum, both groups completed a case study and a conference. The student's self-efficacy, self-leadership, and problem-solving skills in clinical practice were measured both before and after the one-week surgical nursing practicum. Results Participants' independent goal setting and evaluation of beliefs and assumptions for the subscales of self-leadership and problem-solving skills were compared for the flipped learning group and the traditional learning group. The results showed greater improvement on these indicators for the flipped learning group in comparison to the traditional learning group. Conclusions The flipped learning method might offer more effective e-learning opportunities in terms of self-leadership and problem-solving than the traditional learning method in surgical nursing practicums. PMID:29503755
Ilk, Sedef; Saglam, Necdet; Özgen, Mustafa
2017-08-01
Flavonoid compounds are strong antioxidant and antifungal agents, but their applications are limited by their poor dissolution and bioavailability. The use of nanotechnology in agriculture has received increasing attention, with the development of new formulations containing active compounds. In this study, kaempferol (KAE) was loaded into lecithin/chitosan nanoparticles (LC NPs), and its antifungal activity was compared to that of pure KAE against the phytopathogenic fungus Fusarium oxysporium, to resolve the bioavailability problem. The influence of formulation parameters on the physicochemical properties of KAE loaded lecithin/chitosan nanoparticles (KAE-LC NPs) was studied using the electrostatic self-assembly technique. KAE-LC NPs were characterized in terms of physicochemical properties. KAE was successfully encapsulated in LC NPs with an efficiency of 93.8 ± 4.28%, and KAE-LC NPs showed good physicochemical stability. Moreover, the KAE-LC NP system was evaluated in vitro for release kinetics and for time-dependent antioxidant and antifungal activity relative to free KAE. Encapsulated KAE exhibited significant inhibition efficacy (67%) against Fusarium oxysporium at the end of the 60-day storage period. The results indicated that the KAE-LC NP formulation could solve the problems related to the solubility and loss of KAE during use and storage. The new nanoparticle system enables the use of smaller quantities of fungicide and, therefore, offers a more environmentally friendly method of controlling fungal pathogens in agriculture.
Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval, and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption on unlabeled data, i.e., that neighboring instances should share a similar set of labels, and may thus underexploit the intrinsic structure of the data. In addition, solving the matrix completion problem can be inefficient. To this end, we propose to solve the multi-label learning problem efficiently as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure label smoothness. To speed up the convergence of our model, we develop an efficient iterative algorithm that solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach.
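The low-rank recovery step can be illustrated with a plain nuclear-norm completion heuristic (iterative singular-value soft-thresholding); the paper's model additionally includes the graph-Laplacian smoothness term and the ADMM solver, both omitted in this sketch, and the data below are synthetic:

```python
import numpy as np

def soft_impute(M, mask, tau=0.1, iters=200):
    """Low-rank matrix completion by iterative singular-value soft-thresholding,
    a plain nuclear-norm heuristic (no manifold regularization, no ADMM).
    M: observed matrix; mask: True where entries are observed."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X[mask] = M[mask]                                # keep observed entries fixed
    return X

# Rank-1 ground truth with ~30% of the entries hidden.
rng = np.random.default_rng(0)
a, b = rng.normal(size=(5, 1)), rng.normal(size=(1, 5))
M = a @ b
mask = rng.random(M.shape) > 0.3
X = soft_impute(M, mask)
print(np.allclose(X[mask], M[mask]))  # → True (observed entries are preserved)
```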
Korte, Jojanneke; Bohlmeijer, Ernst T; Westerhof, Gerben J; Pot, Anne Margriet
2011-07-01
The role of reminiscence as a way of adapting to critical life events and chronic medical conditions was investigated in older adults with mild to moderate depressive symptoms. Reminiscence is the (non)volitional act or process of recollecting memories of one's self in the past. 171 Dutch older adults with a mean age of 64 years (SD = 7.4) participated in this study. All of them had mild to moderate depressive symptoms. Participants completed measures on critical life events, chronic medical conditions, depressive symptoms, symptoms of anxiety and satisfaction with life. The reminiscence functions included were: identity, problem solving, bitterness revival and boredom reduction. Critical life events were positively correlated with identity and problem solving. Bitterness revival and boredom reduction were both positively correlated with depressive and anxiety symptoms, and negatively to satisfaction with life. Problem solving had a negative relation with anxiety symptoms. When all the reminiscence functions were included, problem solving was uniquely associated with symptoms of anxiety, and bitterness revival was uniquely associated with depressive symptoms and satisfaction with life. Interestingly, problem solving mediated the relation of critical life events with anxiety. This study corroborates the theory that reminiscence plays a role in coping with critical life events, and thereby maintaining mental health. Furthermore, it is recommended that therapists focus on techniques which reduce bitterness revival in people with depressive symptoms, and focus on problem-solving reminiscences among people with anxiety symptoms.
Exploring the relationship between work-related rumination, sleep quality, and work-related fatigue.
Querstret, Dawn; Cropley, Mark
2012-07-01
This study examined the association among three conceptualizations of work-related rumination (affective rumination, problem-solving pondering, and detachment) with sleep quality and work-related fatigue. It was hypothesized that affective rumination and poor sleep quality would be associated with increased fatigue and that problem-solving pondering and detachment would be associated with decreased fatigue. The mediating effect of sleep quality on the relationship between work-related rumination and fatigue was also tested. An online questionnaire was completed by a heterogeneous sample of 719 adult workers in diverse occupations. The following variables were entered as predictors in a regression model: affective rumination, problem-solving pondering, detachment, and sleep quality. The dependent variables were chronic work-related fatigue (CF) and acute work-related fatigue (AF). Affective rumination was the strongest predictor of increased CF and AF. Problem-solving pondering was a significant predictor of decreased CF and AF. Poor sleep quality was predictive of increased CF and AF. Detachment was significantly negatively predictive for AF. Sleep quality partially mediated the relationship between affective rumination and fatigue and between problem-solving pondering and fatigue. Work-related affective rumination appears more detrimental to an individual's ability to recover from work than problem-solving pondering. In the context of identifying mechanisms by which demands at work are translated into ill-health, this appears to be a key finding and suggests that it is the type of work-related rumination, not rumination per se, that is important.
Inductive reasoning and implicit memory: evidence from intact and impaired memory systems.
Girelli, Luisa; Semenza, Carlo; Delazer, Margarete
2004-01-01
In this study, we modified a classic problem solving task, number series completion, in order to explore the contribution of implicit memory to inductive reasoning. Participants were required to complete number series sharing the same underlying algorithm (e.g., +2) but differing in both constituent elements (e.g., 2 4 6 8 versus 5 7 9 11) and correct answers (e.g., 10 versus 13). In Experiment 1, reliable priming effects emerged whether primes and targets were separated by four or ten fillers. Experiment 2 provided direct evidence that the observed facilitation arises at central stages of problem solving, namely the identification of the algorithm and its subsequent extrapolation. The observation of analogous priming effects in a severely amnesic patient strongly supports the hypothesis that the facilitation in number series completion was largely determined by implicit memory processes. These findings demonstrate that the influence of implicit processes extends to higher-level cognitive domains such as inductive reasoning.
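The two central stages the abstract isolates, identifying the algorithm and then extrapolating it, are easy to make concrete. A toy sketch for constant-step series such as the +2 example (the function name is ours, not the study's):

```python
def complete_series(series):
    """Identify a constant additive step, then extrapolate the next term."""
    steps = {b - a for a, b in zip(series, series[1:])}  # stage 1: the algorithm
    if len(steps) != 1:
        raise ValueError("no single additive step governs this series")
    return series[-1] + steps.pop()                      # stage 2: extrapolation

print(complete_series([2, 4, 6, 8]))    # 10
print(complete_series([5, 7, 9, 11]))   # 13
```

Both series share the same step (+2), mirroring the prime/target pairs of the experiment, while their elements and answers differ.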
Materials Data on NpTe3 (SG:63) by Materials Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kristin Persson
2017-05-05
Computed materials data using density functional theory calculations. These calculations determine the electronic structure of bulk materials by solving approximations to the Schrödinger equation. For more information, see https://materialsproject.org/docs/calculations
Kumah-Crystal, Yaa A.; Hood, Korey K.; Ho, Yu-Xian; Lybarger, Cindy K.; O'Connor, Brendan H.; Rothman, Russell L.
2015-01-01
Background: This study examines technology use for problem solving in diabetes and its relationship to hemoglobin A1C (A1C). Subjects and Methods: A sample of 112 adolescents with type 1 diabetes completed measures assessing use of technologies for diabetes problem solving, including mobile applications, social technologies, and glucose software. Hierarchical regression was performed to identify the contribution of a new nine-item Technology Use for Problem Solving in Type 1 Diabetes (TUPS) scale to A1C, considering known clinical contributors to A1C. Results: Mean age for the sample was 14.5 (SD 1.7) years, mean A1C was 8.9% (SD 1.8%), 50% were female, and diabetes duration was 5.5 (SD 3.5) years. Cronbach's α reliability for TUPS was 0.78. In regression analyses, the variables significantly associated with A1C were socioeconomic status (β=−0.26, P<0.01), the Diabetes Adolescent Problem Solving Questionnaire (β=−0.26, P=0.01), and TUPS (β=0.26, P=0.01). Aside from the Diabetes Self-Care Inventory—Revised, each block added significantly to the model R². The final model R² was 0.22 for modeling A1C (P<0.001). Conclusions: Results indicate a counterintuitive relationship between higher use of technologies for problem solving and higher A1C. Adolescents with poorer glycemic control may use technology in a reactive, as opposed to preventive, manner. Better understanding of the nature of technology use for self-management over time is needed to guide the development of technology-mediated problem-solving tools for youth with type 1 diabetes. PMID:25826706
Information Seeking When Problem Solving: Perspectives of Public Health Professionals.
Newman, Kristine; Dobbins, Maureen; Yost, Jennifer; Ciliska, Donna
2017-04-01
Given the many different types of professionals working in public health and their diverse roles, it is likely that their information needs, information-seeking behaviors, and problem-solving abilities differ. Although public health professionals often work in interdisciplinary teams, few studies have explored their information needs and behaviors within the context of teamwork. This study explored the relationship between Canadian public health professionals' perceptions of their problem-solving abilities and their information-seeking behaviors, with a specific focus on the use of evidence in practice settings. It also explored their perceptions of collaborative information seeking and the work contexts in which they sought information. Key Canadian contacts at public health organizations helped recruit study participants through their list-servs. An electronic survey was used to gather data about (a) individual information-seeking behaviors, (b) collaborative information-seeking behaviors, (c) use of evidence in practice environments, (d) perceived problem-solving abilities, and (e) demographic characteristics. Fifty-eight public health professionals were recruited, with different roles and representing most Canadian provinces and one territory. A significant relationship was found between perceived problem-solving abilities and collaborative information-seeking behavior (r = -.44, p < .00, N = 58), but not individual information seeking. The results suggested that when public health professionals take a shared, active approach to problem solving, maintain personal control, and have confidence, they are more likely to collaborate with others in seeking information to complete a work task. Administrators of public health organizations should promote collaboration by implementing effective communication and information-seeking strategies, and by providing information resources and retrieval tools.
Public health professionals' perceived problem-solving abilities can influence how they collaborate in seeking information. Educators in public health organizations should tailor training in information searching to promote collaboration through collaborative technology systems. © 2016 Sigma Theta Tau International.
Kim, Hae-Ran; Song, Yeoungsuk; Lindquist, Ruth; Kang, Hee-Young
2016-03-01
Team-based learning (TBL) has been used as a learner-centered teaching strategy in efforts to improve students' problem-solving, knowledge, and practice performance. Although TBL has been used in nursing education in Korea for a decade, few studies have examined its effects on Korean nursing students' learning outcomes. To examine the effects of TBL on problem-solving ability and learning outcomes (knowledge and clinical performance) of Korean nursing students. Randomized controlled trial. 63 third-year undergraduate nursing students attending a single university were randomly assigned to the TBL group (n=32) or a control group (n=31). The TBL and control groups attended 2 hours of class weekly for 3 weeks. Three scenarios with pulmonary disease content were employed in both groups. However, the control group received lectures and traditional case-study teaching/learning strategies instead of TBL. A questionnaire of problem-solving ability was administered at baseline, prior to students' exposure to the teaching strategies. Students' problem-solving ability, knowledge of pulmonary nursing care, and clinical performance were assessed following completion of the three-week pulmonary unit. After the three-week educational interventions, scores on problem-solving ability in the TBL group were significantly improved relative to those of the control group (t=10.89, p<.001). In addition, there were significant differences in knowledge and in clinical performance with standardized patients between the two groups (t=2.48, p=.016; t=12.22, p<.001). This study demonstrated that TBL is an effective teaching strategy to enhance problem-solving ability, knowledge, and clinical performance. More research on other specific learning outcomes of TBL for nursing students is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kumah-Crystal, Yaa A; Hood, Korey K; Ho, Yu-Xian; Lybarger, Cindy K; O'Connor, Brendan H; Rothman, Russell L; Mulvaney, Shelagh A
2015-07-01
This study examines technology use for problem solving in diabetes and its relationship to hemoglobin A1C (A1C). A sample of 112 adolescents with type 1 diabetes completed measures assessing use of technologies for diabetes problem solving, including mobile applications, social technologies, and glucose software. Hierarchical regression was performed to identify the contribution of a new nine-item Technology Use for Problem Solving in Type 1 Diabetes (TUPS) scale to A1C, considering known clinical contributors to A1C. Mean age for the sample was 14.5 (SD 1.7) years, mean A1C was 8.9% (SD 1.8%), 50% were female, and diabetes duration was 5.5 (SD 3.5) years. Cronbach's α reliability for TUPS was 0.78. In regression analyses, the variables significantly associated with A1C were socioeconomic status (β = -0.26, P < 0.01), the Diabetes Adolescent Problem Solving Questionnaire (β = -0.26, P = 0.01), and TUPS (β = 0.26, P = 0.01). Aside from the Diabetes Self-Care Inventory—Revised, each block added significantly to the model R². The final model R² was 0.22 for modeling A1C (P < 0.001). Results indicate a counterintuitive relationship between higher use of technologies for problem solving and higher A1C. Adolescents with poorer glycemic control may use technology in a reactive, as opposed to preventive, manner. Better understanding of the nature of technology use for self-management over time is needed to guide the development of technology-mediated problem-solving tools for youth with type 1 diabetes.
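The hierarchical (block-wise) regression reported above, where each block of predictors enters the model in turn and its contribution is read off the change in R², can be sketched on synthetic data. The block contents below are stand-ins, not the study's variables, and the function names are ours:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def hierarchical_r2(blocks, y):
    """R^2 after each block enters the model, plus the per-block increments."""
    r2s, entered = [], []
    for block in blocks:
        entered.append(np.reshape(block, (len(y), -1)))   # block = 1+ predictors
        r2s.append(r_squared(np.column_stack(entered), y))
    return r2s, np.diff([0.0] + r2s).tolist()
```

In the study, A1C is the response and the blocks are the known clinical contributors followed by the TUPS scale; a block "adds significantly" when its R² increment passes a significance test, which this sketch omits.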