Sample records for hard computational problems

  1. The Role of the Goal in Solving Hard Computational Problems: Do People Really Optimize?

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Stege, Ulrike; Masson, Michael E. J.

    2018-01-01

    The role that the mental, or internal, representation plays when people are solving hard computational problems has largely been overlooked to date, despite the reality that this internal representation drives problem solving. In this work we investigate how performance on versions of two hard computational problems differs based on what internal…

  2. Must "Hard Problems" Be Hard?

    ERIC Educational Resources Information Center

    Kolata, Gina

    1985-01-01

    To determine how hard it is for computers to solve problems, researchers have classified groups of problems (polynomial hierarchy) according to how much time they seem to require for their solutions. A difficult and complex proof is offered which shows that a combinatorial approach (using Boolean circuits) may resolve the problem. (JN)

  3. On Evaluating Human Problem Solving of Computationally Hard Problems

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Stege, Ulrike

    2013-01-01

    This article is concerned with how computer science, and more exactly computational complexity theory, can inform cognitive science. In particular, we suggest factors to be taken into account when investigating how people deal with computational hardness. This discussion will address the two upper levels of Marr's Level Theory: the computational…

  4. Solving Hard Computational Problems Efficiently: Asymptotic Parametric Complexity 3-Coloring Algorithm

    PubMed Central

    Martín H., José Antonio

    2013-01-01

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identification of siblings, or discovery of dysregulated pathways. In almost all of these problems there is a need to prove a hypothesis about a certain property of an object, which can be present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure). However, none of the standard approaches can reject the hypothesis when no solution is found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to “efficiently” solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), although parametric. The only requirement is sufficient computational power, which is controlled by the algorithm's parameter. Nevertheless, it is proved here that the probability of requiring a large value of the parameter to obtain a solution for a random graph decreases exponentially with that value, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs, and 4-regular planar graphs. The experimental results obtained are in accordance with the theoretical expectations. PMID:23349711
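
    The record's parametric algorithm is not reproduced in the abstract. For orientation, the sketch below is a conventional exhaustive backtracking search for graph 3-coloring; like the method described, it returns a certificate in both directions (a proper coloring when one exists, or a proof of absence by exhausting the search space), but at exponential worst-case cost. Function and variable names are our own.

    ```python
    from typing import Dict, List, Optional

    def three_color(adj: Dict[int, List[int]]) -> Optional[Dict[int, int]]:
        """Exhaustive backtracking for graph 3-coloring.

        Returns a proper coloring (an NP-certificate of presence) or None
        once the search space is exhausted (a proof of absence by
        enumeration, at exponential worst-case cost).
        """
        vertices = list(adj)
        coloring: Dict[int, int] = {}

        def consistent(v: int, c: int) -> bool:
            return all(coloring.get(u) != c for u in adj[v])

        def extend(i: int) -> bool:
            if i == len(vertices):
                return True
            v = vertices[i]
            for c in range(3):
                if consistent(v, c):
                    coloring[v] = c
                    if extend(i + 1):
                        return True
                    del coloring[v]
            return False

        return dict(coloring) if extend(0) else None

    # A 4-cycle is 3-colorable; the complete graph K4 is not.
    print(three_color({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))
    print(three_color({0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}))
    ```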

  5. Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike

    2012-01-01

    Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…

  6. Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.

    PubMed

    Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian

    2017-01-01

    Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
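
    The survey's case studies are not reproduced in this record. As a minimal illustration of the bounded search tree technique covered by such surveys, the sketch below solves the textbook k-Vertex Cover problem in O(2^k · |E|) time, i.e., exponential only in the parameter k; the function names are our own.

    ```python
    from typing import Optional, Set, Tuple

    Edge = Tuple[int, int]

    def vertex_cover(edges: Set[Edge], k: int) -> Optional[Set[int]]:
        """Bounded search tree for k-Vertex Cover in O(2^k * |E|) time.

        Branch on an uncovered edge (u, v): any cover must contain u or v,
        so the search tree has depth at most k and at most 2^k leaves.
        """
        edge = next(iter(edges), None)
        if edge is None:
            return set()            # all edges are covered
        if k == 0:
            return None             # edges remain but the budget is spent
        u, v = edge
        for w in (u, v):
            rest = {e for e in edges if w not in e}
            sub = vertex_cover(rest, k - 1)
            if sub is not None:
                return sub | {w}
        return None

    # A path on 4 vertices has a vertex cover of size 2 but none of size 1.
    print(vertex_cover({(0, 1), (1, 2), (2, 3)}, 2))  # e.g. {1, 2}
    print(vertex_cover({(0, 1), (1, 2), (2, 3)}, 1))  # None
    ```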

  7. Phase Transitions in Planning Problems: Design and Analysis of Parameterized Families of Hard Planning Problems

    NASA Technical Reports Server (NTRS)

    Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide

    2014-01-01

    There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning

  8. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  9. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  10. A Volunteer Computing Project for Solving Geoacoustic Inversion Problems

    NASA Astrophysics Data System (ADS)

    Zaikin, Oleg; Petrov, Pavel; Posypkin, Mikhail; Bulavintsev, Vadim; Kurochkin, Ilya

    2017-12-01

    A volunteer computing project aimed at solving computationally hard inverse problems in underwater acoustics is described. This project was used to study the possibilities of sound speed profile reconstruction in a shallow-water waveguide using a dispersion-based geoacoustic inversion scheme. The computational capabilities provided by the project allowed us to investigate the accuracy of the inversion for different mesh sizes of the sound speed profile discretization grid. This problem is well suited to volunteer computing because it can be easily decomposed into independent, simpler subproblems.

  11. Complex network problems in physics, computer science and biology

    NASA Astrophysics Data System (ADS)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. Until a few years ago, however, there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we apply and analyze several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe

  12. Computational Study for Planar Connected Dominating Set Problem

    NASA Astrophysics Data System (ADS)

    Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua

    The connected dominating set (CDS) problem is a well-studied NP-hard problem with many important applications. Dorn et al. [ESA 2005, LNCS 3669, pp. 95-106] introduced a new technique to generate 2^{O(sqrt{n})}-time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm had yet to be evaluated. We perform a computational study for such an evaluation. The results show that the size of the instances that can be solved by the algorithm depends mainly on their branchwidth, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in practical time and memory space. This suggests that branch-decomposition-based algorithms can be practical for the planar CDS problem.

  13. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.

  14. The Hard Problem of Cooperation

    PubMed Central

    Eriksson, Kimmo; Strimling, Pontus

    2012-01-01

    Based on individual variation in cooperative inclinations, we define the “hard problem of cooperation” as that of achieving high levels of cooperation in a group of non-cooperative types. Can the hard problem be solved by institutions with monitoring and sanctions? In a laboratory experiment we find that the answer is affirmative if the institution is imposed on the group but negative if development of the institution is left to the group to vote on. In the experiment, participants were divided into groups of either cooperative types or non-cooperative types depending on their behavior in a public goods game. In these homogeneous groups they repeatedly played a public goods game regulated by an institution that incorporated several of the key properties identified by Ostrom: operational rules, monitoring, rewards, punishments, and (in one condition) change of rules. When change of rules was not possible and punishments were set to be high, groups of both types generally abided by operational rules demanding high contributions to the common good, and thereby achieved high levels of payoffs. Under less severe rules, both types of groups did worse but non-cooperative types did worst. Thus, non-cooperative groups profited the most from being governed by an institution demanding high contributions and employing high punishments. Nevertheless, in a condition where change of rules through voting was made possible, development of the institution in this direction was more often voted down in groups of non-cooperative types. We discuss the relevance of the hard problem and fit our results into a bigger picture of institutional and individual determinants of cooperative behavior. PMID:22792282

  15. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A systematic analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the meta-optimization, so the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment demonstrating the effectiveness of the parameter-tuning procedure has been performed.
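
    The paper's specific evolutionary meta-optimizer is not given in this record. The sketch below illustrates the general meta-metaheuristic idea under stated assumptions: an outer evolutionary loop mutates the control parameters of an inner metaheuristic (here a toy annealing-style solver standing in for the facility location metaheuristic) and keeps the parameter vectors for which the inner solver performs best. All names, parameters, and the inner solver are hypothetical.

    ```python
    import random

    def inner_metaheuristic(cost_fn, params, steps=200):
        """A toy annealing-style solver whose performance depends on its
        control parameters (proposal step size and acceptance rate)."""
        x = random.uniform(-10, 10)
        best = x
        for _ in range(steps):
            cand = x + random.gauss(0, params["step"])
            if cost_fn(cand) < cost_fn(x) or random.random() < params["temp"]:
                x = cand
            if cost_fn(x) < cost_fn(best):
                best = x
        return cost_fn(best)

    def meta_optimize(cost_fn, generations=30, pop=8):
        """Outer evolutionary loop: mutate the parameter vector and keep
        it whenever the inner solver scores better (meta-optimization)."""
        parent = {"step": 1.0, "temp": 0.1}
        parent_score = inner_metaheuristic(cost_fn, parent)
        for _ in range(generations):
            for _ in range(pop):
                child = {k: max(1e-3, v * random.lognormvariate(0, 0.3))
                         for k, v in parent.items()}
                score = inner_metaheuristic(cost_fn, child)
                if score < parent_score:
                    parent, parent_score = child, score
        return parent, parent_score

    best_params, best_cost = meta_optimize(lambda x: (x - 3.0) ** 2)
    print(best_params, best_cost)
    ```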

  16. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    PubMed

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein, and its influence on the protein's biological function, can help us discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed to predict the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes it necessary to search for new models, novel algorithmic strategies, and hardware platforms that provide solutions in a reasonable time frame. In this review we present past and current trends in protein folding simulation from both perspectives: hardware and software. Of particular interest to us are the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used to run such soft computing techniques.

  17. Methodological problems with gamma-ray burst hardness/intensity correlations

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.

    1993-01-01

    The hardness and intensity are easily measured quantities for all gamma-ray bursts (GRBs), and so, many past and current studies have sought correlations between them. This Letter presents many serious methodological problems with the practical definitions for both hardness and intensity. These difficulties are such that significant correlations can be easily introduced as artifacts of the reduction procedure. In particular, cosmological models of GRBs cannot be tested with hardness/intensity correlations with current instrumentation and the time evolution of the hardness in a given burst may be correlated with intensity for reasons that are unrelated to intrinsic change in the spectral shape.

  18. Solving NP-Hard Problems with Physarum-Based Ant Colony System.

    PubMed

    Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li

    2017-01-01

    NP-hard problems exist in many real-world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly limited by premature convergence, weak robustness, and similar issues. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as the traveling salesman problem (TSP) and the 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one unique characteristic is that critical tubes can be preserved as the network evolves. The optimized updating strategy employs this feature and accelerates the positive feedback process in ACS, which contributes to quick convergence to the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
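
    The Physarum-based update itself is not reproduced in this record. For orientation, the sketch below is a plain ACO loop for the TSP, the kind of baseline whose pheromone update the paper modifies: ants build tours guided by pheromone and distance, then pheromone evaporates and is reinforced along the best tour found. Parameter values are illustrative, not the paper's.

    ```python
    import random

    def ant_colony_tsp(dist, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.1):
        """Plain ACO for the TSP: ants extend tours with probability
        proportional to pheromone**alpha * (1/distance)**beta; pheromone
        then evaporates (rate rho) and is reinforced along the best tour."""
        n = len(dist)
        tau = [[1.0] * n for _ in range(n)]      # pheromone matrix
        best_tour, best_len = None, float("inf")
        for _ in range(iters):
            for _ in range(n_ants):
                tour = [random.randrange(n)]
                while len(tour) < n:
                    i = tour[-1]
                    choices = [j for j in range(n) if j not in tour]
                    weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                               for j in choices]
                    tour.append(random.choices(choices, weights)[0])
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                if length < best_len:
                    best_tour, best_len = tour, length
            tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
            for k in range(n):                                    # reinforcement
                i, j = best_tour[k], best_tour[(k + 1) % n]
                tau[i][j] += 1.0 / best_len
                tau[j][i] += 1.0 / best_len
        return best_tour, best_len

    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    print(ant_colony_tsp(dist))  # optimal tour length here is 18
    ```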

  19. Deaf and hard of hearing students' problem-solving strategies with signed arithmetic story problems.

    PubMed

    Pagliaro, Claudia M; Ansell, Ellen

    2012-01-01

    The use of problem-solving strategies by 59 deaf and hard of hearing children, grades K-3, was investigated. The children were asked to solve 9 arithmetic story problems presented to them in American Sign Language. The researchers found that while the children used the same general types of strategies that are used by hearing children (i.e., modeling, counting, and fact-based strategies), they showed an overwhelming use of counting strategies for all types of problems and at all ages. This difference may have its roots in language or instruction (or in both), and calls attention to the need for conceptual rather than procedural mathematics instruction for deaf and hard of hearing students.

  20. Computational search for rare-earth free hard-magnetic materials

    NASA Astrophysics Data System (ADS)

    Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team

    2015-03-01

    It is difficult to overstate the importance of hard magnets in modern life; they enter every walk of life, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances (from household devices to computers). All the known hard magnets in use today contain rare-earth elements, the extraction of which is expensive and environmentally harmful. Rare-earths are also instrumental in tipping the balance of the world economy, as most of them are mined in a few specific parts of the world. Hence it would be ideal to obtain the characteristics of a hard magnet without, or at least with a reduced amount of, rare-earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities that define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms), and these are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some newly discovered candidate rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the ``ExMaMa'' Project (329386).

  1. Mathematical Problem-Solving Styles in the Education of Deaf and Hard-of-Hearing Individuals

    ERIC Educational Resources Information Center

    Erickson, Elizabeth E. A.

    2012-01-01

    This study explored the mathematical problem-solving styles of middle school and high school deaf and hard-of-hearing students and the mathematical problem-solving styles of the mathematics teachers of middle school and high school deaf and hard-of-hearing students. The research involved 45 deaf and hard-of-hearing students and 19 teachers from a…

  2. Reply to "Comment on 'Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit' ".

    PubMed

    Gebremedhin, Daniel H; Weatherford, Charles A

    2015-02-01

    This is a response to the comment we received on our recent paper "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit." In that paper, we introduced a computational algorithm that is appropriate for solving stiff initial value problems, and which we applied to the one-dimensional time-independent Schrödinger equation with a soft Coulomb potential. We solved for the eigenpairs using a shooting method and hence turned it into an initial value problem. In particular, we examined the behavior of the eigenpairs as the softening parameter approached zero (hard Coulomb limit). The commenters question the existence of the ground state of the hard Coulomb potential, which we inferred by extrapolation of the softening parameter to zero. A key distinction between the commenters' approach and ours is that they consider only the half-line while we considered the entire x axis. Based on mathematical considerations, the commenters consider only a vanishing solution function at the origin, and they question our conclusion that the ground state of the hard Coulomb potential exists. The ground state we inferred resembles a δ(x), and hence it cannot even be addressed based on their argument. For the excited states, there is agreement with the fact that the particle is always excluded from the origin. Our discussion with regard to the symmetry of the excited states is an extrapolation of the soft Coulomb case and is further explained herein.

  3. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    The cost of solving NP-hard optimization problems grows very rapidly with problem size, making them unsolvable by brute-force methods even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  4. Deaf and Hard of Hearing Students' Problem-Solving Strategies with Signed Arithmetic Story Problems

    ERIC Educational Resources Information Center

    Pagliaro, Claudia M.; Ansell, Ellen

    2011-01-01

    The use of problem-solving strategies by 59 deaf and hard of hearing children, grades K-3, was investigated. The children were asked to solve 9 arithmetic story problems presented to them in American Sign Language. The researchers found that while the children used the same general types of strategies that are used by hearing children (i.e.,…

  5. People efficiently explore the solution space of the computationally intractable traveling salesman problem to find near-optimal tours.

    PubMed

    Acuña, Daniel E; Parada, Víctor

    2010-07-29

    Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity advances an explanation to this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current better solutions in such a way that edges belonging to the optimal solution ("good" edges) were significantly more likely to stay than other edges ("bad" edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants "ran out of ideas." In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback and evidence that human problem-solving may exploit the structure of hard problems paralleling behavior of state-of-the-art heuristics.
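
    The edge-preserving behavior described above parallels classical local search. As a point of comparison (not the study's analysis code), the sketch below implements the 2-opt move, which repeatedly reverses a tour segment whenever that shortens the tour; in the Euclidean plane this uncrosses edges, retaining "good" edges and discarding "bad" ones.

    ```python
    import math, random

    def tour_length(pts, tour):
        n = len(tour)
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % n]])
                   for i in range(n))

    def two_opt(pts, tour):
        """Repeatedly reverse the tour segment between two edges whenever
        the swap shortens the tour; in the Euclidean plane this removes
        self-crossings, keeping 'good' edges and dropping 'bad' ones."""
        n = len(tour)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 2, n - (i == 0)):
                    a, b = tour[i], tour[(i + 1) % n]
                    c, d = tour[j], tour[(j + 1) % n]
                    if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d]) <
                            math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d])):
                        tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                        improved = True
        return tour

    random.seed(1)
    pts = [(random.random(), random.random()) for _ in range(30)]
    tour = list(range(len(pts)))
    print(tour_length(pts, tour), tour_length(pts, two_opt(pts, tour)))
    ```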

  6. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic accumulation of numerical error. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.

  7. People Efficiently Explore the Solution Space of the Computationally Intractable Traveling Salesman Problem to Find Near-Optimal Tours

    PubMed Central

    Acuña, Daniel E.; Parada, Víctor

    2010-01-01

    Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity advances an explanation to this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current better solutions in such a way that edges belonging to the optimal solution (“good” edges) were significantly more likely to stay than other edges (“bad” edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants “ran out of ideas.” In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback and evidence that human problem-solving may exploit the structure of hard problems paralleling behavior of state-of-the-art heuristics. PMID:20686597

  8. When students can choose easy, medium, or hard homework problems

    NASA Astrophysics Data System (ADS)

    Teodorescu, Raluca E.; Seaton, Daniel T.; Cardamone, Caroline N.; Rayyan, Saif; Abbott, Jonathan E.; Barrantes, Analia; Pawl, Andrew; Pritchard, David E.

    2012-02-01

    We investigate student-chosen, multi-level homework in our Integrated Learning Environment for Mechanics [1] built using the LON-CAPA [2] open-source learning system. Multi-level refers to problems categorized as easy, medium, and hard. Problem levels were determined a priori based on the knowledge needed to solve them [3]. We analyze these problems using three measures: time-per-problem, LON-CAPA difficulty, and item difficulty measured by item response theory. Our analysis of student behavior in this environment suggests that time-per-problem is strongly dependent on problem category, unlike either score-based measure. We also found trends in student choice of problems, overall effort, and efficiency across the student population. Allowing students choice in problem solving seems to improve their motivation; 70% of students worked additional problems for which no credit was given.

  9. Structural qualia: a solution to the hard problem of consciousness.

    PubMed

    Loorits, Kristjan

    2014-01-01

    The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which even the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved.

  10. Structural qualia: a solution to the hard problem of consciousness

    PubMed Central

    Loorits, Kristjan

    2014-01-01

    The hard problem of consciousness has often been claimed to be unsolvable by the methods of traditional empirical sciences. It has been argued that all the objects of empirical sciences can be fully analyzed in structural terms but that consciousness is (or has) something over and above its structure. However, modern neuroscience has introduced a theoretical framework in which even the apparently non-structural aspects of consciousness, namely the so-called qualia or qualitative properties, can be analyzed in structural terms. That framework allows us to see qualia as something compositional, with internal structures that fully determine their qualitative nature. Moreover, those internal structures can be identified with certain neural patterns. Thus consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such a neural pattern is analyzable in fully structural terms, and thereby the hard problem is solved. PMID:24672510

  11. A mixed analog/digital chaotic neuro-computer system for quadratic assignment problems.

    PubMed

    Horio, Yoshihiko; Ikeguchi, Tohru; Aihara, Kazuyuki

    2005-01-01

    We construct a mixed analog/digital chaotic neuro-computer prototype system for quadratic assignment problems (QAPs). The QAP is one of the difficult NP-hard problems and has several real-world applications. Chaotic neural networks have been used to solve combinatorial optimization problems through chaotic search dynamics, which efficiently search for optimal or near-optimal solutions. However, preliminary experiments have shown that, although it obtained good feasible solutions, the Hopfield-type chaotic neuro-computer hardware system could not obtain the optimal solution of the QAP. Therefore, in the present study, we improve the system performance by adopting a solution construction method, which constructs a feasible solution using the analog internal state values of the chaotic neurons at each iteration. In order to include the construction method in our hardware, we install a multi-channel analog-to-digital conversion system to observe the internal states of the chaotic neurons. We show experimentally that a great improvement in system performance over the original Hopfield-type chaotic neuro-computer is obtained. That is, we obtain the optimal solution for the size-10 QAP in fewer than 1000 iterations. In addition, we propose a guideline for parameter tuning of the chaotic neuro-computer system based on observation of the internal states of several chaotic neurons in the network.

  12. The "Hard Problem" and the Quantum Physicists. Part 1: The First Generation

    ERIC Educational Resources Information Center

    Smith, C. U. M.

    2006-01-01

    All four of the most important figures in the early twentieth-century development of quantum physics--Niels Bohr, Erwin Schroedinger, Werner Heisenberg and Wolfgang Pauli--had strong interests in the traditional mind--brain, or "hard," problem. This paper reviews their approach to this problem, showing the influence of Bohr's complementarity…

  13. On the Hardness of Subset Sum Problem from Different Intervals

    NASA Astrophysics Data System (ADS)

    Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke

    The subset sum problem, often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on it. Assuming an oracle for the shortest vector problem in lattices, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that weights are chosen from different intervals, and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is security analysis when the data size of public keys is reduced.
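
    As a worked example of the density notion discussed above, the snippet below computes the standard Lagarias-Odlyzko density d = n / log2(max weight) and shows how drawing weights from a smaller interval raises the density. The success thresholds quoted in the comments (about 0.6463 for the original attack and 0.9408 for the improved variant) are the commonly cited values from the literature, stated here as background rather than taken from this record.

    ```python
    import math, random

    def lo_density(weights):
        """Lagarias-Odlyzko density of a subset sum instance:
        d = n / log2(max weight). Low-density attacks (given an SVP
        oracle) are expected to succeed when d is below a threshold,
        about 0.6463 originally and 0.9408 for the improved variant."""
        return len(weights) / math.log2(max(weights))

    n = 50

    # Weights drawn uniformly from one large interval: low density.
    same_interval = [random.randrange(1, 2 ** 100) for _ in range(n)]
    print(lo_density(same_interval))   # roughly 50 / 100 = 0.5

    # Weights drawn from a much smaller interval: the density rises
    # above 1, the regime where low-density attacks break down.
    small_interval = [random.randrange(1, 2 ** 40) for _ in range(n)]
    print(lo_density(small_interval))  # roughly 50 / 40 = 1.25
    ```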

  14. In vitro Fracture strength and hardness of different computer-aided design/computer-aided manufacturing inlays.

    PubMed

    Sagsoz, O; Yildiz, M; Hojjat Ghahramanzadeh, A S L; Alsaran, A

    2018-03-01

    The purpose of this study was to examine the fracture strength and surface microhardness of computer-aided design/computer-aided manufacturing (CAD/CAM) materials in vitro. Mesial-occlusal-distal inlays were made from five different CAD/CAM materials (feldspathic ceramic, CEREC blocs; leucite-reinforced ceramic, IPS Empress CAD; resin nano ceramic, 3M ESPE Lava Ultimate; hybrid ceramic, VITA Enamic; and lithium disilicate ceramic, IPS e.max CAD) using CEREC 4 CAD/CAM system. Samples were adhesively cemented to metal analogs with a resin cement (3M ESPE, U200). The fracture tests were carried out with a universal testing machine. Furthermore, five samples were prepared from each CAD/CAM material for micro-Vickers hardness test. Data were analyzed with statistics software SPSS 20 (IBM Corp., New York, USA). Fracture strength of lithium disilicate inlays (3949 N) was found to be higher than other ceramic inlays (P < 0.05). There was no difference between other inlays statistically (P > 0.05). The highest micro-Vickers hardness was measured in lithium disilicate samples, and the lowest was in resin nano ceramic samples. Fracture strength results demonstrate that inlays can withstand the forces in the mouth. Statistical results showed that fracture strength and micro-Vickers hardness of feldspathic ceramic, leucite-reinforced ceramic, and lithium disilicate ceramic materials had a positive correlation.

  15. A Grand Canonical Monte Carlo simulation program for computing ion distributions around biomolecules in hard sphere solvents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The GIBS software program is a Grand Canonical Monte Carlo (GCMC) simulation program (written in C++) that can be used for (1) computing the excess chemical potential of ions and the mean activity coefficients of salts in homogeneous electrolyte solutions and (2) computing the distribution of ions around fixed macromolecules such as nucleic acids and proteins. The solvent can be represented as neutral hard spheres or as a dielectric continuum. The ions are represented as charged hard spheres that can interact via Coulomb, hard-sphere, or Lennard-Jones potentials. In addition to hard-sphere repulsions, the ions can also be made to interact with the solvent hard spheres via short-ranged attractive square-well potentials.
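
    GIBS itself is a C++ program whose source is not shown in this record. Under stated assumptions, the sketch below illustrates the two grand canonical move types such a simulation rests on, specialized to neutral hard spheres: random insertions and deletions accepted with the standard GCMC ratios, with insertions rejected outright on hard-sphere overlap. It is a minimal toy (no electrostatics, no square-well terms), not GIBS code.

    ```python
    import random

    def gcmc_hard_spheres(z, L, sigma, steps=20000):
        """Minimal grand canonical Monte Carlo for hard spheres in a
        periodic cubic box of side L, with activity z = exp(beta*mu)/Lambda^3.
        Hard-sphere energy is 0 or infinite, so overlapping insertions are
        simply rejected; otherwise the standard GCMC acceptance ratios
        for insertion and deletion apply."""
        V = L ** 3
        parts = []

        def overlaps(p):
            for q in parts:
                d2 = sum(min(abs(a - b), L - abs(a - b)) ** 2
                         for a, b in zip(p, q))   # minimum-image distance
                if d2 < sigma ** 2:
                    return True
            return False

        for _ in range(steps):
            if random.random() < 0.5:      # attempt an insertion
                p = tuple(random.uniform(0, L) for _ in range(3))
                if not overlaps(p) and random.random() < z * V / (len(parts) + 1):
                    parts.append(p)
            elif parts:                    # attempt a deletion
                i = random.randrange(len(parts))
                if random.random() < len(parts) / (z * V):
                    parts.pop(i)
        return len(parts)

    print(gcmc_hard_spheres(z=0.1, L=5.0, sigma=1.0))  # final particle count
    ```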

  16. Statistical physics of hard combinatorial optimization: Vertex cover problem

    NASA Astrophysics Data System (ADS)

    Zhao, Jin-Hua; Zhou, Hai-Jun

    2014-07-01

    Typical-case computational complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computational complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physics methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. An unfamiliar reader should be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to other optimization problems.

  17. Target: Alcohol Abuse in the Hard-to-Reach Work Force. Ideas and Resources for Responding to Problems of the Hard-to-Reach Work Force.

    ERIC Educational Resources Information Center

    Informatics, Inc., Rockville, MD.

    This guide is designed as a source of ideas and information for individuals and organizations interested in occupational alcoholism programs for the hard-to-reach work force. Following a brief overview of the problem and a report on progress in occupational alcoholism programming, a working definition of the hard-to-reach work force is offered;…

  18. Computer Game Play as an Imaginary Stage for Reading: Implicit Spatial Effects of Computer Games Embedded in Hard Copy Books

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon

    2012-01-01

    This study compared books with embedded computer games (via pentop computers with microdot paper and audio feedback) with regular books with maps, in terms of fifth graders' comprehension and retention of spatial details from stories. One group read a story in hard copy with embedded computer games, the other group read it in regular book format…

  19. Socio-Emotional Problems Experienced by Deaf and Hard of Hearing Students in Ethiopia

    ERIC Educational Resources Information Center

    Mekonnen, Mulat; Hannu, Savolainen; Elina, Lehtomäki; Matti, Kuorelahti

    2015-01-01

    This study compares the socio-emotional problems experienced by deaf and hard of hearing (DHH) students with those of hearing students in Ethiopia. The research involved a sample of 103 grade 4 students attending a special school for the deaf, a special class for the deaf and a regular school. Socio-emotional problems were measured using Goodman's…

  20. Continuous-Variable Instantaneous Quantum Computing is Hard to Sample.

    PubMed

    Douce, T; Markham, D; Kashefi, E; Diamanti, E; Coudreau, T; Milman, P; van Loock, P; Ferrini, G

    2017-02-17

    Instantaneous quantum computing is a subuniversal quantum complexity class, whose circuits have proven to be hard to simulate classically in the discrete-variable realm. We extend this proof to the continuous-variable (CV) domain by using squeezed states and homodyne detection, and by exploring the properties of postselected circuits. In order to treat postselection in CVs, we consider finitely resolved homodyne detectors, corresponding to a realistic scheme based on discrete probability distributions of the measurement outcomes. The unavoidable errors stemming from the use of finitely squeezed states are suppressed through a qubit-into-oscillator Gottesman-Kitaev-Preskill encoding of quantum information, which was previously shown to enable fault-tolerant CV quantum computation. Finally, we show that, in order to render postselected computational classes in CVs meaningful, a logarithmic scaling of the squeezing parameter with the circuit size is necessary, translating into a polynomial scaling of the input energy.

  1. Some insights on hard quadratic assignment problem instances

    NASA Astrophysics Data System (ADS)

    Hussin, Mohamed Saifullah

    2017-11-01

    Since the formal introduction of metaheuristics, a huge number of Quadratic Assignment Problem (QAP) instances have been introduced. Those instances, however, are loosely structured, which makes it difficult to perform any systematic analysis. QAPLIB, for example, is a library containing a large number of QAP benchmark instances of different sizes and structures, but with very few instances available of each type. This prevents researchers from performing organized studies on those instances, such as parameter tuning and testing. In this paper, we discuss several hard instances that have been introduced over the years, and algorithms that have been used for solving them.

  2. Analysis of Algorithms: Coping with Hard Problems

    ERIC Educational Resources Information Center

    Kolata, Gina Bari

    1974-01-01

    Although today's computers can perform as many as one million operations per second, there are many problems that are still too large to be solved in a straightforward manner. Recent work indicates that many approximate solutions are useful and more efficient than exact solutions. (Author/RH)

  3. The "Hard Problem" and the Quantum Physicists. Part 2: Modern Times

    ERIC Educational Resources Information Center

    Smith, C. U. M.

    2009-01-01

    This is the second part of a review of the work of quantum physicists on the "hard part" of the problem of mind. After an introduction which sets the scene and a brief review of contemporary work on the neural correlates of consciousness (NCC) the work of four prominent modern investigators is examined: J.C. Eccles/Friedrich Beck; Henry Stapp;…

  4. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    PubMed

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on the waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter parallel to the magnetic field direction; and (2) increasing the magnetic field intensity B and the action time t significantly improves the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  5. A computational analysis of lower bounds for the economic lot sizing problem in remanufacturing with separate setups

    NASA Astrophysics Data System (ADS)

    Aishah Syed Ali, Sharifah

    2017-09-01

    This paper considers the economic lot sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, leading to computational inefficiency and low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on a priori addition of valid inequalities in the space of the original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.

  6. The 'hard problem' and the quantum physicists. Part 1: the first generation.

    PubMed

    Smith, C U M

    2006-07-01

    All four of the most important figures in the early twentieth-century development of quantum physics--Niels Bohr, Erwin Schroedinger, Werner Heisenberg and Wolfgang Pauli--had strong interests in the traditional mind-brain, or 'hard,' problem. This paper reviews their approach to this problem, showing the influence of Bohr's complementarity thesis, the significance of Schroedinger's small book, 'What is life?,' the updated Platonism of Heisenberg and, perhaps most interesting of all, the interaction of Carl Jung and Wolfgang Pauli in the latter's search for a unification of mind and matter.

  7. Solving optimization problems on computational grids.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, S. J.; Mathematics and Computer Science

    2001-05-01

    Multiprocessor computing platforms, which have become more and more widely available since the mid-1980s, are now heavily used by organizations that need to solve very demanding computational problems. Parallel computing is now central to the culture of many research communities. Novel parallel approaches were developed for global optimization, network optimization, and direct-search methods for nonlinear optimization. Activity was particularly widespread in parallel branch-and-bound approaches for various problems in combinatorial and network optimization. As the cost of personal computers and low-end workstations has continued to fall, while the speed and capacity of processors and networks have increased dramatically, 'cluster' platforms have become popular in many settings. A somewhat different type of parallel computing platform known as a computational grid (alternatively, metacomputer) has arisen in comparatively recent times. Broadly speaking, this term refers not to a multiprocessor with identical processing nodes but rather to a heterogeneous collection of devices that are widely distributed, possibly around the globe. The advantage of such platforms is obvious: they have the potential to deliver enormous computing power. Just as obviously, however, the complexity of grids makes them very difficult to use. The Condor team, headed by Miron Livny at the University of Wisconsin, was among the pioneers in providing infrastructure for grid computations. More recently, the Globus project has developed technologies to support computations on geographically distributed platforms consisting of high-end computers, storage and visualization devices, and other scientific instruments. In 1997, we started the metaneos project as a collaborative effort between optimization specialists and the Condor and Globus groups. Our aim was to address complex, difficult optimization problems in several areas, designing and implementing the algorithms and the software

  8. Matrix Interdiction Problem

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, Shiva Prasad; Pan, Feng

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes, in the residual matrix, the sum of the row values, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study its computational complexity. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an additive factor of n^γ. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
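
    To make the objective concrete, the sketch below is an exact brute-force solver for the problem as defined above: it tries every set of k columns and keeps the removal minimizing the sum of row maxima in the residual matrix. The exhaustive search is feasible only for tiny instances, consistent with the NP-hardness result; the code is our illustration, not the authors'.

    ```python
    from itertools import combinations
    from typing import Sequence, Set, Tuple

    def matrix_interdiction(M: Sequence[Sequence[float]], k: int) -> Tuple[float, Set[int]]:
        """Brute-force matrix interdiction: choose k columns (k < #columns)
        to remove so that the sum over rows of the largest surviving entry
        is minimized. Runs in time exponential in k."""
        n_cols = len(M[0])
        best_val, best_cols = float("inf"), set()
        for removed in combinations(range(n_cols), k):
            kept = [j for j in range(n_cols) if j not in removed]
            val = sum(max(row[j] for j in kept) for row in M)
            if val < best_val:
                best_val, best_cols = val, set(removed)
        return best_val, best_cols

    M = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
    print(matrix_interdiction(M, 1))  # (14, {2}): removing column 2 is best
    ```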

  9. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight set of edges connecting all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral paths, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain solutions of the MST problem within the proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
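
    The DNA molecular operations themselves cannot be sketched in conventional code. For reference, on an electronic computer the MST problem is solved in polynomial time by greedy algorithms; the sketch below shows Kruskal's method with union-find, the classical baseline against which any novel solver can be checked.

    ```python
    def kruskal(n, edges):
        """Classical greedy MST (Kruskal): scan edges in order of weight
        and keep an edge whenever it joins two different components,
        tracked with a union-find structure. Runs in O(m log m) time."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        tree, total = [], 0
        for w, u, v in sorted(edges):          # edges as (weight, u, v)
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v, w))
                total += w
        return tree, total

    edges = [(4, 0, 1), (8, 0, 2), (2, 1, 2), (7, 1, 3), (3, 2, 3)]
    print(kruskal(4, edges))  # ([(1, 2, 2), (2, 3, 3), (0, 1, 4)], 9)
    ```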

  10. Application of evolutionary computation in ECAD problems

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Hyun; Hwang, Seung H.

    1998-10-01

    Design of modern electronic systems is a complicated task that demands the use of computer-aided design (CAD) tools. Since many problems in ECAD are combinatorial optimization problems, evolutionary computation techniques such as genetic algorithms and evolutionary programming have been widely employed to solve them. We have applied evolutionary computation techniques to ECAD problems such as technology mapping, microcode-bit optimization, data path ordering and peak power estimation, where their benefits are clearly observed. This paper presents experiences and discusses issues in those applications.

  11. Classical problems in computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    In the early development of computational aeroacoustics (CAA), preliminary applications addressed classical problems whose known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical difficulties inherent in these calculations. Comparisons were made between the various numerical approaches to the problems, such as direct simulation, acoustic analogies, and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that arise are considered and simple sources are discussed.

  12. Quantum algorithm for energy matching in hard optimization problems

    NASA Astrophysics Data System (ADS)

    Baldwin, C. L.; Laumann, C. R.

    2018-06-01

    We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p-spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.
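
    For concreteness, the model in question is, up to normalization conventions, the quantum random p-spin model (our transcription; prefactors and the coupling distribution may differ in detail from the paper's):

        \[
          H \;=\; -\!\!\sum_{i_1 < \cdots < i_p}\! J_{i_1 \cdots i_p}\,
                  \sigma^z_{i_1} \cdots \sigma^z_{i_p}
              \;-\; \Gamma \sum_{i=1}^{N} \sigma^x_i ,
        \]

    with random couplings J and transverse-field strength Γ; the trapped, tunneling, and excited phases correspond to small, intermediate, and large Γ, respectively.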

  13. Integrating Computers into the Problem-Solving Process.

    ERIC Educational Resources Information Center

    Lowther, Deborah L.; Morrison, Gary R.

    2003-01-01

    Asserts that within the context of problem-based learning environments, professors can encourage students to use computers as problem-solving tools. The ten-step Integrating Technology for InQuiry (NteQ) model guides professors through the process of integrating computers into problem-based learning activities. (SWM)

  14. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions

    PubMed Central

    Box, Simon

    2014-01-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings at the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, is used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world, and also to a temporal-difference-learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and delay variance. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable. PMID:26064570
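
    A minimal sketch of the kind of classifier pipeline described, entirely ours: the features (queue lengths per approach) and labels (the phase the human chose) are hypothetical stand-ins for whatever game states the study actually logged.

        from sklearn.neural_network import MLPClassifier

        # Hypothetical training data logged from the game: each row holds
        # queue lengths on four approaches; each label is the signal phase
        # the human player selected (0 = N-S green, 1 = E-W green).
        X = [[8, 7, 1, 2], [9, 6, 0, 1], [1, 2, 9, 8], [0, 3, 7, 9]]
        y = [0, 0, 1, 1]

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                            random_state=0)
        clf.fit(X, y)
        print(clf.predict([[10, 5, 1, 0]]))  # imitate the human: serve N-S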

  15. Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions.

    PubMed

    Box, Simon

    2014-12-01

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signallized junctions are simulated. A computer game interface is used to enable a human 'player' to control the traffic light settings at the junctions within the simulation. A supervised learning approach, based on simple neural network classifiers, is used to capture the human player's strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC to two well-established traffic-responsive control systems that are widely deployed in the developed world, and also to a temporal-difference-learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and delay variance. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.

  16. Monte Carlo computer simulation of sedimentation of charged hard spherocylinders.

    PubMed

    Viveros-Méndez, P X; Gil-Villegas, Alejandro; Aranda-Espinoza, S

    2014-07-28

    In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles, and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e²/Dσ), where m is the mass per particle, e is the electron's charge, and g is the gravitational acceleration. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to become more packed at each layer and to arrange in local domains with orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as a tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.

  17. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support.

    PubMed

    Grossberg, Stephen

    2017-03-01

    The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.

  18. MUSIC algorithm for imaging of a sound-hard arc in the limited-view inverse scattering problem

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2017-07-01

    The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
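
    The Bessel-function structure referred to is presumably built on an identity of the Jacobi-Anger type (our guess at the standard expansion involved, not the paper's derivation):

        \[
          e^{\,i x \cos\theta} \;=\; \sum_{n=-\infty}^{\infty}
              i^{\,n} J_n(x)\, e^{\,i n \theta},
        \]

    which expands a plane wave into Bessel functions J_n of integer order and thereby links the plane-wave test vectors used in MUSIC imaging to a Bessel series.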

  19. A case study in programming a quantum annealer for hard operational planning problems

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor G.; Venturelli, Davide; O'Gorman, Bryan; Do, Minh B.; Prystay, Elicia M.; Smelyanskiy, Vadim N.

    2015-01-01

    We report on a case study in programming an early quantum annealer to attack optimization problems related to operational planning. While a number of studies have looked at the performance of quantum annealers on problems native to their architecture, and others have examined performance of select problems stemming from an application area, ours is one of the first studies of a quantum annealer's performance on parametrized families of hard problems from a practical domain. We explore two different general mappings of planning problems to quadratic unconstrained binary optimization (QUBO) problems, and apply them to two parametrized families of planning problems, navigation-type and scheduling-type. We also examine two more compact, but problem-type specific, mappings to QUBO, one for the navigation-type planning problems and one for the scheduling-type planning problems. We study embedding properties and parameter setting and examine their effect on the efficiency with which the quantum annealer solves these problems. From these results, we derive insights useful for the programming and design of future quantum annealers: problem choice, the mapping used, the properties of the embedding, and the annealing profile all matter, each significantly affecting the performance.
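
    As an illustration of the general mapping style (ours, not the paper's specific encodings), a hard constraint such as "exactly one of x_1..x_n is true" is typically folded into a QUBO objective as a quadratic penalty:

        import itertools

        def one_hot_penalty_qubo(n, weight=1.0):
            """QUBO coefficients for the penalty weight*(sum_i x_i - 1)^2,
            minimized exactly at one-hot assignments. Returns a dict
            {(i, j): coeff}; the constant term `weight` is dropped, as is
            conventional (x_i^2 = x_i for binary variables)."""
            Q = {}
            for i in range(n):
                Q[(i, i)] = -weight
            for i, j in itertools.combinations(range(n), 2):
                Q[(i, j)] = 2.0 * weight
            return Q

        def qubo_value(Q, x):
            return sum(c * x[i] * x[j] for (i, j), c in Q.items())

        Q = one_hot_penalty_qubo(3)
        for x in itertools.product([0, 1], repeat=3):
            print(x, qubo_value(Q, x))  # minimum (-1.0) only at one-hot x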

  20. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    ERIC Educational Resources Information Center

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by the Computer Assisted Teaching Unit at Queen Mary College with reference to the user interface, input and initialization, input data vetting, effective use of the display screen, graphical presentation of results, and the need for hard copy. Procedures and problems relating to academic involvement are…

  1. Visual problems in young adults due to computer use.

    PubMed

    Moschos, M M; Chatziralli, I P; Siasou, G; Papazisis, L

    2012-04-01

    Computer use can cause visual problems. The purpose of our study was to evaluate visual problems due to computer use in young adults. Participants in our study were 87 adults, 48 male and 39 female, with a mean age of 31.3 years (SD 7.6). All the participants completed a questionnaire regarding visual problems detected after computer use. The mean daily use of computers was 3.2 hours (SD 2.7). 65.5% of the participants complained of dry eye, mainly after more than 2.5 hours of computer use. 32 persons (36.8%) had a foreign-body sensation in their eyes, while 15 participants (17.2%) complained of blurred vision, which caused difficulties in driving, after 3.25 hours of continuous computer use. 10.3% of the participants sought medical advice for their problem. There was a statistically significant correlation between the frequency of visual problems and the duration of computer use (p = 0.021). 79.3% of the participants use artificial tears during or after long use of computers so as not to feel any ocular discomfort. The main symptom after computer use in young adults was dry eye. All visual problems were associated with the duration of computer use. Artificial tears play an important role in the treatment of ocular discomfort after computer use. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Polyomino Problems to Confuse Computers

    ERIC Educational Resources Information Center

    Coffin, Stewart

    2009-01-01

    Computers are very good at solving certain types of combinatorial problems, such as fitting sets of polyomino pieces into square or rectangular trays of a given size. However, most puzzle-solving programs now in use assume orthogonal arrangements. When one departs from the usual square grid layout, complications arise. The author--using a computer,…

  3. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no efficient decoding algorithm exists for general quantum decoding problems (unless P = NP) and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  4. Further links between the maximum hardness principle and the hard/soft acid/base principle: insights from hard/soft exchange reactions.

    PubMed

    Chattaraj, Pratim K; Ayers, Paul W; Melin, Junia

    2007-08-07

    Ayers, Parr, and Pearson recently showed that insight into the hard/soft acid/base (HSAB) principle could be obtained by analyzing the energy of hard/soft exchange reactions, i.e., reactions in which a soft acid replaces a hard acid or a soft base replaces a hard base [J. Chem. Phys., 2006, 124, 194107]. We show, in accord with the maximum hardness principle, that the hardness increases for favorable hard/soft exchange reactions and decreases when the HSAB principle indicates that hard/soft exchange reactions are unfavorable. This extends the previous work of the authors, which treated only the "double hard/soft exchange" reaction [P. K. Chattaraj and P. W. Ayers, J. Chem. Phys., 2005, 123, 086101]. We also discuss two different approaches to computing the hardness of molecules from the hardnesses of the composing fragments, and explain how the results differ. In the present context, it seems that the arithmetic mean of fragment softnesses is the preferable definition.
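
    In conceptual-DFT terms, the quantities involved are standard (our transcription; conventions differ by factors of two): hardness η from ionization energy I and electron affinity A, softness S as its reciprocal, and the fragment-combination rule discussed here, an arithmetic mean of softnesses:

        \[
          \eta \;=\; \frac{I - A}{2}, \qquad
          S \;=\; \frac{1}{2\eta}, \qquad
          S_{\mathrm{mol}} \;\approx\; \frac{1}{n} \sum_{k=1}^{n} S_k .
        \]

    Averaging fragment softnesses in this way amounts to taking the harmonic mean of the fragment hardnesses.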

  5. An inverse problem for Gibbs fields with hard core potential

    NASA Astrophysics Data System (ADS)

    Koralov, Leonid

    2007-05-01

    It is well known that for a regular stable potential of pair interaction and a small value of the activity one can define the corresponding Gibbs field (a measure on the space of configurations of points in R^d). In this paper we consider the converse problem. Namely, we show that for a sufficiently small constant ρ̄₁ and a sufficiently small function ρ̄₂(x), x ∈ R^d, that is equal to zero in a neighborhood of the origin, there exist a hard-core pair potential and a value of the activity such that ρ̄₁ is the density and ρ̄₂ is the pair correlation function of the corresponding Gibbs field.

  6. Optical solver of combinatorial problems: nanotechnological approach.

    PubMed

    Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor

    2013-09-01

    We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising avenue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential-sized masks with exponential space complexity, produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.

  7. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1973-01-01

    The TENEX computer system, the ARPA network, and computer language design technology were applied to support the complex system programs. By combining the pragmatic and theoretical aspects of robot development, an approach is created which is grounded in realism, but which also has at its disposal the power that comes from looking at complex problems from an abstract analytical point of view.

  8. Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2000-01-01

    The proceedings of the Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems cosponsored by the Ohio Aerospace Institute and the NASA Glenn Research Center are the subject of this report. Fan noise was the chosen theme for this workshop with representative problems encompassing four of the six benchmark problem categories. The other two categories were related to jet noise and cavity noise. For the first time in this series of workshops, the computational results for the cavity noise problem were compared to experimental data. All the other problems had exact solutions, which are included in this report. The Workshop included a panel discussion by representatives of industry. The participants gave their views on the status of applying computational aeroacoustics to solve practical industry related problems and what issues need to be addressed to make CAA a robust design tool.

  9. A comparative approach for the investigation of biological information processing: An examination of the structure and function of computer hard drives and DNA

    PubMed Central

    2010-01-01

    Background: The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Methods: Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low-level formatting, (3) high-level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Results: Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high-level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological

  10. A comparative approach for the investigation of biological information processing: an examination of the structure and function of computer hard drives and DNA.

    PubMed

    D'Onofrio, David J; An, Gary

    2010-01-21

    The robust storage, updating and utilization of information are necessary for the maintenance and perpetuation of dynamic systems. These systems can exist as constructs of metal-oxide semiconductors and silicon, as in a digital computer, or in the "wetware" of organic compounds, proteins and nucleic acids that make up biological organisms. We propose that there are essential functional properties of centralized information-processing systems; for digital computers these properties reside in the computer's hard drive, and for eukaryotic cells they are manifest in the DNA and associated structures. Presented herein is a descriptive framework that compares DNA and its associated proteins and sub-nuclear structure with the structure and function of the computer hard drive. We identify four essential properties of information for a centralized storage and processing system: (1) orthogonal uniqueness, (2) low level formatting, (3) high level formatting and (4) translation of stored to usable form. The corresponding aspects of the DNA complex and a computer hard drive are categorized using this classification. This is intended to demonstrate a functional equivalence between the components of the two systems, and thus the systems themselves. Both the DNA complex and the computer hard drive contain components that fulfill the essential properties of a centralized information storage and processing system. The functional equivalence of these components provides insight into both the design process of engineered systems and the evolved solutions addressing similar system requirements. However, there are points where the comparison breaks down, particularly when there are externally imposed information-organizing structures on the computer hard drive. A specific example of this is the imposition of the File Allocation Table (FAT) during high level formatting of the computer hard drive and the subsequent loading of an operating system (OS). Biological systems do not have an

  11. Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2004-01-01

    This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved, ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges of accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with comparisons to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows. Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a

  12. Computing Evans functions numerically via boundary-value problems

    NASA Astrophysics Data System (ADS)

    Barker, Blake; Nguyen, Rose; Sandstede, Björn; Ventura, Nathaniel; Wahl, Colin

    2018-03-01

    The Evans function has been used extensively to study spectral stability of travelling-wave solutions in spatially extended partial differential equations. To compute Evans functions numerically, several shooting methods have been developed. In this paper, an alternative scheme for the numerical computation of Evans functions is presented that relies on an appropriate boundary-value problem formulation. Convergence of the algorithm is proved, and several examples, including the computation of eigenvalues for a multi-dimensional problem, are given. The main advantage of the scheme proposed here compared with earlier methods is that the scheme is linear and scalable to large problems.

  13. [Problem list in computer-based patient records].

    PubMed

    Ludwig, C A

    1997-01-14

    Computer-based clinical information systems are capable of effectively processing even large amounts of patient-related data. However, physicians depend on rapid access to summarized, clearly laid out data on the computer screen to inform themselves about a patient's current clinical situation. In introducing a clinical workplace system, we therefore transformed the problem list, which for decades has been used successfully in clinical information management, into an electronic equivalent and integrated it into the medical record. The table contains a concise overview of diagnoses and problems as well as related findings. Graphical information can also be integrated into the table, and an additional space is provided for a summary of planned examinations or interventions. The digital form of the problem list makes it possible to use the entire list or selected text elements for generating medical documents. Diagnostic terms for medical reports are transferred automatically to the corresponding documents. Computer technology has immense potential for the further development of problem list concepts. With multimedia applications, sound and images will be included in the problem list. Through hyperlinks, the problem list could become a central information board and table of contents of the medical record, thus serving as the starting point for database searches and supporting the user in navigating the medical record.

  14. A Cognitive Model for Problem Solving in Computer Science

    ERIC Educational Resources Information Center

    Parham, Jennifer R.

    2009-01-01

    According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…

  15. Analyzing the Effects of a Mathematics Problem-Solving Program, Exemplars, on Mathematics Problem-Solving Scores with Deaf and Hard-of-Hearing Students

    ERIC Educational Resources Information Center

    Chilvers, Amanda Leigh

    2013-01-01

    Researchers have noted that mathematics achievement for deaf and hard-of-hearing (d/hh) students has been a concern for many years, including the ability to problem solve. This quasi-experimental study investigates the use of the Exemplars mathematics program with students in grades 2-8 in a school for the deaf that utilizes American Sign Language…

  16. Tactile sensor is useful for estimating liver hardness and liver fibrosis compared with ultrasonography and computed tomography.

    PubMed

    Suzuki, Satoshi; Watanabe, Yohei; Yazawa, Takashi; Ishigame, Teruhide; Sassa, Motoki; Monma, Tomoyuki; Takawa, Tadashi; Kumamoto, Kensuke; Nakamura, Izumi; Ohoki, Shinji; Hatakeyama, Yuichi; Sakuma, Hiroshi; Ono, Toshiyuki; Omata, Sadao; Takenoshita, Seiichi

    2014-01-01

    We examined whether conventional ultrasonography (US) and computed tomography (CT) are useful for evaluating liver hardness and hepatic fibrosis by comparing their results with those obtained by a tactile sensor, using rats with liver fibrosis. We used 44 Wistar rats in which liver fibrosis was induced by intraperitoneal administration of thioacetamide. The CT and US values of each liver were measured before laparotomy. After laparotomy, a tactile sensor was used to measure liver hardness. We prepared Azan-stained sections of each excised liver specimen and calculated the degree of liver fibrosis (HFI: hepatic fibrosis index) by computed color image analysis. The stiffness values and HFI showed a positive correlation (r=0.690, p<0.001), as did the tactile values and HFI (r=0.709, p<0.001). In addition, the stiffness and tactile values correlated positively with each other (r=0.814, p<0.001). There was no correlation between the CT values and HFI, nor between the US values and HFI. We confirmed that it is difficult to evaluate liver hardness and HFI by CT or US examination, and conclude that, at present, a tactile sensor is a useful method for evaluating the HFI.

  17. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism, and have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: a superior performance of quantum annealers over classical algorithms on these problems may point to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  18. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule, which is able to regularize the solution while providing, at the same time, a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In the second case, expectation maximization shows accuracy comparable to Pixon image reconstruction with a notably reduced computational burden, and better fidelity to the measurements than CLEAN with comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
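
    The iteration described is, in essence, the classical multiplicative EM (Richardson-Lucy) update for Poisson count data; a minimal sketch of ours follows (RHESSI-specific modulation matrices and the paper's stopping rule are not reproduced).

        import numpy as np

        def em_poisson(A, y, n_iter=200):
            """Expectation maximization for y ~ Poisson(A @ x), x >= 0.
            A: (m, n) nonnegative response matrix (modulation patterns),
            y: (m,) observed counts. Positivity is preserved automatically
            because the update is multiplicative."""
            x = np.ones(A.shape[1])
            norm = A.sum(axis=0)                 # column sums, A^T 1
            for _ in range(n_iter):
                pred = A @ x                      # expected counts
                x *= (A.T @ (y / np.maximum(pred, 1e-12))) / norm
            return x

        # Toy example: two source 'pixels' observed through three patterns.
        A = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
        x_true = np.array([4.0, 1.0])
        y = A @ x_true                            # noiseless demo counts
        print(em_poisson(A, y))                   # -> approximately [4, 1]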

  19. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    The development of a computer problem solving system is reported that considers physical problems faced by an artificial robot moving around in a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and creation of an internal environmental model. The programming system used in constructing the problem solving system for the simulated robot and its simulated world environment is outlined together with the task that the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.

  20. Recycling potential of neodymium: the case of computer hard disk drives.

    PubMed

    Sprecher, Benjamin; Kleijn, Rene; Kramer, Gert Jan

    2014-08-19

    Neodymium, one of the more critically scarce rare earth metals, is often used in sustainable technologies. In this study, we investigate the potential contribution of neodymium recycling to reducing supply scarcity, with a case study on computer hard disk drives (HDDs). We first review the literature on neodymium production and recycling potential. From this review, we find that recycling of computer HDDs is currently the most feasible pathway toward large-scale recycling of neodymium, even though HDDs do not represent the largest application of neodymium. We then use a combination of dynamic modeling and empirical experiments to conclude that within the application of NdFeB magnets for HDDs, the potential for loop-closing is significant: up to 57% in 2017. However, compared to the total NdFeB production capacity, the recovery potential from HDDs is relatively small (in the 1-3% range). The distributed nature of neodymium use poses a significant challenge for its recycling.

  1. Adapting Experiential Learning to Develop Problem-Solving Skills in Deaf and Hard-of-Hearing Engineering Students

    ERIC Educational Resources Information Center

    Marshall, Matthew M.; Carrano, Andres L.; Dannels, Wendy A.

    2016-01-01

    Individuals who are deaf and hard-of-hearing (DHH) are underrepresented in science, technology, engineering, and mathematics (STEM) professions, and this may be due in part to their level of preparation in the development and retention of mathematical and problem-solving skills. An approach was developed that incorporates experiential learning and…

  2. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    DTIC Science & Technology

    2015-04-29

    Many hard graph-theoretic problems can be converted to a quadratic unconstrained binary optimization (QUBO) problem that uses 0/1-valued variables, and so QUBO formulations are often used…

  3. Computational Modeling Develops Ultra-Hard Steel

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Glenn Research Center's Mechanical Components Branch developed a spiral bevel or face gear test rig for testing thermal behavior, surface fatigue, strain, vibration, and noise; a full-scale, 500-horsepower helicopter main-rotor transmission testing stand; a gear rig that allows fundamental studies of the dynamic behavior of gear systems and gear noise; and a high-speed helical gear test for analyzing thermal behavior for rotorcraft. The test rig provides accelerated fatigue life testing for standard spur gears at speeds of up to 10,000 rotations per minute. The test rig enables engineers to investigate the effects of materials, heat treat, shot peen, lubricants, and other factors on the gear's performance. QuesTek Innovations LLC, based in Evanston, Illinois, recently developed a carburized, martensitic gear steel with an ultra-hard case using its computational design methodology, but needed to verify surface fatigue, lifecycle performance, and overall reliability. The Battelle Memorial Institute introduced the company to researchers at Glenn's Mechanical Components Branch and facilitated a partnership allowing researchers at the NASA Center to conduct spur gear fatigue testing for the company. Testing revealed that QuesTek's gear steel outperforms the current state-of-the-art alloys used for aviation gears in contact fatigue by almost 300 percent. With the confidence and credibility provided by the NASA testing, QuesTek is commercializing two new steel alloys. Uses for this new class of steel are limitless in areas that demand exceptional strength for high throughput applications.

  4. Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun

    1997-01-01

    Category 1, Problems 1 and 2; Category 2, Problem 2; and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength are employed in the numerical computation, high-quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high-quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and, at the same time, allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.

  5. Assessment of computer-related health problems among post-graduate nursing students.

    PubMed

    Khan, Shaheen Akhtar; Sharma, Veena

    2013-01-01

    The study was conducted to assess computer-related health problems among post-graduate nursing students and to develop a Self-Instructional Module for prevention of computer-related health problems in a selected university situated in Delhi. A descriptive survey with a correlational design was adopted. A total of 97 subjects were selected from different faculties of Jamia Hamdard by multistage sampling with a systematic random sampling technique. Among the post-graduate students, the majority had average compliance with computer-related ergonomic principles. As regards computer-related health problems, the majority of post-graduate students had moderate problems; the Self-Instructional Module developed for prevention of computer-related health problems was found to be acceptable by the post-graduate students.

  6. Problems experienced by people with arthritis when using a computer.

    PubMed

    Baker, Nancy A; Rogers, Joan C; Rubinstein, Elaine N; Allaire, Saralynn H; Wasko, Mary Chester

    2009-05-15

    To describe the prevalence of computer use problems experienced by a sample of people with arthritis, and to determine differences in the magnitude of these problems among people with rheumatoid arthritis (RA), osteoarthritis (OA), and fibromyalgia (FM). Subjects were recruited from the Arthritis Network Disease Registry and asked to complete a survey, the Computer Problems Survey, which was developed for this study. Descriptive statistics were calculated for the total sample and the 3 diagnostic subgroups. Ordinal regressions were used to determine differences between the diagnostic subgroups with respect to each equipment item while controlling for confounding demographic variables. A total of 359 respondents completed a survey. Of the 315 respondents who reported using a computer, 84% reported a problem with computer use attributed to their underlying disorder, and approximately 77% reported some discomfort related to computer use. Equipment items most likely to account for problems and discomfort were the chair, keyboard, mouse, and monitor. Of the 3 subgroups, significantly more respondents with FM reported more severe discomfort, more problems, and greater limitations related to computer use than those with RA or OA for all 4 equipment items. Computer use is significantly affected by arthritis. This could limit the ability of a person with arthritis to participate in work and home activities. Further study is warranted to delineate disease-related limitations and develop interventions to reduce them.

  7. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  8. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Penalty alternating least squares (P-ALS) extension to multi-way problems.

    PubMed

    Richards, Selena; Miller, Robert; Gemperline, Paul

    2008-02-01

    An extension to the penalty alternating least squares (P-ALS) method, called multi-way penalty alternating least squares (NWAY P-ALS), is presented. Optionally, hard constraints (no deviation from predefined constraints) or soft constraints (small deviations from predefined constraints) were applied through a row-wise penalty least squares function. NWAY P-ALS was applied to multi-batch near-infrared (NIR) data acquired from the base-catalyzed esterification of acetic anhydride in order to resolve the concentration and spectral profiles of 1-butanol and the reaction constituents. Application of the NWAY P-ALS approach reduced the number of active constraints at the solution point, while the batch column-wise augmentation allowed hard constraints in the spectral profiles and resolved rank-deficiency problems of the measurement matrix. The results were compared with multi-way multivariate curve resolution (MCR)-ALS results using hard and soft constraints to determine whether any advantage had been gained through the weighted least squares function of NWAY P-ALS over MCR-ALS resolution.

  9. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

    A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made to a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path may contain arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.

  10. Decision making and problem solving with computer assistance

    NASA Technical Reports Server (NTRS)

    Kraiss, F.

    1980-01-01

    In modern guidance and control systems, the human as manager, supervisor, decision maker, problem solver and troubleshooter often has to cope with a marginal mental workload. To improve this situation, computers should be used to relieve the operator of mental stress. This should not be done solely by increased automation, but by a reasonable sharing of tasks in a human-computer team, where the computer supports human intelligence. Recent developments in this area are summarized. It is shown that interactive support of the operator by an intelligent computer is feasible during information evaluation, decision making and problem solving. The applied artificial intelligence algorithms comprise pattern recognition and classification, adaptation and machine learning, as well as dynamic and heuristic programming. Elementary examples are presented to explain the basic principles.

  11. Computing sparse derivatives and consecutive zeros problem

    NASA Astrophysics Data System (ADS)

    Chandra, B. V. Ravi; Hossain, Shahadat

    2013-02-01

    We describe a substitution-based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that on the test problems the performance of nearest-neighbor-type heuristic algorithms is highly encouraging.
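
    A minimal sketch of the compression step (ours, not the authors' code): two columns of the sparsity pattern can share a color, and hence one seed vector, when they have no nonzero in a common row.

        def greedy_column_coloring(pattern):
            """Greedily color columns of a 0/1 sparsity pattern so that
            columns sharing a color have no nonzero in any common row;
            same-colored columns compress into a single seed vector."""
            n_rows, n_cols = len(pattern), len(pattern[0])
            rows_of = [{i for i in range(n_rows) if pattern[i][j]}
                       for j in range(n_cols)]
            color = [None] * n_cols
            occupied = []                     # rows already hit per color
            for j in range(n_cols):
                for c, rows in enumerate(occupied):
                    if not (rows & rows_of[j]):   # no common nonzero row
                        color[j] = c
                        rows |= rows_of[j]
                        break
                else:
                    color[j] = len(occupied)      # open a new color
                    occupied.append(set(rows_of[j]))
            return color

        # Tridiagonal 4x4 pattern compresses from 4 columns to 3 seeds.
        p = [[1,1,0,0],[1,1,1,0],[0,1,1,1],[0,0,1,1]]
        print(greedy_column_coloring(p))  # -> [0, 1, 2, 0]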

  12. AI tools in computer based problem solving

    NASA Technical Reports Server (NTRS)

    Beane, Arthur J.

    1988-01-01

    The use of computers to solve value-oriented, deterministic, algorithmic problems has evolved a structured life-cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low-cost, high-performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples of both the development and deployment of applications involving a blending of AI and traditional techniques are given.

  13. [Computer-assisted phacoemulsification for hard cataracts].

    PubMed

    Zemba, M; Papadatu, Adriana-Camelia; Sîrbu, Laura-Nicoleta; Avram, Corina

    2012-01-01

    To evaluate the efficiency of new torsional phacoemulsification software (the Ozil IP system) in hard-nucleus cataract extraction. 45 eyes with hard senile cataract (grades III and IV) underwent phacoemulsification performed by the same surgeon, using the same technique (stop and chop). The Infiniti (Alcon) platform was used, with Ozil IP software and a mini-flared 45-degree Kelman phaco tip. The nucleus was split into two; the first half was then phacoemulsified with IP on (group 1) and the second half with IP off (group 2). For every group we measured cumulative dissipated energy (CDE), the number of tip occlusions that needed manual desobstruction, and the amount of BSS used. The mean CDE was the same in group 1 and group 2 (between 6.2 and 14.9). The incidence of occlusions needing manual desobstruction was lower in group 1 (5 times) than in group 2 (13 times). Group 2 used more BSS than group 1. The new torsional software (the IP system) significantly decreased occlusion time and balanced salt solution use compared with standard torsional software, particularly with denser cataracts.

  14. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    Continuing research is reported in a program aimed at the development of a robot computer problem solving system. The motivation for, and results of, a theoretical investigation concerning the general properties of behavioral systems are described. Some of the important issues that a general theory of behavioral organization should encompass are outlined and discussed.

  15. Computer Programming: A Medium for Teaching Problem Solving.

    ERIC Educational Resources Information Center

    Casey, Patrick J.

    1997-01-01

    Argues that including computer programming in the curriculum as a medium for instruction is a feasible alternative for teaching problem solving. Discusses the nature of problem solving; the problem-solving elements of discovery, motivation, practical learning situations and flexibility which are inherent in programming; capabilities of computer…

  16. The checkpoint ordering problem

    PubMed Central

    Hungerländer, P.

    2017-01-01

    We suggest a new variant of a row layout problem: find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is of both theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the COP and its special cases. The general version of the COP with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the COP. Our computational experiments indicate that the COP is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the lengths of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
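
    A brute-force reading of the objective (our sketch; the paper's dynamic program and ILP are not reproduced): departments are laid out consecutively on a line, and we assume distance is measured from each department's center to the checkpoint, which is one natural convention.

        from itertools import permutations

        def cop_value(order, lengths, weights, checkpoint):
            """Weighted sum of center-to-checkpoint distances for
            departments laid out consecutively from position 0."""
            pos, total = 0.0, 0.0
            for d in order:
                center = pos + lengths[d] / 2.0
                total += weights[d] * abs(center - checkpoint)
                pos += lengths[d]
            return total

        def cop_brute_force(lengths, weights, checkpoint):
            """Exhaustive search over all n! orderings; illustration only."""
            n = len(lengths)
            return min(permutations(range(n)),
                       key=lambda o: cop_value(o, lengths, weights, checkpoint))

        lengths, weights = [4.0, 2.0, 6.0], [1.0, 5.0, 2.0]
        print(cop_brute_force(lengths, weights, checkpoint=3.0))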

  17. Multiple grid problems on concurrent-processing computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.

    1986-01-01

    Three computer codes that make use of concurrent-processing computer architectures in computational fluid dynamics (CFD) were studied. The three parallel codes were tested on a two-processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming implicit approximate-factorization algorithm. This study demonstrates the parallelism found in a well-known scheme, which achieved speedups exceeding 1.9 on the two-processor MIMD test facility. The second code studied made use of an embedded grid scheme, which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate-factorization techniques to MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.

  18. Exact sampling hardness of Ising spin models

    NASA Astrophysics Data System (ADS)

    Fefferman, B.; Foss-Feig, M.; Gorshkov, A. V.

    2017-09-01

    We study the complexity of classically sampling from the output distribution of an Ising spin model, which can be implemented naturally in a variety of atomic, molecular, and optical systems. In particular, we construct a specific example of an Ising Hamiltonian that, after time evolution starting from a trivial initial state, produces a particular output configuration with probability very nearly proportional to the square of the permanent of a matrix with arbitrary integer entries. In a similar spirit to boson sampling, the ability to sample classically from the probability distribution induced by time evolution under this Hamiltonian would imply unlikely complexity-theoretic consequences, suggesting that the dynamics of such a spin model cannot be efficiently simulated with a classical computer. Physical Ising spin systems capable of achieving problem-size instances (i.e., qubit numbers) large enough that classical sampling of the output distribution is difficult in practice may be achievable in the near future. Unlike boson sampling, our current results only imply hardness of exact classical sampling, leaving open the important question of whether a much stronger approximate-sampling hardness result holds in this context. The latter is most likely necessary to enable a convincing experimental demonstration of quantum supremacy. As referenced in a recent paper [A. Bouland, L. Mancinska, and X. Zhang, in Proceedings of the 31st Conference on Computational Complexity (CCC 2016), Leibniz International Proceedings in Informatics (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, 2016)], our result completes the sampling hardness classification of two-qubit commuting Hamiltonians.
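
    The central quantity is the matrix permanent (standard definition, our transcription), whose exact computation is #P-hard even for 0/1 matrices:

        \[
          \operatorname{per}(A) \;=\; \sum_{\sigma \in S_n}
              \prod_{i=1}^{n} a_{i,\sigma(i)} ,
        \]

    so arranging for an output configuration to occur with probability nearly proportional to per(A)^2, for arbitrary integer matrices A, is what makes efficient exact classical sampling implausible.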

  19. Analysis of Hard Thin Film Coating

    NASA Technical Reports Server (NTRS)

    Shen, Dashen

    1998-01-01

    Marshall Space Flight Center (MSFC) is interested in developing hard thin film coatings for bearings. Bearing wear is an important problem for space flight engines. A hard thin film coating can drastically improve the surface of the bearing and its wear endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach uses Electron Cyclotron Resonance Chemical Vapor Deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.

  20. Analysis of Hard Thin Film Coating

    NASA Technical Reports Server (NTRS)

    Shen, Dashen

    1998-01-01

    MSFC is interested in developing hard thin film coatings for bearings. Bearing wear is an important problem for space flight engines. A hard thin film coating can drastically improve the surface of the bearing and its wear endurance. However, many fundamental problems in surface physics, plasma deposition, etc., need further research. The approach uses electron cyclotron resonance chemical vapor deposition (ECRCVD) to deposit hard thin films on stainless steel bearings. The thin films under consideration include SiC, SiN and other materials. An ECRCVD deposition system is being assembled at MSFC.

  1. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken were formulated in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  2. Combining Computational and Social Effort for Collaborative Problem Solving

    PubMed Central

    Wagy, Mark D.; Bongard, Josh C.

    2015-01-01

    Rather than replacing human labor, there is growing evidence that networked computers create opportunities for collaborations of people and algorithms to solve problems beyond the reach of either alone. In this study, we demonstrate the conditions under which such synergy can arise. We show that, for a design task, three elements are sufficient: humans apply intuitions to the problem, algorithms automatically determine and report back on the quality of designs, and humans observe and innovate on others’ designs to focus creative and computational effort on good designs. This study suggests how such collaborations should be composed for other domains, as well as how social and computational dynamics mutually influence one another during collaborative problem solving. PMID:26544199

  3. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with the high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of the code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^(-6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).

  4. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
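
    For a flavor of this problem class (a simplified variant, not Bokhari's Sum-Bottleneck path algorithm itself), consider partitioning a single chain of module weights into contiguous blocks across p processors so as to minimize the heaviest block. This classic chain-partitioning variant admits a compact binary-search solution:

      def chain_partition_bottleneck(weights, p):
          """Minimize the heaviest contiguous block when splitting a chain
          of integer module weights over p processors."""
          def feasible(cap):
              blocks, load = 1, 0
              for w in weights:
                  if w > cap:
                      return False
                  if load + w > cap:
                      blocks, load = blocks + 1, w  # start a new block
                  else:
                      load += w
              return blocks <= p

          lo, hi = max(weights), sum(weights)
          while lo < hi:  # binary search on the bottleneck value
              mid = (lo + hi) // 2
              if feasible(mid):
                  hi = mid
              else:
                  lo = mid + 1
          return lo

      print(chain_partition_bottleneck([4, 2, 7, 1, 5, 3], 3))  # -> 8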

  5. Using Computer Simulations in Chemistry Problem Solving

    ERIC Educational Resources Information Center

    Avramiotis, Spyridon; Tsaparlis, Georgios

    2013-01-01

    This study is concerned with the effects of computer simulations of two novel chemistry problems on the problem solving ability of students. A control-experimental group research design, equalized by pair groups (n_Exp = n_Ctrl = 78), was used. The students had no previous experience of chemical practical work. Student…

  6. Solving a Hamiltonian Path Problem with a bacterial computer

    PubMed Central

    Baumgardner, Jordan; Acker, Karen; Adefuye, Oyinade; Crowley, Samuel Thomas; DeLoache, Will; Dickson, James O; Heard, Lane; Martens, Andrew T; Morton, Nickolaus; Ritter, Michelle; Shoecraft, Amber; Treece, Jessica; Unzicker, Matthew; Valencia, Amanda; Waters, Mike; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Eckdahl, Todd T

    2009-01-01

    Background The Hamiltonian Path Problem asks whether there is a route in a directed graph from a beginning node to an ending node, visiting each node exactly once. The Hamiltonian Path Problem is NP-complete, achieving surprising computational complexity with modest increases in size. This challenge has inspired researchers to broaden the definition of a computer. DNA computers have been developed that solve NP-complete problems. Bacterial computers can be programmed by constructing genetic circuits to execute an algorithm that is responsive to the environment and whose result can be observed. Each bacterium can examine a solution to a mathematical problem and billions of them can explore billions of possible solutions. Bacterial computers can be automated, made responsive to selection, and reproduce themselves so that more processing capacity is applied to problems over time. Results We programmed bacteria with a genetic circuit that enables them to evaluate all possible paths in a directed graph in order to find a Hamiltonian path. We encoded a three-node directed graph as DNA segments that were autonomously shuffled randomly inside bacteria by a Hin/hixC recombination system we previously adapted from Salmonella typhimurium for use in Escherichia coli. We represented nodes in the graph as linked halves of two different genes encoding red or green fluorescent proteins. Bacterial populations displayed phenotypes that reflected random ordering of edges in the graph. Individual bacterial clones that found a Hamiltonian path reported their success by fluorescing both red and green, resulting in yellow colonies. We used DNA sequencing to verify that the yellow phenotype resulted from genotypes that represented Hamiltonian path solutions, demonstrating that our bacterial computer functioned as expected. Conclusion We successfully designed, constructed, and tested a bacterial computer capable of finding a Hamiltonian path in a three-node directed graph. This proof
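
    For comparison, the computation the bacterial population performs can be stated classically in a few lines; this sketch enumerates node orderings of a small directed graph (the example graph is illustrative, not necessarily the paper's exact instance):

      from itertools import permutations

      def hamiltonian_paths(nodes, edges):
          """All orderings of the nodes in which each consecutive pair is an edge."""
          return [p for p in permutations(nodes)
                  if all((a, b) in edges for a, b in zip(p, p[1:]))]

      # Illustrative three-node directed graph.
      nodes = [1, 2, 3]
      edges = {(1, 2), (2, 3), (3, 1)}
      print(hamiltonian_paths(nodes, edges))  # [(1, 2, 3), (2, 3, 1), (3, 1, 2)]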

  7. Relevancy in Problem Solving: A Computational Framework

    ERIC Educational Resources Information Center

    Kwisthout, Johan

    2012-01-01

    When computer scientists discuss the computational complexity of, for example, finding the shortest path from building A to building B in some town or city, their starting point typically is a formal description of the problem at hand, e.g., a graph with weights on every edge where buildings correspond to vertices, routes between buildings to…
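
    The formalization sketched above (buildings as vertices, routes between buildings as weighted edges) turns "find the shortest path from building A to building B" into a textbook computation; a minimal sketch with an illustrative campus graph:

      import heapq

      def dijkstra(graph, start, goal):
          """Shortest-path distance in a weighted graph given as an adjacency dict."""
          dist = {start: 0}
          heap = [(0, start)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == goal:
                  return d
              if d > dist.get(u, float("inf")):
                  continue  # stale queue entry
              for v, w in graph.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return float("inf")

      campus = {"A": [("C", 2), ("D", 5)], "C": [("B", 6)], "D": [("B", 1)]}
      print(dijkstra(campus, "A", "B"))  # 6, via A -> D -> B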

  8. Fault-Tolerant, Radiation-Hard DSP

    NASA Technical Reports Server (NTRS)

    Czajkowski, David

    2011-01-01

    Commercial digital signal processors (DSPs) for use in high-speed satellite computers are challenged by the damaging effects of space radiation, mainly single event upsets (SEUs) and single event functional interrupts (SEFIs). Innovations have been developed for mitigating the effects of SEUs and SEFIs, enabling the use of very-high-speed commercial DSPs with improved SEU tolerances. Time-triple modular redundancy (TTMR) is a method of applying traditional triple modular redundancy on a single processor, exploiting the VLIW (very long instruction word) class of parallel processors. TTMR improves SEU rates substantially. SEFIs are solved by a SEFI-hardened core circuit, external to the microprocessor. It monitors the health of the processor, and if a SEFI occurs, forces the processor to return to performance through a series of escalating events. TTMR and hardened-core solutions were developed for both DSPs and reconfigurable field-programmable gate arrays (FPGAs). This includes advancement of TTMR algorithms for DSPs and reconfigurable FPGAs, plus a rad-hard, hardened-core integrated circuit that services both the DSP and FPGA. Additionally, a combined DSP and FPGA board architecture was fully developed into a rad-hard engineering product. This technology enables use of commercial off-the-shelf (COTS) DSPs in computers for satellite and other space applications, allowing rapid deployment at a much lower cost. Traditional rad-hard space computers are very expensive and typically have long lead times. These computers are either based on traditional rad-hard processors, which have extremely low computational performance, or triple modular redundant (TMR) FPGA arrays, which suffer from power and complexity issues. Even more frustrating is that the TMR arrays of FPGAs require a fixed, external rad-hard voting element, thereby causing them to lose much of their reconfiguration capability and, in some cases, suffer significant speed reduction. The benefits of COTS high

  9. Application of computational aero-acoustics to real world problems

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The application of computational aeroacoustics (CAA) to real-world problems is discussed, with the aim of assessing the various techniques involved. The applications are limited by the inability of computational resources to resolve the large range of scales involved in high-Reynolds-number flows. Possible simplifications are discussed. Problems remain to be solved in relation to the efficient use of the power of parallel computers and in the development of turbulence modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.

  10. Origin of the computational hardness for learning with binary synapses.

    PubMed

    Huang, Haiping; Kabashima, Yoshiyuki

    2014-11-01

    Through supervised learning in a binary perceptron one is able to classify an extensive number of random patterns by a proper assignment of binary synaptic weights. However, to find such assignments in practice is quite a nontrivial task. The relation between the weight space structure and the algorithmic hardness has not yet been fully understood. To this end, we analytically derive the Franz-Parisi potential for the binary perceptron problem by starting from an equilibrium solution of weights and exploring the weight space structure around it. Our result reveals the geometrical organization of the weight space; the weight space is composed of isolated solutions, rather than clusters of exponentially many close-by solutions. The pointlike clusters far apart from each other in the weight space explain the previously observed glassy behavior of stochastic local search heuristics.

  11. Quantum computation with coherent spin states and the close Hadamard problem

    NASA Astrophysics Data System (ADS)

    Adcock, Mark R. A.; Høyer, Peter; Sanders, Barry C.

    2016-04-01

    We study a model of quantum computation based on the continuously parameterized yet finite-dimensional Hilbert space of a spin system. We explore the computational powers of this model by analyzing a pilot problem we refer to as the close Hadamard problem. We prove that the close Hadamard problem can be solved in the spin system model with arbitrarily small error probability in a constant number of oracle queries. We conclude that this model of quantum computation is suitable for solving certain types of problems. The model is effective for problems where symmetries between the structure of the information associated with the problem and the structure of the unitary operators employed in the quantum algorithm can be exploited.

  12. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.

  13. Students' Activity in Computer-Supported Collaborative Problem Solving in Mathematics

    ERIC Educational Resources Information Center

    Hurme, Tarja-riitta; Jarvela, Sanna

    2005-01-01

    The purpose of this study was to analyse secondary school students' (N = 16) computer-supported collaborative mathematical problem solving. The problem addressed in the study was: What kinds of metacognitive processes appear during computer-supported collaborative learning in mathematics? Another aim of the study was to consider the applicability…

  14. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  15. Verifiable fault tolerance in measurement-based quantum computation

    NASA Astrophysics Data System (ADS)

    Fujii, Keisuke; Hayashi, Masahito

    2017-09-01

    Quantum systems, in general, cannot be simulated efficiently by a classical computer, and hence are useful for solving certain mathematical problems and simulating quantum many-body systems. This also implies, unfortunately, that verification of the output of the quantum systems is not so trivial, since predicting the output is exponentially hard. As another problem, quantum systems are very sensitive to noise and thus need error correction. Here, we propose a framework for verification of the output of fault-tolerant quantum computation in a measurement-based model. In contrast to existing analyses on fault tolerance, we do not assume any noise model on the resource state; instead, an arbitrary resource state is tested by using only single-qubit measurements to verify whether or not the output of measurement-based quantum computation on it is correct. Verifiability is provided by a constant-time repetition of the original measurement-based quantum computation in appropriate measurement bases. Since full characterization of quantum noise is exponentially hard for large-scale quantum computing systems, our framework provides an efficient way to practically verify the experimental quantum error correction.

  16. Solving subsurface structural problems using a computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witte, D.M.

    1987-02-01

    Until recently, the solution of subsurface structural problems has required a combination of graphical construction, trigonometry, time, and patience. Recent advances in software available for both mainframe and microcomputers now reduce the time and potential error of these calculations by an order of magnitude. Software for analysis of deviated wells, three point problems, apparent dip, apparent thickness, and the intersection of two planes, as well as the plotting and interpretation of these data can be used to allow timely and accurate exploration or operational decisions. The available computer software provides a set of utilities, or tools, rather than a comprehensive, intelligent system. The burden for selection of appropriate techniques, computation methods, and interpretations still lies with the explorationist user.

  17. Computational problems and signal processing in SETI

    NASA Technical Reports Server (NTRS)

    Deans, Stanley R.; Cullers, D. K.; Stauduhar, Richard

    1991-01-01

    The Search for Extraterrestrial Intelligence (SETI), currently being planned at NASA, will require that an enormous amount of data (on the order of 10^11 distinct signal paths for a typical observation) be analyzed in real time by special-purpose hardware. Even though the SETI system design is not based on maximum entropy and Bayesian methods (partly due to the real-time processing constraint), it is expected that enough data will be saved to be able to apply these and other methods off line where computational complexity is not an overriding issue. Interesting computational problems that relate directly to the system design for processing such an enormous amount of data have emerged. Some of these problems are discussed, along with the current status on their solution.

  18. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
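
    The invariant at the heart of this approach, that visual angles between projection rays are unaffected by camera rotation, is easy to verify numerically (an illustrative sketch rotating about the z-axis; rotations preserve dot products and norms, hence angles):

      import numpy as np

      def angle(u, v):
          """Angle between two 3D rays."""
          return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

      theta = 0.7  # arbitrary rotation about the z-axis
      R = np.array([[np.cos(theta), -np.sin(theta), 0],
                    [np.sin(theta),  np.cos(theta), 0],
                    [0,              0,             1]])

      u, v = np.array([1.0, 0.2, 1.0]), np.array([-0.3, 0.5, 1.0])
      print(angle(u, v), angle(R @ u, R @ v))  # identical up to rounding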

  19. Date Sensitive Computing Problems: Understanding the Threat

    DTIC Science & Technology

    1998-08-29

    equipment on Earth. It can also interfere with electromagnetic signals from such devices as cell phones, radio, television, and radar. By itself, the ... spacecraft. Debris from impacted satellites will add to the existing orbital debris problem, and could eventually cause damage to other satellites... Date Sensitive Computing Problems: Understanding the Threat. Aug. 17, 1998; revised Aug. 29, 1998. Prepared by: The National Crisis Response

  20. Double hard scattering without double counting

    NASA Astrophysics Data System (ADS)

    Diehl, Markus; Gaunt, Jonathan R.; Schönwald, Kay

    2017-06-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  1. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W. (Editor); Hardin, J. C. (Editor)

    1997-01-01

    The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.

  2. Students' Mathematics Word Problem-Solving Achievement in a Computer-Based Story

    ERIC Educational Resources Information Center

    Gunbas, N.

    2015-01-01

    The purpose of this study was to investigate the effect of a computer-based story, which was designed in an anchored instruction framework, on sixth-grade students' mathematics word problem-solving achievement. Problems were embedded in a story presented on a computer as a computer story, and then compared with the paper-based version of the same story…

  3. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ and a denominator bound N ∈ ℕ_+. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ_+ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.
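
    Note the asymmetry these hardness results exploit: verifying a candidate denominator is trivial, while finding one seems to require search. A sketch of both steps (the search is brute force, hence exponential in the bit length of N):

      def good_denominator(alpha, eps, Q):
          """Does Q bring every Q*alpha_i to within eps of an integer?"""
          return all(abs(Q * a - round(Q * a)) <= eps for a in alpha)

      def smallest_denominator(alpha, eps, N):
          """Brute-force search over 1..N, consistent with the
          NP-completeness discussed above."""
          return next((Q for Q in range(1, N + 1)
                       if good_denominator(alpha, eps, Q)), None)

      print(smallest_denominator([0.333, 0.25], eps=0.01, N=100))  # -> 12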

  4. Decision and function problems based on boson sampling

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Georgios M.; Brougham, Thomas

    2016-07-01

    Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource for decision and function problems that are computationally hard, and may thus have cryptographic applications. After the definition of a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of non-boson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.

  5. Using Functional Assessment to Treat Behavior Problems of Deaf and Hard of Hearing Children Diagnosed with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Zane, Thomas; Carlson, Mark; Estep, David; Quinn, Mike

    2013-01-01

    A defining feature of autism spectrum disorders is atypical behaviors, e.g., stereotypy, noncompliance, rituals, and aggression. Deaf and hard of hearing individuals with autism present a greater challenge because of additional issues related to their hearing status. One conceptualization of problem behavior is that it serves a communication…

  6. A neuropsychoanalytical approach to the hard problem of consciousness.

    PubMed

    Solms, Mark

    2014-06-01

    A neuropsychoanalytical approach to the hard problem of consciousness revolves around the distinction between the subject of consciousness and objects of consciousness. In contrast to the mainstream of cognitive science, neuropsychoanalysis prioritizes the subject. The subject of consciousness is the indispensable page upon which consciousness of objects is inscribed. This has implications for our conception of the mental. The subjective being of consciousness is not registered in the classical exteroceptive modalities; it is not merely a cognitive representation, not only a memory trace. Rather, the exteroceptive modalities are registered in the subjective being. Cognitive representations are mental solids embedded within subjectivity, the tangible and visible (etc.) properties of which are projected onto reality. It is important to recognize that mental solids (e.g., the body-as-object) are no more real than the subjective being they are inscribed in (the body-as-subject). Moreover, pure subjectivity is not without content or quality. This aspect of consciousness is conventionally described quantitatively as the level of consciousness, or wakefulness. But it feels like something to be awake. The primary modality of this aspect of consciousness is affect. Affect supplies the subjectivity that underpins all consciousness. Some implications of this approach are discussed here, in broad brush strokes.

  7. Problem-Solving Models for Computer Literacy: Getting Smarter at Solving Problems. Student Lessons.

    ERIC Educational Resources Information Center

    Moursund, David

    This book is intended for use as a student guide. It is about human problem solving and provides information on how the mind works, placing a major emphasis on the role of computers as an aid in problem solving. The book is written with the underlying philosophy of discovery-based learning based on two premises: first, through the appropriate…

  8. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  9. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and

  10. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    PubMed

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
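
    As a schematic of the modelling pipeline described in the two records above (not the authors' code; the features and class labels below are synthetic stand-ins for the multibeam covariates and hardness classes), scikit-learn's random forest exposes the variable-importance scores on which FS methods such as VI and AVI build:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 6))                  # stand-in multibeam covariates
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in hardness class

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

      print("accuracy:", rf.score(X_te, y_te))
      print("variable importance:", rf.feature_importances_.round(3))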

  11. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molzahn, Daniel K.

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  12. Computing the Feasible Spaces of Optimal Power Flow Problems

    DOE PAGES

    Molzahn, Daniel K.

    2017-03-15

    The solution to an optimal power flow (OPF) problem provides a minimum cost operating point for an electric power system. The performance of OPF solution techniques strongly depends on the problem’s feasible space. This paper presents an algorithm that is guaranteed to compute the entire feasible spaces of small OPF problems to within a specified discretization tolerance. Specifically, the feasible space is computed by discretizing certain of the OPF problem’s inequality constraints to obtain a set of power flow equations. All solutions to the power flow equations at each discretization point are obtained using the Numerical Polynomial Homotopy Continuation (NPHC) algorithm. To improve computational tractability, “bound tightening” and “grid pruning” algorithms use convex relaxations to preclude consideration of many discretization points that are infeasible for the OPF problem. Here, the proposed algorithm is used to generate the feasible spaces of two small test cases.

  13. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  14. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems

    PubMed Central

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Background Today, due to developing communicative technologies, computer games and other audio-visual media, as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents results in public uncertainty about the plausible harmful effects of these games. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance school students. Methods This was a descriptive-correlative study on 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach’s Youth Self-Report (YSR). Findings The results of this study indicated that there was about a 95% direct significant correlation between the amount of playing games among adolescents and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students’ place of living and their parents’ job, and using computer games. Conclusion Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents. PMID:24494157

  15. A Study of the Correlation between Computer Games and Adolescent Behavioral Problems.

    PubMed

    Shokouhi-Moqhaddam, Solmaz; Khezri-Moghadam, Noshiravan; Javanmard, Zeinab; Sarmadi-Ansar, Hassan; Aminaee, Mehran; Shokouhi-Moqhaddam, Majid; Zivari-Rahman, Mahmoud

    2013-01-01

    Today, due to developing communicative technologies, computer games and other audio-visual media, as social phenomena, are very attractive and have a great effect on children and adolescents. The increasing popularity of these games among children and adolescents results in public uncertainty about the plausible harmful effects of these games. This study aimed to investigate the correlation between computer games and behavioral problems among male guidance school students. This was a descriptive-correlative study on 384 randomly chosen male guidance school students. They were asked to answer the researcher's questionnaire about computer games and Achenbach's Youth Self-Report (YSR). The results of this study indicated that there was about a 95% direct significant correlation between the amount of playing games among adolescents and anxiety/depression, withdrawn/depression, rule-breaking behaviors, aggression, and social problems. However, there was no statistically significant correlation between the amount of computer game usage and physical complaints, thinking problems, and attention problems. In addition, there was a significant correlation between the students' place of living and their parents' job, and using computer games. Computer games lead to anxiety, depression, withdrawal, rule-breaking behavior, aggression, and social problems in adolescents.

  16. Adapting Experiential Learning to Develop Problem-Solving Skills in Deaf and Hard-of-Hearing Engineering Students.

    PubMed

    Marshall, Matthew M; Carrano, Andres L; Dannels, Wendy A

    2016-10-01

    Individuals who are deaf and hard-of-hearing (DHH) are underrepresented in science, technology, engineering, and mathematics (STEM) professions, and this may be due in part to their level of preparation in the development and retention of mathematical and problem-solving skills. An approach was developed that incorporates experiential learning and best practices of STEM instruction to give first-year DHH students enrolled in a postsecondary STEM program the opportunity to develop problem-solving skills in real-world scenarios. Using an industrial engineering laboratory that provides manufacturing and warehousing environments, students were immersed in real-world scenarios in which they worked on teams to address prescribed problems encountered during the activities. The highly structured, Plan-Do-Check-Act approach commonly used in industry was adapted for the DHH student participants to document and communicate the problem-solving steps. Students who experienced the intervention realized a 14.6% improvement in problem-solving proficiency compared with a control group, and this gain was retained at 6 and 12 months, post-intervention.

  17. On computations of variance, covariance and correlation for interval data

    NASA Astrophysics Data System (ADS)

    Kishida, Masako

    2017-02-01

    In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
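
    For intuition about why interval statistics are combinatorial: the sample variance is a convex function of the data vector, so its maximum over the box of intervals is attained at a corner, and enumerating the 2^n endpoint combinations computes it exactly (the minimum needs more care, and the ν-based approach of the paper avoids this exponential blow-up). An illustrative sketch:

      from itertools import product
      from statistics import variance

      def max_variance(intervals):
          """Largest sample variance over interval data. The variance is
          convex in the data vector, so its maximum over the box sits at a
          corner; enumerating all 2^n endpoint combinations is exact."""
          return max(variance(x) for x in product(*intervals))

      data = [(1.0, 2.0), (2.5, 3.5), (1.5, 4.0)]
      print(max_variance(data))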

  18. A hybrid heuristic for the multiple choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd

    2013-08-01

    In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
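
    To pin down the structure of the problem (an illustrative brute force, not the hybrid heuristic of the article): exactly one item is picked per class, the sum of profits is maximized, and every resource dimension must respect its capacity.

      from itertools import product

      def mcmkp_brute_force(classes, capacities):
          """Exact solver for tiny instances of the multiple choice
          multidimensional knapsack: pick one (profit, weights) item per class."""
          best_value, best_pick = -1, None
          for pick in product(*classes):
              use = [sum(item[1][k] for item in pick) for k in range(len(capacities))]
              if all(u <= c for u, c in zip(use, capacities)):
                  value = sum(item[0] for item in pick)
                  if value > best_value:
                      best_value, best_pick = value, pick
          return best_value, best_pick

      # Two classes, two resource dimensions; items are (profit, (w1, w2)).
      classes = [[(6, (3, 2)), (4, (1, 1))],
                 [(5, (2, 4)), (3, (2, 1))]]
      print(mcmkp_brute_force(classes, capacities=(4, 4)))  # -> (7, ...)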

  19. Problems and accommodation strategies reported by computer users with rheumatoid arthritis or fibromyalgia.

    PubMed

    Baker, Nancy A; Rubinstein, Elaine N; Rogers, Joan C

    2012-09-01

    Little is known about the problems experienced by and the accommodation strategies used by computer users with rheumatoid arthritis (RA) or fibromyalgia (FM). This study (1) describes specific problems and accommodation strategies used by people with RA and FM during computer use; and (2) examines if there were significant differences in the problems and accommodation strategies between the different equipment items for each diagnosis. Subjects were recruited from the Arthritis Network Disease Registry. Respondents completed a self-report survey, the Computer Problems Survey. Data were analyzed descriptively (percentages; 95% confidence intervals). Differences in the number of problems and accommodation strategies were calculated using nonparametric tests (Friedman's test and Wilcoxon Signed Rank Test). Eighty-four percent of respondents reported at least one problem with at least one equipment item (RA = 81.5%; FM = 88.9%), with most respondents reporting problems with their chair. Respondents most commonly used timing accommodation strategies to cope with mouse and keyboard problems, personal accommodation strategies to cope with chair problems and environmental accommodation strategies to cope with monitor problems. The number of problems during computer use was substantial in our sample, and our respondents with RA and FM may not implement the most effective strategies to deal with their chair, keyboard, or mouse problems. This study suggests that workers with RA and FM might potentially benefit from education and interventions to assist with the development of accommodation strategies to reduce problems related to computer use.

  20. Automatically Generated Algorithms for the Vertex Coloring Problem

    PubMed Central

    Contreras Bolton, Carlos; Gatica, Gustavo; Parada, Víctor

    2013-01-01

    The vertex coloring problem is a classical problem in combinatorial optimization that consists of assigning a color to each vertex of a graph such that no adjacent vertices share the same color, minimizing the number of colors used. Despite the various practical applications that exist for this problem, its NP-hardness still represents a computational challenge. Some of the best computational results obtained for this problem are consequences of hybridizing the various known heuristics. Automatically searching the space constituted by combinations of these techniques to find the most adequate combination has received less attention. In this paper, we propose exploring the heuristics space for the vertex coloring problem using evolutionary algorithms. We automatically generate three new algorithms by combining elementary heuristics. To evaluate the new algorithms, a computational experiment was performed that allowed comparing them numerically with existing heuristics. The obtained algorithms present an average 29.97% relative error, while four other heuristics selected from the literature present a 59.73% error, considering 29 of the more difficult instances in the DIMACS benchmark. PMID:23516506
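
    One of the elementary heuristics such generated algorithms combine is greedy sequential coloring, which assigns each vertex the smallest color not used by its already-colored neighbors (illustrative sketch):

      def greedy_coloring(adj, order=None):
          """Color vertices in the given order with the smallest color
          not already used by a colored neighbor."""
          color = {}
          for v in order or sorted(adj):
              taken = {color[u] for u in adj[v] if u in color}
              color[v] = next(c for c in range(len(adj)) if c not in taken)
          return color

      # A 5-cycle needs 3 colors; greedy in natural order finds them.
      cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
      coloring = greedy_coloring(cycle5)
      print(coloring, "colors used:", len(set(coloring.values())))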

  1. Problems and Prospects in Foreign Language Computing.

    ERIC Educational Resources Information Center

    Pusack, James P.

    The problems and prospects of the field of foreign language computing are profiled through a survey of typical implementation, development, and research projects that language teachers may undertake. Basic concepts in instructional design, hardware, and software are first clarified. Implementation projects involving courseware evaluation, textbook…

  2. Hard-real-time resource management for autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Gat, E.

    2000-01-01

    This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.

  3. Molecular computation: RNA solutions to chess problems.

    PubMed

    Faulhammer, D; Cukras, A R; Lipton, R J; Landweber, L F

    2000-02-15

    We have expanded the field of "DNA computers" to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the "Knight problem," which asks, in general, which configurations of knights one can place on an n x n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 x 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine "bits" or placeholders in a combinatorial RNA library. We recovered a set of "winning" molecules that describe solutions to this problem.
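
    Classically, the 9-bit instance is a small enumeration: the sketch below checks all 2^9 = 512 knight placements on the 3 x 3 board against the pairs of squares a knight's move apart (pairs derived from the standard knight move; square indices are row-major):

      from itertools import product

      # Squares 0..8 row-major; pairs of squares a knight's move apart on 3x3.
      ATTACKS = [(0, 5), (0, 7), (1, 6), (1, 8), (2, 3), (2, 7), (3, 8), (5, 6)]

      def legal(board):
          """True if no two placed knights attack each other."""
          return not any(board[a] and board[b] for a, b in ATTACKS)

      solutions = [b for b in product((0, 1), repeat=9) if legal(b)]
      print(len(solutions))  # 94 legal configurations, counting the empty board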

  4. CSI: Hard Drive

    ERIC Educational Resources Information Center

    Sturgeon, Julie

    2008-01-01

    Acting on information from students who reported seeing a classmate looking at inappropriate material on a school computer, school officials used forensics software to plunge the depths of the PC's hard drive, searching for evidence of improper activity. Images were found in a deleted Internet Explorer cache as well as deleted file space.…

  5. Solution of a large hydrodynamic problem using the STAR-100 computer

    NASA Technical Reports Server (NTRS)

    Weilmuenster, K. J.; Howser, L. M.

    1976-01-01

    A representative hydrodynamics problem, the shock-initiated flow over a flat plate, was used for exploring data organizations and program structures needed to exploit the STAR-100 vector processing computer. A brief description of the problem is followed by a discussion of how each portion of the computational process was vectorized. Finally, timings of different portions of the program are compared with equivalent operations on serial machines. The speedup of the STAR-100 over the CDC 6600 is shown to increase as the problem size increases. All computations were carried out on a CDC 6600 and a CDC STAR-100, with code written in FORTRAN for the 6600 and in STAR FORTRAN for the STAR-100.

  6. Solving Quantum Ground-State Problems with Nuclear Magnetic Resonance

    PubMed Central

    Li, Zhaokai; Yung, Man-Hong; Chen, Hongwei; Lu, Dawei; Whitfield, James D.; Peng, Xinhua; Aspuru-Guzik, Alán; Du, Jiangfeng

    2011-01-01

    Quantum ground-state problems are computationally hard problems for general many-body Hamiltonians; there is no classical or quantum algorithm known to be able to solve them efficiently. Nevertheless, if a trial wavefunction approximating the ground state is available, as often happens for many problems in physics and chemistry, a quantum computer could employ this trial wavefunction to project the ground state by means of the phase estimation algorithm (PEA). We performed an experimental realization of this idea by implementing a variational-wavefunction approach to solve the ground-state problem of the Heisenberg spin model with an NMR quantum simulator. Our iterative phase estimation procedure yields a high accuracy for the eigenenergies (to a precision of 10^(-5)). The ground-state fidelity was distilled to be more than 80%, and the singlet-to-triplet switching near the critical field is reliably captured. This result shows that quantum simulators can better leverage classical trial wavefunctions than classical computers. PMID:22355607

  7. Analysis of the Optimum Receiver Design Problem Using Interactive Computer Graphics.

    DTIC Science & Technology

    1981-12-01

    AD-A115 498. Air Force Institute of Technology, Wright-Patterson AFB, OH. Analysis of the Optimum Receiver Design Problem Using Interactive Computer Graphics. Thesis, AFIT/GE/EE/81D-39, Michael R. Mazzuechi, Cpt, USA. Approved for public release; distribution unlimited.

  8. Solving satisfiability problems using a novel microarray-based DNA computer.

    PubMed

    Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning

    2007-01-01

    An algorithm based on a modified sticker model, accompanied by an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering both correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause per step and eventually solving the entire Boolean formula step by step. No time-consuming sample preparation procedures and no delicate sample applying equipment were required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions during the computing processes, so the proposed method should be useful in dealing with large-scale problems.
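
    The clause-at-a-time strategy has a direct classical analogue: grow a pool of partial assignments, extending each one just enough to satisfy the next clause (a sketch of the idea only, not the microarray chemistry):

      def solve_sat(clauses):
          """Grow partial assignments one clause per step, echoing the
          build-solutions-in-parts strategy. Literals: +i means variable i
          is true, -i means variable i is false."""
          partials = [{}]
          for clause in clauses:
              nxt = []
              for assign in partials:
                  for lit in clause:
                      var, val = abs(lit), lit > 0
                      if assign.get(var, val) == val:  # consistent extension
                          nxt.append({**assign, var: val})
              partials = nxt
          return partials

      # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
      print(solve_sat([[1, 2], [-1, 3], [-2, -3]]))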

  9. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    DTIC Science & Technology

    1991-06-01

    Interim Report: Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. Related publication: Proceedings of the National Conference on Artificial Intelligence, pages 181-184, American Association for Artificial Intelligence, Pittsburgh.

  10. Ultrasonic material hardness depth measurement

    DOEpatents

    Good, M.S.; Schuster, G.J.; Skorpik, J.R.

    1997-07-08

    The invention is an ultrasonic surface hardness depth measurement apparatus and method permitting rapid determination of hardness depth of shafts, rods, tubes and other cylindrical parts. The apparatus of the invention has a part handler, sensor, ultrasonic electronics component, computer, computer instruction sets, and may include a display screen. The part handler has a vessel filled with a couplant, and a part rotator for rotating a cylindrical metal part with respect to the sensor. The part handler further has a surface follower upon which the sensor is mounted, thereby maintaining a constant distance between the sensor and the exterior surface of the cylindrical metal part. The sensor is mounted so that a front surface of the sensor is within the vessel with couplant between the front surface of the sensor and the part. 12 figs.

  11. Computational open-channel hydraulics for movable-bed problems

    USGS Publications Warehouse

    Lai, Chintu; ,

    1990-01-01

    Notable advances have been made in numerical modeling of unsteady open-channel flow, a major branch of computational hydraulics, since the beginning of the computer age. Under the broader definition and scope of 'computational hydraulics,' the basic concepts and technology of modeling unsteady open-channel flow have been systematically studied previously. As a natural extension, computational open-channel hydraulics for movable-bed problems is addressed in this paper. The introduction of the multimode method of characteristics (MMOC) has made the modeling of this class of unsteady flows both practical and effective. New modeling techniques are developed, thereby shedding light on several aspects of computational hydraulics. Some special features of movable-bed channel-flow simulation are discussed here in the same order as given by the author in the fixed-bed case.

  12. Quantum Computation: Entangling with the Future

    NASA Technical Reports Server (NTRS)

    Jiang, Zhang

    2017-01-01

    Commercial applications of quantum computation have become viable owing to the field's rapid progress in recent years. Efficient quantum algorithms have been discovered to cope with the most challenging real-world problems that are too hard for classical computers. Manufactured quantum hardware has reached unprecedented precision and controllability, bringing fault-tolerant quantum computation within reach. Here, I give a brief introduction to the principles of quantum mechanics that promise its unparalleled computational power. I will discuss several important quantum algorithms that achieve exponential or polynomial speedup over any classical algorithm. Building a quantum computer is a daunting task, and I will talk about the criteria for, and various implementations of, quantum computers. I conclude the talk with near-future commercial applications of a quantum computer.

  13. Evoking Knowledge and Information Awareness for Enhancing Computer-Supported Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Engelmann, Tanja; Tergan, Sigmar-Olaf; Hesse, Friedrich W.

    2010-01-01

    Computer-supported collaboration by spatially distributed group members still involves interaction problems within the group. This article presents an empirical study investigating the question of whether computer-supported collaborative problem solving by spatially distributed group members can be fostered by evoking knowledge and information…

  14. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in simulations of three-dimensional metal matrix composite microvolume deformation, tens or hundreds of hours may be needed to complete the whole solution procedure, even on modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the spectrum of the problem's stiffness matrix. A series of computational experiments is therefore used to choose the appropriate method, and different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
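
    This kind of method-selection experiment can be prototyped at small scale before committing to a GPU implementation. A sketch with SciPy's Krylov solvers (the 2-D Laplacian below is an assumed stand-in for a stiffness matrix, chosen only because it is sparse and symmetric positive definite):

      import time
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Assumed stand-in for a stiffness matrix: a 2-D Laplacian (sparse, SPD).
      n = 60
      T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = sp.kronsum(T, T).tocsr()
      b = np.ones(A.shape[0])

      def run(name, method, **kw):
          t0 = time.perf_counter()
          x, info = method(A, b, **kw)
          dt = time.perf_counter() - t0
          res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
          print(f"{name:8s} time={dt:6.3f}s relres={res:.2e} info={info}")

      # CG suits symmetric positive definite operators; GMRES is more
      # general but costlier per iteration; an incomplete-LU preconditioner
      # reshapes the spectrum and typically cuts the iteration count.
      run("cg", spla.cg)
      run("gmres", spla.gmres)
      ilu = spla.spilu(A.tocsc())
      M = spla.LinearOperator(A.shape, ilu.solve)
      run("cg+ilu", spla.cg, M=M)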

  15. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
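
    The gap computation itself is easy to reproduce by brute force for a toy instance (Python/NumPy; the instance values, the transverse-field driver, and the fixed first spin are all assumptions of this sketch):

      import numpy as np

      # Random set partition instance (arbitrary values).
      rng = np.random.default_rng(1)
      a = rng.integers(1, 50, size=6).astype(float)

      # Fix the first spin to +1 to remove the global flip symmetry; the
      # remaining m spins are the quantum degrees of freedom.
      m = len(a) - 1
      dim = 2 ** m
      s = 1 - 2 * ((np.arange(dim)[:, None] >> np.arange(m)) & 1)  # +-1 rows
      HP = np.diag((a[0] + s @ a[1:]) ** 2)     # diagonal problem Hamiltonian
      X = np.array([[0.0, 1.0], [1.0, 0.0]])
      HB = sum(np.kron(np.kron(np.eye(2 ** i), -X), np.eye(2 ** (m - i - 1)))
               for i in range(m))               # transverse-field driver

      # Sweep H(t) = (1 - t) HB + t HP and record the ground-state gap; hard
      # instances show the minimum shrinking exponentially with problem size.
      gaps = []
      for t in np.linspace(0.0, 1.0, 201):
          ev = np.linalg.eigvalsh((1 - t) * HB + t * HP)
          gaps.append(ev[1] - ev[0])
      print(f"minimal gap = {min(gaps):.4f}")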

  16. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    NASA Astrophysics Data System (ADS)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs

  17. The Glass Computer

    ERIC Educational Resources Information Center

    Paesler, M. A.

    2009-01-01

    Digital computers use different kinds of memory, each of which is either volatile or nonvolatile. On most computers only the hard drive memory is nonvolatile, i.e., it retains all information stored on it when the power is off. When a computer is turned on, an operating system stored on the hard drive is loaded into the computer's memory cache and…

  18. Cavity method for force transmission in jammed disordered packings of hard particles.

    PubMed

    Bo, Lin; Mari, Romain; Song, Chaoming; Makse, Hernán A

    2014-10-07

    The force distribution of jammed disordered packings has always been considered a central object in the physics of granular materials. However, many of its features are poorly understood. In particular, analytic relations to other key macroscopic properties of jammed matter, such as the contact network and its coordination number, are still lacking. Here we develop a mean-field theory for this problem, based on the consideration of the contact network as a random graph where the force transmission becomes a constraint satisfaction problem. We can thus use the cavity method developed in the past few decades within the statistical physics of spin glasses and hard computer science problems. This method allows us to compute the force distribution P(f) for random packings of hard particles of any shape, with or without friction. We find a new signature of jamming in the small-force behavior P(f) ∼ f^θ, whose exponent has attracted recent active interest: we find a finite value for P(f = 0), along with θ = 0. Furthermore, we relate the force distribution to a lower bound of the average coordination number z̄(μ) of jammed packings of frictional spheres with coefficient μ. This bridges the gap between the two known isostatic limits z̄_c(μ = 0) = 2D (in dimension D) and z̄_c(μ → ∞) = D + 1 by extending the naive Maxwell counting argument to frictional spheres. The theoretical framework describes different types of systems, such as non-spherical objects in arbitrary dimensions, providing a common mean-field scenario to investigate force transmission, contact networks and coordination numbers of jammed disordered packings.

  19. Performance of a Group of Deaf and Hard-of-Hearing Students and a Comparison Group of Hearing Students on a Series of Problem-Solving Tasks.

    ERIC Educational Resources Information Center

    Luckner, John L.; McNeill, Joyce H.

    1994-01-01

    This study found that 43 school-age deaf and hard-of-hearing students did not perform as well as a matched group of hearing students on problem-solving tasks. As they got older, both groups made incremental gains in problem-solving ability, and the gap between groups narrowed. (Author/JDD)

  20. A Novel Approach to Hardness Testing

    NASA Technical Reports Server (NTRS)

    Spiegel, F. Xavier; West, Harvey A.

    1996-01-01

    This paper describes the application of a simple rebound-time measuring device for determining the relative hardness of a variety of common engineering metals. A relation between rebound time and hardness will be sought. The effects of geometry and surface condition will also be discussed in order to acquaint the student with the problems associated with this type of method.

  1. Computational nuclear quantum many-body problem: The UNEDF project

    NASA Astrophysics Data System (ADS)

    Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2013-10-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  2. A New Approach for Solving the Generalized Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Pop, P. C.; Matei, O.; Sabo, C.

    The generalized traveling salesman problem (GTSP) is an extension of the classical traveling salesman problem. The GTSP is known to be an NP-hard problem and has many interesting applications. In this paper we present a local-global approach for the generalized traveling salesman problem. Based on this approach we describe a novel hybrid metaheuristic algorithm for solving the problem using genetic algorithms. Computational results are reported for Euclidean TSPlib instances and compared with existing results. The obtained results indicate that our hybrid algorithm is an appropriate method to explore the search space of this complex problem and leads to good solutions in a reasonable amount of time.
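
    The local-global split is the instructive part: search globally over the order in which clusters are visited, and recover the best node within each cluster for a given order by dynamic programming on a layered graph. A minimal sketch (Python; the random instance, the equal cluster sizes, and the brute-force outer loop standing in for the genetic algorithm are all assumptions):

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      pts = rng.random((12, 2))
      clusters = [list(range(i, i + 3)) for i in range(0, 12, 3)]  # 4 x 3
      D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)     # distances

      def best_tour_cost(order):
          """'Local' phase: with the cluster order fixed, choose one node
          per cluster optimally by DP on the layered graph."""
          best = np.inf
          for start in clusters[order[0]]:     # tour returns to its start
              cost = {start: 0.0}
              for c in order[1:]:
                  cost = {v: min(cost[u] + D[u, v] for u in cost)
                          for v in clusters[c]}
              best = min(best, min(cost[u] + D[u, start] for u in cost))
          return best

      # 'Global' phase: search over cluster orders; brute force stands in
      # here for the paper's genetic algorithm.
      print(min(best_tour_cost((0,) + o)
                for o in itertools.permutations(range(1, len(clusters)))))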

  3. Distributed data fusion across multiple hard and soft mobile sensor platforms

    NASA Astrophysics Data System (ADS)

    Sinsley, Gregory

    is a younger field than centralized fusion. The main issues in distributed fusion that are addressed are distributed classification and distributed tracking. There are several well established methods for performing distributed fusion that are first reviewed. The chapter on distributed fusion concludes with a multiple unmanned vehicle collaborative test involving an unmanned aerial vehicle and an unmanned ground vehicle. The third issue this thesis addresses is that of soft sensor only data fusion. Soft-only fusion is a newer field than centralized or distributed hard sensor fusion. Because of the novelty of the field, the chapter on soft only fusion contains less background information and instead focuses on some new results in soft sensor data fusion. Specifically, it discusses a novel fuzzy logic based soft sensor data fusion method. This new method is tested using both simulations and field measurements. The biggest issue addressed in this thesis is that of combined hard and soft fusion. Fusion of hard and soft data is the newest area for research in the data fusion community; therefore, some of the largest theoretical contributions in this thesis are in the chapter on combined hard and soft fusion. This chapter presents a novel combined hard and soft data fusion method based on random set theory, which processes random set data using a particle filter. Furthermore, the particle filter is designed to be distributed across multiple robots and portable computers (used by human observers) so that there is no centralized failure point in the system. After laying out a theoretical groundwork for hard and soft sensor data fusion the thesis presents practical applications for hard and soft sensor data fusion in simulation. Through a series of three progressively more difficult simulations, some important hard and soft sensor data fusion capabilities are demonstrated. The first simulation demonstrates fusing data from a single soft sensor and a single hard sensor in

  4. Use of Computer-Based Case Studies in a Problem-Solving Curriculum.

    ERIC Educational Resources Information Center

    Haworth, Ian S.; And Others

    1997-01-01

    Describes the use of three case studies, on computer, to enhance problem solving and critical thinking among doctoral pharmacy students in a physical chemistry course. Students are expected to use specific computer programs, spreadsheets, electronic mail, molecular graphics, word processing, online literature searching, and other computer-based…

  5. Computational complexity in entanglement transformations

    NASA Astrophysics Data System (ADS)

    Chitambar, Eric A.

    In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. Our results are

  6. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm's efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.
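
    The rolling horizon idea can be shown in a few lines: optimize a short window of periods, freeze the earliest decision, and roll forward. In the sketch below (Python) the window subproblem is solved by brute force as a stand-in for the paper's accelerated Benders solver, and the toy costs, demands, and absence of opening/closing transition costs are all simplifying assumptions:

      import itertools

      T, sites, window = 6, 2, 2     # periods, candidate sites, window length
      open_cost  = [[40, 55], [42, 50], [45, 48], [47, 46], [50, 44], [52, 40]]
      serve_cost = [[3, 7], [3, 7], [5, 5], [5, 5], [7, 3], [7, 3]]
      demand     = [10, 12, 9, 14, 11, 13]

      def period_cost(t, y):
          """Cost of period t with open/closed pattern y; demand must be
          served by the cheapest open site."""
          if not any(y):
              return float("inf")
          serve = min(serve_cost[t][j] for j in range(sites) if y[j]) * demand[t]
          return sum(open_cost[t][j] * y[j] for j in range(sites)) + serve

      plan = []
      for t in range(T):
          span = range(t, min(t + window, T))
          # Optimize the window (brute force standing in for the accelerated
          # Benders subproblem solver), freeze period t, and roll forward.
          best = min(itertools.product([0, 1], repeat=sites * len(span)),
                     key=lambda ys: sum(period_cost(tt, ys[sites*k:sites*(k+1)])
                                        for k, tt in enumerate(span)))
          plan.append(best[:sites])

      print(plan, sum(period_cost(t, y) for t, y in enumerate(plan)))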

  7. Hard Sphere Simulation by Event-Driven Molecular Dynamics: Breakthrough, Numerical Difficulty, and Overcoming the issues

    NASA Astrophysics Data System (ADS)

    Isobe, Masaharu

    Hard sphere/disk systems are among the simplest models and have been used to address numerous fundamental problems in the field of statistical physics. The pioneering numerical works on the solid-fluid phase transition based on Monte Carlo (MC) and molecular dynamics (MD) methods published in 1957 represent historical milestones, which have had a significant influence on the development of computer algorithms and novel tools to obtain physical insights. This chapter addresses Alder's breakthrough works on hard sphere/disk simulation: (i) event-driven molecular dynamics, (ii) the long-time tail, (iii) the molasses tail, and (iv) two-dimensional melting/crystallization. From a numerical viewpoint, there are serious issues that must be overcome for further breakthroughs. Here, we present a brief review of recent progress in this area.
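
    The kernel of event-driven MD is the exact pairwise collision time: between collisions the spheres move ballistically, so the next event follows from a quadratic for when two centres come within one diameter. A minimal sketch (Python; the two-disk initial condition is an arbitrary example):

      import numpy as np

      def collision_time(r1, v1, r2, v2, d):
          """Earliest t > 0 with |(r2 - r1) + (v2 - v1) t| = d, i.e. the
          moment two hard spheres of diameter d touch; inf if they never do."""
          r, v = r2 - r1, v2 - v1
          b, vv = r @ v, v @ v
          if b >= 0.0:
              return np.inf                 # centres not approaching
          disc = b * b - vv * (r @ r - d * d)
          if disc < 0.0:
              return np.inf                 # approaching, but they miss
          return (-b - np.sqrt(disc)) / vv  # smaller root: first contact

      # Two disks on a near-collision course (arbitrary example).
      t = collision_time(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([3.0, 0.1]), np.array([-1.0, 0.0]), d=1.0)
      print(f"next event at t = {t:.4f}")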

  8. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.

  9. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1987-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.

  10. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    PubMed Central

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359

  11. New computer program solves wide variety of heat flow problems

    NASA Technical Reports Server (NTRS)

    Almond, J. C.

    1966-01-01

    Boeing Engineering Thermal Analyzer /BETA/ computer program uses numerical methods to provide accurate heat transfer solutions to a wide variety of heat flow problems. The program solves steady-state and transient problems in almost any situation that can be represented by a resistance-capacitance network.
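
    The resistance-capacitance abstraction reduces heat flow to the linear system C dT/dt = -G T + q, which both steady-state and transient solvers can attack directly. A minimal sketch (Python; the three-node network, its parameter values, and the ambient leak are assumptions for illustration, not BETA's input format):

      import numpy as np
      from scipy.linalg import expm

      # Three-node thermal RC network: L couples the nodes to each other,
      # and a small leak to an ambient at temperature 0 keeps G nonsingular.
      L = np.array([[ 2.0, -1.0, -1.0],
                    [-1.0,  1.5, -0.5],
                    [-1.0, -0.5,  1.5]])
      G = L + 0.2 * np.eye(3)            # total conductance matrix
      C = np.diag([1.0, 2.0, 0.5])       # heat capacities
      q = np.array([5.0, 0.0, 0.0])      # constant heat input at node 0

      T_ss = np.linalg.solve(G, q)       # steady state: G T = q

      # Transient from T(0) = 0 for C dT/dt = -G T + q, via the matrix
      # exponential of A = -C^{-1} G.
      A = -np.linalg.solve(C, G)
      T0 = np.zeros(3)
      for t in (0.5, 2.0, 10.0):
          T = T_ss + expm(A * t) @ (T0 - T_ss)
          print(f"t = {t:4.1f}  T = {np.round(T, 3)}")
      print("steady state:", np.round(T_ss, 3))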

  12. The noble gases: how their electronegativity and hardness determines their chemistry.

    PubMed

    Furtado, Jonathan; De Proft, Frank; Geerlings, Paul

    2015-02-26

    The establishment of an internally consistent scale of noble gas electronegativities is a long-standing problem. In the present study, the problem is attacked via the Mulliken definition, which in recent years has gained widespread use owing to its natural appearance in the context of conceptual density functional theory. The basic ingredients of this scale are the electron affinity and the ionization potential. Whereas the latter can be computed routinely, the instability of the anion makes the judicious choice of computational technique for evaluating electron affinities much more tricky. We opted for Puiatti's approach, extrapolating the energy of anions stabilized by high-ε solvents to the ε = 1 (gas phase) case. The results give negative electron affinity values, monotonically increasing (except for helium, which is an outlier in most of the story) to almost zero at eka-radon, in agreement with high-level calculations. The stability of the B3LYP results is successfully tested both by improving the level of theory (CCSD(T)) and by expanding the basis set. Combined with the ionization energies (in good agreement with experiment), an electronegativity scale is obtained displaying (1) a monotonic decrease of χ when going down the periodic table, (2) top values not for the noble gases but for the halogens, as opposed to most (extrapolation) procedures of existing scales, which invariably place the noble gases on top, and (3) noble gases having electronegativities close to the chalcogens. In the accompanying hardness scale (hardly, if ever, discussed in the literature) the noble gases turn out to be by far the hardest elements, again with a continuous decrease with increasing Z. Combining the χ values of the halogens and the noble gases, the Ng(δ+)F(δ-) bond polarity emerging from ab initio calculations naturally follows. In conclusion, the chemistry of the noble gases is for a large part determined by their extreme hardness, equivalent to a high resistance to change in its
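
    Both scales rest on simple finite-difference formulas from conceptual DFT: the Mulliken electronegativity χ = (I + A)/2 and the chemical hardness η = (I - A)/2. A toy computation (Python; the ionization energies are standard experimental values, while the negative noble-gas electron affinities are rough placeholder values of the kind the extrapolation scheme above would produce, not the paper's numbers):

      # Mulliken electronegativity chi = (I + A)/2 and chemical hardness
      # eta = (I - A)/2, both in eV.
      elements = {
          #       I (eV)  A (eV)
          "F":  (17.42,  3.40),   # experimental I and A
          "Cl": (12.97,  3.61),   # experimental I and A
          "Ne": (21.56, -1.2),    # A: rough negative placeholder (assumption)
          "Ar": (15.76, -1.0),    # A: rough negative placeholder (assumption)
      }
      for name, (I, A) in elements.items():
          chi, eta = (I + A) / 2, (I - A) / 2
          print(f"{name:2s}  chi = {chi:5.2f} eV   eta = {eta:5.2f} eV")
      # The pattern matches the paper's qualitative finding: the halogens top
      # the electronegativity scale, while the noble gases are the hardest.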

  13. Galois groups of Schubert problems via homotopy computation

    NASA Astrophysics Data System (ADS)

    Leykin, Anton; Sottile, Frank

    2009-09-01

    Numerical homotopy continuation of solutions to polynomial equations is the foundation for numerical algebraic geometry, whose development has been driven by applications of mathematics. We use numerical homotopy continuation to investigate the problem in pure mathematics of determining Galois groups in the Schubert calculus. For example, we show by direct computation that the Galois group of the Schubert problem of 3-planes in $\mathbb{C}^8$ meeting 15 fixed 5-planes non-trivially is the full symmetric group $S_{6006}$.

  14. Principles versus Artifacts in Computer Science Curriculum Design

    ERIC Educational Resources Information Center

    Machanick, Philip

    2003-01-01

    Computer Science is a subject which has difficulty in marketing itself. Further, pinning down a standard curriculum is difficult--there are many preferences which are hard to accommodate. This paper argues that part of the problem is that, unlike more established disciplines, the subject does not clearly distinguish the study of…

  15. Defining Information Needs of Computer Users: A Human Communication Problem.

    ERIC Educational Resources Information Center

    Kimbrough, Kenneth L.

    This exploratory investigation of the process of defining the information needs of computer users and the impact of that process on information retrieval focuses on communication problems. Six sites were visited that used computers to process data or to provide information, including the California Department of Transportation, the California…

  16. Computer Power. Part 2: Electrical Power Problems and Their Amelioration.

    ERIC Educational Resources Information Center

    Price, Bennett J.

    1989-01-01

    Describes electrical power problems that affect computer users, including spikes, sags, outages, noise, frequency variations, and static electricity. Ways in which these problems may be diagnosed and cured are discussed. Sidebars consider transformers; power distribution units; surge currents/linear and non-linear loads; and sizing the power…

  17. A parallel-machine scheduling problem with two competing agents

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya

    2017-06-01

    Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.

  18. A multidisciplinary approach to solving computer related vision problems.

    PubMed

    Long, Jennifer; Helland, Magne

    2012-09-01

    This paper proposes a multidisciplinary approach to solving computer-related vision issues by including optometry as part of the problem-solving team. Computer workstation design is increasing in complexity. There are at least ten different professions that contribute to workstation design or that provide advice to improve worker comfort, safety and efficiency. Optometrists have a role in identifying and solving computer-related vision issues and in prescribing appropriate optical devices. However, it is possible that advice given by optometrists to improve visual comfort may conflict with other requirements and demands within the workplace. A multidisciplinary approach has been advocated for solving computer-related vision issues. There are opportunities for optometrists to collaborate with ergonomists, who coordinate information from physical, cognitive and organisational disciplines to enact holistic solutions to problems. This paper proposes a model of collaboration and gives examples of successful partnerships at a number of professional levels, including individual relationships between optometrists and ergonomists when they have mutual clients/patients, in undergraduate and postgraduate education, and in research. There is also scope for dialogue between optometry and ergonomics professional associations. A multidisciplinary approach offers the opportunity to solve vision-related computer issues in a cohesive, rather than fragmented, way. Further exploration is required to understand the barriers to these professional relationships. © 2012 The College of Optometrists.

  19. Computer problem-solving coaches for introductory physics: Design and usability studies

    NASA Astrophysics Data System (ADS)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-06-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.

  20. Computers in medical education 1: evaluation of a problem-orientated learning package.

    PubMed

    Devitt, P; Palmer, E

    1998-04-01

    A computer-based learning package has been developed, aimed at expanding students' knowledge base as well as improving data-handling abilities and clinical problem-solving skills. The program was evaluated by monitoring its use by students, canvassing users' opinions and measuring its effectiveness as a learning tool compared to tutorials on the same material. Evaluation was undertaken using three methods: first, a questionnaire on computers as a learning tool and the applicability of the content; second, monitoring by the computer of student use, decisions and performance; and finally, pre- and post-test assessment of fifth-year students who either used the computer package or attended a tutorial on equivalent material. Most students provided positive comments on the learning material and expressed a willingness to see computer-aided learning (CAL) introduced into the curriculum. Over a 3-month period, 26 modules in the program were used on 1246 occasions. Objective measurement showed a significant gain in knowledge, data handling and problem-solving skills. Computer-aided learning is a valuable learning resource that deserves better attention in medical education. When used appropriately, the computer can be an effective learning resource, not only for the delivery of knowledge, but also to help students develop their problem-solving skills.

  1. Executive functions and behavioral problems in deaf and hard-of-hearing students at general and special schools.

    PubMed

    Hintermair, Manfred

    2013-01-01

    In this study, behavioral problems of deaf and hard-of-hearing (D/HH) school-aged children are discussed in the context of executive functioning and communicative competence. Teachers assessed the executive functions of a sample of 214 D/HH students from general schools and schools for the deaf, using a German version of the Behavior Rating Inventory of Executive Functions (BRIEF-D). This was complemented by a questionnaire that measured communicative competence and behavioral problems (German version of the Strengths and Difficulties Questionnaire; SDQ-D). The results in nearly all the scales show a significantly higher problem rate for executive functions in the group of D/HH students compared with a normative sample of hearing children. In the D/HH group, students at general schools had better scores on most scales than students at schools for the deaf. Regression analysis reveals the importance of executive functions and communicative competence for behavioral problems. The relevance of the findings for pedagogical work is discussed. A specific focus on competencies such as self-efficacy or self-control in educational concepts for D/HH students seems to be necessary in addition to extending language competencies.

  2. The benefits of computer-generated feedback for mathematics problem solving.

    PubMed

    Fyfe, Emily R; Rittle-Johnson, Bethany

    2016-07-01

    The goal of the current research was to better understand when and why feedback has positive effects on learning and to identify features of feedback that may improve its efficacy. In a randomized experiment, second-grade children received instruction on a correct problem-solving strategy and then solved a set of relevant problems. Children were assigned to receive no feedback, immediate feedback, or summative feedback from the computer. On a posttest the following day, feedback resulted in higher scores relative to no feedback for children who started with low prior knowledge. Immediate feedback was particularly effective, facilitating mastery of the material for children with both low and high prior knowledge. Results suggest that minimal computer-generated feedback can be a powerful form of guidance during problem solving. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Finding the probability of infection in an SIR network is NP-Hard

    PubMed Central

    Shapiro, Michael; Delgado-Eckert, Edgar

    2012-01-01

    It is the purpose of this article to review results that have long been known to communications network engineers and have direct application to epidemiology on networks. A common approach in epidemiology is to study the transmission of a disease in a population where each individual is initially susceptible (S), may become infective (I) and then removed or recovered (R), playing no further epidemiological role. Much of the recent work gives explicit consideration to the network of social interactions or disease-transmitting contacts and the attendant probability of transmission for each interacting pair. The state of such a network is an assignment of the values {S, I, R} to its members. Given such a network, an initial state and a particular susceptible individual, we would like to compute their probability of becoming infected in the course of an epidemic. It turns out that this and related problems are NP-hard. In particular, they belong to a class of problems for which no efficient solution algorithms are known. Moreover, finding an efficient algorithm for the solution of any problem in this class would entail a major breakthrough in theoretical computer science. PMID:22824138
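
    The hardness claim is easy to appreciate from the naive exact computation, which enumerates every pattern of transmissions. A sketch (Python; the toy contact network, the single uniform transmission probability, and the percolation-style reading of SIR spread are assumptions for illustration):

      from itertools import product

      # Toy contact network with a uniform transmission probability p
      # (the general problem allows a different probability per contact).
      edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
      p, source, target = 0.3, 0, 4

      def reaches(live):
          """Can infection travel from source to target along 'live' edges?"""
          seen, stack = {source}, [source]
          while stack:
              u = stack.pop()
              for a, b in live:
                  for x, y in ((a, b), (b, a)):
                      if x == u and y not in seen:
                          seen.add(y)
                          stack.append(y)
          return target in seen

      # Exact infection probability by summing over all 2^|E| transmission
      # outcomes: exponential work, as NP-hardness leads us to expect.
      prob = 0.0
      for mask in product([False, True], repeat=len(edges)):
          weight = 1.0
          for on in mask:
              weight *= p if on else 1 - p
          if reaches([e for e, on in zip(edges, mask) if on]):
              prob += weight
      print(f"P(node {target} infected) = {prob:.4f}")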

  4. Computer Assisted Problem Solving in an Introductory Statistics Course. Technical Report No. 56.

    ERIC Educational Resources Information Center

    Anderson, Thomas H.; And Others

    The computer assisted problem solving system (CAPS) described in this booklet administered "homework" problem sets designed to develop students' computational, estimation, and procedural skills. These skills were related to important concepts in an introductory statistics course. CAPS generated unique data, judged student performance,…

  5. Patch planting of hard spin-glass problems: Getting ready for the next generation of optimization approaches

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong; Mandrà, Salvatore; Katzgraber, Helmut

    We propose a patch planting heuristic that allows us to create arbitrarily large Ising spin-glass instances on any topology and with any type of disorder, where the exact ground-state energy of the problem is known by construction. By breaking up the problem into patches that can be treated with either exact or heuristic solvers, we can reconstruct the optimum of the original, considerably larger, problem. The scaling of the computational complexity of these instances with various patch numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and quantum annealing on the D-Wave 2X quantum annealer. The method can be useful for benchmarking novel computing technologies and algorithms. Supported by NSF Grant DMR-1208046 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via MIT Lincoln Laboratory Air Force Contract No. FA8721-05-C-0002.

  6. Sleep problems and computer use during work and leisure: Cross-sectional study among 7800 adults.

    PubMed

    Andersen, Lars Louis; Garde, Anne Helene

    2015-01-01

    Previous studies linked heavy computer use to disturbed sleep. This study investigates the association between computer use during work and leisure and sleep problems in working adults. From the 2010 round of the Danish Work Environment Cohort Study, currently employed wage earners on daytime schedule (N = 7883) replied to the Bergen insomnia scale and questions on weekly duration of computer use. Results showed that sleep problems for three or more days per week (average of six questions) were experienced by 14.9% of the respondents. Logistic regression analyses, controlled for gender, age, physical and psychosocial work factors, lifestyle, chronic disease and mental health showed that computer use during leisure for 30 or more hours per week (reference 0-10 hours per week) was associated with increased odds of sleep problems (OR 1.83 [95% CI 1.06-3.17]). Computer use during work and shorter duration of computer use during leisure were not associated with sleep problems. In conclusion, excessive computer use during leisure - but not work - is associated with sleep problems in adults working on daytime schedule.

  7. Software Systems for High-performance Quantum Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; Britt, Keith A

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  8. Hard Copy Market Overview

    NASA Astrophysics Data System (ADS)

    Testan, Peter R.

    1987-04-01

    A number of Color Hard Copy (CHC) market drivers are currently indicating strong growth in the use of CHC technologies for the business graphics marketplace. These market drivers relate to product, software, color monitors and color copiers. The use of color in business graphics allows more information to be relayed than is normally the case in a monochrome format. The communicative powers of full-color computer-generated output in the business graphics application area will continue to induce end users to desire and require color in their future applications. A number of color hard copy technologies will be utilized in the presentation graphics arena. Thermal transfer, ink jet, photographic and electrophotographic technologies are all expected to be utilized in the business graphics presentation application area in the future. Since the end of 1984, the availability of color application software packages has grown significantly. Sales revenue generated by business graphics software is expected to grow at a compound annual growth rate of just over 40 percent to 1990. Increased availability of packages to allow the integration of text and graphics is expected. Currently, the latest versions of page description languages such as Postscript, Interpress and DDL all support color output. The use of color monitors will also drive the demand for color hard copy in the business graphics marketplace. The availability of higher resolution screens is allowing color monitors to be easily used for both text and graphics applications in the office environment. During 1987, the sales of color monitors are expected to surpass the sales of monochrome monitors. Another major color hard copy market driver will be the color copier. In order to take advantage of the communications power of computer-generated color output, multiple copies are required for distribution. Product introductions of a new generation of color copiers are now underway, with additional introductions expected.

  9. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade-off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to

  10. Designing a fuzzy scheduler for hard real-time systems

    NASA Technical Reports Server (NTRS)

    Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami

    1992-01-01

    In hard real-time systems, tasks have to be performed not only correctly, but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the inherent nature of uncertainty in dynamic hard real-time systems increases the problems inherent in scheduling. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate searching for a feasible schedule. A set of fuzzy rules are proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found, and therefore, certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.

  11. Using Volunteer Computing to Study Some Features of Diagonal Latin Squares

    NASA Astrophysics Data System (ADS)

    Vatutin, Eduard; Zaikin, Oleg; Kochemazov, Stepan; Valyaev, Sergey

    2017-12-01

    This study concerns several features of diagonal Latin squares (DLSs) of small order. The authors suggest an algorithm for computing the minimal and maximal numbers of transversals of DLSs. According to this algorithm, all DLSs of a particular order are generated, and for each square all its transversals and diagonal transversals are constructed. The algorithm was implemented and applied to DLSs of order at most 7 on a personal computer. The experiment for order 8 was performed in the volunteer computing project Gerasim@home. In addition, the problem of finding pairs of orthogonal DLSs of order 10 was considered and reduced to the Boolean satisfiability problem. The resulting problem turned out to be very hard; it was therefore decomposed into a family of subproblems. In order to solve the problem, the volunteer computing project SAT@home was used. As a result, several dozen pairs of the described kind were found.
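
    Transversal counting is a pleasant brute-force exercise at small order: a transversal picks one cell per row and per column (a permutation from rows to columns) with all symbols pairwise distinct. A sketch (Python; the order-4 diagonal Latin square is one hand-checked example, and the diagonal-transversal variant, which imposes an extra condition, is omitted):

      from itertools import permutations

      # An order-4 diagonal Latin square: every row, every column, and both
      # main diagonals contain each symbol exactly once (hand-checked).
      square = [[0, 1, 2, 3],
                [2, 3, 0, 1],
                [3, 2, 1, 0],
                [1, 0, 3, 2]]
      n = len(square)

      # A transversal is a permutation sigma (row -> column) whose selected
      # cells carry pairwise distinct symbols; brute force is fine at order 4.
      count = sum(len({square[r][sigma[r]] for r in range(n)}) == n
                  for sigma in permutations(range(n)))
      print(f"{count} transversals")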

  12. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to a reduction in the computational time for finding the optimal solution. The method is further applied to the analysis of biological pathway data.
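
    The contrast between degree-based and betweenness-based splitting is easy to see with networkx (the bottleneck graph below is a contrived example; the paper's decomposition details are not reproduced):

      import networkx as nx

      # Two 6-cliques joined through a single bottleneck node (12): the
      # highest-degree nodes sit inside the cliques, but the node with the
      # highest betweenness centrality is the bottleneck itself.
      G = nx.Graph()
      G.add_edges_from((i, j) for i in range(6) for j in range(i + 1, 6))
      G.add_edges_from((i, j) for i in range(6, 12) for j in range(i + 1, 12))
      G.add_edges_from([(0, 12), (1, 12), (6, 12), (7, 12)])

      bc = nx.betweenness_centrality(G)
      deg = dict(G.degree)
      for name, v in (("degree", max(deg, key=deg.get)),
                      ("betweenness", max(bc, key=bc.get))):
          H = G.copy()
          H.remove_node(v)
          parts = sorted(len(c) for c in nx.connected_components(H))
          print(f"remove top-{name} node {v}: {len(parts)} component(s) {parts}")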

  13. Trading a Problem-solving Task

    NASA Astrophysics Data System (ADS)

    Matsubara, Shigeo

    This paper focuses on a task allocation problem, especially cases where the task is to find a solution to a search problem or a constraint satisfaction problem. If the search problem is hard to solve, a contractor may fail to find a solution. Here, the more computational resources (such as CPU time) the contractor invests in solving the search problem, the more likely a solution is to be found. This raises a new problem: the contractee has to find an appropriate quality level for task achievement as well as an efficient allocation of the task among contractors. For example, if the contractee asks the contractor to find a solution with certainty, the payment from the contractee to the contractor may exceed the contractee's benefit from obtaining a solution, which discourages the contractee from trading the task. Solving this problem is difficult, however, because the contractee can neither ascertain a contractor's problem-solving ability, such as the amount of available resources and knowledge (e.g., algorithms, heuristics), nor monitor what amount of resources is actually invested in solving the allocated task. To address this, we propose a task allocation mechanism that can choose an appropriate quality level for task achievement, and we prove that this mechanism guarantees that each contractor reveals its true information. Moreover, we show through computer simulation that our mechanism can increase the contractee's utility compared with a simple auction mechanism.

  14. Numerical solutions of acoustic wave propagation problems using Euler computations

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.

    1984-01-01

    This paper reports solution procedures for problems arising from the study of engine inlet wave propagation. The first problem is the study of sound waves radiated from cylindrical inlets. The second one is a quasi-one-dimensional problem to study the effect of nonlinearities and the third one is the study of nonlinearities in two dimensions. In all three problems Euler computations are done with a fourth-order explicit scheme. For the first problem results are shown in agreement with experimental data and for the second problem comparisons are made with an existing asymptotic theory. The third problem is part of an ongoing work and preliminary results are presented for this case.

  15. Computer Algebra Reexamination of the Scaled Particle Theory for Hard-Sphere and Lennard-Jones Fluids

    NASA Astrophysics Data System (ADS)

    Khasare, S. B.

    In the present work, an extension of the scaled particle theory (ESPT) for fluids is developed using computer algebra to obtain an equation of state (EOS) for the Lennard-Jones fluid. A suitable functional form for the surface tension $S(r,d,\epsilon)$ is assumed, with the intermolecular separation $r$ as a variable: $$S(r,d,\epsilon)=S_{0}\left[1+2\delta\,(d/r)^{m}\right],\qquad r\geq d/2,$$ where $m$ is an arbitrary real number, and $d$ and $\epsilon$ are related to physical properties such as an average (or otherwise suitable) molecular diameter and the binding energy of the molecule, respectively. It is found that for the hard-sphere fluid ($\epsilon = 0$), introducing the above assumption into the scaled particle theory (SPT) framework and choosing $m = 1/3$ yields an EOS in good agreement with molecular dynamics (MD) computer simulations. Furthermore, the value $m = -1$ gives the Percus-Yevick (pressure) EOS, and $m = 1$ corresponds to the Percus-Yevick (compressibility) EOS.

  16. The Ordered Clustered Travelling Salesman Problem: A Hybrid Genetic Algorithm

    PubMed Central

    Ahmed, Zakir Hussain

    2014-01-01

    The ordered clustered travelling salesman problem is a variation of the usual travelling salesman problem in which a set of vertices (except the starting vertex) of the network is divided into some prespecified clusters. The objective is to find the least-cost Hamiltonian tour in which the vertices of any cluster are visited contiguously and the clusters are visited in the prespecified order. The problem is NP-hard, and it arises in practical transportation and sequencing problems. This paper develops a hybrid genetic algorithm using sequential constructive crossover, 2-opt search, and a local search for obtaining a heuristic solution to the problem. The efficiency of the algorithm has been examined against two existing algorithms for some asymmetric and symmetric TSPLIB instances of various sizes. The computational results show that the proposed algorithm is very effective in terms of solution quality and computational time. Finally, we present solutions to some more symmetric TSPLIB instances. PMID:24701148
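
    Of the three components, the 2-opt step is the simplest to isolate: repeatedly reverse a segment of the tour whenever the reversal shortens it. A sketch on plain (unclustered) random points (Python; the crossover and cluster-ordering machinery of the paper is omitted):

      import numpy as np

      rng = np.random.default_rng(3)
      pts = rng.random((20, 2))
      D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

      def tour_len(t):
          return sum(D[t[i], t[(i + 1) % len(t)]] for i in range(len(t)))

      def two_opt(tour):
          """Keep reversing segments while some reversal shortens the tour."""
          tour = list(tour)
          improved = True
          while improved:
              improved = False
              for i in range(1, len(tour) - 1):
                  for j in range(i + 1, len(tour) + 1):
                      cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                      if tour_len(cand) < tour_len(tour) - 1e-12:
                          tour, improved = cand, True
          return tour

      start = list(range(len(pts)))
      print(f"{tour_len(start):.3f} -> {tour_len(two_opt(start)):.3f}")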

  17. Potential Health Impacts of Hard Water

    PubMed Central

    Sengupta, Pallav

    2013-01-01

    In the past five decades or so, evidence has been accumulating about an environmental factor that appears to influence mortality, in particular cardiovascular mortality: the hardness of the drinking water. In addition, several epidemiological investigations have demonstrated the relation between the risk for cardiovascular disease, growth retardation, reproductive failure and other health problems and the hardness of drinking water or its content of magnesium and calcium. In addition, the acidity of the water influences the reabsorption of calcium and magnesium in the renal tubule. Not only calcium and magnesium but also other constituents affect different health aspects. Thus, the present review attempts to explore the health effects of hard water and its constituents. PMID:24049611

  18. Potential health impacts of hard water.

    PubMed

    Sengupta, Pallav

    2013-08-01

    In the past five decades or so, evidence has been accumulating about an environmental factor that appears to influence mortality, in particular cardiovascular mortality: the hardness of the drinking water. In addition, several epidemiological investigations have demonstrated the relation between the risk for cardiovascular disease, growth retardation, reproductive failure and other health problems and the hardness of drinking water or its content of magnesium and calcium. In addition, the acidity of the water influences the reabsorption of calcium and magnesium in the renal tubule. Not only calcium and magnesium but also other constituents affect different health aspects. Thus, the present review attempts to explore the health effects of hard water and its constituents.

  19. Problem Solving and Computational Skill: Are They Shared or Distinct Aspects of Mathematical Cognition?

    PubMed Central

    Fuchs, Lynn S.; Fuchs, Douglas; Hamlett, Carol L.; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M.

    2009-01-01

    The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed. PMID:20057912

  20. The Quantum Measurement Problem and Physical reality: A Computation Theoretic Perspective

    NASA Astrophysics Data System (ADS)

    Srikanth, R.

    2006-11-01

    Is the universe computable? If so, is it computationally a polynomial place? In standard quantum mechanics, which permits infinite parallelism and the infinitely precise specification of states, a negative answer to both questions is not ruled out. On the other hand, empirical evidence suggests that NP-complete problems are intractable in the physical world. Likewise, computational problems known to be algorithmically uncomputable do not seem to be computable by any physical means. We suggest that this close correspondence between the efficiency and power of abstract algorithms on the one hand, and physical computers on the other, finds a natural explanation if the universe is assumed to be algorithmic; that is, that physical reality is the product of discrete sub-physical information processing equivalent to the actions of a probabilistic Turing machine. This assumption can be reconciled with the observed exponentiality of quantum systems at microscopic scales, and the consequent possibility of implementing Shor's quantum polynomial time algorithm at that scale, provided the degree of superposition is intrinsically, finitely upper-bounded. If this bound is associated with the quantum-classical divide (the Heisenberg cut), a natural resolution to the quantum measurement problem arises. From this viewpoint, macroscopic classicality is evidence that the universe is in BPP, and both questions raised above receive affirmative answers. A recently proposed computational model of quantum measurement, which relates the Heisenberg cut to the discreteness of Hilbert space, is briefly discussed. A connection to quantum gravity is noted. Our results are compatible with the philosophy that mathematical truths are independent of the laws of physics.

  1. Computer-aided programming for message-passing system; Problems and a solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M.Y.; Gajski, D.D.

    1989-12-01

    As the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. Program development tools are necessary since programmers are not able to develop complex parallel programs efficiently. Parallel models of computation, parallelization problems, and tools for computer-aided programming (CAP) are discussed. As an example, a CAP tool that performs scheduling and inserts communication primitives automatically is described. It also generates the performance estimates and other program quality measures to help programmers in improving their algorithms and programs.

  2. Physical Problems Associated with Computer Use and Implemented Ergonomic Measures.

    ERIC Educational Resources Information Center

    Alexander, Melody A.

    1994-01-01

    Survey responses from 404 (of 523) office support personnel showed that most used computers 3-6 hours per day and had experienced vision or musculoskeletal problems, but most did not see a doctor, take regular breaks, do stretching exercises, or discuss problems with their supervisors. Many were not aware of ergonomic features that could help, and…

  3. Some unsolved problems in discrete mathematics and mathematical cybernetics

    NASA Astrophysics Data System (ADS)

    Korshunov, Aleksei D.

    2009-10-01

    There are many unsolved problems in discrete mathematics and mathematical cybernetics. Writing a comprehensive survey of such problems involves great difficulties. First, such problems are rather numerous and varied. Second, they greatly differ from each other in degree of completeness of their solution. Therefore, even a comprehensive survey should not attempt to cover the whole variety of such problems; only the most important and significant problems should be reviewed. An impersonal choice of problems to include is quite hard. This paper includes 13 unsolved problems related to combinatorial mathematics and computational complexity theory. The problems selected give an indication of the author's studies for 50 years; for this reason, the choice of the problems reviewed here is, to some extent, subjective. At the same time, these problems are very difficult and quite important for discrete mathematics and mathematical cybernetics. Bibliography: 74 items.

  4. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.

  5. Parameterized Complexity of k-Anonymity: Hardness and Tractability

    NASA Astrophysics Data System (ADS)

    Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri

    The problem of publishing personal data without giving up privacy is becoming increasingly important. A precise formalization that has recently been proposed is k-anonymity, where the rows of a table are partitioned into clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First, we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
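
    The objective that the optimization version minimizes can be computed directly for a candidate clustering: within each cluster, every column on which the rows disagree must be suppressed in all of that cluster's rows. A minimal Python sketch with an invented binary table follows.

      def suppression_cost(rows, clusters, k=2):
          """Cost of a k-anonymous clustering: in each cluster, every column on
          which the rows disagree is suppressed in all of the cluster's rows."""
          cost = 0
          for cluster in clusters:
              assert len(cluster) >= k, "every cluster must have size at least k"
              group = [rows[i] for i in cluster]
              disagreeing = sum(len(set(col)) > 1 for col in zip(*group))
              cost += disagreeing * len(cluster)
          return cost

      rows = [(0, 1, 1), (0, 1, 0), (1, 0, 1), (1, 0, 1)]
      print(suppression_cost(rows, clusters=[[0, 1], [2, 3]]))  # -> 2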

  6. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
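
    One way to read the claim that "the problem becomes linear": for the class of linearly solvable MDPs popularized by this line of work, the exponentiated cost-to-go z = exp(-v) satisfies the linear fixed-point equation z = exp(-q) Pz in the first-exit setting, so no explicit search over actions is needed. The Python sketch below illustrates this on a tiny invented chain with an absorbing goal state; it is a simplification for illustration, not the paper's implementation.

      import numpy as np

      # Tiny first-exit problem: states 0..3, state 3 is the absorbing goal.
      q = np.array([1.0, 1.0, 1.0, 0.0])          # state costs (goal is free)
      P = np.array([[0.5, 0.5, 0.0, 0.0],         # passive (uncontrolled) dynamics
                    [0.5, 0.0, 0.5, 0.0],
                    [0.0, 0.5, 0.0, 0.5],
                    [0.0, 0.0, 0.0, 1.0]])

      z = np.ones(4)
      for _ in range(200):                          # z-iteration: z = exp(-q) * P z
          z_new = np.exp(-q) * (P @ z)
          z_new[3] = 1.0                            # boundary condition at the goal
          if np.max(np.abs(z_new - z)) < 1e-12:
              break
          z = z_new

      v = -np.log(z)                                # optimal cost-to-go
      u = P * z / (P @ z)[:, None]                  # optimal controlled transitions
      print(np.round(v, 3))
      print(np.round(u, 3))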

  7. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.

  8. A Computer Solution of the Parking Lot Problem.

    ERIC Educational Resources Information Center

    Rumble, Richard T.

    A computer program has been developed that will accept as inputs the physical description of a portion of land, and the parking design standards to be followed. The program will then give as outputs the numerical and graphical descriptions of the maximum-density parking lot for that portion of land. The problem has been treated as a standard…

  9. Computer Problem-Solving Coaches for Introductory Physics: Design and Usability Studies

    ERIC Educational Resources Information Center

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-01-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how…

  10. Hard Constraints in Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2008-01-01

    This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
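
    The central quantity is the size of the largest uncertainty set on which every hard constraint holds. The sketch below estimates that size for a hyper-sphere model by bisecting the radius and sampling parameter realizations; this sampling check is a crude stand-in for the paper's analytically verifiable assessments, and the constraint g is a made-up example.

      import numpy as np

      rng = np.random.default_rng(0)

      def g(p):
          """Example hard constraint g(p) <= 0 (made up): a linear requirement."""
          return p[0] + 2.0 * p[1] - 1.0

      def feasible_on_sphere(radius, p_nominal, n_samples=20000):
          """Check g <= 0 on sampled points of the hyper-sphere (an estimate only)."""
          d = rng.normal(size=(n_samples, p_nominal.size))
          d *= radius / np.linalg.norm(d, axis=1, keepdims=True)
          return np.all(g((p_nominal + d).T) <= 0.0)

      def largest_radius(p_nominal, lo=0.0, hi=10.0, tol=1e-4):
          while hi - lo > tol:                      # bisection on the sphere radius
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if feasible_on_sphere(mid, p_nominal) else (lo, mid)
          return lo

      p0 = np.array([-1.0, -1.0])                   # nominal parameter value
      print(round(largest_radius(p0), 3))           # compare against model radius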

  11. External Boundary Conditions for Three-Dimensional Problems of Computational Aerodynamics

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1997-01-01

    We consider an unbounded steady-state flow of viscous fluid over a three-dimensional finite body or configuration of bodies. For the purpose of solving this flow problem numerically, we discretize the governing equations (Navier-Stokes) on a finite-difference grid. The grid obviously cannot stretch from the body up to infinity, because the number of the discrete variables in that case would not be finite. Therefore, prior to the discretization we truncate the original unbounded flow domain by introducing some artificial computational boundary at a finite distance from the body. Typically, the artificial boundary is introduced in a natural way as the external boundary of the domain covered by the grid. The flow problem formulated only on the finite computational domain rather than on the original infinite domain is clearly subdefinite unless some artificial boundary conditions (ABC's) are specified at the external computational boundary. Similarly, the discretized flow problem is subdefinite (i.e., lacks equations with respect to unknowns) unless a special closing procedure is implemented at this artificial boundary. The closing procedure in the discrete case is called the ABC's as well. In this paper, we present an innovative approach to constructing highly accurate ABC's for three-dimensional flow computations. The approach extends our previous technique developed for the two-dimensional case; it employs the finite-difference counterparts to Calderon's pseudodifferential boundary projections calculated in the framework of the difference potentials method (DPM) by Ryaben'kii. The resulting ABC's appear spatially nonlocal but particularly easy to implement along with the existing solvers. The new boundary conditions have been successfully combined with the NASA-developed production code TLNS3D and used for the analysis of wing-shaped configurations in subsonic (including incompressible limit) and transonic flow regimes. As demonstrated by the computational experiments

  12. A set partitioning reformulation for the multiple-choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Voß, Stefan; Lalla-Ruiz, Eduardo

    2016-05-01

    The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
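
    For readers unfamiliar with the MMKP: exactly one object is chosen from each class, and the chosen objects must jointly respect every resource dimension. The brute-force Python sketch below solves a tiny invented instance exactly; it is only practical at toy scale and is unrelated to the paper's set-partitioning model, which targets realistic instances.

      from itertools import product

      # Toy MMKP: one (profit, resource-vector) list per class, invented data.
      classes = [
          [(6, (2, 1)), (4, (1, 1))],
          [(5, (2, 2)), (3, (1, 0))],
          [(7, (3, 1)), (2, (0, 2))],
      ]
      capacity = (5, 3)

      best = None
      for choice in product(*classes):              # one object from each class
          used = tuple(map(sum, zip(*(res for _, res in choice))))
          if all(u <= c for u, c in zip(used, capacity)):
              profit = sum(p for p, _ in choice)
              if best is None or profit > best[0]:
                  best = (profit, choice)

      print(best)                                   # -> (14, ((4,(1,1)), (3,(1,0)), (7,(3,1))))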

  13. Integer Linear Programming in Computational Biology

    NASA Astrophysics Data System (ADS)

    Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut

    Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.

  14. Computer Based Collaborative Problem Solving for Introductory Courses in Physics

    NASA Astrophysics Data System (ADS)

    Ilie, Carolina; Lee, Kevin

    2010-03-01

    We discuss collaborative problem solving in a computer-based recitation style. The course is designed by Lee [1], and the idea was proposed earlier by Christian, Belloni and Titus [2,3]. The students find the problems on a web page containing simulations (physlets) and write the solutions on an accompanying worksheet after discussing them with a classmate. Physlets have the advantage of being much more like real-world problems than textbook problems. We also compare two protocols for web-based instruction using simulations in an introductory physics class [1]. The inquiry protocol allowed students to control input parameters while the worked example protocol did not. We will discuss which of the two methods is more efficient in relation to Scientific Discovery Learning and Cognitive Load Theory. 1. Lee, Kevin M., Nicoll, Gayle and Brooks, Dave W. (2004). ``A Comparison of Inquiry and Worked Example Web-Based Instruction Using Physlets'', Journal of Science Education and Technology 13, No. 1: 81-88. 2. Christian, W., and Belloni, M. (2001). Physlets: Teaching Physics With Interactive Curricular Material, Prentice Hall, Englewood Cliffs, NJ. 3. Christian, W., and Titus, A. (1998). ``Developing web-based curricula using Java Physlets.'' Computers in Physics 12: 227-232.

  15. Internet computer coaches for introductory physics problem solving

    NASA Astrophysics Data System (ADS)

    Xu Ryan, Qing

    The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem-solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus based introductory mechanics course at the University of Minnesota. This dissertation will address the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework for conducting the experiments will be explained. The data collected from the large-scale experimental studies will be discussed from the following aspects: the usage and usability of these coaches; the usefulness perceived by students; and the usefulness measured by final exam scores and a problem-solving rubric. It will also address the implications drawn from this study, including how these data can direct future coach design, and the difficulties of conducting authentic assessment of problem solving.

  16. Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.

    2000-01-01

    The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks, and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computations. In environments with computers of different architecture, operating systems, CPU speed, memory size, load, and network speed, balancing the loads and managing the communication between processors becomes crucial. Load balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computation environment and algorithms. These tools are dynamic in nature because of changes in the computing environment during execution time. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.

  17. CHEMEX; Understanding and Solving Problems in Chemistry. A Computer-Assisted Instruction Program for General Chemistry.

    ERIC Educational Resources Information Center

    Lower, Stephen K.

    A brief overview of CHEMEX--a problem-solving, tutorial style computer-assisted instructional course--is provided and sample problems are offered. In CHEMEX, students receive problems in advance and attempt to solve them before moving through the computer program, which assists them in overcoming difficulties and serves as a review mechanism.…

  18. Substance Abuse: A Hidden Problem within the D/deaf and Hard of Hearing Communities

    ERIC Educational Resources Information Center

    Guthmann, Debra; Graham, Vicki

    2004-01-01

    Current research indicates that D/deaf and hard of hearing clients seeking treatment for substance abuse often encounter obstacles in receiving the help they need. Many of these obstacles are the result of a lack of knowledge and experience with regard to treating D/deaf and hard of hearing people. Programs designed for hearing people that attempt…

  19. Complexity and approximability for a problem of intersecting of proximity graphs with minimum number of equal disks

    NASA Astrophysics Data System (ADS)

    Kobylkin, Konstantin

    2016-10-01

    Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with a smallest-cardinality set of disks of fixed radius r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Although of interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported for special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is given for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.
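
    Cast as Hitting Set, each segment contributes the set of candidate disk centers whose r-neighbourhood covers it, and one seeks a minimum subfamily of centers hitting every set. The generic greedy sketch below (a logarithmic-factor approximation for general Hitting Set, not the paper's constant-factor geometric algorithm) illustrates the abstraction on invented data.

      def greedy_hitting_set(sets):
          """Repeatedly pick the element contained in the most not-yet-hit sets."""
          remaining = [set(s) for s in sets]
          chosen = []
          while remaining:
              universe = set().union(*remaining)
              best = max(universe, key=lambda e: sum(e in s for s in remaining))
              chosen.append(best)
              remaining = [s for s in remaining if best not in s]
          return chosen

      # Each set: candidate disk centers whose r-neighbourhood meets one segment.
      sets = [{"a", "b"}, {"b", "c"}, {"c", "d"}, {"b", "d"}]
      print(greedy_hitting_set(sets))               # e.g. ['b', 'c'] or ['b', 'd']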

  20. Design and implementation of reliability evaluation of SAS hard disk based on RAID card

    NASA Astrophysics Data System (ADS)

    Ren, Shaohua; Han, Sen

    2015-10-01

    Because of its huge advantages in storage, RAID technology has been widely used. A question associated with this technology, however, is that a hard disk behind a RAID card cannot be queried by the operating system. How to read the self-information and log data of such a hard disk has therefore been a problem, although these data are necessary for hard disk reliability tests. The traditional way of reading this information works only for SATA hard disks, not for SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's application program interface to communicate with the RAID card and analyze the feedback data, thereby obtaining the information necessary to assess the SAS hard disk.

  1. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated in the form of a constrained optimization problem. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process where one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for static, eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest development in the parallel-vector equation solver, PVSOLVE, into a widely popular finite-element production code, SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.

  2. Computational ecology as an emerging science

    PubMed Central

    Petrovskii, Sergei; Petrovskaya, Natalia

    2012-01-01

    It has long been recognized that numerical modelling and computer simulations can be used as a powerful research tool to understand, and sometimes to predict, the tendencies and peculiarities in the dynamics of populations and ecosystems. It has been, however, much less appreciated that the context of modelling and simulations in ecology is essentially different from those that normally exist in other natural sciences. In our paper, we review the computational challenges arising in modern ecology in the spirit of computational mathematics, i.e. with our main focus on the choice and use of adequate numerical methods. Somewhat paradoxically, the complexity of ecological problems does not always require the use of complex computational methods. This paradox, however, can be easily resolved if we recall that application of sophisticated computational methods usually requires clear and unambiguous mathematical problem statement as well as clearly defined benchmark information for model validation. At the same time, many ecological problems still do not have mathematically accurate and unambiguous description, and available field data are often very noisy, and hence it can be hard to understand how the results of computations should be interpreted from the ecological viewpoint. In this scientific context, computational ecology has to deal with a new paradigm: conventional issues of numerical modelling such as convergence and stability become less important than the qualitative analysis that can be provided with the help of computational techniques. We discuss this paradigm by considering computational challenges arising in several specific ecological applications. PMID:23565336

  3. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to solving computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and finds the minimum execution time of the last finished individual. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm to solve the task scheduling problem by basic DNA molecular operations. We design flexible-length DNA strands to represent elements of the allocation matrix, take appropriate biological experiment operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n^2) time complexity. Copyright © 2017. Published by Elsevier B.V.

  4. Human-computer interfaces applied to numerical solution of the Plateau problem

    NASA Astrophysics Data System (ADS)

    Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério

    2015-09-01

    In this work we present Matlab code to solve the Plateau problem numerically; the code includes a human-computer interface. The Plateau problem has applications in areas such as computer graphics. The solution method is the same as that of the Surface Evolver, but with the difference of a complete graphical interface for the user. This will enable us to implement other kinds of interfaces, such as ocular mouse, voice, and touch. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community; in particular, its use is practically impossible for most physically challenged people.

  5. Enhancing Digital Fluency through a Training Program for Creative Problem Solving Using Computer Programming

    ERIC Educational Resources Information Center

    Kim, SugHee; Chung, KwangSik; Yu, HeonChang

    2013-01-01

    The purpose of this paper is to propose a training program for creative problem solving based on computer programming. The proposed program will encourage students to solve real-life problems through a creative thinking spiral related to cognitive skills with computer programming. With the goal of enhancing digital fluency through this proposed…

  6. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
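
    A minimal sketch of the sum-of-outer-products idea under an l0 penalty: approximate Y by a sum of rank-one terms d_j c_j^T and cycle over atoms, updating each coefficient row by hard thresholding and each atom in closed form. This simplified illustration on random data is our own, not the authors' released implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      n, N, J, lam = 16, 200, 8, 0.5               # signal dim, signals, atoms, threshold
      Y = rng.normal(size=(n, N))

      D = rng.normal(size=(n, J))
      D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
      C = np.zeros((J, N))                          # sparse coefficient rows

      for _ in range(10):                           # block coordinate descent sweeps
          for j in range(J):
              E = Y - D @ C + np.outer(D[:, j], C[j])   # residual without atom j
              c = E.T @ D[:, j]
              c[np.abs(c) < lam] = 0.0              # closed-form l0 update (hard threshold)
              C[j] = c
              if np.any(c):
                  d = E @ c
                  D[:, j] = d / np.linalg.norm(d)   # closed-form atom update

      print(round(np.linalg.norm(Y - D @ C) / np.linalg.norm(Y), 3))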

  7. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

    PubMed Central

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111

  8. Computer Use and Behavior Problems in Twice-Exceptional Students

    ERIC Educational Resources Information Center

    Alloway, Tracy Packiam; Elsworth, Miquela; Miley, Neal; Seckinger, Sean

    2016-01-01

    This pilot study investigated how engagement with computer games and TV exposure may affect behaviors of gifted students. We also compared behavioral and cognitive profiles of twice-exceptional students and children with Attention Deficit/Hyperactivity Disorder (ADHD). Gifted students were divided into those with behavioral problems and those…

  9. A review on economic emission dispatch problems using quantum computational intelligence

    NASA Astrophysics Data System (ADS)

    Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.

    2016-11-01

    Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, limited natural resources, and global warming have put this topic at the center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving economic emission dispatch problems. QCI techniques such as the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This paper should encourage researchers to use QCI-based algorithms to obtain better optimal results when solving EED problems.

  10. Pricing and location decisions in multi-objective facility location problem with M/M/m/k queuing systems

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin

    2017-01-01

    This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and the non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.

  11. Effect of Computer-Presented Organizational/Memory Aids on Problem Solving Behavior.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    This research studied the effects of computer-presented organizational/memory aids on problem solving behavior. The aids were either matrix or verbal charts shown on the display screen next to the problem. The 104 college student subjects were randomly assigned to one of the four conditions: type of chart (matrix or verbal chart) and use of charts…

  12. Computer-Aided Group Problem Solving for Unified Life Cycle Engineering (ULCE)

    DTIC Science & Technology

    1989-02-01

    defining the problem, generating alternative solutions, evaluating alternatives, selecting alternatives, and implementing the solution. Systems...specialist in group dynamics, assists the group in formulating the problem and selecting a model framework. The analyst provides the group with computer...allocating resources, evaluating and selecting options, making judgments explicit, and analyzing dynamic systems. c. University of Rhode Island Drs. Geoffery

  13. Computer use problems and accommodation strategies at work and home for people with systemic sclerosis: a needs assessment.

    PubMed

    Baker, Nancy A; Aufman, Elyse L; Poole, Janet L

    2012-01-01

    We identified the extent of the need for interventions and assistive technology to prevent computer use problems in people with systemic sclerosis (SSc) and the accommodation strategies they use to alleviate such problems. Respondents were recruited through the Scleroderma Foundation. Twenty-seven people with SSc who used a computer and reported difficulty in working completed the Computer Problems Survey. All but one of the respondents reported at least one problem with at least one equipment type. The highest numbers of respondents reported problems with keyboards (88%) and chairs (85%). More than half reported discomfort in the past month associated with the chair, keyboard, and mouse. Respondents used a variety of accommodation strategies. Many respondents experienced problems and discomfort related to computer use. The characteristic symptoms of SSc may contribute to these problems. Occupational therapy interventions for computer use problems in clients with SSc need to be tested. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  14. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks.

    PubMed

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney J

    2018-05-01

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.

  15. Computational procedure for finite difference solution of one-dimensional heat conduction problems reduces computer time

    NASA Technical Reports Server (NTRS)

    Iida, H. T.

    1966-01-01

    Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.
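
    For context, the baseline that such procedures improve on is the explicit (forward-time, centered-space) finite-difference update for one-dimensional heat conduction; a minimal generic sketch follows. It is the textbook scheme, not the report's reduced procedure, and the grid and material values are arbitrary.

      import numpy as np

      L, n, alpha = 1.0, 21, 1.0                    # slab length, grid points, diffusivity
      dx = L / (n - 1)
      dt = 0.4 * dx**2 / alpha                      # below the stability limit of 0.5

      T = np.zeros(n)
      T[0] = 1.0                                    # heated surface, fixed temperature

      for _ in range(500):                          # explicit time stepping
          T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

      print(np.round(T[::5], 3))                    # sampled temperature profile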

  16. [Computer-assisted education in problem-solving in neurology; a randomized educational study].

    PubMed

    Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H

    1996-02-24

    To determine the effect of computer-based medical teaching (CBMT) as a supplementary method to teach clinical problem solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. 103 students were assigned at random to a group with access to CBMT and a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases and was continuously available for five weeks to students in the CBMT group. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem solving when used as a supplementary teaching facility during a clinical clerkship. The increased problem-solving ability learned through CBMT had no demonstrable effect on performance with other neurological problems.

  17. Examining Information Problem-Solving, Knowledge, and Application Gains within Two Instructional Methods: Problem-Based and Computer-Mediated Participatory Simulation

    ERIC Educational Resources Information Center

    Newell, Terrance S.

    2008-01-01

    This study compared the effectiveness of two instructional methods--problem-based instruction within a face-to-face context and computer-mediated participatory simulation--in increasing students' content knowledge and application gains in the area of information problem-solving. The instructional methods were implemented over a four-week period. A…

  18. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  19. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    ERIC Educational Resources Information Center

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  20. Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.

    2014-12-01

    Data assimilation is one of the ubiquitous and computationally hard problems in the Earth Sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QAC) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBFs) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; and (5) mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qubits (with 1024- and 2048-qubit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation, as the size of these computers grows in the coming years.
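
    Step (4), mapping a real-valued quadratic to a fixed-precision binary quadratic, can be illustrated in a few lines: encode x as a weighted sum of bits, expand the objective (x - a)^2 into a QUBO matrix using the identity b_i^2 = b_i, and minimize by brute force as a stand-in for the annealer. This toy encoding is our illustration, not the authors' compiler.

      import itertools
      import numpy as np

      a = 2.3                                       # minimize f(x) = (x - a)^2
      weights = np.array([2.0, 1.0, 0.5, 0.25])     # fixed-precision encoding weights

      # (sum_i w_i b_i - a)^2 = sum_ij w_i w_j b_i b_j - 2a sum_i w_i b_i + a^2;
      # since b_i^2 = b_i, the linear term folds onto the QUBO diagonal.
      Q = np.outer(weights, weights)
      Q[np.diag_indices_from(Q)] += -2.0 * a * weights

      best = min(itertools.product((0, 1), repeat=len(weights)),
                 key=lambda b: np.array(b) @ Q @ np.array(b))
      x = weights @ np.array(best)
      print(best, x)                                # closest representable x to 2.3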

  1. "Seeing It on the Screen Isn't Really Seeing It": Reading Problems of Writers Using Word Processing.

    ERIC Educational Resources Information Center

    Haas, Christina

    An observational study examined computer writers' use of hard copy for reading. The study begins with a description, based on interviews, of four kinds of reading problems encountered by writers using word processing; formatting, proofreading, reorganizing, and critical reading ("getting a sense of the text"). Subjects, six freshmen…

  2. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    NASA Astrophysics Data System (ADS)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has been mostly considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on Principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work on an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  3. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in case of transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery costs or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem that cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA) and hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and have a high risk of getting trapped in local optima. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  4. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in such networks. As a result, there has been recent interest in solving the bicriteria network optimization problem. The bicriteria network optimization problem is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
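
    In the priority-based encoding, a chromosome assigns each node a priority, and a path is decoded by repeatedly moving to the adjacent unvisited node of highest priority. The Python sketch below decodes one chromosome on a small invented digraph; selection, crossover, and the adaptive-weight machinery are omitted.

      def decode_path(priority, adj, source, sink):
          """Decode a priority vector into a source-sink path (Gen-Cheng style)."""
          path, node, visited = [source], source, {source}
          while node != sink:
              candidates = [v for v in adj[node] if v not in visited]
              if not candidates:
                  return None                       # dead end: chromosome infeasible
              node = max(candidates, key=lambda v: priority[v])
              path.append(node)
              visited.add(node)
          return path

      adj = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}   # small invented digraph
      priority = {0: 4, 1: 1, 2: 3, 3: 2}           # one chromosome
      print(decode_path(priority, adj, source=0, sink=3))   # -> [0, 2, 3]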

  5. Hard sphere packings within cylinders.

    PubMed

    Fu, Lin; Steinhardt, William; Zhao, Hao; Socolar, Joshua E S; Charbonneau, Patrick

    2016-03-07

    Arrangements of identical hard spheres confined to a cylinder with hard walls have been used to model experimental systems, such as fullerenes in nanotubes and colloidal wire assembly. Finding the densest configurations, called close packings, of hard spheres of diameter σ in a cylinder of diameter D is a purely geometric problem that grows increasingly complex as D/σ increases, and little is thus known about the regime for D > 2.873σ. In this work, we extend the identification of close packings up to D = 4.00σ by adapting Torquato-Jiao's adaptive-shrinking-cell formulation and sequential-linear-programming (SLP) technique. We identify 17 new structures, almost all of them chiral. Beyond D ≈ 2.85σ, most of the structures consist of an outer shell and an inner core that compete for being close packed. In some cases, the shell adopts its own maximum density configuration, and the stacking of core spheres within it is quasiperiodic. In other cases, an interplay between the two components is observed, which may result in simple periodic structures. In yet other cases, the very distinction between the core and shell vanishes, resulting in more exotic packing geometries, including some that are three-dimensional extensions of structures obtained from packing hard disks in a circle.

  6. How Children Solve Environmental Problems: Using Computer Simulations To Investigate Systems Thinking.

    ERIC Educational Resources Information Center

    Sheehy, N. P.; Wylie, J. W.; McGuinness, C.; Orchard, G.

    2000-01-01

    Describes the development and use of two computer simulations for investigating systems thinking and environmental problem-solving in children (n=92). Finds that older children outperformed younger children, who tended to exhibit magical thinking. Suggests that seemingly isomorphic environmental problems may not be interpreted as such by children.…

  7. Solving multiconstraint assignment problems using learning automata.

    PubMed

    Horn, Geir; Oommen, B John

    2010-02-01

    This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where elements that are "similar" to each other are, ideally, located in the same class. The literature reports solutions in which the similarity constraint consists of a single index, which is inappropriate for the type of multiconstraint problems considered here, where the constraints can simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel, grid, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem. First, a fixed-structure stochastic automata algorithm is presented, in which the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. The three VSSA algorithms model the processes as automata whose actions are, first, the hosting nodes and, second, the other processes; the third algorithm attempts to estimate the process communication digraph prior to probabilistically mapping the processes. This paper, which, we believe, comprehensively reports the…
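
    The variable-structure automata described above are typically driven by a linear reward-inaction (L_RI) update; a minimal sketch with a stand-in environment (the reward probabilities are illustrative, not the mapping problem itself):

        import random

        def l_ri_step(p, action, rewarded, lr=0.1):
            """L_RI rule: on reward, shift probability mass toward the chosen
            action; on penalty, leave the probability vector unchanged."""
            if rewarded:
                p = [(1 - lr) * pi for pi in p]
                p[action] += lr
            return p

        p = [0.5, 0.5]                        # e.g., two candidate hosting nodes
        for _ in range(1000):
            a = random.choices(range(2), weights=p)[0]
            rewarded = random.random() < (0.8 if a == 0 else 0.4)
            p = l_ri_step(p, a, rewarded)
        print(p)                              # mass concentrates on action 0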

  8. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems, the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture; the minor embedding problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth, based on the differential geometric concept of Ollivier-Ricci curvature. Because it runs in polynomial time, this approximation could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable, on the D-Wave architecture.

  9. Heuristic methods for the single machine scheduling problem with different ready times and a common due date

    NASA Astrophysics Data System (ADS)

    Birgin, Ernesto G.; Ronconi, Débora P.

    2012-10-01

    The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
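
    The objective such heuristics evaluate is straightforward to state in code; a minimal sketch (variable names and the toy instance are illustrative):

        def weighted_earliness_tardiness(sequence, ready, proc, alpha, beta, due):
            """Schedule jobs in the given order: each job starts when both the
            machine and the job are ready; sum weighted earliness/tardiness."""
            t, cost = 0, 0.0
            for j in sequence:
                t = max(t, ready[j]) + proc[j]        # completion time of job j
                cost += alpha[j] * max(0, due - t) + beta[j] * max(0, t - due)
            return cost

        # Toy instance: 3 jobs, common due date 10.
        print(weighted_earliness_tardiness([0, 1, 2], ready=[0, 2, 4],
                                           proc=[3, 3, 3], alpha=[1, 1, 1],
                                           beta=[2, 2, 2], due=10))      # 12.0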

  10. Undergraduate Student Task Group Approach to Complex Problem Solving Employing Computer Programming.

    ERIC Educational Resources Information Center

    Brooks, LeRoy D.

    A project formulated a computer simulation game for use as an instructional device to improve financial decision making. The author constructed a hypothetical firm, specifying its environment, variables, and a maximization problem. Students, assisted by a professor and computer consultants and having access to B5500 and B6700 facilities, held 16…

  11. The application of generalized, cyclic, and modified numerical integration algorithms to problems of satellite orbit computation

    NASA Technical Reports Server (NTRS)

    Chesler, L.; Pierce, S.

    1971-01-01

    Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.

  12. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes, or "defects", within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal fault-tolerant computation. In this work we turn attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic and achieves universality without using defects to encode information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, corresponding to a circuit with the lowest physical-qubit requirements, and prove that optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  13. Accurate computation and continuation of homoclinic and heteroclinic orbits for singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Vaughan, William W.; Friedman, Mark J.; Monteiro, Anand C.

    1993-01-01

    In earlier papers, Doedel and the authors developed a numerical method, and derived error estimates, for the computation of branches of heteroclinic orbits for a system of autonomous ordinary differential equations in R(exp n). The idea of the method is to reduce a boundary value problem on the real line to a boundary value problem on a finite interval by using a local (linear or higher order) approximation of the stable and unstable manifolds. A practical limitation for the computation of homoclinic and heteroclinic orbits has been the difficulty in obtaining starting orbits. Typically these were obtained from a closed form solution or via a homotopy from a known solution. Here we consider extensions of our algorithm which allow us to obtain starting orbits on the continuation branch in a more systematic way, as well as make the continuation algorithm more flexible. In applications, we use the continuation software package AUTO in combination with some initial value software. The examples considered include the computation of homoclinic orbits in a singular perturbation problem and in a problem modeling the wall region of a turbulent fluid boundary layer.

  14. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods; the new methodology is validated by computing demonstration problems and comparing the resulting stress intensity factors to analytical results.

  15. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
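
    One of the simple incomplete algorithms analyzed in this line of work is leaf removal, which repeatedly covers the unique neighbor of a degree-one vertex; a sketch (my implementation, for illustration):

        def leaf_removal_cover(adj):
            """While some vertex has degree 1, put its unique neighbor in the
            cover and delete both; returns (partial cover, leftover core)."""
            adj = {u: set(vs) for u, vs in adj.items()}
            cover, leaves = set(), [u for u in adj if len(adj[u]) == 1]
            while leaves:
                u = leaves.pop()
                if u not in adj or len(adj[u]) != 1:
                    continue                      # stale entry; vertex changed
                v = next(iter(adj[u]))
                cover.add(v)
                for w in adj.pop(v):              # delete v and its edges
                    adj[w].discard(v)
                    if len(adj[w]) == 1:
                        leaves.append(w)
                del adj[u]
            core = {u for u in adj if adj[u]}
            return cover, core

        # Path 0-1-2-3: leaf removal finds a minimum cover of size 2.
        adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
        print(leaf_removal_cover(adj))            # e.g. ({0, 2}, set())

    On random graphs this procedure typically leaves no extensive core below average connectivity c = e, mirroring the threshold noted above.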

  16. Redesigning the Quantum Mechanics Curriculum to Incorporate Problem Solving Using a Computer Algebra System

    NASA Astrophysics Data System (ADS)

    Roussel, Marc R.

    1999-10-01

    One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be tackled during the term, students have a better opportunity to familiarize themselves with the underlying theory with this course design. Maple is also available to students during examinations. The use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching the students its proper use. However, the advantages to the teaching of quantum mechanics appear to outweigh the disadvantages.

  17. Correction of facial and mandibular asymmetry using a computer aided design/computer aided manufacturing prefabricated titanium implant.

    PubMed

    Watson, Jason; Hatamleh, Muhanad; Alwahadni, Ahed; Srinivasan, Dilip

    2014-05-01

    Patients with significant craniofacial asymmetry may have functional problems associated with their occlusion and aesthetic concerns related to the imbalance in soft and hard tissue profiles. This report details a case of facial asymmetry secondary to a left mandibular angle deficiency caused by previous radiotherapy. We describe the correction of the bony deformity using a custom-made computer aided design/computer aided manufacturing titanium onlay produced by novel direct metal laser sintering. The onlay provided a very accurate operative fit and gave a good aesthetic correction of the bony defect, with no reported postoperative complications. It is a useful low-morbidity technique, with no resorption or associated donor-site complications.

  18. Vectorization on the star computer of several numerical methods for a fluid flow problem

    NASA Technical Reports Server (NTRS)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady-state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  19. Adapting the traveling salesman problem to an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Warren, Richard H.

    2013-04-01

    We show how to guide a quantum computer to select an optimal tour for the traveling salesman. This is significant because it opens a rapid solution method for the wide range of applications of the traveling salesman problem, which include vehicle routing, job sequencing and data clustering.

  20. Deaf and hard of hearing students' perspectives on bullying and school climate.

    PubMed

    Weiner, Mary T; Day, Stefanie J; Galvan, Dennis

    2013-01-01

    Student perspectives reflect school climate. The study examined perspectives among deaf and hard of hearing students in residential and large day schools regarding bullying, and compared these perspectives with those of a national database of hearing students. The participants were 812 deaf and hard of hearing students in 11 U.S. schools. Data were derived from the Olweus Bullying Questionnaire (Olweus, 2007b), a standardized self-reported survey with multiple-choice questions focusing on different aspects of bullying problems. Significant bullying problems were found in deaf school programs. It appears that deaf and hard of hearing students experience bullying at rates 2-3 times higher than those reported by hearing students. Deaf and hard of hearing students reported that school personnel intervened less often when bullying occurred than was reported in the hearing sample. Results indicate the need for school climate improvement for all students, regardless of hearing status.

  1. ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)

    1995-01-01

    The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation, issues important to long-time integration and long-distance propagation in aeroacoustics. Also investigated was the effect of implementing different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.

  2. Perceived problems with computer gaming and internet use among adolescents: measurement tool for non-clinical survey studies

    PubMed Central

    2014-01-01

    Background Existing instruments for measuring problematic computer and console gaming and internet use are often lengthy and often based on a pathological perspective. The objective was to develop and present a new and short non-clinical measurement tool for perceived problems related to computer use and gaming among adolescents, and to study the association between screen time and perceived problems. Methods Cross-sectional school survey of 11-, 13-, and 15-year-old students in thirteen schools in the City of Aarhus, Denmark; participation rate 89%, n = 2100. The main exposure was time spent on weekdays on computer and console gaming and on internet use for communication and surfing. The outcome measures were three indexes on perceived problems related to computer and console gaming and internet use. Results The three new indexes showed high face validity and acceptable internal consistency. Most schoolchildren with high screen time did not experience problems related to computer use. Still, there was a strong and graded association between time use and perceived problems related to computer gaming, console gaming (boys only) and internet use, with odds ratios ranging from 6.90 to 10.23. Conclusion The three new measures of perceived problems related to computer and console gaming and internet use among adolescents are appropriate, reliable and valid for use in non-clinical surveys about young people's everyday life and behaviour. These new measures do not assess Internet Gaming Disorder as it is listed in the DSM and therefore have no parity with DSM criteria. We found an increasing risk of perceived problems with increasing time spent on gaming and internet use. Nevertheless, most schoolchildren who spent much time gaming and using the internet did not experience problems. PMID:24731270

  3. A Quantum Computing Approach to Model Checking for Advanced Manufacturing Problems

    DTIC Science & Technology

    2014-07-01

    …amount of time. In summary, the tool we developed succeeded in allowing us to produce good solutions for optimization problems that did not fit… We compared the value of the objective obtained in each run with the known optimal value, and used this information to compute the probability of success for each given instance. Then we used this information to compute the expected number of repetitions (or runs) needed to obtain the optimal…

  4. Computational methods for inverse problems in geophysics: inversion of travel time observations

    USGS Publications Warehouse

    Pereyra, V.; Keller, H.B.; Lee, W.H.K.

    1980-01-01

    General ways of solving various inverse problems are studied for given travel time observations between sources and receivers. These problems are separated into three components: (a) the representation of the unknown quantities appearing in the model; (b) the nonlinear least-squares problem; (c) the direct, two-point ray-tracing problem used to compute travel time once the model parameters are given. Novel software is described for (b) and (c), and some ideas given on (a). Numerical results obtained with artificial data and an implementation of the algorithm are also presented. © 1980.
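
    Component (b) is classically attacked with Gauss-Newton-type iterations; a generic sketch with a finite-difference Jacobian (the one-parameter "travel time" model is a placeholder for the ray-traced forward problem in (c)):

        import numpy as np

        def gauss_newton(f, x0, iters=20, eps=1e-6):
            """Minimize ||f(x)||^2; each step solves the linearized
            least-squares problem J step = -r for the update."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                r = f(x)
                J = np.empty((r.size, x.size))
                for j in range(x.size):              # numeric Jacobian, column j
                    dx = np.zeros_like(x)
                    dx[j] = eps
                    J[:, j] = (f(x + dx) - r) / eps
                step, *_ = np.linalg.lstsq(J, -r, rcond=None)
                x = x + step
            return x

        # Placeholder model: travel time = distance / velocity; fit velocity.
        obs = np.array([2.0, 4.0, 6.0])
        dist = np.array([10.0, 20.0, 30.0])
        f = lambda v: dist / v[0] - obs
        print(gauss_newton(f, [3.0]))                # converges to [5.]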

  5. Computing quantum discord is NP-complete

    NASA Astrophysics Data System (ADS)

    Huang, Yichen

    2014-03-01

    We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.

  6. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic 10³× speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that has until now limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization…
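
    For orientation, the forward half of such greedy schemes looks roughly as follows (an OMP-style simplification with a full refit per step, not the partitioned-inverse-accelerated GSLS itself):

        import numpy as np

        def forward_select(A, y, k):
            """Greedy forward selection: repeatedly add the column most
            correlated with the residual, then refit on the support."""
            support, resid, coef = [], y.astype(float), None
            for _ in range(k):
                scores = np.abs(A.T @ resid)
                scores[support] = -np.inf            # skip chosen columns
                support.append(int(np.argmax(scores)))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                resid = y - A[:, support] @ coef
            return support, coef

        rng = np.random.default_rng(1)
        A = rng.standard_normal((50, 20))
        x_true = np.zeros(20)
        x_true[[3, 7]] = [2.0, -1.5]
        support, coef = forward_select(A, A @ x_true, k=2)
        print(sorted(support))                       # typically [3, 7]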

  7. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  8. Enthalpy versus entropy: What drives hard-particle ordering in condensed phases?

    DOE PAGES

    Anthamatten, Mitchell; Ou, Jane J.; Weinfeld, Jeffrey A.; ...

    2016-07-27

    In support of mesoscopic-scale materials processing, spontaneous hard-particle ordering has been actively pursued for over a half-century. The generally accepted view that entropy alone can drive hard-particle ordering is evaluated. Furthermore, a thermodynamic analysis of hard-particle ordering was conducted and shown to agree with existing computations and experiments. Conclusions are that (i) hard-particle ordering transitions between states in equilibrium are forbidden at constant volume but are allowed at constant pressure; (ii) spontaneous ordering transitions at constant pressure are driven by enthalpy, and (iii) ordering under constant volume necessarily involves a non-equilibrium initial state which has yet to be rigorously defined.

  9. Combining Symbolic Computation and Theorem Proving: Some Problems of Ramanujan

    DTIC Science & Technology

    1994-01-01

    CMU-CS-94-103: Combining symbolic computation and theorem proving: some problems of Ramanujan. Edmund Clarke; Xudong Zhao. …Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, Ohio 45433-6543, under Contract F33615-90-C-… List of problems: the list of challenge…

  10. Pathgroups, a dynamic data structure for genome reconstruction problems.

    PubMed

    Zheng, Chunfang

    2010-07-01

    Ancestral gene order reconstruction problems, including the median problem, quartet construction, small phylogeny, guided genome halving and genome aliquoting, are NP-hard. Available heuristics dedicated to each of these problems are computationally costly for even small instances. We present a data structure enabling rapid heuristic solution of all these ancestral genome reconstruction problems. A generic greedy algorithm with look-ahead, based on an automatically generated priority system, suffices for all the problems using this data structure. The efficiency of the algorithm is due to fast updating of the structure during run time and to the simplicity of the priority scheme. We illustrate with the first rapid algorithm for quartet construction and apply this to a set of yeast genomes to corroborate a recent gene sequence-based phylogeny. Availability: http://albuquerque.bioinformatics.uottawa.ca/pathgroup/Quartet.html. Contact: chunfang313@gmail.com. Supplementary data are available at Bioinformatics online.

  11. Solving optimization problems by the public goods game

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2017-09-01

    We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are each provided with a random solution to the given problem. Agents interact by playing the Public Goods Game, using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method computes exact solutions, as well as suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.
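
    A minimal caricature of the imitation mechanism (greatly simplified: fitness is inverse tour length, poorer agents copy the richest one, and a random swap keeps diversity; not the authors' exact contribution dynamics):

        import math
        import random

        def tour_len(tour, pts):
            return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        def imitation_tsp(pts, agents=30, rounds=500):
            n = len(pts)
            pop = [random.sample(range(n), n) for _ in range(agents)]
            for _ in range(rounds):
                fit = [1.0 / tour_len(t, pts) for t in pop]
                best = max(range(agents), key=fit.__getitem__)
                for i in range(agents):
                    if fit[i] < fit[best]:
                        pop[i] = pop[best][:]        # imitate the richest agent
                    a, b = random.sample(range(n), 2)
                    pop[i][a], pop[i][b] = pop[i][b], pop[i][a]   # mutate
            return min(pop, key=lambda t: tour_len(t, pts))

        pts = [(random.random(), random.random()) for _ in range(12)]
        print(round(tour_len(imitation_tsp(pts), pts), 3))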

  12. Performance of Quantum Annealers on Hard Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Pokharel, Bibek; Venturelli, Davide; Rieffel, Eleanor

    Quantum annealers have been employed to attack a variety of optimization problems. We compared the performance of the current D-Wave 2X quantum annealer to that of the previous-generation D-Wave Two quantum annealer on scheduling-type planning problems. Further, we compared the effect of different anneal times, embeddings of the logical problem, and different settings of the ferromagnetic coupling JF across the logical vertex-model on the performance of the D-Wave 2X quantum annealer. Our results show that at the best settings, the scaling of expected anneal time to solution for the D-Wave 2X is better than that of the D-Wave Two, but still inferior to that of state-of-the-art classical solvers on these problems. We discuss the implications of our results for the design and programming of future quantum annealers. Supported by NASA Ames Research Center.

  13. Stochastic interactions of two Brownian hard spheres in the presence of depletants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karzar-Jeddi, Mehdi; Fan, Tai-Hsi, E-mail: thfan@engr.uconn.edu; Tuinier, Remco

    2014-06-07

    A quantitative analysis is presented for the stochastic interactions of a pair of Brownian hard spheres in non-adsorbing polymer solutions. The hard spheres are hypothetically trapped by optical tweezers and allowed to move randomly near the trapped positions. The investigation focuses on the long-time correlated Brownian motion. The mobility tensor altered by the polymer depletion effect is computed by the boundary integral method, and the corresponding random displacement is determined by the fluctuation-dissipation theorem. From our computations it follows that the presence of depletion layers around the hard spheres has a significant effect on the hydrodynamic interactions and particle dynamics as compared to the pure-solvent and uniform-polymer-solution cases. The probability distribution functions of random walks of the two trapped, interacting hard spheres clearly shift due to the polymer depletion effect. The results show that the reduction of the viscosity in the depletion layers around the spheres and the entropic force due to the overlapping of depletion zones have a significant influence on the correlated Brownian interactions.

  14. Computer-Based Science Inquiry: How Components of Metacognitive Self-Regulation Affect Problem-Solving.

    ERIC Educational Resources Information Center

    Howard, Bruce C.; McGee, Steven; Shia, Regina; Hong, Namsoo Shin

    This study sought to examine the effects of metacognitive self-regulation on problem solving across three conditions: (1) an interactive, computer-based treatment condition; (2) a noninteractive computer-based alternative treatment condition; and (3) a control condition. Also investigated was which of five components of metacognitive…

  15. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    NASA Astrophysics Data System (ADS)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, first within the scope of the ValueTools Conference, in May 2011, and then at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high-rate, high-volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, kernel methods, learning methods…

  16. A hybrid computer program for rapidly solving flowing or static chemical kinetic problems involving many chemical species

    NASA Technical Reports Server (NTRS)

    Mclain, A. G.; Rao, C. S. R.

    1976-01-01

    A hybrid chemical kinetic computer program was assembled which provides a rapid solution to problems involving flowing or static, chemically reacting, gas mixtures. The computer program uses existing subroutines for problem setup, initialization, and preliminary calculations and incorporates a stiff ordinary differential equation solution technique. A number of check cases were recomputed with the hybrid program and the results were almost identical to those previously obtained. The computational time saving was demonstrated with a propane-oxygen-argon shock tube combustion problem involving 31 chemical species and 64 reactions. Information is presented to enable potential users to prepare an input data deck for the calculation of a problem.

  17. Early identification: Language skills and social functioning in deaf and hard of hearing preschool children.

    PubMed

    Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Korver, Anna M H; Konings, Saskia; Oudesluys-Murphy, Anne Marie; Dekker, Friedo W; Frijns, Johan H M

    2015-12-01

    Permanent childhood hearing impairment often results in speech and language problems that are already apparent in early childhood. Past studies show a clear link between language skills and the child's social-emotional functioning. The aim of this study was to examine the level of language and communication skills after the introduction of early identification services, and their relation with social functioning and behavioral problems in deaf and hard of hearing children. Nationwide cross-sectional observation of a cohort of 85 early identified deaf and hard of hearing preschool children (aged 30-66 months). Parents reported on their child's communicative abilities (MacArthur-Bates Communicative Development Inventory III), social functioning and appearance of behavioral problems (Strengths and Difficulties Questionnaire). Receptive and expressive language skills were measured using the Reynell Developmental Language Scale and the Schlichting Expressive Language Test, derived from the child's medical records. Language and communicative abilities of early identified deaf and hard of hearing children are not on a par with those of hearing peers. Compared to normative scores from hearing children, parents of deaf and hard of hearing children reported lower social functioning and more behavioral problems. Higher communicative abilities were related to better social functioning and fewer behavioral problems. No relation was found between social functioning or behavioral problems and the degree of hearing loss, age at amplification, uni- or bilateral amplification, or mode of communication. These results suggest that improving the communicative abilities of deaf and hard of hearing children could improve their social-emotional functioning. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Personalized Computer-Assisted Mathematics Problem-Solving Program and Its Impact on Taiwanese Students

    ERIC Educational Resources Information Center

    Chen, Chiu-Jung; Liu, Pei-Lin

    2007-01-01

    This study evaluated the effects of a personalized computer-assisted mathematics problem-solving program on the performance and attitude of Taiwanese fourth grade students. The purpose of this study was to determine whether the personalized computer-assisted program improved student performance and attitude over the nonpersonalized program.…

  19. Neuroscientists Find Learning Is Not "Hard-Wired"

    ERIC Educational Resources Information Center

    Sparks, Sarah D.

    2012-01-01

    Neuroscience exploded into the education conversation more than 20 years ago, in step with the evolution of personal computers and the rise of the Internet, and policymakers hoped medical discoveries could likewise help doctors and teachers understand the "hard wiring" of the brain. That conception of how the brain works, exacerbated by the…

  20. Fuzzy logic, neural networks, and soft computing

    NASA Technical Reports Server (NTRS)

    Zadeh, Lotfi A.

    1994-01-01

    The past few years have witnessed a rapid growth of interest in a cluster of modes of modeling and computation which may be described collectively as soft computing. The distinguishing characteristic of soft computing is that its primary aims are to achieve tractability, robustness, low cost, and high MIQ (machine intelligence quotient) through an exploitation of the tolerance for imprecision and uncertainty. Thus, in soft computing what is usually sought is an approximate solution to a precisely formulated problem or, more typically, an approximate solution to an imprecisely formulated problem. A simple case in point is the problem of parking a car. Generally, humans can park a car rather easily because the final position of the car is not specified exactly. If it were specified to within, say, a few millimeters and a fraction of a degree, it would take hours or days of maneuvering and precise measurements of distance and angular position to solve the problem. What this simple example points to is the fact that, in general, high precision carries a high cost. The challenge, then, is to exploit the tolerance for imprecision by devising methods of computation which lead to an acceptable solution at low cost. By its nature, soft computing is much closer to human reasoning than the traditional modes of computation. At this juncture, the major components of soft computing are fuzzy logic (FL), neural network theory (NN), and probabilistic reasoning techniques (PR), including genetic algorithms, chaos theory, and part of learning theory. Increasingly, these techniques are used in combination to achieve significant improvement in performance and adaptability. Among the important application areas for soft computing are control systems, expert systems, data compression techniques, image processing, and decision support systems. It may be argued that it is soft computing, rather than the traditional hard computing, that should be viewed as the foundation for artificial…

  1. Computing with dynamical systems based on insulator-metal-transition oscillators

    NASA Astrophysics Data System (ADS)

    Parihar, Abhinav; Shukla, Nikhil; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit

    2017-04-01

    In this paper, we review recent work on novel computing paradigms using coupled oscillatory dynamical systems. We explore systems of relaxation oscillators based on linear state-transitioning devices, which switch between two discrete states with hysteresis. By harnessing the dynamics of complex, connected systems, we embrace the philosophy of "let physics do the computing" and demonstrate how the complex phase and frequency dynamics of such systems can be controlled, programmed, and observed to solve computationally hard problems. Although our discussion in this paper is limited to insulator-to-metal state transition devices, the general philosophy of such computing paradigms can be translated to other media, including optical systems. We present the mathematical treatment necessary to understand the time evolution of these systems and demonstrate through recent experimental results the potential of such computational primitives.

  2. Programming and Tuning a Quantum Annealing Device to Solve Real World Problems

    NASA Astrophysics Data System (ADS)

    Perdomo-Ortiz, Alejandro; O'Gorman, Bryan; Fluegemann, Joseph; Smelyanskiy, Vadim

    2015-03-01

    Solving real-world applications with quantum algorithms requires overcoming several challenges, ranging from translating the computational problem at hand to the quantum-machine language to tuning parameters of the quantum algorithm that have a significant impact on the performance of the device. In this talk, we discuss these challenges, strategies developed to enhance performance, and also a more efficient implementation of several applications. Although we will focus on applications of interest to NASA's Quantum Artificial Intelligence Laboratory, the methods and concepts presented here apply to a broader family of hard discrete optimization problems, including those that occur in many machine-learning algorithms.

  3. An Evolutionary Algorithm for Feature Subset Selection in Hard Disk Drive Failure Prediction

    ERIC Educational Resources Information Center

    Bhasin, Harpreet

    2011-01-01

    Hard disk drives are used in everyday life to store critical data. Although they are reliable, failure of a hard disk drive can be catastrophic, especially in applications like medicine, banking, air traffic control systems, missile guidance systems, computer numerical controlled machines, and more. The use of Self-Monitoring, Analysis and…

  4. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
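
    In symbols (notation mine, following the standard formulation), the max-sum problem on a graph (V, E) with unary terms θ_v and pairwise terms θ_uv, and the linear programming relaxation over pseudomarginals μ to which the reviewed upper bound is dual, read:

        \max_{x} \sum_{v \in V} \theta_v(x_v) + \sum_{(u,v) \in E} \theta_{uv}(x_u, x_v)

        \max_{\mu \ge 0} \sum_{v,i} \theta_v(i)\,\mu_v(i)
            + \sum_{(u,v),i,j} \theta_{uv}(i,j)\,\mu_{uv}(i,j)
        \quad \text{s.t.} \quad \sum_i \mu_v(i) = 1, \qquad
            \sum_j \mu_{uv}(i,j) = \mu_u(i)

    Integral μ recover labelings; equivalent transformations reparametrize the θ without changing the objective, and minimizing the resulting upper bound is dual to this LP.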

  5. Fractional Steps methods for transient problems on commodity computer architectures

    NASA Astrophysics Data System (ADS)

    Krotkiewski, M.; Dabrowski, M.; Podladchikov, Y. Y.

    2008-12-01

    Fractional Steps methods are suitable for modeling transient processes that are central to many geological applications. Low memory requirements and modest computational complexity facilitate calculations on high-resolution three-dimensional models. An efficient implementation of Alternating Direction Implicit/Locally One-Dimensional schemes for an Opteron-based shared memory system is presented. The memory bandwidth usage, the main bottleneck on modern computer architectures, is specifically addressed. High efficiency of above 2 GFlops per CPU is sustained for problems of 1 billion degrees of freedom. The optimized sequential implementation of all 1D sweeps is comparable in execution time to copying the used data in memory. Scalability of the parallel implementation on up to 8 CPUs is close to perfect. Performing one timestep of the Locally One-Dimensional scheme on a system of 1000³ unknowns on 8 CPUs takes only 11 s. We validate the LOD scheme using a computational model of an isolated inclusion subject to a constant far-field flux. Next, we study numerically the evolution of a diffusion front and the effective thermal conductivity of composites consisting of multiple inclusions, and compare the results with predictions based on the differential effective medium approach. Finally, application of the developed parabolic solver is suggested for a real-world problem of fluid transport and reactions inside a reservoir.
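
    Each 1D sweep of an ADI/LOD step reduces to one tridiagonal solve per grid line (Thomas algorithm); a sketch of an implicit sweep along x for the heat equation (boundary treatment and coefficients are simplified assumptions):

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system; a, b, c are the sub-, main- and
            super-diagonals, d the right-hand side."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        def lod_sweep_x(u, alpha):
            """One implicit sweep: (I - alpha * d2/dx2) u_new = u, row by row."""
            n = u.shape[1]
            a = np.full(n, -alpha)
            b = np.full(n, 1 + 2 * alpha)
            c = np.full(n, -alpha)
            a[0] = c[-1] = 0.0                    # crude Dirichlet-style ends
            return np.array([thomas(a, b, c, row) for row in u])

        u = np.random.rand(8, 8)
        u = lod_sweep_x(u, alpha=0.5)   # a full LOD step repeats this along y (and z)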

  6. Extremal Optimization for Quadratic Unconstrained Binary Problems

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

    We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean presentation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring small differences in couplings effectively while sufficiently aligning with the strong external fields. However, we also find a simple solution to that problem, which indicates that the external fields tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses without (or with very small) fields. We explore the impact of the weight distributions of QUBO formulations in the operations research literature and analyze their meaning in spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as the D-Wave.
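
    The Boolean-to-spin rewriting referred to above is the standard substitution (my notation; Q taken upper triangular):

        x_i = \frac{1 + s_i}{2}, \qquad s_i \in \{-1, +1\},

        \sum_{i \le j} Q_{ij}\, x_i x_j
            = \sum_{i < j} \frac{Q_{ij}}{4}\, s_i s_j
            + \sum_i h_i\, s_i + \text{const}, \qquad
        h_i = \frac{Q_{ii}}{2} + \frac{1}{4} \Big( \sum_{j > i} Q_{ij} + \sum_{j < i} Q_{ji} \Big)

    The diagonal entries and row/column sums of Q pile up in the on-site fields h_i, which is why the fields tend to dwarf the pair couplings Q_ij/4.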

  7. Modeling Students' Problem Solving Performance in the Computer-Based Mathematics Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2017-01-01

    Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
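
    A minimal sketch of the modeling approach named in the abstract (illustrative synthetic features and labels; scikit-learn's L2-penalized logistic regression):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical per-attempt features (e.g., hints used, time on task)
        # and a binary "solved correctly" label.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 2))
        y = (X[:, 0] - 0.5 * X[:, 1] + 0.5 * rng.standard_normal(200) > 0).astype(int)

        model = LogisticRegression(penalty="l2", C=1.0)   # C = inverse regularization
        model.fit(X, y)
        print(model.predict_proba(X[:3]))   # P(not solved), P(solved) per attempt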

  8. A restricted Steiner tree problem is solved by Geometric Method II

    NASA Astrophysics Data System (ADS)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

    The minimum Steiner tree problem has a wide application background, including transportation systems, communication networks, pipeline design, and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to study restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L and a vertex on L is sought such that the length of the tree is minimal. By the definition and the complexity of the Steiner tree problem, this restricted problem is also NP-complete. In Part I, we considered the restricted problem with two fixed vertices. Naturally, we now consider the restricted problem with three fixed vertices, and we again use the geometric method to solve it.

  9. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing the inverse heat source method into the parameter identification of fatigue dissipation energy models is a new idea for calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat source solution methods for the fatigue process; it then discusses prospects for applying inverse heat source methods in the fatigue damage field, laying the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  10. Computing eigenfunctions and eigenvalues of boundary-value problems with the orthogonal spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter

    2018-03-01

    The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT -symmetric models.
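
    For the linear symmetric case the four-step recipe collapses to deflated power iteration; a sketch (my implementation; the paper's nonlinear setting replaces the matrix-vector product with the problem's nonlinearity):

        import numpy as np

        def osr_modes(A, k=2, iters=5000, tol=1e-12):
            """(i) fixed-point map u -> A u, (ii) renormalize, (iii) Gram-Schmidt
            against modes already found, (iv) iterate to a fixed point."""
            rng = np.random.default_rng(0)
            modes = []
            for _ in range(k):
                u = rng.standard_normal(A.shape[0])
                u /= np.linalg.norm(u)
                for _ in range(iters):
                    v = A @ u                              # (i) apply the map
                    for w in modes:                        # (iii) orthogonalize
                        v -= (w @ v) * w
                    v /= np.linalg.norm(v)                 # (ii) renormalize
                    if min(np.linalg.norm(v - u), np.linalg.norm(v + u)) < tol:
                        u = v
                        break                              # (iv) converged
                    u = v
                modes.append(u)
            return [float(u @ A @ u) for u in modes], modes

        vals, _ = osr_modes(np.diag([5.0, 3.0, 1.0]))
        print(vals)                                        # ~[5.0, 3.0]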

  11. Computation of Transonic Nozzle Sound Transmission and Rotor Problems by the Dispersion-Relation-Preserving Scheme

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Aganin, Alexei

    2000-01-01

    The transonic nozzle transmission problem and the open rotor noise radiation problem are solved computationally. Both are multiple length scales problems. For efficient and accurate numerical simulation, the multiple-size-mesh multiple-time-step Dispersion-Relation-Preserving scheme is used to calculate the time periodic solution. To ensure an accurate solution, high quality numerical boundary conditions are also needed. For the nozzle problem, a set of nonhomogeneous, outflow boundary conditions are required. The nonhomogeneous boundary conditions not only generate the incoming sound waves but also, at the same time, allow the reflected acoustic waves and entropy waves, if present, to exit the computation domain without reflection. For the open rotor problem, there is an apparent singularity at the axis of rotation. An analytic extension approach is developed to provide a high quality axis boundary treatment.

  12. A Comparison of Approaches for Solving Hard Graph-Theoretic Problems

    DTIC Science & Technology

    2015-05-01

    …collaborative effort "Adiabatic Quantum Computing Applications Research" (14-RI-CRADA-02) between the Information Directorate and Lock… The methods explored consist of a parallel computing approach using Matlab, a quantum annealing approach using the D-Wave computer, and lastly satisfiability modulo theory (SMT) and corresponding SMT solvers…

  13. Algorithms Bridging Quantum Computation and Chemistry

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod Ryan

    The design of new materials and chemicals derived entirely from computation has long been a goal of computational chemistry, and the governing equation whose solution would permit this dream is known. Unfortunately, the exact solution to this equation has been far too expensive and clever approximations fail in critical situations. Quantum computers offer a novel solution to this problem. In this work, we develop not only new algorithms to use quantum computers to study hard problems in chemistry, but also explore how such algorithms can help us to better understand and improve our traditional approaches. In particular, we first introduce a new method, the variational quantum eigensolver, which is designed to maximally utilize the quantum resources available in a device to solve chemical problems. We apply this method in a real quantum photonic device in the lab to study the dissociation of the helium hydride (HeH+) molecule. We also enhance this methodology with architecture-specific optimizations on ion trap computers and show how linear-scaling techniques from traditional quantum chemistry can be used to improve the outlook of similar algorithms on quantum computers. We then show how studying quantum algorithms such as these can be used to understand and enhance the development of classical algorithms. In particular we use a tool from adiabatic quantum computation, Feynman's Clock, to develop a new discrete-time variational principle and further establish a connection between real-time quantum dynamics and ground state eigenvalue problems. We use these tools to develop two novel parallel-in-time quantum algorithms that outperform competitive algorithms as well as offer new insights into the connection between the fermion sign problem of ground states and the dynamical sign problem of quantum dynamics. Finally we use insights gained in the study of quantum circuits to explore a general notion of sparsity in many-body quantum systems. In particular we use…

  14. Elementary EFL Teachers' Computer Phobia and Computer Self-Efficacy in Taiwan

    ERIC Educational Resources Information Center

    Chen, Kate Tzuching

    2012-01-01

    The advent and application of computer and information technology has increased the overall success of EFL teaching; however, such success is hard to assess, and teachers prone to computer avoidance face negative consequences. Two major obstacles are high computer phobia and low computer self-efficacy. However, little research has been carried out…

  15. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with returns of merchandise having no quality defects. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results on numerical examples show that the HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.

  16. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    PubMed Central

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of a logistics system for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model with returns of merchandise without quality defects. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a genetic algorithm (GA) in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489
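
    The records above do not spell out how the two metaheuristics are hybridized, so the following is only a plausible generic reading: a genetic algorithm whose offspring replace their parents under a simulated-annealing (Metropolis) acceptance rule at a decreasing temperature. The bit-string objective is a made-up stand-in for the location-inventory-routing cost.

      import math
      import random

      def cost(x):
          # Toy objective to minimize; stands in for the LIRP cost function.
          return sum(x)

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      def mutate(x, rate=0.05):
          return [1 - bit if random.random() < rate else bit for bit in x]

      def hybrid_ga_sa(n_bits=40, pop_size=30, generations=300, t0=2.0, alpha=0.98):
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          temp = t0
          for _ in range(generations):
              next_pop = []
              for parent in pop:
                  child = mutate(crossover(parent, random.choice(pop)))
                  delta = cost(child) - cost(parent)
                  # Simulated-annealing acceptance inside the GA generation loop.
                  if delta <= 0 or random.random() < math.exp(-delta / temp):
                      next_pop.append(child)
                  else:
                      next_pop.append(parent)
              pop = next_pop
              temp *= alpha  # cool down, making the search increasingly greedy
          return min(pop, key=cost)

      print(cost(hybrid_ga_sa()))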

  17. Constraint-Based Local Search for Constrained Optimum Paths Problems

    NASA Astrophysics Data System (ADS)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have traditionally been approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing to this domain the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees for finding high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  18. A soft computing-based approach to optimise queuing-inventory control problem

    NASA Astrophysics Data System (ADS)

    Alaghebandha, Mohammad; Hajipour, Vahid

    2015-04-01

    In this paper, a multi-product continuous review inventory control problem within a batch arrival queuing approach (MQr/M/1) is developed to find the optimal quantities of maximum inventory. The objective function is to minimise the summation of ordering, holding and shortage costs under warehouse space, service level and expected lost-sales shortage cost constraints, from retailer and warehouse viewpoints. Since the proposed model is NP-hard (Non-deterministic Polynomial-time hard), an efficient imperialist competitive algorithm (ICA) is proposed to solve it. To benchmark the proposed ICA, both a genetic algorithm and a simulated annealing algorithm are utilised. In order to determine the best values of the algorithm parameters that result in a better solution, a fine-tuning procedure is executed. Finally, the performance of the proposed ICA is analysed using some numerical illustrations.

  19. Interactive Computer Based Assessment Tasks: How Problem-Solving Process Data Can Inform Instruction

    ERIC Educational Resources Information Center

    Zoanetti, Nathan

    2010-01-01

    This article presents key steps in the design and analysis of a computer based problem-solving assessment featuring interactive tasks. The purpose of the assessment is to support targeted instruction for students by diagnosing strengths and weaknesses at different stages of problem-solving. The first focus of this article is the task piloting…

  20. Progress in 1988-1990 with computer applications in the "hard-rock" arena: Geochemistry, mineralogy, petrology, and volcanology

    NASA Astrophysics Data System (ADS)

    Rock, Nicholas M. S.

    This review covers rock, mineral and isotope geochemistry, mineralogy, igneous and metamorphic petrology, and volcanology. Crystallography, exploration geochemistry, and mineral exploration are excluded. Fairly extended comments on software availability, and on computerization of the publication process and of specimen collection indexes, may interest a wider audience. A proliferation of both published and commercial software in the past 3 years indicates increasing interest in what traditionally has been a rather reluctant sphere of geoscience computer activity. However, much of this software duplicates the same old functions (Harker and triangular plots, mineral recalculations, etc.). It usually is more efficient nowadays to use someone else's program, or to employ the command language in one of many general-purpose spreadsheet or statistical packages available, than to program a specialist operation from scratch in, say, FORTRAN. Greatest activity has been in mineralogy, where several journals specifically encourage publication of computer-related activities, and IMA and MSA Working Groups on microcomputers have been convened. In petrology and geochemistry, large national databases of rock and mineral analyses continue to multiply, whereas the international database IGBA grows slowly; some form of integration is necessary to make these disparate systems of lasting value to the global "hard-rock" community. Total merging or separate addressing via an intelligent "front-end" are both possibilities. In volcanology, the BBC's videodisk Volcanoes and the Smithsonian Institution's Global Volcanism Project use the most up-to-date computer technology in an exciting and innovative way, to promote public education.

  1. Gravitational field calculations on a dynamic lattice by distributed computing.

    NASA Astrophysics Data System (ADS)

    Mähönen, P.; Punkka, V.

    A new method for numerically calculating the time evolution of a gravitational field in general relativity is introduced. The vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and facilitate the solution of problems previously considered too hard to solve, such as the time evolution of a system consisting of two or more black holes or the structure of wormholes.

  2. Gravitation Field Calculations on a Dynamic Lattice by Distributed Computing

    NASA Astrophysics Data System (ADS)

    Mähönen, Petri; Punkka, Veikko

    A new method for numerically calculating the time evolution of a gravitational field in General Relativity is introduced. The vierbein (tetrad) formalism, a dynamic lattice, and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and facilitate the solution of problems previously considered too hard to solve, such as the time evolution of a system consisting of two or more black holes or the structure of wormholes.

  3. Computers Are for Kids: Designing Software Programs to Avoid Problems of Learning.

    ERIC Educational Resources Information Center

    Grimes, Lynn

    1981-01-01

    Procedures for programming computers to deal with handicapped students' problems in selective attention, visual discrimination, reaction time differences, short-term memory, transfer and generalization, recognition of mistakes, and social skills are discussed. (CL)

  4. Is there a hard-to-reach audience?

    PubMed Central

    Freimuth, V S; Mettger, W

    1990-01-01

    The "hard-to-reach" label has been applied to many different audiences. Persons who have a low socioeconomic status (SES), members of ethnic minorities, and persons who have a low level of literacy often are tagged as "hard-to-reach." The authors identify reasons why these groups have been labelled "hard-to-reach," discuss preconceptions associated with the "hard-to-reach" label, propose alternative conceptualizations of these audiences, and present implications of such conceptualizations for health communication campaigns. Pejorative labels and preconceptions about various groups may lead to depicting these audiences as powerless, apathetic, and isolated. The authors discuss alternative conceptualizations, which highlight the strengths of different audience segments and encourage innovative approaches to the communication process. These alternative conceptualizations emphasize interactive communication, a view of society in which individuals are seen as members of equivalent--albeit different--cultures, and a shift of responsibility for health problems from individuals to social systems. Recommendations for incorporating these alternative concepts into health campaigns include formative research techniques that create a dialogue among participants, more sophisticated segmentation techniques to capture audience diversity, and new roles for mass media that are more interactive and responsive to individual needs. PMID:2113680

  5. A Pilot Study of a Self-Voicing Computer Program for Prealgebra Math Problems

    ERIC Educational Resources Information Center

    Beal, Carole R.; Rosenblum, L. Penny; Smith, Derrick W.

    2011-01-01

    Fourteen students with visual impairments in Grades 5-12 participated in the field-testing of AnimalWatch-VI-Beta. This computer program delivered 12 prealgebra math problems and hints through a self-voicing audio feature. The students provided feedback about how the computer program can be improved and expanded to make it accessible to all users.…

  6. Enhanced Low Dose Rate Effects in Bipolar Circuits: A New Hardness Assurance Problem for NASA

    NASA Technical Reports Server (NTRS)

    Johnston, A.; Barnes, C.

    1995-01-01

    Many bipolar integrated circuits are much more susceptible to ionizing radiation at low dose rates than they are at high dose rates typically used for radiation parts testing. Since the low dose rate is equivalent to that seen in space, the standard lab test no longer can be considered conservative and has caused the Air Force to issue an alert. Although a reliable radiation hardness assurance test has not yet been designed, possible mechanisms for low dose rate enhancement and hardness assurance tests are discussed.

  7. Computers and Young Children: New Frontiers in Computer Hardware and Software or What Computer Should I Buy?

    ERIC Educational Resources Information Center

    Shade, Daniel D.

    1994-01-01

    Provides advice and suggestions for educators or parents who are trying to decide what type of computer to buy to run the latest computer software for children. Suggests that purchasers should buy a computer with as large a hard drive as possible, at least 10 megabytes of RAM, and a CD-ROM drive. (MDM)

  8. Mucocele of the hard palate in children.

    PubMed

    Abdel-Aziz, Mosaad; Khalifa, Badawy; Nassar, Ahmed; Kamel, Ahmed; Naguib, Nader; El-Tahan, Abdel-Rahman

    2016-06-01

    Mucus retention cysts of the hard palate may result from obstruction of the ducts of the minor salivary glands, and have been defined as mucoceles. Although the disease is not common in the hard palate, it has previously been reported by many authors in the soft palate. The aim of our study was to present pediatric patients who were diagnosed with mucocele of the hard palate and to evaluate the outcome of surgical excision of this lesion. This case series included 8 pediatric patients who presented with cystic lesions on the hard palate, which were removed surgically and diagnosed as mucoceles. Preoperative data, surgical procedures, and postoperative outcomes are presented. Follow-up of patients was performed for at least one year. The swelling was detected as a single isolated lesion on the side of the hard palate, covered with healthy mucosa, not tender, oval or round in shape, and measuring 0.4 to 1.7 cm in its greatest dimension. Computed tomography showed a well-defined cavity which was not invading the bone and not disrupting the muscles of the palate. Histopathological examination confirmed that the lesion was a cavity lined with an epithelial layer showing pseudoepitheliomatous hyperplasia. No patients developed intraoperative or postoperative complications, and no recurrence was detected in any patient. Oral mucoceles can develop on the hard palate of children; these lesions are mucus retention cysts. Complete surgical removal of the lesions with their cystic wall is a good treatment option; it carries no risk of recurrence. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. A Theoretical Analysis: Physical Unclonable Functions and The Software Protection Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithyanand, Rishab; Solis, John H.

    2011-09-01

    Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli (i.e., challenges) are easy to measure (within reasonable error bounds) but hard to clone. This property of unclonability is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics and makes PUFs useful in solving problems such as device authentication, software protection, licensing, and certified execution. In this paper, we focus on the effectiveness of PUFs for software protection and show that traditional non-computational (black-box) PUFs cannot solve the problem against real world adversaries in offline settings. Our contributions are the following: We provide two real world adversary models (weak and strong variants) and present definitions for security against the adversaries. We continue by proposing schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.

  10. Hybrid setup for micro- and nano-computed tomography in the hard X-ray range

    NASA Astrophysics Data System (ADS)

    Fella, Christian; Balles, Andreas; Hanke, Randolf; Last, Arndt; Zabler, Simon

    2017-12-01

    With increasing miniaturization in industry and medical technology, non-destructive testing techniques are an area of ever-increasing importance. In this framework, X-ray microscopy offers an efficient tool for the analysis, understanding, and quality assurance of microscopic samples, in particular as it allows reconstructing three-dimensional data sets of the whole sample's volume via computed tomography (CT). The following article describes a compact X-ray microscope in the hard X-ray regime around 9 keV, based on a highly brilliant liquid-metal-jet source. In comparison to commercially available instruments, it is a hybrid that works in two different modes. The first one is a micro-CT mode without optics, which uses a high-resolution detector to allow scans of samples in the millimeter range with a resolution of 1 μm. The second mode is a microscope, which contains an X-ray optical element to magnify the sample and allows resolving 150 nm features. Changing between the modes is possible without moving the sample. Thus, the instrument represents an important step towards establishing high-resolution laboratory-based multi-mode X-ray microscopy as a standard investigation method.

  11. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is itself an NP-hard problem. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with other, previously elaborated heuristic algorithms based on the evolutionary and the middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion value and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
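
    For reference, the deterministic quantity at the core of any permutation flow-shop heuristic, the makespan of a given job order, follows a simple completion-time recursion. The sketch below handles only fixed processing times; the interval data and regret machinery of the paper are beyond it.

      def makespan(perm, p):
          # p[j][m]: processing time of job j on machine m; all jobs visit
          # machines 0..M-1 in the same order, sequenced as in perm.
          completion = [0.0] * len(p[0])
          for j in perm:
              for m in range(len(completion)):
                  prev = completion[m - 1] if m > 0 else 0.0
                  completion[m] = max(completion[m], prev) + p[j][m]
          return completion[-1]

      times = [[3, 2], [1, 4], [2, 2]]   # three jobs, two machines (toy data)
      print(makespan([0, 1, 2], times))  # 11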

  12. Perceived problems with computer gaming and Internet use are associated with poorer social relations in adolescence.

    PubMed

    Rasmussen, Mette; Meilstrup, Charlotte Riebeling; Bendtsen, Pernille; Pedersen, Trine Pagh; Nielsen, Line; Madsen, Katrine Rich; Holstein, Bjørn E

    2015-02-01

    Young people's engagement in electronic gaming and Internet communication has caused concerns about potential harmful effects on their social relations, but the literature is inconclusive. The aim of this paper was to examine whether perceived problems with computer gaming and Internet communication are associated with young people's social relations. Cross-sectional questionnaire survey in 13 schools in the city of Aarhus, Denmark, in 2009. Response rate 89%, n = 2,100 students in grades 5, 7, and 9. Independent variables were perceived problems related to computer gaming and Internet use, respectively. Outcomes were measures of structural (number of days/week with friends, number of friends) and functional (confidence in others, being bullied, bullying others) dimensions of students' social relations. Perception of problems related to computer gaming was associated with almost all aspects of poor social relations among boys. Among girls, an association was only seen for bullying. For both boys and girls, perceived problems related to Internet use were associated with bullying only. Although the study is cross-sectional, the findings suggest that computer gaming and Internet use may be harmful to young people's social relations.

  13. Colored Traveling Salesman Problem.

    PubMed

    Li, Jun; Zhou, MengChu; Sun, Qirui; Dai, Xianzhong; Yu, Xiaolong

    2015-11-01

    The multiple traveling salesman problem (MTSP) is an important combinatorial optimization problem. It has been widely and successfully applied to practical cases in which multiple traveling individuals (salesmen) share a common workspace (city set). However, it cannot represent some application problems where multiple traveling individuals not only have their own exclusive tasks but also share a group of tasks with each other. This work proposes a new MTSP called the colored traveling salesman problem (CTSP) for handling such cases. Two types of city groups are defined, i.e., each group of exclusive cities of a single color for one salesman to visit, and a group of shared cities of multiple colors allowing all salesmen to visit. Evidence shows that CTSP is NP-hard and that the multidepot MTSP and multiple single traveling salesman problems are its special cases. We present a genetic algorithm (GA) with dual-chromosome coding for CTSP and analyze the corresponding solution space. Then, the GA is improved by incorporating greedy, hill-climbing (HC), and simulated annealing (SA) operations to achieve better performance. The experiments reveal the limitations of the exact solution method and compare the performance of the presented GAs. The results suggest that SAGA achieves the best solution quality and that HCGA makes a good tradeoff between solution quality and computing time.

  14. Phase diagram of two-dimensional hard rods from fundamental mixed measure density functional theory

    NASA Astrophysics Data System (ADS)

    Wittmann, René; Sitta, Christoph E.; Smallenburg, Frank; Löwen, Hartmut

    2017-10-01

    A density functional theory for the bulk phase diagram of two-dimensional orientable hard rods is proposed and tested against Monte Carlo computer simulation data. In detail, an explicit density functional is derived from fundamental mixed measure theory and freely minimized numerically for hard discorectangles. The phase diagram, which involves stable isotropic, nematic, smectic, and crystalline phases, is obtained and shows good agreement with the simulation data. Our functional is valid for a multicomponent mixture of hard particles with arbitrary convex shapes and provides a reliable starting point to explore various inhomogeneous situations of two-dimensional hard rods and their Brownian dynamics.

  15. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    NASA Astrophysics Data System (ADS)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of the problem-solving steps of Polya's stages. The research method was a mixed method with a sequential explanatory design. The subjects were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving abilities of S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) on each indicator were also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with the computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject met almost none of the problem-solving indicators and could not precisely construct the initial completion table, so the completion phase following Polya's steps was constrained.

  16. The effect of introducing computers into an introductory physics problem-solving laboratory

    NASA Astrophysics Data System (ADS)

    McCullough, Laura Ellen

    2000-10-01

    Computers are appearing in every type of classroom across the country. Yet they often appear without benefit of studying their effects. The research that is available on computer use in classrooms has found mixed results, and often ignores the theoretical and instructional contexts of the computer in the classroom. The University of Minnesota's physics department employs a cooperative-group problem solving pedagogy, based on a cognitive apprenticeship instructional model, in its calculus-based introductory physics course. This study was designed to determine possible negative effects of introducing a computerized data-acquisition and analysis tool into this pedagogy as a problem-solving tool for students to use in laboratory. To determine the effects of the computer tool, two quasi-experimental treatment groups were selected. The computer-tool group (N = 170) used a tool, designed for this study (VideoTool), to collect and analyze motion data in the laboratory. The control group (N = 170) used traditional non-computer equipment (spark tapes and Polaroid(TM) film). The curriculum was kept as similar as possible for the two groups. During the ten week academic quarter, groups were examined for effects on performance on conceptual tests and grades, attitudes towards the laboratory and the laboratory tools, and behaviors within cooperative groups. Possible interactions with gender were also examined. Few differences were found between the control and computer-tool groups. The control group received slightly higher scores on one conceptual test, but this difference was not educationally significant. The computer-tool group had slightly more positive attitudes towards using the computer tool than their counterparts had towards the traditional tools. The computer-tool group also perceived that they spoke more frequently about physics misunderstandings, while the control group felt that they discussed equipment difficulties more often. This perceptual difference interacted…

  17. Evaluating Preclinical Medical Students by Using Computer-Based Problem-Solving Examinations.

    ERIC Educational Resources Information Center

    Stevens, Ronald H.; And Others

    1989-01-01

    A study to determine the feasibility of creating and administering computer-based problem-solving examinations for evaluating second-year medical students in immunology and to determine how students would perform on these tests relative to their performances on concurrently administered objective and essay examinations is described. (Author/MLW)

  18. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    Quadratic Assignment Problems (QAP) are classified as NP-hard. They have been used to model many problems in several areas, such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics, and metaheuristic approaches to solve QAP instances. QAP is widely applied to the facility layout problem (FLP). In this paper we use QAP to model a university facility layout problem. There are 8 facilities that need to be assigned to 8 locations. Hence we model a QAP with n ≤ 10 and develop an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the total product of flows and distances is minimized. Flow is the movement from one facility to another, whereas distance is the distance between the location of one facility and the locations of the other facilities. The objective of the QAP here is to minimize the total walking (flow times distance) of lecturers from one destination to another.
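
    The objective is easy to state concretely. Below is a sketch of the QAP cost function and, since the instance here is small, an exhaustive baseline that a metaheuristic such as ACO is meant to replace on larger instances; the flow and distance matrices are made-up toy data.

      from itertools import permutations

      def qap_cost(perm, flow, dist):
          # Total of flow[i][j] * dist[perm[i]][perm[j]] over all facility pairs,
          # where facility i is assigned to location perm[i].
          n = len(perm)
          return sum(flow[i][j] * dist[perm[i]][perm[j]]
                     for i in range(n) for j in range(n))

      flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
      dist = [[0, 2, 4, 3], [2, 0, 1, 2], [4, 1, 0, 1], [3, 2, 1, 0]]

      best = min(permutations(range(4)), key=lambda p: qap_cost(p, flow, dist))
      print(best, qap_cost(best, flow, dist))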

  19. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    NASA Astrophysics Data System (ADS)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease association research. It has been proved that haplotype reconstruction in the Minimum Error Correction model is an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed in order to solve this problem in appropriate time. In this paper, we have focused on various evolutionary clustering techniques and tried to find an efficient technique for solving the haplotype reconstruction problem. Our experiments suggest that clustering methods relying on the behaviour of the honey bee colony in nature, specifically the bees algorithm and artificial bee colony methods, result in more efficient solutions. An application program of the methods is available at the following link: http://www.bioinf.cs.ipm.ir/software/haprs/

  20. Class and Homework Problems: The Break-Even Radius of Insulation Computed Using Excel Solver and WolframAlpha

    ERIC Educational Resources Information Center

    Foley, Greg

    2014-01-01

    A problem that illustrates two ways of computing the break-even radius of insulation is outlined. The problem is suitable for students who are taking an introductory module in heat transfer or transport phenomena and who have some previous knowledge of the numerical solution of nonlinear algebraic equations. The potential for computer algebra,…
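
    The underlying computation can be sketched briefly. For a pipe of outer radius r1, external film coefficient h and insulation conductivity k, the break-even radius is the insulation radius r > r1 at which the heat loss per unit length returns to the bare-pipe value, i.e. the root of ln(r/r1)/k + 1/(h*r) = 1/(h*r1) beyond the critical radius k/h. Below is a sketch with a standard root finder; the numerical values are illustrative, not taken from the article.

      from math import log
      from scipy.optimize import brentq

      k = 0.1     # insulation thermal conductivity, W/(m K)   (illustrative)
      h = 8.0     # external film coefficient, W/(m^2 K)       (illustrative)
      r1 = 0.008  # bare pipe outer radius, m                  (illustrative)

      def resistance_excess(r):
          # Per-unit-length thermal resistance of the insulated pipe minus the
          # bare pipe (common 2*pi factor dropped); zero at the break-even radius.
          return log(r / r1) / k + 1.0 / (h * r) - 1.0 / (h * r1)

      r_crit = k / h  # critical radius, where heat loss peaks (needs r_crit > r1)
      r_breakeven = brentq(resistance_excess, r_crit, 100.0 * r_crit)
      print(r_crit, r_breakeven)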

  1. Simplified computational methods for elastic and elastic-plastic fracture problems

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.

    1992-01-01

    An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.

  2. Electrostatic solvation free energies of charged hard spheres using molecular dynamics with density functional theory interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duignan, Timothy T.; Baer, Marcel D.; Schenter, Gregory K.

    Determining the solvation free energies of single ions in water is one of the most fundamental problems in physical chemistry, and yet many unresolved questions remain. In particular, the ability to decompose the solvation free energy into simple and intuitive contributions will have important implications for coarse-grained models of electrolyte solution. Here, we provide rigorous definitions of the various types of single-ion solvation free energies based on different simulation protocols. We calculate solvation free energies of charged hard spheres using density functional theory interaction potentials with molecular dynamics simulation (DFT-MD) and isolate the effects of charge and cavitation, comparing to the Born (linear response) model. We show that using uncorrected Ewald summation leads to highly unphysical values for the solvation free energy and that charging free energies for cations are approximately linear as a function of charge, but that there is a small non-linearity for small anions. The charge hydration asymmetry (CHA) for hard spheres, determined with quantum mechanics, is much larger than for the analogous real ions. This suggests that real ions, particularly anions, are significantly more complex than simple charged hard spheres, a commonly employed representation.
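
    The Born model the authors compare against is compact enough to state in code: the electrostatic free energy of transferring a point charge in a spherical cavity from vacuum into a dielectric continuum. A sketch with illustrative ion parameters:

      import math

      E_CHARGE = 1.602176634e-19   # C
      EPS0 = 8.8541878128e-12      # F/m
      N_A = 6.02214076e23          # 1/mol

      def born_solvation_energy(charge_units, radius_m, eps_r):
          # Born estimate (J/mol) of the electrostatic solvation free energy of a
          # spherical ion: -q^2/(8*pi*eps0*R) * (1 - 1/eps_r), times Avogadro.
          q = charge_units * E_CHARGE
          return -N_A * q**2 / (8.0 * math.pi * EPS0 * radius_m) * (1.0 - 1.0 / eps_r)

      # A +1 hard sphere of radius 2 Angstrom in water (eps_r ~ 78):
      print(born_solvation_energy(1, 2.0e-10, 78.0) / 1000.0, "kJ/mol")  # about -343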

  3. Symbolic computation of the Birkhoff normal form in the problem of stability of the triangular libration points

    NASA Astrophysics Data System (ADS)

    Shevchenko, I. I.

    2008-05-01

    The problem of stability of the triangular libration points in the planar circular restricted three-body problem is considered. A software package, intended for normalization of autonomous Hamiltonian systems by means of computer algebra, is designed so that normalization problems of high analytical complexity could be solved. It is used to obtain the Birkhoff normal form of the Hamiltonian in the given problem. The normalization is carried out up to the 6th order of expansion of the Hamiltonian in the coordinates and momenta. Analytical expressions for the coefficients of the normal form of the 6th order are derived. Though intermediary expressions occupy gigabytes of the computer memory, the obtained coefficients of the normal form are compact enough for presentation in typographic format. The analogue of the Deprit formula for the stability criterion is derived in the 6th order of normalization. The obtained floating-point numerical values for the normal form coefficients and the stability criterion confirm the results by Markeev (1969) and Coppola and Rand (1989), while the obtained analytical and exact numeric expressions confirm the results by Meyer and Schmidt (1986) and Schmidt (1989). The given computational problem is solved without constructing a specialized algebraic processor, i.e., the designed computer algebra package has a broad field of applicability.

  4. Assembly of hard spheres in a cylinder: a computational and experimental study.

    PubMed

    Fu, Lin; Bian, Ce; Shields, C Wyatt; Cruz, Daniela F; López, Gabriel P; Charbonneau, Patrick

    2017-05-14

    Hard spheres are an important benchmark of our understanding of natural and synthetic systems. In this work, colloidal experiments and Monte Carlo simulations examine the equilibrium and out-of-equilibrium assembly of hard spheres of diameter σ within cylinders of diameter σ ≤ D ≤ 2.82σ. Although phase transitions formally do not exist in such systems, marked structural crossovers can nonetheless be observed. Over this range of D, we find in simulations that structural crossovers echo the structural changes in the sequence of densest packings. We also observe that the out-of-equilibrium self-assembly depends on the compression rate. Slow compression approximates equilibrium results, while fast compression can skip intermediate structures. Crossovers for which no continuous line-slip exists are found to be dynamically unfavorable, which is the main source of this difference. Results from colloidal sedimentation experiments at low diffusion rate are found to be consistent with the results of fast compressions, as long as appropriate boundary conditions are used.

  5. Registration of 'TAM 305' hard red winter Wheat

    USDA-ARS?s Scientific Manuscript database

    Leaf and stripe rusts (caused by Puccinia triticina Erikss. and Puccinia striiformis Westend. f. sp. tritici Erikss., respectively) are major disease problems in South Texas, the Rolling Plains, and the Blacklands area of the state, where hard red winter wheat (HRW; Triticum aestivum L.) is a major crop a...

  6. Quantum computation with indefinite causal structures

    NASA Astrophysics Data System (ADS)

    Araújo, Mateus; Guérin, Philippe Allard; Baumeler, Ämin

    2017-11-01

    One way to study the physical plausibility of closed timelike curves (CTCs) is to examine their computational power. This has been done for Deutschian CTCs (D-CTCs) and postselection CTCs (P-CTCs), with the result that they allow for the efficient solution of problems in PSPACE and PP, respectively. Since these are extremely powerful complexity classes, which are not expected to be solvable in reality, this can be taken as evidence that these models for CTCs are pathological. This problem is closely related to the nonlinearity of these models, which also allows, for example, cloning quantum states, in the case of D-CTCs, or distinguishing nonorthogonal quantum states, in the case of P-CTCs. In contrast, the process matrix formalism allows one to model indefinite causal structures in a linear way, getting rid of these effects and raising the possibility that its computational power is rather tame. In this paper, we show that process matrices correspond to a linear particular case of P-CTCs, and therefore that their computational power is upper bounded by that of PP. We show, furthermore, a family of processes that can violate causal inequalities but nevertheless can be simulated by a causally ordered quantum circuit with only a constant overhead, showing that indefinite causality is not necessarily hard to simulate.

  7. Computer graphics application in the engineering design integration system

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary designs of aerospace vehicles: offline graphics systems using vellum-inking or photographic processes; online graphics systems characterized by direct-coupled, low-cost storage tube terminals with limited interactive capabilities; and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  8. A Computational Intelligence (CI) Approach to the Precision Mars Lander Problem

    NASA Technical Reports Server (NTRS)

    Birge, Brian; Walberg, Gerald

    2002-01-01

    A Mars precision landing requires a landed footprint of no more than 100 meters. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions such as entry angle and parachute deployment height; environment parameters such as wind and atmospheric density; parachute deployment dynamics; and unavoidable injection error or propagated error from launch. Computational Intelligence (CI) techniques such as Artificial Neural Nets and Particle Swarm Optimization have been shown to have great success with other control problems. The research period extended previous work investigating the applicability of computational intelligence approaches. The focus of this investigation was on Particle Swarm Optimization and basic Neural Net architectures. The research investigating these issues was performed for the grant cycle from 5/15/01 to 5/15/02. Matlab 5.1 and 6.0, along with NASA's POST, were the primary computational tools.

  9. Computer use and vision-related problems among university students in ajman, United arab emirate.

    PubMed

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-03-01

    The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The chi-square test was used to determine the significance of the observed differences between the variables. The level of statistical significance was set at P < 0.05. The crude odds ratio (OR) was determined using simple binary logistic regression and the adjusted OR was calculated using multiple logistic regression. The mean age of participants was 20.4 (3.2) years. The analysis of racial data reveals that 50% (236/471) of students were from the Middle East, 32% (151/471) from other parts of Asia, 11% (52/471) from Africa, 4% (19/471) from America and 3% (14/471) from Europe. The most common visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm - OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1.065-3.368). A high prevalence of vision-related problems was noted…

  10. Computer Use and Vision-Related Problems Among University Students In Ajman, United Arab Emirate

    PubMed Central

    Shantakumari, N; Eldeeb, R; Sreedharan, J; Gopal, K

    2014-01-01

    Background: The extensive use of computers as a medium of teaching and learning in universities necessitates introspection into the extent of computer-related health disorders among the student population. Aim: This study was undertaken to assess the pattern of computer usage and related visual problems among university students in Ajman, United Arab Emirates. Materials and Methods: A total of 500 students studying in Gulf Medical University, Ajman and Ajman University of Science and Technology were recruited into this study. Demographic characteristics, pattern of usage of computers and associated visual symptoms were recorded in a validated self-administered questionnaire. The chi-square test was used to determine the significance of the observed differences between the variables. The level of statistical significance was set at P < 0.05. The crude odds ratio (OR) was determined using simple binary logistic regression and the adjusted OR was calculated using multiple logistic regression. Results: The mean age of participants was 20.4 (3.2) years. The analysis of racial data reveals that 50% (236/471) of students were from the Middle East, 32% (151/471) from other parts of Asia, 11% (52/471) from Africa, 4% (19/471) from America and 3% (14/471) from Europe. The most common visual problems reported among computer users were headache - 53.3% (251/471), burning sensation in the eyes - 54.8% (258/471) and tired eyes - 48% (226/471). Female students were found to be at a higher risk. Nearly 72% of students reported frequent interruption of computer work. Headache caused interruption of work in 43.85% (110/168) of the students while tired eyes caused interruption of work in 43.5% (98/168) of the students. When the screen was viewed at a distance of more than 50 cm, the prevalence of headaches decreased by 38% (50-100 cm – OR: 0.62, 95% confidence interval [CI]: 0.42-0.92). The prevalence of tired eyes increased by 89% when screen filters were not used (OR: 1.894, 95% CI: 1…

  11. Job shop scheduling problem with late work criterion

    NASA Astrophysics Data System (ADS)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Scheduling is considered a key task in many industries, such as project-based scheduling, crew scheduling, flight scheduling, machine scheduling, etc. In the machine scheduling area, job shop scheduling problems are considered to be important and highly complex, and are characterized as NP-hard. Job shop scheduling problems with the late work criterion and non-preemptive jobs are addressed in this paper. The late work criterion is a fairly new objective function; it is a qualitative measure concerned with the late parts of jobs, unlike classical objective functions, which are quantitative measures. In this work, simulated annealing was presented to solve the scheduling problem. In addition, an operation-based representation was used to encode the solution, and a neighbourhood search structure was employed to search for new solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic metaheuristic algorithm were compared with those of a conventional genetic algorithm, and a conclusion was made based on the algorithm and problem.
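
    As a sketch of the probabilistic metaheuristic described above, here is a generic simulated-annealing loop over permutation encodings. The swap neighbourhood and the toy objective are illustrative only; the paper uses an operation-based encoding and the late work criterion.

      import math
      import random

      def anneal(cost, x0, neighbour, t0=10.0, alpha=0.995, steps=20000):
          # Accept improving moves always, worsening moves with prob. exp(-delta/T).
          x, fx = x0, cost(x0)
          best, fbest = x, fx
          t = t0
          for _ in range(steps):
              y = neighbour(x)
              fy = cost(y)
              if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              t *= alpha  # geometric cooling schedule
          return best, fbest

      def swap_neighbour(x):
          y = list(x)
          i, j = random.sample(range(len(y)), 2)
          y[i], y[j] = y[j], y[i]
          return y

      # Toy use: sort a shuffled sequence (cost = number of adjacent inversions).
      data = list(range(20))
      random.shuffle(data)
      print(anneal(lambda x: sum(a > b for a, b in zip(x, x[1:])), data, swap_neighbour)[1])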

  12. Application of computational fluid mechanics to atmospheric pollution problems

    NASA Technical Reports Server (NTRS)

    Hung, R. J.; Liaw, G. S.; Smith, R. E.

    1986-01-01

    One of the most noticeable effects of air pollution on the properties of the atmosphere is the reduction in visibility. This paper reports the results of investigations of the fluid dynamical and microphysical processes involved in the formation of advection fog on aerosols from combustion-related pollutants acting as condensation nuclei. The effects of a polydisperse aerosol distribution on the condensation/nucleation processes which cause the reduction in visibility are studied. This study demonstrates how computational fluid mechanics and heat transfer modeling can be applied to simulate the life cycle of atmospheric pollution problems.

  13. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator

    PubMed Central

    Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not ensure optimal solutions; however, they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria and its crossover and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations, such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. This approach is linked with path representation, which is the most natural way to represent a legal tour. Computational results are also reported for some traditional path representation methods, like partially mapped and order crossovers, along with the new cycle crossover operator on some benchmark TSPLIB instances, and improvements were found. PMID:29209364

  14. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator.

    PubMed

    Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not ensure optimal solutions; however, they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria and its crossover and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations, such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. This approach is linked with path representation, which is the most natural way to represent a legal tour. Computational results are also reported for some traditional path representation methods, like partially mapped and order crossovers, along with the new cycle crossover operator on some benchmark TSPLIB instances, and improvements were found.
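
    The records describe a modified cycle crossover but not its mechanics, so the sketch below implements only the classical cycle crossover (CX) that such operators build on: cycles of positions are identified between the two parent tours and copied alternately from each parent, so every city keeps a position it held in one of the parents.

      def cycle_crossover(p1, p2):
          # Classical CX for permutation-encoded tours (not the authors' modified operator).
          n = len(p1)
          child = [None] * n
          pos_in_p1 = {city: i for i, city in enumerate(p1)}
          cycle = 0
          for start in range(n):
              if child[start] is not None:
                  continue
              i = start
              while child[i] is None:
                  # Even-numbered cycles copy from p1, odd-numbered ones from p2.
                  child[i] = p1[i] if cycle % 2 == 0 else p2[i]
                  i = pos_in_p1[p2[i]]
              cycle += 1
          return child

      print(cycle_crossover([1, 2, 3, 4, 5], [3, 4, 1, 2, 5]))  # [1, 4, 3, 2, 5]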

  15. Mental health problems among survivors in hard-hit areas of the 5.12 Wenchuan and 4.20 Lushan earthquakes.

    PubMed

    Xie, Zongtang; Xu, Jiuping; Wu, Zhibin

    2017-02-01

    Earthquake exposure has often been associated with psychological distress. However, little is known about the cumulative effect of exposure to two earthquakes on psychological distress and in particular, the effect on the development of post-traumatic stress disorder (PTSD), anxiety and depression disorders. This study explored the effect of exposure on mental health outcomes after a first earthquake and again after a second earthquake. A population-based mental health survey using self-report questionnaires was conducted on 278 people in the hard-hit areas of Lushan and Baoxing Counties 13-16 months after the Wenchuan earthquake (Sample 1). 191 of these respondents were evaluated again 8-9 months after the Lushan earthquake (Sample 2), which struck almost 5 years after the Wenchuan earthquake. In Sample 1, the prevalence rates for PTSD, anxiety and depression disorders were 44.53, 54.25 and 51.82%, respectively, and in Sample 2 the corresponding rates were 27.27, 38.63 and 36.93%. Females, the middle-aged, those of Tibetan nationality, and people who reported fear during the earthquake were at an increased risk of experiencing post-traumatic symptoms. Although the incidence of PTSD, anxiety and depression disorders decreased from Sample 1 to Sample 2, the cumulative effect of exposure to two earthquakes on mental health problems was serious in the hard-hit areas. Therefore, it is important that psychological counseling be provided for earthquake victims, and especially those exposed to multiple earthquakes.

  16. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and database structure for three-dimensional computer codes, which will eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data in each step. As a result, the number of in-core grid points was increased by 50% to 150,000, with a 10% increase in execution time. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute-rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  17. Surface texture and hardness of dental alloys processed by alternative technologies

    NASA Astrophysics Data System (ADS)

    Porojan, Liliana; Savencu, Cristina E.; Topală, Florin I.; Porojan, Sorin D.

    2017-08-01

    Technological developments have led to the implementation of novel digitalized manufacturing methods for the production of metallic structures in prosthetic dentistry. These technologies can be classified as based on subtractive manufacturing, assisted by computer-aided design/computer-aided manufacturing (CAD/CAM) systems, or on additive manufacturing (AM), such as the recently developed laser-based methods. The aim of the study was to assess the surface texture and hardness of metallic structures for dental restorations obtained by alternative technologies: conventional casting (CST), computerized milling (MIL), and AM powder bed fusion methods, namely selective laser melting (SLM) and selective laser sintering (SLS). For the experimental analyses, metallic specimens made of Co-Cr dental alloys were prepared as indicated by the manufacturers. The specimen structure at the macro level was observed by an optical microscope, and micro-hardness was measured in all substrates. Metallic frameworks obtained by AM are characterized by increased hardness, depending also on the surface processing. The formation of microstructural defects can be better controlled and avoided during the SLM and MIL processes. The application of powder bed fusion techniques, like SLS and SLM, is currently a challenge in dental alloy processing.

  18. Solution of large nonlinear quasistatic structural mechanics problems on distributed-memory multiprocessor computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanford, M.

    1997-12-31

    Most commercially available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it does not ever assemble a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.

  19. Common Infant and Newborn Problems

    MedlinePlus

    It is hard when your baby is sick. Common health problems in babies include colds, coughs, fevers, and vomiting. Babies also commonly have skin problems, like diaper rash or cradle cap. Many of these problems are ... are worried about your baby, call your health care provider right away.

  20. Solving the Mystery of the Short-Hard Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Fox, Derek

    2004-07-01

    Seven years after the afterglow detections that revolutionized studies of the long-soft gamma-ray bursts, not even one afterglow of a short-hard GRB has been seen, and the nature of these events has become one of the most important problems in GRB research. The forthcoming Swift satellite will report few-arcsecond localizations for short-hard bursts in minutes, however, enabling prompt, deep optical afterglow searches for the first time. Discovery and observation of the first short-hard optical afterglows will answer most of the critical questions about these events: What are their distances and energies? Do they occur in distant galaxies, and if so, in which regions of those galaxies? Are they the result of collimated or quasi-spherical explosions? In combination with an extensive rapid-response ground-based campaign, we propose to make the critical high-sensitivity HST TOO observations that will allow us to answer these questions. If theorists are correct in attributing the short-hard bursts to binary neutron star coalescence events, then the short-hard bursts are signposts to the primary targeted source population for ground-based gravitational-wave detectors, and short-hard burst studies will have a vital role to play in guiding their observations.

  1. Nurturing Students' Problem-Solving Skills and Engagement in Computer-Mediated Communications (CMC)

    ERIC Educational Resources Information Center

    Chen, Ching-Huei

    2014-01-01

    The present study sought to investigate how to enhance students' well- and ill-structured problem-solving skills and increase productive engagement in computer-mediated communication with the assistance of external prompts, namely procedural and reflection. Thirty-three graduate students were randomly assigned to two conditions: procedural and…

  2. Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.

    ERIC Educational Resources Information Center

    Michael, Joel A.; Rovick, Allen A.

    1986-01-01

    Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)

  3. Computational-hydrodynamic studies of the Noh compressible flow problem using non-ideal equations of state

    NASA Astrophysics Data System (ADS)

    Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott

    2017-06-01

    The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly studied polytropic ideal gas to more realistic equations of state (EOS), including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared with the ideal EOS. The overall spatial convergence rate remained first order.
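
    For orientation, the ideal-gas baseline that these EOS extensions generalize has a closed-form solution. Below is a sketch of the standard exact density field (uniform inflow at speed u0 toward the origin, initial density rho0, zero initial pressure); the non-ideal solutions of the paper reduce to this in the ideal limit.

      def noh_density(r, t, gamma=5.0 / 3.0, u0=1.0, rho0=1.0, nu=3):
          # Exact ideal-gas Noh density; nu = 1, 2, 3 for planar, cylindrical,
          # spherical geometry. The shock moves outward at D = u0*(gamma - 1)/2.
          D = 0.5 * (gamma - 1.0) * u0
          if r < D * t:
              # Stagnated, shocked region: uniform compressed density.
              return rho0 * ((gamma + 1.0) / (gamma - 1.0)) ** nu
          # Unshocked region, still falling in.
          return rho0 * (1.0 + u0 * t / r) ** (nu - 1)

      # Spherical case, gamma = 5/3: 64-fold compression behind the shock.
      print(noh_density(0.1, 0.6), noh_density(0.3, 0.6))  # 64.0 and 9.0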

  4. Eigenmode computation of cavities with perturbed geometry using matrix perturbation methods applied on generalized eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gorgizadeh, Shahnam; Flisgen, Thomas; van Rienen, Ursula

    2018-07-01

    Generalized eigenvalue problems are standard problems in the computational sciences. In electromagnetics they may arise from the discretization of the Helmholtz equation by, for example, the finite element method (FEM). Geometrical perturbations of the structure under concern lead to new generalized eigenvalue problems with different system matrices. Such perturbations may arise from manufacturing tolerances, harsh operating conditions, or shape optimization. Directly solving the eigenvalue problem for each perturbation is computationally costly. The perturbed eigenpairs can instead be approximated using eigenpair derivatives. Two common approaches for the calculation of eigenpair derivatives, namely the modal superposition method and direct algebraic methods, are discussed in this paper. Based on the direct algebraic methods, an iterative algorithm is developed for efficiently calculating the eigenvalues and eigenvectors of the perturbed geometry from those of the unperturbed geometry.
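
    For orientation, the first-order ingredient that both families of methods build on is compact: for a symmetric pencil A x = λ B x with a B-normalized eigenvector x, a small perturbation (ΔA, ΔB) shifts the eigenvalue by approximately x^T (ΔA − λ ΔB) x. A minimal numerical check of this formula (Python with SciPy; random toy matrices rather than the paper's FEM systems, and not its iterative algorithm):

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    n = 6
    A = rng.standard_normal((n, n)); A = A + A.T                   # symmetric
    B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # SPD

    lam, X = eigh(A, B)          # columns of X are B-orthonormal

    # Small symmetric perturbations standing in for a geometry change
    dA = 1e-3 * rng.standard_normal((n, n)); dA = dA + dA.T
    dB = 1e-3 * rng.standard_normal((n, n)); dB = dB + dB.T

    # First-order eigenvalue updates: dlam_k = x_k^T (dA - lam_k dB) x_k
    dlam = np.array([X[:, k] @ (dA - lam[k] * dB) @ X[:, k]
                     for k in range(n)])

    lam_exact = eigh(A + dA, B + dB, eigvals_only=True)
    print(np.max(np.abs(lam + dlam - lam_exact)))   # O(perturbation^2) error
    ```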

  5. The Roles of Internal Representation and Processing in Problem Solving Involving Insight: A Computational Complexity Perspective

    ERIC Educational Resources Information Center

    Wareham, Todd

    2017-01-01

    In human problem solving, there is a wide variation between individuals in problem solution time and success rate, regardless of whether or not this problem solving involves insight. In this paper, we apply computational and parameterized analysis to a plausible formalization of extended representation change theory (eRCT), an integration of…

  6. An optimized full-configuration-interaction nuclear orbital approach to a ``hard-core'' interaction problem: Application to (3He)N-Cl2(B) clusters (N<=4)

    NASA Astrophysics Data System (ADS)

    de Lara-Castells, M. P.; Villarreal, P.; Delgado-Barrio, G.; Mitrushchenkov, A. O.

    2009-11-01

    An efficient full-configuration-interaction nuclear orbital treatment has been recently developed as a benchmark quantum-chemistry-like method to calculate ground and excited "solvent" energies and wave functions in small doped (3He)N clusters (N ≤ 4) [M. P. de Lara-Castells, G. Delgado-Barrio, P. Villarreal, and A. O. Mitrushchenkov, J. Chem. Phys. 125, 221101 (2006)]. Additional methodological and computational details of the implementation, which uses an iterative Jacobi-Davidson diagonalization algorithm to properly address the inherent "hard-core" He-He interaction problem, are described here. The convergence of total energies, average pair He-He interaction energies, and relevant one- and two-body properties upon increasing the angular part of the one-particle basis set (expanded in spherical harmonics) has been analyzed, considering Cl2 as the dopant and a semiempirical model (T-shaped) He-Cl2(B) potential. Converged results are used to analyze global energetic and structural aspects as well as the configuration makeup of the wave functions, associated with the ground and low-lying "solvent" excited states. Our study reveals that besides the fermionic nature of 3He atoms, key roles in determining total binding energies and wave-function structures are played by the strong repulsive core of the He-He potential as well as its very weak attractive region, the most stable arrangement somehow departing from that of N He atoms equally spaced on an equatorial "ring" around the dopant. The present results for N = 4 fermions indicate the structural "pairing" of two 3He atoms at opposite sides on a broad "belt" around the dopant, executing a sort of asymmetric umbrella motion. This pairing is a compromise between maximizing the 3He-3He and He-dopant attractions, and suppressing at the same time the "hard-core" repulsion. Although the He-He attractive interaction is rather weak, its contribution to the total energy is found to scale as a power of three and it thus

  7. Hill Problem Analytical Theory to the Order Four. Application to the Computation of Frozen Orbits around Planetary Satellites

    NASA Technical Reports Server (NTRS)

    Lara, Martin; Palacian, Jesus F.

    2007-01-01

    Frozen orbits of the Hill problem are determined in the doubly averaged problem, where short- and long-period terms are removed by means of Lie transforms. The computation of initial conditions of the corresponding quasi-periodic solutions in the non-averaged problem is straightforward, because the perturbation method used provides the explicit equations of the transformation that connects the averaged and non-averaged models. A fourth-order analytical theory proves necessary for the accurate computation of the quasi-periodic, frozen orbits.

  8. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    NASA Astrophysics Data System (ADS)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, first within the scope of the ValueTools Conference in May 2011 (http://www.ncmip.org/2011/), and then at the initiative of Institut Farman in May 2012 and May 2013 (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finance. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success requires, first, the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high-rate and high-volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the

  9. Technology, attributions, and emotions in post-secondary education: An application of Weiner's attribution theory to academic computing problems.

    PubMed

    Maymon, Rebecca; Hall, Nathan C; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students' computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner's (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study.

  10. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

    The flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages), with each stage having only one machine. If any stage contains more than one machine providing an alternate processing facility, the problem becomes a flexible flow shop problem (FFSP). The FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving the FFSP is rather tricky and time consuming. To address this limitation, the teaching-learning-based optimization (TLBO) and JAYA algorithms are chosen for the study, because these are not only recent meta-heuristics but also require no tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by the genetic algorithm) is incorporated in the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
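
    To make the parameter-free claim concrete: the JAYA update moves each candidate toward the current best solution and away from the current worst, with only fresh uniform random numbers as coefficients. A minimal continuous-optimization sketch (Python; the FFSP application would additionally need a permutation encoding plus the paper's local search and mutation, none of which are shown):

    ```python
    import numpy as np

    def jaya(f, lb, ub, pop=20, iters=200, seed=1):
        """Minimal JAYA for box-constrained continuous minimization.

        Free of algorithm-specific parameters: only population size and
        iteration count are chosen; r1, r2 are fresh uniform randoms.
        """
        rng = np.random.default_rng(seed)
        dim = len(lb)
        X = rng.uniform(lb, ub, size=(pop, dim))
        F = np.apply_along_axis(f, 1, X)
        for _ in range(iters):
            best, worst = X[np.argmin(F)], X[np.argmax(F)]
            r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
            # Move toward the best and away from the worst solution
            Xn = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
            Xn = np.clip(Xn, lb, ub)
            Fn = np.apply_along_axis(f, 1, Xn)
            improved = Fn < F                    # greedy acceptance
            X[improved], F[improved] = Xn[improved], Fn[improved]
        return X[np.argmin(F)], F.min()

    sphere = lambda x: float(np.sum(x ** 2))
    x_best, f_best = jaya(sphere, lb=-5 * np.ones(4), ub=5 * np.ones(4))
    print(x_best, f_best)
    ```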

  11. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and experimental data from 2 facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details which would be necessary to compute the noise remains challenging. In particular, how to best simulate the effects of the experimental transition strip, and the associated high Reynolds number effects, was unclear. Furthermore, capturing the spanwise variation proved difficult.

  12. Computer Needs and Computer Problems in Developing Countries.

    ERIC Educational Resources Information Center

    Huskey, Harry D.

    A survey of the computer environment in a developing country is provided. Levels of development are considered and the educational requirements of countries at various levels are discussed. Computer activities in India, Burma, Pakistan, Brazil and a United Nations sponsored educational center in Hungary are all described. (SK/Author)

  13. Decision theory for computing variable and value ordering decisions for scheduling problems

    NASA Technical Reports Server (NTRS)

    Linden, Theodore A.

    1993-01-01

    Heuristics that guide search are critical when solving large planning and scheduling problems, but most variable and value ordering heuristics are sensitive to only one feature of the search state. One wants to combine evidence from all features of the search state into a subjective probability that a value choice is best, but there has been no solid semantics for merging evidence when it is conceived in these terms. Instead, variable and value ordering decisions should be viewed as problems in decision theory. This led to two key insights: (1) The fundamental concept that allows heuristic evidence to be merged is the net incremental utility that will be achieved by assigning a value to a variable. Probability distributions about net incremental utility can merge evidence from the utility function, binary constraints, resource constraints, and other problem features. The subjective probability that a value is the best choice is then derived from probability distributions about net incremental utility. (2) The methods used for rumor control in Bayesian Networks are the primary way to prevent cycling in the computation of probable net incremental utility. These insights lead to semantically justifiable ways to compute heuristic variable and value ordering decisions that merge evidence from all available features of the search state.
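
    As a toy illustration of insight (1), not the paper's machinery: once the evidence for each candidate value has been merged into a distribution over net incremental utility, the subjective probability that a value is the best choice can be estimated by sampling those distributions. A sketch with hypothetical Gaussian utilities (Python):

    ```python
    import numpy as np

    # Hypothetical example: each candidate value assignment carries a
    # subjective distribution over net incremental utility, here Gaussians
    # whose parameters stand in for merged heuristic evidence.
    rng = np.random.default_rng(42)
    means = np.array([3.0, 2.5, 1.0])
    stds = np.array([1.0, 0.3, 2.0])

    # P(value i is the best choice), estimated by Monte Carlo sampling
    samples = rng.normal(means, stds, size=(100_000, len(means)))
    p_best = np.bincount(np.argmax(samples, axis=1),
                         minlength=len(means)) / samples.shape[0]
    print(p_best)   # sorting by p_best yields the value-ordering heuristic
    ```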

  14. Analysis of Computer Teachers' Online Discussion Forum Messages about Their Occupational Problems

    ERIC Educational Resources Information Center

    Deryakulu, Deniz; Olkun, Sinan

    2007-01-01

    This study, using content analysis technique, examined the types of job-related problems that the Turkish computer teachers experienced and the types of social support provided by reciprocal discussions in an online forum. Results indicated that role conflict, inadequate teacher induction policies, lack of required technological infrastructure and…

  15. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these

  16. Reduction of community alcohol problems: computer simulation experiments in three counties.

    PubMed

    Holder, H D; Blose, J O

    1987-03-01

    A series of alcohol abuse prevention strategies was evaluated using computer simulation for three counties in the United States: Wake County, North Carolina, Washington County, Vermont and Alameda County, California. A system dynamics model composed of a network of interacting variables was developed for the pattern of alcoholic beverage consumption in a community. The relationship of community drinking patterns to various stimulus factors was specified in the model based on available empirical research. Stimulus factors included disposable income, alcoholic beverage prices, advertising exposure, minimum drinking age and changes in cultural norms. After a generic model was developed and validated on the national level, a computer-based system dynamics model was developed for each county, and a series of experiments was conducted to project the potential impact of specific prevention strategies. The project concluded that prevention efforts can both lower current levels of alcohol abuse and reduce projected increases in alcohol-related problems. Without such efforts, already high levels of alcohol-related family disruptions in the three counties could be expected to rise by an additional 6%, and drinking-related work problems by 1-5%, over the next 10 years after controlling for population growth. Of the strategies tested, indexing the price of alcoholic beverages to the consumer price index in conjunction with the implementation of a community educational program with well-defined target audiences has the best potential for significant problem reduction in all three counties.

  17. Evaluation of HardSys/HardDraw, An Expert System for Electromagnetic Interactions Modelling

    DTIC Science & Technology

    1993-05-01

    interactions in complex systems. This report gives a description of HardSys/HardDraw and reviews the main concepts used in its design. Various aspects of its ...HardDraw, an expert system for the modelling of electromagnetic interactions in complex systems. It consists of two main components: HardSys and HardDraw...HardSys is the advisor part of the expert system. It is knowledge-based, that is, it contains a database of models and properties for various types of

  18. Principal Support Is Imperative to the Retention of Teachers in Hard-to-Staff Schools

    ERIC Educational Resources Information Center

    Hughes, Amy L; Matt, John J.; O'Reilly, Frances L.

    2015-01-01

    Teacher retention is an ongoing problem in hard-to-staff schools. This research examined the relationship between principal support and retention of teachers in hard-to-staff schools. The purpose of this study was (a) to determine the relationship between teacher retention and principal support, and (b) to examine the perception of support between…

  19. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.

    PubMed

    Goto, Hayato

    2016-02-22

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.
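
    To convey the flavor of the scheme without the quantum mechanics, the toy below (Python) integrates a classical caricature of the proposal: a network of nonlinear oscillators whose pump term is ramped slowly through the bifurcation point, after which the sign of each amplitude is read out as an Ising spin. This is an illustrative classical analogue, not the quantum model itself; the coupling strength, ramp schedule, and step size are arbitrary choices.

    ```python
    import numpy as np

    def bifurcation_anneal(J, steps=2000, dt=0.01, a0=1.0, c0=0.2, seed=0):
        """Toy classical caricature of bifurcation-based annealing.

        Each Ising spin is a Kerr-like oscillator amplitude x_i. The pump
        a(t) ramps up through the bifurcation point; each x_i settles into
        one of two branches and sign(x_i) is read out as the spin. Targets
        E(s) = -1/2 sum_ij J_ij s_i s_j (J symmetric, zero diagonal).
        """
        rng = np.random.default_rng(seed)
        n = J.shape[0]
        x = 0.01 * rng.standard_normal(n)   # small symmetry-breaking noise
        y = np.zeros(n)
        for k in range(steps):
            a = a0 * k / steps              # slowly increased nonlinear pump
            y += dt * (-(x ** 3) - (a0 - a) * x + c0 * (J @ x))
            x += dt * a0 * y
        return np.sign(x)

    # Example: 4-spin ferromagnet; ground states are all-up / all-down
    J = np.ones((4, 4)) - np.eye(4)
    s = bifurcation_anneal(J)
    print(s, -0.5 * s @ J @ s)
    ```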

  20. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    PubMed Central

    Goto, Hayato

    2016-01-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997

  1. Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network

    NASA Astrophysics Data System (ADS)

    Goto, Hayato

    2016-02-01

    The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.

  2. Hard and flexible optical printed circuit board

    NASA Astrophysics Data System (ADS)

    Lee, El-Hang; Lee, Hyun Sik; Lee, S. G.; O, B. H.; Park, S. G.; Kim, K. H.

    2007-02-01

    We report on the design and fabrication of hard and flexible optical printed circuit boards (O-PCBs). The objective is to realize generic and application-specific O-PCBs, in either hard or flexible form, that are compact, light-weight, low-energy, high-speed, intelligent, and environmentally friendly, for low-cost and high-volume universal applications. The O-PCBs consist of 2-dimensional planar arrays of micro/nano-scale optical wires, circuits and devices that are interconnected and integrated to perform the functions of sensing, storing, transporting, processing, switching, routing and distributing optical signals on flat modular boards. For fabrication, the polymer and organic optical wires and waveguides are first fabricated on a board and are used to interconnect and integrate micro/nano-scale photonic devices. The micro/nano-optical functional devices include lasers, detectors, switches, sensors, directional couplers, multi-mode interference devices, ring resonators, photonic crystal devices, plasmonic devices, and quantum devices. For flexible boards, the optical waveguide arrays are fabricated on flexible polyethylene terephthalate (PET) substrates by UV embossing. An electrical layer carrying the VCSEL and PD arrays is laminated with the optical layer carrying the waveguide arrays. In both board types, the electrical lines are replaced with high-speed optical interconnections between chips over four waveguide channels at up to 10 Gb/s each. We discuss uses of hard or flexible O-PCBs for telecommunication systems, computer systems, transportation systems, space/avionic systems, and bio-sensor systems.

  3. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
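
    To make the exact-search idea concrete, here is a toy branch and bound over a rotamer-like discrete energy model (Python). It is purely illustrative: the pairwise energies are assumed nonnegative so that ignoring pairs among unassigned positions yields an admissible lower bound, and the paper's arc consistency and tree decomposition are not included.

    ```python
    import math

    def branch_and_bound(E1, E2):
        # E1[i][r]: unary energy of choice r at position i.
        # E2[(i, j)][ri][rj]: pairwise energy for i < j, assumed >= 0.
        n = len(E1)
        best = [math.inf, None]

        def pair(i, ri, j, rj):
            if (i, j) in E2:
                return E2[(i, j)][ri][rj]
            if (j, i) in E2:
                return E2[(j, i)][rj][ri]
            return 0.0

        def lower_bound(assign, cost):
            # cost so far + optimistic completion of each open position
            lb = cost
            for i in range(len(assign), n):
                lb += min(E1[i][r] + sum(pair(j, assign[j], i, r)
                                         for j in range(len(assign)))
                          for r in range(len(E1[i])))
            return lb

        def dfs(assign, cost):
            if len(assign) == n:
                if cost < best[0]:
                    best[0], best[1] = cost, tuple(assign)
                return
            i = len(assign)
            for r in range(len(E1[i])):
                step = E1[i][r] + sum(pair(j, assign[j], i, r)
                                      for j in range(i))
                assign.append(r)
                if lower_bound(assign, cost + step) < best[0]:
                    dfs(assign, cost + step)   # branch survives the bound
                assign.pop()

        dfs([], 0.0)
        return best

    E1 = [[0.0, 1.0], [0.5, 0.2], [0.3, 0.3]]           # toy unary terms
    E2 = {(0, 1): [[0.0, 2.0], [1.0, 0.0]],             # toy pair terms
          (1, 2): [[0.0, 0.1], [0.4, 0.0]]}
    print(branch_and_bound(E1, E2))                     # [0.8, (0, 0, 0)]
    ```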

  4. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.

  5. Computers and Children: Problems and Possibilities.

    ERIC Educational Resources Information Center

    Siegfried, Pat

    1983-01-01

    Discusses the use of computers by children, highlighting a definition of computer literacy, computer education in schools, computer software, microcomputers, programming languages, and public library involvement. Seven references and a 40-item bibliography are included. (EJS)

  6. Magnetic hyperthermia with hard-magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Kashevsky, Bronislav E.; Kashevsky, Sergey B.; Korenkov, Victor S.; Istomin, Yuri P.; Terpinskaya, Tatyana I.; Ulashchik, Vladimir S.

    2015-04-01

    Recent clinical trials of magnetic hyperthermia have confirmed, and even tightened, the Atkinson-Brezovich restriction on the magnetic field conditions applicable to any site of the human body. Subject to this restriction, which is grossly violated in numerous laboratory and small-animal studies, magnetic hyperthermia can rely on only a rather moderate heat source, so that optimization of the whole hyperthermia system remains, after all, the basic problem predetermining its clinical prospects. We present a short account of our combined (theoretical, laboratory, and small-animal) studies to demonstrate that such prospects should be associated with hyperthermia systems based on hard-magnetic (Stoner-Wohlfarth type) nanoparticles and strong low-frequency fields, rather than with superparamagnetic (Brownian or Néel) nanoparticles and weak high-frequency fields. This conclusion is backed by an analytical evaluation of the maximum absorption rates possible under the field restriction in the ideal hard-magnetic (Stoner-Wohlfarth) and the ideal superparamagnetic (single relaxation time) systems, by theoretical and experimental studies of the dynamic magnetic hysteresis in suspensions of movable hard-magnetic particles, by producing nanoparticles with adjusted coercivity and suspensions of such particles capable of effective energy absorption and intratumoral penetration, and, finally, by successful treatment of a mouse model tumor under field conditions acceptable for the whole human body.

  7. Developing a multimodal biometric authentication system using soft computing methods.

    PubMed

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision making.

  8. From transistor to trapped-ion computers for quantum chemistry.

    PubMed

    Yung, M-H; Casanova, J; Mezzacapo, A; McClean, J; Lamata, L; Aspuru-Guzik, A; Solano, E

    2014-01-07

    Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology.

  9. From transistor to trapped-ion computers for quantum chemistry

    PubMed Central

    Yung, M.-H.; Casanova, J.; Mezzacapo, A.; McClean, J.; Lamata, L.; Aspuru-Guzik, A.; Solano, E.

    2014-01-01

    Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology. PMID:24395054

  10. Rad-hard computer elements for space applications

    NASA Technical Reports Server (NTRS)

    Krishnan, G. S.; Longerot, Carl D.; Treece, R. Keith

    1993-01-01

    Space Hardened CMOS computer elements emulating a commercial microcontroller and microprocessor family have been designed, fabricated, qualified, and delivered for a variety of space programs including NASA's multiple launch International Solar-Terrestrial Physics (ISTP) program, Mars Observer, and government and commercial communication satellites. Design techniques and radiation performance of the 1.25 micron feature size products are described.

  11. An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Lin, Yu; Moret, Bernard M E

    2015-05-01

    Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.
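
    For context, the duplicate-free case mentioned above is compact enough to sketch. For two circular genomes over the same set of unique signed genes, the DCJ distance is n − c, where c is the number of cycles in the graph formed by pooling the adjacencies of both genomes. A minimal Python illustration of that easy case (our toy, not the paper's ILP, which targets the NP-hard duplicated-gene setting):

    ```python
    def adjacencies(genome):
        """Adjacency set of one circular chromosome given as signed genes.

        Each gene g has extremities (g, 't') tail and (g, 'h') head; +g is
        read tail -> head. Pairs consecutive extremities around the circle.
        """
        adj = []
        n = len(genome)
        for i in range(n):
            a, b = genome[i], genome[(i + 1) % n]
            right = (abs(a), 'h') if a > 0 else (abs(a), 't')
            left = (abs(b), 't') if b > 0 else (abs(b), 'h')
            adj.append(frozenset([right, left]))
        return adj

    def dcj_distance(A, B):
        """DCJ distance for circular genomes with the same unique genes:
        d = n - c, with c the number of cycles in the adjacency graph."""
        edges = {}
        for adj in adjacencies(A) + adjacencies(B):
            for v in adj:
                edges.setdefault(v, []).append(adj)
        seen, cycles = set(), 0
        for start in edges:
            if start in seen:
                continue
            cycles += 1                     # every vertex has degree 2, so
            stack = [start]                 # each component is one cycle
            while stack:
                v = stack.pop()
                if v in seen:
                    continue
                seen.add(v)
                for adj in edges[v]:
                    stack.extend(u for u in adj if u not in seen)
        return len(A) - cycles

    print(dcj_distance([1, 2, 3, 4], [1, -3, -2, 4]))   # one inversion: 1
    ```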

  12. Modeling biological problems in computer science: a case study in genome assembly.

    PubMed

    Medvedev, Paul

    2018-01-30

    As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded.
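
    The abstract does not reproduce the resulting model, but a common endpoint of such modeling exercises is the de Bruijn graph formulation of assembly, sketched here as a generic illustration (Python; hypothetical reads, not code from the tutorial):

    ```python
    from collections import defaultdict

    def de_bruijn(reads, k):
        """Build a de Bruijn graph: nodes are (k-1)-mers, one edge per k-mer
        occurrence. Assembly then walks paths/Eulerian trails in this graph."""
        graph = defaultdict(list)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].append(kmer[1:])
        return graph

    # Toy reads sampled from the (hypothetical) sequence "ATGGCGTGCA"
    reads = ["ATGGCGT", "GGCGTGC", "CGTGCA"]
    for src, dsts in de_bruijn(reads, 4).items():
        print(src, "->", dsts)
    ```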

  13. HEAP: Heat Energy Analysis Program, a computer model simulating solar receivers. [solving the heat transfer problem

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1979-01-01

    A computer program that can distinguish between different receiver designs and predict transient performance under variable solar flux, ambient temperature, etc., has a basic structure that fits a general heat transfer problem, but with specific features custom-made for solar receivers. The code is written in the MBASIC computer language. The methodology followed in solving the heat transfer problem is explained. A program flow chart, an explanation of input and output tables, and an example of the simulation of a cavity-type solar receiver are included.

  14. Digital dissection - using contrast-enhanced computed tomography scanning to elucidate hard- and soft-tissue anatomy in the Common Buzzard Buteo buteo.

    PubMed

    Lautenschlager, Stephan; Bright, Jen A; Rayfield, Emily J

    2014-04-01

    Gross dissection has a long history as a tool for the study of human or animal soft- and hard-tissue anatomy. However, apart from being a time-consuming and invasive method, dissection is often unsuitable for very small specimens and often cannot capture spatial relationships of the individual soft-tissue structures. The handful of comprehensive studies on avian anatomy using traditional dissection techniques focus nearly exclusively on domestic birds, whereas raptorial birds, and in particular their cranial soft tissues, are essentially absent from the literature. Here, we digitally dissect, identify, and document the soft-tissue anatomy of the Common Buzzard (Buteo buteo) in detail, using the new approach of contrast-enhanced computed tomography with Lugol's iodine. The architecture of different muscle systems (adductor, depressor, ocular, hyoid, neck musculature), neurovascular, and other soft-tissue structures is three-dimensionally visualised and described in unprecedented detail. The three-dimensional model is further presented as an interactive PDF to facilitate the dissemination and accessibility of anatomical data. Due to the digital nature of the data derived from the computed tomography scanning and segmentation processes, these methods hold the potential for further computational analyses beyond descriptive and illustrative purposes.

  15. Bond-orientational analysis of hard-disk and hard-sphere structures.

    PubMed

    Senthil Kumar, V; Kumaran, V

    2006-05-28

    We report the bond-orientational analysis results for the thermodynamic, random, and homogeneously sheared inelastic structures of hard-disks and hard-spheres. The thermodynamic structures show a sharp rise in the order across the freezing transition. The random structures show the absence of crystallization. The homogeneously sheared structures get ordered at a packing fraction higher than the thermodynamic freezing packing fraction, due to the suppression of crystal nucleation. On shear ordering, strings of close-packed hard-disks in two dimensions and close-packed layers of hard-spheres in three dimensions, oriented along the velocity direction, slide past each other. Such a flow creates a considerable amount of fourfold order in two dimensions and body-centered-tetragonal (bct) structure in three dimensions. These transitions are the flow analogs of the martensitic transformations occurring in metals due to the stresses induced by a rapid quench. In hard-disk structures, using the bond-orientational analysis we show the presence of fourfold order. In sheared inelastic hard-sphere structures, even though the global bond-orientational analysis shows that the system is highly ordered, a third-order rotational invariant analysis shows that only about 40% of the spheres have face-centered-cubic (fcc) order, even in the dense and near-elastic limits, clearly indicating the coexistence of multiple crystalline orders. When layers of close-packed spheres slide past each other, in addition to the bct structure, the hexagonal-close-packed (hcp) structure is formed due to the random stacking faults. Using the Honeycutt-Andersen pair analysis and an analysis based on the 14-faceted polyhedra having six quadrilateral and eight hexagonal faces, we show the presence of bct and hcp signatures in shear ordered inelastic hard-spheres. Thus, our analysis shows that the dense sheared inelastic hard-spheres have a mixture of fcc, bct, and hcp structures.
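
    For readers who want to reproduce this style of analysis, the global bond-orientational order parameter for disks is straightforward: ψ6 averages exp(6iθ) over the bonds from each particle to its near neighbors, and replacing 6 with 4 probes the fourfold order discussed above. A minimal sketch (Python with SciPy; the neighbor cutoff is an arbitrary choice here, and real analyses often use Voronoi neighbors instead):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def psi6(points, rcut):
        """Global bond-orientational order |<psi_6>| for 2D disk packings.

        psi_6(j) = mean over neighbors k of exp(6i * theta_jk); values near
        1 indicate sixfold (crystalline) order, near 0 a disordered fluid.
        """
        tree = cKDTree(points)
        local = []
        for j, p in enumerate(points):
            nbrs = [k for k in tree.query_ball_point(p, rcut) if k != j]
            if not nbrs:
                continue
            d = points[nbrs] - p
            theta = np.arctan2(d[:, 1], d[:, 0])
            local.append(np.mean(np.exp(6j * theta)))
        return abs(np.mean(local))

    # Perfect triangular lattice -> psi6 near 1; random points -> near 0
    a = np.arange(10)
    lattice = np.array([(i + 0.5 * (j % 2), j * np.sqrt(3) / 2)
                        for i in a for j in a])
    print(psi6(lattice, rcut=1.1))
    print(psi6(np.random.default_rng(0).uniform(0, 10, (100, 2)), rcut=1.1))
    ```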

  16. Solving Set Cover with Pairs Problem using Quantum Annealing

    NASA Astrophysics Data System (ADS)

    Cao, Yudong; Jiang, Shuxian; Perouli, Debbie; Kais, Sabre

    2016-09-01

    Here we consider using quantum annealing to solve Set Cover with Pairs (SCP), an NP-hard combinatorial optimization problem that plays an important role in networking, computational biology, and biochemistry. We show an explicit construction of Ising Hamiltonians whose ground states encode the solution of SCP instances. We numerically simulate the time-dependent Schrödinger equation in order to test the performance of quantum annealing for random instances and compare it with that of simulated annealing. We also discuss explicit embedding strategies for realizing our Hamiltonian construction on the D-Wave type restricted Ising Hamiltonian based on Chimera graphs. Our embedding on the Chimera graph preserves the structure of the original SCP instance and, in particular, the embedding for general complete bipartite graphs and logical disjunctions may be of broader use than the specific problem we deal with.

  17. Theory of the interface between a classical plasma and a hard wall

    NASA Astrophysics Data System (ADS)

    Ballone, P.; Pastore, G.; Tosi, M. P.

    1983-09-01

    The interfacial density profile of a classical one-component plasma confined by a hard wall is studied in planar and spherical geometries. The approach adapts to interfacial problems a modified hypernetted-chain approximation developed by Lado and by Rosenfeld and Ashcroft for the bulk structure of simple liquids. The specific new aim is to embody self-consistently into the theory a contact theorem, fixing the plasma density at the wall through an equilibrium condition which involves the electrical potential drop across the interface and the bulk pressure. The theory is brought into fully quantitative contact with computer simulation data for a plasma confined in a spherical cavity of large but finite radius. The interfacial potential at the point of zero charge is accurately reproduced by suitably combining the contact theorem with relevant bulk properties in a simple, approximate representation of the interfacial charge density profile.

  18. Theory of the interface between a classical plasma and a hard wall

    NASA Astrophysics Data System (ADS)

    Ballone, P.; Pastore, G.; Tosi, M. P.

    1984-12-01

    The interfacial density profile of a classical one-component plasma confined by a hard wall is studied in planar and spherical geometries. The approach adapts to interfacial problems a modified hypernetted-chain approximation developed by Lado and by Rosenfeld and Ashcroft for the bulk structure of simple liquids. The specific new aim is to embody self-consistently into the theory a “contact theorem”, fixing the plasma density at the wall through an equilibrium condition which involves the electrical potential drop across the interface and the bulk pressure. The theory is brought into fully quantitative contact with computer simulation data for a plasma confined in a spherical cavity of large but finite radius. It is also shown that the interfacial potential at the point of zero charge is accurately reproduced by suitably combining the contact theorem with relevant bulk properties in a simple, approximate representation of the interfacial charge density profile.

  19. Hard-tip, soft-spring lithography.

    PubMed

    Shim, Wooyoung; Braunschweig, Adam B; Liao, Xing; Chai, Jinan; Lim, Jong Kuk; Zheng, Gengfeng; Mirkin, Chad A

    2011-01-27

    Nanofabrication strategies are becoming increasingly expensive and equipment-intensive, and consequently less accessible to researchers. As an alternative, scanning probe lithography has become a popular means of preparing nanoscale structures, in part owing to its relatively low cost and high resolution, and a registration accuracy that exceeds most existing technologies. However, increasing the throughput of cantilever-based scanning probe systems while maintaining their resolution and registration advantages has from the outset been a significant challenge. Even with impressive recent advances in cantilever array design, such arrays tend to be highly specialized for a given application, expensive, and often difficult to implement. It is therefore difficult to imagine commercially viable production methods based on scanning probe systems that rely on conventional cantilevers. Here we describe a low-cost and scalable cantilever-free tip-based nanopatterning method that uses an array of hard silicon tips mounted onto an elastomeric backing. This method, which we term hard-tip, soft-spring lithography, overcomes the throughput problems of cantilever-based scanning probe systems and the resolution limits imposed by the use of elastomeric stamps and tips: it is capable of delivering materials or energy to a surface to create arbitrary patterns of features with sub-50-nm resolution over centimetre-scale areas. We argue that hard-tip, soft-spring lithography is a versatile nanolithography strategy that should be widely adopted by academic and industrial researchers for rapid prototyping applications.

  20. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor were analyzed to build a logistic regression model that predicted success and failure of students'…

  1. Asymptotic analysis of online algorithms and improved scheme for the flow shop scheduling problem with release dates

    NASA Astrophysics Data System (ADS)

    Bai, Danyu

    2015-08-01

    This paper discusses the flow shop scheduling problem of minimising the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation focuses on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale becomes sufficiently large. To further enhance the quality of the original solutions, an improvement scheme is provided for these algorithms. A new lower bound with a performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results for the single-machine TQCT problem with release dates are also obtained in the course of deducing the main theorem.
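
    To illustrate the dispatching rule being analyzed: Shortest Processing Time among Available jobs starts, at every decision instant, the released-but-unscheduled job with the smallest processing time. A single-machine sketch computing the total quadratic completion time (Python; the paper's flow shop generalization is not reproduced here):

    ```python
    import heapq

    def spt_available_tqct(jobs):
        """Online SPT-among-available rule on a single machine.

        jobs: list of (release_date, processing_time). Returns the total
        quadratic completion time sum of C_j**2 under the rule.
        """
        jobs = sorted(jobs)                  # by release date
        t, i, heap, tqct = 0, 0, [], 0.0
        while i < len(jobs) or heap:
            while i < len(jobs) and jobs[i][0] <= t:
                heapq.heappush(heap, (jobs[i][1], jobs[i][0]))  # key: p_j
                i += 1
            if not heap:                     # machine idles to next release
                t = jobs[i][0]
                continue
            p, _ = heapq.heappop(heap)
            t += p
            tqct += t ** 2
        return tqct

    print(spt_available_tqct([(0, 3), (1, 1), (2, 4), (5, 2)]))
    ```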

  2. EZLP: An Interactive Computer Program for Solving Linear Programming Problems. Final Report.

    ERIC Educational Resources Information Center

    Jarvis, John J.; And Others

    Designed for student use in solving linear programming problems, the interactive computer program described (EZLP) permits the student to input the linear programming model in exactly the same manner in which it would be written on paper. This report includes a brief review of the development of EZLP; narrative descriptions of program features,…

  3. Symbolic Computational Approach to the Marangoni Convection Problem With Soret Diffusion

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond

    1998-01-01

    A recently reported solution for the stationary stability of a thermosolutal system with Soret diffusion is re-derived and examined using a symbolic computational package. Symbolic computational languages are well suited for such an analysis and facilitate a pragmatic approach that is adaptable to similar problems. Linearization of the equations, normal mode analysis, and extraction of the final solution are performed in a Mathematica notebook format. An exact solution is obtained for stationary stability in the limit of zero gravity. A closed-form expression is also obtained for the location of asymptotes in the relevant parameter space (Sm_c, Ma_c). The stationary stability behavior is conveniently examined within the symbolic language environment. An abbreviated version of the Mathematica notebook is given in the Appendix.

  4. Computation and analysis for a constrained entropy optimization problem in finance

    NASA Astrophysics Data System (ADS)

    He, Changhong; Coleman, Thomas F.; Li, Yuying

    2008-12-01

    In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation was proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of calibration.
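
    The quadratic penalty device itself is generic and easy to demonstrate: an equality constraint g(x) = 0 is replaced by a penalty term mu * g(x)**2 with increasing weight mu. A toy sketch (Python with SciPy; hypothetical objective and constraint standing in for the entropy-calibration problem):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy equality-constrained problem: minimize f(x) subject to g(x) = 0.
    f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
    g = lambda x: x[0] + x[1] - 2        # feasibility holds only approximately

    x = np.zeros(2)
    for mu in [1.0, 10.0, 100.0, 1000.0]:        # increasing penalty weight
        penalized = lambda x, mu=mu: f(x) + mu * g(x) ** 2
        x = minimize(penalized, x).x             # warm-start from previous x
    print(x, g(x))                               # -> near (0.5, 1.5), g ~ 0
    ```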

  5. Efficient bounding schemes for the two-center hybrid flow shop scheduling problem with removal times.

    PubMed

    Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. A job's removal time is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases: the first is a constructive phase in which an initial feasible solution is provided, while the second is an improvement phase. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.

  6. Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times

    PubMed Central

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. A job's removal time is the duration required to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases: the first is a constructive phase in which an initial feasible solution is provided, while the second is an improvement phase. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures. PMID:25610911

  7. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
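
    For reference, the component being fuzzified is the classical roulette wheel: each individual receives a slice of the wheel proportional to its (inverted, for minimization) fitness, and parents are sampled accordingly. A crisp baseline sketch (Python; the paper's fuzzy membership reshaping of the slices is not shown):

    ```python
    import numpy as np

    def roulette_select(fitness, rng):
        """Classical roulette wheel selection for a minimization GA.

        Lower makespan -> larger slice of the wheel. A fuzzy variant would
        reshape these slices with a membership function before sampling.
        """
        fitness = np.asarray(fitness, dtype=float)
        weight = fitness.max() - fitness + 1e-9     # invert: small is good
        prob = weight / weight.sum()
        return rng.choice(len(fitness), p=prob)

    rng = np.random.default_rng(0)
    makespans = [55, 60, 42, 70, 48]                # toy population fitness
    picks = [roulette_select(makespans, rng) for _ in range(1000)]
    print(np.bincount(picks) / 1000)                # index 2 chosen most often
    ```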

  8. On computational experiments in some inverse problems of heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2016-11-01

    The results of mathematical modeling of effective heat and mass transfer on hypersonic aircraft permeable surfaces are considered. The physico-chemical processes (dissociation and ionization) in the laminar boundary layer of a compressible gas are taken into account. Algorithms for control restoration are suggested for the interpolation and approximation statements of the heat and mass transfer inverse problems. The differences between the methods applied to search for solutions of these two statements are discussed. Both algorithms have been implemented as programs, and many computational experiments were carried out with them. The boundary layer parameters obtained from solving the direct problems by means of A. A. Dorodnicyn's generalized integral relations method were used to obtain the inverse problem solutions. Two types of blowing-law restoration for the inverse problem in the interpolation statement are presented as examples. The influence of the temperature factor on the blowing restoration is investigated. The differing sensitivity of the controllable parameters (the local heat flow and the local tangential friction) to stepwise (discrete) changes of the control (the blowing) and to the position of the switching point is studied.

  9. Hard Real-Time: C++ Versus RTSJ

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Reinholtz, William K.

    2004-01-01

    In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that is not a deterrent for veteran real-time programmers, for whom explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes that manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.

  10. Differentiating between Women in Hard and Soft Science and Engineering Disciplines

    ERIC Educational Resources Information Center

    Camp, Amanda G.; Gilleland, Diane S.; Pearson, Carolyn; Vander Putten, James

    2010-01-01

    The intent of this study was to investigate characteristics that differentiate between women in soft (social, psychological, and life sciences) and hard (engineering, mathematics, computer science, physical science) science and engineering disciplines. Using the Beginning Postsecondary Students Longitudinal Study: 1996-2001 (2002), a descriptive…

  11. Using functional assessment to treat behavior problems of deaf and hard of hearing children diagnosed with autism spectrum disorder.

    PubMed

    Zane, Thomas; Carlson, Mark; Estep, David; Quinn, Mike

    2014-01-01

    A defining feature of autism spectrum disorders is atypical behaviors, e.g., stereotypy, noncompliance, rituals, and aggression. Deaf and hard of hearing individuals with autism present a greater challenge because of additional issues related to their hearing status. One conceptualization of problem behavior is that it serves a communication function, i.e., the person has learned that certain misbehaviors may be reinforced in some way. The present article describes "functional behavior assessment," a group of state-of-the-art methodologies that allow a caregiver to determine the cause of the behavior, so that treatment, based on that cause, will be more effective. Different methods of functional assessment are described, along with a step-by-step implementation sequence. The results of a functional assessment should lead to more effective programming, resulting in quicker elimination of the behavioral concerns, and allow the person to gain access to greater independence and more reinforcement.

  12. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to modeling and optimizing a design concept when the model consists of mathematical expressions written in SOL; for such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  13. A knowledge-based system with learning for computer communication network design

    NASA Technical Reports Server (NTRS)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

    Computer communication network design is well known to be complex and hard, and for that reason the most effective methods used to solve it are heuristic. The weaknesses of these techniques are listed, and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment for dealing with many relevant design parameters such as link cost, link capacity, and message delay.

  14. Automated Hypothesis Tests and Standard Errors for Nonstandard Problems with Description of Computer Package: A Draft.

    ERIC Educational Resources Information Center

    Lord, Frederic M.; Stocking, Martha

    A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…

  15. Hard X-ray (greater than 10 keV) telescope for space astronomy from the Moon

    NASA Astrophysics Data System (ADS)

    Frontera, F.; de Chiara, P.; Pasqualini, G.

    1994-06-01

    The use of the Moon as a site for deep observations of astrophysical sources in hard X-rays (greater than 10 keV) is very exciting, in spite of several technological problems to be solved. A strong limitation on the sensitivity of hard X-ray experiments is imposed by the use of direct-viewing (with or without masks) detectors. We propose a lunar hard X-ray observatory (LHEXO) that makes use of a hard X-ray concentrator based on confocal paraboloidal mirrors made of mosaic crystals of graphite (002). In this paper we describe the telescope concept and its expected performance.

  16. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids

    PubMed Central

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

    Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is hardly applicable to autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The trajectory optimization problem for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining a database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal soft-landing trajectory is then determined rapidly by applying the trained network in online guidance. Monte Carlo simulations of soft landing on the asteroid 433 Eros are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382
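
    The offline-training/online-evaluation split can be sketched as follows in Python; the state and co-state dimensions, the number of centers, and the random training data are placeholders for the database of solved TPBVP instances described in the abstract.

        import numpy as np

        def rbf_features(X, centers, gamma):
            # Gaussian radial basis features: exp(-gamma * ||x - c||^2)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        # hypothetical offline database: initial states -> initial co-states
        # (in the paper these come from solved TPBVP instances, not random data)
        X_train = np.random.uniform(-1, 1, size=(500, 6))   # position + velocity
        Y_train = np.random.randn(500, 6)                   # placeholder co-states

        centers = X_train[np.random.choice(len(X_train), 50, replace=False)]
        Phi = rbf_features(X_train, centers, gamma=2.0)

        # linear output weights by regularized least squares (offline training)
        W = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]),
                            Phi.T @ Y_train)

        def predict_costate(x0):
            # online use: one cheap matrix product replaces solving the TPBVP
            return rbf_features(x0[None, :], centers, gamma=2.0) @ W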

  17. Solving the Mystery of the Short-Hard Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Fox, Derek

    2005-07-01

    Eight years after the afterglow detections that revolutionized studies of the long-soft gamma-ray bursts, not even one afterglow of a short-hard GRB has been seen, and the nature of these events has become one of the most important problems in GRB research. The Swift satellite, expected to be in full operation throughout Cycle 14, will report few-arcsecond localizations for short-hard bursts in minutes, enabling prompt, deep optical afterglow searches for the first time. Discovery and observation of the first short-hard optical afterglows will answer most of the critical questions about these events: What are their distances and energies? Do they occur in distant galaxies, and if so, in which regions of those galaxies? Are they the result of collimated or quasi-spherical explosions? In combination with an extensive rapid-response ground-based campaign, we propose to make the critical high-sensitivity HST TOO observations that will allow us to answer these questions. If theorists are correct in attributing the short-hard bursts to binary neutron star coalescence events, then they will serve as signposts to the primary targeted source population for ground-based gravitational-wave detectors, and short-hard burst studies will have a vital role to play in guiding those observations.

  18. Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes

    NASA Astrophysics Data System (ADS)

    Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv

    2007-04-01

    In general, the flow stress models used in computer simulation of machining processes are functions of the effective strain, effective strain rate, and temperature developed during the cutting process. However, these models do not adequately describe material behavior in hard machining, where material hardness ranges between 45 and 60 HRC. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for AISI H13 tool steel, applicable over the range of hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and different cutting regime parameters. Predicted results are validated by comparison with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.

  19. Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills

    PubMed Central

    Polyak, Stephen T.; von Davier, Alina A.; Peterschmidt, Kurt

    2017-01-01

    This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery, and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of user's sub-skill performance scores based on their patterns of selected dialog responses. PMID:29238314

  20. Computational Psychometrics for the Measurement of Collaborative Problem Solving Skills.

    PubMed

    Polyak, Stephen T; von Davier, Alina A; Peterschmidt, Kurt

    2017-01-01

    This paper describes a psychometrically-based approach to the measurement of collaborative problem solving skills, by mining and classifying behavioral data both in real-time and in post-game analyses. The data were collected from a sample of middle school children who interacted with a game-like, online simulation of collaborative problem solving tasks. In this simulation, a user is required to collaborate with a virtual agent to solve a series of tasks within a first-person maze environment. The tasks were developed following the psychometric principles of Evidence Centered Design (ECD) and are aligned with the Holistic Framework developed by ACT. The analyses presented in this paper are an application of an emerging discipline called computational psychometrics which is growing out of traditional psychometrics and incorporates techniques from educational data mining, machine learning and other computer/cognitive science fields. In the real-time analysis, our aim was to start with limited knowledge of skill mastery, and then demonstrate a form of continuous Bayesian evidence tracing that updates sub-skill level probabilities as new conversation flow event evidence is presented. This is performed using Bayes' rule and conversation item conditional probability tables. The items are polytomous and each response option has been tagged with a skill at a performance level. In our post-game analysis, our goal was to discover unique gameplay profiles by performing a cluster analysis of user's sub-skill performance scores based on their patterns of selected dialog responses.
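
    The continuous evidence-tracing step described above is ordinary Bayesian updating. A minimal Python sketch, with an invented conditional probability table standing in for the calibrated item tables:

        # P(response option | mastered) and P(response option | not mastered)
        # per dialog item; this table is invented for illustration, not ACT's.
        cpt = {
            "item1": {"mastered": [0.7, 0.2, 0.1],
                      "not_mastered": [0.2, 0.3, 0.5]},
        }

        def update_mastery(prior, item, option):
            # one step of Bayes' rule on a polytomous response
            like_m = cpt[item]["mastered"][option]
            like_n = cpt[item]["not_mastered"][option]
            return like_m * prior / (like_m * prior + like_n * (1 - prior))

        p = 0.5                      # start with limited knowledge of mastery
        for item, option in [("item1", 0), ("item1", 2)]:
            p = update_mastery(p, item, option)
            print(f"P(mastered) = {p:.3f}")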

  1. Remotely Telling Humans and Computers Apart: An Unsolved Problem

    NASA Astrophysics Data System (ADS)

    Hernandez-Castro, Carlos Javier; Ribagorda, Arturo

    The ability to tell humans and computers apart is imperative to protect many services from misuse and abuse. For this purpose, tests called CAPTCHAs or HIPs have been designed and put into production. Recent history shows that most (if not all) can be broken given enough time and commercial interest: CAPTCHA design seems to be a much more difficult problem than previously thought. The assumption that difficult AI problems can be easily converted into valid CAPTCHAs is misleading. There are also some extrinsic problems that do not help, especially the large number of in-house designs that are put into production without any prior public critique. In this paper we present a state-of-the-art survey of current HIPs, including proposals that are now in production. We classify them according to their basic design ideas. We discuss current attacks as well as future attack paths, and we also present common errors in design and how implementation flaws can transform a not necessarily bad idea into a weak CAPTCHA. We present examples of these flaws using specific well-known CAPTCHAs. On a more theoretical level, we discuss the threat model: the risks confronted and the countermeasures. Finally, we introduce and discuss some desirable properties that new HIPs should have, concluding with some proposals for future work, including methodologies for design, implementation, and security assessment.

  2. Robust Decision Making: The Cognitive and Computational Modeling of Team Problem Solving for Decision Making under Complex and Dynamic Conditions

    DTIC Science & Technology

    2015-07-14

    Report AFRL-OSR-VA-TR-2015-0202, grant FA9550-12-1…. The effort models team functioning as teams solve complex problems, and proposes means to improve the performance of teams under changing or adversarial conditions.

  3. Technology skills assessment for deaf and hard of hearing students in secondary school.

    PubMed

    Luft, Pamela; Bonello, Mary; Zirzow, Nichole K

    2009-01-01

    To be competitive in the workplace, deaf and hard of hearing students must not only possess basic computer literacy but also know how to use and care for personal assistive and listening technology. An instrument was developed and pilot-tested on 45 middle school and high school deaf and hard of hearing students in 5 public school programs, 4 urban and 1 suburban, to assess these students' current technology skills and to prepare them for post-high school expectations. The researchers found that the students' computer skills depended on their access to technology, which was not always present in the schools. Many students also did not know basic care practices or troubleshooting techniques for their own personal hearing aids (if worn), or how to access or use personal assistive technology.

  4. Learning Probabilities in Computer Engineering by Using a Competency- and Problem-Based Approach

    ERIC Educational Resources Information Center

    Khoumsi, Ahmed; Hadjou, Brahim

    2005-01-01

    Our department has redesigned its electrical and computer engineering programs by adopting a learning methodology based on competence development, problem solving, and the realization of design projects. In this article, we show how this pedagogical approach has been successfully used for learning probabilities and their application to computer…

  5. Computational study of the melting-freezing transition in the quantum hard-sphere system for intermediate densities. II. Structural features.

    PubMed

    Sesé, Luis M; Bailey, Lorna E

    2007-04-28

    The structural features of the quantum hard-sphere system in the region of the fluid-face-centered-cubic-solid transition, for reduced number densities 0.45 […], are computed by solving appropriate Ornstein-Zernike equations. A number of significant regularities in the above parameters involving both sides of the crystallization line are reported, and a comparison with results for Lennard-Jones quantum systems that can be found in the literature is made. On the other hand, the main amplitudes of the quantum fluid structure factors follow a complex behavior along the crystallization line, which points to difficulties in identifying a neat rule, similar to that of Hansen-Verlet for classical fluids, for these quantum amplitudes. To complete this study, a further analysis of the instantaneous and centroid triplet correlations in the vicinity of the fluid-face-centered-cubic-solid phase transition of hard spheres has been performed, and some interesting differences between the classical and quantum melting-freezing transitions are observed.

  6. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.

  7. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106
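
    For scale, the underlying optimization is the classical transportation linear program. The sketch below solves a tiny instance with an off-the-shelf LP solver rather than the authors' Shortlist simplex; the supplies, demands, and unit costs are made up.

        import numpy as np
        from scipy.optimize import linprog

        # a small transportation instance; the Earth Mover's Distance is the
        # optimal cost divided by the total mass moved
        supply = np.array([0.4, 0.6])
        demand = np.array([0.5, 0.3, 0.2])
        cost = np.array([[1.0, 2.0, 3.0],
                         [2.0, 1.0, 2.0]])

        m, n = cost.shape
        A_eq = []
        for i in range(m):                     # each source ships its supply
            row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1; A_eq.append(row)
        for j in range(n):                     # each sink receives its demand
            col = np.zeros(m * n); col[j::n] = 1; A_eq.append(col)
        b_eq = np.concatenate([supply, demand])

        res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                      bounds=(0, None), method="highs")
        print("EMD =", res.fun / supply.sum())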

  8. From video to computation of biological fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Dillard, Seth I.; Buchholz, James H. J.; Udaykumar, H. S.

    2016-04-01

    This work deals with the techniques necessary to obtain a purely Eulerian procedure for conducting CFD simulations of biological systems with moving-boundary flow phenomena. Eulerian approaches obviate difficulties associated with mesh generation to describe or fit flow meshes to body surfaces. The challenges of constructing embedded boundary information, prescribing body motions, and applying boundary conditions on the moving bodies for flow computation are addressed. The overall approach is applied to the study of a fluid-structure interaction problem, i.e., the hydrodynamics of a swimming American eel, where the motion of the eel is derived from video imaging. It is shown that some first-blush approaches do not work; careful consideration of appropriate techniques to connect moving images to flow simulations is therefore necessary and forms the main contribution of the paper. A combination of level set-based active contour segmentation with optical flow and image morphing is shown to enable the image-to-computation process.
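
    As a rough illustration of extracting boundary motion from video, the following Python sketch uses OpenCV's dense Farneback optical flow; the frame filenames are hypothetical, and a simple Otsu threshold stands in for the level set-based segmentation used in the paper.

        import cv2
        import numpy as np

        # two consecutive grayscale frames of the swimming body (hypothetical files)
        prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

        # dense per-pixel (dx, dy) motion between the two frames
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

        # a crude body mask (Otsu threshold) and its outline
        _, mask = cv2.threshold(prev, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        boundary = cv2.Canny(mask, 100, 200) > 0

        # sample the flow along the outline to estimate boundary velocity
        speeds = np.linalg.norm(flow[boundary], axis=1)
        print("mean boundary speed (px/frame):", speeds.mean())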

  9. Use of computer modeling to investigate a dynamic interaction problem in the Skylab TACS quad-valve package

    NASA Technical Reports Server (NTRS)

    Hesser, R. J.; Gershman, R.

    1975-01-01

    A valve opening-response problem encountered during development of a control valve for the Skylab thruster attitude control system (TACS) is described. The problem involved effects of dynamic interaction among valves in the quad-redundant valve package. Also described is a detailed computer simulation of the quad-valve package which was helpful in resolving the problem.

  10. The complexity of proving chaoticity and the Church-Turing thesis

    NASA Astrophysics Data System (ADS)

    Calude, Cristian S.; Calude, Elena; Svozil, Karl

    2010-09-01

    Proving the chaoticity of some dynamical systems is equivalent to solving the hardest problems in mathematics. Conversely, classical physical systems may "compute the hard or even the incomputable" by measuring observables which correspond to computationally hard or even incomputable problems.

  11. Enhancing the Hardness of Sintered SS 17-4PH Using Nitriding Process for Bracket Orthodontic Application

    NASA Astrophysics Data System (ADS)

    Suharno, B.; Supriadi, S.; Ayuningtyas, S. T.; Widjaya, T.; Baek, E. R.

    2018-01-01

    Orthodontic brackets create tooth movement by applying force from the wire to the bracket, which is then transferred to the teeth. However, friction between brackets and wires reduces the load available for moving teeth towards the desired position. To overcome this problem, a surface treatment such as nitriding can increase the efficiency of the transferred force by improving material hardness, since hard materials exhibit low friction. This work investigated nitriding treatment to form a nitride layer that affects the hardness of sintered SS 17-4PH. The nitride layers were produced by nitriding at various temperatures (470°C, 500°C, and 530°C) with an 8 h holding time under a 50% NH3 atmosphere. Optical metallography was conducted to compare the microstructures of the base metal and the surface, while the increase in surface hardness was measured with a Vickers microhardness tester. A hardened surface layer was obtained after the gaseous nitriding process owing to the formation of a nitride layer containing Fe4N, CrN, and Fe-αN. The hardened layers reached values of up to 1051 HV, with thicknesses varying from 53 to 119 μm. A precipitation process occurring in conjunction with nitriding can lower the hardness by depleting the nitrogen content in the solid-solution phase, thereby weakening the nitrogen expansion of the martensite lattice.

  12. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    NASA Astrophysics Data System (ADS)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. In this generalization, we assume a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different processing speeds feeding a single assembly machine in series. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs of different sizes. We also consider the multi-objective problem (MOP) of simultaneously minimizing mean flow time and the number of tardy products. The problem is known to be NP-hard and is important in practice, as these criteria reflect the customer's demand and the manufacturer's perspective. Since this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. The parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms were tested in Matlab. Our computational experiments indicate that the proposed algorithms can be implemented and used to solve moderately sized instances, giving efficient solutions that are close to optimal in most cases.

  13. Panel discussion on: 'Will computational science be able to provide answers to important problems of human society?'

    NASA Astrophysics Data System (ADS)

    Baiotti, Luca; Takabe, Hideaki

    2013-08-01

    The PDF contains the speech of journalist Atsuko Tsuji (Asahi Shimbun) with the title 'Requests and expectations for computational science' and the record of the following discussion on: 'Will computational science be able to provide answers to important problems of human society?'

  14. Technology, attributions, and emotions in post-secondary education: An application of Weiner’s attribution theory to academic computing problems

    PubMed Central

    Hall, Nathan C.; Goetz, Thomas; Chiarella, Andrew; Rahimi, Sonia

    2018-01-01

    As technology becomes increasingly integrated with education, research on the relationships between students’ computing-related emotions and motivation following technological difficulties is critical to improving learning experiences. Following from Weiner’s (2010) attribution theory of achievement motivation, the present research examined relationships between causal attributions and emotions concerning academic computing difficulties in two studies. Study samples consisted of North American university students enrolled in both traditional and online universities (total N = 559) who responded to either hypothetical scenarios or experimental manipulations involving technological challenges experienced in academic settings. Findings from Study 1 showed stable and external attributions to be emotionally maladaptive (more helplessness, boredom, guilt), particularly in response to unexpected computing problems. Additionally, Study 2 found stable attributions for unexpected problems to predict more anxiety for traditional students, with both external and personally controllable attributions for minor problems proving emotionally beneficial for students in online degree programs (more hope, less anxiety). Overall, hypothesized negative effects of stable attributions were observed across both studies, with mixed results for personally controllable attributions and unanticipated emotional benefits of external attributions for academic computing problems warranting further study. PMID:29529039

  15. MENDEL: An Intelligent Computer Tutoring System for Genetics Problem-Solving, Conjecturing, and Understanding.

    ERIC Educational Resources Information Center

    Streibel, Michael; And Others

    1987-01-01

    Describes an advice-giving computer system being developed for genetics education called MENDEL that is based on research in learning, genetics problem solving, and expert systems. The value of MENDEL as a design tool and the tutorial function are stressed. Hypothesis testing, graphics, and experiential learning are also discussed. (Author/LRW)

  16. Enhancements of evolutionary algorithm for the complex requirements of a nurse scheduling problem

    NASA Astrophysics Data System (ADS)

    Tein, Lim Huai; Ramli, Razamin

    2014-12-01

    Nurse scheduling is a long-standing problem aggravated by the global nurse turnover crisis: the more dissatisfied nurses are with their working environment, the more likely they are to leave, and undesirable work schedules contribute to that dissatisfaction. Fundamentally, the head nurse's obligations and the individual nurses' needs are hard to reconcile. In particular, with strongly expressed nurse preferences, the challenge in nurse scheduling is to foster accommodation between both parties during shift assignment in real working scenarios. Flexibility in shift assignment is hard to achieve while satisfying diverse nurse requests and upholding mandatory ward coverage. Hence, an Evolutionary Algorithm (EA) is proposed to cater for this complexity in the nurse scheduling problem (NSP). The limitations of the standard EA are discussed, and enhancements to the EA operators are suggested to give the EA the character of a flexible search. This paper considers three types of constraints, namely hard, semi-hard, and soft constraints, which are handled by the EA with enhanced parent selection and specialized mutation operators. These operators, and the EA as a whole, contribute to efficient constraint handling and fitness computation, as well as flexibility in the search, corresponding to the principles of exploration and exploitation.

  17. Remember Hard But Think Softly: Metaphorical Effects of Hardness/Softness on Cognitive Functions.

    PubMed

    Xie, Jiushu; Lu, Zhi; Wang, Ruiming; Cai, Zhenguang G

    2016-01-01

    Previous studies have found that bodily stimulation, such as hardness, biases social judgment and evaluation via metaphorical association; however, it remains unclear whether bodily stimulation also affects cognitive functions, such as memory and creativity. The current study used the metaphorical associations between "hard" and "rigid" and between "soft" and "flexible" in Chinese to investigate whether the experience of hardness affects cognitive functions whose performance depends prospectively on rigidity (memory) and flexibility (creativity). In Experiment 1, we found that Chinese-speaking participants performed better at recalling previously memorized words while sitting on a hard-surface stool (the hard condition) than on a cushioned one (the soft condition). In Experiment 2, participants sitting on a cushioned stool outperformed those sitting on a hard-surface stool on a Chinese riddle task, which required creative/flexible thinking, but not on an analogical reasoning task, which required both rigid and flexible thinking. The results suggest that the hardness experience affects cognitive functions that are metaphorically associated with rigidity or flexibility. They support the embodiment proposition that cognitive functions and representations can be grounded in bodily states via metaphorical associations.

  18. Self-reported sleep patterns, sleep problems, and behavioral problems among school children aged 8–11 years

    PubMed Central

    Hoedlmoser, K.; Kloesch, G.; Wiater, A.; Schabus, M.

    2012-01-01

    Objectives Investigation of sleep patterns, sleep problems, and behavioral problems in 8- to 11-year-old children. Methods A total of 330 children (age: M=9.52; SD=0.56; range=8–11 years; 47.3% girls) in the 4th grade of elementary school in Salzburg (Austria) completed a self-report questionnaire (80 items) to survey sleep patterns, sleep problems, and behavioral problems. Results Children aged 8–11 years slept approximately 10 h and 13 min on school days (SD=47 min) as well as on weekends (SD=81 min); girls slept significantly longer on weekends than boys. Most common self-reported sleep problems were dryness of the mouth (26.6%), sleep onset delay (21.9%), bedtime resistance (20.3%), and restless legs (19.4%). There was a significant association between watching TV as well as playing computer games prior to sleep with frightful dreams. Daytime sleepiness indicated by difficulty waking up (33.4%) and having a hard time getting out of bed (28.5%) was also very prominent. However, children in Salzburg seemed to be less tired during school (6.6%) or when doing homework (4.8%) compared to other nationalities. Behavioral problems (e.g., emotional symptoms, hyperactivity and inattention, conduct problems, peer problems) and daytime sleepiness were both significantly associated with sleep problems: the more sleep problems reported, the worse behavioral problems and daytime sleepiness were. Moreover, we could show that sharing the bed with a pet was also related to sleep problems. Conclusions Self-reported sleep problems among 8- to 11-year-old children are very common. There is a strong relationship between sleep disorders and behavioral problems. Routine screening and diagnosis as well as treatment of sleep disorders in school children should, therefore, be established in the future. PMID:23162377

  19. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem.

    PubMed

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial feasible solution very quickly, and the GA improves that initial solution. Experimental software was developed, and a large number of computations on Solomon's benchmark instances were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA.
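
    To illustrate the SFC idea, here is a minimal Python sketch that orders customers along a Z-order (Morton) curve, a simple stand-in for the fractal space filling curve used in the paper; nearby customers on the curve tend to be nearby in the plane, giving a cheap initial route for the GA to improve.

        import random

        def z_order_key(x, y, bits=16):
            # interleave the bits of integer grid coordinates (Morton order)
            key = 0
            for b in range(bits):
                key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
            return key

        def sfc_initial_route(customers, grid=1 << 16):
            # map coordinates in [0, 1)^2 onto the grid, then sort along the curve
            return sorted(customers,
                          key=lambda p: z_order_key(int(p[0] * (grid - 1)),
                                                    int(p[1] * (grid - 1))))

        pts = [(random.random(), random.random()) for _ in range(100)]
        route = sfc_initial_route(pts)   # feasible starting tour for the GA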

  20. Theory and computer simulation of hard-core Yukawa mixtures: thermodynamical, structural and phase coexistence properties.

    PubMed

    Mkanya, Anele; Pellicane, Giuseppe; Pini, Davide; Caccamo, Carlo

    2017-09-13

    We report extensive calculations, based on the modified hypernetted chain (MHNC) theory, on the hierarchical reference theory (HRT), and on Monte Carlo simulations, of the thermodynamic, structural, and phase coexistence properties of symmetric binary hard-core Yukawa mixtures (HCYM) with attractive interactions at equal species concentration. The results are compared throughout with those available in the literature for the same systems. It turns out that the MHNC predictions for thermodynamic and structural quantities are quite accurate in comparison with the MC data. The HRT is equally accurate for thermodynamics and slightly less accurate for structure. Liquid-vapor (LV) and liquid-liquid (LL) consolute coexistence conditions as emerging from simulations are also reproduced highly satisfactorily by both the MHNC and the HRT for relatively long-ranged potentials. When the potential range is reduced, the MHNC faces problems in determining the LV binodal line; however, the LL consolute line and the critical end point (CEP) temperature and density are still satisfactorily predicted within this theory. The HRT also predicts the CEP position with good accuracy. The possibility of employing liquid-state theories of HCYM to reliably determine phase equilibria in multicomponent colloidal fluids of current technological interest is discussed.

  1. Theory and computer simulation of hard-core Yukawa mixtures: thermodynamical, structural and phase coexistence properties

    NASA Astrophysics Data System (ADS)

    Mkanya, Anele; Pellicane, Giuseppe; Pini, Davide; Caccamo, Carlo

    2017-09-01

    We report extensive calculations, based on the modified hypernetted chain (MHNC) theory, on the hierarchical reference theory (HRT), and on Monte Carlo simulations, of the thermodynamic, structural, and phase coexistence properties of symmetric binary hard-core Yukawa mixtures (HCYM) with attractive interactions at equal species concentration. The results are compared throughout with those available in the literature for the same systems. It turns out that the MHNC predictions for thermodynamic and structural quantities are quite accurate in comparison with the MC data. The HRT is equally accurate for thermodynamics and slightly less accurate for structure. Liquid-vapor (LV) and liquid-liquid (LL) consolute coexistence conditions as emerging from simulations are also reproduced highly satisfactorily by both the MHNC and the HRT for relatively long-ranged potentials. When the potential range is reduced, the MHNC faces problems in determining the LV binodal line; however, the LL consolute line and the critical end point (CEP) temperature and density are still satisfactorily predicted within this theory. The HRT also predicts the CEP position with good accuracy. The possibility of employing liquid-state theories of HCYM to reliably determine phase equilibria in multicomponent colloidal fluids of current technological interest is discussed.
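
    For reference, the pair potential underlying these systems has a standard hard-core Yukawa form, sketched below in Python together with the Metropolis acceptance rule used in such MC simulations; the parameter values are illustrative only.

        import numpy as np

        def hcy_potential(r, eps=1.0, kappa=1.8, sigma=1.0):
            # hard-core Yukawa pair potential: infinite inside the core (r < sigma),
            # attractive screened tail -eps*sigma*exp(-kappa*(r-sigma))/r outside
            r = np.asarray(r, dtype=float)
            return np.where(r < sigma, np.inf,
                            -eps * sigma * np.exp(-kappa * (r - sigma)) / r)

        def metropolis_accept(dU, T=1.0, rng=np.random.default_rng()):
            # standard Metropolis criterion; dU = np.inf (core overlap) is rejected
            return dU <= 0 or rng.random() < np.exp(-dU / T)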

  2. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed by Chambolle and Pock (CP) in 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
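
    The prototyping workflow amounts to writing the problem as min_x F(Kx) + G(x) and plugging the two proximal maps into the fixed CP iteration. A toy Python sketch for F(z) = 0.5*||z - b||^2 with a nonnegativity constraint as G, standing in for the CT data-fidelity problems derived in the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        K = rng.standard_normal((40, 20))        # stand-in for a CT system matrix
        b = K @ np.abs(rng.standard_normal(20))  # synthetic data

        L = np.linalg.norm(K, 2)                 # operator norm of K
        tau = sigma = 0.9 / L                    # step sizes with tau*sigma*L**2 < 1
        x = np.zeros(20); x_bar = x.copy(); y = np.zeros(40)

        for _ in range(500):
            # dual step: prox of sigma*F^* for F(z) = 0.5*||z - b||^2
            y = (y + sigma * (K @ x_bar) - sigma * b) / (1.0 + sigma)
            # primal step: prox of tau*G = projection onto x >= 0
            x_new = np.maximum(x - tau * (K.T @ y), 0.0)
            # over-relaxation with theta = 1
            x_bar = 2 * x_new - x
            x = x_new

        print("residual:", np.linalg.norm(K @ x - b))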

  3. Peer Support for the Hardly Reached: A Systematic Review.

    PubMed

    Sokol, Rebeccah; Fisher, Edwin

    2016-07-01

    Health disparities are aggravated when prevention and care initiatives fail to reach those they are intended to help. Groups can be classified as hardly reached according to a variety of circumstances that fall into 3 domains: individual (e.g., psychological factors), demographic (e.g., socioeconomic status), and cultural-environmental (e.g., social network). Several reports have indicated that peer support is an effective means of reaching hardly reached individuals. However, no review has explored peer support effectiveness in relation to the circumstances associated with being hardly reached or across diverse health problems. To conduct a systematic review assessing the reach and effectiveness of peer support among hardly reached individuals, as well as peer support strategies used. Three systematic searches conducted in PubMed identified studies that evaluated peer support programs among hardly reached individuals. In aggregate, the searches covered articles published from 2000 to 2015. Eligible interventions provided ongoing support for complex health behaviors, including prioritization of hardly reached populations, assistance in applying behavior change plans, and social-emotional support directed toward disease management or quality of life. Studies were excluded if they addressed temporally isolated behaviors, were limited to protocol group classes, included peer support as the dependent variable, did not include statistical tests of significance, or incorporated comparison conditions that provided appreciable social support. We abstracted data regarding the primary health topic, categorizations of hardly reached groups, program reach, outcomes, and strategies employed. We conducted a 2-sample t test to determine whether reported strategies were related to reach. Forty-seven studies met our inclusion criteria, and these studies represented each of the 3 domains of circumstances assessed (individual, demographic, and cultural-environmental). Interventions

  4. An Integrated Method Based on PSO and EDA for the Max-Cut Problem.

    PubMed

    Lin, Geng; Guan, Jian

    2016-01-01

    The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithms (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes shortcomings of both particle swarm optimization and the estimation of distribution algorithm. To enhance the performance of PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best-performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
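
    The fast local search component can be illustrated by the classic 1-flip move: move a vertex to the other side of the partition whenever doing so increases the cut. A minimal Python sketch (the actual PSO-EDA procedure is more elaborate):

        import random

        def local_search_maxcut(adj, n, iters=10000):
            # adj maps each vertex to its weighted neighbors, with every edge
            # stored in both directions: adj[u] = [(v, w), ...]
            side = [random.randint(0, 1) for _ in range(n)]
            for _ in range(iters):
                u = random.randrange(n)
                # gain of flipping u = same-side weight minus cross-cut weight
                gain = sum(w if side[v] == side[u] else -w
                           for v, w in adj.get(u, []))
                if gain > 0:
                    side[u] ^= 1
            cut = sum(w for u in adj for v, w in adj[u]
                      if u < v and side[u] != side[v])
            return side, cut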

  5. Laser-induced autofluorescence of oral cavity hard tissues

    NASA Astrophysics Data System (ADS)

    Borisova, E. G.; Uzunov, Tz. T.; Avramov, L. A.

    2007-03-01

    In current study oral cavity hard tissues autofluorescence was investigated to obtain more complete picture of their optical properties. As an excitation source nitrogen laser with parameters - 337,1 nm, 14 μJ, 10 Hz (ILGI-503, Russia) was used. In vitro spectra from enamel, dentine, cartilage, spongiosa and cortical part of the periodontal bones were registered using a fiber-optic microspectrometer (PC2000, "Ocean Optics" Inc., USA). Gingival fluorescence was also obtained for comparison of its spectral properties with that of hard oral tissues. Samples are characterized with significant differences of fluorescence properties one to another. It is clearly observed signal from different collagen types and collagen-cross links with maxima at 385, 430 and 480-490 nm. In dentine are observed only two maxima at 440 and 480 nm, related also to collagen structures. In samples of gingival and spongiosa were observed traces of hemoglobin - by its re-absorption at 545 and 575 nm, which distort the fluorescence spectra detected from these anatomic sites. Results, obtained in this study are foreseen to be used for development of algorithms for diagnosis and differentiation of teeth lesions and other problems of oral cavity hard tissues as periodontitis and gingivitis.

  6. On Computing Breakpoint Distances for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Moret, Bernard M E

    2017-06-01

    A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of its higher level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first one to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided last year a solution for the exemplar problem that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer-linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignment, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
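
    For genomes without duplicate genes the breakpoint distance is trivial to compute, which is part of its appeal; the hardness enters only through the choice of a homolog matching. A minimal Python sketch for unsigned, duplication-free genomes:

        def breakpoint_distance(g1, g2):
            # number of adjacencies of g1 not conserved in g2, for unsigned,
            # duplication-free genomes given as gene lists; with duplicate
            # genes a matching must be chosen first (the NP-hard part)
            adj = lambda g: {frozenset(p) for p in zip(g, g[1:])}
            return len(adj(g1) - adj(g2))

        print(breakpoint_distance([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # -> 2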

  7. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
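
    The two levers studied here, initialization and restart, fit a generic stochastic local search skeleton like the Python sketch below; the caller-supplied init() plays the role of the Viterbi-based initializer, and the noise probability and step counts are arbitrary choices.

        import random

        def stochastic_local_search(score, neighbors, init,
                                    restarts=20, steps=500, noise=0.1):
            # generic SLS: informed initialization plus periodic restarts
            best, best_s = None, float("-inf")
            for _ in range(restarts):
                x = init()
                for _ in range(steps):
                    cand = neighbors(x)
                    # mostly greedy steps, with occasional random ("noise")
                    # moves to escape local optima
                    x_next = (random.choice(cand) if random.random() < noise
                              else max(cand, key=score))
                    if score(x_next) >= score(x):
                        x = x_next
                if score(x) > best_s:
                    best, best_s = x, score(x)
            return best, best_s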

  8. Toward an alternative hardness kernel matrix structure in the Electronegativity Equalization Method (EEM).

    PubMed

    Chaves, J; Barroso, J M; Bultinck, P; Carbó-Dorca, R

    2006-01-01

    This study presents an alternative to the Electronegativity Equalization Method (EEM) in which the usual Coulomb kernel is transformed into a smooth function. The new framework, like the classical EEM, permits fast calculation of the atomic charges in a given molecule at small computational cost. The original EEM procedure requires prior calibration of the implied atomic hardnesses and electronegativities using a chosen set of molecules; in the new EEM algorithm only half as many parameters need to be calibrated, since a relationship between electronegativities and hardnesses has been found.
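
    In both the classical and the smoothed variants, the working step is a single linear solve for the charges and the equalized electronegativity. A Python sketch of the classical system, with the plain 1/r Coulomb kernel that the paper replaces by a smooth function (consistent units are left to the caller):

        import numpy as np

        def eem_charges(chi, eta, R, Q_total=0.0):
            # Electronegativity Equalization: solve for charges q and the
            # equalized electronegativity chi_bar from
            #   chi_i + 2*eta_i*q_i + sum_{j != i} q_j / r_ij = chi_bar
            #   sum_i q_i = Q_total
            n = len(chi)
            r = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
            A = np.zeros((n + 1, n + 1))
            with np.errstate(divide="ignore"):
                A[:n, :n] = np.where(np.eye(n, dtype=bool),
                                     2 * np.diag(eta), 1.0 / r)
            A[:n, n] = -1.0          # column multiplying -chi_bar
            A[n, :n] = 1.0           # total-charge constraint row
            b = np.concatenate([-np.asarray(chi), [Q_total]])
            sol = np.linalg.solve(A, b)
            return sol[:n], sol[n]   # charges, equalized electronegativity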

  9. Computationally efficient algorithm for Gaussian Process regression in case of structured samples

    NASA Astrophysics Data System (ADS)

    Belyaev, M.; Burnaev, E.; Kapushev, Y.

    2016-04-01

    Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, a factorial design of experiments with missing points), and in such cases the data set can be very large. Therefore, one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper, a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and by operating with tensors. The proposed algorithm has low computational and memory complexity compared to existing algorithms. We also introduce a regularization procedure that takes into account the anisotropy of the data set and avoids degeneracy of the regression model.
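
    The tensor trick can be made concrete for a 2-D grid: the full kernel matrix is a Kronecker product of small per-axis kernels, so the cubic-cost solve reduces to per-axis eigendecompositions. A minimal numpy sketch of this general grid idea (not the authors' exact algorithm, and without their missing-point or anisotropy handling):

        import numpy as np

        def rbf(x, y, ell):
            return np.exp(-0.5 * (x[:, None] - y[None, :])**2 / ell**2)

        # assumed 2-D Cartesian product design with noisy observations Y
        x1 = np.linspace(0, 1, 50)
        x2 = np.linspace(0, 1, 60)
        X1, X2 = np.meshgrid(x1, x2, indexing="ij")
        Y = np.sin(3 * X1) * np.cos(2 * X2) + 0.01 * np.random.randn(*X1.shape)

        K1, K2 = rbf(x1, x1, 0.2), rbf(x2, x2, 0.2)
        sigma2 = 1e-2

        # eigendecompose each small per-axis kernel, never the full Kronecker product
        L1, Q1 = np.linalg.eigh(K1)
        L2, Q2 = np.linalg.eigh(K2)

        # solve (K1 (x) K2 + sigma2*I) alpha = y, using the identity
        # kron(A, B) @ vec(Y) == vec(A @ Y @ B.T) for row-major vec
        A = Q1.T @ Y @ Q2
        Alpha = Q1 @ (A / (np.outer(L1, L2) + sigma2)) @ Q2.T

        # posterior mean back on the same grid
        mean = K1 @ Alpha @ K2.T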

  10. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color...

  11. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color...

  12. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color...

  13. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color...

  14. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color...

  15. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 77.1710-1 Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  16. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 77.1710-1 Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  17. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 77.1710-1 Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  18. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 77.1710-1 Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  19. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 77.1710-1 Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners. Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at...

  20. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    PubMed

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtimes. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
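
    The classical half of Simon's algorithm is linear algebra over GF(2): every measured bit string y satisfies y . s = 0 (mod 2), so the hidden period s is recovered from the null space of the measurement matrix. A small Python sketch, with a made-up measurement set for the hidden period s = 110:

        import numpy as np

        def recover_period(ys):
            # Gaussian elimination over GF(2), then read off one nonzero
            # null-space vector: the candidate hidden period s
            A = np.array(ys, dtype=np.uint8) % 2
            m, n = A.shape
            pivots, row = [], 0
            for col in range(n):
                pr = next((r for r in range(row, m) if A[r, col]), None)
                if pr is None:
                    continue
                A[[row, pr]] = A[[pr, row]]
                for r in range(m):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]          # XOR row reduction mod 2
                pivots.append((row, col)); row += 1
            free = [c for c in range(n) if c not in {c for _, c in pivots}]
            s = np.zeros(n, dtype=np.uint8)
            s[free[0]] = 1                       # set one free variable to 1
            for r, c in pivots:
                s[c] = A[r, free[0]]
            return s

        # measurement outcomes from the quantum subroutine would go here
        print(recover_period([[0, 0, 1], [1, 1, 0], [1, 1, 1]]))  # -> [1 1 0]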

  1. Branching points in the low-temperature dipolar hard sphere fluid

    NASA Astrophysics Data System (ADS)

    Rovigatti, Lorenzo; Kantorovich, Sofia; Ivanov, Alexey O.; Tavares, José Maria; Sciortino, Francesco

    2013-10-01

    In this contribution, we investigate the low-temperature, low-density behaviour of dipolar hard-sphere (DHS) particles, i.e., hard spheres with dipoles embedded in their centre. We aim at describing the DHS fluid in terms of a network of chains and rings (the fundamental clusters) held together by branching points (defects) of different nature. We first introduce a systematic way of classifying inter-cluster connections according to their topology, and then employ this classification to analyse the geometric and thermodynamic properties of each class of defects, as extracted from state-of-the-art equilibrium Monte Carlo simulations. By computing the average density and energetic cost of each defect class, we find that the relevant contribution to inter-cluster interactions is indeed provided by (rare) three-way junctions and by four-way junctions arising from parallel or anti-parallel locally linear aggregates. All other (numerous) defects are either intra-cluster or associated to low cluster-cluster interaction energies, suggesting that these defects do not play a significant part in the thermodynamic description of the self-assembly processes of dipolar hard spheres.

  2. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
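    As a small illustration of the reachability technique mentioned last, here is a generic breadth-first reachability sketch (Python). The toy "aircraft@slot" state names are hypothetical, not the paper's arrival-scheduling formulation.

        from collections import deque

        def reachable(adj, source):
            # Standard BFS: return the set of nodes reachable from `source`
            # in a directed graph given as an adjacency dict.
            seen = {source}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj.get(u, ()):
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            return seen

        # Hypothetical toy instance: nodes are (aircraft, slot) states,
        # edges are feasible transitions between them.
        adj = {"A1@t0": ["A1@t1"], "A1@t1": ["A1@t2"], "A2@t0": ["A2@t2"]}
        print(reachable(adj, "A1@t0"))  # {'A1@t0', 'A1@t1', 'A1@t2'}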

  3. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.

  4. Online Reading Practices of Students Who Are Deaf/Hard of Hearing

    ERIC Educational Resources Information Center

    Donne, Vicki; Rugg, Natalie

    2015-01-01

    This study sought to investigate reading perceptions, computer use perceptions, and online reading comprehension strategy use of 26 students who are deaf/hard of hearing in grades 4 through 8 attending public school districts in a tri-state area of the U.S. Students completed an online questionnaire and descriptive analysis indicated that students…

  5. Computational strategies to address chromatin structure problems

    NASA Astrophysics Data System (ADS)

    Perišić, Ognjen; Schlick, Tamar

    2016-06-01

    While the genetic information is contained in double-helical DNA, gene expression is a complex multilevel process that involves various functional units, from nucleosomes to fully formed chromatin fibers accompanied by a host of various chromatin-binding enzymes. The chromatin fiber is a polymer composed of histone protein complexes upon which DNA wraps, like yarn upon many spools. The nature of chromatin structure has been an open question since the beginning of modern molecular biology. Many experiments have shown that the chromatin fiber is a highly dynamic entity with pronounced structural diversity that includes properties of idealized zig-zag and solenoid models, as well as other motifs. This diversity can produce a high packing ratio and thus inhibit access to a majority of the wound DNA. Despite much research, chromatin’s dynamic structure has not yet been fully described. Long stretches of chromatin fibers exhibit puzzling dynamic behavior that requires interpretation in the light of gene expression patterns in various tissues and organisms. The properties of the chromatin fiber can be investigated with experimental techniques, like in vitro biochemistry, in vivo imaging, and high-throughput chromosome conformation capture technology. Those techniques provide useful insights into the fiber’s structure and dynamics, but they are limited in resolution and scope, especially regarding compact fibers and chromosomes in the cellular milieu. Complementary but specialized modeling techniques are needed to handle large floppy polymers such as the chromatin fiber. In this review, we discuss current approaches in the chromatin structure field with an emphasis on modeling, such as molecular dynamics and coarse-grained computational approaches. Combinations of these computational techniques complement experiments and address many relevant biological problems, as we will illustrate with special focus on epigenetic modulation of chromatin structure.

  6. Rocks and Other Hard Places: Tracing Ethical Thinking in Korean and English Dialog

    ERIC Educational Resources Information Center

    Kim, Yong-Ho; Kellogg, David

    2015-01-01

    Researchers into moral education, and ethics educators too, often find themselves between a rock and a hard place. On the one hand, we wish to know what the child will do beyond the narrow range of communicative functions carried out in a classroom, and to do this, we employ purely hypothetical problems, that is, problems that from the child's…

  7. Stability of Solutions to Classes of Traveling Salesman Problems.

    PubMed

    Niendorf, Moritz; Kabamba, Pierre T; Girard, Anouck R

    2016-04-01

    By performing stability analysis on an optimal tour for problems belonging to classes of the traveling salesman problem (TSP), this paper derives margins of optimality for a solution with respect to disturbances in the problem data. Specifically, we consider the asymmetric sequence-dependent TSP, where the sequence dependence is driven by the dynamics of a stack. This is a generalization of the symmetric, non-sequence-dependent version of the TSP. Furthermore, we also consider the symmetric sequence-dependent variant and the asymmetric non-sequence-dependent variant. Among other areas, these problems have applications in logistics and unmanned aircraft mission planning. Changing external conditions such as traffic or weather may alter task costs, which can render an initially optimal itinerary suboptimal. Instead of optimizing the itinerary every time task costs change, stability criteria allow for fast evaluation of whether itineraries remain optimal. This paper develops a method to compute stability regions for the best tour in a set of tours for the symmetric TSP and extends the results to the asymmetric problem as well as their sequence-dependent counterparts. As the TSP is NP-hard, heuristic methods are frequently used to solve it. The presented approach is also applicable to analyze stability regions for a tour obtained through application of the k-opt heuristic with respect to the k-neighborhood. A dimensionless criticality metric for edges is proposed, such that a high criticality of an edge indicates that the optimal tour is more susceptible to cost changes in that edge. Multiple examples demonstrate the application of the developed stability computation method as well as the edge criticality measure that facilitates an intuitive assessment of instances of the TSP.
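    Since the stability results also cover tours produced by the k-opt heuristic, a minimal 2-opt local search may help fix ideas (Python; the cost matrix and starting tour are illustrative, and tour costs are recomputed naively for brevity):

        def tour_cost(tour, c):
            return sum(c[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

        def two_opt(tour, c):
            # Repeatedly reverse a segment whenever doing so lowers the tour cost.
            improved = True
            while improved:
                improved = False
                n = len(tour)
                for i in range(n - 1):
                    for j in range(i + 2, n if i > 0 else n - 1):
                        new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                        if tour_cost(new, c) < tour_cost(tour, c):
                            tour, improved = new, True
            return tour

        c = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
        print(two_opt([0, 2, 1, 3], c))  # [0, 1, 2, 3], cost 21 instead of 29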

  8. Problem Manual for Instructional Decision-Making Module; Volume 2: A Computer Assisted Instruction Module. Document Number 4.

    ERIC Educational Resources Information Center

    Bessent, E. Wailand; And Others

    Provided in the manual are background material, problems, and worksheets designed for graduate students involved in a computer assisted instruction (CAI) approach to supervisor training. Included are a faculty handbook for a simulated school in a mythical community, a practice problem to familiarize the student with terminal operation, and eight…

  9. Know Your Discipline: Teaching the Philosophy of Computer Science

    ERIC Educational Resources Information Center

    Tedre, Matti

    2007-01-01

    The diversity and interdisciplinarity of computer science and the multiplicity of its uses in other sciences make it hard to define computer science and to prescribe how computer science should be carried out. The diversity of computer science also causes friction between computer scientists from different branches. Computer science curricula, as…

  10. Collision safety of a hard-shell low-mass vehicle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaeser, R.; Walz, F.H.; Brunner, A.

    1994-06-01

    Low-mass vehicles and in particular low-mass electric vehicles as produced today in very small quantities are in general not designed for crashworthiness in collisions. Particular problems of compact low-mass cars are: reduced length of the car front, low mass compared to other vehicles, and heavy batteries in the case of an electric car. With the intention of studying design improvements, three frontal crash tests were run last year: the first one with a commercial, lightweight electric car; the second with a reinforced version of the same car; and the last one with a car based on a different structural design with a "hard-shell" car body. Crash tests showed that the latter solution made better use of the small zone available for continuous energy absorption. The paper discusses further the problem of frontal collisions between vehicles of different weight and, in particular, the side collision. A side-collision test was run with the hard-shell vehicle following the ECE lateral-impact test procedure at 50 km/h and led to results for the EuroSID1 dummy well below current injury tolerance criteria.

  11. Collision safety of a hard-shell low-mass vehicle.

    PubMed

    Kaeser, R; Walz, F H; Brunner, A

    1994-06-01

    Low-mass vehicles and in particular low-mass electric vehicles as produced today in very small quantities are in general not designed for crashworthiness in collisions. Particular problems of compact low-mass cars are: reduced length of the car front, low mass compared to other vehicles, and heavy batteries in the case of an electric car. With the intention of studying design improvements, three frontal crash tests were run last year: the first one with a commercial, lightweight electric car; the second with a reinforced version of the same car; and the last one with a car based on a different structural design with a "hard-shell" car body. Crash tests showed that the latter solution made better use of the small zone available for continuous energy absorption. The paper discusses further the problem of frontal collisions between vehicles of different weight and, in particular, the side collision. A side-collision test was run with the hard-shell vehicle following the ECE lateral-impact test procedure at 50 km/h and led to results for the EuroSID1-dummy well below current injury tolerance criteria.

  12. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    PubMed

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.

  13. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    PubMed Central

    2014-01-01

    The portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome its lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results. PMID:24991645
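    For orientation, here is the standard firefly update on a toy continuous objective (Python). This is the generic FA, not the authors' modified, constraint-handling variant for the CCMV model, and all parameter values are illustrative.

        import math, random

        def firefly_minimize(f, dim, n=15, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
            # Dimmer flies move toward brighter (lower-objective) ones, with
            # attractiveness decaying in distance, plus a small random step.
            xs = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if f(xs[j]) < f(xs[i]):  # j is brighter, so i moves toward j
                            r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                            beta = beta0 * math.exp(-gamma * r2)
                            xs[i] = [a + beta * (b - a) + alpha * random.uniform(-0.5, 0.5)
                                     for a, b in zip(xs[i], xs[j])]
            return min(xs, key=f)

        print(firefly_minimize(lambda x: sum(v * v for v in x), dim=2))  # near [0, 0]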

  14. Sociocultural Aspects of Computers in Education.

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    The data reported in this paper gives depth to the picture of computers in society, in work, and in schools. The prices have dropped but computer corporations sell to schools, as they do to any other customer, to increase profits for themselves. Computerizing is a vehicle for social stratification. Computers are not easy to use and are hard to…

  15. What Can Quantum Optics Say about Computational Complexity Theory?

    NASA Astrophysics Data System (ADS)

    Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.

    2015-02-01

    Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from the point of view of both quantum theory and computational complexity theory. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
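    The permanents at the heart of this connection can be computed exactly, though only in exponential time; a compact sketch of Ryser's formula (Python):

        from itertools import combinations

        def permanent(a):
            # Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S
            # of (-1)^|S| * prod_i sum_{j in S} a[i][j]; O(2^n * n^2) time.
            n = len(a)
            total = 0
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    prod = 1
                    for i in range(n):
                        prod *= sum(a[i][j] for j in cols)
                    total += (-1) ** k * prod
            return (-1) ** n * total

        print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10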

  16. Stability Analysis of Finite Difference Schemes for Hyperbolic Systems, and Problems in Applied and Computational Linear Algebra.

    DTIC Science & Technology

    FINITE DIFFERENCE THEORY, *LINEAR ALGEBRA, APPLIED MATHEMATICS, APPROXIMATION (MATHEMATICS), BOUNDARY VALUE PROBLEMS, COMPUTATIONS, HYPERBOLAS, MATHEMATICAL MODELS, NUMERICAL ANALYSIS, PARTIAL DIFFERENTIAL EQUATIONS, STABILITY.

  17. Real-time solution of linear computational problems using databases of parametric reduced-order models with arbitrary underlying meshes

    NASA Astrophysics Data System (ADS)

    Amsallem, David; Tezaur, Radek; Farhat, Charbel

    2016-12-01

    A comprehensive approach for real-time computations using a database of parametric, linear, projection-based reduced-order models (ROMs) based on arbitrary underlying meshes is proposed. In the offline phase of this approach, the parameter space is sampled and linear ROMs defined by linear reduced operators are pre-computed at the sampled parameter points and stored. Then, these operators and associated ROMs are transformed into counterparts that satisfy a certain notion of consistency. In the online phase of this approach, a linear ROM is constructed in real-time at a queried but unsampled parameter point by interpolating the pre-computed linear reduced operators on matrix manifolds and therefore computing an interpolated linear ROM. The proposed overall model reduction framework is illustrated with two applications: a parametric inverse acoustic scattering problem associated with a mockup submarine, and a parametric flutter prediction problem associated with a wing-tank system. The second application is implemented on a mobile device, illustrating the capability of the proposed computational framework to operate in real-time.
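    A heavily simplified picture of the online step (Python): interpolate precomputed reduced operators at a queried parameter. The paper interpolates on matrix manifolds after enforcing a consistency transformation; direct entrywise interpolation between two sampled operators, as below, is only a toy stand-in.

        import numpy as np

        def interpolate_rom(mu, mu1, A1, mu2, A2):
            # Linear interpolation of two reduced operators at parameter mu.
            w = (mu - mu1) / (mu2 - mu1)
            return (1 - w) * A1 + w * A2

        A1 = np.array([[2.0, 0.0], [0.0, 1.0]])  # reduced operator sampled at mu = 0
        A2 = np.array([[4.0, 0.0], [0.0, 3.0]])  # reduced operator sampled at mu = 1
        print(interpolate_rom(0.5, 0.0, A1, 1.0, A2))  # [[3. 0.] [0. 2.]]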

  18. Hybrid annealing: Coupling a quantum simulator to a classical computer

    NASA Astrophysics Data System (ADS)

    Graß, Tobias; Lewenstein, Maciej

    2017-05-01

    Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Annealing strategies, either classical or quantum, explore the configuration space by evolving the system under the influence of thermal or quantum fluctuations. The thermal annealing dynamics can rapidly freeze the system into a low-energy configuration, and it can be simulated well on a classical computer, but it easily gets stuck in local minima. Quantum annealing, on the other hand, can be guaranteed to find the true ground state and can be implemented in modern quantum simulators; however, quantum adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here, we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such a hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configuration and enforces a lowering of the energy. We have simulated this algorithm for small instances of the random energy model, showing that it potentially outperforms both simulated thermal annealing and adiabatic quantum annealing. It becomes most efficient for problems involving many quasidegenerate ground states.
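    A purely classical toy version of the proposed loop (Python): random bit flips stand in for the quantum simulator's fluctuation-plus-projective-measurement step, and the classical side keeps a configuration only if the energy does not rise. Model size and parameters are illustrative.

        import random

        def random_energy_model(n, seed=0):
            # Toy random energy model: an independent Gaussian energy per configuration.
            rng = random.Random(seed)
            return {c: rng.gauss(0, 1) for c in range(2 ** n)}

        def hybrid_anneal(E, n, sweeps=2000, flips=2):
            c = random.randrange(2 ** n)
            for _ in range(sweeps):
                proposal = c
                for _ in range(flips):           # "quantum" proposal: flip random bits
                    proposal ^= 1 << random.randrange(n)
                if E[proposal] <= E[c]:          # classical post-selection on energy
                    c = proposal
            return c, E[c]

        E = random_energy_model(n=10)
        print(hybrid_anneal(E, n=10), min(E.values()))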

  19. What's So Hard about Understanding Language?

    ERIC Educational Resources Information Center

    Read, Walter; And Others

    A discussion of the application of artificial intelligence to natural language processing looks at several problems in language comprehension, involving semantic ambiguity, anaphoric reference, and metonymy. Examples of these problems are cited, and the importance of the computational approach in analyzing them is explained. The approach applies…

  20. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

    PubMed Central

    Fonseca Guerra, Gabriel A.; Furber, Steve B.

    2017-01-01

    Constraint satisfaction problems (CSPs) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart. PMID:29311791
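    A software analogue of such a noisy stochastic solver, on a toy graph-coloring CSP (Python). This mimics only the search dynamics described above, not the SpiNNaker SNN implementation; the graph and parameters are illustrative.

        import random

        def noisy_color_search(edges, n_nodes, k=3, steps=10000, noise=0.1):
            # Repeatedly recolor a conflicted node; occasional noise-driven jumps
            # help the search escape local minima, as in the noisy-SNN solver.
            color = [random.randrange(k) for _ in range(n_nodes)]
            for _ in range(steps):
                conflicts = [(u, v) for u, v in edges if color[u] == color[v]]
                if not conflicts:
                    return color  # all constraints satisfied
                node = random.choice(random.choice(conflicts))
                if random.random() < noise:
                    color[node] = random.randrange(k)  # restart-like random jump
                else:
                    def cost(c):
                        return sum(1 for a, b in edges
                                   if (a == node and color[b] == c)
                                   or (b == node and color[a] == c))
                    color[node] = min(range(k), key=cost)
            return None

        edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # a triangle plus a pendant vertex
        print(noisy_color_search(edges, 4))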

  1. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  2. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  3. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  4. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  5. Dynamic hardness of metals

    NASA Astrophysics Data System (ADS)

    Liang, Xuecheng

    The dynamic hardness (P_d) of 22 different pure metals and alloys spanning a wide range of elastic modulus, static hardness, and crystal structure was measured in a gas pulse system. The indentation contact diameter with an indenting sphere and the radius of curvature (r_2) of the indentation were determined by curve fitting of the indentation profile data. The r_2 measured by the profilometer was compared with that calculated from the Hertz equation under both dynamic and static conditions. The results indicated that the curvature change due to elastic recovery after unloading is approximately proportional to the parameters predicted by the Hertz equation. However, r_2 is less than the radius of the indenting sphere in many cases, which contradicts the Hertz analysis. This discrepancy is believed to be due to the difference between the Hertzian and actual stress distributions underneath the indentation. Factors that influence indentation elastic recovery were also discussed. It was found that Tabor's dynamic hardness formula always gives a lower value than the direct definition of dynamic hardness, ΔE/V, because of errors mainly from Tabor's rebound equation and the assumption, made in deriving Tabor's formula, that the dynamic hardness at the beginning of the rebound process (P_r) is equal to the kinetic energy change of an impacting sphere over the formed crater volume (P_d). Experimental results also suggested that the dynamic-to-static hardness ratio of a material is primarily determined by its crystal structure and static hardness. The effects of strain rate and temperature rise on this ratio were discussed. A vacuum rotating-arm apparatus was built to measure P_d at 70, 127, and 381 μm sphere sizes; these results showed that P_d depends strongly on the sphere size due to strain-rate effects. P_d was also used in place of static hardness to correlate with the abrasion and erosion resistance of metals and alloys. The particle size effects observed in erosion were…
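    A worked example of the defining relation P_d = ΔE/V (Python). All numbers are illustrative rather than data from the study, and the crater is approximated as a spherical cap of depth h left by a sphere of radius R.

        import math

        mass = 2.3e-7              # kg, impacting sphere (illustrative)
        v_in, v_out = 100.0, 40.0  # m/s, impact and rebound speeds
        delta_E = 0.5 * mass * (v_in**2 - v_out**2)   # kinetic energy absorbed, J
        R, h = 190.5e-6, 30e-6     # m, sphere radius (381 um sphere) and crater depth
        V = math.pi * h**2 * (3 * R - h) / 3          # spherical-cap crater volume, m^3
        P_d = delta_E / V                             # dynamic hardness, Pa
        print(round(P_d / 1e9, 2), "GPa")             # ~1.89 GPa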

  6. Probing for quantum speedup in spin-glass problems with planted solutions

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Job, Joshua; Albash, Tameem; Rønnow, Troels F.; Troyer, Matthias; Lidar, Daniel A.

    2015-10-01

    The availability of quantum annealing devices with hundreds of qubits has made the experimental demonstration of a quantum speedup for optimization problems a coveted, albeit elusive goal. Going beyond earlier studies of random Ising problems, here we introduce a method to construct a set of frustrated Ising-model optimization problems with tunable hardness. We study the performance of a D-Wave Two device (DW2) with up to 503 qubits on these problems and compare it to a suite of classical algorithms, including a highly optimized algorithm designed to compete directly with the DW2. The problems are generated around predetermined ground-state configurations, called planted solutions, which makes them particularly suitable for benchmarking purposes. The problem set exhibits properties familiar from constraint satisfaction (SAT) problems, such as a peak in the typical hardness of the problems, determined by a tunable clause density parameter. We bound the hardness regime where the DW2 device either does not or might exhibit a quantum speedup for our problem set. While we do not find evidence for a speedup for the hardest and most frustrated problems in our problem set, we cannot rule out that a speedup might exist for some of the easier, less frustrated problems. Our empirical findings pertain to the specific D-Wave processor and problem set we studied and leave open the possibility that future processors might exhibit a quantum speedup on the same problem set.

  7. Communication: From close-packed to topologically close-packed: Formation of Laves phases in moderately polydisperse hard-sphere mixtures

    NASA Astrophysics Data System (ADS)

    Lindquist, Beth A.; Jadrich, Ryan B.; Truskett, Thomas M.

    2018-05-01

    Particle size polydispersity can help to inhibit crystallization of the hard-sphere fluid into close-packed structures at high packing fractions and thus is often employed to create model glass-forming systems. Nonetheless, it is known that hard-sphere mixtures with modest polydispersity still have ordered ground states. Here, we demonstrate by computer simulation that hard-sphere mixtures with increased polydispersity fractionate on the basis of particle size and a bimodal subpopulation favors the formation of topologically close-packed C14 and C15 Laves phases in coexistence with a disordered phase. The generality of this result is supported by simulations of hard-sphere mixtures with particle-size distributions of four different forms.

  8. Hard-on-hard lubrication in the artificial hip under dynamic loading conditions.

    PubMed

    Sonntag, Robert; Reinders, Jörn; Rieger, Johannes S; Heitzmann, Daniel W W; Kretzer, J Philippe

    2013-01-01

    The tribological performance of an artificial hip joint has a particularly strong influence on its success. The principal causes of failure are adverse short- and long-term reactions to wear debris and high frictional torque in the case of poor lubrication, which may cause loosening of the implant. Therefore, models have been developed, using experimental and theoretical approaches, to evaluate lubrication under standardized conditions. A steady-state numerical model has been extended with dynamic experimental data for hard-on-hard bearings used in total hip replacements to verify the tribological relevance of the ISO 14242-1 gait cycle in comparison to experimental data from the Orthoload database and instrumented gait analysis for three additional loading conditions: normal walking, climbing stairs, and descending stairs. Ceramic-on-ceramic bearing partners show superior lubrication potential compared to hard-on-hard bearings that work with at least one articulating metal component. Lubrication regimes during the investigated activities are shown to depend strongly on the kinematics and loading conditions. The outcome from the ISO gait is not fully confirmed by the normal walking data, and more challenging conditions show evidence of inferior lubrication. These findings may help to explain the differences between the in vitro predictions using the ISO gait cycle and the clinical outcome of some hard-on-hard bearings, e.g., those using metal-on-metal.

  9. Hard-on-Hard Lubrication in the Artificial Hip under Dynamic Loading Conditions

    PubMed Central

    Sonntag, Robert; Reinders, Jörn; Rieger, Johannes S.; Heitzmann, Daniel W. W.; Kretzer, J. Philippe

    2013-01-01

    The tribological performance of an artificial hip joint has a particularly strong influence on its success. The principal causes of failure are adverse short- and long-term reactions to wear debris and high frictional torque in the case of poor lubrication, which may cause loosening of the implant. Therefore, models have been developed, using experimental and theoretical approaches, to evaluate lubrication under standardized conditions. A steady-state numerical model has been extended with dynamic experimental data for hard-on-hard bearings used in total hip replacements to verify the tribological relevance of the ISO 14242-1 gait cycle in comparison to experimental data from the Orthoload database and instrumented gait analysis for three additional loading conditions: normal walking, climbing stairs, and descending stairs. Ceramic-on-ceramic bearing partners show superior lubrication potential compared to hard-on-hard bearings that work with at least one articulating metal component. Lubrication regimes during the investigated activities are shown to depend strongly on the kinematics and loading conditions. The outcome from the ISO gait is not fully confirmed by the normal walking data, and more challenging conditions show evidence of inferior lubrication. These findings may help to explain the differences between the in vitro predictions using the ISO gait cycle and the clinical outcome of some hard-on-hard bearings, e.g., those using metal-on-metal. PMID:23940772

  10. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Andrs, David; Martineau, Richard Charles

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow, with an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next. A Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration is also presented. The multi-fluid formulation is being developed. Although multi-fluid is not fully developed, BIGHORN has been designed to handle multi-fluid problems. Due to the flexibility in the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite of problems is to provide baseline comparison data that demonstrates the performance of the BIGHORN solution methods on problems that vary in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and suggest best practices when using BIGHORN.

  11. Improved Fractal Space Filling Curves Hybrid Optimization Algorithm for Vehicle Routing Problem

    PubMed Central

    Yue, Yi-xiang; Zhang, Tong; Yue, Qun-xing

    2015-01-01

    The Vehicle Routing Problem (VRP) is one of the key issues in the optimization of modern logistics systems. In this paper, a modified VRP model with hard time windows is established, and a Hybrid Optimization Algorithm (HOA) based on the Fractal Space Filling Curves (SFC) method and a Genetic Algorithm (GA) is introduced. In the proposed algorithm, the SFC method finds an initial, feasible solution very fast, and the GA is used to improve it. Thereafter, experimental software was developed and a large number of experimental computations on Solomon's benchmark instances were studied. The experimental results demonstrate the feasibility and effectiveness of the HOA. PMID:26167171
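    To illustrate the space-filling-curve step, here is a Z-order (Morton) curve used to build a fast initial route (Python). The Z-order curve is a stand-in for the paper's fractal SFC, and the coordinates and scaling are illustrative.

        def morton_key(x, y, bits=16):
            # Interleave the bits of the integer coordinates (Z-order curve).
            key = 0
            for i in range(bits):
                key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
            return key

        def sfc_initial_route(customers, scale=1000):
            # Fast initial tour: visit customers in space-filling-curve order,
            # so spatially close customers end up adjacent in the route.
            return sorted(customers,
                          key=lambda p: morton_key(int(p[0] * scale), int(p[1] * scale)))

        customers = [(0.9, 0.1), (0.1, 0.2), (0.15, 0.25), (0.85, 0.15)]
        print(sfc_initial_route(customers))  # nearby customers become adjacent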

  12. Analysis, approximation, and computation of a coupled solid/fluid temperature control problem

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Lee, Hyung C.

    1993-01-01

    An optimization problem is formulated motivated by the desire to remove temperature peaks, i.e., 'hot spots', along the bounding surfaces of containers of fluid flows. The heat equation of the solid container is coupled to the energy equations for the fluid. Heat sources can be located in the solid body, the fluid, or both. Control is effected by adjustments to the temperature of the fluid at the inflow boundary. Both mathematical analyses and computational experiments are given.

  13. Brief exposure to a self-paced computer-based reading programme and how it impacts reading ability and behaviour problems.

    PubMed

    Hughes, J Antony; Phillips, Gordon; Reed, Phil

    2013-01-01

    Basic literacy skills underlie much future adult functioning and are targeted in children through a variety of means. Children with reading problems were exposed either to a self-paced computer programme that focused on improving phonetic ability or to a classroom-based reading intervention. Exposure was limited to three 40-minute sessions a week, for six weeks. The children were assessed in terms of their reading, spelling, and mathematics abilities, as well as for their externalising and internalising behaviour problems, before the programme commenced and immediately after it terminated. Relative to the control group, the computer programme improved reading by about seven months in boys (but not in girls) but had no impact on either spelling or mathematics. Children on the programme also demonstrated fewer externalising and internalising behaviour problems than the control group. The results suggest that brief exposure to a self-paced phonetic computer-teaching programme had some benefits for the sample.

  14. Survey: Computer Usage in Design Courses.

    ERIC Educational Resources Information Center

    Henley, Ernest J.

    1983-01-01

    Presents results of a survey of chemical engineering departments regarding computer usage in senior design courses. Results are categorized according to: computer usage (use of process simulators, student-written programs, and faculty-written or "canned" programs); costs (hard and soft money); and available software. Programs offered are…

  15. RESEARCH STRATEGIES FOR THE APPLICATION OF THE TECHNIQUES OF COMPUTATIONAL BIOLOGICAL CHEMISTRY TO ENVIRONMENTAL PROBLEMS

    EPA Science Inventory

    On October 25 and 26, 1984, the U.S. EPA sponsored a workshop to consider the potential applications of the techniques of computational biological chemistry to problems in environmental health. Eleven extramural scientists from the various related disciplines and a similar number...

  16. Troubleshooting Computer Problems--a Teachers' Guide.

    ERIC Educational Resources Information Center

    Zeitz, Leigh

    1995-01-01

    Presents a troubleshooting flow chart for teachers and others to use when trying to figure out why their computers do not work correctly. Written mainly for Macintosh computers, the purpose of this guide is to save school technology coordinators time and to help educate teachers. (Author/LRW)

  17. Control aspects of quantum computing using pure and mixed states.

    PubMed

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J

    2012-10-13

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems.

  18. Control aspects of quantum computing using pure and mixed states

    PubMed Central

    Schulte-Herbrüggen, Thomas; Marx, Raimund; Fahmy, Amr; Kauffman, Louis; Lomonaco, Samuel; Khaneja, Navin; Glaser, Steffen J.

    2012-01-01

    Steering quantum dynamics such that the target states solve classically hard problems is paramount to quantum simulation and computation. And beyond, quantum control is also essential to pave the way to quantum technologies. Here, important control techniques are reviewed and presented in a unified frame covering quantum computational gate synthesis and spectroscopic state transfer alike. We emphasize that it does not matter whether the quantum states of interest are pure or not. While pure states underlie the design of quantum circuits, ensemble mixtures of quantum states can be exploited in a more recent class of algorithms: it is illustrated by characterizing the Jones polynomial in order to distinguish between different (classes of) knots. Further applications include Josephson elements, cavity grids, ion traps and nitrogen vacancy centres in scenarios of closed as well as open quantum systems. PMID:22946034

  19. CSP: A Multifaceted Hybrid Architecture for Space Computing

    NASA Technical Reports Server (NTRS)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  20. Microindentation hardness testing of coatings: techniques and interpretation of data

    NASA Astrophysics Data System (ADS)

    Blau, P. J.

    1986-09-01

    This paper addresses the problems and promises of micro-indentation testing of thin solid films. It discusses basic penetration hardness testing philosophy and the peculiarities of low-load, shallow-penetration tests of uncoated metals, and it compares coated with uncoated behavior so that some of the unique responses of coatings can be distinguished from typical hardness-versus-load behavior. As the uses of thin solid coatings of technological interest continue to proliferate, microindentation testing methodology will increasingly be challenged to provide useful tools for their characterization. The understanding of microindentation response must go hand-in-hand with machine design so that the capability of measurement precision does not outstrip our abilities to interpret test results in a meaningful way.

  1. Assessing accumulated hard-tissue debris using micro-computed tomography and free software for image processing and analysis.

    PubMed

    De-Deus, Gustavo; Marins, Juliana; Neves, Aline de Almeida; Reis, Claudia; Fidel, Sandra; Versiani, Marco A; Alves, Haimon; Lopes, Ricardo Tadeu; Paciornik, Sidnei

    2014-02-01

    The accumulation of debris occurs after root canal preparation procedures, specifically in fins, isthmuses, irregularities, and ramifications. The aim of this study was to present a step-by-step description of a new method used to longitudinally identify, measure, and 3-dimensionally map the accumulation of hard-tissue debris inside the root canal after biomechanical preparation, using free software for image processing and analysis. Three mandibular molars presenting the mesial root with a large isthmus width and a type II Vertucci's canal configuration were selected and scanned. The specimens were assigned to 1 of 3 experimental approaches: (1) 5.25% sodium hypochlorite + 17% EDTA, (2) bidistilled water, or (3) no irrigation. After root canal preparation, high-resolution scans of the teeth were acquired, and free software packages were used to register and quantify the amount of accumulated hard-tissue debris in either canal space or isthmus areas. Canal preparation without irrigation resulted in 34.6% of its volume filled with hard-tissue debris, whereas the use of bidistilled water or NaOCl followed by EDTA reduced the percentage volume of debris to 16% and 11.3%, respectively. The closer the distance to the isthmus area, the larger the amount of accumulated debris, regardless of the irrigation protocol used. Through the present method, it was possible to calculate the volume of hard-tissue debris in the isthmuses and in the root canal space. The free software packages used for image reconstruction, registration, and analysis have proven promising for end-user application.
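    Once the pre- and post-preparation scans are registered and segmented, the core quantification reduces to voxel counting; a minimal numpy sketch (the mask names and toy volumes are illustrative, not the paper's free-software pipeline):

        import numpy as np

        def debris_percentage(canal_mask, debris_mask):
            # Percent of the canal volume occupied by hard-tissue debris,
            # from binary voxel masks segmented out of registered micro-CT scans.
            canal_voxels = np.count_nonzero(canal_mask)
            debris_voxels = np.count_nonzero(debris_mask & canal_mask)
            return 100.0 * debris_voxels / canal_voxels

        canal = np.ones((50, 50, 50), dtype=bool)   # toy canal volume
        debris = np.zeros_like(canal)
        debris[:, :, :17] = True                    # toy debris region
        print(round(debris_percentage(canal, debris), 1))  # 34.0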

  2. Research in the Hard Sciences, and in Very Hard "Softer" Domains

    ERIC Educational Resources Information Center

    Phillips, D. C.

    2014-01-01

    The author of this commentary argues that physical scientists are attempting to advance knowledge in the so-called hard sciences, whereas education researchers are laboring to increase knowledge and understanding in an "extremely hard" but softer domain. Drawing on the work of Popper and Dewey, this commentary highlights the relative…

  3. Regressive Imagery in Creative Problem-Solving: Comparing Verbal Protocols of Expert and Novice Visual Artists and Computer Programmers

    ERIC Educational Resources Information Center

    Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin

    2015-01-01

    We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…

  4. Effects of computer-based graphic organizers to solve one-step word problems for middle school students with mild intellectual disability: A preliminary study.

    PubMed

    Sheriff, Kelli A; Boon, Richard T

    2014-08-01

    The purpose of this study was to examine the effects of computer-based graphic organizers, using Kidspiration 3© software, to solve one-step word problems. Participants included three students with mild intellectual disability enrolled in a functional academic skills curriculum in a self-contained classroom. A multiple probe single-subject research design (Horner & Baer, 1978) was used to evaluate the effectiveness of computer-based graphic organizers for solving mathematical one-step word problems. During the baseline phase, the students completed a teacher-generated worksheet that consisted of nine functional word problems in a traditional format using a pencil, paper, and a calculator. In the intervention and maintenance phases, the students were instructed to complete the word problems using a computer-based graphic organizer. Results indicated that all three of the students improved in their ability to solve the one-step word problems using computer-based graphic organizers compared to traditional instructional practices. Limitations of the study and recommendations for future research directions are discussed.

  5. Computing Role Assignments of Proper Interval Graphs in Polynomial Time

    NASA Astrophysics Data System (ADS)

    Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël

    A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.
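    A brief checker for the underlying definition (Python): phi is an R-role assignment of G exactly when, for every vertex, the image of its neighborhood equals the whole neighborhood of its image (the subset direction is the homomorphism condition, the superset direction is local surjectivity). The example maps the 4-cycle onto the path on three vertices mentioned above.

        def is_role_assignment(G, R, phi):
            # G, R: adjacency dicts (vertex -> set of neighbors);
            # phi: dict mapping each vertex of G to a role in R.
            return all({phi[v] for v in G[u]} == R[phi[u]] for u in G)

        G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # the 4-cycle
        R = {0: {1}, 1: {0, 2}, 2: {1}}                   # path on three vertices
        print(is_role_assignment(G, R, {0: 0, 1: 1, 2: 2, 3: 1}))  # True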

  6. Interactive graphical computer-aided design system

    NASA Technical Reports Server (NTRS)

    Edge, T. M.

    1975-01-01

    System is used for design, layout, and modification of large-scale-integrated (LSI) metal-oxide semiconductor (MOS) arrays. System is structured around small computer which provides real-time support for graphics storage display unit with keyboard, slave display unit, hard copy unit, and graphics tablet for designer/computer interface.

  7. Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem

    NASA Technical Reports Server (NTRS)

    Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith

    2011-01-01

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates. The total weighted tardiness is used to evaluate a schedule's quality. Therefore, ARSP can be modeled as parallel machine scheduling with release times and due dates to minimize the total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop an approximate or heuristic algorithm to obtain solutions in reasonable computation times. In this paper, the Meta-RaPS-ATC algorithm is implemented to create high-quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness to a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with a tardiness objective, is used to construct initial solutions, which are improved by an exchange operation. Results are presented for generated instances.
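    For reference, the standard ATC dispatching index that such a construction heuristic typically uses (Python); the job data, the look-ahead parameter k, and the single-tanker framing are illustrative:

        import math

        def atc_pick(t, jobs, k=2.0):
            # ATC index: I_j(t) = (w_j / p_j) * exp(-max(d_j - p_j - t, 0) / (k * pbar)),
            # with p_j processing time, d_j due date, w_j tardiness weight.
            pbar = sum(p for _, p, _, _ in jobs) / len(jobs)
            def index(job):
                _, p, d, w = job
                return (w / p) * math.exp(-max(d - p - t, 0.0) / (k * pbar))
            return max(jobs, key=index)

        # (job id, processing time, due date, tardiness weight)
        jobs = [("F16-A", 20, 60, 1.0), ("F16-B", 35, 50, 2.0), ("F18-A", 15, 90, 1.0)]
        print(atc_pick(0.0, jobs)[0])  # 'F16-B' has the highest priority here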

  8. Specialized computer architectures for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general-purpose computers, a cost that is high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  9. Campus Community Partnerships with People Who Are Deaf or Hard-of-Hearing

    ERIC Educational Resources Information Center

    Matteson, Jamie; Kha, Christine K.; Hu, Diane J.; Cheng, Chih-Chieh; Saul, Lawrence; Sadler, Georgia Robins

    2008-01-01

    In 1997, the Moores University of California, San Diego (UCSD) Cancer Center and advocacy groups for people who are deaf and hard of hearing launched a highly successful cancer control collaborative. In 2006, faculty from the Computer Science Department at UCSD invited the collaborative to help develop a new track in their doctoral…

  10. The world problem: on the computability of the topology of 4-manifolds

    NASA Technical Reports Server (NTRS)

    vanMeter, J. R.

    2005-01-01

    Topological classification of the 4-manifolds bridges computation theory and physics. A proof of the undecidability of the homeomorphy problem for 4-manifolds is outlined here in a clarifying way. It is shown that an arbitrary Turing machine with an arbitrary input can be encoded into the topology of a 4-manifold, such that the 4-manifold is homeomorphic to a certain other 4-manifold if and only if the corresponding Turing machine halts on the associated input. Physical implications are briefly discussed.

  11. Computer hardware for radiologists: Part 2

    PubMed Central

    Indrajit, IK; Alam, A

    2010-01-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future. PMID:21423895
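    A worked example of the capacity arithmetic described above (Python; the geometry values are illustrative):

        sides, tracks_per_side, sectors_per_track, bytes_per_sector = 2, 16383, 63, 512
        capacity_bytes = sides * tracks_per_side * sectors_per_track * bytes_per_sector
        print(capacity_bytes, "bytes =", round(capacity_bytes / 10**9, 2), "GB")
        # 1056900096 bytes = 1.06 GB for this toy geometry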

  12. Computer hardware for radiologists: Part 2.

    PubMed

    Indrajit, Ik; Alam, A

    2010-11-01

    Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.

  13. Quiet planting in the locked constraints satisfaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zdeborova, Lenka; Krzakala, Florent

    2009-01-01

    We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and first and second moment considerations; in particular, the connection with reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region, instances have, with high probability, a single satisfying assignment.
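    A minimal planted-satisfiable construction in the same spirit (Python). For illustration this plants an assignment in plain random 3-SAT by rejecting clauses the hidden assignment violates; the locked ensembles studied in the paper require a more careful construction for the planting to be "quiet".

        import random

        def plant_3sat(n, m, seed=0):
            rng = random.Random(seed)
            hidden = [rng.choice([True, False]) for _ in range(n)]
            clauses = []
            while len(clauses) < m:
                vars_ = rng.sample(range(n), 3)
                clause = [(v, rng.choice([True, False])) for v in vars_]  # (var, sign)
                if any(hidden[v] == sign for v, sign in clause):  # keep satisfied clauses
                    clauses.append(clause)
            return hidden, clauses

        hidden, clauses = plant_3sat(n=20, m=80)
        print(all(any(hidden[v] == s for v, s in c) for c in clauses))  # True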

  14. Differences in Train-induced Vibration between Hard Soil and Soft Soil

    NASA Astrophysics Data System (ADS)

    Noyori, M.; Yokoyama, H.

    2017-12-01

    Vibration and noise caused by running trains sometimes raise environmental issues. Train-induced vibration is caused by moving static and dynamic axle loads. To reduce the vibration, it is important to clarify the conditions under which train-induced vibration increases. In this study, we clarified the differences in train-induced vibration between hard soil and soft soil using a numerical simulation method that combines two analyses. The first is a coupled vibration analysis of a running train, the track, and the supporting structure, which computes the excitation force applied to the viaduct slabs by the running train. The second is a three-dimensional vibration analysis of the supporting structure and the ground, into which the excitation force computed by the first analysis is input. The simulation shows that ground vibration within 25 m of the center of the viaduct is larger under the soft-soil condition than under the hard-soil condition in almost all frequency ranges. On the other hand, ground vibration at 40 and 50 Hz at a point 50 m from the center of the viaduct is larger under the hard-soil condition than under the soft-soil condition. These results are consistent with those of a two-dimensional FEM based on a ground model alone. We therefore concluded that they arise not from the effects of the running train but from the vibration characteristics of the ground.

  15. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is the problem of optimally assigning n jobs to m individuals (m < n) such that the minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP)-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing the different jobs and individuals, take appropriate steps, and obtain solutions of the UAP in the proper length range and in O(mn) time. We extend the application of DNA molecular operations and exploit their parallelism to reduce the complexity of the computation.
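
    As a reference point on small instances (and not the authors' DNA algorithm), the problem can be stated and solved with a conventional solver; scipy's linear_sum_assignment accepts rectangular cost matrices, matching the m < n setting. A minimal sketch with a made-up cost matrix; note that under this standard formulation only m of the n jobs receive an assignee, whereas some UAP variants instead allow one individual to take several jobs:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical cost matrix: m = 3 individuals (rows), n = 5 jobs (columns).
    cost = np.array([
        [4, 2, 8, 5, 7],
        [1, 9, 6, 4, 3],
        [7, 5, 2, 8, 6],
    ])

    # Minimum-cost assignment of one job per individual.
    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
    ```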

  16. Smart power. Great leaders know when hard power is not enough.

    PubMed

    Nye, Joseph S

    2008-11-01

    The next U.S. administration will face enormous challenges to world peace, the global economy, and the environment. Exercising military and economic muscle alone will not bring peace and prosperity. According to Nye, a former U.S. government official and a former dean at Harvard University's John F. Kennedy School of Government, the next president must be able to combine hard power, characterized by coercion, with what Nye calls "soft" power, which relies instead on attraction. The result is smart power, a tool great leaders use to mobilize people around agendas that look beyond current problems. Hard power is often necessary, Nye explains. In the 1990s, when the Taliban was providing refuge to Al Qaeda, President Clinton tried, and failed, to solve the problem diplomatically instead of destroying terrorist havens in Afghanistan. In other situations, however, soft power is more effective, though it has too often been overlooked. In Iraq, Nye argues, the use of soft power could draw young people toward something other than terrorism. "I think that there's an awakening to the need for soft power as people look at the crisis in the Middle East and begin to realize that hard power is not sufficient to resolve it," he says. Solving today's global problems will require smart power, a judicious blend of the other two powers. While there are notable examples of men who have used smart power (Teddy Roosevelt, for instance), it is much more difficult for women to lead with smart power, especially in the United States, where women feel pressure to prove that they are not "soft." Only by exercising smart power, Nye says, can the next president of the United States set a new tone for U.S. foreign policy in this century.

  17. Effectiveness of Computer-Assisted STAD Cooperative Learning Strategy on Physics Problem Solving, Achievement and Retention

    ERIC Educational Resources Information Center

    Gambari, Amosa Isiaka; Yusuf, Mudasiru Olalere

    2015-01-01

    This study investigated the effectiveness of a computer-assisted Students' Team Achievement Division (STAD) cooperative learning strategy on physics problem solving, students' achievement, and retention. It also examined whether student performance would vary with gender. A purposive sampling technique was used to select two senior secondary schools…

  18. Foraging Behaviors and Potential Computational Ability of Problem-Solving in an Amoeba

    NASA Astrophysics Data System (ADS)

    Nakagaki, Toshiyuki

    We study cell behaviors in complex situations in which multiple food locations are presented simultaneously. The amoeba-like organism, the plasmodium of the true slime mold, gathered at the multiple food locations while its body shape, a tubular network, changed completely; in the end, only a few tubes connected all of the food locations in a network. By adopting this network shape, the plasmodium could meet its own physiological requirements: absorbing nutrients as quickly as possible and maintaining sufficient circulation of chemical signals and nutrients through the whole body. The optimality of the network shape was evaluated in relation to a combinatorial optimization problem. Here we review the potential computational ability of problem-solving in the amoeba, which is much higher than we had thought. The main message of this article is that we had better change our opinion that an amoeba is stupid.
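
    The abstract gives no algorithm, but the tube dynamics described here inspired the well-known Physarum solver (Tero, Kobayashi and Nakagaki), in which tube conductivities are reinforced in proportion to the flux they carry and decay otherwise. A minimal sketch on a toy two-route graph, with illustrative parameter values:

    ```python
    import numpy as np

    # Toy graph: 4 nodes; two routes from node 0 to node 3.
    # Route 0-1-3 has total length 3.0; route 0-2-3 has total length 5.0.
    edges = [(0, 1, 1.0), (1, 3, 2.0), (0, 2, 2.5), (2, 3, 2.5)]
    n, src, sink = 4, 0, 3
    D = np.ones(len(edges))  # tube conductivities, all equal at the start

    for _ in range(100):
        # Kirchhoff's laws: build the weighted Laplacian, solve for pressures.
        A = np.zeros((n, n))
        for k, (i, j, L) in enumerate(edges):
            g = D[k] / L
            A[i, i] += g; A[j, j] += g
            A[i, j] -= g; A[j, i] -= g
        b = np.zeros(n)
        b[src] = 1.0                            # unit flow injected at the source
        A[sink, :] = 0.0; A[sink, sink] = 1.0   # ground the sink at pressure 0
        p = np.linalg.solve(A, b)
        # Flux through each tube; conductivity is reinforced by |flux| and decays.
        Q = np.array([D[k] / L * (p[i] - p[j]) for k, (i, j, L) in enumerate(edges)])
        D += 0.1 * (np.abs(Q) - D)

    print(np.round(D, 3))  # tubes on the shorter route persist; the others vanish
    ```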

  19. Parallel evolutionary computation in bioinformatics applications.

    PubMed

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation to several architectures by making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows users to easily configure their environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies in biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
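
    ParJECoLi itself is Java-based and weaves in parallelism via Aspect-Oriented Programming; as a language-neutral illustration of the underlying design idea (the evolutionary loop stays unchanged while only the fitness evaluation is farmed out to workers), here is a minimal Python sketch on a toy OneMax problem:

    ```python
    import random
    from multiprocessing import Pool

    def fitness(x):
        # Toy objective (OneMax): maximize the number of 1-bits.
        return sum(x)

    def evolve(pop_size=40, n_bits=64, generations=50, seed=0):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        with Pool() as pool:                      # swap Pool for a cluster/grid executor
            for _ in range(generations):
                scores = pool.map(fitness, pop)   # only the evaluation is parallel
                # Binary tournament selection followed by per-bit mutation.
                def pick():
                    a, b = rng.randrange(pop_size), rng.randrange(pop_size)
                    return pop[a] if scores[a] >= scores[b] else pop[b]
                pop = [[bit ^ (rng.random() < 1.0 / n_bits) for bit in pick()]
                       for _ in range(pop_size)]
        return max(pop, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print("best fitness:", fitness(best))
    ```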

  20. Spaces and Places for Disrupting Thinking about Inclusive Education in "Hard Times"

    ERIC Educational Resources Information Center

    Winter, Christine

    2012-01-01

    This paper sets out to read closely the National Curriculum Statutory Inclusion Statement in England (2007) alongside "Hard Times" to see if Dickens offers any insights into ethical responsibility and conceptualisations of inclusive education. I begin by presenting some of the meanings and associated problems of the term "inclusive…