Sample records for problem setting resulted

  1. Investigating the effect of mental set on insight problem solving.

    PubMed

    Ollinger, Michael; Jones, Gary; Knoblich, Günther

    2008-01-01

    Mental set is the tendency to solve certain problems in a fixed way based on previous solutions to similar problems. The moment of insight occurs when a problem cannot be solved using solution methods suggested by prior experience and the problem solver suddenly realizes that the solution requires different solution methods. Mental set and insight have often been linked together, and yet no attempt thus far has systematically examined the interplay between the two. Three experiments are presented that examine the extent to which sets of noninsight and insight problems affect the subsequent solutions of insight test problems. The results indicate a subtle interplay between mental set and insight: when the set involves noninsight problems, no mental set effects are shown for the insight test problems, yet when the set involves insight problems, both facilitation and inhibition can be seen depending on the type of insight problem presented in the set. A two-process model combining the representational change mechanism with proceduralization is detailed to explain these findings.

  2. Application of the artificial bee colony algorithm for solving the set covering problem.

    PubMed

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
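
    The underlying model is easy to state in code. What follows is a minimal sketch of the set covering problem solved with the classical greedy cost-effectiveness heuristic, offered as an illustrative baseline only; it is not the authors' artificial bee colony algorithm, and the toy rows, columns, and costs are invented.

    ```python
    # Classical greedy heuristic for the (non-unicost) set covering problem.
    # Illustrative baseline only; the paper's method is an artificial bee colony.
    def greedy_set_cover(rows, columns, cost):
        """rows: iterable of row ids; columns: dict col -> set of covered rows;
        cost: dict col -> positive cost. Assumes a feasible cover exists."""
        uncovered = set(rows)
        chosen = []
        while uncovered:
            # Pick the column with the lowest cost per newly covered row.
            col = min(
                (c for c in columns if columns[c] & uncovered),
                key=lambda c: cost[c] / len(columns[c] & uncovered),
            )
            chosen.append(col)
            uncovered -= columns[col]
        return chosen

    # Hypothetical toy instance: five rows, four columns.
    columns = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}, "D": {5}}
    cost = {"A": 3.0, "B": 1.0, "C": 2.5, "D": 1.0}
    print(greedy_set_cover({1, 2, 3, 4, 5}, columns, cost))  # -> ['B', 'D', 'A']
    ```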

  3. Application of the Artificial Bee Colony Algorithm for Solving the Set Covering Problem

    PubMed Central

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem. PMID:24883356

  4. Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.

    PubMed

    Forti, Mauro; Nistri, Paolo; Quincampoix, Marc

    2006-11-01

    This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which make it possible to implement an exact penalty method. A new method is exploited to address convergence of trajectories, which is based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length, and as a consequence it converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.

  5. Geometric Hitting Set for Segments of Few Orientations

    DOE PAGES

    Fekete, Sandor P.; Huang, Kan; Mitchell, Joseph S. B.; ...

    2016-01-13

    Here we study several natural instances of the geometric hitting set problem for input consisting of sets of line segments (and rays, lines) having a small number of distinct slopes. These problems model path monitoring (e.g., on road networks) using the fewest sensors (the "hitting points"). We give approximation algorithms for cases including (i) lines of 3 slopes in the plane, (ii) vertical lines and horizontal segments, (iii) pairs of horizontal/vertical segments. Lastly, we give hardness and hardness of approximation results for these problems. We prove that the hitting set problem for vertical lines and horizontal rays is polynomially solvable.
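
    One of the tractable special cases has a compact classical solution: hitting all horizontal segments with the fewest vertical lines reduces to stabbing the segments' x-extents, where the right-endpoint greedy is optimal. A hedged sketch under that reduction (the instance is invented, not taken from the paper):

    ```python
    # Fewest vertical lines hitting all horizontal segments = interval stabbing
    # on the segments' x-extents; the right-endpoint greedy is optimal here.
    def stab_intervals(intervals):
        """intervals: list of (left, right) x-extents. Returns stabbing x's."""
        lines = []
        last = float("-inf")
        for left, right in sorted(intervals, key=lambda iv: iv[1]):
            if left > last:          # not hit by the last line placed
                last = right         # place a line at the right endpoint
                lines.append(last)
        return lines

    print(stab_intervals([(0, 2), (1, 3), (4, 6), (5, 7)]))  # -> [2, 6]
    ```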

  6. Patterns of Home and School Behavior Problems in Rural and Urban Settings

    PubMed Central

    Hope, Timothy L; Bierman, Karen L

    2009-01-01

    This study examined the cross-situational patterns of behavior problems shown by children in rural and urban communities at school entry. Behavior problems exhibited in home settings were not expected to vary significantly across urban and rural settings. In contrast, it was anticipated that child behavior at school would be heavily influenced by the increased exposure to aggressive models and deviant peer support experienced by children in urban as compared to rural schools, leading to higher rates of school conduct problems for children in urban settings. Statistical comparisons of the patterns of behavior problems shown by representative samples of 89 rural and 221 urban children provided support for these hypotheses, as significant rural-urban differences emerged in school and not in home settings. Cross-situational patterns of behavior problems also varied across settings, with home-only patterns of problems characterizing more children at the rural site and school-only patterns of behavior problems characterizing more children at the urban sites. In addition, whereas externalizing behavior was the primary school problem exhibited by urban children, rural children displayed significantly higher rates of internalizing problems at school. The implications of these results are discussed for developmental models of behavior problems and for preventive interventions. PMID:19834584

  7. Some new results on the central overlap problem in astrometry

    NASA Astrophysics Data System (ADS)

    Rapaport, M.

    1998-07-01

    The central overlap problem in astrometry was revisited in recent years by Eichhorn (1988), who explicitly inverted the matrix of a constrained least squares problem. In this paper, the general explicit solution of the unconstrained central overlap problem is given. We also give the explicit solution for another set of constraints; this result confirms a conjecture expressed by Eichhorn (1988). We also consider the use of iterative methods to solve the central overlap problem. A surprising result is obtained when the classical Gauss-Seidel method is used: the iterations converge immediately to the general solution of the equations; we explain this property by writing the central overlap problem in a new set of variables.

  8. The Artificial Neural Networks Based on Scalarization Method for a Class of Bilevel Biobjective Programming Problem

    PubMed Central

    Chen, Zhong; Liu, June; Li, Xiong

    2017-01-01

    A two-stage artificial neural network (ANN) based on a scalarization method is proposed for the bilevel biobjective programming problem (BLBOP). The induced set of the BLBOP is first expressed as the set of minimal solutions of a biobjective optimization problem by using a scalarization approach, and then the whole efficient set of the BLBOP is derived by the proposed two-stage ANN for exploring the induced set. In order to illustrate the proposed method, seven numerical examples are tested and compared with results in the classical literature. Finally, a practical problem is solved by the proposed algorithm. PMID:29312446

  9. On-Orbit Range Set Applications

    NASA Astrophysics Data System (ADS)

    Holzinger, M.; Scheeres, D.

    2011-09-01

    History and methodology of Δv range set computation is briefly reviewed, followed by a short summary of the Δv optimal spacecraft servicing problem literature. Service vehicle placement is approached from a Δv range set viewpoint, providing a framework under which the analysis becomes quite geometric and intuitive. The optimal servicing tour design problem is shown to be a specific instantiation of the metric-Traveling Salesman Problem (TSP), which in general is an NP-hard problem. The Δv-TSP is argued to be quite similar to the Euclidean-TSP, for which approximate optimal solutions may be found in polynomial time. Applications of range sets are demonstrated using analytical and simulation results.

  10. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
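
    In the one-dimensional setting, the single large object placement problem that generates each pattern amounts to a knapsack over the stock length, maximizing the total dual value of the pieces cut. The sketch below shows the unbounded variant under invented data; the PSG's actual subproblem may differ in detail (e.g., demand bounds on piece counts).

    ```python
    # Pattern generation for 1D cutting stock: an unbounded knapsack that
    # maximizes the total dual value of pieces cut from one stock length.
    def best_pattern(stock_len, item_lens, duals):
        value = [0.0] * (stock_len + 1)          # best dual value per capacity
        take = [None] * (stock_len + 1)          # item added to reach that value
        for c in range(1, stock_len + 1):
            for i, length in enumerate(item_lens):
                if length <= c and value[c - length] + duals[i] > value[c]:
                    value[c] = value[c - length] + duals[i]
                    take[c] = i
        pattern = [0] * len(item_lens)           # reconstruct piece counts
        c = stock_len
        while c > 0 and take[c] is not None:
            pattern[take[c]] += 1
            c -= item_lens[take[c]]
        return pattern, value[stock_len]

    print(best_pattern(10, [3, 4, 5], [2.0, 2.9, 3.5]))  # -> ([0, 0, 2], 7.0)
    ```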

  11. An ILP based memetic algorithm for finding minimum positive influence dominating sets in social networks

    NASA Astrophysics Data System (ADS)

    Lin, Geng; Guan, Jian; Feng, Huibin

    2018-06-01

    The positive influence dominating set problem is a variant of the minimum dominating set problem and has many applications in social networks. It is NP-hard and has received increasing attention. Various methods have been proposed to solve the positive influence dominating set problem. However, most existing work has focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear program (ILP), and propose an ILP based memetic algorithm (ILPMA) for solving the problem. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with up to 36,692 nodes. The results show that ILPMA significantly improves the solution quality and is robust.
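
    For reference, a commonly used ILP formulation of the minimum positive influence dominating set requires every vertex to have at least half of its neighbors in the set. This is the standard model from the literature; the paper's exact formulation may add refinements:

    ```latex
    \min \sum_{v \in V} x_v
    \quad \text{s.t.} \quad
    \sum_{u \in N(v)} x_u \;\ge\; \left\lceil \frac{d(v)}{2} \right\rceil
    \quad \forall v \in V,
    \qquad
    x_v \in \{0,1\} \quad \forall v \in V,
    ```

    where N(v) denotes the neighborhood of vertex v and d(v) its degree.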

  12. Navigating Bioethical Waters: Two Pilot Projects in Problem-Based Learning for Future Bioscience and Biotechnology Professionals.

    PubMed

    Berry, Roberta M; Levine, Aaron D; Kirkman, Robert; Blake, Laura Palucki; Drake, Matthew

    2016-12-01

    We believe that the professional responsibility of bioscience and biotechnology professionals includes a social responsibility to contribute to the resolution of ethically fraught policy problems generated by their work. It follows that educators have a professional responsibility to prepare future professionals to discharge this responsibility. This essay discusses two pilot projects in ethics pedagogy focused on particularly challenging policy problems, which we call "fractious problems". The projects aimed to advance future professionals' acquisition of "fractious problem navigational" skills, a set of skills designed to enable broad and deep understanding of fractious problems and the design of good policy resolutions for them. A secondary objective was to enhance future professionals' motivation to apply these skills to help their communities resolve these problems. The projects employed "problem based learning" courses to advance these learning objectives. A new assessment instrument, "Skills for Science/Engineering Ethics Test" (SkillSET), was designed and administered to measure the success of the courses in doing so. This essay first discusses the rationale for the pilot projects, and then describes the design of the pilot courses and presents the results of our assessment using SkillSET in the first pilot project and the revised SkillSET 2.0 in the second pilot project. The essay concludes with discussion of observations and results.

  13. Extended Islands of Tractability for Parsimony Haplotyping

    NASA Astrophysics Data System (ADS)

    Fleischer, Rudolf; Guo, Jiong; Niedermeier, Rolf; Uhlmann, Johannes; Wang, Yihui; Weller, Mathias; Wu, Xi

    Parsimony haplotyping is the problem of finding a smallest size set of haplotypes that can explain a given set of genotypes. The problem is NP-hard, and many heuristic and approximation algorithms as well as polynomial-time solvable special cases have been discovered. We propose improved fixed-parameter tractability results with respect to the parameter "size of the target haplotype set" k by presenting an O*(k^{4k})-time algorithm. This also applies to the practically important constrained case, where we can only use haplotypes from a given set. Furthermore, we show that the problem becomes polynomial-time solvable if the given set of genotypes is complete, i.e., contains all possible genotypes that can be explained by the set of haplotypes.
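
    The underlying consistency check is simple: under the usual coding (0/1 for homozygous sites, 2 for heterozygous sites), a pair of haplotypes explains a genotype when they match it at homozygous sites and differ at heterozygous ones. A small sketch under that coding assumption:

    ```python
    # A pair of haplotypes explains a genotype iff they match it at homozygous
    # sites (0/1) and differ from each other at heterozygous sites (coded 2).
    def explains(h1, h2, genotype):
        return all(
            (g == 2 and a != b) or (g == a == b)
            for a, b, g in zip(h1, h2, genotype)
        )

    print(explains([0, 1, 1], [0, 0, 1], [0, 2, 1]))  # -> True
    print(explains([0, 1, 1], [0, 1, 1], [0, 2, 1]))  # -> False
    ```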

  14. On Making a Distinguished Vertex Minimum Degree by Vertex Deletion

    NASA Astrophysics Data System (ADS)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes

    For directed and undirected graphs, we study the problem of making a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". In contrast, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding nonexistence results when replacing vertex cover number by treewidth or feedback vertex set number.

  15. A problem-solving routine for improving hospital operations.

    PubMed

    Ghosh, Manimay; Sobek Ii, Durward K

    2015-01-01

    The purpose of this paper is to examine empirically why a systematic problem-solving routine can play an important role in the process improvement efforts of hospitals. Data on 18 process improvement cases were collected through semi-structured interviews, reports and other documents, and artifacts associated with the cases. The data were analyzed using a grounded theory approach. Adherence to all the steps of the problem-solving routine correlated with greater degrees of improvement across the sample. Analysis resulted in two models. The first partially explains why hospital workers tended to enact short-term solutions when faced with process-related problems, and tended not to seek longer-term solutions that prevent problems from recurring. The second model highlights a set of self-reinforcing behaviors that are more likely to address problem recurrence and result in sustained process improvement. The study was conducted in one hospital setting. Hospital managers can improve patient care and increase operational efficiency by adopting and diffusing problem-solving routines that embody three key characteristics. This paper offers new insights on why caregivers adopt short-term approaches to problem solving. Three characteristics of an effective problem-solving routine in a healthcare setting are proposed.

  16. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
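
    Iterated greedy alternates a destruction step (remove part of the incumbent solution) with a greedy reconstruction step, keeping improvements. The skeleton below is a generic, hedged rendering of the metaheuristic, not the paper's berth-specific operators; `destroy`, `rebuild`, and `cost` are assumed to be problem-supplied callables.

    ```python
    import random

    # Generic iterated greedy skeleton: destroy part of the solution, greedily
    # rebuild it, and keep the result if it improves the incumbent.
    def iterated_greedy(initial, destroy, rebuild, cost, iters=1000, seed=0):
        rng = random.Random(seed)
        best = incumbent = initial
        for _ in range(iters):
            partial, removed = destroy(incumbent, rng)   # e.g. unassign k ships
            candidate = rebuild(partial, removed)        # greedy reinsertion
            if cost(candidate) <= cost(incumbent):       # simple acceptance rule
                incumbent = candidate
                if cost(candidate) < cost(best):
                    best = candidate
        return best
    ```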

  17. Discrete Ordinate Quadrature Selection for Reactor-based Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Joshua J; Evans, Thomas M; Davidson, Gregory G

    2013-01-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work.
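
    To give a concrete sense of what a product quadrature set is, the sketch below constructs a Gauss-Chebyshev-style product set over one octant: Gauss-Legendre points in the polar cosine crossed with equally weighted azimuthal angles. It is a schematic illustration only, checking just that the weights integrate a constant over the octant; it is not the production LS/QR/LDFE machinery compared in the paper.

    ```python
    import numpy as np

    # Product quadrature over one octant of the unit sphere: Gauss-Legendre in
    # the polar cosine (mu) times equally weighted azimuthal angles.
    def product_quadrature(n_polar, n_azimuthal):
        mu, w_mu = np.polynomial.legendre.leggauss(n_polar)
        mu, w_mu = 0.5 * (mu + 1.0), 0.5 * w_mu          # map [-1, 1] -> [0, 1]
        omega = (np.arange(n_azimuthal) + 0.5) * (np.pi / 2) / n_azimuthal
        dirs, wts = [], []
        for m, wm in zip(mu, w_mu):
            s = np.sqrt(1.0 - m * m)                     # sin(theta)
            for om in omega:
                dirs.append((s * np.cos(om), s * np.sin(om), m))
                wts.append(wm * (np.pi / 2) / n_azimuthal)
        return np.array(dirs), np.array(wts)

    dirs, wts = product_quadrature(4, 4)
    print(np.isclose(wts.sum(), np.pi / 2))  # octant solid angle -> True
    ```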

  18. Discrete ordinate quadrature selection for reactor-based Eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, J. J.; Evans, T. M.; Davidson, G. G.

    2013-07-01

    In this paper we analyze the effect of various quadrature sets on the eigenvalues of several reactor-based problems, including a two-dimensional (2D) fuel pin, a 2D lattice of fuel pins, and a three-dimensional (3D) reactor core problem. While many quadrature sets have been applied to neutral particle discrete ordinate transport calculations, the Level Symmetric (LS) and the Gauss-Chebyshev product (GC) sets are the most widely used in production-level reactor simulations. Other quadrature sets, such as Quadruple Range (QR) sets, have been shown to be more accurate in shielding applications. In this paper, we compare the LS, GC, QR, and the recently developed linear-discontinuous finite element (LDFE) sets, as well as give a brief overview of other proposed quadrature sets. We show that, for a given number of angles, the QR sets are more accurate than the LS and GC in all types of reactor problems analyzed (2D and 3D). We also show that the LDFE sets are more accurate than the LS and GC sets for these problems. We conclude that, for problems where tens to hundreds of quadrature points (directions) per octant are appropriate, QR sets should regularly be used because they have similar integration properties as the LS and GC sets, have no noticeable impact on the speed of convergence of the solution when compared with other quadrature sets, and yield more accurate results. We note that, for very high-order scattering problems, the QR sets exactly integrate fewer angular flux moments over the unit sphere than the GC sets. The effects of those inexact integrations have yet to be analyzed. We also note that the LDFE sets only exactly integrate the zeroth and first angular flux moments. Pin power comparisons and analyses are not included in this paper and are left for future work. (authors)

  19. On a problem of reconstruction of a discontinuous function by its Radon transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derevtsov, Evgeny Yu.; Maltseva, Svetlana V.; Svetov, Ivan E.

    A problem of reconstruction of a discontinuous function from its Radon transform is considered. One approach to the numerical solution of the problem consists of the following sequential steps: visualization of the set of breaking points; identification of this set; determination of jump values; elimination of discontinuities. We consider three of the listed problems, all except the determination of jump values. The problems are investigated by mathematical modeling using numerical experiments. The results of the simulation are satisfactory and give hope for further development of the approach.

  20. Fuzzy compromise: An effective way to solve hierarchical design problems

    NASA Technical Reports Server (NTRS)

    Allen, J. K.; Krishnamachari, R. S.; Masetta, J.; Pearce, D.; Rigby, D.; Mistree, F.

    1990-01-01

    In this paper, we present a method for modeling design problems using a compromise decision support problem (DSP) incorporating the principles embodied in fuzzy set theory. Specifically, the fuzzy compromise decision support problem is used to study hierarchical design problems. This approach has the advantage that although the system modeled has an element of uncertainty associated with it, the solution obtained is crisp and precise. The efficacy of incorporating fuzzy sets into the solution process is discussed in the context of results obtained for a portal frame.

  1. Cuckoo search via Levy flights applied to uncapacitated facility location problem

    NASA Astrophysics Data System (ADS)

    Mesa, Armacheska; Castromayor, Kris; Garillos-Manliguez, Cinmayii; Calag, Vicente

    2017-11-01

    Facility location problem (FLP) is a mathematical way to optimally locate facilities within a set of candidates to satisfy the requirements of a given set of clients. This study addressed the uncapacitated FLP as it assures that the capacity of every selected facility is finite. Thus, even if the demand is not known, which is often the case in reality, organizations may still be able to make strategic decisions such as locating the facilities. There are different approaches relevant to the uncapacitated FLP. Here, the cuckoo search via Lévy flight (CS-LF) was used to solve the problem. Though hybrid methods produce better results, this study employed CS-LF to first determine its potential in finding solutions for the problem, particularly when applied to a real-world problem. The method was applied to the data set obtained from a department store in Davao City, Philippines. Results showed that applying CS-LF yielded better facility locations compared to particle swarm optimization and other existing algorithms. Although these results showed that CS-LF is a promising method to solve this particular problem, further studies on other FLP are recommended to establish a strong foundation of the capability of CS-LF in solving FLP.
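
    The heavy-tailed step at the heart of CS-LF is commonly drawn with Mantegna's algorithm; the sketch below uses that standard recipe. The stability index beta = 1.5 is a conventional default, not a parameter reported in this paper.

    ```python
    import numpy as np
    from math import gamma, sin, pi

    # Mantegna's algorithm: draws approximately Levy-stable step lengths,
    # the heavy-tailed moves used by cuckoo search via Levy flights.
    def levy_steps(beta=1.5, size=10, rng=None):
        rng = rng or np.random.default_rng(0)
        sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                 / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, size)
        v = rng.normal(0.0, 1.0, size)
        return u / np.abs(v) ** (1 / beta)

    print(levy_steps()[:3])  # a few heavy-tailed step lengths
    ```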

  2. Application of TRIZ approach to machine vibration condition monitoring problems

    NASA Astrophysics Data System (ADS)

    Cempel, Czesław

    2013-12-01

    Up to now, machine condition monitoring has not been seriously approached by TRIZ (TRIZ is the Russian acronym for the Inventive Problem Solving System created by G. Altshuller ca. 50 years ago) users, and TRIZ methodology has not been applied there intensively. There are some introductory papers by the present author presented at the Diagnostic Congress in Cracow (Cempel, in press [11]) and in the Diagnostyka Journal, but there seems to be a further need to approach the subject from different sides in order to see if some new knowledge and technology will emerge. In doing this we need at first to define the ideal final result (IFR) of our innovation problem. Next, we need a set of parameters to describe the problems of system condition monitoring (CM) in terms of the TRIZ language, and a set of inventive principles possible to apply on the way to the IFR. This means we should present the machine CM problem by means of contradictions and the contradiction matrix. When specifying the problem parameters and inventive principles, one should use analogy and metaphorical thinking, which by definition is not exact but fuzzy, and sometimes leads to unexpected results and outcomes. The paper undertakes this important problem again and brings some new insight into system and machine CM problems. This may mean, for example, the minimal dimensionality of the TRIZ engineering parameter set for the description of machine CM problems, and the set of most useful inventive principles applied to a given engineering parameter and the contradictions of TRIZ.

  3. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.

  4. Sparse Substring Pattern Set Discovery Using Linear Programming Boosting

    NASA Astrophysics Data System (ADS)

    Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki

    In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. Then we solve this problem by using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.

  5. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. Descriptions of the algorithms tested and the encodings of the problems are described in detail for reproducibility.

  6. Computational Study for Planar Connected Dominating Set Problem

    NASA Astrophysics Data System (ADS)

    Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua

    The connected dominating set (CDS) problem is a well studied NP-hard problem with many important applications. Dorn et al. [ESA 2005, LNCS 3669, pp. 95-106] introduced a new technique to generate 2^{O(sqrt{n})} time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm had yet to be evaluated. We perform a computational study for such an evaluation. The results show that the size of the instances that can be solved by the algorithm mainly depends on the branchwidth of the instances, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in practical time and memory space. This suggests that branch-decomposition based algorithms can be practical for the planar CDS problem.

  7. A path following algorithm for the graph matching problem.

    PubMed

    Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe

    2009-12-01

    We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-square problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We, therefore, construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method allows us to easily integrate the information on graph label similarities into the optimization problem and, therefore, to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
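
    The path-following idea admits a compact statement. With F_0 the convex and F_1 the concave relaxation over the doubly stochastic matrices, the algorithm tracks a local minimizer of the interpolated objective

    ```latex
    F_\lambda(P) \;=\; (1-\lambda)\, F_0(P) \;+\; \lambda\, F_1(P),
    \qquad \lambda : 0 \to 1,
    ```

    warm-starting each minimization at the previous solution; at λ = 1 the minimizer lies at a permutation matrix, since the concave objective attains its minimum at an extreme point of the polytope.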

  8. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record

    PubMed Central

    Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Background Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. Objective To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. Study design and methods We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100 000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100 000 records to assess its accuracy. Results Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100 000 randomly selected patients showed high sensitivity (range: 62.8–100.0%) and positive predictive value (range: 79.8–99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. Conclusion We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts. PMID:21613643
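
    The flavor of such rules is easy to convey in code. The rule below is a hypothetical example of the pattern (infer a diabetes problem from structured medication or laboratory data); the trigger names and structure are invented for illustration and are not one of the paper's seventeen validated rules (only the 6.5% HbA1c cutoff is a standard diagnostic threshold).

    ```python
    # Hypothetical problem-inference rule in the style described: infer a
    # problem when structured medication or lab data support it. Not the
    # paper's validated rule; names and structure are illustrative only.
    def infer_diabetes(medications, labs):
        on_hypoglycemic = any(
            m in {"insulin", "metformin", "glipizide"} for m in medications
        )
        high_hba1c = labs.get("hba1c_percent", 0.0) >= 6.5  # standard cutoff
        return on_hypoglycemic or high_hba1c

    print(infer_diabetes(["metformin"], {"hba1c_percent": 5.9}))  # -> True
    ```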

  9. Set-free Markov state model building

    NASA Astrophysics Data System (ADS)

    Weber, Marcus; Fackeldey, Konstantin; Schütte, Christof

    2017-03-01

    Molecular dynamics (MD) simulations face challenging problems since the time scales of interest often are much longer than what is possible to simulate; and even if sufficiently long simulations are possible the complex nature of the resulting simulation data makes interpretation difficult. Markov State Models (MSMs) help to overcome these problems by making experimentally relevant time scales accessible via coarse grained representations that also allow for convenient interpretation. However, standard set-based MSMs exhibit some caveats limiting their approximation quality and statistical significance. One of the main caveats results from the fact that typical MD trajectories repeatedly re-cross the boundary between the sets used to build the MSM which causes statistical bias in estimating the transition probabilities between these sets. In this article, we present a set-free approach to MSM building utilizing smooth overlapping ansatz functions instead of sets and an adaptive refinement approach. This kind of meshless discretization helps to overcome the recrossing problem and yields an adaptive refinement procedure that allows us to improve the quality of the model while exploring state space and inserting new ansatz functions into the MSM.
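
    One simple way to realize smooth overlapping ansatz functions in place of crisp sets is a Gaussian partition of unity with soft transition counts. The sketch below is a schematic of that idea with invented parameters and a crude choice of centers; it is not the authors' adaptive refinement scheme.

    ```python
    import numpy as np

    # Soft (set-free) state assignment: Gaussian ansatz functions normalized
    # to a partition of unity, then soft transition counts between them.
    def soft_memberships(samples, centers, width):
        d2 = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        chi = np.exp(-d2 / (2.0 * width**2))
        return chi / chi.sum(axis=1, keepdims=True)   # rows sum to one

    def soft_transition_matrix(traj, centers, width, lag):
        chi = soft_memberships(traj, centers, width)
        counts = chi[:-lag].T @ chi[lag:]             # soft transition counts
        return counts / counts.sum(axis=1, keepdims=True)

    traj = np.random.default_rng(0).normal(size=(500, 2)).cumsum(axis=0)
    centers = traj[::100]                             # crude center choice
    print(soft_transition_matrix(traj, centers, width=2.0, lag=10).shape)  # (5, 5)
    ```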

  10. Various forms of indexing HDMR for modelling multivariate classification problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksu, Çağrı; Tunga, M. Alper

    2014-12-10

    The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy in partitioning the given multivariate data set into less variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems having real world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure. However, Plain HDMR needs an orthogonal data structure in the given problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real life classification problems. To test these different forms, several well-known multivariate classification problems given in the UCI Machine Learning Repository were used and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.
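
    For reference, the HDMR expansion that both Plain and Indexing HDMR truncate writes a multivariate function as a hierarchy of lower-variate components:

    ```latex
    f(x_1,\dots,x_N) \;=\; f_0
    \;+\; \sum_{i=1}^{N} f_i(x_i)
    \;+\; \sum_{1 \le i < j \le N} f_{ij}(x_i, x_j)
    \;+\; \cdots
    \;+\; f_{12\cdots N}(x_1,\dots,x_N),
    ```

    where f_0 is a constant; truncating after the low-order terms yields the less variate data sets used in the partitioning.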

  11. The Effects of Conducting a Functional Analysis on Problem Behavior in Other Settings

    ERIC Educational Resources Information Center

    Call, Nathan A.; Findley, Addie J.; Reavis, Andrea R.

    2012-01-01

    It has been suggested that reinforcing problem behavior during functional analyses (FAs) may be unethical (e.g., Carr, 1977), the implication being that doing so may result in an increase in problem behavior outside of FA sessions. The current study assessed whether conducting a FA resulted in increases in problem behavior outside of the FA…

  12. Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem.

    PubMed

    Zhao, Ying; Shi, Luoyi

    2017-01-01

    This paper introduces a new extragradient-type method to solve the multiple-sets split equality problem (MSSEP). Under some suitable conditions, the strong convergence of an algorithm can be verified in the infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of our algorithm.
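
    For orientation, the classical extragradient template that such methods build on takes a predictor step and then a corrector step through the same projection; it is shown here for a variational inequality with operator F, step size τ, and closed convex set C (the MSSEP-specific operator and step-size rules are in the paper):

    ```latex
    y_k \;=\; P_C\!\left(x_k - \tau F(x_k)\right),
    \qquad
    x_{k+1} \;=\; P_C\!\left(x_k - \tau F(y_k)\right).
    ```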

  13. Assessing students' ability to solve introductory physics problems using integrals in symbolic and graphical representations

    NASA Astrophysics Data System (ADS)

    Khan, Neelam; Hu, Dehui; Nguyen, Dong-Hai; Rebello, N. Sanjay

    2012-02-01

    Integration is widely used in physics in electricity and magnetism (E&M), as well as in mechanics, to calculate physical quantities from other non-constant quantities. We designed a survey to assess students' ability to apply integration to physics problems in introductory physics. Each student was given a set of eight problems, and each set of problems had two different versions; one consisted of symbolic problems and the other graphical problems. The purpose of this study was to investigate students' strategies for solving physics problems that use integrals in first- and second-semester calculus-based physics. Our results indicate that most students had difficulty even recognizing that an integral is needed to solve the problem.

  14. Complete Sets of Radiating and Nonradiating Parts of a Source and Their Fields with Applications in Inverse Scattering Limited-Angle Problems

    PubMed Central

    Louis, A. K.

    2006-01-01

    Many algorithms applied in inverse scattering problems use source-field systems instead of the direct computation of the unknown scatterer. It is well known that the resulting source problem does not have a unique solution, since certain parts of the source totally vanish outside of the reconstruction area. For the two-dimensional case, this paper provides special sets of functions that include all radiating and all nonradiating parts of the source. These sets are used to solve an acoustic inverse problem in two steps. The problem under discussion consists of determining an inhomogeneous obstacle supported in a part of a disc, from data known for a subset of a two-dimensional circle. In a first step, the radiating parts are computed by solving a linear problem. The second step is nonlinear and consists of determining the nonradiating parts. PMID:23165060

  15. A shrinking hypersphere PSO for engineering optimisation problems

    NASA Astrophysics Data System (ADS)

    Yadav, Anupam; Deep, Kusum

    2016-03-01

    Many real-world and engineering design problems can be formulated as constrained optimisation problems (COPs). Swarm intelligence techniques are a good approach to solve COPs. In this paper an efficient shrinking hypersphere-based particle swarm optimisation (SHPSO) algorithm is proposed for constrained optimisation. The proposed SHPSO is designed in such a way that the particle is set to move under the influence of shrinking hyperspheres. A parameter-free approach is used to handle the constraints. The performance of the SHPSO is compared against the state-of-the-art algorithms for a set of 24 benchmark problems. An exhaustive comparison of the results is provided statistically as well as graphically. Moreover, three engineering design problems, namely welded beam design, compression spring design and pressure vessel design, are solved using SHPSO and the results are compared with the state-of-the-art algorithms.

  16. Determining the relationship between students' scores using traditional homework assignments to those who used assignments on a non-traditional interactive CD with tutor helps

    NASA Astrophysics Data System (ADS)

    Tinney, Charles Evan

    2007-12-01

    By using the book "Physics for Scientists and Engineers" by Raymond A. Serway as a guide, CD problem sets for teaching a calculus-based physics course were developed, programmed, and evaluated for homework assignments during the 2003-2004 academic year at Utah State University. These CD sets were used to replace the traditionally handwritten and submitted homework sets. They included a research-based format that guided the students through problem-solving techniques using response-activated helps and suggestions. The CD contents were designed to help the student improve his/her physics problem-solving skills. The analyzed score results showed a direct correlation between the scores obtained on the homework and the students' time spent per problem, as well as the number of helps used per problem.

  17. Optimizing sensor cover energy for directional sensors

    NASA Astrophysics Data System (ADS)

    Astorino, Annabella; Gaudioso, Manlio; Miglionico, Giovanna

    2016-10-01

    The Directional Sensors Continuous Coverage Problem (DSCCP) aims at covering a given set of targets in a plane by means of a set of directional sensors. The location of these sensors is known in advance and they are characterized by a discrete set of possible radii and aperture angles. Decisions to be made are about orientation (which in our approach can vary continuously), radius and aperture angle of each sensor. The objective is to get a minimum cost coverage of all targets, if any. We introduce a MINLP formulation of the problem and define a Lagrangian heuristics based on a dual ascent procedure operating on one multiplier at a time. Finally we report the results of the implementation of the method on a set of test problems.
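
    The coverage test underneath any DSCCP model is purely geometric: a target is covered when it lies within the sensor's radius and its bearing deviates from the sensor's orientation by at most half the aperture angle. A minimal sketch of that test (invented instance, not the paper's MINLP):

    ```python
    import math

    # A directional sensor covers a target iff the target is within the radius
    # and the bearing to it deviates from the orientation by <= aperture / 2.
    def covers(sensor, orientation, aperture, radius, target):
        dx, dy = target[0] - sensor[0], target[1] - sensor[1]
        if math.hypot(dx, dy) > radius:
            return False
        diff = math.atan2(dy, dx) - orientation
        diff = (diff + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
        return abs(diff) <= aperture / 2

    print(covers((0, 0), math.pi / 4, math.pi / 2, 5.0, (2, 2)))   # -> True
    print(covers((0, 0), math.pi / 4, math.pi / 2, 5.0, (-2, 2)))  # -> False
    ```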

  18. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
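
    The cost function being minimized in set partition is simply the residue of a signed sum; an exhaustive sketch for tiny instances (the quantum algorithm of course does not enumerate):

    ```python
    from itertools import product

    # Set partition residue: minimize |sum_i s_i * a_i| over signs s_i in {+1, -1}.
    # Brute force is exponential; usable only for tiny instances.
    def min_residue(numbers):
        return min(
            abs(sum(s * a for s, a in zip(signs, numbers)))
            for signs in product((1, -1), repeat=len(numbers))
        )

    print(min_residue([4, 5, 6, 7, 8]))  # -> 0  (4 + 5 + 6 = 7 + 8)
    ```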

  19. Proficiency Standards and Cut-Scores for Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond H.

    The problem of standard setting on language proficiency tests is often approached by the use of norms derived from the group being tested, a process commonly known as "grading on the curve." One particular problem with this ad hoc method of standard setting is that it will usually result in a fluctuating standard dependent on the particular group…

  20. Dynamic discrete tomography

    NASA Astrophysics Data System (ADS)

    Alpers, Andreas; Gritzmann, Peter

    2018-03-01

    We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time, the current positions of the points in space are accessible only through some small number of their x-rays. This particular particle tracking problem, with applications, e.g. in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.

  1. How coping styles, cognitive distortions, and attachment predict problem gambling among adolescents and young adults.

    PubMed

    Calado, Filipa; Alexandre, Joana; Griffiths, Mark D

    2017-12-01

    Background and aims Recent research suggests that youth problem gambling is associated with several factors, but little is known about how these factors might influence or interact with each other in predicting this behavior. Consequently, this is the first study to examine the mediation effect of coping styles in the relationship between attachment to parental figures and problem gambling. Methods A total of 988 adolescents and emerging adults were recruited to participate. The first set of analyses tested the adequacy of a model comprising biological, cognitive, and family variables in predicting youth problem gambling. The second set of analyses explored the relationship between family and individual variables in problem gambling behavior. Results The results of the first set of analyses demonstrated that the individual factors of gender, cognitive distortions, and coping styles showed a significant predictive effect on youth problematic gambling, and the family factors of attachment and family structure did not reveal a significant influence on this behavior. The results of the second set of analyses demonstrated that the attachment dimension of angry distress exerted a more indirect influence on problematic gambling, through emotion-focused coping style. Discussion This study revealed that some family variables can have a more indirect effect on youth gambling behavior and provided some insights into how some factors interact in predicting problem gambling. Conclusion These findings suggest that youth gambling is a multifaceted phenomenon and that the indirect effects of family variables are important in estimating the complex social forces that might influence adolescent decisions to gamble.

  2. Modelling human problem solving with data from an online game.

    PubMed

    Rach, Tim; Kirsch, Alexandra

    2016-11-01

    Since the beginning of cognitive science, researchers have tried to understand human strategies in order to develop efficient and adequate computational methods. In the domain of problem solving, the travelling salesperson problem has been used for the investigation and modelling of human solutions. We propose to extend this effort with an online game, in which instances of the travelling salesperson problem have to be solved in the context of a game experience. We report on our effort to design and run such a game, present the data contained in the resulting openly available data set and provide an outlook on the use of games in general for cognitive science research. In addition, we present three geometrical models mapping the starting point preferences in the problems presented in the game as the result of an evaluation of the data set.
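
    For comparison with human tours, the usual algorithmic baseline is a simple construction heuristic such as nearest neighbor; a hedged sketch follows (it is not one of the paper's three geometrical models):

    ```python
    import math

    # Nearest-neighbor construction heuristic for the travelling salesperson
    # problem: a standard baseline against which human tours are compared.
    def nearest_neighbor_tour(points, start=0):
        unvisited = set(range(len(points))) - {start}
        tour = [start]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(points[i], last))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    print(nearest_neighbor_tour([(0, 0), (5, 0), (1, 1), (0, 3)]))  # -> [0, 2, 3, 1]
    ```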

  3. Performance of MCNP4A on seven computing platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.; Brockhoff, R.C.

    1994-12-31

    The performance of seven computer platforms has been evaluated with the MCNP4A Monte Carlo radiation transport code. For the first time we report timing results using MCNP4A and its new test set and libraries. Comparisons are made on platforms not available to us in previous MCNP timing studies. By using MCNP4A and its 25-problem test set, a widely-used and readily-available physics production code is used; the timing comparison is not limited to a single "typical" problem, demonstrating the problem dependence of timing results; the results are reproducible at the more than 100 installations around the world using MCNP; comparison of performance of other computer platforms to the ones tested in this study is possible because we present raw data rather than normalized results; and a measure of the increase in performance of computer hardware and software over the past two years is possible. The computer platforms reported are the Cray-YMP 8/64, IBM RS/6000-560, Sun Sparc10, Sun Sparc2, HP/9000-735, 4 processor 100 MHz Silicon Graphics ONYX, and Gateway 2000 model 4DX2-66V PC. In 1991 a timing study of MCNP4, the predecessor to MCNP4A, was conducted using ENDF/B-V cross-section libraries, which are export protected. The new study is based upon the new MCNP 25-problem test set which utilizes internationally available data. MCNP4A, its test problems and the test data library are available from the Radiation Shielding and Information Center in Oak Ridge, Tennessee, or from the NEA Data Bank in Saclay, France. Anyone with the same workstation and compiler can get the same test problem sets, the same library files, and the same MCNP4A code from RSIC or NEA and replicate our results. And, because we report raw data, comparison of the performance of other computer platforms and compilers can be made.

  4. Iterative framework radiation hybrid mapping

    USDA-ARS?s Scientific Manuscript database

    Building comprehensive radiation hybrid maps for large sets of markers is a computationally expensive process, since the basic mapping problem is equivalent to the traveling salesman problem. The mapping problem is also susceptible to noise, and as a result, it is often beneficial to remove markers ...

  5. Performance of Quantum Annealers on Hard Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Pokharel, Bibek; Venturelli, Davide; Rieffel, Eleanor

    Quantum annealers have been employed to attack a variety of optimization problems. We compared the performance of the current D-Wave 2X quantum annealer to that of the previous generation D-Wave Two quantum annealer on scheduling-type planning problems. Further, we compared the effect of different anneal times, embeddings of the logical problem, and different settings of the ferromagnetic coupling JF across the logical vertex-model on the performance of the D-Wave 2X quantum annealer. Our results show that at the best settings, the scaling of expected anneal time to solution for the D-Wave 2X is better than that of the D-Wave Two, but still inferior to that of state-of-the-art classical solvers on these problems. We discuss the implications of our results for the design and programming of future quantum annealers. Supported by NASA Ames Research Center.

  6. Solution to the mean king's problem with mutually unbiased bases for arbitrary levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Gen; Tanaka, Hajime; Ozawa, Masanao

    2006-05-15

    The mean king's problem with mutually unbiased bases is reconsidered for arbitrary d-level systems. Hayashi et al. [Phys. Rev. A 71, 052331 (2005)] related the problem to the existence of a maximal set of d-1 mutually orthogonal Latin squares, in their restricted setting that allows only measurements of projection-valued measures. However, we then cannot find a solution to the problem when, e.g., d=6 or d=10. In contrast to their result, we show that the king's problem always has a solution for arbitrary levels if we also allow positive operator-valued measures. In constructing the solution, we use orthogonal arrays in combinatorial design theory.

  7. Changes in Adult Behavior to Decrease Disruption from Students in Nonclassroom Settings

    ERIC Educational Resources Information Center

    Bohanon, Hank

    2015-01-01

    Decreasing classroom disruptions that result from hallway-related behavior in high school settings can be very challenging for high school staff. This article presents a case example of preventing problem behavior related to hallway settings in a high school with over 1,200 students. The interventions are described, and the results of the plan are…

  8. Ethical problems in pediatrics: what does the setting of care and education show us?

    PubMed

    Guedert, Jucélia Maria; Grosseman, Suely

    2012-03-16

    Pediatric ethics education should enhance medical students' skills to deal with ethical problems that may arise in the different settings of care. This study aimed to analyze the ethical problems experienced by physicians who have medical education and pediatric care responsibilities, and whether those problems are associated with their workplace, medical specialty and area of clinical practice. A self-applied semi-structured questionnaire was answered by 88 physicians with teaching and pediatric care responsibilities. Content analysis was performed to analyze the qualitative data. Poisson regression was used to explore the association of the categories of ethical problems reported with workplace and professional specialty and activity. 210 ethical problems were reported, grouped into five areas: physician-patient relationship, end-of-life care, health professional conducts, socioeconomic issues and health policies, and pediatric teaching. Doctors who worked in hospitals as well as general and subspecialist pediatricians reported fewer ethical problems related to socioeconomic issues and health policies than those who worked in Basic Health Units and who were family doctors. Some ethical problems are specific to certain settings: those related to end-of-life care are more frequent in hospital settings and those associated with socioeconomic issues and public health policies are more frequent in Basic Health Units. Other problems are present in all settings of pediatric care and learning and include ethical problems related to the physician-patient relationship, health professional conducts and the pediatric education process. These findings should be taken into consideration when planning the teaching of ethics in pediatrics. This research article does not report the results of a controlled health care intervention. The study project was approved by the Institutional Ethical Review Committee (Report CEP-HIJG 032/2008).

  9. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; ...

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  10. Using Multiple Schedules During Functional Communication Training to Promote Rapid Transfer of Treatment Effects

    PubMed Central

    Fisher, Wayne W.; Greer, Brian D.; Fuhrman, Ashley M.; Querim, Angie C.

    2016-01-01

    Multiple schedules with signaled periods of reinforcement and extinction have been used to thin reinforcement schedules during functional communication training (FCT) to make the intervention more practical for parents and teachers. We evaluated whether these signals would also facilitate rapid transfer of treatment effects from one setting to the next and from one therapist to the next. With two children, we conducted FCT in the context of mixed (baseline) and multiple (treatment) schedules introduced across settings or therapists using a multiple baseline design. Results indicated that when the multiple schedules were introduced, the functional communication response came under rapid discriminative control, and problem behavior remained at near-zero rates. We extended these findings with another individual by using a more traditional baseline in which problem behavior produced reinforcement. Results replicated those of the previous participants and showed rapid reductions in problem behavior when multiple schedules were implemented across settings. PMID:26384141

  11. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  12. Using multiple schedules during functional communication training to promote rapid transfer of treatment effects.

    PubMed

    Fisher, Wayne W; Greer, Brian D; Fuhrman, Ashley M; Querim, Angie C

    2015-12-01

    Multiple schedules with signaled periods of reinforcement and extinction have been used to thin reinforcement schedules during functional communication training (FCT) to make the intervention more practical for parents and teachers. We evaluated whether these signals would also facilitate rapid transfer of treatment effects across settings and therapists. With 2 children, we conducted FCT in the context of mixed (baseline) and multiple (treatment) schedules introduced across settings or therapists using a multiple baseline design. Results indicated that when the multiple schedules were introduced, the functional communication response came under rapid discriminative control, and problem behavior remained at near-zero rates. We extended these findings with another individual by using a more traditional baseline in which problem behavior produced reinforcement. Results replicated those of the previous participants and showed rapid reductions in problem behavior when multiple schedules were implemented across settings. © Society for the Experimental Analysis of Behavior.

  13. More from the Water Jars: A Reanalysis of Problem-Solving Performance among Gifted and Nongifted Children.

    ERIC Educational Resources Information Center

    Shore, Bruce M.; And Others

    1994-01-01

    Reanalysis of the data from a 1984 study on making and breaking problem-solving mental sets with 50 children found that gifted subjects who failed to initially form the set made the most errors of any group and were least likely to recognize their own errors. Results suggest that motivational reasons may underlie this inferior performance by some…

  14. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record.

    PubMed

    Wright, Adam; Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. Our objective was to develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold-standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.
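
    To make the flavor of such inference rules concrete, here is a minimal sketch of one rule in code. The condition, codes, and thresholds are illustrative placeholders (loosely modeled on diabetes criteria), not the validated rules from this study.

```python
def infer_diabetes(meds, labs, billing_codes):
    """Illustrative rule: infer 'diabetes mellitus' when at least two
    independent structured signals agree."""
    on_diabetes_med = any(m in meds for m in ("insulin", "metformin"))
    high_hba1c = labs.get("hba1c", 0.0) >= 6.5            # percent; common cutoff
    coded = any(code.startswith("250") for code in billing_codes)  # ICD-9 250.x
    return sum([on_diabetes_med, high_hba1c, coded]) >= 2

record = {"meds": {"metformin"}, "labs": {"hba1c": 7.2}, "billing_codes": {"4019"}}
print(infer_diabetes(record["meds"], record["labs"], record["billing_codes"]))  # True
```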

  15. Complexity & Approximability of Quantified & Stochastic Constraint Satisfaction Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  16. Solving and Learning Soft Temporal Constraints: Experimental Setting and Results

    NASA Technical Reports Server (NTRS)

    Rossi, F.; Sperduti, A.; Venable, K. B.; Khatib, L.; Morris, P.; Morris, R.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Soft temporal constraint problems allow one to describe in a natural way scenarios where events happen over time and preferences are associated with event distances and durations. However, sometimes such local preferences are difficult to set, and it may be easier instead to associate preferences with some complete solutions of the problem. Machine learning techniques can be useful in this respect. In this paper we describe two solvers (one more general and the other more efficient) for tractable subclasses of soft temporal problems, and we show some experimental results. The random generator used to build the problems on which tests are performed is also described. We also compare the two solvers, highlighting the tradeoff between performance and representational power. Finally, we present a learning module and show its behavior on randomly generated examples.

  17. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.
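
    The core of the approach is the interplay between a set-covering LP and heuristically priced columns. Below is a minimal sketch of the restricted-master step only, written against SciPy's linprog (HiGHS backend); the paper's master problem has additional constraints, and its pricing is done by iterated local search, neither of which is reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def solve_restricted_master(columns, costs, n_rows):
    """LP relaxation of set covering: min c.x  s.t. every row covered >= once,
    x >= 0. Returns the primal solution and the row duals used for pricing."""
    A = np.zeros((n_rows, len(columns)))
    for j, rows_covered in enumerate(columns):   # each column = set of covered rows
        for i in rows_covered:
            A[i, j] = 1.0
    # linprog expects A_ub @ x <= b_ub, so write "cover" as -A x <= -1
    res = linprog(c=costs, A_ub=-A, b_ub=-np.ones(n_rows),
                  bounds=[(0, None)] * len(columns), method="highs")
    return res.x, -res.ineqlin.marginals         # flip sign to get cover duals

x, duals = solve_restricted_master(
    columns=[{0, 1}, {1, 2}, {0, 2}], costs=[3.0, 2.0, 2.0], n_rows=3)
print(x.round(2), duals.round(2))
# Pricing (not shown) would look for a new route whose cost minus the sum of
# the duals of the rows it covers is negative, and add it as a new column.
```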

  18. A set-covering based heuristic algorithm for the periodic vehicle routing problem

    PubMed Central

    Cacchiani, V.; Hemmelmayr, V.C.; Tricoire, F.

    2014-01-01

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems. PMID:24748696

  19. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), and for many different D, C, and S. These problem variants include decision and optimization problems, for formulas, quantified formulas, and stochastically quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2. Let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤_logn^bw-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists an ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists an ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend the earlier results by (i) Papadimitriou [Pa85] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund and Shor [CF+93, CF+94], by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations; most previous results were obtained only in the context of Boolean domains. The results also constitute a significant step towards obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S): a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].

  20. A set-covering formulation for a drayage problem with single and double container loads

    NASA Astrophysics Data System (ADS)

    Ghezelsoflu, A.; Di Francesco, M.; Frangioni, A.; Zuddas, P.

    2018-01-01

    This paper addresses a drayage problem, which is motivated by the case study of a real carrier. Its trucks carry one or two containers from a port to importers and from exporters to the port. Since up to four customers can be served in each route, we propose a set-covering formulation for this problem where all possible routes are enumerated. This model can be efficiently solved to optimality by a commercial solver, significantly outperforming a previously proposed node-arc formulation. Moreover, the model can be effectively used to evaluate a new distribution policy, which results in an enlarged set of feasible routes and can increase savings w.r.t. the policy currently employed by the carrier.

  1. Teaching children with autism to explain how: A case for problem solving?

    PubMed

    Frampton, Sarah E; Alice Shillingsburg, M

    2018-04-01

    Few studies have applied Skinner's (1953) conceptualization of problem solving to teach socially significant behaviors to individuals with developmental disabilities. The current study used a multiple probe design across behaviors (sets) to evaluate the effects of problem-solving strategy training (PSST) on the target behavior of explaining how to complete familiar activities. During baseline, none of the three participants with autism spectrum disorder (ASD) could respond to the problems presented to them (i.e., explain how to do the activities). Tact training of the actions in each activity alone was ineffective; however, all participants demonstrated independent explaining-how following PSST. Further, following PSST with Set 1, tact training alone was sufficient for at least one scenario in Sets 2 and 3 for all three participants. Results have implications for generative responding in individuals with ASD and further the discussion regarding the role of problem solving in complex verbal behavior. © 2018 Society for the Experimental Analysis of Behavior.

  2. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    NASA Astrophysics Data System (ADS)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2018-03-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows an operation to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose a hybrid metaheuristics-based clustered holonic multiagent model for solving the FJSP. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search toward promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach comes from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.

  3. Kernel parameter variation-based selective ensemble support vector data description for oil spill detection on the ocean via hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Uslu, Faruk Sukru

    2017-07-01

    Oil spills on the ocean surface cause serious environmental, political, and economic problems. These catastrophic threats to marine ecosystems therefore require detection and monitoring. Hyperspectral sensors are powerful optical sensors used for oil spill detection with the help of detailed spectral information about materials. However, the huge amounts of data in hyperspectral imaging (HSI) require fast and accurate computation methods for detection problems. Support vector data description (SVDD) is one of the most suitable methods for detection, especially for large data sets. Nevertheless, the selection of kernel parameters is one of the main problems in SVDD. This paper presents a method, inspired by ensemble learning, for improving the performance of SVDD without tuning its kernel parameters. Additionally, a classifier selection technique is proposed to obtain further gains. The proposed approach also aims to solve the small-sample-size problem, which is very important when processing high-dimensional data in HSI. The algorithm is applied to two HSI data sets for detection problems. In the first HSI data set, various targets are detected; in the second, oil spill detection in situ is realized. The experimental results demonstrate the feasibility and performance improvement of the proposed algorithm for oil spill detection problems.
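
    A minimal sketch of the kernel-parameter-variation idea follows. SVDD with an RBF kernel is closely related to the one-class SVM, so scikit-learn's OneClassSVM is used here as a stand-in; training one detector per kernel width and voting replaces tuning a single gamma. The data, widths, and voting threshold are illustrative assumptions, and the paper's classifier-selection step and HSI pipeline are not reproduced.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_target = rng.normal(0.0, 1.0, size=(200, 10))            # spectra of the target class
X_test = np.vstack([rng.normal(0.0, 1.0, size=(20, 10)),   # more target-like pixels
                    rng.normal(4.0, 1.0, size=(20, 10))])  # background/outliers

# One detector per kernel width instead of tuning a single gamma
models = [OneClassSVM(kernel="rbf", gamma=g, nu=0.1).fit(X_target)
          for g in (0.01, 0.05, 0.1, 0.5)]
votes = np.mean([m.predict(X_test) == 1 for m in models], axis=0)
is_target = votes >= 0.5                                   # simple majority vote
print(is_target[:20].mean(), is_target[20:].mean())        # ~high vs. ~low acceptance
```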

  4. A roadmap for optimal control: the right way to commute.

    PubMed

    Ross, I Michael

    2005-12-01

    Optimal control theory is the foundation for many problems in astrodynamics. Typical examples are trajectory design and optimization, relative motion control of distributed space systems, and attitude steering. Many such problems in astrodynamics are solved by an alternative route of mathematical analysis and deep physical insight, in part because of the perception that an optimal control framework generates hard problems. Although this is indeed true of the Bellman and Pontryagin frameworks, the covector mapping principle provides a neoclassical approach that renders hard problems easy. That is, although the origins of this philosophy can be traced back to Bernoulli and Euler, it is essentially modern as a result of the strong linkage between approximation theory, set-valued analysis, and computing technology. Motivated by the broad success of this approach, mission planners are now conceiving and demanding higher performance from space systems. This has resulted in a new set of theoretical and computational problems. Recently, under the leadership of NASA-GRC, several workshops were held to address some of these problems. This paper outlines the theoretical issues stemming from practical problems in astrodynamics. Emphasis is placed on how they pertain to advanced mission design problems.

  5. Dynamic programming methods for concurrent design and dynamic allocation of vehicles embedded in a system-of-systems

    NASA Astrophysics Data System (ADS)

    Nusawardhana

    2007-12-01

    Recent developments indicate a changing perspective on how systems or vehicles should be designed. This transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, the way to provide a set of capabilities is through a collection of several individual, independent systems. This collection of individual independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to designing an SoS, and more specifically, approaches to designing a new system as a member of an SoS, will likely differ from traditional design approaches for complex, monolithic systems (meaning the constituent parts have no ability for independent operation). Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems which seek not only optimizing policies but also the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. Such a class of problems belongs to the problems of concurrent design and control of new systems, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problems. Many results exist for dynamic optimization problems of linear systems. In contrast, results on nonlinear optimal control problems are rare. The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems, with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of the dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decisions may constitute a solution. This thesis presents a method to adjust these parameters based on the principle of simultaneous perturbation stochastic approximation, using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively recognized as approximate dynamic programming solutions, for large-scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is to be addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.

  6. Determination system for solar cell layout in traffic light network using dominating set

    NASA Astrophysics Data System (ADS)

    Eka Yulia Retnani, Windi; Fambudi, Brelyanes Z.; Slamin

    2018-04-01

    Graph theory is one of the fields in mathematics that solves discrete problems. In daily life, applications of graph theory are used to solve various problems. One of the topics in graph theory that is used to solve such problems is the dominating set. The concept of a dominating set is used, for example, to locate a set of objects systematically. In this study, a dominating set is used to determine the dominating points for solar panels, where each vertex represents a traffic light point and each edge represents the connection between traffic light points. A greedy algorithm is used to search for the dominating points and thereby determine the locations of the solar panels. This research produced an application that can determine the locations of solar panels with optimal results, that is, a minimum set of dominating points.
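
    A minimal greedy sketch of the idea appears below, under the assumption (from the abstract) that vertices are traffic lights and chosen vertices receive solar panels; the repeated rule "pick the vertex that newly dominates the most vertices" is the standard greedy heuristic, not necessarily the paper's exact procedure.

```python
def greedy_dominating_set(adj):
    """adj: dict mapping each vertex to the set of its neighbors.
    Returns a (not necessarily minimum) dominating set."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        # pick the vertex whose closed neighborhood covers most undominated vertices
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        chosen.add(v)
        undominated -= {v} | adj[v]
    return chosen

# Toy traffic-light network: vertex = light, edge = connected lights
adj = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3, 5}, 5: {4}}
print(greedy_dominating_set(adj))   # e.g., {4, 1}
```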

  7. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, preliminary results showed that optimal solutions for multiple instances were found efficiently.
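
    As a small illustration of the 2-level full-factorial step, the sketch below codes three hypothetical GA parameters to ±1 levels, runs all 2^3 combinations, and fits a first-order regression model; run_ga is a stand-in objective, and the other designs compared in the paper (orthogonal array, central composite, D-optimal, S/N ratios) are not shown.

```python
import itertools
import numpy as np

def run_ga(pop_size, mutation_rate, crossover_rate):
    """Placeholder for one GA run; returns total weighted tardiness (lower = better)."""
    return 100 - 0.02 * pop_size + 300 * (mutation_rate - 0.05) ** 2 - 5 * crossover_rate

levels = {"pop_size": (50, 200), "mutation_rate": (0.01, 0.10), "crossover_rate": (0.6, 0.9)}
names = list(levels)

runs, y = [], []
for combo in itertools.product(*(levels[n] for n in names)):   # 2^3 = 8 runs
    runs.append(combo)
    y.append(run_ga(*combo))

# Code each factor to -1/+1 and fit y = b0 + b1*x1 + b2*x2 + b3*x3
X = np.array([[1.0] + [(v - np.mean(levels[n])) / (np.ptp(levels[n]) / 2)
                       for n, v in zip(names, combo)] for combo in runs])
coef, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
print(dict(zip(["b0"] + names, np.round(coef, 3))))
print("best of the 8 runs:", runs[int(np.argmin(y))])
```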

  8. Adaptive density trajectory cluster based on time and space distance

    NASA Astrophysics Data System (ADS)

    Liu, Fagui; Zhang, Zhijie

    2017-10-01

    Several open problems remain in trajectory clustering for discovering regularities in mobile behavior, such as computing the distance between sub-trajectories, setting the parameter values of the clustering algorithm, and handling the uncertainty/boundary problem of the data set. Accordingly, based on time and space, this paper defines a method for calculating the distance between sub-trajectories. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and improves the accuracy of the clustering algorithm. In addition, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution. Cluster centers and the number of clusters are selected automatically by a fixed strategy, and the uncertainty/boundary problem of the data set is handled by a specially designed weighted rough c-means procedure. Experimental results demonstrate that the proposed algorithm performs fuzzy trajectory clustering effectively on the basis of the time and space distance, and adaptively obtains optimal cluster centers and rich cluster information for mining the features of mobile behavior in mobile and social networks.
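
    A minimal sketch of one plausible combined time-and-space distance between aligned sub-trajectories appears below; the weights and the use of aligned samples are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def time_space_distance(traj_a, traj_b, w_space=0.7, w_time=0.3):
    """traj: array-like of rows (t, x, y); assumes equal length and aligned order."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    d_space = np.mean(np.hypot(a[:, 1] - b[:, 1], a[:, 2] - b[:, 2]))
    d_time = np.mean(np.abs(a[:, 0] - b[:, 0]))
    return w_space * d_space + w_time * d_time

a = [(0, 0.0, 0.0), (1, 1.0, 0.5), (2, 2.0, 1.0)]
b = [(0, 0.2, 0.1), (1, 1.1, 0.4), (2, 2.3, 1.2)]
print(round(time_space_distance(a, b), 3))
```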

  9. Stress-free end problem in layered materials

    NASA Technical Reports Server (NTRS)

    Erdogan, F.; Bakioglu, M.

    1977-01-01

    In this paper the plane elastostatic problem for a medium which consists of two sets of periodically arranged bonded dissimilar layers or strips is considered. First it is assumed that one set of strips contains a crack which crosses the bimaterial interfaces. Then, by letting the collinear cracks join, the stress-free end problem is formulated. The singular behavior of the solutions at the point of intersection of the stress-free boundary and the interfaces is examined and appropriate stress intensity factors are defined. The results of some numerical examples are then presented, which include the cases of both plane stress and plane strain.

  10. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  11. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by the researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) to the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
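
    The mapping step is simple enough to show concretely: a maximum independent set instance becomes a maximum clique instance on the complement graph. The sketch below uses a trivial greedy clique finder as a stand-in for the IEA-PTS.

```python
def complement(adj):
    """Complement graph of adj (dict: vertex -> set of neighbors)."""
    verts = set(adj)
    return {v: (verts - {v}) - adj[v] for v in verts}

def greedy_clique(adj):
    """Trivial stand-in clique finder: grow a clique by descending degree."""
    clique = set()
    for v in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        if all(v in adj[u] for u in clique):
            clique.add(v)
    return clique

# An independent set of G is exactly a clique of complement(G)
G = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}, 5: set()}
print(greedy_clique(complement(G)))   # e.g., {1, 4, 5}, independent in G
```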

  12. Individualized Math Problems in Whole Numbers. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this set require computations involving whole numbers.…

  13. Internet publicity of data problems in the bioscience literature correlates with enhanced corrective action

    PubMed Central

    2014-01-01

    Several online forums exist to facilitate open and/or anonymous discussion of the peer-reviewed scientific literature. Data integrity is a common discussion topic, and it is widely assumed that publicity surrounding such matters will accelerate correction of the scientific record. This study aimed to test this assumption by examining a collection of 497 papers for which data integrity had been questioned either in public or in private. As such, the papers were divided into two sub-sets: a public set of 274 papers discussed online, and the remainder a private set of 223 papers not publicized. The sources of alleged data problems, as well as criteria for defining problem data, and communication of problems to journals and appropriate institutions, were similar between the sets. The number of laboratory groups represented in each set was also similar (75 in public, 62 in private), as was the number of problem papers per laboratory group (3.65 in public, 3.54 in private). Over a study period of 18 months, public papers were retracted 6.5-fold more, and corrected 7.7-fold more, than those in the private set. Parsing the results by laboratory group, 28 laboratory groups in the public set had papers which received corrective action, versus 6 laboratory groups in the private set. For those laboratory groups in the public set with corrected/retracted papers, the fraction of their papers acted on was 62% of those initially flagged, whereas in the private set this fraction was 27%. Such clustering of actions suggests a pattern in which correction/retraction of one paper from a group correlates with more corrections/retractions from the same group, with this pattern being stronger in the public set. It is therefore concluded that online discussion enhances levels of corrective action in the scientific literature. Nevertheless, anecdotal discussion reveals substantial room for improvement in handling of such matters. PMID:24765564

  14. Values in Principals' Thinking when Solving Problems

    ERIC Educational Resources Information Center

    Lazaridou, Angeliki

    2007-01-01

    The values that school principals use when solving organisational problems were studied. Data were collected by a think aloud procedure, in which the participants verbalised their thoughts while working on a set of five administrative problems. The results show that the principals referred to seven values that had subtle but important sub-texts:…

  15. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that, in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
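
    For very small instances the hitting-set/diagnosis connection can be shown directly by brute force, as in the sketch below; the paper's point is precisely that realistic instances need the Boolean-satisfiability or integer-programming encodings instead of this kind of enumeration.

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """conflicts: list of sets. Returns all inclusion-minimal hitting sets,
    i.e., the minimal diagnoses in the model-based diagnosis reading."""
    universe = sorted(set().union(*conflicts))
    found = []
    for size in range(1, len(universe) + 1):
        for cand in combinations(universe, size):
            s = set(cand)
            hits_all = all(s & c for c in conflicts)
            if hits_all and not any(f <= s for f in found):
                found.append(s)
    return found

conflicts = [{"A", "B"}, {"B", "C"}, {"A", "C"}]   # conflict sets of components
print(minimal_hitting_sets(conflicts))             # three 2-element diagnoses
```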

  16. A PERFECT MATCH CONDITION FOR POINT-SET MATCHING PROBLEMS USING THE OPTIMAL MASS TRANSPORT APPROACH

    PubMed Central

    CHEN, PENGWEN; LIN, CHING-LONG; CHERN, I-LIANG

    2013-01-01

    We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
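
    For equal-size point sets with uniform weights, the discrete L2 optimal mass transport plan reduces to a linear assignment on squared distances, which the sketch below exploits; this is a simplification of the paper's setting (no deformation field is estimated) and the data are synthetic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 3))                     # e.g., branch points, scan 1
B = A + rng.normal(scale=0.05, size=A.shape)     # slightly displaced, scan 2
B = B[rng.permutation(len(B))]                   # correspondence is unknown

cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)   # squared L2 costs
row, col = linear_sum_assignment(cost)           # optimal transport = assignment
print("mean matched distance:", float(np.sqrt(cost[row, col]).mean()))
```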

  17. Document image improvement for OCR as a classification problem

    NASA Astrophysics Data System (ADS)

    Summers, Kristen M.

    2003-01-01

    In support of the goal of automatically selecting methods of enhancing an image to improve the accuracy of OCR on that image, we treat the decision of whether to apply each of a set of enhancement methods as a supervised classification problem for machine learning. We characterize each image according to a combination of two sets of measures: a set intended to reflect the degree of particular types of noise present in documents in a single font of Roman or similar script, and a more general set based on connected-component statistics. We consider several potential methods of image improvement, each of which constitutes its own two-class classification problem, according to whether transforming the image with that method improves the accuracy of OCR. In our experiments, the results varied across the image transformation methods, but the system made the correct choice in 77% of the cases in which the decision affected the OCR score (in the range [0,1]) by at least 0.01, and it made the correct choice 64% of the time overall.
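
    The "one transform = one binary classification problem" framing is easy to express in code. The sketch below trains one classifier for a single hypothetical transform on synthetic features; the feature names, data, and the use of logistic regression are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# per-image features, e.g. speckle estimate, stroke thickness, component count
X = rng.normal(size=(300, 3))
# label: 1 if applying the transform (say, despeckling) raised OCR accuracy
# by at least 0.01, else 0 -- synthesized here from the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300) > 0).astype(int)

clf = LogisticRegression().fit(X[:200], y[:200])   # one classifier per transform
print("held-out accuracy:", round(clf.score(X[200:], y[200:]), 2))
```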

  18. Solidification of a binary mixture

    NASA Technical Reports Server (NTRS)

    Antar, B. N.

    1982-01-01

    The time-dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time-dependent Stefan problem is solved numerically using an implicit finite-differencing algorithm with the method of lines. Specifically, the temporal operator is approximated via an implicit finite-difference operator, resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration at each time. Since the resulting set of differential equations forms a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.
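
    A minimal method-of-lines sketch for a single 1-D diffusion field is shown below: space is discretized, and the resulting ODE system is integrated with an implicit (BDF) scheme. The paper's actual problem couples temperature and concentration through a moving Stefan boundary, which this illustration omits.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, L, alpha = 50, 1.0, 1e-2          # grid points, layer thickness, diffusivity
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

def rhs(t, u):
    """Semi-discretized heat equation; ends held fixed (Dirichlet)."""
    dudt = np.zeros_like(u)
    dudt[1:-1] = alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return dudt

u0 = np.where(x < 0.5, 1.0, 0.0)     # initial temperature profile
sol = solve_ivp(rhs, (0.0, 5.0), u0, method="BDF")   # implicit time stepping
print(np.round(sol.y[:8, -1], 2))
```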

  19. Data Mining Technologies Inspired from Visual Principle

    NASA Astrophysics Data System (ADS)

    Xu, Zongben

    In this talk we review recent work done by our group on data mining (DM) technologies deduced from simulating the visual principle. By viewing a DM problem as a cognition problem and treating a data set as an image with a light point located at each datum position, we developed a series of highly efficient algorithms for clustering, classification, and regression via mimicking visual principles. In pattern recognition, human eyes seem to possess a singular aptitude for grouping objects and finding important structure in an efficient way. Thus, a DM algorithm simulating the visual system may solve some basic problems in DM research. From this point of view, we proposed a new approach to data clustering by modeling the blurring effect of lateral retinal interconnections based on scale-space theory. In this approach, as the data image blurs, smaller light blobs merge into larger ones until the whole image becomes one light blob at a sufficiently low level of resolution. By identifying each blob with a cluster, the blurring process then generates a family of clusterings along the hierarchy. The proposed approach provides unique solutions to many long-standing problems in clustering, such as cluster validity and sensitivity to initialization. We extended this approach to classification and regression problems by jointly employing Weber's law from physiology and facts about cell-response classification. The resulting classification and regression algorithms are proven to be very efficient, and they address the problems of model selection and applicability to huge data sets in DM technologies. We finally applied a similar idea to the difficult parameter-setting problem in support vector machines (SVMs). Viewing the parameter-setting problem as a recognition problem of choosing a visual scale at which the global and local structures of a data set can be preserved, with the difference between the two structures maximized in the feature space, we derived a direct parameter-setting formula for the Gaussian SVM. Simulations and applications show that the suggested formula significantly outperforms known model selection methods in terms of efficiency and precision.
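
    The blurring idea in the clustering part can be illustrated in a few lines: place the data in a histogram "image", blur it at increasing Gaussian scales, and count the surviving light blobs. The 1-D toy below illustrates the principle only, not the talk's actual algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(3.0, 0.3, 100)])
hist, _ = np.histogram(data, bins=200, range=(-2.0, 5.0))   # data set as an "image"

for sigma in (1, 5, 20):                          # increasing blur = coarser scale
    blurred = gaussian_filter1d(hist.astype(float), sigma)
    interior = blurred[1:-1]
    n_blobs = int(np.sum((interior > blurred[:-2]) & (interior > blurred[2:])))
    print(f"sigma={sigma:>2}: {n_blobs} blob(s)")  # blobs merge as sigma grows
```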

  20. Correlation between the norm and the geometry of minimal networks

    NASA Astrophysics Data System (ADS)

    Laut, I. L.

    2017-05-01

    The paper is concerned with the inverse problem of the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with rotund differentiable norm the coordinates of interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.

  1. An Integrated approach to the Space Situational Awareness Problem

    DTIC Science & Technology

    2016-12-15

    data coming from the sensors. We developed particle-based Gaussian Mixture Filters that are immune to the "curse of dimensionality"/"particle depletion" problem inherent in particle filtering. This method maps the data assimilation/filtering problem into an unsupervised learning problem. Results…

    Subject terms: Gaussian Mixture Filters; particle depletion; Finite Set Statistics

  2. Results of the Software Process Improvement Efforts of the Early Adopters in NAVAIR 4.0

    DTIC Science & Technology

    2007-12-01

    and customer satisfaction. AIRSpeed utilizes a structured problem-solving methodology called DMAIC (Define, Measure, Analyze, Improve, Control), widely used in business. DMAIC leads project teams through the logical steps from problem definition to problem resolution. Each phase has a specific set…

  3. Building a Prevention Strategy: Getting Proactive-Getting Results. Preventing Discipline Problems, Unit 1. [Teaching Video, Practice Video, Facilitator's Guide, and Viewer's Guide].

    ERIC Educational Resources Information Center

    1999

    In early childhood settings, an emphasis on preventing discipline problems rather than only dealing with crises as they occur will result in a calmer classroom atmosphere and more teachable time. As Part 1 of a 3-part video series designed to help adults working with 3- to 8-year-olds use a proactive approach to prevent discipline problems, this…

  4. Hausdorff dimension of certain sets arising in Engel expansions

    NASA Astrophysics Data System (ADS)

    Fang, Lulu; Wu, Min

    2018-05-01

    The present paper is concerned with the Hausdorff dimension of certain sets arising in Engel expansions. In particular, the Hausdorff dimension of the set is completely determined, where A_n(x) can stand for the digit, the gap, or the ratio between two consecutive digits in the Engel expansion of x, and ϕ is a positive function defined on the natural numbers. These results significantly extend existing results on Galambos' open problems concerning the Hausdorff dimension of sets related to the growth rate of the digits.
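
    For readers unfamiliar with the notation, one common convention for the Engel expansion is sketched below; the exact form of the studied set involves a condition on the growth of A_n(x) relative to ϕ(n), which the abstract leaves implicit.

```latex
% Engel expansion of x in (0,1]: a non-decreasing integer digit
% sequence 2 <= q_1(x) <= q_2(x) <= ... with
\[
  x \;=\; \frac{1}{q_1} \;+\; \frac{1}{q_1 q_2} \;+\; \frac{1}{q_1 q_2 q_3} \;+\; \cdots
\]
% The sets studied concern how fast the digits q_n(x) (or their gaps and
% ratios between consecutive digits) grow compared with a given function phi(n).
```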

  5. Optimal Down Regulation of mRNA Translation

    NASA Astrophysics Data System (ADS)

    Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2017-01-01

    Down regulation of mRNA translation is an important problem in various biomedical domains, ranging from developing effective medicines for tumors and viral diseases to developing attenuated virus strains that can be used for vaccination. Here, we study the problem of down regulation of mRNA translation using a mathematical model called the ribosome flow model (RFM). In the RFM, the mRNA molecule is modeled as a chain of n sites. The flow of ribosomes between consecutive sites is regulated by n + 1 transition rates. Given a set of feasible transition rates that models the outcome of all possible mutations, we consider the problem of maximally down regulating protein production by altering the rates within this set of feasible rates. Under certain conditions on the feasible set, we show that an optimal solution can be determined efficiently. We also rigorously analyze two special cases of the down regulation optimization problem. Our results suggest that one must focus on the position along the mRNA molecule where the transition rate has the strongest effect on the protein production rate. However, this rate is not necessarily the slowest transition rate along the mRNA molecule. We discuss some of the biological implications of these results.
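
    A minimal simulation sketch of the RFM follows, using the standard formulation in which site occupancies x_i evolve under n + 1 rates and the steady-state production rate is the termination flow; the rates here are arbitrary illustrative values, and the optimization over a feasible rate set is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rfm_rhs(t, x, lam):
    """Ribosome flow model: lam has n+1 rates for n sites with occupancy x."""
    f = np.empty(len(x) + 1)
    f[0] = lam[0] * (1.0 - x[0])                   # initiation into site 1
    f[1:-1] = lam[1:-1] * x[:-1] * (1.0 - x[1:])   # elongation between sites
    f[-1] = lam[-1] * x[-1]                        # termination = protein production
    return f[:-1] - f[1:]                          # net flow into each site

lam = np.array([1.0, 0.8, 0.9, 0.7, 1.1])          # n = 4 sites, 5 rates (illustrative)
sol = solve_ivp(rfm_rhs, (0.0, 500.0), np.full(4, 0.5), args=(lam,), rtol=1e-9)
x_ss = sol.y[:, -1]                                # near steady state by t = 500
print("production rate:", round(float(lam[-1] * x_ss[-1]), 4))
# Down regulation would then ask: which feasible change to lam minimizes this rate?
```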

  6. Nonmetabolic Complications of Continuous Subcutaneous Insulin Infusion: A Patient Survey

    PubMed Central

    Yemane, Nardos; Brackenridge, Anna; Pender, Siobhan

    2014-01-01

    Abstract Background: Little is known about the frequencies and types of nonmetabolic complications occurring in type 1 diabetes patients being treated by modern insulin pump therapy (continuous subcutaneous insulin infusion [CSII]), when recorded by standardized questionnaire rather than clinical experience. Subjects and Methods: A self-report questionnaire was completed by successive subjects with type 1 diabetes attending an insulin pump clinic, and those with a duration of CSII of ≥6 months were selected for analysis (n=92). Questions included pump manufacturer, insulin, infusion set type and duration of use, frequency of infusion set and site problems, pump malfunctions, and patient-related problems such as weight change since starting CSII. Results: Median (range) duration of CSII was 3.3 (0.5–32.0) years, and mean±SD duration of infusion set use was 3.2±0.7 (range 2–6) days. The commonest infusion set problems were kinking (64.1% of subjects) and blockage (54.3%). Blockage was associated with >3 days of use of infusion sets plus lispro insulin in the pump (relative risk [95% confidence interval], 1.71 [1.03–2.85]; P=0.07). The commonest infusion site problem was lipohypertrophy (26.1%), which occurred more often in those with long duration of CSII (4.8 [2.38–9.45] vs. 3.0 [1.50–4.25] years; P=0.01). Pump malfunction had occurred in 48% of subjects (43% in the first year of CSII), with “no delivery,” keypad, and battery problems commonly occurring. Although some patients reported weight gain (34%) and some weight loss (15%) on CSII, most patients (51%) reported no change in weight. Conclusions: Pump, infusion set, and infusion site problems remain common with CSII, even with contemporary technology. PMID:24180294

  7. Crib Work--An Evaluation of a Problem-Based Learning Experiment: Preliminary Results

    ERIC Educational Resources Information Center

    Walsh, Vonda K.; Bush, H. Francis

    2013-01-01

    Problem-based learning has been proven to be successful in both medical colleges and physics classes, but not uniformly across all disciplines. A college course in probability and statistics was used as a setting to test the effectiveness of problem-based learning when applied to homework. This paper compares the performances of the students from…

  8. The Context Dependency of the Self-Report Version of the Strength and Difficulties Questionnaire (SDQ): A Cross-Sectional Study between Two Administration Settings

    PubMed Central

    Hoofs, H.; Jansen, N. W. H.; Mohren, D. C. L.; Jansen, M. W. J.; Kant, I. J.

    2015-01-01

    Background The Strength and Difficulties Questionnaire (SDQ) is a screening instrument for psychosocial problems in children and adolescents, which is applied in “individual” and “collective” settings. Assessment in the individual setting is confidential for clinical applications, such as preventive child healthcare, while assessment in the collective setting is anonymous and applied in (epidemiological) research. Due to administration differences between the settings it remains unclear whether results and conclusions actually can be used interchangeably. This study therefore aims to investigate whether the SDQ is invariant across settings. Methods Two independent samples were retrieved (mean age = 14.07 years), one from an individual setting (N = 6,594) and one from a collective setting (N = 4,613). The SDQ was administered in the second year of secondary school in both settings. Samples come from the same socio-geographic population in the Netherlands. Results Confirmatory factor analysis showed that the SDQ was measurement invariant/equivalent across settings and gender. On average, children in the individual setting scored lower on total difficulties (mean difference = 2.05) and the psychosocial problems subscales compared to those in the collective setting. This was also reflected in the cut-off points for caseness, defined by the 90th percentiles, which were lower in the individual setting. Using cut-off points from the collective in the individual setting therefore resulted in a small number of cases, 2 to 3%, while ∼10% is expected. Conclusion The SDQ has the same connotation across the individual and collective setting. The observed structural differences regarding the mean scores, however, undermine the validity of the cross-use of absolute SDQ-scores between these settings. Applying cut-off scores from the collective setting in the individual setting could, therefore, result in invalid conclusions and potential misuse of the instrument. To correctly apply cut-off scores these should be retrieved from the applied setting. PMID:25886464

  9. Associations of Bullying, Victimization, and Daytime Sleepiness With Academic Problems in Adolescents Attending an Alternative High School.

    PubMed

    Rubens, Sonia L; Miller, Molly A; Zeringue, Megan M; Laird, Robert D

    2018-01-22

    Adolescents attending alternative high schools often present with high rates of academic and behavior problems. They are also at increased risk of poor health behaviors and engaging in physical violence compared with students in traditional high school settings. To address the needs of students in these educational settings, examining factors that influence academic problems in this population is essential. Research has established that both bullying/victimization and sleep problems increase adolescents' risk for academic problems. Little is known about how these 2 factors together may exacerbate risk for academic problems among students attending an alternative high school. The current study investigated the interaction between teacher-reported bullying, victimization and daytime sleepiness on academic concerns (attention and learning problems) among a sample of 172 students (56% female; age M = 18.07 years, SD = 1.42) attending an alternative high school in a large, Southeastern U.S. city. Findings from path models indicated that daytime sleepiness, bullying, and victimization were uniquely associated with attention and learning problems. Further, significant interactions indicated that the association between victimization/bullying and attention/learning problems weakened as levels of daytime sleepiness increased. Results suggest the importance of assessing and addressing multiple contextual risk factors in adolescents attending alternative high schools to provide comprehensive intervention for students in these settings. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposing two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant-time) computation of superpositions (and sufficient statistics) of vector sets that are composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of methods that repeatedly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
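
    As an editorial aside, the additivity claimed above is easy to verify on synthetic data. The following minimal NumPy sketch (ours, not the authors' C++ library) keeps the count, the two coordinate sums, the two squared-norm sums and the 3x3 cross-correlation matrix as sufficient statistics, combines two fragments by plain addition, and recovers the optimal (Kabsch) superposition RMSD from the statistics alone:

```python
import numpy as np

def stats(X, Y):
    """Additive sufficient statistics of two corresponding vector sets
    (n x 3 arrays): everything the optimal superposition needs."""
    return {"n": len(X), "sx": X.sum(0), "sy": Y.sum(0),
            "xx": (X * X).sum(), "yy": (Y * Y).sum(), "C": X.T @ Y}

def combine(a, b):
    """Statistics of the concatenated sets: plain addition, O(1)."""
    return {k: a[k] + b[k] for k in a}

def rmsd(s):
    """Least-squares superposition RMSD from the statistics alone."""
    n, sx, sy = s["n"], s["sx"], s["sy"]
    C = s["C"] - np.outer(sx, sy) / n          # centered cross-correlation
    gx = s["xx"] - sx @ sx / n                 # centered squared norms
    gy = s["yy"] - sy @ sy / n
    U, sv, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # reflection guard
    e2 = (gx + gy - 2.0 * (sv[0] + sv[1] + d * sv[2])) / n
    return np.sqrt(max(e2, 0.0))

rng = np.random.default_rng(0)
A1, B1 = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
A2, B2 = rng.normal(size=(30, 3)), rng.normal(size=(30, 3))
s = combine(stats(A1, B1), stats(A2, B2))      # no re-scan of coordinates
full = rmsd(stats(np.vstack([A1, A2]), np.vstack([B1, B2])))
print(np.isclose(rmsd(s), full))               # True: the statistics suffice
```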

  11. The collision singularity in a perturbed n-body problem.

    NASA Technical Reports Server (NTRS)

    Sperling, H. J.

    1972-01-01

    Collision of all bodies in a perturbed n-body problem is analyzed by an extension of the author's results for a perturbed two-body problem (1969). A procedure is set forth to prove that the absolute value of energy in a perturbed n-body system remains bounded until the moment of collision. It is shown that the characteristics of motion in both perturbed problems are basically the same.

  12. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems.

    PubMed

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

    Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., the Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set of problems but did not solve these problems. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way.

  13. On the decoding process in ternary error-correcting output codes.

    PubMed

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
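
    To make the role of the zero symbol concrete, here is a small illustrative sketch; the coding matrix and the normalized Hamming-style measure are our own toy choices, not the decoding strategies proposed in the paper. Zero entries contribute no distance, and each row is normalized by its number of non-zero positions so that rows with many "do not care" entries are not systematically favored:

```python
import numpy as np

# Hypothetical ternary coding matrix for 3 classes and 4 binary problems;
# 0 is the "do not care" symbol: that classifier ignores that class.
M = np.array([[+1, +1,  0, -1],
              [-1,  0, +1, -1],
              [ 0, -1, -1, +1]])

def decode(preds, M):
    """Toy normalized Hamming-style decoding: zero entries contribute no
    distance, and each row is divided by its number of non-zero positions
    so rows with many zeros are not systematically favored."""
    preds = np.asarray(preds)              # binary outputs in {-1, +1}
    mask = M != 0
    dist = (mask * (1 - np.sign(M * preds)) / 2).sum(axis=1)
    return int(np.argmin(dist / mask.sum(axis=1)))

print(decode([+1, -1, +1, -1], M))         # index of the predicted class
```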

  14. A guide to multi-objective optimization for ecological problems with an application to cackling goose management

    USGS Publications Warehouse

    Williams, Perry J.; Kendall, William L.

    2017-01-01

    Choices in ecological research and management are the result of balancing multiple, often competing, objectives. Multi-objective optimization (MOO) is a formal decision-theoretic framework for solving multiple-objective problems. MOO is used extensively in other fields, including engineering, economics, and operations research. However, its application to solving ecological problems has been sparse, perhaps due to a lack of widespread understanding. Thus, our objective was to provide an accessible primer on MOO, including a review of methods common in other fields, a review of their application in ecology, and a demonstration on an applied resource management problem. A large class of methods for solving MOO problems can be separated into two strategies: modelling preferences pre-optimization (the a priori strategy), or modelling preferences post-optimization (the a posteriori strategy). The a priori strategy requires describing preferences among objectives without knowledge of how preferences affect the resulting decision. In the a posteriori strategy, the decision maker simultaneously considers a set of solutions (the Pareto optimal set) and makes a choice based on the trade-offs observed in the set. We describe several methods for modelling preferences pre-optimization, including the bounded objective function method, the lexicographic method, and the weighted-sum method. We discuss modelling preferences post-optimization through examination of the Pareto optimal set. We applied each MOO strategy to the natural resource management problem of selecting a population target for cackling goose (Branta hutchinsii minima) abundance. Cackling geese provide food security to Native Alaskan subsistence hunters in the goose's nesting area, but depredate crops on private agricultural fields in wintering areas. We developed objective functions to represent the competing objectives related to the cackling goose population target and identified an optimal solution first using the a priori strategy, and then by examining trade-offs in the Pareto set using the a posteriori strategy. We used four approaches for selecting a final solution within the a posteriori strategy: the most common optimal solution, the most robust optimal solution, and two solutions based on maximizing a restricted portion of the Pareto set. We discuss MOO with respect to natural resource management, but MOO is sufficiently general to cover any ecological problem that contains multiple competing objectives that can be quantified using objective functions.
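
    Both strategies can be demonstrated on a toy version of the target-setting problem. In the sketch below, the two quadratic objective functions and all numbers are hypothetical stand-ins for the paper's cackling goose objectives; it contrasts the a priori weighted-sum method with an a posteriori enumeration of the Pareto optimal set:

```python
import numpy as np

# Toy two-objective target-setting problem; both objectives and all numbers
# are hypothetical stand-ins for the competing cackling goose objectives.
f1 = lambda x: (x - 100.0) ** 2  # penalty for missing the harvest-oriented target
f2 = lambda x: (x - 40.0) ** 2   # penalty for missing the crop-damage target

xs = np.linspace(0.0, 150.0, 301)
F = np.column_stack([f1(xs), f2(xs)])

# A priori strategy (weighted-sum method): preferences are fixed first,
# then a single scalarized objective is optimized.
for w in (0.25, 0.5, 0.75):
    best = xs[np.argmin(w * F[:, 0] + (1 - w) * F[:, 1])]
    print(f"weight on objective 1 = {w:.2f} -> optimal target {best:.1f}")

# A posteriori strategy: enumerate the Pareto optimal set (no point in it
# can improve one objective without worsening the other) and let the
# decision maker inspect the trade-offs afterwards.
pareto = [i for i in range(len(xs))
          if not any((F[j] <= F[i]).all() and (F[j] < F[i]).any()
                     for j in range(len(xs)))]
print(f"Pareto optimal targets span {xs[pareto[0]]:.1f} .. {xs[pareto[-1]]:.1f}")
```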

  15. Computation of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Kozlov, Nicolay N.; Kozlova, Olga N.

    2018-03-01

    One of the problems in the development of a mathematical theory of the genetic code (a summary is presented in [1], a detailed treatment in [2]) is the problem of calculating the genetic code. Similar problems were previously unknown and could be posed only in the 21st century. This work is devoted to one approach to solving this problem. For the first time, a detailed description is provided of the method for calculating the genetic code, the idea of which was first published earlier [3]; the choice of one of the most important sets for the calculation was based on [4]. Such a set of amino acids corresponds to a complete set of representations of the collection of overlapping triplet genes belonging to the same DNA strand. A separate issue was the initial point that triggers the iterative search over all codes represented by the initial data. Mathematical analysis showed that the said set contains some ambiguities, which were found by means of our proposed compressed representation of the set. As a result, the developed method of calculation was limited to two main stages of research, where in the first stage only part of the search area was used in the calculations. The proposed approach significantly reduces the amount of computation at each step in this complex discrete structure.

  16. A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology

    NASA Astrophysics Data System (ADS)

    March, Marisa Cristina

    2018-01-01

    A problem commonly encountered in statistical analysis of data is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, where the survey is incomplete at fainter magnitudes; that is, certain faint objects are simply not observed. The effect of this 'missing data' is manifested as Malmquist bias and can result in biased parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first show the methodology for a simple Gaussian linear model and then go on to show the method for accounting for a truncated data set in the case of cosmological parameter inference with a magnitude-limited supernova Ia survey.
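
    The core of the correction is renormalizing each observed point's likelihood by the selection probability. A minimal sketch for the Gaussian case (our illustration; the survey limit, sample and grid search are synthetic):

```python
import numpy as np
from scipy import stats

def truncated_loglike(mu, data, m_lim, sigma=1.0):
    """Gaussian log-likelihood for a magnitude-limited sample: only objects
    with m < m_lim are observed, so each point's density is renormalized by
    the selection probability P(m < m_lim | mu, sigma). Omitting this
    denominator is exactly what produces the Malmquist-type bias."""
    log_sel = stats.norm.logcdf(m_lim, loc=mu, scale=sigma)
    return np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma) - log_sel)

rng = np.random.default_rng(1)
population = rng.normal(22.0, 1.0, 5000)   # true mean magnitude: 22.0
obs = population[population < 22.5]        # the survey misses faint objects

grid = np.linspace(21.0, 23.0, 401)
naive = grid[np.argmax([stats.norm.logpdf(obs, m, 1.0).sum() for m in grid])]
fixed = grid[np.argmax([truncated_loglike(m, obs, 22.5) for m in grid])]
print(f"naive estimate {naive:.2f}, truncation-aware estimate {fixed:.2f}")
```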

  17. Decisions about Confidentiality in Medical Student Mental Health Settings.

    ERIC Educational Resources Information Center

    Lindenthal, Jacob Jay; And Others

    1984-01-01

    Examined responses of psychologists and psychiatrists in medical schools (N=59) to vignettes representing student problems. Results suggested practitioners were generally unwilling to break confidentiality in response to problems involving suicidal tendencies, sexual coercion/seduction, social transgressions, or falsifying data. Only suggestions…

  18. A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.

    PubMed

    Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas

    2015-12-01

    Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available, and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side information or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end-to-end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data, where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
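
    For orientation, here is a deterministic toy sketch of the formulation (ours, not the paper's end-to-end stochastic scheme): the unregularized GEP is handed to a mature solver, while an l1-regularized variant is treated by subgradient steps followed by re-projection onto the generalized Stiefel manifold {V : V^T B V = I} through a polar decomposition in the B-metric. The matrices, the penalty weight gamma and the step size are arbitrary:

```python
import numpy as np
from scipy.linalg import eigh, sqrtm

rng = np.random.default_rng(0)
n, k, gamma, step = 20, 3, 0.1, 1e-3       # arbitrary sizes and weights
A = rng.normal(size=(n, n)); A = (A + A.T) / 2            # symmetric
B = rng.normal(size=(n, n)); B = B @ B.T + n * np.eye(n)  # positive definite

# The mature "black box": smallest k generalized eigenpairs of A v = lam B v.
w, V = eigh(A, B, subset_by_index=[0, k - 1])

# Regularized variant: minimize tr(V^T A V) + gamma * ||V||_1 subject to
# V^T B V = I, via subgradient steps plus re-projection through a polar
# decomposition in the B-metric.
Bh = np.real(sqrtm(B)); Bih = np.linalg.inv(Bh)
for _ in range(1000):
    G = 2 * A @ V + gamma * np.sign(V)     # (sub)gradient of the objective
    V = V - step * G
    U, _, Wt = np.linalg.svd(Bh @ V, full_matrices=False)
    V = Bih @ (U @ Wt)                     # restores V^T B V = I exactly
print("constraint violation:", np.linalg.norm(V.T @ B @ V - np.eye(k)))
```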

  19. Quantum Adiabatic Optimization and Combinatorial Landscapes

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Knysh, S.; Morris, R. D.

    2003-01-01

    In this paper we analyze the performance of the Quantum Adiabatic Evolution (QAE) algorithm on a variant of the Satisfiability problem for an ensemble of random graphs parametrized by the ratio of clauses to variables, gamma = M / N. We introduce a set of macroscopic parameters (landscapes) and put forward an ansatz of universality for random bit flips. We then formulate the problem of finding the smallest eigenvalue and the excitation gap as a statistical mechanics problem. We use the so-called annealing approximation with the refinement that a finite set of macroscopic variables (versus only energy) is used, and are able to show the existence of a dynamic threshold gamma = gamma_d, beyond which QAE should take an exponentially long time to find a solution. We compare the results for extended and simplified sets of landscapes and provide numerical evidence in support of our universality ansatz.

  20. The Effect of Distributed Practice in Undergraduate Statistics Homework Sets: A Randomized Trial

    ERIC Educational Resources Information Center

    Crissinger, Bryan R.

    2015-01-01

    Most homework sets in statistics courses are constructed so that students concentrate or "mass" their practice on a certain topic in one problem set. Distributed practice homework sets include review problems in each set so that practice on a topic is distributed across problem sets. There is a body of research that points to the…

  1. DOMAIN DECOMPOSITION METHOD APPLIED TO A FLOW PROBLEM Norberto C. Vera Guzmán Institute of Geophysics, UNAM

    NASA Astrophysics Data System (ADS)

    Vera, N. C.; GMMC

    2013-05-01

    In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed via a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. Parallelizing a problem using this methodology can, in principle, fully exploit the computing equipment and also provide results in less time, two very important elements in modeling. References: G. Alduncin and N. Vera-Guzmán, Parallel proximal-point algorithms for mixed finite element models of flow in the subsurface, Commun. Numer. Meth. Engng 2004; 20:83-104 (DOI: 10.1002/cnm.647). Z. Chen, G. Huan and Y. Ma, Computational Methods for Multiphase Flows in Porous Media, SIAM, Philadelphia, 2006. A. Quarteroni and A. Valli, Numerical Approximation of Partial Differential Equations, Springer-Verlag, Berlin, 1994. F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
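
    The decomposition idea can be illustrated on a much smaller problem. The sketch below (ours; a 1D Poisson equation with two overlapping subdomains rather than the paper's mixed 3D formulation) shows local solves communicating through interface values until the global solution is reached:

```python
import numpy as np

# Alternating Schwarz iteration for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# split into two overlapping subdomains; a toy stand-in for the paper's
# interface-coupled subproblems.
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u = np.zeros(N)

def solve_local(lo, hi, left, right):
    """Direct solve of -u'' = 1 on nodes lo..hi with Dirichlet data."""
    m = hi - lo + 1
    Aloc = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
            - np.diag(np.ones(m - 1), -1)) / h**2
    b = np.ones(m)
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(Aloc, b)

for it in range(50):
    u[1:61] = solve_local(1, 60, u[0], u[61])          # subdomain 1
    u[40:N - 1] = solve_local(40, N - 2, u[39], u[N - 1])  # subdomain 2 (overlap)

exact = 0.5 * x * (1 - x)
print("max error vs exact solution:", np.abs(u - exact).max())
```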

  2. Density of convex intersections and applications

    PubMed Central

    Rautenberg, C. N.; Rösel, S.

    2017-01-01

    In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented, and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301

  3. Mathematics and evolutionary biology make bioinformatics education comprehensible.

    PubMed

    Jungck, John R; Weisstein, Anton E

    2013-09-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, with curricula too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
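
    As a flavor of the tree-enumeration mathematics mentioned above, the number of distinct unrooted binary trees on n labeled taxa is (2n - 5)!!, which a few lines of code make vivid:

```python
def unrooted_binary_trees(n):
    """Number of distinct unrooted binary (fully resolved) trees on n
    labeled taxa: (2n - 5)!! = 3 * 5 * ... * (2n - 5), for n >= 3."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in (4, 10, 20):
    print(n, unrooted_binary_trees(n))
# Twenty taxa already allow roughly 2.2e20 trees, which is why heuristic
# search rather than exhaustive enumeration dominates phylogenetics.
```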

  4. Mathematics and evolutionary biology make bioinformatics education comprehensible

    PubMed Central

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, with curricula too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621

  5. User's manual for two dimensional FDTD version TEA and TMA codes for scattering from frequency-independent dielectric materials

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Luebbers, Raymond J.; Kunz, Karl S.

    1991-01-01

    The Penn State Finite Difference Time Domain Electromagnetic Scattering Code Versions TEA and TMA are two-dimensional numerical electromagnetic scattering codes based upon the Finite Difference Time Domain (FDTD) technique first proposed by Yee in 1966. The supplied codes are two versions of our current two-dimensional FDTD code set. This manual provides a description of the codes and corresponding results for the default scattering problem. The manual is organized into eleven sections: introduction, Version TEA and TMA code capabilities, a brief description of the default scattering geometry, a brief description of each subroutine, a description of the include files (TEACOM.FOR, TMACOM.FOR), a section briefly discussing scattering width computations, a section discussing the scattering results, a sample problem set section, a new problem checklist, references, and figure titles.
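
    For readers unfamiliar with the technique, the heart of any FDTD code is a leapfrog update of interleaved electric and magnetic fields on a Yee grid. A minimal 1D sketch in normalized units (ours, for illustration only; the TEA/TMA codes themselves are 2D scattering solvers):

```python
import numpy as np

# 1D FDTD (Yee scheme) in normalized units: E and H live on staggered
# grids and are updated in a leapfrog fashion from each other's curl.
nx, nt = 400, 600
ez = np.zeros(nx)          # electric field at integer grid points
hy = np.zeros(nx - 1)      # magnetic field at half grid points
S = 0.5                    # Courant number (stability requires S <= 1)

for n in range(nt):
    hy += S * np.diff(ez)                       # update H from curl of E
    ez[1:-1] += S * np.diff(hy)                 # update E from curl of H
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```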

  6. Challenging Aerospace Problems for Intelligent Systems

    NASA Technical Reports Server (NTRS)

    Krishnakumar, Kalmanje; Kanashige, John; Satyadas, A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    In this paper we highlight four problem domains that are well suited and challenging for intelligent system technologies. The problems are defined and an outline of a probable approach is presented. No attempt is made to define the problems as test cases. In other words, no data or set of equations that a user can code and get results are provided. The main idea behind this paper is to motivate intelligent system researchers to examine problems that will elevate intelligent system technologies and applications to a higher level.

  7. Challenging Aerospace Problems for Intelligent Systems

    NASA Technical Reports Server (NTRS)

    KrishnaKumar, K.; Kanashige, J.; Satyadas, A.

    2003-01-01

    In this paper we highlight four problem domains that are well suited and challenging for intelligent system technologies. The problems are defined and an outline of a probable approach is presented. No attempt is made to define the problems as test cases. In other words, no data or set of equations that a user can code and get results are provided. The main idea behind this paper is to motivate intelligent system researchers to examine problems that will elevate intelligent system technologies and applications to a higher level.

  8. ATLAS, an integrated structural analysis and design system. Volume 5: System demonstration problems

    NASA Technical Reports Server (NTRS)

    Samuel, R. A. (Editor)

    1979-01-01

    One of a series of documents describing the ATLAS System for structural analysis and design is presented. A set of problems is described that demonstrate the various analysis and design capabilities of the ATLAS System proper as well as capabilities available by means of interfaces with other computer programs. Input data and results for each demonstration problem are discussed. Results are compared to theoretical solutions or experimental data where possible. Listings of all input data are included.

  9. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  10. Combination of graph heuristics in producing initial solution of curriculum based course timetabling problem

    NASA Astrophysics Data System (ADS)

    Wahid, Juliana; Hussin, Naimah Mohd

    2016-08-01

    The construction of the population of initial solutions is a crucial task in population-based metaheuristic approaches for solving the curriculum-based university course timetabling problem, because it can affect the convergence speed and also the quality of the final solution. This paper presents an exploration of combinations of graph heuristics in the construction approach for the curriculum-based course timetabling problem to produce a population of initial solutions. The graph heuristics were applied both singly and in combinations of two. In addition, several ways of assigning courses to rooms and timeslots are implemented. All heuristic settings are then tested on the same curriculum-based course timetabling problem instances and compared with each other in terms of the number of initial solutions produced. The results show that the combination of the saturation degree heuristic followed by the largest degree heuristic produces the largest population of initial solutions. The results from this study can be used in the improvement phase of algorithms that use a population of initial solutions.
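
    The two heuristics named above are easy to state in code. The sketch below (our toy conflict graph, not one of the benchmark instances) orders courses by saturation degree with largest degree as tie-break and greedily assigns feasible timeslots:

```python
# Courses are vertices of a conflict graph; edges join courses that share
# students and therefore cannot share a timeslot. Instance data are made up.
conflicts = {
    "c1": {"c2", "c3"}, "c2": {"c1", "c3", "c4"},
    "c3": {"c1", "c2"}, "c4": {"c2"},
}
timeslots = [0, 1, 2]
assignment = {}

def saturation(c):
    """Saturation degree: distinct timeslots already used by neighbours."""
    return len({assignment[n] for n in conflicts[c] if n in assignment})

while len(assignment) < len(conflicts):
    unassigned = [c for c in conflicts if c not in assignment]
    # Combined heuristic: saturation degree first, largest degree as tie-break.
    c = max(unassigned, key=lambda c: (saturation(c), len(conflicts[c])))
    free = [t for t in timeslots
            if all(assignment.get(n) != t for n in conflicts[c])]
    assignment[c] = free[0]        # assumes feasibility on this toy instance
print(assignment)
```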

  11. A novel hybrid genetic algorithm to solve the make-to-order sequence-dependent flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.

    2014-04-01

    The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, no efficient algorithm is known for finding the optimal solution of the problem. To minimize the holding, delay and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions and uses an improved heuristic, called the iterated swap procedure, to improve the initial solutions. We consider the make-to-order production approach, in which some sequences between jobs are treated as tabu based on a maximum allowable setup cost. The results are compared with some recently developed heuristics, and the computational experiments show that the proposed HGA performs very competitively with respect to the accuracy and efficiency of solutions.
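
    For readers unfamiliar with the FSP, the completion-time recursion that defines the objective is compact enough to show directly. The sketch below (hypothetical processing times; the sequence-dependent setup times the paper handles are omitted for brevity) computes the makespan of a permutation and brute-forces a tiny instance:

```python
import itertools

# Permutation flow shop: job j on machine k starts when both its
# predecessor on machine k and the same job on machine k-1 have finished.
p = {  # p[job] = processing times on machines 1..m (made-up data)
    "J1": [3, 2, 4], "J2": [2, 4, 1], "J3": [4, 1, 3],
}

def makespan(seq):
    m = len(next(iter(p.values())))
    done = [0] * m                   # completion time of last op per machine
    for job in seq:
        for k in range(m):
            start = max(done[k], done[k - 1] if k else 0)
            done[k] = start + p[job][k]
    return done[-1]

best = min(itertools.permutations(p), key=makespan)
print(best, makespan(best))          # optimal sequence of this tiny instance
```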

  12. A contemporary approach to the problem of determining physical parameters according to the results of measurements

    NASA Technical Reports Server (NTRS)

    Elyasberg, P. Y.

    1979-01-01

    The shortcomings of the classical approach are set forth, and the newer methods resulting from these shortcomings are explained. The problem was approached with the assumption that the probabilities of error were known, as well as without knowledge of the distribution of the probabilities of error. The advantages of the newer approach are discussed.

  13. Robust set-point regulation for ecological models with multiple management goals.

    PubMed

    Guiver, Chris; Mueller, Markus; Hodgson, Dave; Townley, Stuart

    2016-05-01

    Population managers will often have to deal with problems of meeting multiple goals, for example, keeping at specific levels both the total population and population abundances in given stage-classes of a stratified population. In control engineering, such set-point regulation problems are commonly tackled using multi-input, multi-output proportional and integral (PI) feedback controllers. Building on our recent results for population management with single goals, we develop a PI control approach in a context of multi-objective population management. We show that robust set-point regulation is achieved by using a modified PI controller with saturation and anti-windup elements, both described in the paper, and illustrate the theory with examples. Our results apply more generally to linear control systems with positive state variables, including a class of infinite-dimensional systems, and thus have broader appeal.
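
    The saturation and anti-windup elements mentioned above can be sketched in a few lines. The following toy scalar example (our illustration; the gains, bounds and linear population model are made up, and the paper's design is multi-input multi-output) shows a discrete PI loop whose integrator is bled off whenever the control saturates:

```python
# Toy discrete-time PI set-point regulator with saturation and a simple
# back-calculation anti-windup term; the scalar population model
# x[t+1] = a*x[t] + u[t] and all gains/bounds are illustrative.
a, x, target = 0.8, 10.0, 50.0
kp, ki, kaw = 0.5, 0.2, 0.3
u_min, u_max = 0.0, 12.0           # management effort is bounded
integral = 0.0

for t in range(40):
    e = target - x
    u_raw = kp * e + ki * integral
    u = min(max(u_raw, u_min), u_max)     # saturate the control action
    integral += e + kaw * (u - u_raw)     # anti-windup: bleed off the excess
    x = a * x + u

print(f"population after 40 steps: {x:.2f} (target {target:.0f})")
```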

  14. IMSF: Infinite Methodology Set Framework

    NASA Astrophysics Data System (ADS)

    Ota, Martin; Jelínek, Ivan

    Software development is usually an integration task in an enterprise environment; few software applications work autonomously now. It is usually a collaboration of heterogeneous and unstable teams. One serious problem is a lack of resources, which commonly results in outsourcing, 'body shopping' and, indirectly, team and team-member fluctuation. Outsourced sub-deliveries easily become black boxes with no clear development method used, which has a negative impact on supportability. Such environments then often face problems of quality assurance and enterprise know-how management. The methodology used is one of the key factors. Each methodology was created as a generalization of a number of solved projects, and each methodology is thus more or less connected with a set of task types. When the task type is not suitable, problems arise that usually result in an undocumented ad-hoc solution. This was the motivation behind formalizing a simple process for collaborative software engineering. The Infinite Methodology Set Framework (IMSF) defines the ICT business process of adaptive use of methods for classified types of tasks. The article introduces IMSF and briefly comments on its meta-model.

  15. Testing foreign language impact on engineering students' scientific problem-solving performance

    NASA Astrophysics Data System (ADS)

    Tatzl, Dietmar; Messnarz, Bernd

    2013-12-01

    This article investigates the influence of English as the examination language on the solution of physics and science problems by non-native speakers in tertiary engineering education. For that purpose, a total of 96 students in four year groups from freshman to senior level participated in a testing experiment in the Degree Programme of Aviation at the FH JOANNEUM University of Applied Sciences, Graz, Austria. Half of each test group were given a set of 12 physics problems described in German; the other half received the same set of problems described in English. The goal was to test the linguistic reading comprehension necessary for scientific problem solving rather than physics knowledge as such. The results imply that written undergraduate English-medium engineering tests and examinations may not require additional examination time or language-specific aids for students who have reached university-entrance proficiency in English as a foreign language.

  16. Expert and novice categorization of introductory physics problems

    NASA Astrophysics Data System (ADS)

    Wolf, Steven Frederick

    Since it was first published 30 years ago, Chi et al.'s seminal paper on expert and novice categorization of introductory problems led to a plethora of follow-up studies within and outside of the area of physics [Chi et al. Cognitive Science 5, 121-152 (1981)]. These studies frequently encompass "card-sorting" exercises whereby the participants group problems. The study firmly established the paradigm that novices categorize physics problems by "surface features" (e.g. "incline," "pendulum," "projectile motion," ...), while experts use "deep structure" (e.g. "energy conservation," "Newton 2," ...). While this technique certainly allows insights into problem solving approaches, simple descriptive statistics more often than not fail to find significant differences between experts and novices. In most experiments, the clean-cut outcome of the original study cannot be reproduced. Given the widespread implications of the original study, the frequent failure to reproduce its findings warrants a closer look. We developed a less subjective statistical analysis method for the card sorting outcome and studied how the "successful" outcome of the experiment depends on the choice of the original card set. Thus, in a first step, we move beyond descriptive statistics and develop a novel microscopic approach that takes into account the individual identity of the cards and uses graph theory and models to visualize, analyze, and interpret problem categorization experiments. These graphs are compared macroscopically, using standard graph-theoretic statistics, and microscopically, using a distance metric that we have developed. The macroscopic sorting behavior is described using our Cognitive Categorization Model. The microscopic comparison allows us to visualize our sorters using Principal Components Analysis and to compare the expert sorters to the novice sorters as a group. In the second step, we ask the question: which properties of problems are most important in problem sets that discriminate experts from novices in a measurable way? We describe a method to characterize problems along several dimensions and then study the effectiveness of differently composed problem sets in differentiating experts from novices, using our analysis method. Both components of our study are based on an extensive experiment using a large problem set, which known physics experts and novices categorized according to the original experimental protocol. Both the size of the card set and the size of the sorter pool were larger than in comparable experiments. Based on our analysis method, we find that most of the variation in sorting outcome is not due to the sorter being an expert versus a novice, but rather due to an independent characteristic that we named "stacker" versus "spreader." The fact that the expert-novice distinction only accounts for a smaller amount of the variation may partly explain the frequent null results when conducting these experiments. In order to study how the outcome depends on the original problem set, our problem set needed to be large so that we could determine how well experts and novices could be discriminated by considering both small subsets using a Monte Carlo approach and larger subsets using Simulated Annealing. This computationally intense study relied on our objective analysis method, as the large combinatorics did not allow for manual analysis of the outcomes from the subsets.
We found that the number of questions required to accurately classify experts and novices could be surprisingly small so long as the problem set was carefully crafted to be composed of problems with particular pedagogical and contextual features. In order to discriminate experts from novices in a categorization task, it is important that the problem sets carefully consider three problem properties: the chapters that problems are in (the problems need to be from a wide spectrum of chapters to allow for the original "deep structure" categorization), the processes required to solve the problems (the problems must require different solving strategies), and the difficulty of the problems (the problems must be "easy"). In other words, for the experiment to be "successful," the card set needs to be carefully "rigged" across three property dimensions.
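
    A simplified version of the "microscopic" comparison is easy to prototype. In the sketch below (hypothetical sortings; the thesis's actual graph models and distance metric are more elaborate), each sorting becomes a card-pair co-occurrence table, sorters are compared by pairwise disagreement counts, and the resulting distance matrix is embedded in 2D by classical multidimensional scaling, a close relative of the Principal Components Analysis mentioned above:

```python
import numpy as np
from itertools import combinations

# Each sorter assigns 6 cards to categories; the labels below are made up.
sortings = {
    "expert1": [0, 0, 1, 1, 2, 2], "expert2": [0, 0, 1, 1, 1, 2],
    "novice1": [0, 1, 0, 1, 0, 1], "novice2": [0, 1, 1, 0, 0, 1],
}
cards = range(6)

def cooccurrence(labels):
    """For every card pair, were the two cards placed in the same pile?"""
    return {(i, j): labels[i] == labels[j] for i, j in combinations(cards, 2)}

names = list(sortings)
co = {s: cooccurrence(sortings[s]) for s in names}
D = np.array([[sum(co[a][p] != co[b][p] for p in co[a]) for b in names]
              for a in names], dtype=float)
print(D)   # pairwise disagreement counts; experts should sit close together

# Classical multidimensional scaling of D for a 2D picture of the sorters.
J = np.eye(len(names)) - 1.0 / len(names)
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))
for name, (px, py) in zip(names, coords):
    print(f"{name}: ({px:+.2f}, {py:+.2f})")
```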

  17. Some controversial multiple testing problems in regulatory applications.

    PubMed

    Hung, H M James; Wang, Sue-Jane

    2009-01-01

    Multiple testing problems in regulatory applications are often more challenging than the problem of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary and secondary endpoints needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gatekeeping strategies often impose too many logical restrictions to make sense, particularly for the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinically relevant orderings or sets can be a viable alternative for resolving these logical problems, though it requires more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism resulting from this requirement is a group-sequential design strategy that starts with a conservative sample size plan and then utilizes an alpha spending function to possibly reach the conclusion early.
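
    As a concrete instance of the sequential gatekeeping logic discussed above, a fixed-sequence strategy tests hypotheses in a prespecified clinical order, each at the full alpha, and stops at the first non-rejection. The sketch below (made-up p-values) shows how a promising late hypothesis becomes untestable once the gate closes, which is exactly the kind of logical restriction the authors caution about:

```python
def fixed_sequence(pvalues, alpha=0.05):
    """Fixed-sequence (serial gatekeeping) testing: hypotheses are tested
    in a prespecified order, each at level alpha; the first non-rejection
    closes the gate for everything after it."""
    rejected = []
    for name, p in pvalues:
        if p <= alpha:
            rejected.append(name)
        else:
            break                    # gate closes: later tests are not run
    return rejected

trial = [("primary endpoint", 0.011),
         ("key secondary", 0.032),
         ("other secondary", 0.210),
         ("exploratory", 0.004)]     # small p-value, but behind a closed gate
print(fixed_sequence(trial))
```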

  18. Probing for quantum speedup in spin-glass problems with planted solutions

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Job, Joshua; Albash, Tameem; Rønnow, Troels F.; Troyer, Matthias; Lidar, Daniel A.

    2015-10-01

    The availability of quantum annealing devices with hundreds of qubits has made the experimental demonstration of a quantum speedup for optimization problems a coveted, albeit elusive goal. Going beyond earlier studies of random Ising problems, here we introduce a method to construct a set of frustrated Ising-model optimization problems with tunable hardness. We study the performance of a D-Wave Two device (DW2) with up to 503 qubits on these problems and compare it to a suite of classical algorithms, including a highly optimized algorithm designed to compete directly with the DW2. The problems are generated around predetermined ground-state configurations, called planted solutions, which makes them particularly suitable for benchmarking purposes. The problem set exhibits properties familiar from constraint satisfaction (SAT) problems, such as a peak in the typical hardness of the problems, determined by a tunable clause density parameter. We bound the hardness regime where the DW2 device either does not or might exhibit a quantum speedup for our problem set. While we do not find evidence for a speedup for the hardest and most frustrated problems in our problem set, we cannot rule out that a speedup might exist for some of the easier, less frustrated problems. Our empirical findings pertain to the specific D-Wave processor and problem set we studied and leave open the possibility that future processors might exhibit a quantum speedup on the same problem set.

  19. The First Step in Prison Training Program Evaluation: A Model for Pinpointing Critical Needs

    ERIC Educational Resources Information Center

    Vicino, Frank L.; And Others

    1977-01-01

    The model allows for the determination of the severity of training problems, thus leading naturally to problem prioritization and, in addition, a basis for evaluation design. Results from the use of the model in an operational setting are presented. (Author)

  20. Drawings as a Component of Triangulated Assessment

    ERIC Educational Resources Information Center

    Otto, Charlotte A.; Everett, Susan A.; Luera, Gail R.; Burke, Christopher F. J.

    2013-01-01

    Action research (AR) in an educational setting, as described by Tillotson (2000), is an approach to "classroom-based problems" or "specific school issues". This process involves identification of the issue or problem, development and implementation of an action plan, gathering and interpreting data, sharing the results within…

  1. A machine learning evaluation of an artificial immune system.

    PubMed

    Glickman, Matthew; Balthrop, Justin; Forrest, Stephanie

    2005-01-01

    ARTIS is an artificial immune system framework which contains several adaptive mechanisms. LISYS is a version of ARTIS specialized for the problem of network intrusion detection. The adaptive mechanisms of LISYS are characterized in terms of their machine-learning counterparts, and a series of experiments is described, each of which isolates a different mechanism of LISYS and studies its contribution to the system's overall performance. The experiments were conducted on a new data set, which is more recent and realistic than earlier data sets. The network intrusion detection problem is challenging because it requires one-class learning in an on-line setting with concept drift. The experiments confirm earlier experimental results with LISYS, and they study in detail how LISYS achieves success on the new data set.
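
    For context, the censoring-based mechanism underlying such systems can be sketched generically. This is a textbook negative-selection sketch with r-contiguous-bits matching, not the actual LISYS implementation, and all parameters are illustrative:

```python
import random

# Negative selection: random detectors that match any "self" sample during
# training are censored; the survivors flag matching traffic as anomalous.
random.seed(0)
L, R = 12, 7                          # string length, contiguous-match length

def bits():
    return "".join(random.choice("01") for _ in range(L))

def matches(a, b, r=R):
    """r-contiguous-bits rule: some window of r positions agrees exactly."""
    return any(a[i:i + r] == b[i:i + r] for i in range(L - r + 1))

self_set = {bits() for _ in range(30)}         # normal traffic signatures
detectors = []
while len(detectors) < 50:
    d = bits()
    if not any(matches(d, s) for s in self_set):   # censor self-matching ones
        detectors.append(d)

probe = bits()
anomalous = any(matches(d, probe) for d in detectors)
print(f"probe {probe}: {'anomalous' if anomalous else 'looks like self'}")
```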

  2. Tuning rules for robust FOPID controllers based on multi-objective optimization with FOPDT models.

    PubMed

    Sánchez, Helem Sabina; Padula, Fabrizio; Visioli, Antonio; Vilanova, Ramon

    2017-01-01

    In this paper a set of optimally balanced tuning rules for fractional-order proportional-integral-derivative controllers is proposed. The control problem of minimizing at once the integrated absolute error for both the set-point and the load disturbance responses is addressed. The control problem is stated as a multi-objective optimization problem where a first-order-plus-dead-time process model subject to a robustness, maximum sensitivity based, constraint has been considered. A set of Pareto optimal solutions is obtained for different normalized dead times and then the optimal balance between the competing objectives is obtained by choosing the Nash solution among the Pareto-optimal ones. A curve fitting procedure has then been applied in order to generate suitable tuning rules. Several simulation results show the effectiveness of the proposed approach. Copyright © 2016. Published by Elsevier Ltd.
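
    Selecting the Nash solution from a computed Pareto front reduces to maximizing the product of improvements over a disagreement point. A sketch on a synthetic front (our illustration; a real front would come from the constrained FOPID optimization described above):

```python
import numpy as np

# Synthetic Pareto front of two competing costs, e.g. set-point IAE vs.
# load-disturbance IAE; every point on f2 = 1/f1 is Pareto optimal.
f1 = np.linspace(0.2, 2.0, 50)
f2 = 1.0 / f1

# Disagreement point: the worst (largest) value of each cost on the front.
d1, d2 = f1.max(), f2.max()

# Nash solution: the front point maximizing the product of improvements
# over the disagreement point, i.e. the optimally balanced compromise.
gain = (d1 - f1) * (d2 - f2)
i = int(np.argmax(gain))
print(f"Nash-balanced point: f1 = {f1[i]:.3f}, f2 = {f2[i]:.3f}")
```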

  3. Mining Stable Roles in RBAC

    NASA Astrophysics Data System (ADS)

    Colantonio, Alessandro; di Pietro, Roberto; Ocello, Alberto; Verde, Nino Vincenzo

    In this paper we address the problem of generating a candidate role-set for an RBAC configuration that enjoys two key features: it minimizes the administration cost, and it is stable. To achieve these goals, we implement a three-step methodology: first, we associate a weight to roles; second, we identify and remove the user-permission assignments that cannot belong to any role with a weight exceeding a given threshold; third, we restrict the problem of finding a candidate role-set for the given system configuration to the user-permission assignments that were not removed in the second step, that is, those belonging to roles with a weight exceeding the given threshold. We formally show, with proofs rooted in graph theory, that this methodology achieves the intended goals. Finally, we discuss practical applications of our approach to the role mining problem.

  4. Transient responses' optimization by means of set-based multi-objective evolution

    NASA Astrophysics Data System (ADS)

    Avigad, Gideon; Eisenstadt, Erella; Goldvard, Alex; Salomon, Shaul

    2012-04-01

    In this article, a novel solution to multi-objective problems involving the optimization of transient responses is suggested. It is claimed that the common approach of treating such problems by introducing auxiliary objectives overlooks trade-offs that should be presented to the decision makers: if at some time during the transients one of the responses is optimal, it should not be overlooked. An evolutionary multi-objective algorithm is suggested in order to search for these optimal solutions. For this purpose, state-wise domination is utilized, and a new crowding measure for ordered sets is suggested. The approach is tested on artificial as well as real-life problems in order to explain the methodology and demonstrate its applicability and importance. The results indicate that, from an engineering point of view, the approach possesses several advantages over existing approaches. Moreover, the applications highlight the importance of set-based evolution.
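
    State-wise domination between sampled transient responses is straightforward to check. A minimal sketch (synthetic responses; "better" here means smaller deviation from the set point at every time sample):

```python
import numpy as np

# Two synthetic transient responses sampled at the same time points;
# the values are deviations from the set point, so smaller is better.
t = np.linspace(0.0, 5.0, 200)
A = np.exp(-t)              # candidate A: faster decay
B = np.exp(-0.8 * t)        # candidate B: slower decay

def statewise_dominates(a, b):
    """A dominates B state-wise if it is never worse at any time sample
    and strictly better at some sample."""
    return bool(np.all(a <= b) and np.any(a < b))

print(statewise_dominates(A, B))   # True: A never worse, sometimes better
```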

  5. Heuristics to Evaluate Interactive Systems for Children with Autism Spectrum Disorder (ASD).

    PubMed

    Khowaja, Kamran; Salim, Siti Salwah; Asemi, Adeleh

    2015-01-01

    In this paper, we adapted and expanded a set of guidelines, also known as heuristics, for evaluating the usability of software so that they are appropriate for software aimed at children with autism spectrum disorder (ASD). We started from the heuristics developed by Nielsen in 1990 and developed a modified set of 15 heuristics. The first 5 heuristics of this set are the same as those of the original Nielsen set, the next 5 heuristics are improved versions of Nielsen's, and the last 5 heuristics are new. We present two evaluation studies of our new heuristics. In the first, two groups compared Nielsen's set with the modified set of heuristics, with each group evaluating two interactive systems. Nielsen's heuristics were assigned to the control group while the experimental group was given the modified set of heuristics, and a statistical analysis was conducted to determine the effectiveness of the modified set, the contribution of the 5 new heuristics and the impact of the 5 improved heuristics. The results show that the modified set is significantly more effective than the original, and we found a significant difference between the five improved heuristics and their corresponding heuristics in the original set. The five new heuristics are effective in problem identification using the modified set. The second study was conducted using a system developed to ascertain whether the modified set was effective at identifying usability problems that could be fixed before the release of software. The post-study analysis revealed that the majority of the usability problems identified by the experts were fixed in the updated version of the system.

  6. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems

    PubMed Central

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

    Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., the Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set of problems but did not solve these problems. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way. PMID:28848467

  7. A set partitioning reformulation for the multiple-choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Voß, Stefan; Lalla-Ruiz, Eduardo

    2016-05-01

    The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
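
    To make the problem statement concrete, the toy enumeration below (hypothetical data; real instances require the ILP or the proposed set-partitioning model rather than enumeration) picks exactly one object per class, maximizes profit, and respects every resource dimension:

```python
from itertools import product

# Tiny MMKP instance: each object is (profit, resource use per dimension).
classes = [
    [(10, (3, 2)), (7, (1, 1))],
    [(4, (2, 2)), (9, (4, 1))],
    [(6, (1, 3)), (8, (3, 3))],
]
capacity = (7, 6)

best = None
for choice in product(*classes):     # exactly one object from each class
    use = tuple(sum(o[1][d] for o in choice) for d in range(len(capacity)))
    if all(u <= c for u, c in zip(use, capacity)):
        profit = sum(o[0] for o in choice)
        if best is None or profit > best[0]:
            best = (profit, choice, use)
print(best)                          # (max profit, chosen objects, resource use)
```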

  8. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  9. KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    NASA Technical Reports Server (NTRS)

    Han, Chia Yung; Wan, Liqun; Wee, William G.

    1990-01-01

    A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is a system built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided to the user at each node so that the user can interactively select the method and data sets to test and subsequently examine the results. Users are also allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution can be found.

  10. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
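
    The geometric machinery is compact enough to sketch. In the toy below (ours), the facies is the region where phi < 0 and the interface moves via the update phi <- phi - dt * V * |grad phi|; the indicator-based velocity V is a made-up stand-in for the shape-derivative-driven velocity that the paper computes from the reservoir model:

```python
import numpy as np

# Toy level-set evolution on a 2D grid; only the geometry handling is
# illustrated, not the reservoir-model-constrained optimization.
n = 64
h = 1.0 / (n - 1)
Y, X = np.mgrid[0:n, 0:n] * h
phi = np.hypot(X - 0.3, Y - 0.3) - 0.15      # initial facies guess (circle)
target = np.hypot(X - 0.5, Y - 0.5) - 0.25   # "true" facies to recover

dt = 0.5 * h
for _ in range(100):
    V = np.where(target < 0, 1.0, -1.0)      # grow inside target, shrink outside
    gy, gx = np.gradient(phi, h)
    phi -= dt * V * np.hypot(gx, gy)         # level-set update
    phi = np.clip(phi, -1.0, 1.0)            # crude stand-in for reinitialization

agreement = np.mean((phi < 0) == (target < 0))
print(f"cell agreement with the target facies: {agreement:.1%}")
```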

  11. Infertility in resource-constrained settings: moving towards amelioration.

    PubMed

    Hammarberg, Karin; Kirkman, Maggie

    2013-02-01

    It is often presumed that infertility is not a problem in densely populated, resource-poor areas where fertility rates are high. This presumption is challenged by consistent evidence that the consequences of childlessness are very severe in low-income countries, particularly for women. In these settings, childless women are frequently stigmatized, isolated, ostracized, disinherited and neglected by the family and local community. This may result in physical and psychological abuse, polygamy and even suicide. Because many families in low-income countries depend on children for economic survival, childlessness and having fewer children than the number identified as appropriate are social and public health matters, not only medical problems. Attitudes among people in high-income countries towards provision of infertility care in low-income countries have mostly been either dismissive or indifferent as it is argued that scarce healthcare resources and family planning activities should be directed towards reducing fertility and restricting population growth. However, recognition of the plight of infertile couples in low-income settings is growing. One of the United Nations' Millennium Development Goals was universal access to reproductive health care by 2015, and WHO has recommended that infertility be considered a global health problem and stated the need for adaptation of assisted reproduction technology in low-resource countries. In this paper, we challenge the construct that infertility is not a serious problem in resource-constrained settings and argue that there is a need for infertility care, including affordable assisted reproduction treatment, in these settings. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  12. Resolvent-Techniques for Multiple Exercise Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  13. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets and can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to obtain a high-quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.
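
    As a toy illustration of the chance constraint itself (not of the authors' cut-set separation or greedy algorithms), the snippet below sizes each arc independently to the α-quantile of its sampled scenario loads; the real problem couples arcs through network flow feasibility.

```python
import numpy as np

def chance_constrained_capacities(scenario_loads, alpha=0.9):
    """Toy illustration of a chance constraint: for each arc independently,
    pick the smallest capacity covering at least alpha*100% of the sampled
    scenario loads. scenario_loads has shape (n_scenarios, n_arcs).
    The paper's algorithm couples arcs through network flow feasibility;
    this per-arc quantile rule only conveys the flavour of the constraint."""
    return np.quantile(scenario_loads, alpha, axis=0)

rng = np.random.default_rng(0)
loads = rng.gamma(shape=2.0, scale=3.0, size=(1000, 5))  # synthetic demand scenarios
caps = chance_constrained_capacities(loads, alpha=0.95)
# fraction of scenarios fully covered; can fall below alpha because
# the arcs were sized independently rather than jointly
feasible = (loads <= caps).all(axis=1).mean()
```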

  14. Generating effective project scheduling heuristics by abstraction and reconstitution

    NASA Technical Reports Server (NTRS)

    Janakiraman, Bhaskar; Prieditis, Armand

    1992-01-01

    A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion relations, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration. This objective arises in nearly every large construction project--from software to hardware to buildings. Because such project scheduling problems are NP-hard, they are typically solved by branch-and-bound algorithms. In these algorithms, lower-bound duration estimates (admissible heuristics) are used to improve efficiency. One way to obtain an admissible heuristic is to remove (abstract) all resources and mutual exclusion constraints and then obtain the minimal project duration for the abstracted problem; this minimal duration is the admissible heuristic. Although such abstracted problems can be solved efficiently, they yield inaccurate admissible heuristics precisely because those constraints that are central to solving the original problem are abstracted. This paper describes a method to reconstitute the abstracted constraints back into the solution to the abstracted problem while maintaining efficiency, thereby generating better admissible heuristics. Our results suggest that reconstitution can make good admissible heuristics even better.
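
    The abstracted heuristic described above can be made concrete: once resources and mutual exclusions are removed, the minimal project duration is the longest path through the precedence DAG, i.e. the classic critical-path computation. A minimal sketch (job names and durations are invented):

```python
from collections import defaultdict, deque

def critical_path_length(durations, precedences):
    """Admissible heuristic from the abstracted problem: with resources and
    mutual exclusions removed, the minimal project duration is the longest
    path through the precedence DAG (critical-path computation).
    durations: {job: int}; precedences: iterable of (before, after) pairs."""
    succ, indeg = defaultdict(list), {j: 0 for j in durations}
    for a, b in precedences:
        succ[a].append(b)
        indeg[b] += 1
    finish = {j: durations[j] for j in durations}
    queue = deque(j for j in durations if indeg[j] == 0)
    while queue:                      # topological-order relaxation
        j = queue.popleft()
        for k in succ[j]:
            finish[k] = max(finish[k], finish[j] + durations[k])
            indeg[k] -= 1
            if indeg[k] == 0:
                queue.append(k)
    return max(finish.values())

# example: A precedes B and C, both precede D
h = critical_path_length({'A': 2, 'B': 4, 'C': 3, 'D': 1},
                         [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')])
# h == 7 (A -> B -> D), a lower bound on the resource-constrained duration
```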

  15. Video based object representation and classification using multiple covariance matrices.

    PubMed

    Zhang, Yurong; Liu, Quan

    2017-01-01

    Video based object recognition and classification has been widely studied in computer vision and image processing. One main issue of this task is to develop an effective representation for video. This problem can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. Firstly, we use the Nonnegative Matrix Factorization (NMF) method to do image clustering within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and the nearest neighborhood classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
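
    A rough sketch of the representation step, with k-means standing in for the NMF clustering used in the paper (and the KLDA classification stage omitted):

```python
import numpy as np
from sklearn.cluster import KMeans

def multi_covariance_descriptor(image_set, n_clusters=3):
    """Represent an image set by several covariance matrices, one per
    cluster of images (k-means stands in here for the NMF clustering
    used in the paper). image_set: array of shape (n_images, n_features)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(image_set)
    descriptors = []
    for c in range(n_clusters):
        members = image_set[labels == c]
        # small ridge keeps each covariance symmetric positive definite
        cov = np.cov(members, rowvar=False) + 1e-6 * np.eye(image_set.shape[1])
        descriptors.append(cov)
    return descriptors

rng = np.random.default_rng(1)
frames = rng.normal(size=(120, 16))          # 120 frames, 16-dim features each
covs = multi_covariance_descriptor(frames)   # three 16x16 covariance matrices
```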

  16. A note on convergence of solutions of total variation regularized linear inverse problems

    NASA Astrophysics Data System (ADS)

    Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar

    2018-05-01

    In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rates results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non-convex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.

  17. Genetic Algorithm and Tabu Search for Vehicle Routing Problems with Stochastic Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail, Zuhaimy, E-mail: zuhaimyi@yahoo.com, E-mail: irhamahn@yahoo.com; Irhamah, E-mail: zuhaimyi@yahoo.com, E-mail: irhamahn@yahoo.com

    2010-11-11

    This paper presents a problem of designing solid waste collection routes, involving the scheduling of vehicles where each vehicle begins at the depot, visits customers and ends at the depot. It is modeled as a Vehicle Routing Problem with Stochastic Demands (VRPSD). A data set from a real-world problem (a case) is used in this research; the problem data are inspired by a real case of VRPSD in waste collection. We developed Genetic Algorithm (GA) and Tabu Search (TS) procedures, and these produced the best possible results. Results from the experiment show the advantages of the proposed algorithm, namely its robustness and better solution quality.

  18. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    PubMed

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2018-07-01

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. In order to coordinate with the property of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure, and problem-related heuristic information is introduced in the constructive approach for efficiency. In order to address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. In addition, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  19. Reducing Teacher Stress by Implementing Collaborative Problem Solving in a School Setting

    ERIC Educational Resources Information Center

    Schaubman, Averi; Stetson, Erica; Plog, Amy

    2011-01-01

    Student behavior affects teacher stress levels and the student-teacher relationship. In this pilot study, teachers were trained in Collaborative Problem Solving (CPS), a cognitive-behavioral model that explains challenging behavior as the result of underlying deficits in the areas of flexibility/adaptability, frustration tolerance, and problem…

  20. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the stated optimal control problems are solved analytically. Some analogies are pointed out between the results obtained and those known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  1. Rank the voltage across light bulbs … then set up the live experiment

    NASA Astrophysics Data System (ADS)

    Jacobs, Greg C.

    2018-02-01

    The Tasks Inspired by Physics Education Research (TIPERS) workbooks pose questions in styles quite different from the end-of-chapter problems that those of us of a certain age were assigned back in the days before Netscape. My own spin on TIPERS is not just to do them on paper, but to have students set up the situations in the laboratory to verify—or contradict—their paper solutions. The circuits unit is particularly conducive to creating quick-and-dirty lab setups that demonstrate the result of conceptually framed problems.

  2. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  3. Bridging the Gap Between Planning and Scheduling

    NASA Technical Reports Server (NTRS)

    Smith, David E.; Frank, Jeremy; Jonsson, Ari K.; Norvig, Peter (Technical Monitor)

    2000-01-01

    Planning research in Artificial Intelligence (AI) has often focused on problems where there are cascading levels of action choice and complex interactions between actions. In contrast, scheduling research has focused on much larger problems where there is little action choice, but the resulting ordering problem is hard. In this paper, we give an overview of AI planning and scheduling techniques, focusing on their similarities, differences, and limitations. We also argue that many difficult practical problems lie somewhere between planning and scheduling, and that neither area has the right set of tools for solving these vexing problems.

  4. Individualized Math Problems in Percent. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume includes problems concerned with computing percents.…

  5. Individualized Math Problems in Algebra. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic, and contains problems related to diverse vocations. Solutions are provided for all problems. Problems presented in this package concern ratios used in food…

  6. Individualized Math Problems in Fractions. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This package contains problems involving computation with common…

  7. Individualized Math Problems in Geometry. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. The volume contains problems in applied geometry. Measurement of…

  8. Individualized Math Problems in Measurement and Conversion. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume includes problems involving measurement, computation of…

  9. Individualized Math Problems in Integers. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume presents problems involving operations with positive and…

  10. Robust optimization modelling with applications to industry and environmental problems

    NASA Astrophysics Data System (ADS)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

    Robust Optimization (RO) modeling is one of the existing methodologies for handling data uncertainty in optimization problems. The main challenge in the RO methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By definition, the robust counterpart highly depends on how we choose the uncertainty set; as a consequence, we can meet this challenge only if this set is chosen in a suitable way. Development on RO has grown fast: since 2004, a new approach to RO called Adjustable Robust Optimization (ARO) has been available to handle uncertain problems in which some decision variables must be decided as "wait and see" variables, in contrast to classic RO, which models all decision variables as "here and now". In ARO, the uncertain problem can be considered a multistage decision problem, and the decision variables involved become wait-and-see decision variables. In this paper we present applications of both RO and ARO. We briefly present all results to strengthen the importance of RO and ARO in many real-life problems.
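
    A standard worked example of the tractability point made above: under simple box uncertainty on the constraint coefficients of a linear program with nonnegative variables, the robust counterpart stays a linear program with worst-case coefficients. This textbook reformulation is sketched below; it is not specific to the paper.

```python
from scipy.optimize import linprog

# Nominal problem: maximize x1 + x2 subject to a.x <= 10, x >= 0,
# with a = (2, 3), where each coefficient may deviate by up to +/- 0.5.
# For box uncertainty and x >= 0, the robust counterpart replaces each
# coefficient with its worst (largest) value -- a standard reformulation.
a_nom, a_dev, budget = [2.0, 3.0], [0.5, 0.5], 10.0
a_worst = [n + d for n, d in zip(a_nom, a_dev)]

nominal = linprog(c=[-1, -1], A_ub=[a_nom], b_ub=[budget],
                  bounds=[(0, None)] * 2)
robust = linprog(c=[-1, -1], A_ub=[a_worst], b_ub=[budget],
                 bounds=[(0, None)] * 2)
# robust optimum is smaller (more conservative) but remains feasible
# for every coefficient realization inside the box
print(-nominal.fun, -robust.fun)   # 5.0 vs 4.0
```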

  11. Variable neighborhood search to solve the vehicle routing problem for hazardous materials transportation.

    PubMed

    Bula, Gustavo Alfredo; Prodhon, Caroline; Gonzalez, Fabio Augusto; Afsar, H Murat; Velasco, Nubia

    2017-02-15

    This work focuses on the Heterogeneous Fleet Vehicle Routing Problem (HFVRP) in the context of hazardous materials (HazMat) transportation. The objective is to determine a set of routes that minimizes the total expected routing risk. This risk is a nonlinear function of the vehicle load and the population exposed when an incident occurs, so a piecewise linear approximation is used to estimate it. For solving the problem, a variant of the Variable Neighborhood Search (VNS) algorithm is employed. To improve its performance, a post-optimization procedure is implemented via a Set Partitioning (SP) problem. The SP is solved on a pool of routes obtained from executions of the local search procedure embedded in the VNS. The algorithm is tested on two sets of HFVRP instances from the literature with up to 100 nodes; these instances are modified to include vehicle and arc risk parameters. The results are competitive in terms of computational efficiency and quality, attested by a comparison with the Mixed Integer Linear Programming (MILP) formulation previously proposed. Copyright © 2016 Elsevier B.V. All rights reserved.
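
    The piecewise linear approximation mentioned above is easy to illustrate; the breakpoints here are invented, whereas the paper derives them from load and population-exposure data:

```python
import numpy as np

def piecewise_linear(breakpoints_x, breakpoints_y):
    """Return a piecewise linear surrogate for a nonlinear risk function,
    of the kind used (in MILP form) to approximate load-dependent routing
    risk. Breakpoints are hypothetical."""
    return lambda load: np.interp(load, breakpoints_x, breakpoints_y)

# hypothetical risk curve: risk grows superlinearly with vehicle load
loads = np.array([0.0, 2.0, 5.0, 10.0])
risks = np.array([0.0, 1.0, 4.0, 12.0])
risk = piecewise_linear(loads, risks)
risk(7.0)   # interpolates between (5, 4) and (10, 12) -> 7.2
```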

  12. Involvement of family members in life with type 2 diabetes: Six interconnected problem domains of significance for family health identity and healthcare authenticity

    PubMed Central

    Grabowski, Dan; Andersen, Tue Helms; Varming, Annemarie; Ommundsen, Christine; Willaing, Ingrid

    2017-01-01

    Objectives: Family involvement plays a key role in diabetes management. Problems and challenges related to type 2 diabetes often affect the whole family, and relatives are at increased risk of developing diabetes themselves. We highlight these issues in our objectives: (1) to uncover specific family problems associated with mutual involvement in life with type 2 diabetes and (2) to analytically look at ways of approaching these problems in healthcare settings. Methods: Qualitative data were gathered in participatory problem assessment workshops. The data were analysed in three rounds using radical hermeneutics. Results: Problems were categorized in six domains: knowledge, communication, support, everyday life, roles and worries. The final cross-analysis, focusing on the link between family identity and healthcare authenticity, provided information on how the six domains can be approached in healthcare settings. Conclusion: The study generated important knowledge about problems associated with family involvement in life with type 2 diabetes and about how family involvement can be supported in healthcare practice. PMID:28839943

  13. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package of the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for the other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization, which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.
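
    For readers unfamiliar with the EA family discussed here, a bare-bones real-coded genetic algorithm looks as follows; this is a generic sketch, not the GATool or REPA implementation:

```python
import random

def minimal_ga(fitness, dim, pop_size=40, gens=100,
               mut_rate=0.1, lo=-5.0, hi=5.0):
    """Bare-bones real-coded genetic algorithm: truncation selection,
    uniform crossover, and clamped Gaussian mutation, minimizing fitness."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 5]  # keep best 20%
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(p1, p2)]  # uniform crossover
            child = [min(hi, max(lo, g + random.gauss(0, 0.3)))
                     if random.random() < mut_rate else g for g in child]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

best = minimal_ga(lambda x: sum(g * g for g in x), dim=5)  # sphere function
```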

  14. A Review of Functional Analysis Methods Conducted in Public School Classroom Settings

    ERIC Educational Resources Information Center

    Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.

    2016-01-01

    The use of functional behavior assessments (FBAs) to address problem behavior in classroom settings has increased as a result of education legislation and long-standing evidence supporting function-based interventions. Although functional analysis remains the standard for identifying behavior--environment functional relations, this component is…

  15. Artificial neuron-glia networks learning approach based on cooperative coevolution.

    PubMed

    Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B

    2015-06-01

    Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance against the traditional approach, i.e. without artificial astrocytes, was already demonstrated on classification problems. However, the corresponding learning algorithms developed so far strongly depend on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be done in order to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem-dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process, and gives the possibility of testing any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than ANGN and results competitive with ANN approaches.

  16. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees

    PubMed Central

    2012-01-01

    Background We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. Results We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Conclusions Although this heuristic is not guaranteed to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses. PMID:23033843

  17. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning of a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem starting from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationship among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. At last, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method to solve the problem of multi-constraint and multi-objective fault diagnosis and reasoning of complex systems.
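
    The Pareto optimal set mentioned above is defined by dominance. A generic filter, written here for minimization objectives (the paper's objectives mix maximization and minimization, which can be handled by negation):

```python
def pareto_front(solutions):
    """Keep the non-dominated solutions of a minimization problem.
    Each solution is a tuple of objective values. A dominates B when A is
    no worse in every objective and strictly better in at least one."""
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

# invented three-objective candidates
candidates = [(0.2, 3.0, 5.0), (0.4, 2.0, 4.0), (0.5, 3.5, 6.0)]
front = pareto_front(candidates)   # (0.5, 3.5, 6.0) is dominated and dropped
```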

  18. Combining deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance.

    PubMed

    Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo

    2017-01-01

    We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  19. Complex fuzzy soft expert sets

    NASA Astrophysics Data System (ADS)

    Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak

    2017-04-01

    Complex fuzzy sets and their accompanying theory, although at their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability to represent time-periodic problem parameters and capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real world problems. However, there are two major problems that are inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they do not have a mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, which is a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides having the added advantage of allowing the users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.

  20. On Takens’ last problem: tangencies and time averages near heteroclinic networks

    NASA Astrophysics Data System (ADS)

    Labouriau, Isabel S.; Rodrigues, Alexandre A. P.

    2017-05-01

    We obtain a structurally stable family of smooth ordinary differential equations exhibiting heteroclinic tangencies for a dense subset of parameters. We use this to find vector fields C²-close to an element of the family exhibiting a tangency, for which the set of solutions with historic behaviour contains an open set. This provides an affirmative answer to Takens' last problem (Takens 2008 Nonlinearity 21 T33-6). A solution with historic behaviour is one for which the time averages do not converge as time goes to infinity. Takens' problem asks for dynamical systems where historic behaviour occurs persistently for initial conditions in a set with positive Lebesgue measure. The family appears in the unfolding of a degenerate differential equation whose flow has an asymptotically stable heteroclinic cycle involving two-dimensional connections of non-trivial periodic solutions. We show that the degenerate problem also has historic behaviour, since for an open set of initial conditions starting near the cycle, the time averages approach the boundary of a polygon whose vertices depend on the centres of gravity of the periodic solutions and their Floquet multipliers. We illustrate our results with an explicit example where historic behaviour arises C²-close to an SO(2)-equivariant vector field.
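
    Numerically, historic behaviour can be observed by tracking running time averages along an orbit. The toy sequence below (not taken from the paper) alternates between two states over geometrically growing spells, a standard mechanism that keeps the averages oscillating forever:

```python
import numpy as np

def time_averages(orbit):
    """Running time averages (1/T) * sum_{t<T} x_t of an observed orbit.
    Historic behaviour means this sequence fails to converge as T grows."""
    T = np.arange(1, len(orbit) + 1)
    return np.cumsum(orbit, axis=0) / T[:, None]

# orbit hopping between states (0, 1) and (1, 0) for spells of length 2^k
segments = [np.full((2 ** k, 2), v)
            for k, v in zip(range(14), [(0, 1), (1, 0)] * 7)]
orbit = np.concatenate(segments)
avgs = time_averages(orbit)   # rows keep oscillating instead of settling
```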

  1. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a convex programming problem involving only Alice's measurement, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, the dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful for obtaining analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.

  2. Two Quantum Protocols for Oblivious Set-member Decision Problem

    NASA Astrophysics Data System (ADS)

    Shi, Run-Hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-10-01

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide whether a secret of another party belongs to his private set in an oblivious manner. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting and electronic auction. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it needs lower costs in both communication and computation complexity, while Protocol II takes photons as quantum resources and only performs simple single-particle projective measurements, and thus is more feasible with present technology.

  3. Two Quantum Protocols for Oblivious Set-member Decision Problem

    PubMed Central

    Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-01-01

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide whether a secret of another party belongs to his private set in an oblivious manner. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting and electronic auction. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it needs lower costs in both communication and computation complexity, while Protocol II takes photons as quantum resources and only performs simple single-particle projective measurements, and thus is more feasible with present technology. PMID:26514668

  4. Two Quantum Protocols for Oblivious Set-member Decision Problem.

    PubMed

    Shi, Run-Hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun

    2015-10-30

    In this paper, we define a new secure multi-party computation problem, called the Oblivious Set-member Decision problem, which allows one party to decide whether a secret of another party belongs to his private set in an oblivious manner. The Oblivious Set-member Decision problem has many important applications in privacy-preserving multi-party collaborative computation, such as private set intersection and union, anonymous authentication, electronic voting and electronic auction. Furthermore, we present two quantum protocols to solve the Oblivious Set-member Decision problem. Protocol I takes advantage of powerful quantum oracle operations, so it needs lower costs in both communication and computation complexity, while Protocol II takes photons as quantum resources and only performs simple single-particle projective measurements, and thus is more feasible with present technology.

  5. Generalized minimum dominating set and application in automatic text summarization

    NASA Astrophysics Data System (ADS)

    Xu, Yi-Zhi; Zhou, Hai-Jun

    2016-03-01

    For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all the edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by a replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application we consider the problem of extracting a set of sentences that best summarize a given input text document. We carry out a preliminary test of the statistical physics-inspired method to this automatic text summarization problem.
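
    A greedy stand-in for the generalized MDS extraction step conveys the idea (the paper solves it with belief propagation instead); the sentence similarities below are invented:

```python
def greedy_summary(similarity, threshold, max_sentences=3):
    """Greedy stand-in for the belief-propagation MDS solver: repeatedly
    add the sentence whose summed similarity 'covers' the most sentences
    still below the coverage threshold. similarity is a square matrix;
    similarity[i][j] weighs how well sentence i covers sentence j."""
    n = len(similarity)
    covered, chosen = [0.0] * n, []
    while len(chosen) < max_sentences and min(covered) < threshold:
        def gain(i):  # total coverage added by picking sentence i
            return sum(min(threshold, covered[j] + similarity[i][j]) - covered[j]
                       for j in range(n))
        best = max((i for i in range(n) if i not in chosen), key=gain)
        chosen.append(best)
        covered = [min(threshold, covered[j] + similarity[best][j])
                   for j in range(n)]
    return chosen

# toy 4-sentence document; entry [i][j] is a hypothetical similarity score
sim = [[1.0, 0.6, 0.1, 0.0],
       [0.6, 1.0, 0.2, 0.1],
       [0.1, 0.2, 1.0, 0.7],
       [0.0, 0.1, 0.7, 1.0]]
summary = greedy_summary(sim, threshold=0.8, max_sentences=2)   # -> [2, 0]
```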

  6. A genetic algorithm used for solving one optimization problem

    NASA Astrophysics Data System (ADS)

    Shipacheva, E. N.; Petunin, A. A.; Berezin, I. M.

    2017-12-01

    A problem of minimizing the length of the blank run of a cutting tool during the cutting of sheet materials into shaped blanks is discussed. This problem arises during the preparation of control programs for computerized numerical control (CNC) machines. A discrete model of the problem is analogous in its setting to the generalized travelling salesman problem with limitations in the form of precedence conditions determined by the technological features of cutting. A variant of a genetic algorithm for solving this problem is described. The effect of the parameters of the developed algorithm on the solution result for the problem with limitations is investigated.

  7. Individualized Math Problems in Ratio and Proportion. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. This volume contains problems involving ratio and proportion. Some…

  8. Individualized Math Problems in Graphs and Tables. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems involving the construction and interpretation of graphs and…

  9. Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…

  10. Individualized Math Problems in Trigonometry. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require the use of trigonometric and inverse…

  11. Individualized Math Problems in Decimals. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume concern use of decimals and are related to the…

  12. Individualized Math Problems in Volume. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this booklet require the computation of volumes of solids,…

  13. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.
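
    As context, the classic probabilistic (Parzen-window) neural network decision rule that this family of classifiers builds on can be sketched in a few lines; this is the textbook baseline, not the proposed multi-modal algorithm:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classic probabilistic neural network decision rule: score each class
    by the summed Gaussian potential of its training patterns at the query
    point, then pick the class with the largest score."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances
        potentials = np.exp(-d2 / (2.0 * sigma ** 2))
        scores = [potentials[y_train == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
pnn_predict(X, y, np.array([[0.1, 0.2], [2.9, 3.1]]))   # -> array([0, 1])
```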

  14. Heuristics to Evaluate Interactive Systems for Children with Autism Spectrum Disorder (ASD)

    PubMed Central

    Khowaja, Kamran; Salim, Siti Salwah

    2015-01-01

    In this paper, we adapted and expanded a set of guidelines, also known as heuristics, to evaluate the usability of software to now be appropriate for software aimed at children with autism spectrum disorder (ASD). We started from the heuristics developed by Nielsen in 1990 and developed a modified set of 15 heuristics. The first 5 heuristics of this set are the same as those of the original Nielsen set, the next 5 heuristics are improved versions of Nielsen's, whereas the last 5 heuristics are new. We present two evaluation studies of our new heuristics. In the first, two groups compared Nielsen’s set with the modified set of heuristics, with each group evaluating two interactive systems. The Nielsen’s heuristics were assigned to the control group while the experimental group was given the modified set of heuristics, and a statistical analysis was conducted to determine the effectiveness of the modified set, the contribution of 5 new heuristics and the impact of 5 improved heuristics. The results show that the modified set is significantly more effective than the original, and we found a significant difference between the five improved heuristics and their corresponding heuristics in the original set. The five new heuristics are effective in problem identification using the modified set. The second study was conducted using a system which was developed to ascertain if the modified set was effective at identifying usability problems that could be fixed before the release of software. The post-study analysis revealed that the majority of the usability problems identified by the experts were fixed in the updated version of the system. PMID:26196385

  15. The Aftercare and School Observation System (ASOS): Reliability and Component Structure.

    PubMed

    Ingoldsby, Erin M; Shelleby, Elizabeth C; Lane, Tonya; Shaw, Daniel S; Dishion, Thomas J; Wilson, Melvin N

    2013-10-01

    This study examines the psychometric properties and component structure of a newly developed observational system, the Aftercare and School Observation System (ASOS). Participants included 468 children drawn from a larger longitudinal intervention study. The system was utilized to assess participant children in school lunchrooms and recess and various afterschool environments. Exploratory factor analyses examined whether a core set of component constructs assessing qualities of children's relationships, caregiver involvement and monitoring, and experiences in school and aftercare contexts that have been linked to children's behavior problems would emerge. Construct validity was assessed by examining associations between ASOS constructs and questionnaire measures assessing children's behavior problems and relationship qualities in school and aftercare settings. Across both settings, two factors showed very similar empirical structures and item loadings, reflecting the constructs of a negative/aggressive context and caregiver positive involvement, with one additional unique factor from the school setting reflecting the extent to which caregiver methods used resulted in less negative behavior and two additional unique factors from the aftercare setting reflecting positivity in the child's interactions and general environment and negativity in the child's interactions and setting. Modest correlations between ASOS factors and aftercare provider and teacher ratings of behavior problems, adult-child relationships, and a rating of school climate contributed to our interpretation that the ASOS scores capture meaningful features of children's experiences in these settings. This study represents the first step of establishing that the ASOS reliably and validly captures risk and protective relationships and experiences in extra-familial settings.

  16. General stochastic variational formulation for the oligopolistic market equilibrium problem with excesses

    NASA Astrophysics Data System (ADS)

    Barbagallo, Annamaria; Di Meglio, Guglielmo; Mauro, Paolo

    2017-07-01

    The aim of the paper is to study, in a Hilbert space setting, a general random oligopolistic market equilibrium problem in the presence of both production and demand excesses, and to characterize the random Cournot-Nash equilibrium principle by means of a stochastic variational inequality. Some existence results are presented.

  17. Understanding Cellular Respiration in Terms of Matter & Energy within Ecosystems

    ERIC Educational Resources Information Center

    White, Joshua S.; Maskiewicz, April C.

    2014-01-01

    Using a design-based research approach, we developed a data-rich problem (DRP) set to improve student understanding of cellular respiration at the ecosystem level. The problem tasks engage students in data analysis to develop biological explanations. Several of the tasks and their implementation are described. Quantitative results suggest that…

  18. Reverse engineering a social agent-based hidden markov model--visage.

    PubMed

    Chen, Hung-Ching Justin; Goldberg, Mark; Magdon-Ismail, Malik; Wallace, William A

    2008-12-01

    We present a machine learning approach to discover the agent dynamics that drives the evolution of the social groups in a community. We set up the problem by introducing an agent-based hidden Markov model for the agent dynamics: an agent's actions are determined by micro-laws. Nonetheless, we learn the agent dynamics from the observed communications without knowing the state transitions. Our approach is to identify the appropriate micro-laws, corresponding to an identification of the appropriate parameters in the model. The model identification problem is then formulated as a mixed optimization problem. To solve the problem, we develop a multistage learning process for determining the group structure, the group evolution, and the micro-laws of a community based on the observed set of communications among actors, without knowing the semantic contents. Finally, to test the quality of our approximations and the feasibility of the approach, we present the results of extensive experiments on synthetic data as well as results on real communities, such as Enron email and Movie newsgroups. Insight into agent dynamics helps us understand the driving forces behind social evolution.

  19. A Qualitative Study of Psychosocial Problems among Parents of Children with Cerebral Palsy Attending Two Tertiary Care Hospitals in Western India

    PubMed Central

    Panchal, Dhara Antani

    2014-01-01

    Objective. To explore the psychosocial problems faced by the parents of children with cerebral palsy (CP) in rural and urban settings. Design. Qualitative research design using focus group discussions (FGDs) was used for the study. Setting. Two FGDs comprising one at a rural tertiary level care hospital and the other at an urban tertiary level care hospital were conducted. Participants. A total of thirteen parents participated in the two FGDs. Main Outcome Measured. Psychosocial problems experienced by the parents of children suffering from CP were measured. Results. The problems experienced by the mothers were associated with common themes such as disturbed social relationships, health problems, financial problems, moments of happiness, worries about future of the child, need for more support services, and lack of adequate number of trained physiotherapists. All the parents had children with problems since birth and most had approached various health care providers for a cure for their child. Conclusions. A wide range of psychosocial problems are experienced by the parents of children with CP. Studies like this can provide valuable information for designing a family centered care programme for children with CP. PMID:24967331

  20. Determination of optimal self-drive tourism route using the orienteering problem method

    NASA Astrophysics Data System (ADS)

    Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah

    2013-04-01

    This study was conducted to determine optimal travel routes for self-drive tourism based on the allocation of time and expense, by maximizing the total attraction score assigned to the cities involved. Self-drive tourism represents a type of tourism where tourists hire or travel by their own vehicle; it only involves tourist destinations that can be linked by a network of roads. Normally, the traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods are used for minimization goals such as determining the shortest travel time or distance. This paper takes the alternative, maximization approach, which maximizes the attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem is used to determine the optimal travel route, and this approach is extended to the team orienteering problem so that the two methods can be compared. Both models were solved using the LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
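
    On tiny instances the orienteering problem can be solved exhaustively, which makes the model easy to see; the four-city data below are invented (the paper solves ten Kedah cities with LINGO):

```python
from itertools import permutations

def best_route(scores, times, budget, start, end):
    """Exhaustive orienteering solver for tiny instances: visit a subset
    of cities between fixed start and end points, maximizing the summed
    attraction scores subject to a travel-time budget. times[(a, b)] is
    the (symmetric) travel time between cities a and b."""
    cities = [c for c in scores if c not in (start, end)]
    t = lambda a, b: times[(a, b)] if (a, b) in times else times[(b, a)]
    best, best_score = [start, end], 0.0
    for r in range(len(cities) + 1):
        for mid in permutations(cities, r):
            route = [start, *mid, end]
            cost = sum(t(a, b) for a, b in zip(route, route[1:]))
            score = sum(scores[c] for c in mid)
            if cost <= budget and score > best_score:
                best, best_score = route, score
    return best, best_score

# hypothetical 4-city instance with attraction scores and travel times
scores = {'A': 0, 'B': 5, 'C': 8, 'D': 0}
times = {('A', 'B'): 2, ('A', 'C'): 4, ('B', 'C'): 1,
         ('B', 'D'): 3, ('C', 'D'): 2, ('A', 'D'): 5}
best_route(scores, times, budget=8, start='A', end='D')
# -> (['A', 'B', 'C', 'D'], 13)
```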

  1. A hybrid heuristic for the multiple choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd

    2013-08-01

    In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
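
    To make the problem structure concrete, here is a simple greedy baseline that respects the one-item-per-class constraint; it is far weaker than the hybrid LP-based heuristic of the article, and the data are invented:

```python
def greedy_mcmkp(classes, capacities):
    """Greedy sketch for the multiple choice multidimensional knapsack:
    in each class, pick the feasible item with the best profit per unit
    of normalized resource use. classes: list of lists of
    (profit, [weights...]); capacities: limits per dimension."""
    used = [0.0] * len(capacities)
    total, choice = 0.0, []
    for items in classes:
        def density(item):
            profit, w = item
            load = sum(wi / ci for wi, ci in zip(w, capacities))
            return profit / load if load > 0 else float('inf')
        feasible = [it for it in items
                    if all(u + wi <= ci
                           for u, wi, ci in zip(used, it[1], capacities))]
        if not feasible:
            return None          # greedy dead end: no repair step here
        profit, w = max(feasible, key=density)
        used = [u + wi for u, wi in zip(used, w)]
        total += profit
        choice.append((profit, w))
    return total, choice

classes = [[(10, [3, 4]), (7, [2, 2])],     # exactly one item per class
           [(6, [4, 4]), (9, [5, 5])]]
greedy_mcmkp(classes, capacities=[8, 8])    # picks (7,[2,2]) then (9,[5,5]) -> 16
```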

  2. A new kernel-based fuzzy level set method for automated segmentation of medical images in the presence of intensity inhomogeneity.

    PubMed

    Rastgarpour, Maryam; Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied an integrative approach to automate medical image segmentation, combining the benefits of available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area that has received relatively little attention from this approach, despite its considerable effect on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm that takes an integrative approach to this problem. The level set can evolve directly from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM), and the controlling parameters of the level set evolution are also estimated from the GKFCM results. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of the image. These improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm offers valuable benefits, including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation was carried out on medical images from different modalities, and the results confirm its effectiveness for medical image segmentation.

  3. Co-Occurring Non-Suicidal Self-Injury and Firesetting Among At-Risk Adolescents: Experiences of Negative Life Events, Mental Health Problems, Substance Use, and Suicidality.

    PubMed

    Tanner, Alicia; Hasking, Penelope; Martin, Graham

    2016-01-01

    Co-occurring internalizing and externalizing problem behaviors in adolescence typically mark more severe psychopathology and poorer psychosocial functioning than engagement in a single problem behavior. We examined the negative life events, emotional and behavioral problems, substance use, and suicidality of school-based adolescents reporting both non-suicidal self-injury (NSSI) and repetitive firesetting, compared to those engaging in either behavior alone. Differences in NSSI characteristics among self-injurers who set fires, compared to those who did not, were also assessed. A total of 384 at-risk adolescents aged 12-18 years (58.8% female) completed self-report questionnaires measuring NSSI, firesetting, and key variables of interest. Results suggest that adolescents who both self-injure and deliberately set fires represent a low-prevalence but distinct high-risk subgroup, characterized by increased rates of interpersonal difficulties, mental health problems and substance use, more severe self-injury, and suicidal behavior. Implications for prevention and early intervention initiatives are discussed.

  4. Proteomics, lipidomics, metabolomics: a mass spectrometry tutorial from a computer scientist's point of view.

    PubMed

    Smith, Rob; Mathis, Andrew D; Ventura, Dan; Prince, John T

    2014-01-01

    For decades, mass spectrometry data has been analyzed to investigate a wide array of research interests, including disease diagnostics, biological and chemical theory, genomics, and drug development. Progress towards solving any of these disparate problems depends upon overcoming the common challenge of interpreting the large data sets generated. Despite interim successes, many data interpretation problems in mass spectrometry are still challenging. Further, though these challenges are inherently interdisciplinary in nature, the significant domain-specific knowledge gap between disciplines makes interdisciplinary contributions difficult. This paper provides an introduction to the burgeoning field of computational mass spectrometry. We illustrate key concepts, vocabulary, and open problems in MS-omics, as well as provide invaluable resources such as open data sets and key search terms and references. This paper will facilitate contributions from mathematicians, computer scientists, and statisticians to MS-omics that will fundamentally improve results over existing approaches and inform novel algorithmic solutions to open problems.

  5. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples, from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks, where a large number of realizations have to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems, yielding very promising results.
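    A minimal sketch of the idea, assuming synthetic stand-in data and scikit-learn's MLPRegressor as the neural network predictor (the paper's architecture and feature encoding may differ): the network maps a flattened permeability patch to the basis-function degrees of freedom, so that, once fitted, a prediction is a cheap forward pass instead of a local solve.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Stand-in data: inputs are flattened permeability patches, outputs
    # are basis-function values on the dual-cell nodes. Real pairs would
    # come from the multiscale solver; these are synthetic placeholders.
    n_samples, patch_size, n_basis_dofs = 500, 25, 16
    X = rng.lognormal(size=(n_samples, patch_size))   # permeability patches
    W = rng.normal(size=(patch_size, n_basis_dofs))
    Y = np.tanh(X @ W)                                # surrogate "basis" targets

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0)
    model.fit(X[:400], Y[:400])
    print(model.score(X[400:], Y[400:]))              # held-out R^2
    ```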

  6. Bridging the gap between hospital and primary care: the pharmacist home visit.

    PubMed

    Ensing, Hendrik T; Koster, Ellen S; Stuijt, Clementine C M; van Dooren, Ad A; Bouvy, Marcel L

    2015-06-01

    Bridging the gap between hospital and primary care is important, as transition from one healthcare setting to another increases the risk of drug-related problems and consequent readmissions. To reduce those risks, pharmacist interventions during and after hospitalization have been frequently studied, albeit with variable effects. Therefore, in this manuscript we propose a three-phase approach to structurally address post-discharge drug-related problems. First, hospitals need to transfer up-to-date medication information to community pharmacists. Second, the key phase of this approach consists of adequate follow-up at the patient's home, where pharmacists need to apply their clinical and communication skills to identify and analyze drug-related problems. Finally, to prevent and solve identified drug-related problems, close collaboration within the primary care setting between pharmacists and general practitioners is of utmost importance. Such an approach is expected to result in improved quality of care and improved patient safety.

  7. Setting health priorities in a community: a case example

    PubMed Central

    Sousa, Fábio Alexandre Melo do Rego; Goulart, Maria José Garcia; Braga, Antonieta Manuela dos Santos; Medeiros, Clara Maria Oliveira; Rego, Débora Cristina Martins; Vieira, Flávio Garcia; Pereira, Helder José Alves da Rocha; Tavares, Helena Margarida Correia Vicente; Loura, Marta Maria Puim

    2017-01-01

    ABSTRACT OBJECTIVE To describe the methodology used in the process of setting health priorities for community intervention in a community of older adults. METHODS Based on the results of a health diagnosis related to active aging, a prioritization process was conceived to select the priority intervention problem. The process comprised four successive phases of problem analysis and classification: (1) grouping by level of similarity, (2) classification according to epidemiological criteria, (3) ordering by experts, and (4) application of the Hanlon method. These stages combined, in an integrated manner, the views of health team professionals, community nursing and gerontology experts, and the actual community. RESULTS The first stage grouped the identified problems by level of similarity, comprising a body of 19 issues for analysis. In the second stage these problems were classified by the health team members by epidemiological criteria (size, vulnerability, and transcendence). The nine most relevant problems resulting from the second stage of the process were submitted to expert analysis and the five most pertinent problems were selected. The last step identified the priority issue for intervention in this specific community with the participation of formal and informal community leaders: Low Social Interaction in Community Participation. CONCLUSIONS The prioritization process is a key step in health planning, enabling the identification of priority problems to intervene in a given community at a given time. There are no default formulas for selecting priority issues. It is up to each community intervention team to define its own process with different methods/techniques that allow the identification of and intervention in needs classified as priority by the community. PMID:28273229
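    For the fourth stage, one commonly cited form of the Hanlon Basic Priority Rating is BPR = (A + 2B) x C / 3, with A the size of the problem, B its seriousness, and C the effectiveness of available interventions, optionally multiplied by a 0/1 PEARL feasibility factor. The sketch below uses that form with purely illustrative scores and may differ in detail from the variant the authors applied.

    ```python
    def hanlon_bpr(size, seriousness, effectiveness, pearl=1):
        """Basic Priority Rating in one commonly cited form:
        BPR = (A + 2B) * C / 3, times a 0/1 PEARL feasibility factor.
        A, B, and C are each scored on a 0-10 scale."""
        return (size + 2 * seriousness) * effectiveness / 3 * pearl

    # Rank a few hypothetical community problems (scores are invented).
    problems = {
        "low social interaction": (6, 8, 7),
        "falls at home": (5, 7, 6),
        "medication non-adherence": (4, 6, 8),
    }
    for name, scores in sorted(problems.items(),
                               key=lambda kv: -hanlon_bpr(*kv[1])):
        print(f"{name}: {hanlon_bpr(*scores):.1f}")
    ```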

  8. Handling Imbalanced Data Sets in Multistage Classification

    NASA Astrophysics Data System (ADS)

    López, M.

    Multistage classification is a logical approach, based on a divide-and-conquer solution, for dealing with problems with a high number of classes. The classification problem is divided into several sequential steps, each one associated with a single classifier that works with subgroups of the original classes. In each level, the current set of classes is split into smaller subgroups of classes until the subgroups are composed of only one class. The resulting chain of classifiers can be represented as a tree, which (1) simplifies the classification process by using fewer categories in each classifier and (2) makes it possible to combine several algorithms or use different attributes in each stage. Most classification algorithms can be biased toward selecting the most populated class in overlapping areas of the input space. This can degrade a multistage classifier's performance if the training set sample frequencies do not reflect the real prevalence in the population. Several techniques, such as applying prior probabilities, assigning weights to the classes, or replicating instances, have been developed to overcome this handicap. Most of them are designed for two-class (accept-reject) problems. In this article, we evaluate several of these techniques as applied to multistage classification and analyze how they can be useful for astronomy. We compare the results obtained by classifying a data set based on Hipparcos with and without these methods.
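    As a concrete illustration of two of the techniques mentioned (class weighting and instance replication), here is a scikit-learn sketch on a synthetic imbalanced two-class problem; the data and classifier are placeholders, not the Hipparcos-based set-up used in the article.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils.class_weight import compute_class_weight

    rng = np.random.default_rng(1)

    # Imbalanced two-class toy set: most samples fall in class 0.
    X = rng.normal(size=(1000, 2))
    y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.6).astype(int)

    # Option 1: reweight classes inversely to their frequency.
    w = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
    clf = LogisticRegression(class_weight={0: w[0], 1: w[1]}).fit(X, y)

    # Option 2: replicate minority instances (naive oversampling).
    minority = np.flatnonzero(y == 1)
    reps = rng.choice(minority, size=len(y) - 2 * len(minority), replace=True)
    clf2 = LogisticRegression().fit(np.vstack([X, X[reps]]),
                                    np.concatenate([y, y[reps]]))
    ```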

  9. An Assessment-based Solution to a Human-Service Employee Performance Problem

    PubMed Central

    Wilder, David A.; Majdalany, Lina; Mathisen, David; Strain, Leigh Ann

    2013-01-01

    The PDC-HS (Performance Diagnostic Checklist-Human Services) implicated a lack of proper training on participant duties and a lack of performance feedback as contributors to the performance problems. As a result, an intervention targeting training on participant duties and performance feedback was implemented across eight treatment rooms; the intervention increased performance in all rooms. This preliminary validation study suggests the PDC-HS may prove useful in solving performance problems in human-service settings. PMID:25729505

  10. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the level set method's sensitivity to the initial contour location, and a modified distance-regularized level set method, which accurately extracts the pancreas. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and fewer false segmentations in pancreas extraction. PMID:24066016

  11. Couples' Reports of Relationship Problems in a Naturalistic Therapy Setting

    ERIC Educational Resources Information Center

    Boisvert, Marie-Michele; Wright, John; Tremblay, Nadine; McDuff, Pierre

    2011-01-01

    Understanding couples' relationship problems is fundamental to couple therapy. Although research has documented common relationship problems, no study has used open-ended questions to explore problems in couples seeking therapy in naturalistic settings. The present study used a reliable coding system to explore the relationship problems reported…

  12. Symbolic Model-Based SAR Feature Analysis and Change Detection

    DTIC Science & Technology

    1992-02-01

    normalization factor described above in the Dempster rule of combination. Another problem is that in certain cases D-S overweights prior probabilities compared... Beaufort Sea data set and the Peru data set. The Phoenix results are described in section 6.2.2, including a partial trace of the operation of the...

  13. Teaching Conflict Management Skills to the Health Care Professionals.

    ERIC Educational Resources Information Center

    Wilcox, James R.; And Others

    The health care organization, as a specialized organizational setting, has some characteristics that make it of special concern to the conflict theorist. In a health care setting, conflict may arise as a result of (1) the complexity of medicine and the bureaucracy of health care delivery, (2) the problems of acquiring relevant information from…

  14. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    NASA Astrophysics Data System (ADS)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

    Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations to represent kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set comprises four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, the participating RTM codes are CrunchTope, TOUGHREACT, MIN3P, and The Geochemist's Workbench. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
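    For a closed system, kinetic sulfur isotope fractionation during sulfate reduction is commonly described by Rayleigh distillation (a standard approximation, not necessarily the exact formulation of every benchmark level):

    $$\frac{R}{R_0} = f^{\,\alpha-1}, \qquad \delta^{34}\mathrm{S} \approx \delta^{34}\mathrm{S}_0 + \varepsilon \ln f, \qquad \varepsilon = (\alpha - 1) \times 1000\ \text{‰},$$

    where f is the fraction of sulfate remaining, R the ³⁴S/³²S ratio of the residual sulfate, and α the kinetic fractionation factor expressing the SRB's preference for the lighter isotopologue.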

  15. Solving multi-objective job shop scheduling problems using a non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Finding optimal schedules for job shop scheduling problems is highly important for many real-world industrial applications. In this paper, a multi-objective job shop scheduling problem that simultaneously minimizes makespan and tardiness is considered. The problem is more complex due to the multiple business criteria that must be satisfied. To solve the problem more efficiently and to obtain a set of non-dominated solutions, a meta-heuristic non-dominated sorting genetic algorithm is presented. Task-based representation is used for solution encoding, and tournament selection based on rank and crowding distance is applied for offspring selection. Swapping and insertion mutations are employed to increase population diversity and to perform an intensive search. To evaluate the modified non-dominated sorting genetic algorithm, a set of modified benchmark job shop problems obtained from the OR-Library is used, and the results are assessed based on the number of non-dominated solutions and the quality of the schedules obtained by the algorithm.
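    At the core of any non-dominated sorting step is the Pareto dominance test; a minimal sketch, with hypothetical objective pairs rather than the OR-Library results:

    ```python
    def dominates(a, b):
        """True if schedule a is at least as good as b on every minimized
        objective (e.g., makespan, total tardiness) and strictly better
        on at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def non_dominated(front):
        return [p for p in front
                if not any(dominates(q, p) for q in front if q != p)]

    # Hypothetical (makespan, tardiness) pairs for candidate schedules.
    print(non_dominated([(95, 12), (90, 20), (100, 5), (110, 30)]))
    ```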

  16. A quantum annealing approach for fault detection and diagnosis of graph-based systems

    NASA Astrophysics Data System (ADS)

    Perdomo-Ortiz, A.; Fluegemann, J.; Narasimhan, S.; Biswas, R.; Smelyanskiy, V. N.

    2015-02-01

    Diagnosing the minimal set of faults capable of explaining a set of given observations, e.g., from sensor readouts, is a hard combinatorial optimization problem usually tackled with artificial intelligence techniques. We present the mapping of this combinatorial problem to quadratic unconstrained binary optimization (QUBO), and the experimental results of instances embedded onto a quantum annealing device with 509 quantum bits. Besides being the first time a quantum approach has been proposed for problems in the advanced diagnostics community, to the best of our knowledge this work is also the first research utilizing the route Problem → QUBO → Direct embedding into quantum hardware, where we are able to implement and tackle problem instances with sizes that go beyond previously reported toy-model proof-of-principle quantum annealing implementations; this is a significant leap in the solution of problems via direct-embedding adiabatic quantum optimization. We discuss some of the programmability challenges in the current generation of the quantum device as well as a few possible ways to extend this work to more complex arbitrary network graphs.
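    A QUBO instance asks for a binary vector x minimizing x^T Q x. For intuition, here is a brute-force evaluation of a tiny invented instance; real diagnosis QUBOs, with hundreds of variables, are exactly what the annealer is for:

    ```python
    import itertools
    import numpy as np

    # Tiny QUBO: each bit could encode "component i is faulty"; Q would be
    # built so that low-energy states are fault sets consistent with the
    # observations. The matrix below is illustrative, not the paper's mapping.
    Q = np.array([[ 1.0, -2.0,  0.0],
                  [-2.0,  1.5,  1.0],
                  [ 0.0,  1.0, -0.5]])

    best = min(itertools.product([0, 1], repeat=len(Q)),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print(best, np.array(best) @ Q @ np.array(best))
    ```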

  17. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems in the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a parameter that is artificially set to a fixed value. Although volume scattering overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it performs better in urban areas.

  18. Optimization of the time-dependent traveling salesman problem with Monte Carlo methods.

    PubMed

    Bentner, J; Bauer, G; Obermair, G M; Morgenstern, I; Schneider, J

    2001-09-01

    A problem often considered in operations research and computational physics is the traveling salesman problem, in which a traveling salesperson has to find the shortest closed tour between a certain set of cities. This problem has been extended to more realistic scenarios, e.g., the "real" traveling salesperson has to take rush hours into consideration. We will show how this extended problem is treated with physical optimization algorithms. We will present results for a specific instance of Reinelt's library TSPLIB95, in which we define a zone with traffic jams in the afternoon.
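    The physical optimization algorithms referred to include simulated annealing; below is a compact sketch for a time-dependent TSP, with a crude artificial rush-hour window standing in for the paper's afternoon traffic-jam zone:

    ```python
    import math
    import random

    random.seed(0)
    n = 12
    coords = [(random.random(), random.random()) for _ in range(n)]

    def leg_time(a, b, departure):
        """Travel time with a crude 'rush hour': distances are inflated
        by 50% during the window [0.5, 0.8) of each day."""
        d = math.dist(coords[a], coords[b])
        return d * (1.5 if 0.5 <= departure % 1.0 < 0.8 else 1.0)

    def tour_time(tour):
        t = 0.0
        for i in range(len(tour)):
            t += leg_time(tour[i], tour[(i + 1) % len(tour)], t)
        return t

    tour = list(range(n))
    best_t, temp = tour_time(tour), 1.0
    while temp > 1e-3:
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        dt = tour_time(cand) - tour_time(tour)
        if dt < 0 or random.random() < math.exp(-dt / temp):
            tour = cand
            best_t = min(best_t, tour_time(tour))
        temp *= 0.999                                          # cooling
    print(best_t)
    ```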

  19. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
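    In its generic form, the bi-level problem nests one optimization inside another (a schematic statement, not the authors' exact formulation):

    $$\min_{x}\; F(x, y^*) \quad \text{s.t.} \quad G(x, y^*) \le 0, \qquad y^* \in \arg\min_{y}\,\{\, f(x, y) : g(x, y) \le 0 \,\},$$

    where the upper level chooses x and the lower level responds with an optimal y*; the trust-region strategies discussed in the paper operate on this nested structure.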

  20. Intersubjective decision-making for computer-aided forging technology design

    NASA Astrophysics Data System (ADS)

    Kanyukov, S. I.; Konovalov, A. V.; Muizemnek, O. Yu.

    2017-12-01

    We propose a concept of intersubjective decision-making for problems of open-die forging technology design. The intersubjective decisions are chosen from a set of feasible decisions using the fundamentals of the decision-making theory in fuzzy environment according to the Bellman-Zadeh scheme. We consider the formalization of subjective goals and the choice of membership functions for the decisions depending on subjective goals. We study the arrangement of these functions into an intersubjective membership function. The function is constructed for a resulting decision, which is chosen from a set of feasible decisions. The choice of the final intersubjective decision is discussed. All the issues are exemplified by a specific technological problem. The considered concept of solving technological problems under conditions of fuzzy goals allows one to choose the most efficient decisions from a set of feasible ones. These decisions correspond to the stated goals. The concept allows one to reduce human participation in automated design. This concept can be used to develop algorithms and design programs for forging numerous types of forged parts.
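    In the Bellman-Zadeh scheme, a feasible decision's membership in the resulting "good decision" set is typically taken as the minimum of its memberships in the individual fuzzy goals, and the decision maximizing that minimum is selected. A minimal sketch with invented goals and membership values:

    ```python
    # Membership of each feasible forging decision in three fuzzy goals
    # (values are illustrative only, not from the paper).
    decisions = {
        "schedule A": (0.8, 0.6, 0.7),   # (low cost, short cycle, low tool wear)
        "schedule B": (0.9, 0.4, 0.9),
        "schedule C": (0.7, 0.7, 0.6),
    }
    # Intersubjective membership = min over goals; pick the maximizer.
    chosen = max(decisions, key=lambda d: min(decisions[d]))
    print(chosen, min(decisions[chosen]))
    ```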

  1. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.

  2. The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.

    PubMed

    Nowak, Markus; Castellini, Claudio

    2016-01-01

    Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses, there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations; however, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach on the full data set containing single- and multi-DOF activations as well as on the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment than the one trained on the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks, without the need to explicitly train multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
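    A minimal sketch of the LET augmentation itself, on synthetic placeholder signals; the paper's exact mixing coefficients, features, and labels may differ:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Single-DOF calibration data: sEMG feature vectors recorded while the
    # subject activates one DOF at a time (synthetic placeholders here).
    emg_dof1 = rng.random((50, 8))            # e.g. wrist flexion samples
    emg_dof2 = rng.random((50, 8))            # e.g. hand closing samples
    y_dof1 = np.tile([1.0, 0.0], (50, 1))     # per-DOF activation labels
    y_dof2 = np.tile([0.0, 1.0], (50, 1))

    # LET-style augmentation: model a combined activation as a linear
    # combination of the single-DOF signals and label it with both DOFs.
    emg_combo = 0.5 * emg_dof1 + 0.5 * emg_dof2
    y_combo = np.tile([1.0, 1.0], (50, 1))

    X_train = np.vstack([emg_dof1, emg_dof2, emg_combo])
    y_train = np.vstack([y_dof1, y_dof2, y_combo])
    # X_train / y_train can now feed any regression-based intent detector.
    ```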

  3. Microwave assisted preparation of magnesium phosphate cement (MPC) for orthopedic applications: a novel solution to the exothermicity problem.

    PubMed

    Zhou, Huan; Agarwal, Anand K; Goel, Vijay K; Bhaduri, Sarit B

    2013-10-01

    There are two interesting features of this paper. First, we report herein a novel microwave assisted technique to prepare phosphate based orthopedic cements, which do not generate any exothermicity during setting. The exothermic reactions during the setting of phosphate cements can cause tissue damage during the administration of injectable compositions, and hence a solution to the problem is sought via microwave processing. This solution through microwave exposure is based on the phenomenon that microwave irradiation can remove all water molecules from the alkaline earth phosphate cement paste to temporarily stop the setting reaction while preserving the active precursor phase in the formulation. The setting reaction can be initiated a second time by adding aqueous medium, but without any exothermicity. Second, a special emphasis is placed on using this technique to synthesize magnesium phosphate cements for orthopedic applications with their enhanced mechanical properties and possible uses as drug and protein delivery vehicles. The as-synthesized cements were evaluated for the occurrence of exothermic reactions, setting times, presence of Mg-phosphate phases, compressive strength levels, microstructural features before and after soaking in simulated body fluid (SBF), and in vitro cytocompatibility responses. The major results show that exposure to microwaves solves the exothermicity problem, while simultaneously improving the mechanical performance of hardened cements and reducing the setting times. As expected, the cements are also found to be cytocompatible. Finally, it is observed that this process can be applied to calcium phosphate cement systems (CPCs) as well. Based on the results, this microwave exposure provides a novel technique for the processing of injectable phosphate bone cement compositions.

  4. Sharing clinical information across care settings: the birth of an integrated assessment system

    PubMed Central

    Gray, Leonard C; Berg, Katherine; Fries, Brant E; Henrard, Jean-Claude; Hirdes, John P; Steel, Knight; Morris, John N

    2009-01-01

    Background Population ageing, the emergence of chronic illness, and the shift away from institutional care challenge conventional approaches to assessment systems which traditionally are problem and setting specific. Methods From 2002, the interRAI research collaborative undertook development of a suite of assessment tools to support assessment and care planning of persons with chronic illness, frailty, disability, or mental health problems across care settings. The suite constitutes an early example of a "third generation" assessment system. Results The rationale and development strategy for the suite is described, together with a description of potential applications. To date, ten instruments comprise the suite, each comprising "core" items shared among the majority of instruments and "optional" items that are specific to particular care settings or situations. Conclusion This comprehensive suite offers the opportunity for integrated multi-domain assessment, enabling electronic clinical records, data transfer, ease of interpretation and streamlined training. PMID:19402891

  5. The Geriatric ICF Core Set reflecting health-related problems in community-living older adults aged 75 years and older without dementia: development and validation.

    PubMed

    Spoorenberg, Sophie L W; Reijneveld, Sijmen A; Middel, Berrie; Uittenbroek, Ronald J; Kremer, Hubertus P H; Wynia, Klaske

    2015-01-01

    The aim of the present study was to develop a valid Geriatric ICF Core Set reflecting relevant health-related problems of community-living older adults without dementia. A Delphi study was performed in order to reach consensus (≥70% agreement) on second-level categories from the International Classification of Functioning, Disability and Health (ICF). The Delphi panel comprised 41 older adults, medical and non-medical experts. Content validity of the set was tested in a cross-sectional study including 267 older adults identified as frail or having complex care needs. Consensus was reached for 30 ICF categories in the Delphi study (fourteen Body functions, ten Activities and Participation and six Environmental Factors categories). Content validity of the set was high: the prevalence of all the problems was >10%, except for d530 Toileting. The most frequently reported problems were b710 Mobility of joint functions (70%), b152 Emotional functions (65%) and b455 Exercise tolerance functions (62%). No categories had missing values. The final Geriatric ICF Core Set is a comprehensive and valid set of 29 ICF categories, reflecting the most relevant health-related problems among community-living older adults without dementia. This Core Set may contribute to optimal care provision and support of the older population. Implications for Rehabilitation The Geriatric ICF Core Set may provide a practical tool for gaining an understanding of the relevant health-related problems of community-living older adults without dementia. The Geriatric ICF Core Set may be used in primary care practice as an assessment tool in order to tailor care and support to the needs of older adults. The Geriatric ICF Core Set may be suitable for use in multidisciplinary teams in integrated care settings, since it is based on a broad range of problems in functioning. Professionals should pay special attention to health problems related to mobility and emotional functioning since these are the most prevalent problems in community-living older adults.

  6. Cognitive Task Analysis: Implications for the Theory and Practice of Instructional Design.

    ERIC Educational Resources Information Center

    Dehoney, Joanne

    Cognitive task analysis grew out of efforts by cognitive psychologists to understand problem-solving in a lab setting. It has proved a useful tool for describing expert performance in complex problem solving domains. This review considers two general models of cognitive task analysis and examines the procedures and results of analyses in three…

  7. Hydraulics in civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chadwick, A.; Morfett, J.

    1986-01-01

    This undergraduate text combines fundamental theoretical concepts with design applications to provide coverage of hydraulics in civil engineering. The authors have incorporated the results of research in many areas and have taken advantage of the availability of microcomputers in the presentation and solution of problems. In addition, the text embodies a set of worked examples to illustrate the theoretical concepts, and typical problems.

  8. Reducing Developmental Risk for Emotional/Behavioral Problems: A Randomized Controlled Trial Examining the Tools for Getting Along Curriculum

    ERIC Educational Resources Information Center

    Daunic, Ann P.; Smith, Stephen W.; Garvan, Cynthia W.; Barber, Brian R.; Becker, Mallory K.; Peters, Christine D.; Taylor, Gregory G.; Van Loan, Christopher L.; Li, Wei; Naranjo, Arlene H.

    2012-01-01

    Researchers have demonstrated that cognitive-behavioral intervention strategies--such as social problem solving--provided in school settings can help ameliorate the developmental risk for emotional and behavioral difficulties. In this study, we report the results of a randomized controlled trial of Tools for Getting Along (TFGA), a social…

  9. Case Studies in Critical Ecoliteracy: A Curriculum for Analyzing the Social Foundations of Environmental Problems

    ERIC Educational Resources Information Center

    Turner, Rita; Donnelly, Ryan

    2013-01-01

    This article outlines the features and application of a set of model curriculum materials that utilize eco-democratic principles and humanities-based content to cultivate critical analysis of the cultural foundations of socio-environmental problems. We first describe the goals and components of the materials, then discuss results of their use in…

  10. The Problem of Correspondence of Educational and Professional Standards (Results of Empirical Research)

    ERIC Educational Resources Information Center

    Piskunova, Elena; Sokolova, Irina; Kalimullin, Aydar

    2016-01-01

    In the article, the problem of correspondence of educational standards of higher pedagogical education and teacher professional standards in Russia is actualized. Modern understanding of the quality of vocational education suggests that in the process of education the student develops a set of competencies that will enable him or her to carry out…

  11. Identities and Transformational Experiences for Quantitative Problem Solving: Gender Comparisons of First-Year University Science Students

    ERIC Educational Resources Information Center

    Hudson, Peter; Matthews, Kelly

    2012-01-01

    Women are underrepresented in science, technology, engineering and mathematics (STEM) areas in university settings; however this may be the result of attitude rather than aptitude. There is widespread agreement that quantitative problem-solving is essential for graduate competence and preparedness in science and other STEM subjects. The research…

  12. Navigating Turn-Taking and Conversational Repair in an Online Synchronous Course

    ERIC Educational Resources Information Center

    Earnshaw, Yvonne

    2017-01-01

    In face-to-face conversations, speaker transitions (or hand-offs) are typically seamless. In computer-mediated communication settings, speaker hand-offs can be a bit more challenging. This paper presents the results of a study of audio communication problems that occur in an online synchronous course, and how, and by whom, those problems are…

  13. Teachers' and Students' Preliminary Stages in Physics Problem Solving

    ERIC Educational Resources Information Center

    Mansyur, Jusman

    2015-01-01

    This paper describes the preliminary stages in physics problem-solving related to the use of external representation. This empirical study was carried out using a phenomenographic approach to analyze data from individual thinking-aloud and interviews with 8 senior high school students and 7 physics teachers. The result of this study is a set of…

  14. Investigating High-School Students' Reasoning Strategies when They Solve Linear Equations

    ERIC Educational Resources Information Center

    Huntley, Mary Ann; Marcus, Robin; Kahan, Jeremy; Miller, Jane Lincoln

    2007-01-01

    A cross-curricular structured-probe task-based clinical interview study with 44 pairs of third-year high-school mathematics students, most of whom were high achieving, was conducted to investigate their approaches to a variety of algebra problems. This paper presents results from one problem that involved solving a set of three linear equations of…

  15. How much is enough? The recurrent problem of setting measurable objectives in conservation

    USGS Publications Warehouse

    Tear, T.H.; Kareiva, P.; Angermeier, P.L.; Comer, P.; Czech, B.; Kautz, R.; Landon, L.; Mehlman, D.; Murphy, K.; Ruckelshaus, M.; Scott, J.M.; Wilhere, G.

    2005-01-01

    International agreements, environmental laws, resource management agencies, and environmental nongovernmental organizations all establish objectives that define what they hope to accomplish. Unfortunately, quantitative objectives in conservation are typically set without consistency and scientific rigor. As a result, conservationists are failing to provide credible answers to the question "How much is enough?" This is a serious problem because objectives profoundly shape where and how limited conservation resources are spent, and help to create a shared vision for the future. In this article we develop guidelines to help steer conservation biologists and practitioners through the process of objective setting. We provide three case studies to highlight the practical challenges of objective setting in different social, political, and legal contexts. We also identify crucial gaps in our science, including limited knowledge of species distributions and of large-scale, long-term ecosystem dynamics, that must be filled if we hope to do better than setting conservation objectives through intuition and best guesses.

  16. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the features of applying an algorithm that uses decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. Decomposition reduces the volume of calculations, in particular by opening up the possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. We analyze the results of computational experiments conducted using the decomposition approach; the experiments use a well-known data set for the binary classification problem.

  17. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
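    Models of this kind solve the nonlinear shallow water equations; in one dimension (TUNA-RP itself is two-dimensional) they read

    $$\frac{\partial h}{\partial t} + \frac{\partial (hu)}{\partial x} = 0, \qquad \frac{\partial (hu)}{\partial t} + \frac{\partial}{\partial x}\!\left(hu^2 + \tfrac{1}{2} g h^2\right) = -\,g h \frac{\partial b}{\partial x} + \text{friction terms},$$

    where h is the flow depth, u the depth-averaged velocity, b the bathymetry, and g the gravitational acceleration; the wet-dry algorithm decides which cells carry water as the shoreline moves during inundation.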

  18. Tankless Water Heater

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Kennedy Space Center specialists aided Space, Energy, Time Saving (SETS) Systems, Inc. in working out the problems they encountered with their new electronic "tankless" water heater. The flow switch design suffered intermittent problems. Hiring several testing and engineering firms produced only graphs, printouts, and a large expense, but no solutions. Then through the Kennedy Space Center/State of Florida Technology Outreach Program, SETS was referred to Michael Brooks, a 21-year space program veteran and flowmeter expert. Run throughout Florida to provide technical service to businesses at no cost, the program applies scientific and engineering expertise originally developed for space applications to the Florida business community. Brooks discovered several key problems, resulting in a new design that turned out to be simpler, yielding a 63 percent reduction in labor and material costs over the old design.

  19. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing the setting calculation sequence. A new method for ordering the setting calculations of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), is proposed to solve the problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships between relays, potentially leading to greater iterative calculation workloads in setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationships in the multi-loop network are modeled. The model is translated into communicating sequential processes (CSP) models, and an optimized setting calculation sequence for the multi-loop network is finally computed by tools. A 5-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several examples were then calculated, with results indicating that the method effectively reduces the number of forcibly broken edges in protection setting calculation for multi-loop networks.

  20. Computer-Aided Breast Cancer Diagnosis with Optimal Feature Sets: Reduction Rules and Optimization Techniques.

    PubMed

    Mathieson, Luke; Mendes, Alexandre; Marsden, John; Pond, Jeffrey; Moscato, Pablo

    2017-01-01

    This chapter introduces a new method for knowledge extraction from databases for the purpose of finding a discriminative set of features that is also a robust set for within-class classification. Our method is generic and we introduce it here in the field of breast cancer diagnosis from digital mammography data. The mathematical formalism is based on a generalization of the k-Feature Set problem called (α, β)-k-Feature Set problem, introduced by Cotta and Moscato (J Comput Syst Sci 67(4):686-690, 2003). This method proceeds in two steps: first, an optimal (α, β)-k-feature set of minimum cardinality is identified and then, a set of classification rules using these features is obtained. We obtain the (α, β)-k-feature set in two phases; first a series of extremely powerful reduction techniques, which do not lose the optimal solution, are employed; and second, a metaheuristic search to identify the remaining features to be considered or disregarded. Two algorithms were tested with a public domain digital mammography dataset composed of 71 malignant and 75 benign cases. Based on the results provided by the algorithms, we obtain classification rules that employ only a subset of these features.
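    For intuition, here is a brute-force checker for the defining conditions of an (α, β)-k-feature set on binary data (toy data only; the chapter's reduction rules and metaheuristic exist precisely because such enumeration does not scale):

    ```python
    from itertools import combinations

    def is_ab_k_feature_set(X, y, S, alpha, beta):
        """Check the (alpha, beta)-k-feature-set conditions on binary data:
        every pair of samples from different classes must differ on at
        least alpha features of S, and every pair from the same class
        must agree on at least beta features of S."""
        for i, j in combinations(range(len(X)), 2):
            diff = sum(X[i][f] != X[j][f] for f in S)
            if y[i] != y[j] and diff < alpha:
                return False
            if y[i] == y[j] and len(S) - diff < beta:
                return False
        return True

    # Toy binary data set (illustrative, not the mammography data).
    X = [(1, 0, 1, 0), (1, 1, 1, 0), (0, 0, 0, 1), (0, 1, 0, 1)]
    y = [1, 1, 0, 0]
    print(is_ab_k_feature_set(X, y, S=(0, 2, 3), alpha=2, beta=2))
    ```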

  1. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models keep evolving to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable methods to solve it are metaheuristics. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multiple objectives, we use Pareto optimality (MPSO). The results of MPSO are better than those of plain PSO because the MPSO solution set has a higher probability of containing the optimal solution and is closer to the optimal solution.
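    The fitness evaluation at the heart of any flow shop metaheuristic is the makespan of a job permutation; a minimal sketch, ignoring the limited-wait and hybrid-stage aspects of the full problem:

    ```python
    def flow_shop_makespan(sequence, proc):
        """Completion time of the last job on the last machine for a
        permutation flow shop. proc[j][m] is the processing time of
        job j on machine m; all jobs visit machines in the same order."""
        m_count = len(proc[0])
        finish = [0.0] * m_count            # completion time per machine
        for j in sequence:
            for m in range(m_count):
                start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
                finish[m] = start + proc[j][m]
        return finish[-1]

    # Three jobs on two machines (illustrative processing times).
    proc = [(3, 2), (1, 4), (2, 2)]
    print(flow_shop_makespan([1, 0, 2], proc))   # -> 9.0
    ```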

  2. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
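    Schematically, for sufficiently smooth u the weighted TV functional and the regularized inverse problem take the form

    $$\mathrm{TV}_w(u) = \int_\Omega w\,|\nabla u|\,dx, \qquad \min_u\; D(Au, f) + \lambda\,\mathrm{TV}_w(u),$$

    and since, by duality, $\mathrm{TV}_w(u) = \sup\{\int_\Omega u \operatorname{div} p\,dx : |p(x)| \le w(x)\}$ over smooth compactly supported vector fields p, the minimization can be traded for a saddle-point problem in (u, p) that never needs an explicit integral formula for the relaxed functional. This is a schematic rendering of the strategy, not the paper's precise function space statement.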

  3. Connes' embedding problem and Tsirelson's problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junge, M.; Palazuelos, C.; Navascues, M.

    2011-01-15

    We show that Tsirelson's problem concerning the set of quantum correlations and Connes' embedding problem on finite approximations in von Neumann algebras (known to be equivalent to Kirchberg's QWEP conjecture) are essentially equivalent. Specifically, Tsirelson's problem asks whether the set of bipartite quantum correlations generated between tensor product separated systems is the same as the set of correlations between commuting C*-algebras. Connes' embedding problem asks whether any separable II₁ factor is a subfactor of the ultrapower of the hyperfinite II₁ factor. We show that an affirmative answer to Connes' question implies a positive answer to Tsirelson's. Conversely, a positive answer to a matrix-valued version of Tsirelson's problem implies a positive one to Connes' problem.

  4. Quasilinear parabolic variational inequalities with multi-valued lower-order terms

    NASA Astrophysics Data System (ADS)

    Carl, Siegfried; Le, Vy K.

    2014-10-01

    In this paper, we provide an analytical framework for a multi-valued parabolic variational inequality in a cylindrical domain: find u in a closed and convex subset K of the underlying solution space, together with a selection η of the multi-valued lower-order term evaluated at u, such that the associated parabolic variational inequality holds, where A is a time-dependent quasilinear elliptic operator and the multi-valued function is assumed to be upper semicontinuous only, so that Clarke's generalized gradient is included as a special case. Thus, parabolic variational-hemivariational inequalities are special cases of the problem considered here. The extension of parabolic variational-hemivariational inequalities to the general class of multi-valued problems considered in this paper is not only of disciplinary interest, but is motivated by the needs of applications. The main goals are as follows. First, we provide an existence theory for the above-stated problem under coercivity assumptions. Second, in the noncoercive case, we establish an appropriate sub-supersolution method that allows us to obtain existence, comparison, and enclosure results. Third, the order structure of the solution set enclosed by sub- and supersolutions is revealed. In particular, it is shown that the solution set within the sector of sub-supersolutions is a directed set. As an application, a multi-valued parabolic obstacle problem is treated.

  5. Stratification and sample selection for multicrop experiments. [Arkansas, Kentucky, Michigan, Missouri, Mississippi, Ohio, Wisconsin, Illinois, Indiana, Minnesota, Iowa, Louisiana, Nebraska, South Dakota, and North Dakota

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Hixson, M. M.; Davis, B. J.; Bauer, M. E.

    1978-01-01

    The author has identified the following significant results. A stratification was performed and sample segments were selected for an initial investigation of multicrop problems, in order to support the development and evaluation of procedures for using LACIE and other technologies for the classification of corn and soybeans, to identify factors likely to affect classification performance, and to evaluate problems encountered and techniques applicable to the crop estimation problem in foreign countries. Two types of samples, low density and high density, supporting these requirements were selected as a research data set for an initial evaluation of technical issues. Considering the geographic locations of the strata, the system appears to be logical, and the various segments seem to represent different conditions. This result supports not only the variables and methodology employed in the stratification, but also the validity of the data sets employed.

  6. Aggression and violence in healthcare and its impact on nursing students: A narrative review of the literature.

    PubMed

    Hopkins, Martin; Fetherston, Catherine M; Morrison, Paul

    2018-03-01

    Aggression and violence are a significant social problem in many countries and an increasing problem in healthcare settings, in which nurses are particularly vulnerable. The literature suggests that aggression and violence have a significant negative impact upon nurses, and potentially upon nursing students, and can result in these staff members experiencing stress as a direct result of these adverse events. The literature also suggests that there is confusion over what constitutes aggression and violence in the workplace, and therefore a true lack of understanding of the scale of the problem as it relates to nursing students. This review proposes that nursing students are indeed at significant risk of aggression and violence in the clinical setting, which has the potential to significantly affect their role as novice carers. Furthermore, because aggression and violence can manifest negative stress responses in individuals, the potential for nursing students to cope with stressful situations is also examined.

  7. Faster PET reconstruction with a stochastic primal-dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane

    2017-08-01

    Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints, and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the use of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem at rates similar to those of PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations. This makes significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
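    For reference, PDHG applied to $\min_x G(x) + F(Kx)$ iterates, in the standard Chambolle-Pock form,

    $$y^{n+1} = \operatorname{prox}_{\sigma F^*}\big(y^n + \sigma K \bar{x}^n\big), \qquad x^{n+1} = \operatorname{prox}_{\tau G}\big(x^n - \tau K^* y^{n+1}\big), \qquad \bar{x}^{n+1} = x^{n+1} + \theta\,(x^{n+1} - x^n),$$

    whereas SPDHG updates only a randomly selected subset of the dual variables y per iteration, which is what saves most of the operator evaluations.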

  8. A method to estimate the additional uncertainty in gap-filled NEE resulting from long gaps in the CO2 flux record

    Treesearch

    Andrew D. Richardson; David Y. Hollinger

    2007-01-01

    Missing values in any data set create problems for researchers. The process by which missing values are replaced, and the data set is made complete, is generally referred to as imputation. Within the eddy flux community, the term "gap filling" is more commonly applied. A major challenge is that random errors in measured data result in uncertainty in the gap-...
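    One way to estimate this additional uncertainty, in the spirit of the abstract, is to punch artificial gaps into otherwise-complete records, fill them, and score the filled values against the withheld truth. A minimal sketch with synthetic data and plain interpolation standing in for a real gap-filling model:

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Synthetic half-hourly "flux" record: diurnal cycle plus noise.
    t = np.arange(48 * 60)
    flux = pd.Series(np.sin(2 * np.pi * t / 48)
                     + 0.3 * rng.normal(size=t.size))

    # Repeatedly withhold a ~2-day block, fill it, and record the error;
    # the spread of these errors estimates the gap-filling uncertainty.
    errors = []
    for start in rng.integers(1, t.size - 97, size=200):
        gappy = flux.copy()
        gappy.iloc[start:start + 96] = np.nan
        filled = gappy.interpolate()          # stand-in gap filler
        errors.append((filled - flux).iloc[start:start + 96].abs().mean())
    print(f"mean |error| over artificial gaps: {np.mean(errors):.3f}")
    ```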

  9. Linking Family Characteristics with Poor Peer Relations: The Mediating Role of Conduct Problems

    PubMed Central

    Bierman, Karen Linn; Smoot, David L.

    2012-01-01

    Parent, teacher, and peer ratings were collected for 75 grade school boys to test the hypothesis that certain family interaction patterns would be associated with poor peer relations. Path analyses provided support for a mediational model, in which punitive and ineffective discipline was related to child conduct problems in home and school settings which, in turn, predicted poor peer relations. Further analyses suggested that distinct subgroups of boys could be identified who exhibited conduct problems at home only, at school only, in both settings, or in neither setting. Boys who exhibited cross-situational conduct problems were more likely to experience multiple concurrent problems (e.g., in both home and school settings) and were more likely than any other group to experience poor peer relations. However, only about one-third of the boys with poor peer relations in this sample exhibited problem profiles consistent with the proposed model (e.g., experienced high rates of punitive/ineffective home discipline and exhibited conduct problems in home and school settings), suggesting that the proposed model reflects one common (but not exclusive) pathway to poor peer relations. PMID:1865049

  10. A comparative study of diffraction of shallow-water waves by high-level IGN and GN equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, B.B.; Ertekin, R.C.; College of Shipbuilding Engineering, Harbin Engineering University, 150001 Harbin

    2015-02-15

    This work is on the nonlinear diffraction analysis of shallow-water waves, impinging on submerged obstacles, by two related theories, namely the classical Green–Naghdi (GN) equations and the Irrotational Green–Naghdi (IGN) equations, both sets of equations being at high levels and derived for incompressible and inviscid flows. Recently, the high-level Green–Naghdi equations have been applied to some wave transformation problems. The high-level IGN equations have also been used in the last decade to study certain wave propagation problems. However, past works on these theories used different numerical methods to solve these nonlinear and unsteady sets of differential equations, and at different levels. Moreover, different physical problems have been solved in the past. Therefore, it has not been possible to understand the differences produced by these two sets of theories and their range of applicability so far. We are thus motivated to make a direct comparison of the results produced by these theories by use of the same numerical method to solve physically the same wave diffraction problems. We focus on comparing these two theories by using similar codes; only the equations used are different, but other parts of the codes, such as the wave-maker, damping zone, discretization method, matrix solver, etc., are exactly the same. This way, we eliminate many potential sources of differences that could be produced by the solution of different equations. The physical problems include the presence of various submerged obstacles that can be used for example as breakwaters or to represent the continental shelf. A numerical wave tank is created by placing a wavemaker on one end and a wave-absorbing beach on the other. The nonlinear and unsteady sets of differential equations are solved by the finite-difference method. The results are compared with different equations as well as with the available experimental data.

  11. A comparative study of diffraction of shallow-water waves by high-level IGN and GN equations

    NASA Astrophysics Data System (ADS)

    Zhao, B. B.; Ertekin, R. C.; Duan, W. Y.

    2015-02-01

    This work is on the nonlinear diffraction analysis of shallow-water waves, impinging on submerged obstacles, by two related theories, namely the classical Green-Naghdi (GN) equations and the Irrotational Green-Naghdi (IGN) equations, both sets of equations being at high levels and derived for incompressible and inviscid flows. Recently, the high-level Green-Naghdi equations have been applied to some wave transformation problems. The high-level IGN equations have also been used in the last decade to study certain wave propagation problems. However, past works on these theories used different numerical methods to solve these nonlinear and unsteady sets of differential equations, and at different levels. Moreover, different physical problems have been solved in the past. Therefore, it has not been possible to understand the differences produced by these two sets of theories and their range of applicability so far. We are thus motivated to make a direct comparison of the results produced by these theories by use of the same numerical method to solve physically the same wave diffraction problems. We focus on comparing these two theories by using similar codes; only the equations used are different, but other parts of the codes, such as the wave-maker, damping zone, discretization method, matrix solver, etc., are exactly the same. This way, we eliminate many potential sources of differences that could be produced by the solution of different equations. The physical problems include the presence of various submerged obstacles that can be used for example as breakwaters or to represent the continental shelf. A numerical wave tank is created by placing a wavemaker on one end and a wave-absorbing beach on the other. The nonlinear and unsteady sets of differential equations are solved by the finite-difference method. The results are compared with different equations as well as with the available experimental data.

  12. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
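
    The decomposition idea can be illustrated with a minimal consensus sketch in the augmented-Lagrangian (ADMM) style: two quadratic misfits stand in for the body-wave and surface-wave components, each is minimized separately, and multiplier updates steer the component models toward a common solution. Every concrete choice below (operators, data, penalty weight) is a toy assumption, not the authors' Eurasia setup.

        # Toy consensus decomposition: min_m phi1(m) + phi2(m) is rewritten with
        # copies m1 = m2 = m and solved by alternating component solves and
        # multiplier updates (scaled-form ADMM). phi_j(m) = 0.5*||G_j m - d_j||^2
        # stand in for the travel-time and dispersion misfits; data are synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 20
        G1, G2 = rng.standard_normal((30, n)), rng.standard_normal((25, n))
        m_true = rng.standard_normal(n)
        d1, d2 = G1 @ m_true, G2 @ m_true

        rho = 1.0                          # augmented-Lagrangian penalty weight
        m = np.zeros(n)                    # consensus (common) model
        u1, u2 = np.zeros(n), np.zeros(n)  # scaled Lagrange multipliers

        for it in range(100):
            # component solves (each could run separately, even on another machine):
            # m_j = argmin 0.5*||G_j m_j - d_j||^2 + (rho/2)*||m_j - m + u_j||^2
            m1 = np.linalg.solve(G1.T @ G1 + rho * np.eye(n),
                                 G1.T @ d1 + rho * (m - u1))
            m2 = np.linalg.solve(G2.T @ G2 + rho * np.eye(n),
                                 G2.T @ d2 + rho * (m - u2))
            # consensus update: average the multiplier-shifted component models
            m = 0.5 * (m1 + u1 + m2 + u2)
            # multiplier updates steer the copies toward the common model
            u1 += m1 - m
            u2 += m2 - m

        print("distance to true model:", np.linalg.norm(m - m_true))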

  13. The program LOPT for least-squares optimization of energy levels

    NASA Astrophysics Data System (ADS)

    Kramida, A. E.

    2011-02-01

    The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem.
    Program summary
    Program title: LOPT
    Catalogue identifier: AEHM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 19 254
    No. of bytes in distributed program, including test data, etc.: 427 839
    Distribution format: tar.gz
    Programming language: Perl v.5
    Computer: PC, Mac, Unix workstations
    Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX)
    RAM: 3 Mwords or more
    Word size: 32 or 64
    Classification: 2.2
    Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals.
    Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution.
    Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N and depends on the computer.
    Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision.
    Running time: 1 s for N=100, or 60 s for N=400 on a typical PC.
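
    The core least-squares problem such a program solves is easy to state in code: each measured transition contributes one weighted linear equation E_upper - E_lower = w, and pinning the ground level at zero makes the system determinate. The tiny level scheme below is invented for illustration; LOPT itself adds covariance-based Ritz uncertainties and careful rounding.

        # Sketch of the underlying weighted least-squares problem: minimize
        # sum_k ((E[up_k] - E[lo_k] - w_k) / s_k)^2 over level energies E,
        # with the ground level pinned to zero. The level scheme is invented.
        import numpy as np

        levels = ["ground", "a", "b", "c"]
        # (upper level, lower level, measured wavenumber, uncertainty)
        lines = [(1, 0, 1000.2, 0.1), (2, 0, 2499.8, 0.1),
                 (2, 1, 1499.9, 0.2), (3, 1, 3000.3, 0.2), (3, 2, 1500.0, 0.1)]

        A = np.zeros((len(lines), len(levels)))
        w = np.zeros(len(lines))
        for k, (up, lo, wn, s) in enumerate(lines):
            A[k, up], A[k, lo], w[k] = 1.0 / s, -1.0 / s, wn / s  # weighted rows

        A = A[:, 1:]                # pin E[ground] = 0 by dropping its column
        E, *_ = np.linalg.lstsq(A, w, rcond=None)
        for name, e in zip(levels[1:], E):
            print(f"E({name}) = {e:.3f}")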

  14. Completable scheduling: An integrated approach to planning and scheduling

    NASA Technical Reports Server (NTRS)

    Gervasio, Melinda T.; Dejong, Gerald F.

    1992-01-01

    The planning problem has traditionally been treated separately from the scheduling problem. However, as more realistic domains are tackled, it becomes evident that the problem of deciding on an ordered set of tasks to achieve a set of goals cannot be treated independently of the problem of actually allocating resources to the tasks. Doing so would result in losing the robustness and flexibility needed to deal with imperfectly modeled domains. Completable scheduling is an approach which integrates the two problems by allowing an a priori planning module to defer particular planning decisions, and consequently the associated scheduling decisions, until execution time. This allows a completable scheduling system to maximize plan flexibility by allowing runtime information to be taken into consideration when making planning and scheduling decisions. Furthermore, through the criteria of achievability placed on deferred decisions, a completable scheduling system is able to retain much of the goal-directedness and guarantees of achievement afforded by a priori planning. The completable scheduling approach is further enhanced by the use of contingent explanation-based learning, which enables a completable scheduling system to learn general completable plans from examples and improve its performance through experience. Initial experimental results show that completable scheduling outperforms classical scheduling as well as pure reactive scheduling in a simple scheduling domain.

  15. A Review On Missing Value Estimation Using Imputation Algorithm

    NASA Astrophysics Data System (ADS)

    Armina, Roslan; Zain, Azlan Mohd; Azizah Ali, Nor; Sallehuddin, Roselina

    2017-09-01

    The presence of missing values in a data set has always been a major problem for precise prediction. A method for imputing missing values needs to minimize the effect of incomplete data sets on the prediction model. Many algorithms have been proposed as countermeasures to the missing value problem. In this review, we provide a comprehensive analysis of existing imputation algorithms, focusing on the techniques used and on the use of global or local information of the data set for missing value estimation. In addition, validation methods for imputation results and ways to measure the performance of imputation algorithms are described. The objective of this review is to highlight possible improvements to existing methods, and it is hoped that this review gives readers a better understanding of trends in imputation methods.
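
    Two of the points the review covers, a simple imputation using only global information and the usual mask-and-measure validation, can be illustrated in a few lines. The data set, missingness rate and error metric below are illustrative assumptions.

        # Column-mean imputation (a "global information" method) validated by
        # masking known entries and measuring reconstruction error (RMSE).
        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.standard_normal((100, 5)) + np.arange(5)  # complete data set
        mask = rng.random(X.shape) < 0.1                  # hide 10% of entries
        X_missing = np.where(mask, np.nan, X)

        col_means = np.nanmean(X_missing, axis=0)         # global statistic
        X_imputed = np.where(np.isnan(X_missing), col_means, X_missing)

        rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
        print("imputation RMSE on masked entries:", rmse)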

  16. How emotions affect logical reasoning: evidence from experiments with mood-manipulated participants, spider phobics, and people with exam anxiety.

    PubMed

    Jung, Nadine; Wranke, Christina; Hamburger, Kai; Knauff, Markus

    2014-01-01

    Recent experimental studies show that emotions can have a significant effect on the way we think, decide, and solve problems. This paper presents a series of four experiments on how emotions affect logical reasoning. In two experiments, different groups of participants first had to pass a manipulated intelligence test. Their emotional state was altered by giving them feedback that they had performed excellently, poorly, or at an average level. They then completed a set of logical inference problems (with if p, then q statements), either in a Wason selection task paradigm or as problems from the logical propositional calculus. Problem content also had either a positive, negative or neutral emotional value. Results showed a clear effect of emotions on reasoning performance. Participants in negative mood performed worse than participants in positive mood, but both groups were outperformed by the neutral mood reasoners. Problem content also had an effect on reasoning performance. In a second set of experiments, participants with exam or spider phobia solved logical problems with contents that were related to their anxiety disorder (spiders or exams). Spider phobic participants' performance was lowered by the spider content, while exam-anxious participants were not affected by the exam-related problem content. Overall, unlike some previous studies, no evidence was found that performance is improved when emotion and content are congruent. These results have consequences for cognitive reasoning research and also for cognitively oriented psychotherapy and the treatment of disorders like depression and anxiety.

  17. How emotions affect logical reasoning: evidence from experiments with mood-manipulated participants, spider phobics, and people with exam anxiety

    PubMed Central

    Jung, Nadine; Wranke, Christina; Hamburger, Kai; Knauff, Markus

    2014-01-01

    Recent experimental studies show that emotions can have a significant effect on the way we think, decide, and solve problems. This paper presents a series of four experiments on how emotions affect logical reasoning. In two experiments, different groups of participants first had to pass a manipulated intelligence test. Their emotional state was altered by giving them feedback that they had performed excellently, poorly, or at an average level. They then completed a set of logical inference problems (with if p, then q statements), either in a Wason selection task paradigm or as problems from the logical propositional calculus. Problem content also had either a positive, negative or neutral emotional value. Results showed a clear effect of emotions on reasoning performance. Participants in negative mood performed worse than participants in positive mood, but both groups were outperformed by the neutral mood reasoners. Problem content also had an effect on reasoning performance. In a second set of experiments, participants with exam or spider phobia solved logical problems with contents that were related to their anxiety disorder (spiders or exams). Spider phobic participants' performance was lowered by the spider content, while exam-anxious participants were not affected by the exam-related problem content. Overall, unlike some previous studies, no evidence was found that performance is improved when emotion and content are congruent. These results have consequences for cognitive reasoning research and also for cognitively oriented psychotherapy and the treatment of disorders like depression and anxiety. PMID:24959160

  18. Quantum mechanics over sets

    NASA Astrophysics Data System (ADS)

    Ellerman, David

    2014-03-01

    In models of QM over finite fields (e.g., Schumacher's "modal quantum theory" MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets, resulting in quantum mechanics over sets, or QM/sets. This gives a full probability calculus (unlike MQT with only zero-one modalities) that leads to a full theory of QM/sets, including "logical" models of the double-slit experiment, Bell's Theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices, as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations, in contrast to the one function evaluation required by the quantum algorithm. This is quantum speedup, but with all the calculations over Z2, just like classical computing. This shows definitively that the source of quantum speedup is not the greater power of computing over the complex numbers, and confirms the idea that the source is superposition.

  19. Inducing mental set constrains procedural flexibility and conceptual understanding in mathematics.

    PubMed

    DeCaro, Marci S

    2016-10-01

    An important goal in mathematics is to flexibly use and apply multiple, efficient procedures to solve problems and to understand why these procedures work. One factor that may limit individuals' ability to notice and flexibly apply strategies is the mental set induced by the problem context. Undergraduate (N = 41, Experiment 1) and fifth- and sixth-grade students (N = 87, Experiment 2) solved mathematical equivalence problems in one of two set-inducing conditions. Participants in the complex-first condition solved problems without a repeated addend on both sides of the equal sign (e.g., 7 + 5 + 9 = 3 + _), which required multistep strategies. Then these students solved problems with a repeated addend (e.g., 7 + 5 + 9 = 7 + _), for which a shortcut strategy could be readily used (i.e., adding 5 + 9). Participants in the shortcut-first condition solved the same problem set but began with the shortcut problems. Consistent with laboratory studies of mental set, participants in the complex-first condition were less likely to use the more efficient shortcut strategy when possible. In addition, these participants were less likely to demonstrate procedural flexibility and conceptual understanding on a subsequent assessment of mathematical equivalence knowledge. These findings suggest that certain problem-solving contexts can help or hinder both flexibility in strategy use and deeper conceptual thinking about the problems.

  20. Evolutionary fuzzy ARTMAP neural networks for classification of semiconductor defects.

    PubMed

    Tan, Shing Chiang; Watada, Junzo; Ibrahim, Zuwairie; Khalid, Marzuki

    2015-05-01

    Wafer defect detection using an intelligent system is an approach to quality improvement in semiconductor manufacturing that aims to enhance process stability, increase production capacity, and improve yields. Occasionally, only a few records indicating defective units are available, and they form a minority group in a large database. Such a situation leads to an imbalanced data set problem, which poses a great challenge for machine-learning techniques to obtain effective solutions. In addition, the database may comprise overlapping samples of different classes. This paper introduces two models of evolutionary fuzzy ARTMAP (FAM) neural networks to deal with imbalanced data set problems in semiconductor manufacturing operations. In particular, FAM models and hybrid genetic algorithms are integrated in the proposed evolutionary artificial neural networks (EANNs) to classify an imbalanced data set. In addition, one of the proposed EANNs incorporates a facility to learn overlapping samples of different classes from the imbalanced data environment. The classification results of the proposed evolutionary FAM neural networks are presented, compared, and analyzed using several classification metrics. The outcomes positively indicate the effectiveness of the proposed networks in handling classification problems with imbalanced data sets.

  1. Cartesian control of redundant robots

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1989-01-01

    A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between the Cartesian control input F ∈ R^m and the robot actuator torque T ∈ R^n (for redundant robots, m < n). The underdetermined nature of the F → T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F → T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F → T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four degree of freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
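
    The structure of such an underdetermined map can be sketched directly: every torque of the form T = J^T F + (I - J^+ J) T0 produces the same end-effector force F, so the null-space term is free to serve a secondary objective such as improving dynamic response. The Jacobian and force below are random stand-ins, not a robot model or the paper's dynamically optimal map.

        # Every torque T = J^T F + (I - J^+ J) T0 realizes the same Cartesian
        # force F; the null-space term (I - J^+ J) T0 is the redundancy that a
        # controller can spend on secondary objectives. Random stand-in data.
        import numpy as np

        rng = np.random.default_rng(3)
        m, n = 2, 4                        # task dimension < joint dimension
        J = rng.standard_normal((m, n))    # task Jacobian (full row rank here)
        F = np.array([1.0, -0.5])          # desired Cartesian force

        T_particular = J.T @ F                     # one torque realizing F
        N = np.eye(n) - np.linalg.pinv(J) @ J      # projector onto null(J)
        T0 = rng.standard_normal(n)                # secondary-objective torque
        T = T_particular + N @ T0

        # both torques map back to the same end-effector force
        recover = np.linalg.pinv(J.T)
        print(recover @ T_particular, recover @ T)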

  2. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  3. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  4. Evaluation of a collaborative model: a case study analysis of watershed planning in the intermountain west

    Treesearch

    Gary Bentrup

    2001-01-01

    Collaborative planning processes have become increasingly popular for addressing environmental planning issues, resulting in a number of conceptual models for collaboration. A model proposed by Selin and Chavez suggests that collaboration emerges from a series of antecedents and then proceeds sequentially through problem-setting, direction-setting, implementation, and...

  5. A problem in non-linear Diophantine approximation

    NASA Astrophysics Data System (ADS)

    Harrap, Stephen; Hussain, Mumtaz; Kristensen, Simon

    2018-05-01

    In this paper we obtain the Lebesgue and Hausdorff measure results for the set of vectors satisfying infinitely many fully non-linear Diophantine inequalities. The set is associated with a class of linear inhomogeneous partial differential equations whose solubility depends on a certain Diophantine condition. The failure of the Diophantine condition guarantees the existence of a smooth solution.

  6. Adapting Evidence-Based Mental Health Treatments in Community Settings: Preliminary Results from a Partnership Approach

    ERIC Educational Resources Information Center

    Southam-Gerow, Michael A.; Hourigan, Shannon E.; Allin, Robert B., Jr.

    2009-01-01

    This article describes the application of a university-community partnership model to the problem of adapting evidence-based treatment approaches in a community mental health setting. Background on partnership research is presented, with consideration of methodological and practical issues related to this kind of research. Then, a rationale for…

  7. Implementing a Structured Reading Program in an Afterschool Setting: Problems and Potential Solutions

    ERIC Educational Resources Information Center

    Hartry, Ardice; Fitzgerald, Robert; Porter, Kristie

    2008-01-01

    In this article, Ardice Hartry, Robert Fitzgerald, and Kristie Porter present results from their implementation study of a structured reading program for fourth, fifth, and sixth graders in an afterschool setting. As the authors explain, schools and districts often view an extended school day as a promising way to address the literacy needs of…

  8. [Development of a software standardizing optical density with operation settings related to several limitations].

    PubMed

    Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei

    2012-12-01

    To develop software that standardizes optical density and normalizes the procedures and results of standardization, in order to effectively solve several problems generated during the standardization of indirect ELISA results. The software was designed based on the I-STOD method, with operation settings to solve the problems that one might encounter during standardization. Matlab GUI was used as the development tool. The software was tested with the results of the detection of sera of persons from schistosomiasis japonica endemic areas. I-STOD V1.0 (WINDOWS XP/WIN 7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas were used to examine the operational effects of the I-STOD V1.0 software. The results indicated that the software successfully overcame several problems, including the reliability of the standard curve, the applicable scope of samples, and the determination of dilution for samples outside that scope, so that I-STOD was performed more conveniently and the results of standardization were more consistent. I-STOD V1.0 is professional software based on I-STOD. It can be easily operated and can effectively standardize the test results of indirect ELISA.

  9. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance energy utilization efficiency, given that the energy harvested from the environment is limited and unstable. In this paper, to overcome the energy shortage of wireless devices when transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies that respond to data requests as soon as possible by encouraging data sharing among data requests and reducing redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.

  10. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling policies in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance energy utilization efficiency, given that the energy harvested from the environment is limited and unstable. In this paper, to overcome the energy shortage of wireless devices when transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies that respond to data requests as soon as possible by encouraging data sharing among data requests and reducing redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  11. Coevolutionary Free Lunches

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Macready, William G.

    2005-01-01

    Recent work on the mathematical foundations of optimization has begun to uncover its rich structure. In particular, the "No Free Lunch" (NFL) theorems state that any two algorithms are equivalent when their performance is averaged across all possible problems. This highlights the need for exploiting problem-specific knowledge to achieve better than random performance. In this paper we present a general framework covering more search scenarios. In addition to the optimization scenarios addressed in the NFL results, this framework covers multi-armed bandit problems and evolution of multiple co-evolving players. As a particular instance of the latter, it covers "self-play" problems. In these problems the set of players work together to produce a champion, who then engages one or more antagonists in a subsequent multi-player game. In contrast to the traditional optimization case where the NFL results hold, we show that in self-play there are free lunches: in coevolution some algorithms have better performance than other algorithms, averaged across all possible problems. We consider the implications of these results to biology where there is no champion.

  12. A computer vision system for the recognition of trees in aerial photographs

    NASA Technical Reports Server (NTRS)

    Pinz, Axel J.

    1991-01-01

    Increasing problems of forest damage in Central Europe set the demand for an appropriate forest damage assessment tool. The Vision Expert System (VES) is presented which is capable of finding trees in color infrared aerial photographs. Concept and architecture of VES are discussed briefly. The system is applied to a multisource test data set. The processing of this multisource data set leads to a multiple interpretation result for one scene. An integration of these results will provide a better scene description by the vision system. This is achieved by an implementation of Steven's correlation algorithm.

  13. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  14. Preschoolers' Cooperative Problem Solving: Integrating Play and Problem Solving

    ERIC Educational Resources Information Center

    Ramani, Geetha B.; Brownell, Celia A.

    2014-01-01

    Cooperative problem solving with peers plays a central role in promoting children's cognitive and social development. This article reviews research on cooperative problem solving among preschool-age children in experimental settings and social play contexts. Studies suggest that cooperative interactions with peers in experimental settings are…

  15. TOUGH Simulations of the Updegraff's Set of Fluid and Heat Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moridis, G.J.; Pruess

    1992-11-01

    The TOUGH code [Pruess, 1987] for two-phase flow of water, air, and heat in permeable media has been exercised on a suite of test problems originally selected and simulated by C. D. Updegraff [1989]. These include five 'verification' problems for which analytical or numerical solutions are available, and three 'validation' problems that model laboratory fluid and heat flow experiments. All problems could be run without any code modifications. Good and efficient numerical performance, as well as accurate results, were obtained throughout. Additional code verification and validation problems from the literature are briefly summarized, and suggestions are given for proper applications of TOUGH and related codes.

  16. High School Teachers' Problem Solving Activities to Review and Extend Their Mathematical and Didactical Knowledge

    ERIC Educational Resources Information Center

    Santos-Trigo, Manuel; Barrera-Mora, Fernando

    2011-01-01

    The study documents the extent to which high school teachers reflect on their need to revise and extend their mathematical and practicing knowledge. In this context, teachers worked on a set of tasks as a part of an inquiring community that promoted the use of different computational tools in problem solving approaches. Results indicated that the…

  17. Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach

    NASA Technical Reports Server (NTRS)

    Chien, S.; Gratch, J.

    1994-01-01

    One drawback to using constraint propagation in planning and scheduling systems is that, when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.

  18. Solution of the Generalized Noah's Ark Problem.

    PubMed

    Billionnet, Alain

    2013-01-01

    The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point in moving from a nonlinear program in binary variables to a mixed-integer linear program is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate the potential of the approach. Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all the cases the average guarantee varies from 0% to 1.20%.
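
    The linearization trick is simple to demonstrate: since log is concave, it lies below each of its tangents, so "maximize a sum of logs" can be replaced by a linear objective with one linear constraint per tangent (y <= t_k(x)). The sketch below only measures the quality of the tangent envelope; the grid of tangent points is an arbitrary choice.

        # log(x) is concave, so it lies below each tangent t_k(x) = log(x_k)
        # + (x - x_k)/x_k; the min over a few tangents is a piecewise-linear
        # over-approximation usable as linear constraints y <= t_k(x) in a MILP.
        import numpy as np

        xs = np.linspace(0.05, 1.0, 8)      # tangent points (arbitrary grid)
        x = np.linspace(0.05, 1.0, 200)

        tangents = np.array([np.log(xk) + (x - xk) / xk for xk in xs])
        envelope = tangents.min(axis=0)     # lower envelope of the tangents

        print("max over-approximation error with 8 tangents:",
              np.max(envelope - np.log(x)))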

  19. Cooperative parallel adaptive neighbourhood search for the disjunctively constrained knapsack problem

    NASA Astrophysics Data System (ADS)

    Quan, Zhe; Wu, Lei

    2017-09-01

    This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.

  20. Lipschitz regularity for integro-differential equations with coercive Hamiltonians and application to large time behavior

    NASA Astrophysics Data System (ADS)

    Barles, Guy; Ley, Olivier; Topp, Erwin

    2017-02-01

    In this paper, we provide suitable adaptations of the ‘weak version of the Bernstein method’ introduced by the first author in 1991, in order to obtain Lipschitz regularity results and Lipschitz estimates for nonlinear integro-differential elliptic and parabolic equations set in the whole space. Our interest is in obtaining such Lipschitz results for possibly degenerate equations, or for equations which are indeed ‘uniformly elliptic’ (maybe in the nonlocal sense) but which do not satisfy the usual ‘growth condition’ on the gradient term that allows one to use (for example) the Ishii-Lions method. We treat the case of a model equation with a superlinear coercivity on the gradient term, which has a leading role in the equation. This regularity result, together with the comparison principle provided for the problem, allows us to obtain the ergodic large time behavior of the evolution problem in the periodic setting.

  1. On local search for bi-objective knapsack problems.

    PubMed

    Liefooghe, Arnaud; Paquete, Luís; Figueira, José Rui

    2013-01-01

    In this article, a local search approach is proposed for three variants of the bi-objective binary knapsack problem, with the aim of maximizing the total profit and minimizing the total weight. First, an experimental study on a given structural property of connectedness of the efficient set is conducted. Based on this property, a local search algorithm is proposed and its performance is compared to exact algorithms in terms of runtime and quality metrics. The experimental results indicate that this simple local search algorithm is able to find a representative set of optimal solutions in most of the cases, and in much less time than exact algorithms.

  2. A synchronous game for binary constraint systems

    NASA Astrophysics Data System (ADS)

    Kim, Se-Jin; Paulsen, Vern; Schafhauser, Christopher

    2018-03-01

    Recently, Slofstra proved that the set of quantum correlations is not closed. We prove that the set of synchronous quantum correlations is not closed, which implies his result, by giving an example of a synchronous game that has a perfect quantum approximate strategy but no perfect quantum strategy. We also exhibit a graph for which the quantum independence number and the quantum approximate independence number are different. We prove new characterisations of synchronous quantum approximate correlations and synchronous quantum spatial correlations. We solve the synchronous approximation problem of Dykema and the second author, which yields a new equivalence of Connes' embedding problem in terms of synchronous correlations.

  3. Computing aggregate properties of preimages for 2D cellular automata.

    PubMed

    Beer, Randall D

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm, incremental aggregation, that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.

  4. Study on Heat Transfer Agent Models of Transmission Line and Transformer

    NASA Astrophysics Data System (ADS)

    Wang, B.; Zhang, P. P.

    2018-04-01

    When heat transfer simulation is used to study the dynamic overloading of transmission lines and transformers, a mathematical expression of the heat transfer must be established. However, this formula is a nonlinear differential equation or equation set, and general solutions are not easy to obtain. To address this problem, several temperature change processes caused by different initial conditions are calculated from the differential equation and equation set. New agent models are developed according to the characteristics of the different temperature change processes. The results show that the agent models have high precision and can solve the problem that the original equations cannot be directly applied in some practical engineering settings.

  5. Computing aggregate properties of preimages for 2D cellular automata

    NASA Astrophysics Data System (ADS)

    Beer, Randall D.

    2017-11-01

    Computing properties of the set of precursors of a given configuration is a common problem underlying many important questions about cellular automata. Unfortunately, such computations quickly become intractable in dimension greater than one. This paper presents an algorithm—incremental aggregation—that can compute aggregate properties of the set of precursors exponentially faster than naïve approaches. The incremental aggregation algorithm is demonstrated on two problems from the two-dimensional binary Game of Life cellular automaton: precursor count distributions and higher-order mean field theory coefficients. In both cases, incremental aggregation allows us to obtain new results that were previously beyond reach.

  6. Stochastic Set-Based Particle Swarm Optimization Based on Local Exploration for Solving the Carpool Service Problem.

    PubMed

    Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia

    2016-08-01

    The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three notions to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSO, the S-PSO and binary PSO (BPSO), as well as a genetic algorithm (GA), are compared and examined using test benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results that meet the optimization objectives of the CSP.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
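
    The geometric idea behind Stride Search can be sketched in a few lines: the latitudinal stride is fixed by the region radius, while the longitudinal stride widens toward the poles so that every search region covers roughly the same physical area. The field, radius and detection criterion below are toy assumptions, and a real implementation would examine every grid point inside each region rather than a single representative point.

        # Stride Search sketch: a fixed physical region radius fixes the
        # latitudinal stride, while the longitudinal stride widens toward the
        # poles. A real implementation tests every grid point inside each
        # region; this toy version tests one representative point per region.
        import numpy as np

        rng = np.random.default_rng(4)
        nlat, nlon = 91, 180
        lats = np.linspace(-90, 90, nlat)
        lons = np.linspace(0, 360, nlon, endpoint=False)
        field = rng.random((nlat, nlon))    # stand-in for, e.g., vorticity

        R_earth, radius = 6371.0, 500.0     # km; the region radius is fixed
        dlat = np.degrees(radius / R_earth) # latitudinal stride, constant

        lat = -80.0
        while lat <= 80.0:
            # circles of latitude shrink, so the longitudinal stride grows
            dlon = np.degrees(radius / (R_earth * np.cos(np.radians(lat))))
            lon = 0.0
            while lon < 360.0:
                i = np.argmin(np.abs(lats - lat))
                j = np.argmin(np.abs(lons - lon))
                if field[i, j] > 0.999:     # toy detection criterion
                    print(f"candidate near lat={lat:.1f}, lon={lon:.1f}")
                lon += dlon
            lat += dlat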

  8. Development of the 3-SET 4P questionnaire for evaluating former ICU patients' physical and psychosocial problems over time: a pilot study.

    PubMed

    Akerman, Eva; Fridlund, Bengt; Ersson, Anders; Granberg-Axéll, Anetth

    2009-04-01

    Current studies reveal a lack of consensus for the evaluation of physical and psychosocial problems after ICU stay and their changes over time. The aim was to develop and evaluate the validity and reliability of a questionnaire for assessing physical and psychosocial problems over time for patients following ICU recovery. Thirty-nine patients completed the questionnaire, 17 were retested. The questionnaire was constructed in three sets: physical problems, psychosocial problems and follow-up care. Face and content validity were tested by nurses, researchers and patients. The questionnaire showed good construct validity in all three sets and had strong factor loadings (explained variance >70%, factor loadings >0.5) for all three sets. There was good concurrent validity compared with the SF-12 (r_s > 0.5). Internal consistency was shown to be reliable (Cronbach's alpha 0.70-0.85). Stability reliability on retesting was good for the physical and psychosocial sets (r_s > 0.5). The 3-set 4P questionnaire was a first step in developing an instrument for assessment of former ICU patients' problems over time. The sample size was small and thus, further studies are needed to confirm these findings.

  9. Measuring Performance Of Salaried Salespeople

    ERIC Educational Resources Information Center

    Riola, Peter W.

    1974-01-01

    A Sales By Objectives (SBO) program offers the sales manager results by using the resources around him--his sales personnel--to set their own personal, routine, problem solving, and innovative goals. (MW)

  10. Automatic Generation of Heuristics for Scheduling

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.

    1997-01-01

    This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances, and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where the evaluation of a candidate heuristic is based on the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of an heuristic scheduler.
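
    The search loop such a generator might use can be sketched as hill-climbing over weight vectors, with each candidate heuristic scored by the quality of the schedule it produces. The toy "scheduler" below simply dispatches jobs by weighted attribute score and measures weighted tardiness; every attribute, parameter and scoring rule is an invented stand-in for GenH's actual evaluation.

        # Hill-climbing in the space of multi-attribute heuristics: a heuristic
        # is a weight vector over job attributes, scored by the quality of the
        # schedule it produces. The "scheduler" and its cost are invented.
        import numpy as np

        rng = np.random.default_rng(5)
        njobs = 40
        attrs = rng.random((njobs, 3))      # e.g. duration, deadline, priority

        def schedule_cost(weights):
            order = np.argsort(attrs @ weights)   # dispatch by heuristic score
            dur, dead, prio = attrs[order].T
            finish = np.cumsum(dur)
            tardiness = np.maximum(finish - 10 * dead, 0.0)
            return float(np.sum(prio * tardiness))

        w = rng.random(3)                   # initial candidate heuristic
        best = schedule_cost(w)
        for step in range(200):             # hill-climb on solution quality
            cand = w + 0.1 * rng.standard_normal(3)
            cost = schedule_cost(cand)
            if cost < best:                 # keep only improving moves
                w, best = cand, cost
        print("best weights:", np.round(w, 2), "cost:", round(best, 2))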

  11. Transport Test Problems for Hybrid Methods Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  12. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
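
    The flavour of the rank-degeneracy analysis can be sketched with synthetic data: poor sky coverage makes columns of the design matrix nearly collinear, which shows up as small singular values, and discarding the ill-determined directions (a crude stand-in for the paper's parameter subset selection) restores a stable fit. The toy pointing model below is an assumption for illustration.

        # Narrow sky coverage makes the pointing-model design matrix nearly
        # rank-deficient; small singular values flag the ill-determined
        # parameter directions, which are discarded before solving (a crude
        # stand-in for formal parameter subset selection). Synthetic data.
        import numpy as np

        rng = np.random.default_rng(6)
        az = rng.uniform(0, 30, 100)        # narrow azimuth coverage (degrees)
        el = rng.uniform(20, 80, 100)
        A = np.column_stack([np.ones_like(az),
                             np.sin(np.radians(az)),
                             np.cos(np.radians(az)),   # nearly collinear terms
                             np.sin(np.radians(el))])
        y = A @ np.array([0.01, 0.05, -0.02, 0.03]) + 1e-4 * rng.standard_normal(100)

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        print("singular values:", np.round(s, 4))   # small s = degeneracy

        keep = s > 1e-2 * s[0]              # truncate ill-determined directions
        x = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
        print("estimated parameters:", np.round(x, 4))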

  13. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
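
    For very small instances the problem the ILP models solve can be stated by brute force: find a string maximizing the minimum Hamming distance to the given set. The sketch below enumerates all candidates, which is feasible only at toy size; scaling beyond that is precisely what the integer programming formulations are for.

        # Brute-force farthest string: maximize the minimum Hamming distance
        # to a given set of strings. Feasible only at toy size (4^4 candidates).
        from itertools import product

        strings = ["ACGT", "AGGT", "ACGA"]
        alphabet = "ACGT"
        length = len(strings[0])

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        best = max(("".join(c) for c in product(alphabet, repeat=length)),
                   key=lambda s: min(hamming(s, t) for t in strings))
        print(best, min(hamming(best, t) for t in strings))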

  14. An efficient approach to the travelling salesman problem using self-organizing maps.

    PubMed

    Vieira, Frederico Carvalho; Dória Neto, Adrião Duarte; Costa, José Alfredo Ferreira

    2003-04-01

    This paper presents an approach to the well-known Travelling Salesman Problem (TSP) using Self-Organizing Maps (SOM). The SOM algorithm maintains interesting topological information about the configuration of its neurons in Cartesian space, which can be used to solve optimization problems. Aspects of initialization, parameter adaptation, and complexity analysis of the proposed SOM-based algorithm are discussed. The results show an average deviation of 3.7% from the optimal tour length for a set of 12 TSP instances.
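
    The paper's exact initialization and adaptation schedules are not given in the abstract; the following is a generic ring-SOM sketch for the TSP in NumPy, with all parameter values chosen arbitrarily for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cities = rng.random((12, 2))                 # toy TSP instance in the unit square
    n = 3 * len(cities)                          # ring of neurons, oversampled
    neurons = rng.random((n, 2))

    lr, radius = 0.8, n // 4
    for it in range(2000):
        city = cities[rng.integers(len(cities))]
        winner = np.argmin(np.linalg.norm(neurons - city, axis=1))
        # Circular neighborhood: neurons near the winner on the ring move too.
        dist = np.minimum(np.abs(np.arange(n) - winner),
                          n - np.abs(np.arange(n) - winner))
        h = np.exp(-(dist ** 2) / (2 * max(radius, 1) ** 2))
        neurons += lr * h[:, None] * (city - neurons)
        lr *= 0.9997; radius *= 0.9997           # decay learning rate and radius

    # Read the tour off the ring: visit cities in order of their winning neuron.
    order = np.argsort([np.argmin(np.linalg.norm(neurons - c, axis=1))
                        for c in cities])
    print("tour:", order)
    ```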

  15. Damming the genomic data flood using a comprehensive analysis and storage data structure

    PubMed Central

    Bouffard, Marc; Phillips, Michael S.; Brown, Andrew M.K.; Marsh, Sharon; Tardif, Jean-Claude; van Rooij, Tibor

    2010-01-01

    Data generation, driven by rapid advances in genomic technologies, is fast outpacing our analysis capabilities. Faced with this flood of data, more hardware and software resources are added to accommodate data sets whose structure has not specifically been designed for analysis. This leads to unnecessarily lengthy processing times and excessive data handling and storage costs. Current efforts to address this have centered on developing new indexing schemas and analysis algorithms, whereas the root of the problem lies in the format of the data itself. We have developed a new data structure for storing and analyzing genotype and phenotype data. By leveraging data normalization techniques, database management system capabilities and the use of a novel multi-table, multidimensional database structure we have eliminated the following: (i) unnecessarily large data set size due to high levels of redundancy, (ii) sequential access to these data sets and (iii) common bottlenecks in analysis times. The resulting novel data structure horizontally divides the data to circumvent traditional problems associated with the use of databases for very large genomic data sets. The resulting data set required 86% less disk space and performed analytical calculations 6248 times faster compared to a standard approach without any loss of information. Database URL: http://castor.pharmacogenomics.ca PMID:21159730

  16. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within a sensing region. So far, ECT has been used primarily to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit is almost short-circuited by the conduction path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, namely the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
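
    Of the three reconstruction algorithms, the linear single-step Tikhonov update in time-difference mode is the simplest to illustrate. A minimal sketch, assuming a precomputed linearized sensitivity matrix J and two capacitance frames (the dimensions below are hypothetical):

    ```python
    import numpy as np

    def tikhonov_difference_image(J, c_ref, c_obj, alpha=1e-2):
        """Linear single-step Tikhonov reconstruction in time-difference mode:
        solve (J^T J + alpha I) dx = J^T (c_obj - c_ref) for the permittivity
        change dx, given sensitivity matrix J and two ECT measurement frames."""
        dc = c_obj - c_ref
        A = J.T @ J + alpha * np.eye(J.shape[1])
        return np.linalg.solve(A, J.T @ dc)

    # Toy dimensions: 66 capacitance measurements, 812 image pixels (hypothetical).
    rng = np.random.default_rng(1)
    J = rng.standard_normal((66, 812))
    x_true = np.zeros(812); x_true[100:120] = 1.0   # a small high-contrast blob
    c_ref = np.zeros(66)
    c_obj = J @ x_true + 0.01 * rng.standard_normal(66)
    dx = tikhonov_difference_image(J, c_ref, c_obj)
    print("peak reconstructed pixel:", int(np.argmax(dx)))
    ```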

  17. Postural set for balance control is normal in Alzheimer's but not in Parkinson's disease.

    PubMed

    Chong, R K; Jones, C L; Horak, F B

    1999-03-01

    It has been suggested that patients with dementia of the Alzheimer type have abnormalities in the basal ganglia and thus may have sensorimotor problems similar to those of patients with basal ganglia degeneration from Parkinson's disease. Whether the similarity extends to balance control is unknown. One distinguishing feature of balance disorder in Parkinson's disease is difficulty with changing postural set, in terms of adapting the amplitude of leg muscle activity as a function of support condition. We therefore tested whether patients with Alzheimer's disease without extrapyramidal signs would show the same difficulty in changing postural set as patients with Parkinson's disease. The ability to quickly change postural set was measured by comparing leg muscle activity under two conditions of support (free stance versus grasping a frame or sitting) during backward surface translations, during toes-up surface rotations, and during voluntary rise to toes. Results were compared among 12 healthy adults, 8 nondemented Parkinson's patients on their usual dose of medication, and 11 Alzheimer patients without extrapyramidal signs. Subjects with Alzheimer's, but not Parkinson's, disease performed similarly to the healthy control subjects. They changed postural set immediately, by suppressing leg muscle activity to low levels when supported. Parkinson subjects did not change postural set immediately. They did not suppress the tibialis anterior in voluntary rise to toes when holding, nor the soleus in perturbed sitting, as much as the healthy control and Alzheimer subjects in the first trial. Instead, the Parkinson subjects changed set more slowly, over repeated and consecutive trials in both protocols. The onset latencies of soleus responses to backward surface translations and perturbed sitting, as well as tibialis anterior responses to toes-up rotations, were the same for all three groups. Alzheimer patients without extrapyramidal signs, unlike nondemented Parkinson's disease patients, have no difficulty in quickly changing postural set in response to altered support conditions. Our results, therefore, do not support the hypothesis that Parkinson's and uncomplicated Alzheimer's diseases share common postural set problems that may contribute to disordered balance control.

  18. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to a set of flights such that every flight is covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives are in conflict with each other, a multi-objective meta-heuristic called CellDE, which is a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and the convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
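
    CellDE itself is not described in enough detail here to reproduce; the sketch below shows only the generic final step of any such method, extracting the non-dominated (Pareto) set from candidate schedules given their objective vectors (total cost, delay, utilization imbalance), all minimized. The objective values are toy numbers.

    ```python
    def dominates(a, b):
        """a dominates b if it is no worse in every objective and strictly
        better in at least one (all objectives minimized)."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(solutions):
        """Keep only solutions not dominated by any other candidate."""
        return [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t is not s)]

    # Objective vectors: (total cost, delay, utilization imbalance), toy values.
    candidates = [(10, 4, 0.3), (8, 5, 0.2), (9, 6, 0.4), (12, 2, 0.1)]
    print(pareto_front(candidates))
    ```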

  19. Distributed-Memory Fast Maximal Independent Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of those algorithms were designed for shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct, efficient distributed-memory implementations. In this paper, we extend two of Luby's seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
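
    For reference, here is a shared-memory sketch of the random-priority round structure that variants such as Luby(A) build on; the distributed-memory extension, which is the paper's contribution, is not shown. In each round every surviving vertex draws a random priority, local minima join the MIS, and winners and their neighbors drop out.

    ```python
    import random

    def luby_mis(adj):
        """Luby-style randomized MIS: each round, every surviving vertex
        draws a random priority; vertices whose priority beats all surviving
        neighbors join the MIS, then they and their neighbors are removed.
        Terminates in O(log n) rounds in expectation."""
        alive = set(adj)
        mis = set()
        while alive:
            prio = {v: random.random() for v in alive}
            winners = {v for v in alive
                       if all(prio[v] < prio[u] for u in adj[v] if u in alive)}
            mis |= winners
            removed = winners | {u for v in winners for u in adj[v]}
            alive -= removed
        return mis

    # Toy graph: a 6-cycle given as an adjacency dict.
    adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    print(luby_mis(adj))
    ```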

  20. Working towards a scalable model of problem-based learning instruction in undergraduate engineering education

    NASA Astrophysics Data System (ADS)

    Mantri, Archana

    2014-05-01

    The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing the curriculum around a set of open-ended problems (OEPs). A detailed statistical analysis of the data collected to measure the effects of traditional and PBL instruction for three courses in Electronics and Communication Engineering, namely Analog Electronics, Digital Electronics, and Pulse, Digital & Switching Circuits, is presented here. It measures the effects of pedagogy, gender, and cognitive styles on the knowledge, skill, and attitude of the students. The study was conducted twice, with content designed around the same set of OEPs but with two different trained facilitators for all three courses. The repeatability of the results for the effects of the independent parameters on the dependent parameters is studied and inferences are drawn.

  1. Transport in Dynamical Astronomy and Multibody Problems

    NASA Astrophysics Data System (ADS)

    Dellnitz, Michael; Junge, Oliver; Koon, Wang Sang; Lekien, Francois; Lo, Martin W.; Marsden, Jerrold E.; Padberg, Kathrin; Preis, Robert; Ross, Shane D.; Thiere, Bianca

    We combine the techniques of almost invariant sets (using tree structured box elimination and graph partitioning algorithms) with invariant manifold and lobe dynamics techniques. The result is a new computational technique for computing key dynamical features, including almost invariant sets, resonance regions as well as transport rates and bottlenecks between regions in dynamical systems. This methodology can be applied to a variety of multibody problems, including those in molecular modeling, chemical reaction rates and dynamical astronomy. In this paper we focus on problems in dynamical astronomy to illustrate the power of the combination of these different numerical tools and their applicability. In particular, we compute transport rates between two resonance regions for the three-body system consisting of the Sun, Jupiter and a third body (such as an asteroid). These resonance regions are appropriate for certain comets and asteroids.

  2. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  3. Problem and pathological gambling in a sample of casino patrons.

    PubMed

    Fong, Timothy W; Campos, Michael D; Brecht, Mary-Lynn; Davis, Alice; Marco, Adrienne; Pecanha, Viviane; Rosenthal, Richard J

    2011-03-01

    Relatively few studies have examined gambling problems among individuals in a casino setting. The current study sought to examine the prevalence of gambling problems among a sample of casino patrons and to examine alcohol and tobacco use, health status, and quality of life by gambling problem status. To these ends, 176 casino patrons were recruited by going to a Southern California casino and requesting that they complete an anonymous survey. Results indicated the following lifetime rates for at-risk, problem, and pathological gambling: 29.2%, 10.7%, and 29.8%, respectively. Differences were found with regard to gambling behavior, and results indicated higher rates of smoking among individuals with gambling problems, but not higher rates of alcohol use. Self-rated quality of life was lower among pathological gamblers relative to non-problem gamblers, but did not differ from at-risk or problem gamblers. Although subject to some limitations, our data support the notion of a higher frequency of gambling problems among casino patrons and may suggest the need for increased interventions for gambling problems on-site at casinos.

  4. Note Taking in Multi-Media Settings

    ERIC Educational Resources Information Center

    Black, Kelly; Yao, Guangming

    2014-01-01

    We provide a preliminary exploration into the use of note taking when combined with video examples. Student volunteers were divided into three groups and asked to work on two problems: the first problem was explored in a classroom setting, and the second was a novel problem. Furthermore,…

  5. Assessing the performance of dynamical trajectory estimates

    NASA Astrophysics Data System (ADS)

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  6. Assessing the performance of dynamical trajectory estimates.

    PubMed

    Bröcker, Jochen

    2014-06-01

    Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists, for example, refer to this problem as data assimilation. Unlike in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.

  7. Speedup of lexicographic optimization by superiorization and its applications to cancer radiotherapy treatment

    NASA Astrophysics Data System (ADS)

    Bonacker, Esther; Gibali, Aviv; Küfer, Karl-Heinz; Süss, Philipp

    2017-04-01

    Multicriteria optimization problems occur in many real-life applications, for example in cancer radiotherapy treatment and in particular in intensity modulated radiation therapy (IMRT). In this work we focus on optimization problems with multiple objectives that are ranked according to their importance. We solve these problems numerically by combining lexicographic optimization with our recently proposed level set scheme, which yields a sequence of auxiliary convex feasibility problems, solved here via projection methods. The projection enables us to combine the newly introduced superiorization methodology with multicriteria optimization methods to speed up computation while guaranteeing convergence of the optimization. We demonstrate our scheme on a simple 2D academic example (used in the literature) and also present results from calculations on four real head-and-neck cases in IMRT (Radiation Oncology of the Ludwig-Maximilians University, Munich, Germany) for two different choices of superiorization parameter sets, suited to yield fast convergence for each case individually or robust behavior for all four cases.
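
    The level set scheme reduces each step to a convex feasibility problem solved by projection methods. As a generic illustration, not the paper's specific scheme, here is a cyclic projection (POCS) sketch onto half-spaces a_i·x <= b_i, for which the projection has a closed form:

    ```python
    import numpy as np

    def project_halfspace(x, a, b):
        """Project x onto {y : a.y <= b}; closed form for a half-space."""
        viol = a @ x - b
        return x if viol <= 0 else x - viol * a / (a @ a)

    def pocs(A, b, x0, sweeps=500):
        """Cyclic projections onto the half-spaces A[i].x <= b[i] (POCS).
        Converges to a point in the intersection when it is nonempty."""
        x = x0.astype(float)
        for _ in range(sweeps):
            for a_i, b_i in zip(A, b):
                x = project_halfspace(x, a_i, b_i)
        return x

    # Toy feasible region: x + y <= 1, x >= 0, y >= 0.
    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([1.0, 0.0, 0.0])
    print(pocs(A, b, np.array([3.0, 2.0])))
    ```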

  8. Resource Economics

    NASA Astrophysics Data System (ADS)

    Conrad, Jon M.

    2000-01-01

    Resource Economics is a text for students with a background in calculus, intermediate microeconomics, and a familiarity with the spreadsheet software Excel. The book covers basic concepts, shows how to set up spreadsheets to solve dynamic allocation problems, and presents economic models for fisheries, forestry, nonrenewable resources, stock pollutants, option value, and sustainable development. Within the text, numerical examples are posed and solved using Excel's Solver. These problems help make concepts operational, develop economic intuition, and serve as a bridge to the study of real-world problems of resource management. Through these examples and additional exercises at the end of Chapters 1 to 8, students can make dynamic models operational, develop their economic intuition, and learn how to set up spreadsheets for the simulation or optimization of resource and environmental systems. The book is unique in its use of spreadsheet software (Excel) to solve dynamic allocation problems; Conrad is co-author of a previous book for the Press on the subject for graduate students; the approach is extremely student-friendly, giving students the tools to apply research results to actual environmental issues.

  9. Mental sets in conduct problem youth with psychopathic features: entity versus incremental theories of intelligence.

    PubMed

    Salekin, Randall T; Lester, Whitney S; Sellers, Mary-Kate

    2012-08-01

    The purpose of the current study was to examine the effect of a motivational intervention on conduct problem youth with psychopathic features. Specifically, the current study examined the effect of conduct problem youths' mental set (or theory) regarding intelligence (entity vs. incremental) on task performance. We assessed 36 juvenile offenders with psychopathic features and tested whether providing them with two different messages regarding intelligence would affect their functioning on a task related to academic performance. The study employed a MANOVA design with two motivational conditions and three outcomes: fluency, flexibility, and originality. Results showed that youth with psychopathic features who were given a message that intelligence grows over time were more fluent and flexible than youth who were informed that intelligence is static. There were no significant differences between the groups in terms of originality. The implications of these findings are discussed, including the possible benefits of interventions for adolescent offenders with conduct problems and psychopathic features.

  10. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Kodali, Anuradha

    In this thesis, we develop dynamic multiple fault diagnosis (DMFD) algorithms to diagnose faults that are sporadic and coupled. Firstly, we formulate a coupled factorial hidden Markov model-based (CFHMM) framework to diagnose dependent faults occurring over time (dynamic case). Here, we implement a mixed memory Markov coupling model to determine the most likely sequence of (dependent) fault states, the one that best explains the observed test outcomes over time. An iterative Gauss-Seidel coordinate ascent optimization method is proposed for solving the problem. A soft Viterbi algorithm is also implemented within the framework for decoding dependent fault states over time. We demonstrate the algorithm on simulated and real-world systems with coupled faults; the results show that this approach improves the correct isolation rate as compared to the formulation where independent fault states are assumed.

    Secondly, we formulate a generalization of set-covering, termed dynamic set-covering (DSC), which involves a series of coupled set-covering problems over time. The objective of the DSC problem is to infer the most probable time sequence of a parsimonious set of failure sources that explains the observed test outcomes over time. The DSC problem is NP-hard and intractable due to the fault-test dependency matrix that couples the failed tests and faults via the constraint matrix, and the temporal dependence of failure sources over time. Here, the DSC problem is motivated from the viewpoint of a dynamic multiple fault diagnosis problem, but it has wide applications in operations research, e.g., the facility location problem. Thus, we also formulated the DSC problem in the context of a dynamically evolving facility location problem. Here, a facility can be opened, closed, or temporarily unavailable at any time for a given requirement of demand points. These activities are associated with costs or penalties, viz., phase-in or phase-out for the opening or closing of a facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems.

    Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on a single-fault (class), single-epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error-correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the outcomes of multiple binary classifiers over time using a sliding window or block dynamic fusion method that exploits temporal data correlations. We solve this NP-hard optimization problem via a Lagrangian relaxation (variational) technique. The third step optimizes the classifier parameters, viz., probabilities of detection and false alarm, using a genetic algorithm. The proposed algorithm is demonstrated by computing the diagnostic performance metrics on a twin-spool commercial jet engine, an automotive engine, and UCI datasets (problems with high classification error are specifically chosen for experimentation). We show that the primal-dual optimization framework performed consistently better than any traditional fusion technique, even when it is forced to give a single fault decision, across a range of classification problems.

    Secondly, we implement the inference algorithms to diagnose faults in vehicle systems that are controlled by a network of electronic control units (ECUs). The faults, originating from various interactions, especially between hardware and software, are particularly challenging to address. Our basic strategy is to divide the fault universe of such cyber-physical systems in a hierarchical manner, and monitor the critical variables/signals that have impact at different levels of interaction. The proposed diagnostic strategy is validated on an electrical power generation and storage system (EPGS) controlled by two ECUs in a CANoe/MATLAB co-simulation environment. Eleven faults are injected, with failures originating in actuator hardware, sensors, controller hardware, and software components. A diagnostic matrix is established via simulations to represent the relationship between the faults and the test outcomes (also known as fault signatures). The results show that the proposed diagnostic strategy is effective in addressing the interaction-caused faults.
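
    The VLRA described above decouples the DSC problem into per-fault subproblems, each solved by Viterbi decoding. A generic min-cost Viterbi sketch over a two-state (healthy/faulty) trellis follows; the transition and observation costs are toy values, and in the actual algorithm the Lagrange-multiplier terms would enter through the per-time costs.

    ```python
    import numpy as np

    def viterbi(trans_cost, obs_cost):
        """Min-cost state sequence through a trellis (Viterbi as shortest path).
        trans_cost[i, j]: cost of moving from state i to state j.
        obs_cost[t, j]:   cost of being in state j at time t (this is where
        Lagrange-multiplier terms would be folded in)."""
        T, S = obs_cost.shape
        cost = obs_cost[0].copy()
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            step = cost[:, None] + trans_cost + obs_cost[t]   # S x S candidates
            back[t] = step.argmin(axis=0)
            cost = step.min(axis=0)
        path = [int(cost.argmin())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # Two fault states (0 = healthy, 1 = faulty), sticky transitions.
    trans = np.array([[0.0, 2.0], [2.0, 0.5]])
    obs = np.array([[0.0, 1.0], [0.0, 1.0], [3.0, 0.0], [3.0, 0.0], [0.0, 1.0]])
    print(viterbi(trans, obs))   # -> [0, 0, 1, 1, 1]
    ```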

  11. Positive-unlabeled learning for disease gene identification

    PubMed Central

    Yang, Peng; Li, Xiao-Li; Mei, Jian-Ping; Kwoh, Chee-Keong; Ng, See-Kiong

    2012-01-01

    Background: Identifying disease genes from the human genome is an important but challenging task in biomedical research. Machine learning methods can be applied to discover new disease genes based on the known ones. Existing machine learning methods typically use the known disease genes as the positive training set P and the unknown genes as the negative training set N (a non-disease gene set does not exist) to build classifiers to identify new disease genes from the unknown genes. However, such classifiers are actually built from a noisy negative set N, as there can be unknown disease genes in N itself. As a result, the classifiers do not perform as well as they could. Result: Instead of treating the unknown genes as negative examples in N, we treat them as an unlabeled set U. We design a novel positive-unlabeled (PU) learning algorithm PUDI (PU learning for disease gene identification) to build a classifier using P and U. We first partition U into four sets, namely, reliable negative set RN, likely positive set LP, likely negative set LN and weak negative set WN. Weighted support vector machines are then used to build a multi-level classifier based on the four training sets and positive training set P to identify disease genes. Our experimental results demonstrate that our proposed PUDI algorithm outperformed the existing methods significantly. Conclusion: The proposed PUDI algorithm is able to identify disease genes more accurately by treating the unknown data more appropriately as an unlabeled set U instead of a negative set N. Given that many machine learning problems in biomedical research do involve positive and unlabeled data instead of negative data, it is possible that the machine learning methods for these problems can be further improved by adopting PU learning methods, as we have done here for disease gene identification. Availability and implementation: The executable program and data are available at http://www1.i2r.a-star.edu.sg/∼xlli/PUDI/PUDI.html. Contact: xlli@i2r.a-star.edu.sg or yang0293@e.ntu.edu.sg Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:22923290
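
    PUDI's four-set partition and multi-level weighted SVM are more elaborate than can be shown here; the sketch below is a common simplified two-step PU scheme in scikit-learn: score the unlabeled set with a P-vs-U classifier, take the least positive-looking points as reliable negatives (RN), and retrain. The toy data and the rn_frac parameter are hypothetical.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def pu_two_step(X_pos, X_unl, rn_frac=0.3):
        """Two-step PU learning sketch: (1) train P vs U, (2) treat the
        unlabeled points scored least positive as reliable negatives (RN)
        and retrain P vs RN. A simplification of PUDI's four-set partition."""
        X = np.vstack([X_pos, X_unl])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_unl))]
        s1 = SVC(probability=True).fit(X, y)
        scores = s1.predict_proba(X_unl)[:, 1]          # P(positive) for U
        rn = X_unl[np.argsort(scores)[: int(rn_frac * len(X_unl))]]
        X2 = np.vstack([X_pos, rn])
        y2 = np.r_[np.ones(len(X_pos)), np.zeros(len(rn))]
        return SVC(probability=True).fit(X2, y2)        # final classifier

    rng = np.random.default_rng(0)
    X_pos = rng.normal(1.0, 1.0, (40, 5))               # known disease genes (toy)
    X_unl = rng.normal(0.0, 1.0, (200, 5))              # unlabeled genes (toy)
    clf = pu_two_step(X_pos, X_unl)
    ```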

  12. Low-rank matrix fitting based on subspace perturbation analysis with applications to structure from motion.

    PubMed

    Jia, Hongjun; Martinez, Aleix M

    2009-05-01

    The task of finding a low-rank (r) matrix that best fits an original data matrix of higher rank is a recurring problem in science and engineering. The problem becomes especially difficult when the original data matrix has some missing entries and contains an unknown additive noise term in the remaining elements. The former problem can be solved by concatenating a set of r-column matrices that share a common single r-dimensional solution space. Unfortunately, the number of possible submatrices is generally very large and, hence, the results obtained with one set of r-column matrices will generally differ from those captured by a different set. Ideally, we would like to find the solution that is least affected by noise. This requires that we determine which of the r-column matrices (i.e., which of the original feature points) are less influenced by the unknown noise term. This paper presents a criterion to successfully carry out such a selection. Our key result is to formally prove that the more distinct the r vectors of the r-column matrices are, the less they are swayed by noise. This key result is then combined with the use of a noise model to derive an upper bound for the effect that noise and occlusions have on each of the r-column matrices. It is shown how this criterion can be effectively used to recover the noise-free matrix of rank r. Finally, we derive the affine and projective structure-from-motion (SFM) algorithms using the proposed criterion. Extensive validation on synthetic and real data sets shows the superiority of the proposed approach over the state of the art.

  13. Identifying finite-time coherent sets from limited quantities of Lagrangian data.

    PubMed

    Williams, Matthew O; Rypina, Irina I; Rowley, Clarence W

    2015-08-01

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that "leak" from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, "data rich" test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or "mesh-free" methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  14. Identifying finite-time coherent sets from limited quantities of Lagrangian data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Matthew O.; Rypina, Irina I.; Rowley, Clarence W.

    A data-driven procedure for identifying the dominant transport barriers in a time-varying flow from limited quantities of Lagrangian data is presented. Our approach partitions state space into coherent pairs, which are sets of initial conditions chosen to minimize the number of trajectories that “leak” from one set to the other under the influence of a stochastic flow field during a pre-specified interval in time. In practice, this partition is computed by solving an optimization problem to obtain a pair of functions whose signs determine set membership. From prior experience with synthetic, “data rich” test problems, and conceptually related methods based on approximations of the Perron-Frobenius operator, we observe that the functions of interest typically appear to be smooth. We exploit this property by using the basis sets associated with spectral or “mesh-free” methods, and as a result, our approach has the potential to more accurately approximate these functions given a fixed amount of data. In practice, this could enable better approximations of the coherent pairs in problems with relatively limited quantities of Lagrangian data, which is usually the case with experimental geophysical data. We apply this method to three examples of increasing complexity: The first is the double gyre, the second is the Bickley Jet, and the third is data from numerically simulated drifters in the Sulu Sea.

  15. Phase transition in the countdown problem

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Luque, Bartolo

    2012-07-01

    We present a combinatorial decision problem, inspired by the celebrated quiz show called Countdown, that involves the computation of a given target number T from a set of k randomly chosen integers along with a set of arithmetic operations. We find that the probability of winning the game evidences a threshold phenomenon that can be understood in terms of an algorithmic phase transition as a function of the set size k. Numerical simulations show that such probability sharply transitions from zero to one at some critical value of the control parameter, hence separating the algorithm's parameter space into different phases. We also find that the system is maximally efficient close to the critical point. We derive analytical expressions that match the numerical results for finite size and permit us to extrapolate the behavior in the thermodynamic limit.
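
    For concreteness, here is a brute-force solver for the underlying decision problem, assuming the usual Countdown conventions (each number used at most once; +, -, *, and exact division). It is kept to a small k so the exhaustive search stays fast.

    ```python
    from fractions import Fraction

    def reachable(numbers):
        """Return every value computable from the numbers, combining any two
        intermediate results with +, -, *, / and using each number at most once."""
        seen = set()

        def go(state):
            if state in seen:
                return
            seen.add(state)
            for i in range(len(state)):
                for j in range(len(state)):
                    if i == j:
                        continue
                    a, b = state[i], state[j]
                    rest = tuple(x for k, x in enumerate(state)
                                 if k not in (i, j))
                    outcomes = {a + b, a - b, a * b}
                    if b != 0:
                        outcomes.add(a / b)
                    for r in outcomes:
                        go(tuple(sorted(rest + (r,))))

        go(tuple(sorted(Fraction(n) for n in numbers)))
        return {v for state in seen for v in state}

    def winnable(numbers, target):
        return Fraction(target) in reachable(numbers)

    print(winnable([2, 3, 7, 50], 342))   # True: (50 + 7) * 3 * 2 = 342
    ```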

  16. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    PubMed

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. Maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, image intensity inhomogeneity can be handled well. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches.

  17. A Dimensionally Reduced Clustering Methodology for Heterogeneous Occupational Medicine Data Mining.

    PubMed

    Saâdaoui, Foued; Bertrand, Pierre R; Boudet, Gil; Rouffiac, Karine; Dutheil, Frédéric; Chamoux, Alain

    2015-10-01

    Clustering is a set of statistical learning techniques that aim to find structure in heterogeneous data by grouping homogeneous observations into clusters. Clustering has been successfully applied in several fields, such as medicine, biology, finance, and economics. In this paper, we introduce the notion of clustering in multifactorial data analysis problems. A case study is conducted for an occupational medicine problem with the purpose of analyzing patterns in a population of 813 individuals. To reduce the dimensionality of the data set, we base our approach on Principal Component Analysis (PCA), which is the statistical tool most commonly used in factorial analysis. However, problems in nature, especially in medicine, are often based on heterogeneous qualitative-quantitative measurements, whereas PCA only processes quantitative ones. Besides, qualitative data are originally unobservable quantitative responses that are usually binary-coded. Hence, we propose a new set of strategies that handle quantitative and qualitative data simultaneously. The principle of this approach is to project the qualitative variables onto the subspaces spanned by the quantitative ones. Subsequently, an optimal model is allocated to the resulting PCA-regressed subspaces.

  18. Quantitative knowledge acquisition for expert systems

    NASA Technical Reports Server (NTRS)

    Belkin, Brenda L.; Stengel, Robert F.

    1991-01-01

    A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a direct task. A statistical method is presented for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. The systematic development of a Navigation Sensor Management (NSM) Expert System from Kalman filter covariance data is described. The method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. Both are detailed.

  19. OligoIS: Scalable Instance Selection for Class-Imbalanced Data Sets.

    PubMed

    García-Pedrajas, Nicolás; Perez-Rodríguez, Javier; de Haro-García, Aida

    2013-02-01

    In current research, an enormous amount of information is constantly being produced, which poses a challenge for data mining algorithms. Many of the problems in extremely active research areas, such as bioinformatics, security and intrusion detection, or text mining, share the following two features: large data sets and class-imbalanced distribution of samples. Although many methods have been proposed for dealing with class-imbalanced data sets, most of these methods are not scalable to the very large data sets common to those research fields. In this paper, we propose a new approach to dealing with the class-imbalance problem that is scalable to data sets with many millions of instances and hundreds of features. This proposal is based on the divide-and-conquer principle combined with application of the selection process to balanced subsets of the whole data set. This divide-and-conquer principle allows the execution of the algorithm in linear time. Furthermore, the proposed method is easy to implement using a parallel environment and can work without loading the whole data set into memory. Using 40 class-imbalanced medium-sized data sets, we will demonstrate our method's ability to improve the results of state-of-the-art instance selection methods for class-imbalanced data sets. Using three very large data sets, we will show the scalability of our proposal to millions of instances and hundreds of features.

  20. Developing Online Problem-Based Resources for the Professional Development of Teachers of Children with Visual Impairment

    ERIC Educational Resources Information Center

    McLinden, Mike; McCall, Steve; Hinton, Danielle; Weston, Annette; Douglas, Graeme

    2006-01-01

    This article presents a summary of the results from phase 1 of a two-phase research project. Drawing on the principles of problem-based learning (PBL), the aims of phase 1 were to design, develop and evaluate a set of flexible online teaching resources for use within a virtual learning environment. Participants in the project (n = 10) were…

  1. Problem Solving in the Natural Sciences and Early Adolescent Girls' Gender Roles and Self-Esteem a Qualitative and Quantitative Analysis from AN Ecological Perspective

    NASA Astrophysics Data System (ADS)

    Slavkin, Michael

    What impact do gender roles and self-esteem have on early adolescent girls' abilities to solve problems when participating in natural science-related activities? Bronfenbrenner's human ecology model and Barker's behavior setting theory were used to assess how environmental contexts relate to problem solving in scientific contexts. These models also provided improved methodology and increased understanding of these constructs when compared with prior research. Early adolescent girls' gender roles and self-esteem were found to relate to differences in problem solving in science-related groups. Specifically, early adolescent girls' gender roles were associated with levels of verbal expression, expression of positive affect, dominance, and supportive behavior during science experiments. Also, levels of early adolescent girls' self-esteem were related to verbal expression and dominance in peer groups. Girls with high self-esteem also were more verbally expressive and had higher levels of dominance during science experiments. The dominant model of a masculine-typed and feminine-typed dichotomy of problem solving based on previous literature was not effective in identifying differences within girls' problem solving. Such differences in the results of these studies may be the result of this study's use of observational measures and analysis of the behavior settings in which group members participated. Group behavior and problem-solving approaches of early adolescent girls seemed most likely to be defined by environmental contexts, not governed solely by the personalities of participants. A discussion of the examination of environmental factors when assessing early adolescent girls' gender roles and self-esteem follows.

  2. A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.

    PubMed

    Zhang, Geng; Li, Yangmin

    2016-06-01

    Avoiding entrapment in local optima is a major challenge, especially for high-dimensional nonseparable problems in which the interdependencies among vector elements are unknown. In order to improve the performance of optimization algorithms, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses 1-D swarms to search each dimension separately and thus converges fast. Besides, it can obtain global optimum elements, according to our experimental results and analyses. MHS implements the global search by recombining different vector elements and extracting global optimum elements. The interaction between local search and global search creates a set of local search zones, where global optimum elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate a good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems.

  3. On Efficient Deployment of Wireless Sensors for Coverage and Connectivity in Constrained 3D Space.

    PubMed

    Wu, Chase Q; Wang, Li

    2017-10-10

    Sensor networks have been used in a rapidly increasing number of applications in many fields. This work generalizes a sensor deployment problem to place a minimum set of wireless sensors at candidate locations in constrained 3D space to k-cover a given set of target objects. By exhausting the combinations of discreteness/continuousness constraints on either sensor locations or target objects, we formulate four classes of sensor deployment problems in 3D space: deploy sensors at Discrete/Continuous Locations (D/CL) to cover Discrete/Continuous Targets (D/CT). We begin with the design of an approximate algorithm for DLDT and then reduce DLCT, CLDT, and CLCT to DLDT by discretizing continuous sensor locations or target objects into a set of divisions without sacrificing sensing precision. Furthermore, we consider a connected version of each problem, where the deployed sensors must form a connected network, and design an approximation algorithm to minimize the number of deployed sensors with a connectivity guarantee. For performance comparison, we design and implement an optimal solution and a genetic algorithm (GA)-based approach. Extensive simulation results show that the proposed deployment algorithms consistently outperform the GA-based heuristic, achieve close-to-optimal performance in small-scale problem instances, and achieve significantly superior overall performance compared with the theoretical upper bound.
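
    The paper's approximation algorithm for DLDT is not spelled out in the abstract; the following is a generic greedy, set-cover-style sketch of the discrete k-coverage step, with a toy 3D instance and a hypothetical range predicate.

    ```python
    import math

    def greedy_k_cover(candidates, targets, covers, k=2):
        """Greedy k-coverage: covers(c, t) says whether a sensor at candidate
        location c senses target t. Repeatedly pick the candidate that reduces
        the total residual coverage demand the most. A set-cover-style
        heuristic, not the paper's specific algorithm."""
        need = {t: k for t in targets}           # remaining coverage per target
        chosen, unused = [], set(candidates)
        while any(v > 0 for v in need.values()):
            gain = lambda c: sum(1 for t in targets
                                 if need[t] > 0 and covers(c, t))
            best = max(unused, key=gain)
            if gain(best) == 0:
                raise ValueError("remaining targets cannot be k-covered")
            for t in targets:
                if covers(best, t) and need[t] > 0:
                    need[t] -= 1
            chosen.append(best)
            unused.remove(best)
        return chosen

    # Toy 3D instance: a sensor covers targets within Euclidean range 1.5.
    cands = [(x, y, z) for x in (0, 1, 2) for y in (0, 1) for z in (0, 1)]
    targs = [(0.5, 0.5, 0.5), (1.5, 0.5, 0.5), (1.0, 1.0, 0.0)]
    in_range = lambda c, t: math.dist(c, t) <= 1.5
    print(greedy_k_cover(cands, targs, in_range, k=2))
    ```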

  4. Selection of Representative Models for Decision Analysis Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  5. Girls Talk Math - Engaging Girls Through Math Media

    NASA Astrophysics Data System (ADS)

    Bernardi, Francesca; Morgan, Katrina

    2017-11-01

    ``Girls Talk Math: Engaging Girls through Math Media'' is a free two-week summer day camp for high-school girls in the Triangle area of NC. This past June the camp had its second run thanks to renewed funding from the Mathematical Association of America Tensor Women and Mathematics Grant. The camp involved 35 local high-school students who identify as female. Campers complete challenging problem sets and research the life of a female scientist who worked on similar problems. They report their work in a blog post and record a podcast about the scientist they researched. The curriculum has been developed by Mathematics graduate students at UNC from an inquiry-based learning perspective; problem-set topics include some theoretical mathematics, but also more applied, physics-based material. Campers worked on fluid dynamics, special relativity, and quantum mechanics problem sets, which included experiments. The camp has received positive feedback from the local community, and the second run saw a large increase in the number of participants. The program is evaluated using pre- and post-surveys, which measure campers' confidence and interest in pursuing higher-level courses in STEM. The results from the past two summers have been encouraging. Mathematical Association of America Tensor Women and Mathematics Grant.

  6. Problem-solving counseling as a therapeutic tool on youth suicidal behavior in the suburban population in Sri Lanka

    PubMed Central

    Perera, E. A. Ramani; Kathriarachchi, Samudra T.

    2011-01-01

    Background: Suicidal behaviour among youth is a major public health concern in Sri Lanka. Prevention of youth suicides using effective, feasible and culturally acceptable methods is invaluable in this regard; however, research in this area is grossly lacking. Objective: This study aimed at determining the effectiveness of problem-solving counselling as a therapeutic intervention in the prevention of youth suicidal behaviour in Sri Lanka. Setting and design: This controlled trial was based on hospital admissions for suicide attempts in a suburban hospital in Sri Lanka. The study was carried out at Base Hospital Homagama. Materials and Methods: A sample of 124 was recruited using a convenience sampling method and divided into two groups, experimental and control. The control group was offered routine care, and the experimental group received four sessions of problem-solving counselling over one month. The outcome in both groups was measured six months after the initial screening using a visual analogue scale. Results: Individualized outcome measures showed that problem-solving ability among the subjects in the experimental group had improved after four counselling sessions and that suicidal behaviour was reduced. The results are statistically significant. Conclusion: This study confirms that problem-solving counselling is an effective therapeutic tool in the management of youth suicidal behaviour in a hospital setting in a developing country. PMID:21431005

  7. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    NASA Astrophysics Data System (ADS)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications, including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets.

    In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems.

    For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
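
    Chapter 2's preconditioning is based on affine map pairs that bring the two measures closer while preserving optimal maps. One natural affine normalization of this kind, shown here purely as an illustration and not necessarily the thesis's exact construction, is to recenter and whiten each sample set by its own mean and covariance:

    ```python
    import numpy as np

    def whiten(X):
        """Affine map sending the sample set X to zero mean and identity
        covariance, via a symmetric inverse square root of the covariance."""
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        w, V = np.linalg.eigh(cov)
        W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T      # cov^{-1/2}
        return (X - mu) @ W, (mu, W)

    rng = np.random.default_rng(0)
    X = rng.multivariate_normal([3, -1], [[2.0, 0.8], [0.8, 1.0]], size=500)
    Y = rng.multivariate_normal([-2, 4], [[0.5, 0.0], [0.0, 3.0]], size=500)
    Xw, _ = whiten(X)
    Yw, _ = whiten(Y)
    # After preconditioning, both clouds share first and second moments, so
    # the residual transport problem between Xw and Yw is "smaller" than the
    # original problem between X and Y.
    ```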

  8. An electromagnetism-like metaheuristic for open-shop problems with no buffer

    NASA Astrophysics Data System (ADS)

    Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi

    2012-12-01

    This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, such as the plastic molding, chemical, and food processing industries. The paper formulates the problem mathematically as a mixed integer linear program, with which the problem can be solved optimally. The paper also develops a novel metaheuristic, based on an electromagnetism algorithm, to solve large-sized problems. The paper conducts two computational experiments. The first includes small-sized instances, by which the mathematical model and the general performance of the proposed metaheuristic are evaluated. The second evaluates the metaheuristic's performance on some large-sized instances. The results show that the model and the algorithm are effective in dealing with the problem.

  9. Guaranteed estimation of solutions to Helmholtz transmission problems with uncertain data from their indirect noisy observations

    NASA Astrophysics Data System (ADS)

    Podlipenko, Yu. K.; Shestopalov, Yu. V.

    2017-09-01

    We investigate the guaranteed estimation problem of linear functionals from solutions to transmission problems for the Helmholtz equation with inexact data. The right-hand sides of the equations entering the statements of the transmission problems, and the statistical characteristics of the observation errors, are supposed to be unknown and to belong to certain sets. It is shown that the optimal linear mean-square estimates of the above-mentioned functionals and the estimation errors are expressed via solutions to systems of transmission problems of a special type. The results and techniques can be applied in the analysis and estimation of solutions to forward and inverse electromagnetic and acoustic problems with uncertain data that arise in mathematical models of wave diffraction on transparent bodies.

  10. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir Hossein; Goldbert, Alan; Bagasol, Leonard Neil; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial-time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.

  12. Patient satisfaction among Spanish-speaking patients in a public health setting.

    PubMed

    Welty, Elisabeth; Yeager, Valerie A; Ouimet, Claude; Menachemi, Nir

    2012-01-01

    Despite the growing literature on health care quality, few patient satisfaction studies have focused on the public health setting, where many Hispanic patients receive care. The purpose of this study was to examine the differences in satisfaction between English- and Spanish-speaking patients in a local health department clinical setting. We conducted a paper-based satisfaction survey of patients who visited any of the seven Jefferson County Department of Health (JCDH) primary care centers from March 19 to April 19, 2008. Using chi-squared analyses, we found that 25% of the Spanish-speaking patients reported regularly having problems getting an appointment, compared to 16.8% of English-speaking patients (p < .001). Results of logistic regression analyses indicated that, despite the availability of interpreters at all JCDH primary care centers, differences in satisfaction existed between Spanish- and English-speaking patients after controlling for center location, purpose of visit, and time spent waiting. Specifically, Spanish-speaking patients were more likely to report problems getting an appointment and less likely to report having their medical problems resolved when leaving their visit as compared to those who spoke English. Findings presented herein may provide insight regarding the quality of care received, specifically regarding patient satisfaction in the public health setting. © 2011 National Association for Healthcare Quality.

  13. Generating moment matching scenarios using optimization techniques

    DOE PAGES

    Mehrotra, Sanjay; Papp, Dávid

    2013-05-16

    An optimization based method is proposed to generate moment matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution rather than matching all moments up to a certain order, and the distribution can be defined over an arbitrary set. This allows for a reduction in the number of scenarios and allows the scenarios to be better tailored to the problem at hand. The method is based on a semi-infinite linear programming formulation of the problem that is shown to be solvable with polynomial iteration complexity. A practical column generation method is implemented. The column generation subproblems are polynomial optimization problems; however, they need not be solved to optimality. It is found that the columns in the column generation approach can be efficiently generated by random sampling. The number of scenarios generated matches a lower bound of Tchakaloff's. The rate of convergence of the approximation error is established for continuous integrands, and an improved bound is given for smooth integrands. Extensive numerical experiments are presented in which variants of the proposed method are compared to Monte Carlo and quasi-Monte Carlo methods on both numerical integration problems and stochastic optimization problems. The benefits of being able to match any prescribed set of moments, rather than all moments up to a certain order, are also demonstrated using optimization problems with 100-dimensional random vectors. Here, empirical results show that the proposed approach outperforms Monte Carlo and quasi-Monte Carlo based approaches on the tested problems.
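
    As a toy stand-in for the column-generation method described above (not the authors' implementation), one can sample candidate scenarios at random and fit nonnegative weights to a prescribed set of moments; in line with the Tchakaloff-type bound mentioned in the record, the nonnegative least-squares solution ends up supported on only a few points.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      candidates = rng.normal(size=200)      # randomly sampled candidate scenarios

      # Prescribed moments of a standard normal for orders 0..4
      # (order 0 forces the weights to sum to 1)
      powers = np.arange(5)
      b = np.array([1.0, 0.0, 1.0, 0.0, 3.0])
      A = candidates[None, :] ** powers[:, None]   # A[k, j] = x_j ** k

      w, residual = nnls(A, b)               # nonnegative scenario weights
      print((w > 1e-10).sum(), "scenarios used; moment residual:", residual)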

  14. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
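
    The randomized trace estimation step is the easiest ingredient to isolate. Below is a generic Hutchinson-type estimator; apply_op stands in for the action of the posterior covariance operator, which in the OED setting above requires PDE solves rather than a stored matrix. This is the standard technique, not the authors' code.

      import numpy as np

      def hutchinson_trace(apply_op, dim, n_samples=100, rng=None):
          """Estimate trace(A) from matrix-vector products v -> A v."""
          rng = rng or np.random.default_rng()
          total = 0.0
          for _ in range(n_samples):
              z = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
              total += z @ apply_op(z)               # E[z^T A z] = trace(A)
          return total / n_samples

      # Sanity check on an explicit matrix with trace 55
      A = np.diag(np.arange(1.0, 11.0))
      print(hutchinson_trace(lambda v: A @ v, 10, n_samples=2000))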

  15. Do Peer Relations in Adolescence Influence Health in Adulthood? Peer Problems in the School Setting and the Metabolic Syndrome in Middle-Age

    PubMed Central

    Gustafsson, Per E.; Janlert, Urban; Theorell, Töres; Westerlund, Hugo; Hammarström, Anne

    2012-01-01

    While the importance of social relations for health has been demonstrated in childhood, adolescence and adulthood, few studies have examined the prospective importance of peer relations for adult health. The aim of this study was to examine whether peer problems in the school setting in adolescence relate to the metabolic syndrome in middle-age. Participants came from the Northern Swedish Cohort, a 27-year cohort study of school leavers (effective n = 881, 82% of the original cohort). A score of peer problems was operationalized through form teachers’ assessment of each student’s isolation and popularity among school peers at age 16 years, and the metabolic syndrome was measured by clinical measures at age 43 according to established criteria. Additional information on health, health behaviors, achievement and social circumstances was collected from teacher interviews, school records, clinical measurements and self-administered questionnaires. Logistic regression was used as the main statistical method. Results showed a dose-response relationship between peer problems in adolescence and the metabolic syndrome in middle-age, corresponding to 36% higher odds for the metabolic syndrome at age 43 for each SD higher peer problems score at age 16. The association remained significant after adjustment for health, health behaviors, school adjustment or family circumstances in adolescence, and for psychological distress, health behaviors or social circumstances in adulthood. In analyses stratified by sex, the results were significant only in women after adjustment for covariates. Peer problems were significantly related to all individual components of the metabolic syndrome. These results suggest that unsuccessful adaptation to the school peer group can have enduring consequences for metabolic health. PMID:22761778

  16. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterizing the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, no nonexpansive operator had previously been reported that, when utilized in the HSDM, yields an update free from inversions of linear operators. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  17. Examination, evaluation and repair of laminated wood blades after service on the Mod-OA wind turbine

    NASA Technical Reports Server (NTRS)

    Faddoul, J. R.

    1983-01-01

    Laminated wood blades were designed, fabricated, and installed on a 200-kW wind turbine (Mod-OA). The machine uses a two-blade rotor with a diameter of 38.1 m (125 ft). Each blade weighs less than 1361 kg (3000 lb). After operating in the field, two blade sets were returned for inspection. One set had been in Hawaii for 17 months (7844 hr of operation) and the other had been at Block Island, Rhode Island, for 26 months (22 months operating - 7564 hr). The Hawaii set was returned because one of the studs that hold the blade to the hub had failed. This was found to be caused by a combination of improper installation and inadequate corrosion protection. No other problems were found. The broken stud (along with four others that were badly corroded) was replaced and the blades are now in storage. The Block Island set of blades was returned at the completion of the test program, but one blade was found to have developed a crack in the leading edge along the entire span. This crack was found to be the result of a manufacturing process problem but was not structurally critical. When a load-deflection test was conducted on the cracked blade, the response was identical to that measured before installation. In general, the laminate quality of both blade sets was excellent. No significant internal delamination or structural defects were found in any blade. The stud bonding process requires close tolerance control and adequate corrosion protection, but studs can be removed and replaced without major problems. Moisture content stabilization does not appear to be a problem, and laminated wood blades are satisfactory for long-term operation on Mod-OA wind turbines.

  18. Exact solutions for species tree inference from discordant gene trees.

    PubMed

    Chang, Wen-Chieh; Górecki, Paweł; Eulenstein, Oliver

    2013-10-01

    Phylogenetic analysis has to overcome the grand challenge of inferring accurate species trees from evolutionary histories of gene families (gene trees) that are discordant with the species tree along whose branches they have evolved. Two well-studied approaches to cope with this challenge are to solve either biologically informed gene tree parsimony (GTP) problems under gene duplication, gene loss, and deep coalescence, or the classic RF supertree problem that does not rely on any biological model. Despite the potential of these problems to infer credible species trees, they are NP-hard. Therefore, these problems are addressed by heuristics that typically lack any provable accuracy and precision. We describe fast dynamic programming algorithms that solve the GTP problems and the RF supertree problem exactly, and demonstrate that our algorithms can solve instances with data sets consisting of as many as 22 taxa. Extensions of our algorithms can also report the number of all optimal species trees, as well as the trees themselves. To better assess the quality of the resulting species trees that best fit the given gene trees, we also compute the worst case species trees, their numbers, and the optimization score for each of the computational problems. Finally, we demonstrate the performance of our exact algorithms using empirical and simulated data sets, and analyze the quality of heuristic solutions for the studied problems by contrasting them with our exact solutions.

  19. Binge drinking and sleep problems among young adults.

    PubMed

    Popovici, Ioana; French, Michael T

    2013-09-01

    As most of the literature exploring the relationships between alcohol use and sleep problems is descriptive and based on small sample sizes, the present study seeks to provide new information on the topic by employing a large, nationally representative dataset with several waves of data and a broad set of measures for binge drinking and sleep problems. We use data from the National Longitudinal Study of Adolescent Health (Add Health), a nationally representative survey of adolescents and young adults. The analysis sample consists of all Wave 4 observations without missing values for the sleep problems variables (N=14,089, 53% females). We estimate gender-specific multivariate probit models with a rich set of socioeconomic, demographic, physical, and mental health variables to control for confounding factors. Our results confirm that alcohol use, and specifically binge drinking, is positively and significantly associated with various types of sleep problems. The detrimental effects on sleep increase in magnitude with the frequency of binge drinking, suggesting a dose-response relationship. Moreover, binge drinking is associated with sleep problems independent of psychiatric conditions. The statistically strong association between sleep problems and binge drinking found in this study is a first step in understanding these relationships. Future research is needed to determine the causal links between alcohol misuse and sleep problems to inform appropriate clinical and policy responses. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.

    PubMed

    Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco

    2017-04-01

    In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.

  1. Consideration of plant behaviour in optimal servo-compensator design

    NASA Astrophysics Data System (ADS)

    Moase, W. H.; Manzie, C.

    2016-07-01

    Where the most prevalent optimal servo-compensator formulations penalise the behaviour of an error system, this paper considers the problem of additionally penalising the actual states and inputs of the plant. Doing so has the advantage of enabling the penalty function to better resemble an economic cost. This is especially true of problems where control effort needs to be sensibly allocated across weakly redundant inputs or where one wishes to use penalties to soft-constrain certain states or inputs. It is shown that, although the resulting cost function grows unbounded as its horizon approaches infinity, it is possible to formulate an equivalent optimisation problem with a bounded cost. The resulting optimisation problem is similar to those in earlier studies but has an additional 'correction term' in the cost function, and a set of equality constraints that arise when there are redundant inputs. A numerical approach to solve the resulting optimisation problem is presented, followed by simulations on a micro-macro positioner that illustrate the benefits of the proposed servo-compensator design approach.

  2. Robust MST-Based Clustering Algorithm.

    PubMed

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noises and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two separate clusters that are connected by a few noisy points would be regarded as one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
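
    For context, the classical MST baseline that this letter improves upon (cut the k-1 longest MST edges and take connected components) can be written compactly; a sketch assuming SciPy is shown below, with the usual caveat that this baseline is exactly what is not robust to noise and outliers.

      import numpy as np
      from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
      from scipy.spatial.distance import pdist, squareform

      def mst_clusters(X, k):
          """Classical MST clustering: remove the k-1 heaviest MST edges."""
          mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
          edges = np.argwhere(mst > 0)
          weights = mst[mst > 0]                     # same (row-major) order as edges
          for i, j in edges[np.argsort(weights)[::-1][:k - 1]]:
              mst[i, j] = 0.0                        # cut a heaviest edge
          return connected_components(mst, directed=False)[1]

      X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 6])
      print(np.bincount(mst_clusters(X, 2)))         # roughly [50, 50]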

  3. The distribution of the zeros of the Hermite-Padé polynomials for a pair of functions forming a Nikishin system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakhmanov, E A; Suetin, S P

    2013-09-30

    The distribution of the zeros of the Hermite-Padé polynomials of the first kind for a pair of functions with an arbitrary even number of common branch points lying on the real axis is investigated under the assumption that this pair of functions forms a generalized complex Nikishin system. It is proved (Theorem 1) that the zeros have a limiting distribution, which coincides with the equilibrium measure of a certain compact set having the S-property in a harmonic external field. The existence problem for S-compact sets is solved in Theorem 2. The main idea of the proof of Theorem 1 consists in replacing a vector equilibrium problem in potential theory by a scalar problem with an external field and then using the general Gonchar-Rakhmanov method, which was worked out in the solution of the '1/9'-conjecture. The relation of the result obtained here to some results and conjectures due to Nuttall is discussed. Bibliography: 51 titles.

  4. Negative ratings play a positive role in information filtering

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Zhu, Yu-Xiao; Lü, Linyuan; Zhou, Tao

    2011-11-01

    The explosive growth of information calls for advanced information filtering techniques to solve the so-called information overload problem. A promising way is the recommender system which analyzes the historical records of users’ activities and accordingly provides personalized recommendations. Most recommender systems can be represented by user-object bipartite networks where users can evaluate and vote for objects, and ratings such as “dislike” and “I hate it” are treated straightforwardly as negative factors or are completely ignored in traditional approaches. Applying a local diffusion algorithm on three benchmark data sets, MovieLens, Netflix and Amazon, our study arrives at a very surprising result, namely that negative ratings may play a positive role, especially for very sparse data sets. In-depth analysis at the microscopic level indicates that the negative ratings from less active users to less popular objects could probably have positive impacts on the recommendations, while the ones connecting active users and popular objects mostly should be treated negatively. We finally outline the significant relevance of our results to the two long-term challenges in information filtering: the sparsity problem and the cold-start problem.
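
    The baseline form of such a local diffusion algorithm is the well-known two-step mass diffusion (ProbS) on the user-object bipartite network; the sketch below shows only that baseline and none of the paper's handling of negative ratings.

      import numpy as np

      # Binary user-object adoption matrix (rows: users, columns: objects)
      A = np.array([[1, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 1, 1]], dtype=float)

      def mass_diffusion_scores(A, user):
          """Spread unit resource objects -> users -> objects (ProbS)."""
          k_obj = np.maximum(A.sum(axis=0), 1)   # object degrees
          k_user = np.maximum(A.sum(axis=1), 1)  # user degrees
          to_users = A @ (A[user] / k_obj)       # objects share resource with users
          scores = A.T @ (to_users / k_user)     # users share it back with objects
          scores[A[user] > 0] = -np.inf          # do not re-recommend owned objects
          return scores

      print(mass_diffusion_scores(A, user=0).round(3))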

  5. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  6. Penny-shaped crack in a fiber-reinforced matrix. [elastostatics

    NASA Technical Reports Server (NTRS)

    Narayanan, T. V.; Erdogan, F.

    1974-01-01

    Using a slender inclusion model developed earlier, the elastostatic interaction problem between a penny-shaped crack and elastic fibers in an elastic matrix is formulated. For a single set and for multiple sets of fibers oriented perpendicularly to the plane of the crack and distributed symmetrically on concentric circles, the problem was reduced to a system of singular integral equations. Techniques for the regularization and for the numerical solution of the system are outlined. For various fiber geometries numerical examples are given, and distribution of the stress intensity factor along the crack border was obtained. Sample results showing the distribution of the fiber stress and a measure of the fiber-matrix interface shear are also included.

  7. Synthetic aperture radar range - Azimuth ambiguity design and constraints

    NASA Technical Reports Server (NTRS)

    Mehlis, J. G.

    1980-01-01

    Problems concerning the design of a system for mapping a planetary surface with a synthetic aperture radar (SAR) are considered. Given an ambiguity level, resolution, and swath width, the problems are related to the determination of optimum antenna apertures and the most suitable pulse repetition frequency (PRF). From the set of normalized azimuth ambiguity ratio curves, the designer can arrive at the azimuth antenna length, and from the sets of normalized range ambiguity ratio curves, he can arrive at the range aperture length or pulse repetition frequency. A procedure based on this design method is shown in an example. The normalized curves provide results for a SAR using a uniformly or cosine weighted rectangular antenna aperture.

  8. Penny-shaped crack in a fiber-reinforced matrix

    NASA Technical Reports Server (NTRS)

    Narayanan, T. V.; Erdogan, F.

    1975-01-01

    Using the slender inclusion model developed earlier, the elastostatic interaction problem between a penny-shaped crack and elastic fibers in an elastic matrix is formulated. For a single set and for multiple sets of fibers oriented perpendicularly to the plane of the crack and distributed symmetrically on concentric circles, the problem is reduced to a system of singular integral equations. Techniques for the regularization and for the numerical solution of the system are outlined. For various fiber geometries numerical examples are given, and the distribution of the stress intensity factor along the crack border is obtained. Sample results showing the distribution of the fiber stress and a measure of the fiber-matrix interface shear are also included.

  9. Exploring KM Features of High-Performance Companies

    NASA Astrophysics Data System (ADS)

    Wu, Wei-Wen

    2007-12-01

    To react to an increasingly competitive business environment, many companies emphasize the importance of knowledge management (KM). Exploring and learning from the KM features of high-performance companies is a favorable way to do so. However, finding out the critical KM features of high-performance companies is a qualitative analysis problem. To handle this kind of problem, the rough set approach is suitable because it is based on data-mining techniques that discover knowledge without rigorous statistical assumptions. Thus, this paper explored the KM features of high-performance companies by using the rough set approach. The results show that high-performance companies stress the importance of both tacit and explicit knowledge, and consider incentives and evaluations essential to implementing KM.

  10. Photon-limited Sensing and Surveillance

    DTIC Science & Technology

    2015-01-29

    considerable time delay). More specifically, there were four main outcomes from this work: • Improved understanding of the fundamental limitations of...that we design novel cameras for photon-limited settings based on the principles of CS. Most prior theoretical results in compressed sensing and related inverse problems apply to idealized settings where the noise is i.i.d., and do not account for signal-dependent noise and physical sensing

  11. Matching by linear programming and successive convexification.

    PubMed

    Jiang, Hao; Drew, Mark S; Li, Ze-Nian

    2007-06-01

    We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.

  12. An Integrated Method Based on PSO and EDA for the Max-Cut Problem.

    PubMed

    Lin, Geng; Guan, Jian

    2016-01-01

    The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithm (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and estimation of distribution algorithm. To enhance the performance of the PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
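
    The fast local search inside hybrids of this kind is typically a 1-flip gain search; a generic sketch (not the PSO-EDA code itself) is:

      import numpy as np

      def local_search_maxcut(W, x):
          """Greedy 1-flip local search; W symmetric weights, x in {-1, +1}^n."""
          improved = True
          while improved:
              improved = False
              for v in range(len(x)):
                  # flipping v changes the cut value by x[v] * sum_j W[v, j] * x[j]
                  if x[v] * (W[v] @ x) - W[v, v] > 0:
                      x[v] = -x[v]
                      improved = True
          return x

      def cut_value(W, x):
          return 0.25 * np.sum(W * (1 - np.outer(x, x)))

      W = np.random.rand(20, 20)
      W = np.triu(W, 1) + np.triu(W, 1).T          # random symmetric instance
      x = np.random.choice([-1.0, 1.0], size=20)
      print(cut_value(W, local_search_maxcut(W, x)))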

  13. Complexity and approximability for a problem of intersecting of proximity graphs with minimum number of equal disks

    NASA Astrophysics Data System (ADS)

    Kobylkin, Konstantin

    2016-10-01

    Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with a smallest-cardinality set of disks of fixed radius r > 0, where the set of segments forms a straight-line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Despite their interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported over special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is established for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant-factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.

  14. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best plan of the firm's activities. The solution of a particular problem of this type is presented.

  15. A unified framework for approximation in inverse problems for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1988-01-01

    A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.

  16. Research on vehicles and cargos matching model based on virtual logistics platform

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan

    2018-04-01

    The highway less-than-truckload (LTL) vehicle-cargo matching problem is a joint optimization of vehicle routing and loading, and a topical issue in operations research. Motivated by the requirements of a virtual logistics platform, this article sets up a matching model between idle vehicles and transportation orders for highway LTL transportation and designs a corresponding genetic algorithm, which is implemented in Java. The simulation results show that the solution is satisfactory.

  17. Structural damage identification using an enhanced thermal exchange optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Dadras, A.

    2018-03-01

    The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter tuning approach is utilized to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.

  18. Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene J. W.; Kenny, Sean P.

    1991-01-01

    A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for eigenvalue and eigenvector approximate analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.

  19. On a comparison of two schemes in sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Grishina, Anastasiia A.; Penenko, Alexey V.

    2017-11-01

    This paper is focused on variational data assimilation as an approach to mathematical modeling. Realization of the approach requires a sequence of connected inverse problems with different sets of observational data to be solved. Two variational data assimilation schemes, "implicit" and "explicit", are considered in the article. Their equivalence is shown, and numerical results are given on the basis of the nonlinear Robertson system. To avoid the "inverse problem crime", different schemes were used to produce the synthetic measurements and to solve the data assimilation problem.

  20. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    NASA Astrophysics Data System (ADS)

    Sosa, Germán D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step to associate lung sounds with an abnormal state of the respiratory system, usually associated with tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification, which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems thanks to the fact that they capture the most informative spectral content from the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform (WPT) filter bank, and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, chooses as validation set a couple of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, a C-weighted SVM and MFCC, achieve 82.1% balanced accuracy, the best result reported for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even using the same feature extraction methods.
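
    A minimal version of the best-performing pipeline (mean MFCC features plus a class-weighted SVM) can be sketched with librosa and scikit-learn; the file lists are hypothetical and the paper's exact parameters are not reproduced.

      import numpy as np
      import librosa
      from sklearn.svm import SVC

      def mfcc_features(path, sr=8000, n_mfcc=13):
          """Mean MFCC vector of one lung-sound recording."""
          y, _ = librosa.load(path, sr=sr)
          return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

      # Hypothetical recordings, one file per case
      wheeze_paths = ["wheeze_01.wav", "wheeze_02.wav"]
      normal_paths = ["normal_01.wav", "normal_02.wav"]

      X = np.array([mfcc_features(p) for p in wheeze_paths + normal_paths])
      y = np.array([1] * len(wheeze_paths) + [0] * len(normal_paths))

      # class_weight="balanced" raises the misclassification penalty C on the
      # rarer class, which is the "C-weighted" idea for unbalanced data
      clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)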

  1. Primal-dual techniques for online algorithms and mechanisms

    NASA Astrophysics Data System (ADS)

    Liaghat, Vahid

    An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is often common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks fall short in an online setting since an online algorithm should make decisions without knowing even the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems is online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in the stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on the covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization on graphs.
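
    Online covering problems of the kind studied in this thesis are often introduced through fractional set cover, where the primal-dual scheme reduces to a multiplicative update on the covering variables. The sketch below shows that textbook flavor (in the spirit of Buchbinder and Naor, with unit set costs); it is not one of the thesis's own algorithms.

      def online_fractional_set_cover(sets, element_stream):
          """Maintain a fractional cover x as elements arrive online.

          sets: dict mapping a set id to the elements it covers.
          When an arriving element's covering constraint is violated, every
          set containing it receives a multiplicative-plus-additive bump
          until the constraint holds.
          """
          x = {s: 0.0 for s in sets}
          for e in element_stream:
              covering = [s for s in sets if e in sets[s]]
              d = len(covering)
              while sum(x[s] for s in covering) < 1.0:
                  for s in covering:
                      x[s] = min(1.0, 2.0 * x[s] + 1.0 / d)
          return x

      sets = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}
      print(online_fractional_set_cover(sets, [1, 3, 4]))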

  2. Simulated annealing with restart strategy for the blood pickup routing problem

    NASA Astrophysics Data System (ADS)

    Yu, V. F.; Iswari, T.; Normasari, N. M. E.; Asih, A. M. S.; Ting, H.

    2018-04-01

    This study develops a simulated annealing heuristic with restart strategy (SA_RS) for solving the blood pickup routing problem (BPRP). BPRP minimizes the total length of the routes for blood bag collection between a blood bank and a set of donation sites, each associated with a time window constraint that must be observed. The proposed SA_RS is implemented in C++ and tested on benchmark instances of the vehicle routing problem with time windows to verify its performance. The algorithm is then tested on some newly generated BPRP instances and the results are compared with those obtained by CPLEX. Experimental results show that the proposed SA_RS heuristic effectively solves BPRP.
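
    A generic skeleton of simulated annealing with a restart strategy is shown below; the BPRP-specific parts (route neighborhoods that respect the time windows, the total route length objective) would be supplied as the neighbor and cost functions, and all parameter values here are placeholders.

      import math
      import random

      def sa_restart(init, neighbor, cost, restarts=5, iters=2000, t0=1.0, alpha=0.995):
          """Simulated annealing; each restart re-heats from the incumbent best.
          (Copy mutable solutions inside neighbor() in a real implementation.)"""
          best = init()
          best_c = cost(best)
          for _ in range(restarts):
              cur, cur_c, t = best, best_c, t0
              for _ in range(iters):
                  cand = neighbor(cur)
                  cand_c = cost(cand)
                  # accept improvements always, worse moves with Boltzmann probability
                  if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
                      cur, cur_c = cand, cand_c
                      if cur_c < best_c:
                          best, best_c = cur, cur_c
                  t *= alpha
          return best, best_c

      # Toy usage: minimize (x - 3)^2 over the integers
      random.seed(0)
      print(sa_restart(lambda: 0, lambda x: x + random.choice([-1, 1]),
                       lambda x: (x - 3) ** 2))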

  3. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning restricts the possible statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls, since the level of xenon poisoning is limited. This raises the problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and two estimates were plotted as functions of the xenon limitation. The boundaries of the averaging interval are thereby defined more precisely.

  4. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.

  5. Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline

    PubMed Central

    2013-01-01

    We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most of the registration algorithms proposed for application in DSA have been designed for peripheral and cerebral angiography images, in which we mainly deal with global rigid motions. These algorithms did not yield good results when applied to coronary angiography images because of the complex nonrigid motions that exist in this type of angiography image. Multiresolution and iterative algorithms have been proposed to cope with this problem, but these algorithms are associated with a high computational cost, which makes them unacceptable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. This algorithm is based on a sparse set of matched feature point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026

  6. L2-norm multiple kernel learning and its application to biomedical data fusion

    PubMed Central

    2010-01-01

    Background This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have more advantages over sparse integration method for thoroughly combining complementary information in heterogeneous data sources. Results We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem with the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large scale data sets processing. Conclusions This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to the performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has comparable performance as the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL. Availability The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html. PMID:20529363

  7. Excore Modeling with VERAShift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.

    It is important to be able to accurately predict the neutron flux outside the immediate reactor core for a variety of safety and material analyses. Monte Carlo radiation transport calculations are required to produce the high fidelity excore responses. Under this milestone VERA (specifically the VERAShift package) has been extended to perform excore calculations by running radiation transport calculations with Shift. This package couples VERA-CS with Shift to perform excore tallies for multiple state points concurrently, with each component capable of parallel execution on independent domains. Specifically, this package performs fluence calculations in the core barrel and vessel, or performs the requested tallies in any user-defined excore regions. VERAShift takes advantage of the general geometry package in Shift. This gives VERAShift the flexibility to explicitly model features outside the core barrel, including detailed vessel models, detectors, and power plant details. A very limited set of experimental and numerical benchmarks is available for excore simulation comparison. The Consortium for Advanced Simulation of Light Water Reactors (CASL) has developed a set of excore benchmark problems to include as part of the VERA-CS verification and validation (V&V) problems. The excore capability in VERAShift has been tested on small representative assembly problems, multiassembly problems, and quarter-core problems. VERAView has also been extended to visualize these vessel fluence results from VERAShift. Preliminary vessel fluence results for quarter-core multistate calculations look very promising. Further development is needed to determine the details relevant to excore simulations. Validation of VERA for fluence and excore detectors still needs to be performed against experimental and numerical results.

  8. Maternal IQ, child IQ, behavior, and achievement in urban 5-7 year olds.

    PubMed

    Chen, Aimin; Schwarz, Donald; Radcliffe, Jerilynn; Rogan, Walter J

    2006-03-01

    In one study of children in 27 families with maternal retardation, those children with higher intelligence quotient (IQ) were more likely to have multiple behavior problems than those with lower IQ. If true, this result would affect clinical practice, but it has not been replicated. Because the setting of the initial observation is similar to the setting of childhood lead poisoning, we attempted a replication using data from the Treatment of Lead-Exposed Children (TLC) study, in which 780 children aged 12-33 mo with blood lead levels 20-44 μg/dL were randomized to either succimer treatment or placebo and then followed up to 5 y. Of 656 mothers of TLC children with IQ measured, 113 demonstrated mental retardation (IQ <70). Whether maternal IQ was <70 or ≥70, children with IQ ≥85 were rated more favorably on cognitive tests and behavioral questionnaires than children with IQ <85; these measures included Conners' Parent Rating Scale-Revised at age 5, the Developmental Neuropsychological Assessment at ages 5 and 7, and the Behavioral Assessment System for Children at age 7. Among children of mothers with IQ <70, those with IQ ≥85 did not show more severe clinical behavioral problems, nor were they more likely to show multiple behavior problems. Children with higher IQ have fewer behavior problems, irrespective of the mother's IQ. In the special setting of mothers with IQ <70, children with higher IQ are not at greater risk of behavior problems.

  9. The Influence of Function, Topography, and Setting on Noncontingent Reinforcement Effect Sizes for Reduction in Problem Behavior: A Meta-Analysis of Single-Case Experimental Design Data

    ERIC Educational Resources Information Center

    Ritter, William A.; Barnard-Brak, Lucy; Richman, David M.; Grubb, Laura M.

    2018-01-01

    Richman et al. ("J Appl Behav Anal" 48:131-152, 2015) completed a meta-analytic analysis of single-case experimental design data on noncontingent reinforcement (NCR) for the treatment of problem behavior exhibited by individuals with developmental disabilities. Results showed that (1) NCR produced very large effect sizes for reduction in…

  10. Decision-theoretic methodology for reliability and risk allocation in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.

    1985-01-01

    This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided and several outstanding issues such as generic allocation and preference assessment are discussed.

  11. Infinite horizon problems on stratifiable state-constraints sets

    NASA Astrophysics Data System (ADS)

    Hermosilla, C.; Zidani, H.

    2015-02-01

    This paper deals with a state-constrained control problem. It is well known that, unless some compatibility condition between constraints and dynamics holds, the Value Function does not have enough regularity, or can fail to be the unique constrained viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. Here, we consider the case of a set of constraints having a stratified structure. In this case, the interior of this set may be empty or disconnected, and the admissible trajectories may have no option but to stay on the boundary, without possible approximation from the interior of the constraints. In such situations, the classical inward pointing qualification hypothesis is not relevant. The discontinuous Value Function is then characterized by means of a system of HJB equations on each stratum that composes the state constraints. This result is obtained under a local controllability assumption which is required only on the strata where some chattering phenomena could occur.

  12. Polyatomic molecular Dirac-Hartree-Fock calculations with Gaussian basis sets

    NASA Technical Reports Server (NTRS)

    Dyall, Kenneth G.; Faegri, Knut, Jr.; Taylor, Peter R.

    1990-01-01

    Numerical methods have been used successfully in atomic Dirac-Hartree-Fock (DHF) calculations for many years. Some DHF calculations using numerical methods have been done on diatomic molecules, but while these serve a useful purpose for calibration, the computational effort in extending this approach to polyatomic molecules is prohibitive. An alternative more in line with traditional quantum chemistry is to use an analytical basis set expansion of the wave function. This approach fell into disrepute in the early 1980s due to problems with variational collapse and intruder states, but has recently been put on firm theoretical foundations. In particular, the problems of variational collapse are well understood, and prescriptions for avoiding the most serious failures have been developed. Consequently, it is now possible to develop reliable molecular programs using basis set methods. This paper describes such a program and reports results of test calculations to demonstrate the convergence and stability of the method.

  13. Addressing group dynamics in a brief motivational intervention for college student drinkers.

    PubMed

    Faris, Alexander S; Brown, Janice M

    2003-01-01

    Previous research indicates that brief motivational interventions for college student drinkers may be less effective in group settings than individual settings. Social psychological theories about counterproductive group dynamics may partially explain this finding. The present study examined potential problems with group motivational interventions by comparing outcomes from a standard group motivational intervention (SGMI; n = 25), an enhanced group motivational intervention (EGMI; n = 27) designed to suppress counterproductive processes, and a no-intervention control (n = 23). SGMI and EGMI participants reported disruptive group dynamics as evidenced by low elaboration likelihood, production blocking, and social loafing, though the level of disturbance was significantly lower for EGMI individuals (p = .001). Despite counteracting group dynamics in the EGMI condition, participants in the two interventions were statistically similar in post-intervention problem recognition and future drinking intentions. The results raise concerns over implementing individually based interventions in group settings without making necessary adjustments.

  14. Stride search: A general algorithm for storm detection in high-resolution climate data

    DOE PAGES

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; ...

    2016-04-13

    This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
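
    The defining property of Stride Search, constant physical search area, can be sketched in a few lines. The sketch below is an assumption-laden reading of the idea (the radius and the center-placement rule are ours, not the paper's): the longitudinal stride between region centers widens as the cosine of latitude shrinks, so each circular region covers roughly the same area at every latitude.

      import math

      EARTH_RADIUS_KM = 6371.0

      def stride_search_centers(radius_km, lat_min=-90.0, lat_max=90.0):
          """Place centers of circular search regions of fixed physical radius."""
          centers = []
          dlat = math.degrees(radius_km / EARTH_RADIUS_KM)   # latitude stride
          lat = lat_min + dlat / 2
          while lat <= lat_max:
              # The latitude circle shrinks by cos(lat), so the longitude
              # stride must widen to keep the physical spacing constant.
              coslat = max(math.cos(math.radians(lat)), 1e-6)
              dlon = math.degrees(radius_km / (EARTH_RADIUS_KM * coslat))
              n = max(int(360.0 / dlon), 1)
              for k in range(n):
                  centers.append((lat, -180.0 + k * (360.0 / n)))
              lat += dlat
          return centers

      print(len(stride_search_centers(radius_km=450.0)))  # fewer regions per row near the poles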

  15. Diverse knowledges and competing interests: an essay on socio-technical problem-solving.

    PubMed

    di Norcia, Vincent

    2002-01-01

    Solving complex socio-technical problems, this paper claims, involves diverse knowledges (cognitive diversity), competing interests (social diversity), and pragmatism. To explain this view, the paper first explores two different cases: Canadian pulp and paper mill pollution and the siting of nuclear reactors in seismically sensitive areas of California. Solving such socio-technically complex problems involves cognitive diversity as well as social diversity and pragmatism. Cognitive diversity requires one not only to recognize relevant knowledges but also to assess their validity. Finally, it is suggested, integrating the resultant set of diverse relevant and valid knowledges determines the parameters of the solution space for the problem.

  16. Problem-based learning in the NICU.

    PubMed

    Pilcher, Jobeth

    2014-01-01

    Problem-based learning (PBL) is an educational strategy that provides learners with the opportunity to investigate and solve realistic problem situations. It is also referred to as project-based learning or work-based learning. PBL combines several learning strategies including the use of case studies coupled with collaborative, facilitated, and self-directed learning. Research has demonstrated that use of PBL can result in learners having improved problem-solving skills, increased breadth and analysis of complex data, higher-level thinking skills, and improved collaboration. This article will include background information and a description of PBL, followed by examples of how this strategy can be used for learning in neonatal settings.

  17. Study of multi-dimensional radiative energy transfer in molecular gases

    NASA Technical Reports Server (NTRS)

    Liu, Jiwen; Tiwari, S. N.

    1993-01-01

    The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.

  18. In Pursuit of Change: Youth Response to Intensive Goal Setting Embedded in a Serious Video Game

    PubMed Central

    Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Juliano, Melissa; Frazior, McKee; Wilsdon, Jon; Jago, Russell

    2007-01-01

    Background Type 2 diabetes has increased in prevalence among youth, paralleling the increase in pediatric obesity. Helping youth achieve energy balance by changing diet and physical activity behaviors should decrease the risk for type 2 diabetes and obesity. Goal setting and goal review are critical components of behavior change. Theory-informed video games that emphasize development and refinement of goal setting and goal review skills provide a method for achieving energy balance in an informative, entertaining format. This article reports alpha-testing results for the theory-informed goal setting and goal review components of early versions of two diabetes and obesity prevention video games for preadolescents. Method Two episodes each of two video games were alpha tested with 9- to 11-year-old youth from multiple ethnic groups. Alpha testing included observed game play followed by a scripted interview. The staff was trained in observation and interview techniques prior to data collection. Results Although some difficulties were encountered, alpha testers generally understood goal setting and review components and comprehended they were setting personal goals. Although goal setting and review involved multiple steps, youth were generally able to complete them quickly, with minimal difficulty. Few technical issues arose; however, several usability and comprehension problems were identified. Conclusions Theory-informed video games may be an effective medium for promoting youth diabetes and obesity prevention. Alpha testing helps identify problems likely to have a negative effect on functionality, usability, and comprehension during development, thereby providing an opportunity to correct these issues prior to final production. PMID:19885165

  19. The construction of combinatorial manifolds with prescribed sets of links of vertices

    NASA Astrophysics Data System (ADS)

    Gaifullin, A. A.

    2008-10-01

    To every oriented closed combinatorial manifold we assign the set (with repetitions) of isomorphism classes of links of its vertices. The resulting transformation \mathcal{L} is the main object of study in this paper. We pose an inversion problem for \mathcal{L} and show that this problem is closely related to Steenrod's problem on the realization of cycles and to the Rokhlin-Schwartz-Thom construction of combinatorial Pontryagin classes. We obtain a necessary condition for a set of isomorphism classes of combinatorial spheres to belong to the image of \mathcal{L}. (Sets satisfying this condition are said to be balanced.) We give an explicit construction showing that every balanced set of isomorphism classes of combinatorial spheres falls into the image of \mathcal{L} after passing to a multiple set and adding several pairs of the form (Z, -Z), where -Z is the sphere Z with the orientation reversed. Given any singular simplicial cycle \xi of a space X, this construction enables us to find explicitly a combinatorial manifold M and a map \varphi\colon M \to X such that \varphi_*[M] = r[\xi] for some positive integer r. The construction is based on resolving singularities of \xi. We give applications of the main construction to cobordisms of manifolds with singularities and cobordisms of simple cells. In particular, we prove that every rational additive invariant of cobordisms of manifolds with singularities admits a local formula. Another application is the construction of explicit (though inefficient) local combinatorial formulae for polynomials in the rational Pontryagin classes of combinatorial manifolds.

  20. Probabilistic soft sets and dual probabilistic soft sets in decision making with positive and negative parameters

    NASA Astrophysics Data System (ADS)

    Fatimah, F.; Rosadi, D.; Hakim, R. B. F.

    2018-03-01

    In this paper, we motivate and introduce probabilistic soft sets and dual probabilistic soft sets for handling decision making problems in the presence of positive and negative parameters. We propose several types of algorithms related to this problem. Our procedures are flexible and adaptable. An example on real data is also given.

  1. Discovery of error-tolerant biclusters from noisy gene expression data.

    PubMed

    Gupta, Rohit; Rao, Navneet; Kumar, Vipin

    2011-11-24

    An important analysis performed on microarray gene-expression data is to discover biclusters, which denote groups of genes that are coherently expressed for a subset of conditions. Various biclustering algorithms have been proposed to find different types of biclusters from these real-valued gene-expression data sets. However, these algorithms suffer from several limitations such as inability to explicitly handle errors/noise in the data; difficulty in discovering small biclusters due to their top-down approach; and inability of some of the approaches to find overlapping biclusters, which is crucial as many genes participate in multiple biological processes. Association pattern mining also produces biclusters as its result and can naturally address some of these limitations. However, traditional association mining only finds exact biclusters, which limits its applicability in real-life data sets where the biclusters may be fragmented due to random noise/errors. Moreover, as it only works with binary or boolean attributes, its application to gene-expression data requires transforming real-valued attributes to binary attributes, which often results in loss of information. Many past approaches have tried to address the issue of noise and the handling of real-valued attributes independently, but there is no systematic approach that addresses both of these issues together. In this paper, we first propose a novel error-tolerant biclustering model, 'ET-bicluster', and then propose a bottom-up heuristic-based mining algorithm to sequentially discover error-tolerant biclusters directly from real-valued gene-expression data. The efficacy of our proposed approach is illustrated by comparing it with a recent approach, RAP, in the context of two biological problems: discovery of functional modules and discovery of biomarkers. For the first problem, two real-valued S. cerevisiae microarray gene-expression data sets are used to demonstrate that the biclusters obtained from the ET-bicluster approach not only recover a larger set of genes than those obtained from the RAP approach but also have higher functional coherence as evaluated using GO-based functional enrichment analysis. The statistical significance of the discovered error-tolerant biclusters, as estimated by two randomization tests, reveals that they are indeed biologically meaningful and statistically significant. For the second problem, biomarker discovery, we used four real-valued Breast Cancer microarray gene-expression data sets and evaluated the biomarkers obtained using MSigDB gene sets. The results obtained for both problems, functional module discovery and biomarker discovery, clearly signify the usefulness of the proposed ET-bicluster approach and illustrate the importance of explicitly incorporating noise/errors in discovering coherent groups of genes from gene-expression data.

  2. Fourier analysis of finite element preconditioned collocation schemes

    NASA Technical Reports Server (NTRS)

    Deville, Michel O.; Mund, Ernest H.

    1990-01-01

    The spectrum of the iteration operator of some finite element preconditioned Fourier collocation schemes is investigated. The first part of the paper analyses one-dimensional elliptic and hyperbolic model problems and the advection-diffusion equation. Analytical expressions of the eigenvalues are obtained with use of symbolic computation. The second part of the paper considers the set of one-dimensional differential equations resulting from Fourier analysis (in the transverse direction) of the 2-D Stokes problem. All results agree with previous conclusions on the numerical efficiency of finite element preconditioning schemes.

  3. Problems in Operating a Drug Rehabilitation Center in an Adult Correctional Setting and Some Preventive Guidelines or Strategies.

    ERIC Educational Resources Information Center

    Smith, S. Mae; And Others

    1979-01-01

    Some of the problems include differences in philosophy, nontherapeutic aspects of the prison environment, dependency on the prison environment, and unique staff problems. The authors conclude that changes can be made and effective treatment can exist within the correctional setting. (Author)

  4. Research Foundations and Problems Regarding Counseling in the Community.

    ERIC Educational Resources Information Center

    Campbell, Robert E.

    Research foundations and problems for counseling in the community are discussed. Research implications are outlined around Sarason's three challenges to community health: (1) extending therapeutic outreach, (2) studying those situations, settings, or forces in the community that set the stage for problems, and (3) efforts toward prevention.…

  5. An Interactive Multiobjective Programming Approach to Combinatorial Data Analysis.

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Stahl, Stephanie

    2001-01-01

    Describes an interactive procedure for multiobjective asymmetric unidimensional seriation problems that uses a dynamic-programming algorithm to generate partially the efficient set of sequences for small to medium-sized problems and a multioperational heuristic to estimate the efficient set for larger problems. Applies the procedure to an…

  6. Designing Agent Utilities for Coordinated, Scalable and Robust Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan

    2005-01-01

    Coordinating the behavior of a large number of agents to achieve a system level goal poses unique design challenges. In particular, problems of scaling (number of agents in the thousands to tens of thousands), observability (agents have limited sensing capabilities), and robustness (the agents are unreliable) make it impossible to simply apply methods developed for small multi-agent systems composed of reliable agents. To address these problems, we present an approach based on deriving agent goals that are aligned with the overall system goal, and can be computed using information readily available to the agents. Then, each agent uses a simple reinforcement learning algorithm to pursue its own goals. Because of the way in which those goals are derived, there is no need to use difficult-to-scale external mechanisms to force collaboration or coordination among the agents, or to ensure that agents actively attempt to appropriate the tasks of agents that suffered failures. To present these results in a concrete setting, we focus on the problem of finding the subset of a set of imperfect devices that results in the best aggregate device. This is a large distributed agent coordination problem where each agent (e.g., device) needs to determine whether to be part of the aggregate device. Our results show that the approach proposed in this work provides improvements of over an order of magnitude over both traditional search methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents failed midway through the simulation) the system's performance degrades gracefully and still outperforms a failure-free and centralized search algorithm. The results also show that the gains increase as the size of the system (e.g., number of agents) increases. This latter result is particularly encouraging and suggests that this method is ideally suited for domains where the number of agents is currently in the thousands and will reach tens or hundreds of thousands in the near future.
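
    The aligned-utility idea can be sketched with a toy version of the device-selection domain. Everything below is an illustrative assumption rather than the paper's construction: the global utility G is taken to be the mean quality of the joined devices, and each agent learns from a difference reward, G with the agent minus G without it, which is aligned with G yet sensitive to the agent's own choice.

      import random

      def G(joined, quality):
          """Illustrative system utility: mean quality of the joined devices."""
          return sum(quality[i] for i in joined) / max(len(joined), 1)

      def difference_reward(i, joined, quality):
          return G(joined, quality) - G(joined - {i}, quality)

      n = 20
      quality = [random.random() for _ in range(n)]
      value = [0.0] * n                     # each agent's learned value of joining
      for step in range(2000):
          joined = {i for i in range(n) if value[i] > 0 or random.random() < 0.1}
          for i in joined:                  # simple incremental value update
              value[i] += 0.1 * (difference_reward(i, joined, quality) - value[i])
      print(sorted(i for i in range(n) if value[i] > 0))  # mostly high-quality devices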

  7. Benchmarking optimization software with COPS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, E.D.; More, J.J.

    2001-01-08

    The COPS test set provides a modest selection of difficult nonlinearly constrained optimization problems from applications in optimal design, fluid dynamics, parameter estimation, and optimal control. In this report we describe version 2.0 of the COPS problems. The formulation and discretization of the original problems have been streamlined and improved. We have also added new problems. The presentation of COPS follows the original report, but the description of the problems has been streamlined. For each problem we discuss its formulation and summarize structural data on the formulation in Table 0.1. The aim of presenting this data is to provide an approximate idea of the size and sparsity of the problem. We also include the results of computational experiments with the LANCELOT, LOQO, MINOS, and SNOPT solvers. These computational experiments differ from the original results in that we have deleted problems that were considered to be too easy. Moreover, in the current version of the computational experiments, each problem is tested with four variations. An important difference between this report and the original report is that the tables that present the computational experiments are generated automatically from the testing script. This is explained in more detail in the report.

  8. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here focuses on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as the frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real-data FAP instances are very difficult to solve due to the NP-hard nature of the problem; therefore, an efficient parallel approach which makes the most of different evolutionary strategies can be considered a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, the results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any previously published result.

  9. Interprofessional, simulation-based technology-enhanced learning to improve physical health care in psychiatry: The recognition and assessment of medical problems in psychiatric settings course.

    PubMed

    Akroyd, Mike; Jordan, Gary; Rowlands, Paul

    2016-06-01

    People with serious mental illness have reduced life expectancy compared with a control population, much of which is accounted for by significant physical comorbidity. Frontline clinical staff in mental health often lack confidence in the recognition, assessment and management of such 'medical' problems. Simulation provides one way for staff to practise these skills in a safe setting. We produced a multidisciplinary simulation course on the recognition and assessment of medical problems in psychiatric settings. We describe an audit of strategic and design aspects of this course, using the Department of Health's 'Framework for Technology Enhanced Learning' as our audit standards. As well as highlighting areas where the course adheres to these identified principles, such as the strategic underpinning of the approach and the means by which information is collected, reviewed and shared, the audit also helps us to identify areas where we can improve. © The Author(s) 2014.

  10. [Behaviour therapy and child welfare - results of an approach to improve mental health care of aggressive children].

    PubMed

    Nitkowski, Dennis; Petermann, Franz; Büttner, Peter; Krause-Leipoldt, Carsten; Petermann, Ulrike

    2009-09-01

    The Training with Aggressive Children (Petermann & Petermann, 2008) was integrated into the setting of a child welfare service. This study examined whether mental health care of aggressive children in child welfare settings can be improved, comparing the effectiveness of the combined training and child welfare intervention after six months with the effects of the training (TAK) alone. 25 children with conduct problems (24 boys, one girl) aged 7;6 to 13;0 years participated in the study. A pretest-follow-up comparison of parent ratings on the Child Behavior Checklist (CBCL) documented a large reduction of aggressive-delinquent behaviour and social problems in the training and child welfare group. Furthermore, conduct and peer relationship problems decreased substantially on the Strengths and Difficulties Questionnaire (SDQ). By reducing conduct, attention and social problems, and delinquent behaviour, the therapeutic outcome of the training and child welfare group was clearly superior to that of the training-only group. In comparison to the training alone, the combination of child welfare and training seemed to reduce a wider range of behavioural problems more effectively. This indicates that combined intervention programs can optimize mental health care of aggressive children.

  11. [Medical safety management in the setting of a clinical reference laboratory--risk management efforts in clinical testing].

    PubMed

    Seki, Akira; Miya, Tetsumasa

    2011-03-01

    As a result of recurring medical accidents, risk management in the medical setting has been given much attention. Since the Ministry of Health committee for formulating a standard risk management manual announced its "Risk management manual formulation guideline" in August 2000, numerous medical testing facilities have worked to develop such documents. In 2008, ISO/TS 22367:2008 on "Medical laboratories - Reduction of error through risk management and continual improvement" was published. However, at present, risk management within a medical testing facility stresses the implementation of provisional actions in response to a problem after it has occurred. Risk management is basically a planned process and includes "corrective actions" as well as "preventive actions." A corrective action is defined as identifying the root cause of a problem and removing it, and is conducted to prevent the problem from recurring. A preventive action is defined as identifying any potential problem and removing it, and is conducted to prevent a problem before it occurs. Here, I report on the experiences of our laboratory regarding corrective and preventive actions taken in response to accidents and incidents, respectively.

  12. The piecewise parabolic method for Riemann problems in nonlinear elasticity.

    PubMed

    Zhang, Wei; Wang, Tao; Bai, Jing-Song; Li, Ping; Wan, Zhen-Hua; Sun, De-Jun

    2017-10-18

    We present the application of Harten-Lax-van Leer (HLL)-type solvers to Riemann problems in nonlinear elasticity under high-load conditions. In particular, the HLLD ("D" denotes Discontinuities) Riemann solver is shown to have better robustness and efficiency for resolving complex nonlinear wave structures than the HLL and HLLC ("C" denotes Contact) solvers, especially in the shock-tube problem including more than five waves. Also, the Godunov finite volume scheme is extended to higher-order accuracy by means of the piecewise parabolic method (PPM), which can be used with HLL-type solvers and employed to construct the fluxes. Moreover, in the case of multiple material components, a level set algorithm is applied to track the interface between different materials, while the interaction of interfaces is realized through the HLLD Riemann solver combined with a modified ghost method. As seen from the results of both the solid/solid "stick" problem, with the same material on the two sides of the contact interface, and the solid/solid "slip" problem, with different materials on the two sides, this scheme composed of the HLLD solver, PPM, and level set algorithm can capture the material interface effectively and suppress spurious oscillations significantly.

  13. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem. The Bicriteria Network Optimization Problem is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach using a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes useful information from the current population to readjust weights and obtain a search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
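
    Priority-based encoding can be illustrated with a minimal decoder. The graph, priorities, and repair policy below are illustrative assumptions, not the paper's exact operators: the chromosome assigns a priority to every node, and a path is grown from the source by repeatedly stepping to the adjacent unvisited node of highest priority.

      def decode_path(adj, priority, source, sink):
          """Grow a path from source to sink following node priorities."""
          path, node = [source], source
          while node != sink:
              candidates = [v for v in adj[node] if v not in path]
              if not candidates:
                  return None   # dead end; such chromosomes get penalized or repaired
              node = max(candidates, key=lambda v: priority[v])
              path.append(node)
          return path

      adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
      priority = {0: 4, 1: 1, 2: 3, 3: 2}   # one chromosome
      print(decode_path(adj, priority, source=0, sink=3))  # [0, 2, 3]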

  14. Optimal aggregation of binary classifiers for multiclass cancer diagnosis using gene expression profiles.

    PubMed

    Yukinawa, Naoto; Oba, Shigeyuki; Kato, Kikuya; Ishii, Shin

    2009-01-01

    Multiclass classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. There have been many studies of aggregating binary classifiers to construct a multiclass classifier based on one-versus-the-rest (1R), one-versus-one (11), or other coding strategies, as well as some comparison studies between them. However, the studies found that the best coding depends on each situation. Therefore, a new problem, which we call the "optimal coding problem," has arisen: how can we determine which coding is the optimal one in each situation? To approach this optimal coding problem, we propose a novel framework for constructing a multiclass classifier, in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. Although there is no a priori answer to the optimal coding problem, our weight tuning method can be a consistent answer to the problem. We apply this method to various classification problems including a synthesized data set and some cancer diagnosis data sets from gene expression profiling. The results demonstrate that, in most situations, our method can improve classification accuracy over simple voting heuristics and is better than or comparable to state-of-the-art multiclass predictors.
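
    The weighted-aggregation step itself is simple; the paper's contribution is tuning the weights from data. As a minimal sketch under our own assumptions (three one-versus-the-rest classifiers, made-up scores and weights), the predicted class is the one whose weighted score is largest.

      import numpy as np

      def aggregate(binary_scores, weights):
          """binary_scores[k]: output of the k-th 1R classifier (class k vs. rest)."""
          weighted = np.asarray(binary_scores) * np.asarray(weights)
          return int(np.argmax(weighted))

      scores = [0.2, 0.7, 0.6]      # illustrative raw 1R outputs
      weights = [1.0, 0.8, 1.1]     # weights assumed tuned on training data
      print(aggregate(scores, weights))  # class 2, since 0.66 > 0.56 > 0.20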

  15. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.

  16. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees.

    PubMed

    Ramu, Avinash; Kahveci, Tamer; Burleigh, J Gordon

    2012-10-03

    We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Although this heuristic is not guaranteed to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses.

  17. Treating asthma with a self-management model of illness behaviour in an Australian community pharmacy setting.

    PubMed

    Smith, Lorraine; Bosnic-Anticevich, Sinthia Z; Mitchell, Bernadette; Saini, Bandana; Krass, Ines; Armour, Carol

    2007-04-01

    Asthma affects a considerable proportion of the population worldwide and presents a significant health problem in Australia. Given its chronic nature, effective asthma self-management approaches are important. However, despite research and interventions targeting its treatment, the management of asthma remains problematic. This study aimed to develop, from a theoretical basis, an asthma self-management model and implement it in an Australian community pharmacy setting in metropolitan Sydney, using a controlled, parallel-groups, repeated-measures design. Trained pharmacists delivered a structured, step-wise, patient-focused asthma self-management program to adult participants over a 9-month period, focusing on identification of asthma problems, goal setting and strategy development. Data on process-, clinical- and psychosocial-outcome measures were gathered. Results showed that participants set an average of four new goals and six repeated goals over the course of the intervention. Most common goal-related themes included asthma triggers, asthma control and medications. An average of nine strategies per participant was developed to achieve the set goals. Common strategies involved visiting a medical practitioner for review of medications, improving adherence to medications and using medications before exercise. Clinical and psychosocial outcomes indicated significant improvements over time in asthma symptom control, asthma-related self-efficacy and quality of life, and negative affect. These results suggest that an asthma self-management model of illness behaviour has the potential to provide patients with a range of process skills for self-management, and deliver improvements in clinical and psychosocial indicators of asthma control. The results also indicate the capacity for the effective delivery of such an intervention by pharmacists in Australian community pharmacy settings.

  18. Setting the Periodic Table.

    ERIC Educational Resources Information Center

    Saturnelli, Annette

    1985-01-01

    Examines problems resulting from different forms of the periodic table, indicating that New York State schools use a form reflecting the International Union of Pure and Applied Chemistry's 1984 recommendations. Other formats used and reasons for standardization are discussed. (DH)

  19. A position-dependent mass model for the Thomas–Fermi potential: Exact solvability and relation to δ-doped semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; García-Ravelo, Jesús; Pacheco-García, Christian

    We consider the Schrödinger equation in the Thomas–Fermi field, a model that has been used for describing electron systems in δ-doped semiconductors. It is shown that the problem becomes exactly-solvable if a particular effective (position-dependent) mass distribution is incorporated. Orthogonal sets of normalizable bound state solutions are constructed in explicit form, and the associated energies are determined. We compare our results with the corresponding findings on the constant-mass problem discussed by Ioriatti (1990) [13]. -- Highlights: ► We introduce an exactly solvable, position-dependent mass model for the Thomas–Fermi potential. ► Orthogonal sets of solutions to our model are constructed in closed form. ► Relation to delta-doped semiconductors is discussed. ► Explicit subband bottom energies are calculated and compared to results obtained in a previous study.

  20. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on a randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
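
    The two moves that distinguish SL-GSA from SGSA, randomized leader selection and gradual population shrinking, fit in a few lines. The sketch below is a schematic reading with illustrative data, not the full force-and-velocity update of GSA.

      import random

      def select_leaders(agents, k):
          """Stochastic leaders: k agents drawn at random, not the k best."""
          return random.sample(agents, k)

      def shrink_population(agents, fitness, n_drop):
          """Eliminate the n_drop poorest performers (lower fitness = worse)."""
          ranked = sorted(agents, key=lambda a: fitness[a])
          return ranked[n_drop:]

      agents = list(range(10))
      fitness = {a: random.random() for a in agents}
      leaders = select_leaders(agents, k=3)
      agents = shrink_population(agents, fitness, n_drop=2)
      print(leaders, agents)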

  1. Regularity results for the minimum time function with Hörmander vector fields

    NASA Astrophysics Data System (ADS)

    Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa

    2018-03-01

    In a bounded domain of R^n with boundary given by a smooth (n-1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X1, …, XN} subject to Hörmander's bracket generating condition. We investigate the regularity of the viscosity solution T of this problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses point-wise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X1, …, XN} is a symplectic manifold. We apply our results to several examples.

  2. “Extra Oomph:” Addressing Housing Disparities through Medical Legal Partnership Interventions

    PubMed Central

    Hernández, Diana

    2016-01-01

    Low-income households face common and chronic housing problems that have known health risks and legal remedies. The Medical Legal Partnership (MLP) program presents a unique opportunity to address housing problems and improve patient health through legal assistance offered in clinical settings. Drawing on in-depth interviews with 72 patients, this study investigated the outcomes of MLP interventions and compared the results to those of similarly disadvantaged participants with no access to MLP services. Results indicate that participants in the MLP group were more likely to achieve adequate, affordable and stable housing than those in the comparison group. Study findings suggest that providing access to legal services in the healthcare setting can effectively address widespread health disparities rooted in problematic housing. Implications for policy and scalability are discussed with the conclusion that MLPs can shift professionals' consciousness as they work to improve housing and health trajectories for indigent groups using legal approaches. PMID:27867247

  3. Applicability domains for classification problems: benchmarking of distance to models for AMES mutagenicity set

    EPA Science Inventory

    For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...
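
    Although the benchmark itself compares many DM definitions, the simplest ones are easy to state. The sketch below assumes one such definition of our own choosing, the mean Euclidean distance to the k nearest training-set molecules in descriptor space, with an arbitrary cutoff; it is not the inventory's official metric.

      import numpy as np

      def distance_to_model(x, train_X, k=3):
          """Mean distance from test compound x to its k nearest training molecules."""
          d = np.linalg.norm(train_X - x, axis=1)
          return float(np.sort(d)[:k].mean())

      train_X = np.random.rand(100, 5)   # illustrative training-set descriptors
      x = np.random.rand(5)              # illustrative test compound
      dm = distance_to_model(x, train_X)
      print(dm, "inside domain" if dm < 0.5 else "outside domain")  # 0.5: arbitrary cutoff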

  4. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness, for one or many machines, and the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given the problem instance, to construct another instance for which an optimal or approximate solution can be found at the minimum distance from the initial instance in the metric introduced. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) are considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
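
    A toy version of the metric idea helps fix intuition. Under our own simplifying assumptions (the "easy" class is instances with equal release dates, solvable by the earliest-due-date rule, and the distance moved is the largest change of a release date), one projects the hard instance onto the easy class and reports the easy optimum together with the distance as an error indicator; the paper's actual metric and bounds are more refined.

      def edd_max_lateness(jobs):
          """Earliest-due-date rule for jobs (processing_time, due_date), release 0."""
          t, worst = 0, float("-inf")
          for p, d in sorted(jobs, key=lambda j: j[1]):
              t += p
              worst = max(worst, t - d)
          return worst

      def approximate(instance):
          """instance: list of (p, r, d). Project onto r == 0, report distance moved."""
          distance = max(r for _, r, _ in instance)
          easy = [(p, d) for p, _, d in instance]
          return edd_max_lateness(easy), distance

      value, distance = approximate([(3, 1, 5), (2, 0, 4), (4, 2, 9)])
      print(value, distance)   # easy optimum and how far the instance was moved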

  5. Performance Characterization and Vibration Testing of 30-cm Carbon-Carbon Ion Optics

    NASA Technical Reports Server (NTRS)

    Steven Snyder, John; Brophy, John R.

    2004-01-01

    Carbon-based ion optics have the potential to significantly increase the operable life and power ranges of ion thrusters because of reduced erosion rates compared to molybdenum optics. The development of 15-cm and larger diameter grids has encountered many problems, however, not the least of which is the ability to pass vibration testing. JPL has recently developed a new generation of 30-cm carbon-carbon ion optics in order to address these problems and demonstrate the viability of the technology. Perveance, electron backstreaming, and screen grid transparency data are presented for two sets of optics. Vibration testing was successfully performed on two different sets of ion optics with no damage and the results of those tests are compared to models of grid vibrational behavior. It will be shown that the vibration model is a conservative predictor of grid response and can accurately describe test results. There was no change in grid alignment as a result of vibration testing and a slight improvement, if any change at all, in optics performance.

  6. JAva GUi for Applied Research (JAGUAR) v 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JAGUAR is a Java software tool for automatically rendering a graphical user interface (GUI) from a structured input specification. It is designed as a plug-in to the Eclipse workbench to enable users to create, edit, and externally execute analysis application input decks and then view the results. JAGUAR serves as a GUI for Sandia's DAKOTA software toolkit for optimization and uncertainty quantification. It will include problem (input deck) set-up, option specification, analysis execution, and results visualization. Through the use of wizards, templates, and views, JAGUAR helps users navigate the complexity of DAKOTA's complete input specification. JAGUAR is implemented in Java, leveraging Eclipse extension points and the Eclipse user interface. JAGUAR parses a DAKOTA NIDR input specification and presents the user with linked graphical and plain text representations of problem set-up and option specification for DAKOTA studies. After the data has been input by the user, JAGUAR generates one or more input files for DAKOTA, executes DAKOTA, and captures and interprets the results.

  7. Influence of protonation, tautomeric, and stereoisomeric states on protein-ligand docking results.

    PubMed

    ten Brink, Tim; Exner, Thomas E

    2009-06-01

    In this work, we present a systematic investigation of the influence of ligand protonation states, stereoisomers, and tautomers on results obtained with the two protein-ligand docking programs GOLD and PLANTS. These different states were generated with a fully automated tool, called SPORES (Structure PrOtonation and Recognition System). First, the most probable protonations, as defined by this rule-based system, were compared to the ones stored in the well-known, manually revised CCDC/ASTEX data set. Then, to investigate the influence of the ligand protonation state on the docking results, different protonation states were created. Redocking and virtual screening experiments were conducted, demonstrating that both docking programs have problems in identifying the correct protomer for each complex. Therefore, a preselection of plausible protomers or the improvement of the scoring functions concerning their ability to rank different molecules/states is needed. Additionally, ligand stereoisomers were tested for a subset of the CCDC/ASTEX set, showing similar problems regarding the ranking of these stereoisomers as for the ranking of the protomers.

  8. Thinking can cause forgetting: memory dynamics in creative problem solving.

    PubMed

    Storm, Benjamin C; Angello, Genna; Bjork, Elizabeth Ligon

    2011-09-01

    Research on retrieval-induced forgetting has shown that retrieval can cause the forgetting of related or competing items in memory (Anderson, Bjork, & Bjork, 1994). In the present research, we examined whether an analogous phenomenon occurs in the context of creative problem solving. Using the Remote Associates Test (RAT; Mednick, 1962), we found that attempting to generate a novel common associate to 3 cue words caused the forgetting of other strong associates related to those cue words. This problem-solving-induced forgetting effect occurred even when participants failed to generate a viable solution, increased in magnitude when participants spent additional time problem solving, and was positively correlated with problem-solving success on a separate set of RAT problems. These results implicate a role for forgetting in overcoming fixation in creative problem solving. (c) 2011 APA, all rights reserved.

  9. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
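
    The scatter search skeleton underlying such metaheuristics is compact. The loop below is a generic illustration with our own objective and combination rule, not the authors' method: keep a small reference set of parameter vectors, combine pairs, and replace the worst member whenever a child improves on it.

      import random

      def combine(a, b):
          """Illustrative combination: a random convex blend of two solutions."""
          w = random.random()
          return [w * x + (1 - w) * y for x, y in zip(a, b)]

      def scatter_search(objective, ref_set, iterations=500):
          for _ in range(iterations):
              a, b = random.sample(ref_set, 2)
              child = combine(a, b)
              worst = max(ref_set, key=objective)
              if objective(child) < objective(worst):
                  ref_set[ref_set.index(worst)] = child
          return min(ref_set, key=objective)

      sphere = lambda v: sum(x * x for x in v)        # stand-in objective
      ref = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(8)]
      print(scatter_search(sphere, ref))              # approaches [0, 0, 0]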

  10. Set covering algorithm, a subprogram of the scheduling algorithm for mission planning and logistic evaluation

    NASA Technical Reports Server (NTRS)

    Chang, H.

    1976-01-01

    A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
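
    The SCA used here is an exact implicit-enumeration method, so the sketch below is not that algorithm; it is the classical greedy heuristic for the same underlying problem, included only to make the set covering model concrete: repeatedly pick the column that covers the most still-uncovered rows per unit cost. Data are illustrative and assumed coverable.

      def greedy_set_cover(rows, columns, cost):
          """columns: name -> set of covered rows; cost: name -> positive cost."""
          uncovered, chosen = set(rows), []
          while uncovered:
              best = max(
                  (c for c in columns if columns[c] & uncovered),
                  key=lambda c: len(columns[c] & uncovered) / cost[c],
              )
              chosen.append(best)
              uncovered -= columns[best]
          return chosen

      cols = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
      cost = {"A": 2.0, "B": 1.0, "C": 1.0, "D": 1.5}
      print(greedy_set_cover({1, 2, 3, 4, 5}, cols, cost))  # ['B', 'D', 'A'] here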

  11. [Do different survey settings influence the prevalence of symptoms? A methodological comparison using the Youth Self-Report].

    PubMed

    Prüss, Ulrike; von Widdern, Susanne; von Ferber, Christian

    2005-10-01

    The self-reported emotional and behavioural disorders among adolescents were assessed by the Youth Self-Report (YSR). The YSR was administered either in households or in classrooms. The goal of the study was to test whether these different settings affect the prevalence rates of symptoms reported in the YSR. Mean scores and standard deviations of the problem scales of two classroom samples and one household sample, generally used as a reference, were compared. The data were also compared with two classroom samples from Sweden and Greece. Statistical analyses were performed by means of unpaired t-tests, and the magnitude of the effects was evaluated by means of Cohen's criteria. Classroom samples detected a significantly higher prevalence of symptoms than did household samples. This is the case for almost all of the problem scales in the YSR. The result of our study supports the finding that administering self-report questionnaires in a classroom setting itself affects the prevalence of symptoms assessed by the YSR. The results of surveys may be influenced, to a much greater degree than previously thought, by the settings in which they are administered. Further research is needed to identify the specific influences that differ for surveys administered at home and at school.

  12. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates.

    PubMed

    Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard

    2013-09-06

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample-scarcity or due to duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, including standard t test, moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
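
    The core of the improved rank products method, ranking within each replicate and averaging over only the observed values, can be sketched directly. The normalization and sign convention below are our own illustrative choices, not the authors' exact implementation.

      import numpy as np

      def rank_product(fold_changes):
          """fold_changes: features x replicates array; NaN marks a missing value."""
          fc = np.asarray(fold_changes, dtype=float)
          log_ranks = np.full_like(fc, np.nan)
          for j in range(fc.shape[1]):
              col = fc[:, j]
              ok = ~np.isnan(col)
              order = np.argsort(-col[ok])          # rank 1 = strongest up-regulation
              ranks = np.empty(ok.sum())
              ranks[order] = np.arange(1, ok.sum() + 1)
              log_ranks[ok, j] = np.log(ranks / ok.sum())   # normalize by column size
          # geometric mean of normalized ranks over the replicates actually observed
          return np.exp(np.nanmean(log_ranks, axis=1))

      data = np.array([[2.0, 1.8, np.nan],
                       [0.5, np.nan, 0.6],
                       [1.1, 1.0, 1.2]])
      print(rank_product(data))   # small values = consistently top-ranked features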

  13. System for solving diagnosis and hitting set problems

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)

    2007-01-01

    The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
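
    The relation between conflict sets and diagnoses can be shown with a plain branching search; note this is a generic illustration of the hitting-set formulation, not the matrix-based method of the invention. Each conflict set lists components that cannot all be healthy, and a diagnosis is a minimal set of components intersecting every conflict set.

      def minimal_hitting_sets(conflicts, partial=frozenset()):
          """Return all minimal sets hitting every conflict set."""
          remaining = [c for c in conflicts if not (c & partial)]
          if not remaining:
              return {partial}
          found = set()
          for comp in sorted(remaining[0]):      # branch on one unhit conflict set
              found |= minimal_hitting_sets(conflicts, partial | {comp})
          # prune any set that strictly contains another
          return {s for s in found if not any(t < s for t in found)}

      conflicts = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
      for hs in minimal_hitting_sets(conflicts):
          print(sorted(hs))   # the three minimal diagnoses of size two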

  14. The benefits of computer-generated feedback for mathematics problem solving.

    PubMed

    Fyfe, Emily R; Rittle-Johnson, Bethany

    2016-07-01

    The goal of the current research was to better understand when and why feedback has positive effects on learning and to identify features of feedback that may improve its efficacy. In a randomized experiment, second-grade children received instruction on a correct problem-solving strategy and then solved a set of relevant problems. Children were assigned to receive no feedback, immediate feedback, or summative feedback from the computer. On a posttest the following day, feedback resulted in higher scores relative to no feedback for children who started with low prior knowledge. Immediate feedback was particularly effective, facilitating mastery of the material for children with both low and high prior knowledge. Results suggest that minimal computer-generated feedback can be a powerful form of guidance during problem solving. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Weak convergence of a projection algorithm for variational inequalities in a Banach space

    NASA Astrophysics Data System (ADS)

    Iiduka, Hideaki; Takahashi, Wataru

    2008-03-01

    Let C be a nonempty, closed convex subset of a Banach space E. In this paper, motivated by Alber [Ya.I. Alber, Metric and generalized projection operators in Banach spaces: Properties and applications, in: A.G. Kartsatos (Ed.), Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, in: Lecture Notes Pure Appl. Math., vol. 178, Dekker, New York, 1996, pp. 15-50], we introduce the following iterative scheme for finding a solution of the variational inequality problem for an inverse-strongly-monotone operator A in a Banach space: $x_1 = x \in C$ and $x_{n+1} = \Pi_C J^{-1}(J x_n - \lambda_n A x_n)$ for every $n = 1, 2, \ldots$, where $\Pi_C$ is the generalized projection from E onto C, J is the duality mapping from E into $E^*$, and $\{\lambda_n\}$ is a sequence of positive real numbers. Then we show a weak convergence theorem (Theorem 3.1). Finally, using this result, we consider the convex minimization problem, the complementarity problem, and the problem of finding a point $u \in E$ satisfying $0 = Au$.
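
    In a Hilbert space the duality mapping J is the identity and the generalized projection reduces to the metric projection, so the scheme specializes to a projected-gradient-type iteration. A minimal numpy sketch under that simplifying assumption (all names illustrative):

        import numpy as np

        def projected_iteration(A, project_C, x1, lam, n_iter=500):
            """x_{n+1} = P_C(x_n - lam * A(x_n)) in a Hilbert space."""
            x = np.asarray(x1, dtype=float)
            for _ in range(n_iter):
                x = project_C(x - lam * A(x))
            return x

        # Example: A is the gradient of f(x) = ||x - b||^2 / 2 and C is the
        # nonnegative orthant, so the iteration solves min_{x >= 0} f(x).
        b = np.array([1.0, -2.0, 3.0])
        sol = projected_iteration(A=lambda x: x - b,
                                  project_C=lambda x: np.maximum(x, 0.0),
                                  x1=np.zeros(3), lam=0.5)
        print(sol)   # approximately [1, 0, 3]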

  16. Toward interactive scheduling systems for managing medical resources.

    PubMed

    Oddi, A; Cesta, A

    2000-10-01

    Managers of medico-hospital facilities face two general problems when allocating resources to activities: (1) finding an agreement between several contrasting requirements; and (2) managing dynamic and uncertain situations in which constraints suddenly change over time due to medical needs. This paper describes the results of research aimed at applying constraint-based scheduling techniques to the management of medical resources. A mixed-initiative problem solving approach is adopted in which a user and a decision support system interact to incrementally achieve a satisfactory solution to the problem. A running prototype called Interactive Scheduler is described, which offers a set of functionalities for mixed-initiative interaction to cope with medical resource management. Interactive Scheduler is endowed with a representation schema used for describing the medical environment, a set of algorithms that address the specific problems of the domain, and an innovative interaction module that supports the dialogue between the support system and its user. A particular contribution of this work is the explicit representation of constraint violations, and the definition of scheduling algorithms that aim at minimizing the amount of constraint violations in a solution.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Zhaojun; Yang, Chao

    What is common among electronic structure calculation, design of MEMS devices, vibrational analysis of high speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.

  18. The method of lines in analyzing solids containing cracks

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.

    1990-01-01

    A semi-numerical method is reviewed for solving a set of coupled partial differential equations subject to mixed and possibly coupled boundary conditions. The line method of analysis is applied to the Navier-Cauchy equations of elastic and elastoplastic equilibrium to calculate the displacement distributions in various simple-geometry bodies containing cracks. The application of this method to the appropriate field equations leads to coupled sets of simultaneous ordinary differential equations whose solutions are obtained along sets of lines in a discretized region. When decoupling of the equations and their boundary conditions is not possible, a successive approximation procedure permits the analytical solution of the resulting ordinary differential equations. The use of this method is illustrated by reviewing and presenting selected solutions of mixed boundary value problems in three-dimensional fracture mechanics. These solutions are of great importance in fracture toughness testing, where accurate stress and displacement distributions are required for the calculation of certain fracture parameters. Computations for typical flawed specimens include those for elastic as well as elastoplastic response. Problems in both Cartesian and cylindrical coordinate systems are included. Results are summarized for a finite-geometry rectangular bar with a central through-the-thickness or rectangular surface crack under remote uniaxial tension. In addition, stress and displacement distributions are reviewed for finite circular bars with embedded penny-shaped cracks, and for rods with external annular or ring cracks under opening-mode tension. The results obtained show that the method of lines presents a systematic approach to the solution of some three-dimensional mechanics problems with arbitrary boundary conditions. The advantage of this method over other numerical solutions is that good results are obtained even with a relatively coarse grid.
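
    For readers unfamiliar with the method of lines, here is a generic sketch on a parabolic model problem (not the elasticity systems treated above): semi-discretizing u_t = u_xx in x leaves exactly the kind of coupled ODE system the paper solves along lines, here integrated with scipy:

        import numpy as np
        from scipy.integrate import solve_ivp

        n = 50
        x = np.linspace(0.0, 1.0, n + 2)
        h = x[1] - x[0]

        def rhs(t, u):
            # second difference in x along interior lines; u(0)=u(1)=0 (Dirichlet)
            u_full = np.concatenate(([0.0], u, [0.0]))
            return (u_full[2:] - 2 * u_full[1:-1] + u_full[:-2]) / h**2

        u0 = np.sin(np.pi * x[1:-1])                   # single Fourier mode
        sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF")
        print(sol.y[:, -1].max(), np.exp(-np.pi**2 * 0.1))   # numeric vs exact decay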

  19. Solving LP Relaxations of Large-Scale Precedence Constrained Problems

    NASA Astrophysics Data System (ADS)

    Bienstock, Daniel; Zuckerberg, Mark

    We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.

  20. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region near the current iterate. This is the key property that guarantees the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
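
    A bare-bones GPS sketch in Python (coordinate poll directions and mesh halving only; the paper's contribution, reshaping the poll set to conform to the known nondifferentiability hyperplanes, is omitted):

        import numpy as np

        def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
            x = np.asarray(x0, dtype=float)
            n = len(x)
            directions = np.vstack([np.eye(n), -np.eye(n)])   # maximal positive basis
            fx = f(x)
            for _ in range(max_iter):
                for d in directions:
                    if f(x + step * d) < fx:                  # successful poll
                        x = x + step * d
                        fx = f(x)
                        break
                else:
                    step *= 0.5                               # refine the mesh
                    if step < tol:
                        break
            return x, fx

        # Nonsmooth test: the kink set is the hyperplane x1 = 0.
        print(pattern_search(lambda x: abs(x[0]) + (x[1] - 1.0) ** 2, [2.0, -1.0]))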

  1. Known Data Problems | ECHO | US EPA

    EPA Pesticide Factsheets

    EPA manages a series of national information systems that include data flowing from staff in EPA and state/tribal/local offices. Given this fairly complex set of transactions, occasional problems occur with the migration of data into the national systems. This page is meant to explain known data quality problems with larger sets of data.

  2. Generalized Reduction of Problem Behavior of Young Children with Autism: Building Trans-Situational Interventions

    ERIC Educational Resources Information Center

    Schindler, Holly Reed; Horner, Robert H.

    2005-01-01

    The effects of functional communication training on the generalized reduction of problem behavior with three 4- to 5-year-old children with autism and problem behavior were evaluated. Participants were assessed in primary teaching settings and in three secondary, generalization settings. Through baseline analysis, lower effort interventions in the…

  3. Wraparound: As a Tertiary Level Intervention for Students with Emotional/Behavioral Needs

    ERIC Educational Resources Information Center

    Eber, Lucille; Breen, Kimberli; Rose, Jennifer; Unizycki, Renee M.; London, Tasha H.

    2008-01-01

    If a student has multiple behavior problems that escalate over time and across different settings, school-based problem-solving teams can become quickly overwhelmed, especially when educators identify "setting events" for problem behaviors that have occurred outside of school and are beyond the control of school personnel. Instead of resorting to…

  4. Existence and Hadamard well-posedness of a system of simultaneous generalized vector quasi-equilibrium problems.

    PubMed

    Zhang, Wenyan; Zeng, Jing

    2017-01-01

    An existence result for the solution set of a system of simultaneous generalized vector quasi-equilibrium problems (for short, (SSGVQEP)) is obtained, which improves Theorem 3.1 of the work of Ansari et al. (J. Optim. Theory Appl. 127:27-44, 2005). Moreover, a definition of Hadamard-type well-posedness for (SSGVQEP) is introduced and sufficient conditions for Hadamard well-posedness of (SSGVQEP) are established.

  5. WND-CHARM: Multi-purpose image classification using compound image transforms

    PubMed Central

    Orlov, Nikita; Shamir, Lior; Macura, Tomasz; Johnston, Josiah; Eckley, D. Mark; Goldberg, Ilya G.

    2008-01-01

    We describe a multi-purpose image classifier that can be applied to a wide variety of image classification tasks without modifications or fine-tuning, and yet provide classification accuracy comparable to state-of-the-art task-specific image classifiers. The proposed image classifier first extracts a large set of 1025 image features including polynomial decompositions, high contrast features, pixel statistics, and textures. These features are computed on the raw image, transforms of the image, and transforms of transforms of the image. The feature values are then used to classify test images into a set of pre-defined image classes. This classifier was tested on several different problems including biological image classification and face recognition. Although we cannot make a claim of universality, our experimental results show that this classifier performs as well or better than classifiers developed specifically for these image classification tasks. Our classifier’s high performance on a variety of classification problems is attributed to (i) a large set of features extracted from images; and (ii) an effective feature selection and weighting algorithm sensitive to specific image classification problems. The algorithms are available for free download from openmicroscopy.org. PMID:18958301

  6. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
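
    For contrast, a plain numpy sketch of the exact l-fold cross-validation being accelerated: each fold solves a dense linear system, which is the O(n^3) cost the multi-level circulant approximation replaces (names illustrative):

        import numpy as np

        def kfold_cv_krr(K, y, lam, folds=10, seed=0):
            """Exact l-fold CV squared error for kernel ridge regression."""
            n = len(y)
            idx = np.random.default_rng(seed).permutation(n)
            err = 0.0
            for fold in np.array_split(idx, folds):
                train = np.setdiff1d(idx, fold)
                K_tr = K[np.ix_(train, train)]
                alpha = np.linalg.solve(K_tr + lam * np.eye(len(train)), y[train])
                err += np.sum((y[fold] - K[np.ix_(fold, train)] @ alpha) ** 2)
            return err / n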

  7. Solving Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) using BRKGA with local search

    NASA Astrophysics Data System (ADS)

    Prasetyo, H.; Alfatsani, M. A.; Fauza, G.

    2018-05-01

    The main issue in the vehicle routing problem (VRP) is finding the shortest route for product distribution from the depot to outlets so as to minimize the total cost of distribution. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of VRP that accommodates vehicle capacity and a distribution period. Since CCVRPTW is NP-hard, it requires an efficient and effective solution algorithm. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve CCVRPTW. The algorithm design was coded in MATLAB. Using numerical tests, optimal algorithm parameters were set, and the algorithm was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search yielded a lower total distribution cost than the heuristic method. Moreover, the developed algorithm successfully improved on the performance of standard BRKGA.
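
    The core of BRKGA is a problem-independent GA over random-key vectors plus a problem-specific decoder. A minimal sketch of a capacity-splitting decoder for a routing chromosome (illustrative only; the paper's MATLAB implementation, local search, and time-window handling are not reproduced):

        import numpy as np

        def decode_route(keys, demand, capacity):
            """Sort customers by their random keys to get a giant tour,
            then cut the tour whenever vehicle capacity would be exceeded."""
            routes, current, load = [], [], 0.0
            for c in np.argsort(keys):
                if load + demand[c] > capacity:
                    routes.append(current)
                    current, load = [], 0.0
                current.append(int(c))
                load += demand[c]
            if current:
                routes.append(current)
            return routes

        rng = np.random.default_rng(1)
        print(decode_route(rng.random(6), np.ones(6), capacity=3))   # two routes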

  8. Two-Stage orders sequencing system for mixed-model assembly

    NASA Astrophysics Data System (ADS)

    Zemczak, M.; Skolud, B.; Krenczyk, D.

    2015-11-01

    In the paper, the authors focus on the NP-hard problem of order sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is an assembly line in an automotive industry company, on which a few different product models, each in a certain number of versions, are assembled on shared resources set in a line. Such a production type is usually described as mixed-model production, and it arose from the necessity of manufacturing customized products on the basis of very specific orders from single clients. Producers are nowadays obliged to let each client determine a huge number of features of the product they are willing to buy, as competition in the automotive market is strong. Because the problem is NP-hard, in the given time period only satisfactory solutions are sought, as no method for finding the optimal solution is yet known. Most researchers who applied inexact methods (e.g. evolutionary algorithms) to sequencing problems dropped the research after the testing phase, as they were not able to obtain reproducible results and met problems while determining the quality of the received solutions. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of determined rules implemented in a computer environment. The system works in two stages. The first is concerned with determining the place in the storage buffer to which certain production orders should be sent. In the second stage, precise sets of sequences are determined and evaluated for certain parts of the storage buffer under certain criteria.

  9. Psychiatric services in primary care settings: a survey of general practitioners in Thailand

    PubMed Central

    Lotrakul, Manote; Saipanish, Ratana

    2006-01-01

    Background General Practitioners (GPs) in Thailand play an important role in treating psychiatric disorders since there is a shortage of psychiatrists in the country. Our aim was to examine GPs' perceptions of psychiatric problems, drug treatment, and service problems encountered in primary care settings. Methods We distributed 1,193 postal questionnaires inquiring about psychiatric practices and service problems to doctors in primary care settings throughout Thailand. Results Four hundred and thirty-four questionnaires (36.4%) were returned. Sixty-seven of the respondents (15.4%) who had taken further special training in various fields were excluded from the analysis, giving a total of 367 GPs in this study. Fifty-six per cent of respondents were males and they had worked for 4.6 years on average (median = 3 years). Of the total patients examined, 65.6% (SD = 19.3) had physical problems, 10.7% (SD = 7.9) had psychiatric problems, and 23.9% (SD = 16.0) had both. The most common psychiatric diagnoses were anxiety disorders (37.5%), alcohol and drug abuse (28.1%), and depressive disorders (29.2%). Commonly prescribed psychotropic drug classes were anxiolytics and antidepressants. The psychotropic drugs most frequently prescribed were diazepam among anti-anxiety drugs, amitriptyline among antidepressants, and haloperidol among antipsychotics. Conclusion Most drugs available through primary care were the same as those that existed 3 decades ago. There should be an adequate supply of new and appropriate psychotropic drugs in primary care. Case-finding instruments for common mental disorders might be helpful for GPs whose quality of practice is limited by large numbers of patients. However, the service delivery system should be modified in order to maintain successful care for a large number of psychiatric patients. PMID:16867187

  10. The Ecocultural Context and Child Behavior Problems: A Qualitative Analysis in Rural Nepal

    PubMed Central

    Burkey, Matthew D.; Ghimire, Lajina; Adhikari, Ramesh Prasad; Wissow, Lawrence S.; Jordans, Mark J.D.; Kohrt, Brandon A.

    2016-01-01

    Commonly used paradigms for studying child psychopathology emphasize individual-level factors and often neglect the role of context in shaping risk and protective factors among children, families, and communities. To address this gap, we evaluated influences of ecocultural contextual factors on definitions, development of, and responses to child behavior problems and examined how contextual knowledge can inform culturally responsive interventions. We drew on Super and Harkness’ “developmental niche” framework to evaluate the influences of physical and social settings, childcare customs and practices, and parental ethnotheories on the definitions, development of, and responses to child behavior problems in a community in rural Nepal. Data were collected between February and October 2014 through in-depth interviews with a purposive sampling strategy targeting parents (N=10), teachers (N=6), and community leaders (N=8) familiar with child-rearing. Results were supplemented by focus group discussions with children (N=9) and teachers (N=8), pile-sort interviews with mothers (N=8) of school-aged children, and direct observations in homes, schools, and community spaces. Behavior problems were largely defined in light of parents’ socialization goals and role expectations for children. Certain physical settings and times were seen to carry greater risk for problematic behavior when children were unsupervised. Parents and other adults attempted to mitigate behavior problems by supervising them and their social interactions, providing for their physical needs, educating them, and through a shared verbal reminding strategy (samjhaune). The findings of our study illustrate the transactional nature of behavior problem development that involves context-specific goals, roles, and concerns that are likely to affect adults’ interpretations and responses to children’s behavior. Ultimately, employing a developmental niche framework will elucidate setting-specific risk and protective factors for culturally compelling intervention strategies. PMID:27173743

  11. Automated Planning for a Deep Space Communications Station

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Fisher, Forest; Mutz, Darren; Chien, Steve

    1999-01-01

    This paper describes the application of Artificial Intelligence planning techniques to the problem of antenna track plan generation for a NASA Deep Space Communications Station. The described system enables an antenna communications station to automatically respond to a set of tracking goals by correctly configuring the appropriate hardware and software to provide the requested communication services. To perform this task, the Automated Scheduling and Planning Environment (ASPEN) has been applied to automatically produce antenna tracking plans that are tailored to support a set of input goals. In this paper, we describe the antenna automation problem, the ASPEN planning and scheduling system, how ASPEN is used to generate antenna track plans, the results of several technology demonstrations, and future work utilizing dynamic planning technology.

  12. Establishing and Maintaining Treatment Effects with Less Intrusive Consequences VIA a Pairing Procedure

    PubMed Central

    Vorndran, Christina M; Lerman, Dorothea C

    2006-01-01

    The generality and long-term maintenance of a pairing procedure designed to improve the efficacy of less intrusive procedures were evaluated for the treatment of problem behavior maintained by automatic reinforcement exhibited by 2 individuals with developmental disabilities. Results suggested that a less intrusive procedure could be established as a conditioned punisher by pairing it with an effective punisher contingent on problem behavior. Generalization across multiple therapists was demonstrated for both participants. However, generalization to another setting was not achieved for 1 participant until pairing was conducted in the second setting. Long-term maintenance was observed with 1 participant in the absence of further pairing trials. Maintenance via intermittent pairing trials was successful for the other participant. PMID:16602384

  13. Hill-Climbing search and diversification within an evolutionary approach to protein structure prediction.

    PubMed

    Chira, Camelia; Horvath, Dragos; Dumitrescu, D

    2011-07-30

    Proteins are complex structures made of amino acids that have a fundamental role in the correct functioning of living cells. The structure of a protein is the result of the protein folding process. However, the general principles that govern the folding of natural proteins into a native structure are unknown. The problem of predicting a minimum-energy protein structure starting from the unfolded amino acid sequence is a highly complex and important task in molecular and computational biology. Protein structure prediction has important applications in fields such as drug design and disease prediction. The protein structure prediction problem is NP-hard even in simplified lattice protein models. An evolutionary model based on hill-climbing genetic operators is proposed for protein structure prediction in the hydrophobic-polar (HP) model. Problem-specific search operators are implemented and applied using a steepest-ascent hill-climbing approach. Furthermore, the proposed model enforces an explicit diversification stage during the evolution in order to avoid local optima. The main features of the resulting evolutionary algorithm, the hill-climbing mechanism and the diversification strategy, are evaluated in a set of numerical experiments for the protein structure prediction problem to assess their impact on the efficiency of the search process. Furthermore, the emerging consolidated model is compared to relevant algorithms from the literature on a set of difficult bidimensional instances from lattice protein models. The results obtained by the proposed algorithm are promising and competitive with those of related methods.

  14. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationships are discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by these different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to refinery production planning and a batch process scheduling problem are presented. PMID:21935263
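
    As a toy instance of the simplest case, left-hand-side uncertainty with a pure interval (box) set: for x >= 0 the worst coefficient draw is the upper interval end, so the robust counterpart is again a linear program (hypothetical numbers, solved with scipy):

        import numpy as np
        from scipy.optimize import linprog

        # Nominal: max 3*x1 + 2*x2  s.t.  2*x1 + 1*x2 <= 10,  x >= 0,
        # with interval uncertainty +/- a_hat on the constraint coefficients.
        a_nom = np.array([2.0, 1.0])
        a_hat = np.array([0.2, 0.1])

        res = linprog(c=[-3.0, -2.0],                   # maximize via negation
                      A_ub=[(a_nom + a_hat).tolist()],  # worst-case coefficients
                      b_ub=[10.0],
                      bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)   # robust solution and objective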

  15. BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.

    PubMed

    Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R

    2015-02-20

    Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
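
    A minimal calibration exercise in the spirit of these benchmarks, fitting the two rate constants of a toy kinetic chain to noisy observations with scipy (illustrative; orders of magnitude smaller than the suite's models):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def simulate(theta, t_obs, y0=(1.0, 0.0)):
            k1, k2 = theta          # A -> B -> (degraded)
            rhs = lambda t, y: [-k1 * y[0], k1 * y[0] - k2 * y[1]]
            return solve_ivp(rhs, (0.0, t_obs[-1]), y0, t_eval=t_obs).y

        t_obs = np.linspace(0.1, 5.0, 20)
        data = simulate([1.2, 0.4], t_obs)
        data += 0.01 * np.random.default_rng(0).normal(size=data.shape)

        fit = least_squares(lambda th: (simulate(th, t_obs) - data).ravel(),
                            x0=[0.5, 0.5], bounds=([0.0, 0.0], [10.0, 10.0]))
        print(fit.x)   # recovers approximately (1.2, 0.4)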

  16. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results than those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which takes a form similar to the conventional reinitialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  17. Relationship between perceived limit-setting abilities, autism spectrum disorder severity, behaviour problems and parenting stress in mothers of children with autism spectrum disorder.

    PubMed

    Reed, Phil; Howse, Jessie; Ho, Ben; Osborne, Lisa A

    2017-11-01

    Parenting stress in mothers of children with autism spectrum disorder (ASD) is high and impacts perceptions about parenting. This study examined the relationship between parenting stress and observer-perceived limit-setting ability. Participants' perceptions of other parents' limit-setting ability were assessed by showing participants video clips of parenting behaviours. Mothers of 93 children with autism spectrum disorder completed an online survey regarding the severity of their own child's autism spectrum disorder (Social Communication Questionnaire), their child's behaviour problems (Strengths and Difficulties Questionnaire) and their own levels of parenting stress (Questionnaire on Resources and Stress). They were shown five videos of other parents interacting with children with autism spectrum disorder and were asked to rate the limit-setting abilities observed in each video using the Parent-Child Relationship Inventory. Higher parenting stress negatively related to judgements about others' limit-setting skills. This mirrors the literature regarding the relationship between self-reported parenting stress and rating child behaviour more negatively. It suggests that stress negatively impacts a wide range of judgements and implies that caution may be required when interpreting the results of studies in which parenting skills are assessed by self-report.

  18. Development of Problem Sets for K-12 and Engineering on Pharmaceutical Particulate Systems

    ERIC Educational Resources Information Center

    Savelski, Mariano J.; Slater, C. Stewart; Del Vecchio, Christopher A.; Kosteleski, Adrian J.; Wilson, Sarah A.

    2010-01-01

    Educational problem sets have been developed on structured organic particulate systems (SOPS) used in pharmaceutical technology. The sets present topics such as particle properties and powder flow and can be integrated into K-12 and college-level curricula. The materials educate students in specific areas of pharmaceutical particulate processing,…

  19. A population-based model for priority setting across the care continuum and across modalities

    PubMed Central

    Segal, Leonie; Mortimer, Duncan

    2006-01-01

    Background The Health-sector Wide (HsW) priority setting model is designed to shift the focus of priority setting away from 'program budgets' – which are typically defined by modality or disease-stage – and towards well-defined target populations with a particular disease/health problem. Methods The key features of the HsW model are i) a disease/health problem framework, ii) a sequential approach to covering the entire health sector, iii) comprehensiveness of scope in identifying intervention options and iv) the use of objective evidence. The HsW model redefines the unit of analysis over which priorities are set to include all mutually exclusive and complementary interventions for the prevention and treatment of each disease/health problem under consideration. The HsW model is therefore incompatible with the fragmented approach to priority setting across multiple program budgets that currently characterises allocation in many health systems. The HsW model employs standard cost-utility analyses and decision-rules with the aim of maximising QALYs contingent upon the global budget constraint for the set of diseases/health problems under consideration. It is recognised that the objective function may include non-health arguments that would imply a departure from simple QALY maximisation and that political constraints frequently limit degrees of freedom. In addressing these broader considerations, the HsW model can be modified to maximise value-weighted QALYs contingent upon the global budget constraint and any political constraints bearing upon allocation decisions. Results The HsW model has been applied in several contexts, recently to osteoarthritis, which has demonstrated both its practical application and its capacity to derive clear evidence-based policy recommendations. Conclusion Comparisons with other approaches to priority setting, such as Programme Budgeting and Marginal Analysis (PBMA) and modality-based cost-effectiveness comparisons, as typified by Australia's Pharmaceutical Benefits Advisory Committee process for the listing of pharmaceuticals for government funding, demonstrate the value added by the HsW model, notably its greater likelihood of contributing to allocative efficiency. PMID:16566841
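
    The decision rule at the core of such cost-utility allocation is simple to state: fund options in increasing order of cost per QALY until the global budget is exhausted. A deliberately naive sketch (hypothetical options; real use would value-weight QALYs, handle mutually exclusive options, and respect political constraints):

        def allocate(interventions, budget):
            """Greedy cost-per-QALY funding under a global budget constraint."""
            funded, spent, qalys = [], 0.0, 0.0
            for name, cost, gain in sorted(interventions, key=lambda t: t[1] / t[2]):
                if spent + cost <= budget:
                    funded.append(name)
                    spent += cost
                    qalys += gain
            return funded, spent, qalys

        options = [("hip replacement", 8000, 4.0), ("statins", 1500, 0.5),
                   ("screening", 3000, 0.8), ("physiotherapy", 900, 0.4)]
        print(allocate(options, budget=10000))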

  20. Global dynamic optimization approach to predict activation in metabolic pathways.

    PubMed

    de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R

    2014-01-06

    During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles for simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques by considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework yields more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.

  1. Practice makes proficient: pigeons (Columba livia) learn efficient routes on full-circuit navigational traveling salesperson problems.

    PubMed

    Baron, Danielle M; Ramirez, Alejandro J; Bulitko, Vadim; Madan, Christopher R; Greiner, Ariel; Hurd, Peter L; Spetch, Marcia L

    2015-01-01

    Visiting multiple locations and returning to the start via the shortest route, referred to as the traveling salesman (or salesperson) problem (TSP), is a valuable skill for both humans and non-humans. In the current study, pigeons were trained with increasing set sizes of up to six goals, with each set size presented in three distinct configurations, until consistency in route selection emerged. After training at each set size, the pigeons were tested with two novel configurations. All pigeons acquired routes that were significantly more efficient (i.e., shorter in length) than expected by chance selection of the goals. On average, the pigeons also selected routes that were more efficient than expected based on a local nearest-neighbor strategy and were as efficient as the average route generated by a crossing-avoidance strategy. Analysis of the routes taken indicated that they conformed to both a nearest-neighbor and a crossing-avoidance strategy significantly more often than expected by chance. Both the time taken to visit all goals and the actual distance traveled decreased from the first to the last trials of training in each set size. On the first trial with novel configurations, average efficiency was higher than chance, but was not higher than expected from a nearest-neighbor or crossing-avoidance strategy. These results indicate that pigeons can learn to select efficient routes on a TSP problem.
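
    The two strategies the pigeons' routes were compared against are easy to state in code. A short sketch (illustrative coordinates; the brute-force optimum is feasible only for small set sizes like those used here):

        import math
        from itertools import permutations

        def route_length(route, pts):
            # full circuit: return to the starting location, as in the task
            return sum(math.dist(pts[route[i]], pts[route[(i + 1) % len(route)]])
                       for i in range(len(route)))

        def nearest_neighbor(pts, start=0):
            unvisited, route = set(range(len(pts))) - {start}, [start]
            while unvisited:
                nxt = min(unvisited, key=lambda j: math.dist(pts[route[-1]], pts[j]))
                route.append(nxt)
                unvisited.remove(nxt)
            return route

        pts = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 0)]
        best = min(permutations(range(1, len(pts))),
                   key=lambda p: route_length((0,) + p, pts))
        print(route_length(nearest_neighbor(pts), pts),
              route_length((0,) + best, pts))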

  2. Classification of highly unbalanced CYP450 data of drugs using cost sensitive machine learning techniques.

    PubMed

    Eitrich, T; Kless, A; Druska, C; Meyer, W; Grotendorst, J

    2007-01-01

    In this paper, we study the classification of unbalanced data sets of drugs. As an example we chose a data set of 2D6 inhibitors of cytochrome P450. The human cytochrome P450 2D6 isoform plays a key role in the metabolism of many drugs in the preclinical drug discovery process. We have collected a data set from annotated public data and calculated physicochemical properties with chemoinformatics methods. On top of these data, we have built classifiers based on machine learning methods. Data sets with skewed class distributions lead to the effect that conventional machine learning methods are biased toward the larger class. To overcome this problem and to obtain sensitive but also accurate classifiers, we combine machine learning and feature selection methods with techniques addressing the problem of unbalanced classification, such as oversampling and threshold moving. We have used our own implementation of a support vector machine algorithm as well as the maximum entropy method. Our feature selection is based on the unsupervised McCabe method. The classification results from our test set are compared structurally with compounds from the training set. We show that the applied algorithms enable the effective high throughput in silico classification of potential drug candidates.
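
    Two of the rebalancing techniques named above, oversampling and threshold moving, fit in a few lines. A sketch with a plain logistic model standing in for the paper's SVM and maximum-entropy classifiers (assumes scikit-learn is available; the minority class is labeled 1):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def fit_unbalanced(X, y, threshold=0.3, seed=0):
            """Oversample the minority class to balance training,
            then lower the decision threshold toward the rare class."""
            rng = np.random.default_rng(seed)
            minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]
            extra = rng.choice(minority, size=len(majority) - len(minority),
                               replace=True)
            idx = np.concatenate([majority, minority, extra])
            clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            predict = lambda X_new: (clf.predict_proba(X_new)[:, 1]
                                     >= threshold).astype(int)
            return clf, predict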

  3. Quasi-measures on the group G^m, Dirichlet sets, and uniqueness problems for multiple Walsh series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plotnikov, Mikhail G

    2011-02-11

    Multiple Walsh series (S) on the group G^m are studied. It is proved that every at most countable set is a uniqueness set for series (S) under convergence over cubes. The recovery problem is solved for the coefficients of series (S) that converge outside countable sets or outside sets of Dirichlet type. A number of analogues of the de la Vallee Poussin theorem are established for series (S). Bibliography: 28 titles.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function without requiring the computation of its inverse. Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.

  5. Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice

    DTIC Science & Technology

    2007-12-01

    bounds. For the openstacks, TPP, and pipesworld domains, our results were qualitatively different: most instances in these domains were either easy...between our results in these two sets of domains. For most instances in the openstacks domain we found no k values that elicited a "yes" answer in

  6. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    PubMed

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes outperform global reward schemes in combinatorial spaces, unlike in continuous spaces. An analysis of evolving meme behaviour is used to explain these findings.
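
    The credit-assignment choice the paper studies, rewarding a meme by the mean versus the extreme of its recent improvements, can be sketched directly (hypothetical data structure; a floor probability keeps every meme alive):

        def meme_probabilities(memes, window=10, p_min=0.05, reward="mean"):
            """Selection probabilities from recent per-meme improvements."""
            scores = []
            for m in memes:
                recent = m["improvements"][-window:]
                if reward == "mean":
                    scores.append(sum(recent) / len(recent) if recent else 0.0)
                else:                                  # "extreme"
                    scores.append(max(recent, default=0.0))
            k, total = len(memes), sum(scores)
            if total == 0:
                return [1.0 / k] * k
            return [p_min + (1.0 - k * p_min) * s / total for s in scores]

        memes = [{"improvements": [0.0, 3.0, 0.0]},     # occasional big win
                 {"improvements": [1.2, 1.2, 1.2]}]     # steady small wins
        print(meme_probabilities(memes, reward="mean"))     # favors the steady meme
        print(meme_probabilities(memes, reward="extreme"))  # favors the lucky one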

  7. PubMed Central

    Getty, Louise; de Courval, Louise Poulin

    1981-01-01

    The Speech and Hearing Department of the University of Montréal, in conjunction with ‘l'Unité de médecine familiale de Verdun’ set up a pilot project grouping family doctors, audiologists and speech pathologists. Information was exchanged on speech and language problems in children, stuttering, voice disorders, aphasia and hearing problems in children and adults. We emphasized the importance of early detection of these problems, of adequate information to the patient and his family and referral to the speech pathologist or to the audiologist. The results of this experience showed the importance of close collaboration between family doctors and communication specialists. PMID:21289800

  8. Distributed genetic algorithms for the floorplan design problem

    NASA Technical Reports Server (NTRS)

    Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.

    1991-01-01

    Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to the traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of this method and indicate the advantages of this method over other methods, such as simulated annealing. The method has performed better than the simulated annealing approach, both in terms of the average cost of the solutions found and the best-found solution, in almost all the problem instances tried.

  9. On Born's Conjecture about Optimal Distribution of Charges for an Infinite Ionic Crystal

    NASA Astrophysics Data System (ADS)

    Bétermin, Laurent; Knüpfer, Hans

    2018-04-01

    We study the problem of the optimal charge distribution on the sites of a fixed Bravais lattice. In particular, we prove Born's conjecture about the optimality of the rock salt alternate distribution of charges on a cubic lattice (and more generally on a d-dimensional orthorhombic lattice). Furthermore, we study this problem on the two-dimensional triangular lattice and we prove the optimality of a two-component honeycomb distribution of charges. The results hold for a class of completely monotone interaction potentials which includes Coulomb-type interactions for d≥3. In a more general setting, we derive a connection between the optimal charge problem and a minimization problem for the translated lattice theta function.

  10. Generalizing Backtrack-Free Search: A Framework for Search-Free Constraint Satisfaction

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy

    2000-01-01

    Tractable classes of constraint satisfaction problems are of great importance in artificial intelligence. Identifying and taking advantage of such classes can significantly speed up constraint problem solving. In addition, tractable classes are utilized in applications where strict worst-case performance guarantees are required, such as constraint-based plan execution. In this work, we present a formal framework for search-free (backtrack-free) constraint satisfaction. The framework is based on general procedures, rather than specific propagation techniques, and thus generalizes existing techniques in this area. We also relate search-free problem solving to the notion of decision sets and use the result to provide a constructive criterion that is sufficient to guarantee search-free problem solving.

  11. A cylindrical shell with an arbitrarily oriented crack

    NASA Technical Reports Server (NTRS)

    Yahsi, O. S.; Erdogan, F.

    1982-01-01

    The general problem of a shallow shell with constant curvatures is considered. It is assumed that the shell contains an arbitrarily oriented through crack and that the material is specially orthotropic. The nonsymmetric problem is solved for arbitrary self-equilibrating crack surface tractions, which, added to an appropriate solution for an uncracked shell, gives the result for a cracked shell under the most general loading conditions. The problem is reduced to a system of five singular integral equations in a set of unknown functions representing relative displacements and rotations on the crack surfaces. The stress state around the crack tip is analyzed asymptotically and it is shown that the results are identical to those obtained from the two-dimensional in-plane and antiplane elasticity solutions. Numerical results are given for a cylindrical shell containing an arbitrarily oriented through crack. Some sample results showing the effect of the Poisson's ratio and the material orthotropy are also presented.

  12. The Prevalence of Behavior Problems among People with Intellectual Disability Living in Community Settings

    ERIC Educational Resources Information Center

    Myrbakk, Even; Von Tetzchner, Stephen

    2008-01-01

    With the desegregation processes of services for people with intellectual disability (ID) that is taking place in most Western countries there is a need for more knowledge related to the prevalence of behavior problems among people living in community settings. This study investigates the prevalence of behavior problems among 140 adolescents and…

  13. Radiation Detection Computational Benchmark Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL's ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This is a report describing the details of the selected benchmarks and results from various transport codes.

  14. Cascade phenomenon against subsequent failures in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Liu, Zhi-Quan; He, Xuan; Ma, Jian-Feng

    2018-06-01

    Cascade phenomena may lead to catastrophic disasters which severely imperil network safety or security in various complex systems such as communication networks, power grids, social networks and so on. In some flow-based networks, the load of failed nodes can be redistributed locally to their neighboring nodes to prevent traffic oscillations or large-scale cascading failures. However, in such a local flow redistribution model, a small set of key nodes attacked subsequently can result in network collapse. It is therefore a critical problem to effectively find the set of key nodes in the network. To the best of our knowledge, this work is the first to study this problem comprehensively. We first introduce an extra capacity for every node to cope with flow fluctuations from neighbors, and two extra capacity distributions, a degree-based distribution and an average distribution, are employed. Four heuristic key-node discovery methods, High-Degree-First (HDF), Low-Degree-First (LDF), Random, and Greedy Algorithm (GA), are presented. Extensive simulations are carried out on both scale-free networks and random networks. The results show that the greedy algorithm can efficiently find the set of key nodes in both scale-free and random networks. Our work studies network robustness against cascading failures from a novel perspective, and the methods and results are very useful for network robustness evaluation and protection.
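
    A compact sketch of the greedy strategy: grow the attack set one node at a time, always adding the node whose failure (with local load redistribution to surviving neighbors) triggers the largest cascade. All structures are illustrative, and extra capacities are folded into the per-node capacity values:

        def cascade_size(adj, load, capacity, attacked):
            """Fail the attacked nodes, redistribute load locally, count failures."""
            failed, queue, L = set(attacked), list(attacked), dict(load)
            while queue:
                v = queue.pop()
                alive = [u for u in adj[v] if u not in failed]
                for u in alive:
                    L[u] += L[v] / len(alive)          # local redistribution
                    if L[u] > capacity[u]:
                        failed.add(u)
                        queue.append(u)
            return len(failed)

        def greedy_key_nodes(adj, load, capacity, k):
            chosen = []
            for _ in range(k):
                best = max((v for v in adj if v not in chosen),
                           key=lambda v: cascade_size(adj, load, capacity,
                                                      chosen + [v]))
                chosen.append(best)
            return chosen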

  15. Verification of ANSYS Fluent and OpenFOAM CFD platforms for prediction of impact flow

    NASA Astrophysics Data System (ADS)

    Tisovská, Petra; Peukert, Pavel; Kolář, Jan

    The main goal of the article is a verification of the heat transfer coefficient numerically predicted by two CFD platforms, ANSYS Fluent and OpenFOAM, on the problem of impact flow issuing from a 2D nozzle. Various mesh parameters and solver settings were tested under several boundary conditions and compared to known experimental results. The best solver settings, suitable for further optimization of more complex geometries, are identified.

  16. Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2018-03-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver, from whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus the solution of the problem, onto one where she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.

  17. Differential Associations of UPPS-P Impulsivity Traits With Alcohol Problems.

    PubMed

    McCarty, Kayleigh N; Morris, David H; Hatz, Laura E; McCarthy, Denis M

    2017-07-01

    The UPPS-P model posits that impulsivity comprises five factors: positive urgency, negative urgency, lack of planning, lack of perseverance, and sensation seeking. Negative and positive urgency are the traits most consistently associated with alcohol problems. However, previous work has examined alcohol problems either individually or in the aggregate, rather than examining multiple problem domains simultaneously. Recent work has also questioned the utility of distinguishing between positive and negative urgency, as this distinction did not meaningfully differ in predicting domains of psychopathology. The aims of this study were to address these issues by (a) testing unique associations of UPPS-P with specific domains of alcohol problems and (b) determining the utility of distinguishing between positive and negative urgency as risk factors for specific alcohol problems. Associations between UPPS-P traits and alcohol problem domains were examined in two cross-sectional data sets using negative binomial regression models. In both samples, negative urgency was associated with social/interpersonal, self-perception, risky behaviors, and blackout drinking problems. Positive urgency was associated with academic/occupational and physiological dependence problems. Both urgency traits were associated with impaired control and self-care problems. Associations for other UPPS-P traits did not replicate across samples. Results indicate that negative and positive urgency have differential associations with alcohol problem domains. Results also suggest a distinction between the type of alcohol problems associated with these traits-negative urgency was associated with problems experienced during a drinking episode, whereas positive urgency was associated with alcohol problems that result from longer-term drinking trends.
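
    The regression setup is straightforward to reproduce. A sketch with statsmodels on simulated stand-in data (hypothetical numbers; the real analyses used the two cross-sectional samples described above, with one model per alcohol-problem domain):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))     # e.g., negative urgency, positive
                                          # urgency, sensation seeking scores
        y = rng.poisson(np.exp(0.4 * X[:, 0] + 0.2 * X[:, 1]))   # toy problem counts

        # Negative binomial regression of a problem-domain count on the traits
        model = sm.GLM(y, sm.add_constant(X),
                       family=sm.families.NegativeBinomial()).fit()
        print(model.summary())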

  18. A fuzzy-ontology-oriented case-based reasoning framework for semantic diabetes diagnosis.

    PubMed

    El-Sappagh, Shaker; Elmogy, Mohammed; Riad, A M

    2015-11-01

    Case-based reasoning (CBR) is a problem-solving paradigm that uses past knowledge to interpret or solve new problems. It is suitable for experience-based and theory-less problems. Building a semantically intelligent CBR that mimics expert thinking can solve many problems, especially medical ones. Knowledge-intensive CBR using formal ontologies is an evolution of this paradigm. Ontologies can be used for case representation and storage, as well as for background knowledge. Using standard medical ontologies, such as SNOMED CT, enhances interoperability and integration with health care systems. Moreover, utilizing vague or imprecise knowledge further improves the CBR's semantic effectiveness. This paper proposes a fuzzy-ontology-based CBR framework. It proposes a fuzzy case-base OWL2 ontology and a fuzzy semantic retrieval algorithm that handles many feature types. This framework is implemented and tested on the diabetes diagnosis problem. The fuzzy ontology is populated with 60 real diabetic cases. The effectiveness of the proposed approach is illustrated with a set of experiments and case studies. The resulting system can answer complex medical queries related to semantic understanding of medical concepts and handling of vague terms. The resulting fuzzy case-base ontology has 63 concepts, 54 (fuzzy) object properties, 138 (fuzzy) datatype properties, 105 fuzzy datatypes, and 2640 instances. The system achieves an accuracy of 97.67%. We compare our framework with existing CBR systems and a set of five machine-learning classifiers; our system outperforms all of them. Building an integrated CBR system can improve its performance. Representing CBR knowledge using the fuzzy ontology, and building a case retrieval algorithm that treats different features differently, improves the accuracy of the resulting systems. Copyright © 2015 Elsevier B.V. All rights reserved.
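
    For illustration, the minimal sketch below shows the kind of retrieval step such a framework builds on: a weighted global similarity over mixed feature types, with numeric and categorical features treated differently. The toy case base, feature names, and weights are hypothetical; the paper's actual algorithm performs fuzzy semantic retrieval over an OWL2 ontology and is considerably richer.

      # Minimal sketch of weighted case retrieval over mixed feature types.
      # Hypothetical toy data; not the paper's fuzzy ontology-based algorithm.

      def similarity(query, case, weights, numeric_ranges):
          """Weighted global similarity: numeric features use a normalized
          distance, categorical features use exact match."""
          score, total = 0.0, 0.0
          for feat, w in weights.items():
              q, c = query[feat], case[feat]
              if feat in numeric_ranges:            # numeric feature
                  lo, hi = numeric_ranges[feat]
                  local = 1.0 - abs(q - c) / (hi - lo)
              else:                                 # categorical feature
                  local = 1.0 if q == c else 0.0
              score += w * local
              total += w
          return score / total

      case_base = [
          {"age": 55, "bmi": 31.0, "sex": "F", "diagnosis": "diabetic"},
          {"age": 40, "bmi": 23.5, "sex": "M", "diagnosis": "non-diabetic"},
      ]
      weights = {"age": 1.0, "bmi": 2.0, "sex": 0.5}
      ranges = {"age": (0, 100), "bmi": (10, 50)}
      query = {"age": 52, "bmi": 29.4, "sex": "F"}

      best = max(case_base, key=lambda c: similarity(query, c, weights, ranges))
      print(best["diagnosis"])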

  19. On non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Anzaldo-Meneses, A.

    2015-04-01

    In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This class of problems allows one to find invariants via a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and, under certain circumstances, leads naturally to an infinite linear set of differential equations. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as known for locally periodic systems including a space-dependent effective mass.

  20. A learning approach to the bandwidth multicolouring problem

    NASA Astrophysics Data System (ADS)

    Akbari Torkestani, Javad

    2016-05-01

    In this article, a generalisation of the vertex colouring problem known as the bandwidth multicolouring problem (BMCP) is considered, in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and those assigned to its neighbours is never less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate, the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ)-optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can be made, again by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms and the results show the efficiency of the proposed algorithm in terms of the colour set size and running time.
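
    To make the bandwidth constraint concrete, the sketch below checks feasibility of a candidate multicolouring. The encoding and the toy instance are hypothetical, and the self-gap d[(v, v)] inside a vertex's own colour set follows the standard BMCP formulation rather than anything stated in the abstract; this is only the feasibility test, not the learning-automata solver.

      # Feasibility check for a bandwidth multicolouring (hypothetical encoding).
      # colours[v] is the colour set of vertex v; d[(u, v)] is the minimum
      # allowed gap between colours of u and v, with d[(v, v)] the gap required
      # inside a single vertex's own colour set (standard BMCP formulation).
      from itertools import combinations

      def is_feasible(colours, d):
          for (u, v), gap in d.items():
              if u == v:   # separation inside one vertex's colour set
                  if any(abs(a - b) < gap for a, b in combinations(colours[u], 2)):
                      return False
              else:        # separation between adjacent vertices
                  if any(abs(a - b) < gap for a in colours[u] for b in colours[v]):
                      return False
          return True

      colours = {0: {1, 5}, 1: {3, 8}, 2: {10}}
      d = {(0, 0): 3, (1, 1): 4, (0, 1): 2, (1, 2): 2}
      print(is_feasible(colours, d))  # True for this toy instance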

  1. Analysis of Plasma Bubble Signatures in the Ionosphere

    DTIC Science & Technology

    2011-03-01

    the equinoctial months resulted in greater slant TEC differences and, hence, greater communication problems. The results of this study not only...resulting in miscalculated enemy positions and misidentified space objects and orbit tracks. Errors in orbital positions could result in disastrous...uses a time-dependent physics-based model of the global ionosphere-plasmasphere and a Kalman filter as a basis for assimilating a diverse set of real

  2. Self-Affirmation Improves Problem-Solving under Stress

    PubMed Central

    Creswell, J. David; Dutcher, Janine M.; Klein, William M. P.; Harris, Peter R.; Levine, John M.

    2013-01-01

    High levels of acute and chronic stress are known to impair problem-solving and creativity on a broad range of tasks. Despite this evidence, we know little about protective factors for mitigating the deleterious effects of stress on problem-solving. Building on previous research showing that self-affirmation can buffer stress, we tested whether an experimental manipulation of self-affirmation improves problem-solving performance in chronically stressed participants. Eighty undergraduates indicated their perceived chronic stress over the previous month and were randomly assigned to either a self-affirmation or control condition. They then completed 30 difficult remote associate problem-solving items under time pressure in front of an evaluator. Results showed that self-affirmation improved problem-solving performance in underperforming chronically stressed individuals. This research suggests a novel means for boosting problem-solving under stress and may have important implications for understanding how self-affirmation boosts academic achievement in school settings. PMID:23658751

  3. The Relationship between Functional Status and Judgment/Problem Solving Among Individuals with Dementia

    PubMed Central

    Mayo, Ann M.; Wallhagen, Margaret; Cooper, Bruce A.; Mehta, Kala; Ross, Leslie; Miller, Bruce

    2012-01-01

    Objective: To determine the relationship between functional status (independent activities of daily living) and judgment/problem solving, and the extent to which select demographic characteristics such as dementia subtype and cognitive measures may moderate that relationship, in older adults with dementia. Methods: The National Alzheimer’s Coordinating Center Universal Data Set was accessed for a study sample of 3,855 individuals diagnosed with dementia. Primary variables included functional status, judgment/problem solving, and cognition. Results: Functional status was related to judgment/problem solving (r = 0.66; p < .0005). Functional status and cognition jointly predicted 56% of the variance in judgment/problem solving (R² = 0.56; p < .0005). As cognition decreased, the prediction of poorer judgment/problem solving by functional status became stronger. Conclusions: Among individuals with a diagnosis of dementia, declining functional status as well as declining cognition should raise concerns about judgment/problem solving. PMID:22786576

  4. Self-affirmation improves problem-solving under stress.

    PubMed

    Creswell, J David; Dutcher, Janine M; Klein, William M P; Harris, Peter R; Levine, John M

    2013-01-01

    High levels of acute and chronic stress are known to impair problem-solving and creativity on a broad range of tasks. Despite this evidence, we know little about protective factors for mitigating the deleterious effects of stress on problem-solving. Building on previous research showing that self-affirmation can buffer stress, we tested whether an experimental manipulation of self-affirmation improves problem-solving performance in chronically stressed participants. Eighty undergraduates indicated their perceived chronic stress over the previous month and were randomly assigned to either a self-affirmation or control condition. They then completed 30 difficult remote associate problem-solving items under time pressure in front of an evaluator. Results showed that self-affirmation improved problem-solving performance in underperforming chronically stressed individuals. This research suggests a novel means for boosting problem-solving under stress and may have important implications for understanding how self-affirmation boosts academic achievement in school settings.

  5. Identifying Attributes of CO2 Leakage Zones in Shallow Aquifers Using a Parametric Level Set Method

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Islam, A.; Wheeler, M.

    2016-12-01

    Leakage through abandoned wells and geologic faults poses the greatest risk to CO2 storage permanence. For shallow aquifers, secondary CO2 plumes emanating from the leak zones may go undetected for a sustained period of time and have the greatest potential to cause large-scale and long-term environmental impacts. Identification of the attributes of leak zones, including their shape, location, and strength, is required for proper environmental risk assessment. This study applies a parametric level set (PaLS) method to characterize the leakage zone. Level set methods are appealing for tracking topological changes and recovering unknown shapes of objects. However, level set evolution using conventional level set methods is challenging. In PaLS, the level set function is approximated using a weighted sum of basis functions, and the level set evolution problem is replaced by an optimization problem. The efficacy of PaLS is demonstrated by recovering the source zone created by CO2 leakage into a carbonate aquifer. Our results show that PaLS is a robust source identification method that can recover the approximate source locations in the presence of measurement errors, model parameter uncertainty, and inaccurate initial guesses of source flux strengths. The PaLS inversion framework introduced in this work is generic and can be adapted for any reactive transport model by switching the pre- and post-processing routines.
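
    A minimal sketch of the core PaLS idea, under the common choice of radial basis functions: the level set function is a weighted sum of bumps, the source zone is its positive region, and shape evolution reduces to adjusting weights and centers by ordinary optimization. All parameter values below are illustrative, not values from the study.

      import numpy as np

      # Parametric level set (PaLS) sketch: phi(x) = c0 + sum_i w_i * psi(r_i),
      # r_i = ||x - c_i|| / s_i, with the unknown zone given by {x: phi(x) > 0}.
      # Weights, centers and widths are illustrative placeholders.

      def rbf(r):
          return np.exp(-r**2)  # Gaussian bump as the basis function

      def phi(x, weights, centers, widths, c0=-0.5):
          r = np.linalg.norm(x[None, :] - centers, axis=1) / widths
          return c0 + np.sum(weights * rbf(r))

      weights = np.array([1.0, 0.8])
      centers = np.array([[0.0, 0.0], [1.0, 0.5]])
      widths  = np.array([0.6, 0.4])

      point = np.array([0.1, 0.1])
      inside = phi(point, weights, centers, widths) > 0  # in the source zone?
      print(inside)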

  6. Setting Goals and Objectives for LD Children-Process and Problems

    ERIC Educational Resources Information Center

    Gallistel, Elizabeth R.

    1978-01-01

    Discussed are problems and procedures in setting goals and objectives for learning disabled children in the implementation of Individualized Education Programs required by the Education for All Handicapped Children Act. (Author/ DLS)

  7. The acoustic impedance of a circular orifice in grazing mean flow: comparison with theory.

    PubMed

    Peat, Keith S; Ih, Jeong-Guon; Lee, Seong-Hyun

    2003-12-01

    It is well known that the presence of a grazing mean flow affects the acoustic impedance of an aperture, but the detailed nature of the influence is still not fully understood. In this paper, results from a recent theoretical analysis of the problem are compared with a new set of experimental results. The purpose is twofold. First, the experimental results are used to validate the theory. It is found that the theory predicts the resistance quite well, but not the reactance. Second, the theory is used to try to give some physical insight into the experimental results. In particular, some scaling laws are confirmed, and it is also shown that measured negative resistance values are to be expected. They are not erroneous, as previously thought. Former sets of experimental data for this problem are notable for the amount of variation that they display. Thus, both the theory and the new experimental results are also compared with those earlier detailed results that most closely conform to the conditions assumed here, namely fully developed turbulent pipe flow of low Mach number past circular orifices. The main field of application is in flow ducts, in particular, flow through perforated tubes in exhaust mufflers.

  8. Student evaluation team focus groups increase students' satisfaction with the overall course evaluation process.

    PubMed

    Brandl, Katharina; Mandel, Jess; Winegarden, Babbi

    2017-02-01

    Most medical schools use online systems to gather student feedback on the quality of their educational programmes and services. Online data may be limiting, however, as the course directors cannot question the students about written comments, nor can students engage in mutual problem-solving dialogue with course directors. We describe the implementation of a student evaluation team (SET) process to permit course directors and students to gather shortly after courses end to engage in feedback and problem solving regarding the course and course elements. Approximately 16 students were randomly selected to participate in each SET meeting, along with the course director, academic deans and other faculty members involved in the design and delivery of the course. An objective expert facilitates the SET meetings. SETs are scheduled for each of the core courses and threads that occur within the first 2 years of medical school, resulting in approximately 29 SETs annually. SET-specific satisfaction surveys submitted by students (n = 76) and course directors (n = 16) in 2015 were used to evaluate the SET process itself. Survey data were collected from 885 students (2010-2015), which measured student satisfaction with the overall evaluation process before and after the implementation of SETs. Students and course directors valued the SET process itself as a positive experience. Students felt that SETs allowed their voices to be heard, and that the SET increased the probability of suggested changes being implemented. Students' satisfaction with the overall evaluation process significantly improved after implementation of the SET process. Our data suggest that the SET process is a valuable way to supplement online evaluation systems and to increase students' and faculty members' satisfaction with the evaluation process. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  9. A Pilot Evaluation of a Tutorial to Teach Clients and Clinicians About Gambling Game Design.

    PubMed

    Turner, Nigel E; Robinson, Janine; Harrigan, Kevin; Ferentzy, Peter; Jindani, Farah

    2018-01-01

    This paper describes the pilot evaluation of an Internet-based intervention designed to teach counselors and problem gamblers how electronic gambling machines (EGMs) work. This study evaluated the tutorial using assessment tools such as rating scales and a test of knowledge about EGMs and random chance. The study results are based on a number of samples, including problem gambling counselors (n = 25) and problem gamblers (n = 26). The interactive tutorial was positively rated by both clients and counselors. In addition, we found a significant improvement in scores on a content test about EGM games for both clients and counselors. An analysis of the specific items suggests that the effects of the tutorial were mainly on those items that were most directly related to the content of the tutorial and did not always generalize to other items. This tutorial is available for use with clients and for educating counselors. The data also suggest that the tutorial is equally effective in group and individual settings. These results are promising and illustrate that the tool can be used to teach counselors and clients about game design. Further research is needed to evaluate its impact on gambling behavior.

  10. Studying marine stratus with large eddy simulation

    NASA Technical Reports Server (NTRS)

    Moeng, Chin-Hoh

    1990-01-01

    Data sets from field experiments over the stratocumulus regime may include complications from larger-scale variations, decoupled cloud layers, the diurnal cycle, entrainment instability, etc. On top of the already complicated turbulence-radiation-condensation processes within the cloud-topped boundary layer (CTBL), these complexities may sometimes make interpretation of the data sets difficult. To study these processes, a better understanding is needed of the basic processes involved in the prototype CTBL. For example, is cloud-top radiative cooling the primary source of the turbulent kinetic energy (TKE) within the CTBL? Historically, laboratory measurements have played an important role in addressing turbulence problems. The CTBL, however, is a turbulent field that is probably impossible to generate in the laboratory. Large eddy simulation (LES) is an alternative way of 'measuring' the turbulent structure under controlled environments, which allows the systematic examination of the basic physical processes involved. However, there are problems with the LES approach for the CTBL: the LES data need to be consistent with the observed data. The LES approach is discussed, and results are given which provide some insights into the simulated turbulent flow field. Problems with this approach for the CTBL and the information from the FIRE experiment needed to justify the LES results are discussed.

  11. A new approach to the convective parameterization of the regional atmospheric model BRAMS

    NASA Astrophysics Data System (ADS)

    Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.

    2013-05-01

    A simulation of the summer conditions of January 2010 was performed using the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) atmospheric model. The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. In this scheme, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and of atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute the best combination of the closure hypotheses of the convective scheme. This is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was defined as the quadratic difference between the combination of five simulated precipitation fields and the observations. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted, generating a new set of mass fluxes. The results indicated improved model skill with the new methodology compared with the old ensemble-mean calculation.
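
    The optimization target described above can be written compactly: given the member precipitation fields, find weights whose combination best matches the observed field in the least-squares sense. The sketch below shows only that objective; the study minimizes it with the firefly algorithm, which is not reproduced here, and all array shapes are illustrative.

      import numpy as np

      # Objective for the ensemble-weighting inverse problem (illustrative shapes):
      # five simulated precipitation fields, one observed (e.g., TRMM-derived) field.
      rng = np.random.default_rng(0)
      members = rng.random((5, 40, 60))      # 5 member fields on a 40x60 grid
      observed = rng.random((40, 60))        # observed precipitation field

      def objective(w, members, observed):
          """Quadratic misfit between the weighted combination and observation."""
          combined = np.tensordot(w, members, axes=1)   # sum_k w_k * field_k
          return np.sum((combined - observed) ** 2)

      w0 = np.full(5, 0.2)                   # equal weights (old ensemble mean)
      print(objective(w0, members, observed))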

  12. Least-Squares Spectral Element Solutions to the CAA Workshop Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Lin, Wen H.; Chan, Daniel C.

    1997-01-01

    This paper presents computed results for some of the CAA benchmark problems via the acoustic solver developed at the Rocketdyne CFD Technology Center under the corporate agreement between Boeing North American, Inc. and NASA for the Aerospace Industry Technology Program. The calculations are considered benchmark testing of the functionality, accuracy, and performance of the solver. Results of these computations demonstrate that the solver is capable of resolving the propagation of aeroacoustic signals. Testing on sound generation and on more realistic problems is now being pursued for industrial applications of this solver. Numerical calculations were performed for the second problem of Category 1 of the current workshop problems, an acoustic pulse scattered from a rigid circular cylinder, and for two of the first CAA workshop problems, i.e., the first problem of Category 1, the propagation of a linear wave, and the first problem of Category 4, an acoustic pulse reflected from a rigid wall in a uniform flow of Mach 0.5. The aim of including the last two problems in this workshop is to test the effectiveness of some boundary conditions set up in the solver. Numerical results for the last two benchmark problems have been compared with their corresponding exact solutions and the agreement is excellent. This demonstrates the high fidelity of the solver in handling wave propagation problems. This feature makes the method quite attractive for developing a computational acoustic solver for calculating aero/hydrodynamic noise in a violent flow environment.

  13. Distributed Sleep Scheduling in Wireless Sensor Networks via Fractional Domatic Partitioning

    NASA Astrophysics Data System (ADS)

    Schumacher, André; Haanpää, Harri

    We consider setting up sleep scheduling in sensor networks. We formulate the problem as an instance of the fractional domatic partition problem and obtain a distributed approximation algorithm by applying linear programming approximation techniques. Our algorithm is an application of the Garg-Könemann (GK) scheme that requires solving an instance of the minimum weight dominating set (MWDS) problem as a subroutine. Our two main contributions are a distributed implementation of the GK scheme for the sleep-scheduling problem and a novel asynchronous distributed algorithm for approximating MWDS based on a primal-dual analysis of Chvátal's set-cover algorithm. We evaluate our algorithm with ns2 simulations.
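
    For orientation, the sketch below gives the classical greedy heuristic for minimum weight dominating set that Chvátal-style set-cover analyses apply to: repeatedly pick the vertex covering the most still-undominated vertices per unit weight. It is a centralized toy with a hypothetical graph, not the paper's asynchronous distributed primal-dual algorithm.

      # Greedy MWDS heuristic (centralized sketch; the paper's algorithm is a
      # distributed, primal-dual variant). Graph and weights are hypothetical.

      def greedy_mwds(adj, weight):
          undominated = set(adj)
          dominating = set()
          while undominated:
              # vertex covering the most undominated vertices per unit weight
              def ratio(v):
                  covered = len((adj[v] | {v}) & undominated)
                  return covered / weight[v]
              best = max(adj, key=ratio)
              dominating.add(best)
              undominated -= adj[best] | {best}
          return dominating

      adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
      weight = {0: 2.0, 1: 1.0, 2: 1.5, 3: 1.0, 4: 3.0}
      print(greedy_mwds(adj, weight))  # {1, 3} dominates every vertex here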

  14. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing step on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit schemes need to satisfy and significantly reduces the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  15. The Implementation of an Alternative School Setting for High Risk Handicapped Adjudicated Juveniles Aged Nine to Sixteen Years.

    ERIC Educational Resources Information Center

    LaCoste, Linda D.

    This practicum was designed to provide an alternative setting for those handicapped adjudicated youths (ages 9-16) who, because of antisocial acts, behavioral/emotional problems, learning disabilities, attendance problems, and other reasons, were excluded from the mainstreamed setting. The practicum sought to enhance the cooperative efforts of the…

  16. Multiclass Reduced-Set Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Tang, Benyang; Mazzoni, Dominic

    2006-01-01

    There are well-established methods for reducing the number of support vectors in a trained binary support vector machine, often with minimal impact on accuracy. We show how reduced-set methods can be applied to multiclass SVMs made up of several binary SVMs, with significantly better results than reducing each binary SVM independently. Our approach is based on Burges' approach that constructs each reduced-set vector as the pre-image of a vector in kernel space, but we extend this by recomputing the SVM weights and bias optimally using the original SVM objective function. This leads to greater accuracy for a binary reduced-set SVM, and also allows vectors to be 'shared' between multiple binary SVMs for greater multiclass accuracy with fewer reduced-set vectors. We also propose computing pre-images using differential evolution, which we have found to be more robust than gradient descent alone. We show experimental results on a variety of problems and find that this new approach is consistently better than previous multiclass reduced-set methods, sometimes with a dramatic difference.

  17. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
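
    For reference, the problem being solved can be stated in its standard form. With Hilbert spaces H_1, H_2, H_3, bounded linear operators A: H_1 -> H_3 and B: H_2 -> H_3, and firmly quasi-nonexpansive operators U_i on H_1 and T_j on H_2 (conventional notation, assumed here rather than quoted from the paper), the multiple-set split equality common fixed-point problem is to

      \text{find } x \in \bigcap_{i=1}^{p} \operatorname{Fix}(U_i) \ \text{ and } \ y \in \bigcap_{j=1}^{r} \operatorname{Fix}(T_j) \ \text{ such that } \ Ax = By.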

  18. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems

    NASA Astrophysics Data System (ADS)

    Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2016-07-01

    As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
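
    As a reminder of the underlying objects (the standard definition, not one specific to this paper): a simplified neutrosophic set A over a universe X assigns to each element independent truth, indeterminacy and falsity degrees,

      A = \{\, \langle x,\; T_A(x),\; I_A(x),\; F_A(x) \rangle : x \in X \,\}, \qquad T_A(x),\, I_A(x),\, F_A(x) \in [0, 1],

    so that, unlike for an intuitionistic fuzzy set, the three degrees are independent and are constrained only by 0 ≤ T_A(x) + I_A(x) + F_A(x) ≤ 3.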

  19. Solving Fractional Programming Problems based on Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Raouf, Osama Abdel; Hezam, Ibrahim M.

    2014-04-01

    This paper presents a new approach to solving Fractional Programming Problems (FPPs) based on two different Swarm Intelligence (SI) algorithms: Particle Swarm Optimization and the Firefly Algorithm. The two algorithms are tested using several FPP benchmark examples and two selected industrial applications. The tests aim to demonstrate the capability of the SI algorithms to solve various types of FPPs. The solution results employing the SI algorithms are compared with a number of exact and metaheuristic solution methods used for handling FPPs. Swarm Intelligence can be regarded as an effective technique for solving linear or nonlinear, non-differentiable fractional objective functions. Problems with an optimal solution at a finite point and an unbounded constraint set can be solved using the proposed approach. Numerical examples are given to show the feasibility, effectiveness, and robustness of the proposed algorithms. The results obtained using the two SI algorithms revealed the superiority of the proposed technique over the others in computational time, and notably better accuracy was observed in the solution results for the industrial application problems.
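
    A minimal sketch of the first of the two SI methods on a toy fractional objective; the objective, bounds, and PSO constants below are illustrative defaults, not the paper's benchmarks or settings.

      import numpy as np

      # Toy particle swarm optimization on a fractional objective f = g(x)/h(x).
      rng = np.random.default_rng(1)

      def f(x):
          return (x[0]**2 + x[1]**2 + 1.0) / (x[0] + 2*x[1] + 4.0)  # toy FPP

      n, dim, iters = 30, 2, 200
      lo, hi = 0.0, 5.0                       # keeps the denominator positive
      pos = rng.uniform(lo, hi, (n, dim))
      vel = np.zeros((n, dim))
      pbest = pos.copy()
      pbest_val = np.apply_along_axis(f, 1, pos)
      gbest = pbest[pbest_val.argmin()].copy()

      for _ in range(iters):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          vals = np.apply_along_axis(f, 1, pos)
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmin()].copy()

      print(gbest, f(gbest))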

  20. JPRS Report, Soviet Union, World Economy & International Relations, No. 3, March 1987

    DTIC Science & Technology

    1987-06-15

    Efficiency Promotion in Industry (pp 28-40) (E. Vilkhovchenko) (not translated) 'Star Wars' and Washington's Allies: Western Europe on SDI (pp 41-48) (G...sets of complex problems. The first is the problems of war and peace, pollution of the environment and the socio-economic gap between industrially...developed capitalist countries and the developing world. The second has taken shape as a result of the evolution of state-monopolistic capitalism and

  1. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis to derive Non-Euclidean Wavelet-based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with the state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  2. We get the algorithms of our ground truths: Designing referential databases in digital image processing

    PubMed Central

    Jaton, Florian

    2017-01-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802

  3. A synoptic approach for analyzing erosion as a guide to land-use planning

    USGS Publications Warehouse

    Brown, William M.; Hines, Walter G.; Rickert, David A.; Beach, Gary L.

    1979-01-01

    A synoptic approach has been devised to delineate the relationships that exist between physiographic factors, land-use activities, and resultant erosional problems. The approach involves the development of an erosional-depositional province map and a numerical impact matrix for rating the potential for erosional problems. The province map is prepared by collating data on the natural terrain factors that exert the dominant controls on erosion and deposition in each basin. In addition, existing erosional and depositional features are identified and mapped from color-infrared, high-altitude aerial imagery. The axes of the impact matrix are composed of weighting values for the terrain factors used in developing the map and a second set of values for the prevalent land-use activities. The body of the matrix is composed of composite erosional-impact ratings resulting from the product of the factor sets. Together the province map and problem matrix serve as practical tools for estimating the erosional impact of human activities on different types of terrain. The approach has been applied to the Molalla River basin, Oregon, and has proven useful for the recognition of problem areas. The same approach is currently being used by the State of Oregon (in the 208 assessment of nonpoint-source pollution under Public Law 92-500) to evaluate the impact of land-management practices on stream quality.

  4. Effects of video-assisted training on employment-related social skills of adults with severe mental retardation.

    PubMed Central

    Morgan, R L; Salzberg, C L

    1992-01-01

    Two studies investigated effects of video-assisted training on employment-related social skills of adults with severe mental retardation. In video-assisted training, participants discriminated a model's behavior on videotape and received feedback from the trainer for responses to questions about video scenes. In the first study, 3 adults in an employment program participated in video-assisted training to request their supervisor's assistance when encountering work problems. Results indicated that participants discriminated the target behavior on video but effects did not generalize to the work setting for 2 participants until they rehearsed the behavior. In the second study, 2 participants were taught to fix and report four work problems using video-assisted procedures. Results indicated that after participants rehearsed how to fix and report one or two work problems, they began to fix and report the remaining problems with video-assisted training alone. PMID:1378826

  5. An application of robust ridge regression model in the presence of outliers to real data problem

    NASA Astrophysics Data System (ADS)

    Shariff, N. S. Md.; Ferdaos, N. A.

    2017-09-01

    Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. The well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is believed to be affected by the presence of outliers. The combination of GM-estimation and the ridge parameter, which is robust to both problems, is of interest in this study. As such, both techniques are employed to investigate the relationship between stock market prices and macroeconomic variables in Malaysia, since the data set involves both multicollinearity and outlier problems. Four macroeconomic factors are selected for this study: Consumer Price Index (CPI), Gross Domestic Product (GDP), Base Lending Rate (BLR) and Money Supply (M1). The results demonstrate that the proposed procedure produces reliable results in the presence of multicollinearity and outliers in the real data.
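
    For reference, the ordinary ridge estimator that the robust variant builds on is a one-liner; the GM-estimation step (iteratively downweighting outlying observations) is omitted here. The simulated data, the variable stand-ins, and the ridge parameter k are all illustrative.

      import numpy as np

      # Ordinary ridge estimator beta = (X'X + k I)^(-1) X'y (sketch only; the
      # study combines this with GM-estimation to downweight outliers).
      rng = np.random.default_rng(2)
      X = rng.normal(size=(100, 4))          # stand-ins for CPI, GDP, BLR, M1
      X[:, 3] = X[:, 2] + 0.01 * rng.normal(size=100)  # induce multicollinearity
      y = X @ np.array([1.0, -0.5, 2.0, 0.0]) + rng.normal(size=100)

      def ridge(X, y, k):
          p = X.shape[1]
          return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

      print(ridge(X, y, k=0.0))   # OLS: unstable for the collinear pair
      print(ridge(X, y, k=1.0))   # ridge: shrunken, more stable estimates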

  6. Sparse Covariance Matrix Estimation by DCA-Based Algorithms.

    PubMed

    Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham

    2017-11-01

    This letter proposes a novel approach using ℓ0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the ℓ0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the ℓ0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed on simulated and real data sets to study the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.
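
    One standard DC approximation of the ℓ0 term, of the kind the letter refers to, is the capped-ℓ1 function (shown here generically; the letter's exact approximations may differ). For a threshold θ > 0, each indicator |x_i|_0 is replaced by

      \min\!\Bigl(1, \frac{|x_i|}{\theta}\Bigr) \;=\; \frac{|x_i|}{\theta} \;-\; \max\!\Bigl(\frac{|x_i|}{\theta} - 1,\; 0\Bigr),

    an explicit difference of two convex functions, which is exactly the structure DCA exploits.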

  7. Student Classroom Misbehavior: An Exploratory Study Based on Teachers' Perceptions

    PubMed Central

    Sun, Rachel C. F.; Shek, Daniel T. L.

    2012-01-01

    This study aimed to examine the conceptions of junior secondary school student misbehaviors in classroom, and to identify the most common, disruptive, and unacceptable student problem behaviors from teachers' perspective. Twelve individual interviews with teachers were conducted. A list of 17 student problem behaviors was generated. Results showed that the most common and disruptive problem behavior was talking out of turn, followed by nonattentiveness, daydreaming, and idleness. The most unacceptable problem behavior was disrespecting teachers in terms of disobedience and rudeness, followed by talking out of turn and verbal aggression. The findings revealed that teachers perceived student problem behaviors as those behaviors involving rule-breaking, violating the implicit norms or expectations, being inappropriate in the classroom settings and upsetting teaching and learning, which mainly required intervention from teachers. PMID:22919297

  8. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    PubMed Central

    2018-01-01

    Mathematical models simulating several different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to the above parameters, no assumptions are made as regards linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature, to show the reliability of the model. PMID:29518121

  9. Mexican Hat Wavelet Kernel ELM for Multiclass Classification.

    PubMed

    Wang, Jie; Song, Yi-Fan; Ma, Tian-Lei

    2017-01-01

    Kernel extreme learning machine (KELM) is a novel feedforward neural network that is widely used in classification problems. To some extent, it solves the existing problems of invalid nodes and large computational complexity in ELM. However, the traditional KELM classifier usually has a low test accuracy when it faces multiclass classification problems. In order to solve the above problem, a new classifier, the Mexican Hat wavelet KELM classifier, is proposed in this paper. The proposed classifier successfully improves the training accuracy and reduces the training time in multiclass classification problems. Moreover, the validity of the Mexican Hat wavelet as a kernel function of ELM is rigorously proved. Experimental results on different data sets show that the performance of the proposed classifier is significantly superior to that of the compared classifiers.
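
    A minimal sketch of the kernel in question, using the standard Mexican hat form and the usual product-form wavelet-kernel construction; the scaling convention and dilation value are assumptions, and the paper's ELM integration is not reproduced.

      import numpy as np

      # Mexican hat wavelet kernel sketch: K(x, y) = prod_i psi((x_i - y_i) / a),
      # with psi(t) = (1 - t^2) * exp(-t^2 / 2). The dilation a is illustrative.

      def mexican_hat(t):
          return (1.0 - t**2) * np.exp(-t**2 / 2.0)

      def wavelet_kernel(X, Y, a=1.0):
          """Gram matrix between rows of X and rows of Y."""
          diff = (X[:, None, :] - Y[None, :, :]) / a
          return np.prod(mexican_hat(diff), axis=2)

      X = np.array([[0.0, 0.0], [1.0, 2.0]])
      K = wavelet_kernel(X, X)
      print(K)  # in kernel ELM, output weights solve (I/C + K) beta = T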

  10. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    PubMed

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating several different and representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to the above parameters, no assumptions are made as regards linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature, to show the reliability of the model.

  11. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    NASA Astrophysics Data System (ADS)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, the number of music collections has grown considerably, and there is a need to create playlists that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm to be used must be able to search the lists thoroughly, taking into account the quality of the playlist given a set of user constraints. In this paper we apply an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), using different combinations of parameter values, and select the best-performing set when used to solve four standard test functions. Performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show the better results obtained with the Differential Evolution approach using the optimized parameter values.
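
    For concreteness, the core DE step whose parameters (mutation factor F, crossover rate CR, population size) are being tuned looks like the sketch below on a standard test function; the values shown are common defaults, not the tuned set reported in the paper.

      import numpy as np

      # DE/rand/1/bin sketch on the sphere test function. F, CR and the
      # population size are illustrative defaults, not the study's tuned values.
      rng = np.random.default_rng(3)

      def sphere(x):
          return np.sum(x**2)

      n, dim, F, CR, iters = 20, 5, 0.8, 0.9, 300
      pop = rng.uniform(-5, 5, (n, dim))
      fit = np.apply_along_axis(sphere, 1, pop)

      for _ in range(iters):
          for i in range(n):
              r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3,
                                      replace=False)
              mutant = pop[r1] + F * (pop[r2] - pop[r3])      # mutation
              cross = rng.random(dim) < CR
              cross[rng.integers(dim)] = True                 # keep >= 1 gene
              trial = np.where(cross, mutant, pop[i])         # binomial crossover
              if sphere(trial) < fit[i]:                      # greedy selection
                  pop[i], fit[i] = trial, sphere(trial)

      print(fit.min())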

  12. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
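
    The replica-swap step that parallel tempering adds to a local sampler is small; the sketch below shows the standard Metropolis swap criterion between neighbouring temperatures, where the energy would be the number of violated clauses. This is the generic scheme, not TemperSAT's internals.

      import numpy as np

      # Standard parallel tempering swap criterion between neighbouring replicas.
      # E[k] is the energy (e.g., number of violated clauses) of replica k and
      # beta[k] its inverse temperature. Generic scheme, not TemperSAT's code.
      rng = np.random.default_rng(4)

      def try_swaps(E, beta, states):
          for k in range(len(beta) - 1):
              delta = (beta[k + 1] - beta[k]) * (E[k + 1] - E[k])
              if rng.random() < min(1.0, np.exp(delta)):
                  states[k], states[k + 1] = states[k + 1], states[k]
                  E[k], E[k + 1] = E[k + 1], E[k]

      E = [12.0, 7.0, 3.0]           # energies at increasing beta (colder replicas)
      beta = [0.1, 0.5, 1.0]
      states = ["s0", "s1", "s2"]    # placeholder replica configurations
      try_swaps(E, beta, states)
      print(E, states)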

  13. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  14. Fuzzy CMAC With incremental Bayesian Ying-Yang learning and dynamic rule construction.

    PubMed

    Nguyen, M N

    2010-04-01

    Inspired by the philosophy of ancient Chinese Taoism, Xu's Bayesian ying-yang (BYY) learning technique performs clustering by harmonizing the training data (yang) with the solution (ying). In our previous work, the BYY learning technique was applied to a fuzzy cerebellar model articulation controller (FCMAC) to find the optimal fuzzy sets; however, this is not suitable for time series data analysis. To address this problem, we propose an incremental BYY learning technique in this paper, built on a sliding window and a dynamic rule-structure algorithm. Three contributions are made as a result of this research. First, an online expectation-maximization algorithm incorporating the sliding window is proposed for the fuzzification phase. Second, the memory requirement is greatly reduced, since the entire data set no longer needs to be stored during the prediction process. Third, the rule-structure dynamic algorithm, which dynamically initializes, recruits, and prunes rules, relieves the "curse of dimensionality" problem that is inherent in the FCMAC. Because of these features, the experimental results on the benchmark data sets of currency exchange rates and Mackey-Glass show that the proposed model is more suitable for real-time streaming data analysis.

  15. Machine learning-based coreference resolution of concepts in clinical documents

    PubMed Central

    Ware, Henry; Mullett, Charles J; El-Rawas, Oussama

    2012-01-01

    Objective: Coreference resolution of concepts, although a very active area in the natural language processing community, has not yet been widely applied to clinical documents. Accordingly, the 2011 i2b2 competition focusing on this area is a timely and useful challenge. The objective of this research was to collate coreferent chains of concepts from a corpus of clinical documents. These concepts are in the categories of person, problems, treatments, and tests. Design: A machine learning approach based on graphical models was employed to cluster coreferent concepts. Features selected were divided into domain-independent and domain-specific sets. Training was done with the i2b2-provided training set of 489 documents with 6949 chains. Testing was done on 322 documents. Results: The learning engine, using the unweighted average of three different measurement schemes, resulted in an F measure of 0.8423 where no domain-specific features were included and 0.8483 where the feature set included both domain-independent and domain-specific features. Conclusion: Our machine learning approach is a promising solution for recognizing coreferent concepts, which in turn is useful for practical applications such as the assembly of problem and medication lists from clinical documents. PMID:22582205

  16. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST), along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  17. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST), along with input from other members of the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  18. [General conditions concerning the implementation of an outpatient education programme--characteristics and distinctions from an inpatient training programme].

    PubMed

    Brandes, I; Wunderlich, B; Niehues, C

    2011-04-01

    The aim of the EVA study was to develop an outpatient education programme for women with endometriosis with a view to permanent transfer into routine care. Implementation of the programme generated several problems and obstacles that are not present, or not to this extent, in the inpatient setting of a rehabilitation clinic. The patient education programme was developed in line with an existing inpatient programme, taking into account the criteria for evaluating such training programmes. Several adjustments at the process, structure and content levels had to be made to meet the conditions of the outpatient setting. Since May 2008, 17 training courses have taken place in various outpatient and acute inpatient settings, and a total of 156 women with diagnosed endometriosis have participated. The problems and obstacles that emerged affected the process, structure and content of the training programme alike. On the structural level, problems occurred especially with the availability of rooms, technical equipment and trainers, leading to significant time pressures. The main problem on the procedural level was the recruitment of participants, since--in contrast to the inpatient setting and to disease management programmes--no assignment by physicians or insurers takes place. Furthermore, gainful employment of the participants and the resulting shift of the training beyond usual working and opening hours are important barriers to implementation. The unavailability of trainers in these settings requires creative solutions. Regarding the contents of the training, it has to be taken into account that--unlike in the inpatient setting--no aftercare intervention and no individual psychological consultation are possible. The training programme has to be designed in such a way that all problems that have occurred can be dealt with appropriately. In summary, the permanent implementation of an outpatient training programme is possible but is more time-consuming than inpatient training due to unfavourable conditions concerning recruitment, organization and procedure. It seems that "soft" factors such as motivation, integration into the clinic concept, well-defined acceptance of responsibility and experience in dealing with the disease and with patient groups are the critical success factors. Until now, coverage of the costs by the health insurance funds has not been achieved--except for disease management programmes--so there is still a need for action here. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Toward a Practical Model of Cognitive/Information Task Analysis and Schema Acquisition for Complex Problem-Solving Situations.

    ERIC Educational Resources Information Center

    Braune, Rolf; Foshay, Wellesley R.

    1983-01-01

    The proposed three-step strategy for research on human information processing--concept hierarchy analysis, analysis of example sets to teach relations among concepts, and analysis of problem sets to build a progressively larger schema for the problem space--may lead to practical procedures for instructional design and task analysis. Sixty-four…

  20. Encrypted Objects and Decryption Processes: Problem-Solving with Functions in a Learning Environment Based on Cryptography

    ERIC Educational Resources Information Center

    White, Tobin

    2009-01-01

    This paper introduces an applied problem-solving task, set in the context of cryptography and embedded in a network of computer-based tools. This designed learning environment engaged students in a series of collaborative problem-solving activities intended to introduce the topic of functions through a set of linked representations. In a…

  1. Association of Nondisease-Specific Problems with Mortality, Long-Term Care, and Functional Impairment among Older Adults Who Require Skilled Nursing Care after Dialysis Initiation

    PubMed Central

    Plantinga, Laura; Hall, Rasheeda K.; Mirk, Anna; Zhang, Rebecca; Kutner, Nancy

    2016-01-01

    Background and objectives: The majority of older adults who initiate dialysis do so during a hospitalization, and these patients may require post-acute skilled nursing facility (SNF) care. For these patients, a focus on nondisease-specific problems, including cognitive impairment, depressive symptoms, exhaustion, falls, impaired mobility, and polypharmacy, may be more relevant to outcomes than the traditional disease-oriented approach. However, the association of the burden of nondisease-specific problems with mortality, transition to long-term care (LTC), and functional impairment among older adults receiving SNF care after dialysis initiation has not been studied.

    Design, setting, participants, & measurements: We identified 40,615 Medicare beneficiaries ≥65 years old who received SNF care after dialysis initiation between 2000 and 2006 by linking renal disease registry data with the Minimum Data Set. Nondisease-specific problems were ascertained from the Minimum Data Set. We defined LTC as ≥100 SNF days and functional impairment as dependence in all four essential activities of daily living at SNF discharge. Associations of the number of nondisease-specific problems (≤1, 2, 3, and 4–6) with 6-month mortality, LTC, and functional impairment were examined.

    Results: Overall, 39.2% of patients who received SNF care after dialysis initiation died within 6 months. Compared with those with ≤1 nondisease-specific problems, multivariable adjusted hazard ratios (95% confidence interval) for mortality were 1.26 (1.19 to 1.32), 1.40 (1.33 to 1.48), and 1.66 (1.57 to 1.76) for 2, 3, and 4–6 nondisease-specific problems, respectively. Among those who survived, 37.1% required LTC; of those remaining who did not require LTC, 74.7% had functional impairment. A higher likelihood of transition to LTC (among those who survived 6 months) and functional impairment (among those who survived and did not require LTC) was seen with a higher number of problems.

    Conclusions: Identifying nondisease-specific problems may help patients and families anticipate LTC needs and functional impairment after dialysis initiation. PMID:27733436

  2. Temporal Constraint Reasoning With Preferences

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca

    2001-01-01

    A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.
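
    The shortest-path machinery that this framework generalizes can be illustrated on a plain Simple Temporal Problem: binary difference constraints form a distance graph, and a single-source shortest-path pass decides consistency and yields the latest consistent times. The sketch below omits the preference layer entirely; the event names and toy bounds are illustrative, not taken from the paper.

```python
# Minimal sketch of the shortest-path machinery the paper generalizes:
# a Simple Temporal Problem (without preferences) encoded as a distance
# graph and checked with Bellman-Ford.

import math

def bellman_ford(num_events, edges, source=0):
    """edges holds (u, v, w) meaning t_v - t_u <= w. The shortest
    distance from the source to i bounds t_i - t_source from above,
    so dist gives the latest consistent time of each event; a
    negative cycle means the constraint set is inconsistent."""
    dist = [math.inf] * num_events
    dist[source] = 0.0
    for _ in range(num_events - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    if any(dist[u] + w < dist[v] for u, v, w in edges):
        return None  # inconsistent: no schedule satisfies all constraints
    return dist

# Events 0 (reference), 1, 2: event 1 occurs 5..10 after event 0,
# and event 2 occurs 3..7 after event 1.
edges = [(0, 1, 10), (1, 0, -5), (1, 2, 7), (2, 1, -3)]
print(bellman_ford(3, edges))  # [0.0, 10.0, 17.0]: latest consistent times
```

    Adding preferences, in the paper's sense, attaches a preference function to each bound and asks for a solution that is best under a global combination of the local preferences; the tractable cases described in the abstract restrict the shape of those functions so that a similar relaxation loop still suffices.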

  3. Do different types of school mathematics development depend on different constellations of numerical versus general cognitive abilities?

    PubMed

    Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher

    2010-11-01

    The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.

  4. Domestic Violence during Pregnancy in India

    ERIC Educational Resources Information Center

    Mahapatro, Meerambika; Gupta, R. N.; Gupta, Vinay; Kundu, A. S.

    2011-01-01

    Domestic violence can result in many negative consequences for women's health and well-being. Studies on domestic violence illustrate that abused women in various settings had increased health problems such as injury, chronic pain, gastrointestinal and gynecological signs including sexually transmitted diseases, depression, and…

  5. Efficient constraint handling in electromagnetism-like algorithm for traveling salesman problem with time windows.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which the boundary constraints of real-coded particles are tied to the corresponding time windows of customers and combined with a penalty approach to eliminate infeasibilities arising from time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing them to those of state-of-the-art metaheuristics on several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.
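
    The penalty side of such constraint handling can be sketched independently of the EMA itself: a candidate tour is scored by its travel cost plus a weighted measure of time-window violations, so infeasible tours are driven out of the search. The function below is a minimal illustration on invented data, not the authors' implementation.

```python
# Minimal sketch of penalty-based fitness for TSPTW (illustrative only;
# not the authors' EMA code). A tour is a permutation of customers;
# arriving early means waiting, arriving late incurs a penalty.

def tour_fitness(tour, travel, windows, penalty_weight=1000.0):
    """travel[i][j] = travel time from i to j; windows[i] = (earliest, latest).
    Returns travel cost plus a weighted penalty for lateness."""
    time, cost, penalty = 0.0, 0.0, 0.0
    pos = 0  # start at the depot, index 0
    for nxt in tour:
        cost += travel[pos][nxt]
        time += travel[pos][nxt]
        earliest, latest = windows[nxt]
        if time < earliest:
            time = earliest           # wait for the window to open
        elif time > latest:
            penalty += time - latest  # infeasible: penalize lateness
        pos = nxt
    return cost + penalty_weight * penalty

travel = [[0, 4, 6], [4, 0, 3], [6, 3, 0]]
windows = [(0, 99), (5, 8), (9, 12)]
print(tour_fitness([1, 2], travel, windows))  # feasible tour: 7.0
```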

  7. Grout formulation for disposal of low-level and hazardous waste streams containing fluoride

    DOEpatents

    McDaniel, E.W.; Sams, T.L.; Tallent, O.K.

    1987-06-02

    A composition and related process for disposal of hazardous waste streams containing fluoride in cement-based materials is disclosed. The presence of fluoride in waste materials acts as a set retarder and, as a result, prevents cement-based grouts from setting. This problem is overcome by the present invention, wherein calcium hydroxide is incorporated into the dry-solid portion of the grout mix. The calcium hydroxide renders the fluoride insoluble, allowing the grout to set up and immobilize all hazardous constituents of concern. 4 tabs.

  8. A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.

    PubMed

    Hart, Emma; Sim, Kevin

    2016-01-01

    We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
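
    The primitive that NELLI-GP composes, a dispatching rule, can be sketched on its own: a list scheduler repeatedly picks, among each job's next unscheduled operation, the one a rule prefers. The sketch below uses a single shortest-processing-time rule on invented toy data; NELLI-GP instead evolves sequences of such rules and assigns each evolved heuristic to the subset of instances it handles best.

```python
# Illustrative sketch (not NELLI-GP itself): list scheduling for the
# JSSP with one dispatching rule. Each job is a sequence of
# (machine, duration) operations.

def schedule(jobs):
    next_op = [0] * len(jobs)        # index of each job's next operation
    job_ready = [0.0] * len(jobs)    # time each job's previous op finishes
    machine_ready = {}               # time each machine becomes free
    makespan = 0.0
    while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
        ready = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        j = min(ready, key=lambda i: jobs[i][next_op[i]][1])  # SPT rule
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0.0))
        job_ready[j] = start + duration
        machine_ready[machine] = start + duration
        makespan = max(makespan, start + duration)
        next_op[j] += 1
    return makespan

jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]  # two jobs, two machines
print(schedule(jobs))  # a feasible (not necessarily optimal) makespan: 7.0
```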

  9. On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs

    NASA Astrophysics Data System (ADS)

    Jiang, Minghui

    We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time $\max\{t^{O(k)}, 2^{O(k \log k)}\} \cdot \mathrm{poly}(n)$, where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.

  10. Quantum annealing versus classical machine learning applied to a simplified computational biology problem

    NASA Astrophysics Data System (ADS)

    Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.

    2018-03-01

    Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
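
    One of the classical baselines named above, LASSO, is straightforward to sketch for the ranking task: one-hot encode short sequences and regress affinity, then rank held-out sequences by predicted score. The tiny data set and encoding below are invented for illustration and are far cruder than the study's actual features.

```python
# Sketch of one classical baseline named in the abstract: LASSO on
# one-hot-encoded DNA sequences. Data and encoding are invented.

import numpy as np
from sklearn.linear_model import Lasso

def one_hot(seq):
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, table[base]] = 1.0
    return x.ravel()

seqs = ["ACGT", "AAGT", "CCGT", "TTGA", "GGCA", "ATGC"]
affinity = np.array([0.9, 0.8, 0.4, 0.2, 0.3, 0.7])  # made-up scores

X = np.stack([one_hot(s) for s in seqs])
model = Lasso(alpha=0.01).fit(X, affinity)

# Rank held-out sequences by predicted affinity, as in the ranking task.
test = ["ACGA", "GTCA"]
scores = model.predict(np.stack([one_hot(s) for s in test]))
print(sorted(zip(test, scores), key=lambda p: -p[1]))
```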

  11. Topology optimization in acoustics and elasto-acoustics via a level-set method

    NASA Astrophysics Data System (ADS)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary condition optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three space dimensions.
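
    The core level-set mechanics referred to here can be sketched in a few lines: the structural boundary is the zero level set of a function phi, and moving the boundary means evolving phi under a Hamilton-Jacobi equation. The sketch below advects a circle outward at unit normal speed with the standard Osher-Sethian upwind update; it is a generic illustration, not the paper's optimization loop, in which the velocity comes from a shape derivative.

```python
# Generic level-set sketch (not the paper's optimization loop): a circle,
# the zero level set of a signed distance function phi, is advected
# outward at unit speed by an upwind discretization of
#   phi_t + V * |grad phi| = 0.

import numpy as np

n, h, V, dt = 101, 0.01, 1.0, 0.004     # grid size, spacing, speed, time step
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.2  # circle of radius 0.2

for _ in range(25):                      # total displacement: V * 25 * dt = 0.1
    dxm = (phi - np.roll(phi, 1, axis=0)) / h    # backward difference in x
    dxp = (np.roll(phi, -1, axis=0) - phi) / h   # forward difference in x
    dym = (phi - np.roll(phi, 1, axis=1)) / h
    dyp = (np.roll(phi, -1, axis=1) - phi) / h
    # Godunov upwind gradient for outward motion (V > 0); np.roll's
    # periodic wrap only affects the domain edges, far from the front.
    grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                   np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
    phi -= dt * V * grad

row = phi[n // 2, n // 2:]               # phi along the ray y = 0.5, x >= 0.5
k = int(np.argmax(row > 0))              # first sample outside the front
r = ((k - 1) + (0.0 - row[k - 1]) / (row[k] - row[k - 1])) * h
print("front radius ~", r)               # ~0.3: the zero level set moved out by 0.1
```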

  12. Development of the Austrian Nursing Minimum Data Set (NMDS-AT): the third Delphi Round, a quantitative online survey.

    PubMed

    Ranegger, Renate; Hackl, Werner O; Ammenwerth, Elske

    2015-01-01

    A Nursing Minimum Data Set (NMDS) aims at systematically describing nursing care in terms of patient problems, nursing activities, and patient outcomes. In an earlier Delphi study, 56 data elements were proposed for inclusion in an Austrian Nursing Minimum Data Set (NMDS-AT). The aim of this third round, conducted as an online Delphi-based survey with 88 experts, was to identify the most important data elements on this list and appropriate coding systems. 43 data elements were rated as relevant for an NMDS-AT (strong agreement of more than half of the experts): nine data elements concerning the institution, patient demographics, and medical condition; 18 concerning patient problems, expressed as nursing diagnoses; seven concerning nursing outcomes; and nine concerning nursing interventions. As coding systems, national classification systems were proposed alongside ICNP, NNN, and nursing-sensitive indicators. The resulting proposal for an NMDS-AT will now be tested with routine data.
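
    The element groups listed above map naturally onto a record structure. Purely as a hypothetical illustration, with every field name invented rather than taken from the study, an NMDS-AT record might be organized as follows.

```python
# Hypothetical sketch of an NMDS-AT record mirroring the element groups
# named in the abstract; all field names and the example entry are
# invented for illustration, not the study's actual data elements.

from dataclasses import dataclass, field

@dataclass
class NMDSATRecord:
    institution_and_patient: dict = field(default_factory=dict)  # 9 elements: institution, demographics, medical condition
    nursing_diagnoses: list = field(default_factory=list)        # 18 elements: patient problems
    nursing_outcomes: list = field(default_factory=list)         # 7 elements
    nursing_interventions: list = field(default_factory=list)    # 9 elements

record = NMDSATRecord()
record.nursing_diagnoses.append({"system": "ICNP", "code": "<ICNP code>"})
print(record)
```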

  13. The impact on social relationships of moving from congregated settings to personalized accommodation.

    PubMed

    McConkey, Roy; Bunting, Brendan; Keogh, Fiona; Garcia Iriarte, Edurne

    2017-01-01

    A natural experiment contrasted the social relationships of people with intellectual disabilities (n = 110) before and after they moved from congregated settings to either personalized accommodation or group homes. Contrasts could also be drawn with individuals who had enduring mental health problems (n = 46) and who experienced similar moves. Face-to-face interviews were conducted in each person's residence on two occasions approximately 24 months apart. Multivariate statistical analyses were used to determine significant effects. Greater proportions of people living in personalized settings scored higher on the five chosen indicators of social relationships than did persons living in grouped accommodation. However, multivariate statistical analyses identified that only one in five persons increased their social relationships as a result of changes in their accommodation, particularly persons with an intellectual disability and high support needs. These findings reinforce the extent of social isolation experienced by people with disabilities and mental health problems, which changes in their accommodation only partially counter.

  14. Boundary point corrections for variable radius plots - simulation results

    Treesearch

    Margaret Penner; Sam Otukol

    2000-01-01

    The boundary plot problem is encountered when a forest inventory plot includes two or more forest conditions. Depending on the correction method used, the resulting estimates can be biased. The various correction alternatives are reviewed. No correction, area correction, half sweep, and toss-back methods are evaluated using simulation on an actual data set. Based on...

  15. A Crime Analysis Decision Support System for Crime Report Classification and Visualization

    ERIC Educational Resources Information Center

    Ku, Chih-Hao

    2012-01-01

    Today's Internet-based crime reporting systems make timely and anonymous crime reporting possible. However, these reports also result in a rapidly growing set of unstructured text files. Complicating the problem, the information has not been filtered or guided as it would be in a detective-led interview, resulting in much irrelevant information. To…
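
    The classification step such a system needs can be sketched with standard tools: TF-IDF features feeding a linear classifier that routes free-text reports to categories. The tiny corpus and labels below are invented for illustration; the system's actual pipeline is not shown here.

```python
# Hedged sketch of the kind of crime-report classification the abstract
# describes (invented data, not the system's pipeline): TF-IDF features
# plus a linear classifier route free-text reports to categories.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "someone broke the window and took the laptop",
    "my car was scratched and the mirror broken overnight",
    "wallet stolen from my bag on the bus",
    "graffiti sprayed on the school wall",
]
labels = ["burglary", "vandalism", "theft", "vandalism"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reports, labels)
print(clf.predict(["they spray painted the fence"]))  # likely 'vandalism'
```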

  16. A simple level set method for solving Stefan problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Merriman, B.; Osher, S.

    1997-07-15

    This paper discusses an implicit finite difference scheme for solving a heat equation and a simple level set method for capturing the interface between the solid and liquid phases, which together are used to solve Stefan problems.
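
    The first ingredient, an implicit finite difference scheme for the heat equation, can be sketched in one dimension: backward Euler turns each time step into a linear solve and is unconditionally stable. The sketch below omits the level-set interface capturing and the Stefan condition; all parameters are illustrative.

```python
# Minimal sketch of an implicit (backward Euler) finite-difference step
# for the 1D heat equation u_t = alpha * u_xx, one ingredient of a
# Stefan solver; interface capturing is omitted.

import numpy as np

n, alpha, dx, dt = 51, 1.0, 1.0 / 50, 1e-3
r = alpha * dt / dx**2

# Backward Euler: (I + r*A_lap) u_new = u_old, with the standard
# three-point Laplacian stencil and fixed (Dirichlet) boundary values.
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
A[0, 0] = A[n - 1, n - 1] = 1.0  # boundary rows: u keeps its boundary value

u = np.zeros(n)
u[0] = 1.0  # hot boundary on the left, cold elsewhere
for _ in range(100):
    u = np.linalg.solve(A, u)   # unconditionally stable implicit step

print(u[:5])  # heat has diffused inward from the left boundary
```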

  17. Human-Centered Technologies and Procedures for Future Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Smith, Philip; Woods, David; McCoy, Elaine; Billings, Charles; Sarter, Nadine; Denning, Rebecca; Dekker, Sidney

    1997-01-01

    The use of various methodologies to predict the impact of future Air Traffic Management (ATM) concepts and technologies is explored. The emphasis has been on the importance of modeling coordination and cooperation among multiple agents within this system, and on understanding how the interactions among these agents will be influenced as new roles, responsibilities, procedures and technologies are introduced. To accomplish this, we have been collecting data on performance under the current air traffic management system, identifying critical problem areas and looking for examples suggestive of general approaches for solving such problems. Using the results of these field studies, we have developed a set of concrete scenarios centered around future designs, and have studied performance in these scenarios with a set of 40 controllers, dispatchers, pilots and traffic managers.

  18. Solving Large Problems Quickly: Progress in 2001-2003

    NASA Technical Reports Server (NTRS)

    Mowry, Todd C.; Colohan, Christopher B.; Brown, Angela Demke; Steffan, J. Gregory; Zhai, Antonia

    2004-01-01

    This document describes the progress we have made and the lessons we have learned in 2001 through 2003 under the NASA grant entitled "Solving Important Problems Faster". The long-term goal of this research is to accelerate large, irregular scientific applications which have enormous data sets and which are difficult to parallelize. To accomplish this goal, we are exploring two complementary techniques: (i) using compiler-inserted prefetching to automatically hide the I/O latency of accessing these large data sets from disk; and (ii) using thread-level data speculation to enable the optimistic parallelization of applications despite uncertainty as to whether data dependences exist between the resulting threads which would normally make them unsafe to execute in parallel. Overall, we made significant progress in 2001 through 2003, and the project has gone well.

  19. Data Quality for Situational Awareness during Mass-Casualty Events

    PubMed Central

    Demchak, Barry; Griswold, William G.; Lenert, Leslie A.

    2007-01-01

    Incident Command systems often achieve situational awareness through manual paper-tracking systems. Such systems often produce high latencies and incomplete data, resulting in inefficient and ineffective resource deployment. WIISARD (Wireless Internet Information System for Medical Response in Disasters) collects much more data than a paper-based system, dramatically reducing latency while increasing the kinds and quality of information available to incident commanders. Yet, the introduction of IT into a disaster setting is not problem-free. Notably, system component failures can delay the delivery of data. The type and extent of a failure can have varying effects on the usefulness of information displays. We describe a small, coherent set of customizable information overlays to address this problem, and we discuss reactions to these displays by medical commanders. PMID:18693821

  20. Non-Gaussianity and Excursion Set Theory: Halo Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adshead, Peter; Baxter, Eric J.; Dodelson, Scott

    2012-09-01

    We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recapture the familiar result that the bias scales as $k^{-2}$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.
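
    For context, the large-scale $k^{-2}$ scaling for local-type non-Gaussianity is usually quoted in the form below. This is the standard literature expression (e.g., Dalal et al. 2008), not a formula extracted from this record:

```latex
% Standard local-type scale-dependent halo bias (literature convention):
\Delta b(k) \;\simeq\; 3 f_{\mathrm{NL}}\,(b_1 - 1)\,\delta_c\,
    \frac{\Omega_m H_0^2}{c^2\, k^2\, T(k)\, D(z)}
% b_1: Gaussian linear bias;  delta_c ~ 1.686: spherical collapse threshold;
% T(k): transfer function;    D(z): linear growth factor.
% On large scales T(k) -> 1, giving the k^{-2} behavior noted in the abstract.
```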
