Sample records for solving large-scale problems

  1. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    PubMed

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  2. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
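
    The flavor of such a model is easy to sketch. The toy formulation below is illustrative only: the grid size, fixed costs, and adjacency coverage rule are invented here, and the open-source PuLP modeler is assumed; the paper's actual ILP and decomposition heuristic are far larger.

    ```python
    # Hedged toy sketch of a grid-based fixed-cost location ILP (not the
    # authors' model): open sites on a grid so every cell is covered by an
    # adjacent open cell, minimizing total fixed opening cost.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    n = 5  # hypothetical 5x5 grid
    cells = [(i, j) for i in range(n) for j in range(n)]
    fixed_cost = {c: 10 + (c[0] + c[1]) % 3 for c in cells}  # invented costs

    def neighbors(c):
        i, j = c
        return [(a, b) for (a, b) in cells if abs(a - i) <= 1 and abs(b - j) <= 1]

    prob = LpProblem("grid_location", LpMinimize)
    open_ = {c: LpVariable(f"open_{c[0]}_{c[1]}", cat=LpBinary) for c in cells}
    prob += lpSum(fixed_cost[c] * open_[c] for c in cells)   # fixed-cost objective
    for c in cells:                                          # coverage constraints
        prob += lpSum(open_[d] for d in neighbors(c)) >= 1
    prob.solve()
    print("open sites:", [c for c in cells if open_[c].value() == 1])
    ```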

  3. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, and they can be applied to real-life problems where conventional techniques fail. The Grey Wolf Optimizer is one such technique, and it has gained popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on five common scalable benchmark problems from the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on Rosenbrock, which is a unimodal function.
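
    For orientation, here is a compact sketch of the standard Grey Wolf Optimizer update (following the commonly cited Mirjalili et al. formulation) applied to the Sphere function; the pack size, iteration budget, and bounds are arbitrary choices for illustration.

    ```python
    # Sketch of the standard Grey Wolf Optimizer on the Sphere function.
    import numpy as np

    def gwo(f, dim=50, wolves=30, iters=500, lb=-100.0, ub=100.0, seed=0):
        rng = np.random.default_rng(seed)
        X = rng.uniform(lb, ub, (wolves, dim))
        for t in range(iters):
            fit = np.apply_along_axis(f, 1, X)
            alpha, beta, delta = X[np.argsort(fit)[:3]]   # three best wolves
            a = 2.0 * (1.0 - t / iters)                   # decreases 2 -> 0
            pull = np.zeros_like(X)
            for lead in (alpha, beta, delta):
                r1, r2 = rng.random(X.shape), rng.random(X.shape)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                pull += lead - A * np.abs(C * lead - X)   # encircling update
            X = np.clip(pull / 3.0, lb, ub)               # average of three pulls
        fit = np.apply_along_axis(f, 1, X)
        return X[np.argmin(fit)]

    sphere = lambda x: float(np.sum(x * x))
    best = gwo(sphere, dim=50)
    print("best sphere value found:", sphere(best))
    ```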

  4. Finite difference and Runge-Kutta methods for solving vibration problems

    NASA Astrophysics Data System (ADS)

    Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi

    2017-11-01

    The vibration of a multi-storey building can be modelled as a system of second order ordinary differential equations. If the number of floors of the building is large, then the result is a large scale system of second order ordinary differential equations. Such a large scale system is difficult to solve, and even when it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
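
    As a hedged illustration of the comparison, the sketch below integrates a toy undamped 3-storey shear-building model y' = Ay with the Euler and Heun methods and checks both against a matrix-exponential reference; the masses and stiffnesses are invented for demonstration.

    ```python
    # Euler vs. Heun on a small shear-building model M x'' + K x = 0,
    # rewritten as the first-order system y' = A y (toy data).
    import numpy as np
    from scipy.linalg import expm

    n = 3                              # three floors (toy size)
    m, k = 1.0, 100.0                  # floor mass and storey stiffness
    K = k * (np.diag([2.0] * (n - 1) + [1.0])
             - np.eye(n, k=1) - np.eye(n, k=-1))
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-K / m, np.zeros((n, n))]])

    def euler(y, h, steps):
        for _ in range(steps):
            y = y + h * (A @ y)                    # first-order accurate
        return y

    def heun(y, h, steps):
        for _ in range(steps):
            f0 = A @ y
            y_pred = y + h * f0                    # Euler predictor
            y = y + 0.5 * h * (f0 + A @ y_pred)    # trapezoidal corrector
        return y

    y0 = np.concatenate([np.ones(n), np.zeros(n)])  # unit initial displacements
    h, T = 1e-3, 1.0
    ref = expm(A * T) @ y0                          # matrix-exponential reference
    for name, method in (("Euler", euler), ("Heun ", heun)):
        err = np.linalg.norm(method(y0, h, int(T / h)) - ref)
        print(name, "global error:", err)
    ```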

  5. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies. Selection and crossover operators are applied to define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.

  6. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques, the resulting algorithm is well suited for large-scale problems. Furthermore, the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851

  7. A novel heuristic algorithm for capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre

    2017-09-01

    The vehicle routing problem with capacity constraints is considered in this paper. It is quite difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity for large-scale problems. Consequently, new heuristic and metaheuristic approaches have been developed to solve this problem. In this paper, we construct a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS), with several specifically designed operators and features, to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm is illustrated on benchmark problems. The algorithm provides better performance on large-scale instances and gains an advantage in terms of CPU time. In addition, we solve a real-life CVRP using the proposed algorithm and find encouraging results in comparison with the company's current practice.

  8. Measuring health-related problem solving among African Americans with multiple chronic conditions: application of Rasch analysis.

    PubMed

    Fitzpatrick, Stephanie L; Hill-Briggs, Felicia

    2015-10-01

    Identification of patients with poor chronic disease self-management skills can facilitate treatment planning, determine the effectiveness of interventions, and reduce disease complications. This paper describes the use of a Rasch model, the Rating Scale Model, to examine the psychometric properties of the 50-item Health Problem-Solving Scale (HPSS) among 320 African American patients at high risk for cardiovascular disease. Items on the positive/effective HPSS subscales targeted patients at low, moderate, and high levels of positive/effective problem solving, whereas items on the negative/ineffective problem solving subscales mostly targeted those at moderate or high levels of ineffective problem solving. Validity was examined by correlating factor scores on the measure with clinical and behavioral measures. Items on the HPSS show promise for assessing health-related problem solving among high-risk patients. However, further revisions of the scale are needed to increase its usability and validity with large, diverse patient populations.

  9. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

    The organization of high technology and engineering problem solving has given rise to an emerging concept: reasoning principles for integrating traditional engineering problem solving with systems theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making) are not emphasized but are considered.

  10. Learning Analysis of K-12 Students' Online Problem Solving: A Three-Stage Assessment Approach

    ERIC Educational Resources Information Center

    Hu, Yiling; Wu, Bian; Gu, Xiaoqing

    2017-01-01

    Problem solving is considered a fundamental human skill. However, large-scale assessment of problem solving in K-12 education remains a challenging task. Researchers have argued for the development of an enhanced assessment approach through joint effort from multiple disciplines. In this study, a three-stage approach based on an evidence-centered…

  11. Solving LP Relaxations of Large-Scale Precedence Constrained Problems

    NASA Astrophysics Data System (ADS)

    Bienstock, Daniel; Zuckerberg, Mark

    We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.

  12. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule, which allows an appropriate step size to be found quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
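
    A minimal sketch of a GIST-style iteration is shown below, using the convex l1 penalty, whose proximal operator is the familiar soft threshold; the paper's non-convex penalties would simply swap in their own closed-form proximal steps. The data, penalty weight, and line-search constant are all illustrative.

    ```python
    # GIST-style proximal loop with BB step and monotone line search (l1 prox
    # used as a stand-in for the paper's non-convex proximal steps).
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 500))
    x_true = np.zeros(500); x_true[:10] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    lam = 0.1

    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad = lambda x: A.T @ (A @ x - b)
    prox = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # soft threshold
    F = lambda x: f(x) + lam * np.abs(x).sum()                       # composite objective

    x, x_old, g_old, t = np.zeros(500), None, None, 1.0
    for it in range(300):
        g = grad(x)
        if x_old is not None:                    # Barzilai-Borwein step size
            s, y = x - x_old, g - g_old
            sy = float(s @ y)
            t = float(s @ s) / sy if sy > 1e-12 else 1.0
        while True:                              # monotone line search
            x_new = prox(x - t * g, lam * t)
            if F(x_new) <= F(x) - (1e-4 / (2 * t)) * np.sum((x_new - x) ** 2):
                break
            t *= 0.5
        x_old, g_old, x = x, g, x_new
    print("nonzeros recovered:", int(np.sum(np.abs(x) > 1e-6)))
    ```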

  13. Solving Fuzzy Optimization Problem Using Hybrid Ls-Sa Method

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian

    2011-06-01

    Fuzzy optimization has been one of the most prominent topics in the broad area of computational intelligence. It is especially relevant in the field of fuzzy non-linear programming, and its applications and practical realizations can be seen in many real-world problems. In this paper a large-scale non-linear fuzzy programming problem is solved using a hybrid of the optimization techniques Line Search (LS), Simulated Annealing (SA) and Pattern Search (PS). An industrial production planning problem with a cubic objective function, 8 decision variables and 29 constraints has been solved successfully using the LS-SA-PS hybrid optimization technique. The computational results for the objective function with respect to the vagueness factor and the level of satisfaction are provided in the form of 2D and 3D plots. The outcome is very promising and strongly suggests that the hybrid LS-SA-PS algorithm is efficient and productive in solving large-scale non-linear fuzzy programming problems.

  14. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique for solving large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by a standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. The approach is applicable not only to eigenvalue problems encountered in many-body systems but also in other areas of research that give rise to large-scale eigenvalue problems for matrices that have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.

  15. Differential Relations between Facets of Complex Problem Solving and Students' Immigration Background

    ERIC Educational Resources Information Center

    Sonnleitner, Philipp; Brunner, Martin; Keller, Ulrich; Martin, Romain

    2014-01-01

    Whereas the assessment of complex problem solving (CPS) has received increasing attention in the context of international large-scale assessments, its fairness in regard to students' cultural background has gone largely unexplored. On the basis of a student sample of 9th-graders (N = 299), including a representative number of immigrant students (N…

  16. A family of conjugate gradient methods for large-scale nonlinear equations.

    PubMed

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method requires low storage, and the subproblem can be solved easily. Compared with existing solution methods for the problem, its global convergence is established without the restriction of Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
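
    The paper's exact family is not reproduced here, but the projection mechanism common to such methods can be sketched as follows: a hyperplane-projection scheme in the spirit of Solodov and Svaiter, with a plain steepest-descent-like direction standing in for the conjugate gradient direction, applied to a toy monotone mapping.

    ```python
    # Hyperplane-projection sketch for monotone equations F(x) = 0.
    import numpy as np

    def projection_solve(F, x, tol=1e-8, iters=500, sigma=1e-4, rho=0.5):
        for _ in range(iters):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            d = -Fx                  # a CG variant would add beta * d_prev here
            alpha = 1.0
            while -(F(x + alpha * d) @ d) < sigma * alpha * (d @ d):
                alpha *= rho         # backtrack until F(z)^T d is negative enough
            z = x + alpha * d
            Fz = F(z)
            if np.linalg.norm(Fz) < tol:
                return z
            x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz   # hyperplane projection
        return x

    F = lambda x: x + np.sin(x)      # monotone toy mapping with unique zero at 0
    x = projection_solve(F, np.full(100, 3.0))
    print("residual norm:", np.linalg.norm(F(x)))
    ```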

  17. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust in both single and consecutive impact force reconstruction.
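
    PDIPM itself is too long to sketch here, but the l1-penalized deconvolution model it solves can be illustrated with a lightweight stand-in solver, ISTA (a plain iterative shrinkage-thresholding method, not the authors' algorithm). The impulse response and impact locations below are toy data.

    ```python
    # l1 sparse deconvolution of a toy impact signal, solved with ISTA.
    import numpy as np
    from scipy.linalg import toeplitz

    n = 400
    t = np.arange(n) * 1e-3
    h = np.exp(-t / 0.02) * np.sin(2 * np.pi * 80 * t)    # toy impulse response
    H = toeplitz(h, np.zeros(n))                          # convolution matrix
    f_true = np.zeros(n); f_true[[60, 200]] = [1.0, 0.6]  # two sparse impacts
    y = H @ f_true + 0.005 * np.random.default_rng(0).standard_normal(n)

    lam = 0.05
    L = np.linalg.norm(H, 2) ** 2         # Lipschitz constant of the gradient
    f = np.zeros(n)
    for _ in range(500):                  # ISTA: gradient step + soft threshold
        v = f - H.T @ (H @ f - y) / L
        f = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    print("largest recovered samples:", np.argsort(-np.abs(f))[:4])
    ```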

  18. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  19. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  20. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
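
    The multi-right-hand-side setting the patent addresses can be illustrated with a naive baseline: solving min ||AX - B||_F subject to X >= 0 one column at a time with SciPy's NNLS solver. The combinatorial algorithm gains its speed by reorganizing this loop, grouping columns that share the same passive (unconstrained) variable set so each group is solved with a single factorization; the sizes below are arbitrary.

    ```python
    # Naive column-by-column NNLS baseline for many observation vectors.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    A = rng.random((50, 8))
    B = rng.random((50, 1000))            # 1000 observation vectors

    X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
    # Grouping hint: columns of X with identical zero patterns could have been
    # solved together, which is the combinatorial reorganization exploited above.
    print("nonnegative:", bool((X >= 0).all()), "shape:", X.shape)
    ```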

  1. Performance of Extended Local Clustering Organization (LCO) for Large Scale Job-Shop Scheduling Problem (JSP)

    NASA Astrophysics Data System (ADS)

    Konno, Yohko; Suzuki, Keiji

    This paper describes an approach to developing a general-purpose solution algorithm for large-scale problems using “Local Clustering Organization (LCO)” as a new solution method for the Job-shop scheduling problem (JSP). Building on the effective large-scale scheduling performance of LCO reported in earlier studies, we examine whether solving JSP with LCO can stably yield better solutions. To improve solution performance on JSP, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering for this solution structure is proposed as an extended LCO. The extended LCO algorithm improves the scheduling evaluation efficiently by clustering a parallel search that extends over plural machines. Results from applying the extended LCO to problems of various scales show that it minimizes the make-span and delivers stable performance.

  2. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  3. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are often numerous, conventional methods for solving the inverse modeling problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
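
    For orientation, a bare-bones Levenberg-Marquardt iteration on a toy exponential-fitting problem is sketched below; the dense solve (JᵀJ + μI)δ = -Jᵀr inside the loop is exactly the step the authors accelerate by projecting onto, and recycling, a Krylov subspace. The model and data are invented.

    ```python
    # Bare-bones Levenberg-Marquardt on a toy exponential model.
    import numpy as np

    t = np.linspace(0, 1, 50)
    data = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(0).standard_normal(50)

    def residual(p):
        return p[0] * np.exp(p[1] * t) - data

    def jacobian(p):
        return np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])

    p, mu = np.array([1.0, -1.0]), 1e-2
    for _ in range(50):
        r, J = residual(p), jacobian(p)
        dp = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)  # damped step
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p, mu = p + dp, mu * 0.5     # accept step, relax damping
        else:
            mu *= 2.0                    # reject step, increase damping
    print("estimated parameters:", p)
    ```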

  4. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  5. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to large numbers of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large scale radiative transfer problem of Kelvin-cell radiation.

  6. Solving large scale traveling salesman problems by chaotic neurodynamics.

    PubMed

    Hasegawa, Mikio; Ikeguchi, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n² for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Last, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
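
    As a conventional reference point for the tabu search that the chaotic dynamics emulate, here is a plain 2-opt tabu search on a small synthetic Euclidean TSP instance; the tenure and iteration counts are arbitrary, and the paper's neural implementation is not reproduced.

    ```python
    # Plain 2-opt tabu search on a random Euclidean TSP instance.
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.random((40, 2))                      # synthetic city coordinates
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

    def tour_len(tour):
        return float(D[tour, np.roll(tour, -1)].sum())

    tour = np.arange(40)
    best_len = tour_len(tour)
    tabu = {}                                      # move -> iteration it expires
    for it in range(300):
        candidates = []
        for i in range(1, 39):
            for j in range(i + 1, 40):
                cand = tour.copy()
                cand[i:j] = cand[i:j][::-1]        # 2-opt segment reversal
                length = tour_len(cand)
                if tabu.get((i, j), -1) > it and length >= best_len:
                    continue                       # tabu move, no aspiration
                candidates.append((length, i, j, cand))
        length, i, j, tour = min(candidates, key=lambda c: c[0])
        tabu[(i, j)] = it + 20                     # tabu tenure of 20 iterations
        best_len = min(best_len, length)
    print("best tour length found:", round(best_len, 3))
    ```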

  7. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.

  8. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.

  9. Results and Implications of a Problem-Solving Treatment Program for Obesity.

    ERIC Educational Resources Information Center

    Mahoney, B. K.; And Others

    Data are from a large scale experimental study which was designed to evaluate a multimethod problem solving approach to obesity. Obese adult volunteers (N=90) were randomly assigned to three groups: maximal treatment, minimal treatment, and no treatment control. In the two treatment groups, subjects were exposed to bibliographic material and…

  10. The Development of Complex Problem Solving in Adolescence: A Latent Growth Curve Analysis

    ERIC Educational Resources Information Center

    Frischkorn, Gidon T.; Greiff, Samuel; Wüstenberg, Sascha

    2014-01-01

    Complex problem solving (CPS) as a cross-curricular competence has recently attracted more attention in educational psychology as indicated by its implementation in international educational large-scale assessments such as the Programme for International Student Assessment. However, research on the development of CPS is scarce, and the few…

  11. Large-Scale Studies on the Transferability of General Problem-Solving Skills and the Pedagogic Potential of Physics

    ERIC Educational Resources Information Center

    Mashood, K. K.; Singh, Vijay A.

    2013-01-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…

  12. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
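
    A miniature version of this design loop is sketched below: a simple GA selects k observation wells maximizing the sum of squared sensitivities. The sensitivity matrix is random here, whereas the study evaluates it with a (POD-reduced) groundwater model; the GA operators are deliberately basic.

    ```python
    # Toy GA for selecting k observation wells under a maximal information criterion.
    import numpy as np

    rng = np.random.default_rng(0)
    S = rng.random((100, 5))        # sensitivities: 100 candidate wells x 5 parameters
    k = 10                          # number of wells to select
    fit = lambda ind: float((S[ind] ** 2).sum())   # sum of squared sensitivities

    def crossover(a, b):            # sample k wells from the parents' union
        return rng.choice(np.union1d(a, b), k, replace=False)

    def mutate(ind):                # swap one selected well for an unselected one
        ind = ind.copy()
        ind[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(100), ind))
        return ind

    pop = [rng.choice(100, k, replace=False) for _ in range(40)]
    for gen in range(200):
        pop.sort(key=fit, reverse=True)
        elite = pop[:20]            # keep the best half
        pop = elite + [mutate(crossover(elite[rng.integers(20)],
                                        elite[rng.integers(20)]))
                       for _ in range(20)]
    print("best design objective:", fit(max(pop, key=fit)))
    ```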

  13. Designs for Operationalizing Collaborative Problem Solving for Automated Assessment

    ERIC Educational Resources Information Center

    Scoular, Claire; Care, Esther; Hesse, Friedrich W.

    2017-01-01

    Collaborative problem solving is a complex skill set that draws on social and cognitive factors. The construct remains in its infancy due to lack of empirical evidence that can be drawn upon for validation. The differences and similarities between two large-scale initiatives that reflect this state of the art, in terms of underlying assumptions…

  14. VET Workers' Problem-Solving Skills in Technology-Rich Environments: European Approach

    ERIC Educational Resources Information Center

    Hämäläinen, Raija; Cincinnato, Sebastiano; Malin, Antero; De Wever, Bram

    2014-01-01

    The European workplace is challenging VET adults' problem-solving skills in technology-rich environments (TREs). So far, no international large-scale assessment data has been available for VET. The PIAAC data comprise the most comprehensive source of information on adults' skills to date. The present study (N = 50 369) focuses on gaining insight…

  15. Complex Problem Solving in Educational Contexts--Something beyond "g": Concept, Assessment, Measurement Invariance, and Construct Validity

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Molnar, Gyongyver; Fischer, Andreas; Funke, Joachim; Csapo, Beno

    2013-01-01

    Innovative assessments of cross-curricular competencies such as complex problem solving (CPS) have currently received considerable attention in large-scale educational studies. This study investigated the nature of CPS by applying a state-of-the-art approach to assess CPS in high school. We analyzed whether two processes derived from cognitive…

  16. Assessment of Complex Problem Solving: What We Know and What We Don't Know

    ERIC Educational Resources Information Center

    Herde, Christoph Nils; Wüstenberg, Sascha; Greiff, Samuel

    2016-01-01

    Complex Problem Solving (CPS) is seen as a cross-curricular 21st century skill that has attracted interest in large-scale-assessments. In the Programme for International Student Assessment (PISA) 2012, CPS was assessed all over the world to gain information on students' skills to acquire and apply knowledge while dealing with nontransparent…

  17. Cross-borehole flowmeter tests for transient heads in heterogeneous aquifers.

    PubMed

    Le Borgne, Tanguy; Paillet, Frederick; Bour, Olivier; Caudal, Jean-Pierre

    2006-01-01

    Cross-borehole flowmeter tests have been proposed as an efficient method to investigate preferential flowpaths in heterogeneous aquifers, which is a major task in the characterization of fractured aquifers. Cross-borehole flowmeter tests are based on the idea that changing the pumping conditions in a given aquifer will modify the hydraulic head distribution in large-scale flowpaths, producing measurable changes in the vertical flow profiles in observation boreholes. However, inversion of flow measurements to derive flowpath geometry and connectivity and to characterize their hydraulic properties is still a subject of research. In this study, we propose a framework for cross-borehole flowmeter test interpretation that is based on a two-scale conceptual model: discrete fractures at the borehole scale and zones of interconnected fractures at the aquifer scale. We propose that the two problems may be solved independently. The first inverse problem consists of estimating the hydraulic head variations that drive the transient borehole flow observed in the cross-borehole flowmeter experiments. The second inverse problem is related to estimating the geometry and hydraulic properties of large-scale flowpaths in the region between pumping and observation wells that are compatible with the head variations deduced from the first problem. To solve the borehole-scale problem, we treat the transient flow data as a series of quasi-steady flow conditions and solve for the hydraulic head changes in individual fractures required to produce these data. The consistency of the method is verified using field experiments performed in a fractured-rock aquifer.

  18. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    NASA Astrophysics Data System (ADS)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as linear inverse problems. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information about the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated in two different experiments: (1) velocity inversion from (synthetic) seismic data based on the Born approximation, and (2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.

  19. Internet computer coaches for introductory physics problem solving

    NASA Astrophysics Data System (ADS)

    Xu Ryan, Qing

    The ability to solve problems in a variety of contexts is becoming increasingly important in our rapidly changing technological society. Problem solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem-solving skills throughout the educational system, national studies have shown that the majority of students emerge from such courses having made little progress toward developing good problem-solving skills. The Physics Education Research Group at the University of Minnesota has been developing Internet computer coaches to help students become more expert-like problem solvers. During the Fall 2011 and Spring 2013 semesters, the coaches were introduced into large sections (200+ students) of the calculus-based introductory mechanics course at the University of Minnesota. This dissertation will address the research background of the project, including the pedagogical design of the coaches and the assessment of problem solving. The methodological framework for conducting the experiments will be explained. The data collected from the large-scale experimental studies will be discussed from the following aspects: the usage and usability of these coaches; the usefulness perceived by students; and the usefulness measured by final exam and problem-solving rubric. It will also address the implications drawn from this study, including using the data to direct future coach design and difficulties in conducting authentic assessment of problem solving.

  20. Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving

    PubMed Central

    Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.

    2016-01-01

    We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem solving skill. The intervention is based on the Circumplex Model and Social Problem Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. PMID:26936844

  1. Collaborative Problem-Solving Environments; Proceedings for the Workshop CPSEs for Scientific Research, San Diego, California, June 20 to July 1, 1999

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, George

    1999-01-11

    A workshop on collaborative problem-solving environments (CPSEs) was held June 29 through July 1, 1999, in San Diego, California. The workshop was sponsored by the U.S. Department of Energy and the High Performance Network Applications Team of the Large Scale Networking Working Group. The workshop brought together researchers and developers from industry, academia, and government to identify, define, and discuss future directions in collaboration and problem-solving technologies in support of scientific research.

  2. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is quite an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is quite important to USCM. It can be shown by straightforward computation that the dual form of USCM is equivalent to a minimum enclosing ball (MEB) problem, and so the core vector machine (CVM) is introduced and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbor) and SVM (support vector machine) in dealing with the problem of rare spectra mining on small- and medium-scale datasets and on large-scale datasets, respectively.

  3. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing it as the integration of a system of stiff differential equations, utilizing concepts from singular perturbation theory. This paper evaluates the robustness and reliability of such a singular perturbation based SUMT algorithm on two structural optimization problems of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.

  4. Large-scale studies on the transferability of general problem-solving skills and the pedagogic potential of physics

    NASA Astrophysics Data System (ADS)

    Mashood, K. K.; Singh, Vijay A.

    2013-09-01

    Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in highly competitive problem-solving examinations was studied using a massive database. The sample sizes ranged from hundreds to a few hundred thousand. Encouraged by the presence of significant correlations, we interviewed 20 students to explore the pedagogic potential of physics in imparting transferable problem-solving skills. We report strategies and practices relevant to physics employed by these students which foster transfer.

  5. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
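
    Assuming the open-source pyamg package (which implements classical Ruge-Stuben AMG, the algorithm class compared here), the core of such a comparison can be sketched in a few lines; the grid size and tolerance are illustrative, and the paper's MODFLOW integration is of course not reproduced.

    ```python
    # AMG vs. unpreconditioned CG on a 2-D Poisson system (assumes pyamg).
    import numpy as np
    import pyamg
    from scipy.sparse.linalg import cg

    A = pyamg.gallery.poisson((300, 300), format='csr')   # 90,000 unknowns
    b = np.random.default_rng(0).random(A.shape[0])

    ml = pyamg.ruge_stuben_solver(A)                      # build AMG hierarchy
    x_amg = ml.solve(b, tol=1e-8)

    x_cg, info = cg(A, b)                                 # plain CG baseline
    print("AMG residual:", np.linalg.norm(b - A @ x_amg))
    print("CG residual: ", np.linalg.norm(b - A @ x_cg), "(info:", info, ")")
    ```

    AMG's setup cost is repaid by convergence rates that are largely independent of grid size, which is the behavior the CPU-time comparisons in the abstract quantify.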

  6. Inverse problems in the design, modeling and testing of engineering systems

    NASA Technical Reports Server (NTRS)

    Alifanov, Oleg M.

    1991-01-01

    Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.

  7. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, and it offers effective guidance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, so that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. A BA model based on virtual control points (VCPs) was constructed to address the rank deficiency caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie-point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.

  8. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

    Recently, isospin symmetry breaking in the mass 60-70 region has been investigated on the basis of large-scale shell-model calculations, in terms of mirror energy differences (MED), Coulomb energy differences (CED), and triplet energy differences (TED). In the course of these investigations, we encountered a subtle numerical problem for odd-odd N = Z nuclei in large-scale shell-model calculations. Here we focus on how this subtle problem can be solved by the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  9. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
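
    The sketching idea at the heart of RGA can be shown in miniature on an overdetermined least-squares problem: compress the tall system with a random matrix, then invert the small sketched system. The Gaussian sketch below is one simple choice; all sizes are illustrative.

    ```python
    # Randomized sketching of a tall least-squares problem (toy sizes).
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, s = 20_000, 50, 200            # many observations, few parameters
    A = rng.standard_normal((m, n))
    x_true = rng.standard_normal(n)
    b = A @ x_true + 0.01 * rng.standard_normal(m)

    S = rng.standard_normal((s, m)) / np.sqrt(s)   # Gaussian sketching matrix
    x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)  # solve small system

    print("relative error of sketched solution:",
          np.linalg.norm(x_sk - x_true) / np.linalg.norm(x_true))
    ```

    The sketched system has only s rows, so subsequent solver costs scale with the sketch size rather than with the number of observations, which is the scaling behavior the abstract describes.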

  10. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.

  11. Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving.

    PubMed

    Semeniuk, Yulia Yuriyivna; Brown, Roger L; Riesch, Susan K

    2016-07-01

    We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem-solving skill. The intervention is based on the Circumplex Model and Social Problem-Solving Theory. The Circumplex Model posits that families who are balanced, that is characterized by high cohesion and flexibility and open communication, function best. Social Problem-Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large magnitude group effects for selected scales for youth and dyads portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed.

  12. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.

  13. Temperament and problem solving in a population of adolescent guide dogs.

    PubMed

    Bray, Emily E; Sammel, Mary D; Seyfarth, Robert M; Serpell, James A; Cheney, Dorothy L

    2017-09-01

    It is often assumed that measures of temperament within individuals are more correlated to one another than to measures of problem solving. However, the exact relationship between temperament and problem-solving tasks remains unclear because large-scale studies have typically focused on each independently. To explore this relationship, we tested 119 prospective adolescent guide dogs on a battery of 11 temperament and problem-solving tasks. We then summarized the data using both confirmatory factor analysis and exploratory principal components analysis. Results of confirmatory analysis revealed that a priori separation of tests as measuring either temperament or problem solving led to weak results, poor model fit, some construct validity, and no predictive validity. In contrast, results of exploratory analysis were best summarized by principal components that mixed temperament and problem-solving traits. These components had both construct and predictive validity (i.e., association with success in the guide dog training program). We conclude that there is complex interplay between tasks of "temperament" and "problem solving" and that the study of both together will be more informative than approaches that consider either in isolation.

  14. Software environment for implementing engineering applications on MIMD computers

    NASA Technical Reports Server (NTRS)

    Lopez, L. A.; Valimohamed, K. A.; Schiff, S.

    1990-01-01

    In this paper the concept for a software environment for developing engineering application systems for multiprocessor hardware (MIMD) is presented. The philosophy employed is to solve the largest problems possible in a reasonable amount of time, rather than solve existing problems faster. In the proposed environment most of the problems concerning parallel computation and handling of large distributed data spaces are hidden from the application program developer, thereby facilitating the development of large-scale software applications. Applications developed under the environment can be executed on a variety of MIMD hardware; it protects the application software from the effects of a rapidly changing MIMD hardware technology.

  15. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    NASA Astrophysics Data System (ADS)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.

  16. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
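
    The variable-fixing step lends itself to a compact illustration. The following sketch uses made-up samples and a consensus threshold as the only parameter: it clamps the spins on which low-energy samples agree, and the smaller residual problem would then be handed back to whichever sampler is in use. The stand-in solver and all values are invented, not the paper's.

```python
# Illustrative sketch of the sample-persistence idea: run a cheap sampler many
# times, fix variables whose values agree across low-energy samples, and
# re-solve the smaller, less connected residual problem.
import numpy as np

def fix_persistent_variables(samples: np.ndarray, threshold: float = 0.9):
    """samples: (n_samples, n_vars) array of +/-1 spin assignments."""
    mean_spin = samples.mean(axis=0)             # per-variable agreement
    fixed_mask = np.abs(mean_spin) >= threshold  # high-consensus variables
    fixed_values = np.sign(mean_spin)
    return fixed_mask, fixed_values

# Ten hypothetical low-energy samples over 8 spins (mostly +1: strong consensus)
rng = np.random.default_rng(2)
samples = np.where(rng.random((10, 8)) < 0.95, 1, -1)

mask, values = fix_persistent_variables(samples)
print(mask)    # variables to clamp
print(values)  # values to clamp them to; re-run the solver on the rest
```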

  17. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    PubMed

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  18. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping of large areas abroad or with large volumes of imagery. In this paper, considering the geometric features of optical satellite imagery, and building on RFM least-squares block adjustment and a widely used optimization method for constrained problems called the Alternating Direction Method of Multipliers (ADMM), we propose a GCP-independent block adjustment method for super large-scale domestic high resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-deficiency problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.

  19. Can microbes economically remove sulfur

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, J.L.

    Researchers have reported that refiners who now rely on costly physico-chemical procedures to desulfurize petroleum may soon have an alternative microbial-enzyme-based approach to this process. This new approach is still under development, and a considerable number of chemical engineering problems need to be solved before it is ready for large-scale use. This paper reviews the several research projects dedicated to solving the problems that keep a biotechnology-based alternative from competing with chemical desulfurization.

  20. Adaptive Fuzzy Output-Constrained Fault-Tolerant Control of Nonlinear Stochastic Large-Scale Systems With Actuator Faults.

    PubMed

    Li, Yongming; Ma, Zhiyao; Tong, Shaocheng

    2017-09-01

    The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the nonmeasurable state problem. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.

  1. PetIGA: A framework for high-performance isogeometric analysis

    DOE PAGES

    Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...

    2016-05-25

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.

  2. The Reliability and Construct Validity of Scores on the Attitudes toward Problem Solving Scale

    ERIC Educational Resources Information Center

    Zakaria, Effandi; Haron, Zolkepeli; Daud, Md Yusoff

    2004-01-01

    The Attitudes Toward Problem Solving Scale (ATPSS) has received limited attention concerning its reliability and validity with a Malaysian secondary education population. Developed by Charles, Lester & O'Daffer (1987), the instruments assessed attitudes toward problem solving in areas of Willingness to Engage in Problem Solving Activities,…

  3. Methods for High-Order Multi-Scale and Stochastic Problems Analysis, Algorithms, and Applications

    DTIC Science & Technology

    2016-10-17

    finite volume schemes, discontinuous Galerkin finite element method, and related methods, for solving computational fluid dynamics (CFD) problems and...approximation for finite element methods. (3) The development of methods of simulation and analysis for the study of large scale stochastic systems of...laws, finite element method, Bernstein-Bezier finite elements, weakly interacting particle systems, accelerated Monte Carlo, stochastic networks.

  4. a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.

    2017-12-01

    We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.

  5. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    NASA Astrophysics Data System (ADS)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum (www.gecforum.org), a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive forms. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP.

  6. Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carleton, James Brian; Parks, Michael L.

    Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.

  7. A Kohonen-like decomposition method for the Euclidean traveling salesman problem - KNIES_DECOMPOSE.

    PubMed

    Aras, N; Altinel, I K; Oommen, J

    2003-01-01

    In addition to the classical heuristic algorithms of operations research, there have also been several approaches based on artificial neural networks for solving the traveling salesman problem. Their efficiency, however, decreases as the problem size (number of cities) increases. A technique to reduce the complexity of a large-scale traveling salesman problem (TSP) instance is to decompose or partition it into smaller subproblems. We introduce an all-neural decomposition heuristic that is based on a recent self-organizing map called KNIES, which has been successfully implemented for solving both the Euclidean traveling salesman problem and the Euclidean Hamiltonian path problem. Our solution for the Euclidean TSP proceeds by solving the Euclidean HPP for the subproblems, and then patching these solutions together. No such all-neural solution has ever been reported.
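
    For readers unfamiliar with Kohonen-style TSP heuristics, the toy sketch below evolves an elastic ring of nodes toward the cities. It is only the self-organizing-map core: KNIES itself adds statistics-preserving dispersion steps, and the decomposition heuristic above additionally partitions the instance and patches subtours together, neither of which is shown. All parameters are illustrative.

```python
# Bare-bones Kohonen ring (elastic self-organizing map) for the Euclidean TSP.
import numpy as np

rng = np.random.default_rng(4)
cities = rng.random((30, 2))
n_nodes = 90                                  # ring of neurons, ~3x the cities
nodes = rng.random((n_nodes, 2))

radius, lr = n_nodes // 10, 0.8
for _ in range(4000):
    city = cities[rng.integers(len(cities))]  # present one random city
    winner = np.argmin(np.linalg.norm(nodes - city, axis=1))
    # circular distance of every node to the winner along the ring
    idx = np.arange(n_nodes)
    dist = np.minimum(np.abs(idx - winner), n_nodes - np.abs(idx - winner))
    influence = np.exp(-(dist / max(radius, 1)) ** 2)
    nodes += lr * influence[:, None] * (city - nodes)   # pull neighbourhood
    radius, lr = radius * 0.999, lr * 0.9997            # decay schedules

# read the tour off the ring: order cities by their winning node index
order = np.argsort([np.argmin(np.linalg.norm(nodes - c, axis=1)) for c in cities])
print(order)
```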

  8. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state of the art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
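
    A round-based caricature of the cooperative idea can be written in a few lines: several "threads" improve their own solutions independently and periodically adopt the best one found so far as a new starting point. This is not the actual saCeSS implementation, which runs truly in parallel with asynchronous message passing and scatter search rather than the random local search used here; the objective and all parameters are invented.

```python
# Round-based caricature of cooperative parallel metaheuristics: independent
# search epochs followed by a cooperation step that shares the best solution.
import numpy as np

rng = np.random.default_rng(5)
rosenbrock = lambda x: np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

n_threads, dim, rounds = 4, 10, 50
pop = [rng.normal(size=dim) for _ in range(n_threads)]

def local_search(x, iters=200, scale=0.1):
    best, fbest = x, rosenbrock(x)
    for _ in range(iters):
        cand = best + rng.normal(scale=scale, size=dim)
        if (f := rosenbrock(cand)) < fbest:   # greedy accept of improvements
            best, fbest = cand, f
    return best

for _ in range(rounds):
    pop = [local_search(x) for x in pop]            # independent epochs
    champion = min(pop, key=rosenbrock)             # cooperation step:
    pop = [champion + 0.01 * rng.normal(size=dim)   # restart near the shared best
           for _ in pop]

print(rosenbrock(min(pop, key=rosenbrock)))
```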

  9. THE APPLICATION OF ENGLISH-WORD MORPHOLOGY TO AUTOMATIC INDEXING AND EXTRACTING. ANNUAL SUMMARY REPORT.

    ERIC Educational Resources Information Center

    DOLBY, J.L.; AND OTHERS

    The study is concerned with the linguistic problem involved in text compression--extracting, indexing, and the automatic creation of special-purpose citation dictionaries. In spite of early success in using large-scale computers to automate certain human tasks, these problems remain among the most difficult to solve. Essentially, the problem is to…

  10. Algorithms for Mathematical Programming with Emphasis on Bi-level Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldfarb, Donald; Iyengar, Garud

    2014-05-22

    The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.

  11. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
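
    The randomized decomposition rests on the standard randomized range-finder: project the large matrix onto a small random subspace, then do the expensive factorization on the much smaller projected matrix. The generic sketch below uses a plain SVD rather than the generalized SVD the paper employs, and toy sizes throughout.

```python
# Generic randomized SVD via a randomized range-finder (Halko-style scheme):
# the expensive decomposition is done on a (k+p) x n matrix instead of m x n.
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.normal(size=(n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                # orthonormal basis for range(A)
    B = Q.T @ A                                   # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

A = np.random.default_rng(3).normal(size=(2000, 500))
U, s, Vt = randomized_svd(A, k=20)
print(U.shape, s.shape, Vt.shape)  # (2000, 20) (20,) (20, 500)
```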

  12. Self-interacting inelastic dark matter: a viable solution to the small scale structure problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juan.herrero-garcia@adelaide.edu.au

    2017-03-01

    Self-interacting dark matter has been proposed as a solution to the small-scale structure problems, such as the observed flat cores in dwarf and low surface brightness galaxies. If scattering takes place through light mediators, the scattering cross section relevant to solve these problems may fall into the non-perturbative regime, leading to a non-trivial velocity dependence, which allows compatibility with limits stemming from cluster-size objects. However, these models are strongly constrained by different observations, in particular from the requirements that the decay of the light mediator is sufficiently rapid (before Big Bang Nucleosynthesis) and from direct detection. A natural solution to reconcile both requirements is inelastic endothermic interactions, such that scatterings in direct detection experiments are suppressed or even kinematically forbidden if the mass splitting between the two states is sufficiently large. Using an exact solution when numerically solving the Schrödinger equation, we study such scenarios and find regions in the parameter space of dark matter and mediator masses, and the mass splitting of the states, where the small scale structure problems can be solved, the dark matter has the correct relic abundance and direct detection limits can be evaded.

  13. Investigating the psychological resilience, self-confidence and problem-solving skills of midwife candidates.

    PubMed

    Ertekin Pinar, Sukran; Yildirim, Gulay; Sayin, Neslihan

    2018-05-01

    The high level of psychological resilience, self-confidence and problem solving skills of midwife candidates play an important role in increasing the quality of health care and in fulfilling their responsibilities towards patients. This study was conducted to investigate the psychological resilience, self-confidence and problem-solving skills of midwife candidates. It is a convenience descriptive quantitative study. Students who study at Health Sciences Faculty in Turkey's Central Anatolia Region. Midwife candidates (N = 270). In collection of data, the Personal Information Form, Psychological Resilience Scale for Adults (PRSA), Self-Confidence Scale (SCS), and Problem Solving Inventory (PSI) were used. There was a negatively moderate-level significant relationship between the Problem Solving Inventory scores and the Psychological Resilience Scale for Adults scores (r = -0.619; p = 0.000), and between Self-Confidence Scale scores (r = -0.524; p = 0.000). There was a positively moderate-level significant relationship between the Psychological Resilience Scale for Adults scores and the Self-Confidence Scale scores (r = 0.583; p = 0.000). There was a statistically significant difference (p < 0.05) between the Problem Solving Inventory and the Psychological Resilience Scale for Adults scores according to getting support in a difficult situation. As psychological resilience and self-confidence levels increase, problem-solving skills increase; additionally, as self-confidence increases, psychological resilience increases too. Psychological resilience, self-confidence, and problem-solving skills of midwife candidates in their first-year of studies are higher than those who are in their fourth year. Self-confidence and psychological resilience of midwife candidates aged between 17 and 21, self-confidence and problem solving skills of residents of city centers, psychological resilience of those who perceive their monthly income as sufficient are high. Psychological resilience and problem-solving skills for midwife candidates who receive social support are also high. The fact that levels of self-confidence, problem-solving skills and psychological resilience of fourth-year students are found to be low presents a situation that should be taken into consideration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
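
    A minimal sequential simulated-annealing loop for a random L-SAT instance is sketched below to make the flip/accept mechanics concrete. The paper's speculative parallel scheme and its actual annealing schedule are not reproduced; the clause generator and cooling parameters are illustrative.

```python
# Sequential simulated annealing for a random 3-SAT instance: propose a single
# variable flip, accept improvements always and regressions with probability
# exp(-delta/T), and cool geometrically.
import math, random

random.seed(0)
n_vars, n_clauses, L = 100, 425, 3
clauses = [[random.choice([-1, 1]) * random.randint(1, n_vars)
            for _ in range(L)] for _ in range(n_clauses)]

def unsatisfied(assign):
    # a clause is satisfied if any literal matches the assignment
    return sum(not any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses)

assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
cost, T = unsatisfied(assign), 2.0
while T > 0.01:
    v = random.randint(1, n_vars)
    assign[v] = not assign[v]                      # propose a flip
    new_cost = unsatisfied(assign)
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
        cost = new_cost                            # accept
    else:
        assign[v] = not assign[v]                  # reject: undo the flip
    T *= 0.999                                     # geometric cooling

print(f"{(n_clauses - cost) / n_clauses:.1%} clauses satisfied")
```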

  15. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLP)s and Mixed Integer Linear Programs (MILP)s, especially those containing a great deal of symmetry.

  16. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
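
    In one dimension, the SPDE sampling idea reduces to a sparse solve with a white-noise load, as in the simplified sketch below. The paper uses a mixed finite element discretization in 3-D and a multilevel hierarchy, neither of which is shown; the operator and the parameter kappa here are illustrative stand-ins.

```python
# 1-D caricature of SPDE-based sampling: solving (I - kappa * Laplacian) u = W
# with a white-noise load W yields a correlated Gaussian field sample at the
# cost of one sparse solve, with no dense eigendecomposition.
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

n = 500
h = 1.0 / n
kappa = 1e-3                       # controls the correlation length (toy value)
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = (identity(n) - kappa * lap).tocsc()

rng = np.random.default_rng(6)
white = rng.normal(size=n) / np.sqrt(h)  # discretized white noise
sample = spsolve(A, white)               # one field sample per sparse solve
print(sample.shape)  # (500,)
```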

  18. A multilevel correction adaptive finite element method for Kohn-Sham equation

    NASA Astrophysics Data System (ADS)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is kept improving by solving the derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving large-scale Kohn-Sham systems is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  19. Emotion dysregulation, problem-solving, and hopelessness.

    PubMed

    Vatan, Sevginar; Lester, David; Gunn, John F

    2014-04-01

    A sample of 87 Turkish undergraduate students was administered scales to measure hopelessness, problem-solving skills, emotion dysregulation, and psychiatric symptoms. All of the scores from these scales were strongly associated. In a multiple regression, hopelessness scores were predicted by poor problem-solving skills and emotion dysregulation.

  20. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of simulating the aircraft parts riveting process necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.

  1. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.

  3. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
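
    For reference, a dense textbook implementation of Broyden's rank-one update is sketched below. The limited-memory variant described above avoids storing the approximate Jacobian B explicitly, which this toy version does not attempt; the example system and starting point are invented.

```python
# Compact (dense) Broyden's method for F(x) = 0: replace the Jacobian with a
# secant approximation B that is updated from successive residuals.
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                    # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)       # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        # Broyden's "good" rank-one update, enforcing B s = y on the last step
        B += np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Example: solve x0^2 + x1^2 = 4 and x0 * x1 = 1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
print(broyden(F, [2.0, 0.5]))  # converges near (1.932, 0.518)
```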

  4. A Large-scale Distributed Indexed Learning Framework for Data that Cannot Fit into Memory

    DTIC Science & Technology

    2015-03-27

    learn a classifier. Integrating three learning techniques (online, semi-supervised, and active learning) together with selective sampling, with minimum communication between the server and the clients, solved this problem.

  5. Crowdsourced 'R&D' and medical research.

    PubMed

    Callaghan, Christian William

    2015-09-01

    Crowdsourced R&D, a research methodology increasingly applied to medical research, has properties well suited to large-scale medical data collection and analysis, as well as enabling rapid research responses to crises such as disease outbreaks. Multidisciplinary literature offers diverse perspectives of crowdsourced R&D as a useful large-scale medical data collection and research problem-solving methodology. Crowdsourced R&D has demonstrated 'proof of concept' in a host of different biomedical research applications. A wide range of quality and ethical issues relate to crowdsourced R&D. The rapid growth in applications of crowdsourced R&D in medical research is predicted by an increasing body of multidisciplinary theory. Further research in areas such as artificial intelligence may allow better coordination and management of the high volumes of medical data and problem-solving inputs generated by the crowdsourced R&D process. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design/implement computer software for solving large-scale acoustic problems, arised from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should fully take advantages of multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective. the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper pre-conditioned strategies, unrolling strategies, and effective processors' communicating schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving series of structural, and acoustic (symmetrical and un-symmetrical) problems (in different computing platforms). Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  7. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    NASA Astrophysics Data System (ADS)

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; Okui, Takemichi; Tsai, Yuhsin

    2016-12-01

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H_0 and the matter density perturbation σ_8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ_8 problem, while the presence of tightly coupled dark radiation ameliorates the H_0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  10. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
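
    The dual-decomposition step can be illustrated on a toy capacity-coupled problem: relax the shared constraint with a Lagrange multiplier, let each route solve its own small subproblem (these decouple and could run in parallel), and update the multiplier by subgradient ascent. All numbers below are invented placeholders, not the paper's traffic model, and the subproblems are continuous rather than integer programs.

```python
# Schematic dual decomposition for a shared-capacity constraint.
import numpy as np

costs = np.array([1.0, 2.0, 3.0])    # convex delay penalty weights per route
demand = np.array([4.0, 3.0, 2.0])   # desired flow on each route
capacity = 6.0                       # shared en route capacity
lam, step = 0.0, 0.2                 # multiplier on sum(x) <= capacity

for _ in range(200):
    # Each route's subproblem min_x c_i*(d_i - x_i)^2 + lam*x_i decouples and
    # has the closed-form solution below; these solves could run in parallel.
    x = np.clip(demand - lam / (2 * costs), 0.0, demand)
    lam = max(0.0, lam + step * (x.sum() - capacity))  # subgradient ascent on the dual

print(x, x.sum(), lam)  # flows respect the capacity at the dual optimum
```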

  11. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade-off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence --- meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.

  12. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method of solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (termed complicating constraints) which makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This potential gain of efficiency can be extremely important for work in progress, and it can be particularly useful in cases where the computation time is a critical factor for obtaining an optimized solution in due time.
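
    The two-step idea can be sketched with scipy: solve the model with the complicating constraint removed, then warm-start the complete model from that solution. The objective and constraint below are hypothetical stand-ins, not the paper's water-resources model.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical smooth objective for a small nonlinear model.
      def objective(v):
          x, y = v
          return (x - 3.0)**2 + (y - 2.0)**2 + 0.1 * x * y

      # Hypothetical "complicating" nonlinear constraint: x * y >= 2.
      complicating = {'type': 'ineq', 'fun': lambda v: v[0] * v[1] - 2.0}

      # Step one: solve the simplified model (complicating constraint eliminated).
      step1 = minimize(objective, np.zeros(2), method='SLSQP')

      # Step two: solve the complete model, warm-started at the step-one solution.
      step2 = minimize(objective, step1.x, constraints=[complicating], method='SLSQP')
      print(step1.x, step2.x)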

  13. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique for producing a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements produces a large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. Converting the Jacobian matrix to a sparse format eliminates the zero elements, which reduces the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
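
    A minimal sketch of the thresholding step followed by a CG solve of Tikhonov-regularized normal equations; the matrix sizes, threshold, and regularization weight are hypothetical, and the CG here is scipy's serial version rather than the paper's parallel block-wise variant.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import LinearOperator, cg

      rng = np.random.default_rng(0)
      J = rng.standard_normal((500, 2000)) * np.exp(-8.0 * rng.random((500, 2000)))

      # Threshold tiny Jacobian entries to zero and store the result in CSR format.
      J[np.abs(J) < 1e-3] = 0.0
      Js = sp.csr_matrix(J)

      # Solve the regularized normal equations (J^T J + alpha I) x = J^T b with CG.
      b = rng.standard_normal(500)
      alpha = 1e-2
      A = LinearOperator((2000, 2000), matvec=lambda x: Js.T @ (Js @ x) + alpha * x)
      x, info = cg(A, Js.T @ b)
      print("converged" if info == 0 else f"cg info = {info}")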

  14. Trinification, the hierarchy problem, and inverse seesaw neutrino masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cauet, Christophe; Paes, Heinrich; Wiesenfeldt, Soeren

    2011-05-01

    In minimal trinification models light neutrino masses can be generated via a radiative seesaw mechanism, where the masses of the right-handed neutrinos originate from loops involving Higgs and fermion fields at the unification scale. This mechanism is absent in models aiming at solving or ameliorating the hierarchy problem, such as low-energy supersymmetry, since the large seesaw scale disappears. In this case, neutrino masses need to be generated via a TeV-scale mechanism. In this paper, we investigate an inverse seesaw mechanism and discuss some phenomenological consequences.

  15. Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2016-11-01

    The study of fluid flow coupled with solid mechanics has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve this coupled multiphysics problem. The fluid motion is solved by the Lattice Boltzmann method, while the solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so that the expensive remeshing step is eliminated. The code can model both rigid and deformable solids and shows very good scaling results. It was validated with classic problems such as the migration of rigid particles and the orbit of an ellipsoidal particle in shear flow. Examples of applications to blood flow, drug delivery, and platelet adhesion and rupture are also given in the paper.
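
    In the immersed boundary method, solid markers exchange information with the fluid grid through a smoothed delta function. Below is a minimal 1D sketch of velocity interpolation using Peskin's cosine kernel; the grid values and marker position are hypothetical.

      import numpy as np

      def peskin_delta(r):
          """Peskin's cosine kernel, supported on |r| <= 2 (r in grid units)."""
          r = np.abs(r)
          return np.where(r < 2.0, 0.25 * (1.0 + np.cos(np.pi * r / 2.0)), 0.0)

      h = 1.0                               # grid spacing
      u_grid = np.sin(0.3 * np.arange(32))  # hypothetical fluid velocities at the nodes
      x_marker = 10.37                      # immersed-boundary marker position

      # Interpolate fluid velocity to the marker: u(X) = sum_i u_i * delta((x_i - X) / h).
      nodes = np.arange(32) * h
      weights = peskin_delta((nodes - x_marker) / h)
      u_marker = np.sum(u_grid * weights)
      print(f"Interpolated marker velocity: {u_marker:.4f}")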

  16. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
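
    A Choleski-based direct solve of the kind described can be sketched with scipy on a small symmetric positive definite system; the random matrix below is a hypothetical stand-in for a structural stiffness matrix.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      rng = np.random.default_rng(1)
      B = rng.standard_normal((200, 200))
      K = B @ B.T + 200.0 * np.eye(200)   # hypothetical SPD "stiffness" matrix
      f = rng.standard_normal(200)        # load vector

      # Factor once (K = L L^T), then solve; additional load cases reuse the factor.
      factor = cho_factor(K)
      u = cho_solve(factor, f)
      print(np.allclose(K @ u, f))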

  17. Solving satisfiability problems using a novel microarray-based DNA computer.

    PubMed

    Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning

    2007-01-01

    An algorithm based on a modified sticker model, accompanied by an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and a series of separation procedures to destroy the unwanted ones, we built solutions in parts, satisfying one clause per step and eventually satisfying the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-applying equipment were required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during the computing process, so the proposed method should be useful for dealing with large-scale problems.

  18. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.

  19. Multi-GPU implementation of a VMAT treatment plan optimization algorithm.

    PubMed

    Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B

    2015-06-01

    Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported; hence, another purpose is to present the detailed techniques employed for GPU implementation. The authors also use this particular problem as an example with which to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP are implemented on the CPU or a single GPU due to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace step scheme is adopted to solve the MP. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed light on other large-scale medical physics problems that require multi-GPU techniques.
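
    The COO-to-blocked-CSR layout described above can be mimicked on a CPU with scipy; the four-way row partition standing in for the beam-angle groups is hypothetical.

      import numpy as np
      import scipy.sparse as sp

      rng = np.random.default_rng(2)
      # Hypothetical sparse DDC matrix assembled in COO format on the CPU side.
      ddc_coo = sp.random(8000, 1000, density=1e-3, format='coo', random_state=2)

      # Split into four row blocks (standing in for four beam-angle groups), each
      # converted to CSR as it would be before being shipped to its own GPU.
      csr = ddc_coo.tocsr()
      blocks = [csr[i * 2000:(i + 1) * 2000, :] for i in range(4)]

      # A beamlet-price-like matrix-vector product evaluated block by block.
      x = rng.standard_normal(1000)
      price = np.concatenate([b @ x for b in blocks])
      print(price.shape)  # (8000,)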

  20. Transmission Technologies and Operational Characteristic Analysis of Hybrid UHV AC/DC Power Grids in China

    NASA Astrophysics Data System (ADS)

    Tian, Zhang; Yanfeng, Gong

    2017-05-01

    In order to resolve the mismatch between the demand for primary energy and its distribution, Ultra High Voltage (UHV) power grids should be developed rapidly to serve the development of energy bases and the connection of large-scale renewable energy. This paper reviews the latest research on AC/DC transmission technologies, summarizes the characteristics of AC/DC power grids, and concludes that China's power grids are entering a new period of large-scale hybrid UHV AC/DC operation in which the characteristics of "strong DC and weak AC" become increasingly prominent. Possible problems in the operation of AC/DC power grids are discussed, and the interaction between AC and DC grids is studied intensively. In view of these problems, a preliminary scheme is summarized as follows: strengthening backbone structures, enhancing AC/DC transmission technologies, promoting protection measures for clean energy accessing grids, and taking actions to solve stability problems of voltage and frequency. These measures are valuable for making hybrid UHV AC/DC power grids adapt to the operating mode of large power grids, thus guaranteeing the security and stability of the power system.

  1. Path changing methods applied to the 4-D guidance of STOL aircraft.

    DOT National Transportation Integrated Search

    1971-11-01

    Prior to the advent of large-scale commercial STOL service, some challenging navigation and guidance problems must be solved. Proposed terminal area operations may require that these aircraft be capable of accurately flying complex flight paths, and ...

  2. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here focuses on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as the frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real-data FAP instances are very difficult to solve due to the NP-hard nature of the problem; therefore, an efficient parallel approach which makes the most of different evolutionary strategies is a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, the results prove that the proposed approach obtains very high-quality solutions for the FAP and beats all previously published results.

  3. An extended basis inexact shift-invert Lanczos for the efficient solution of large-scale generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Rewieński, M.; Lamecki, A.; Mrozowski, M.

    2013-09-01

    This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with multilevel preconditioner, for finding several eigenvalues for generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. Presented results of numerical experiments confirm the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance, as compared to methods which rely on exact linear solves, indicating tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
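
    scipy's eigsh exposes a shift-invert Lanczos mode that mirrors this idea, albeit with exact inner solves rather than the paper's inexact ones; the test pencil below is a hypothetical generalized symmetric eigenproblem.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      # Hypothetical generalized symmetric eigenproblem K x = lambda M x.
      n = 3000
      K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
      M = sp.identity(n, format='csc')

      # Shift-invert Lanczos: find the 5 eigenvalues closest to the shift sigma.
      vals, vecs = eigsh(K, k=5, M=M, sigma=0.01, which='LM')
      print(np.sort(vals))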

  4. Solving Navier-Stokes equations on a massively parallel processor; The 1 GFLOP performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saati, A.; Biringen, S.; Farhat, C.

    This paper reports on experience in solving large-scale fluid dynamics problems on the Connection Machine model CM-2. The authors have implemented a parallel version of the MacCormack scheme for the solution of the Navier-Stokes equations. By using triad floating point operations and reducing the number of interprocessor communications, they have achieved a sustained performance rate of 1.42 GFLOPS.
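
    MacCormack's predictor-corrector idea is easy to sketch on the 1D linear advection equation u_t + a u_x = 0; the grid, wave speed, and initial profile here are hypothetical, and the paper itself of course treats the full Navier-Stokes equations.

      import numpy as np

      nx, a, cfl = 200, 1.0, 0.8
      dx = 1.0 / nx
      dt = cfl * dx / a
      x = np.linspace(0.0, 1.0, nx, endpoint=False)
      u = np.exp(-200.0 * (x - 0.3)**2)   # hypothetical initial profile (periodic domain)

      for _ in range(100):
          # Predictor: forward difference.
          up = u - a * dt / dx * (np.roll(u, -1) - u)
          # Corrector: backward difference on the predicted field, then average.
          u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))
      print(f"Total 'mass' after 100 steps: {u.sum():.6f}")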

  5. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistics analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.

  6. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm performed well for large-deformation and multi-modal three-dimensional medical image registration.

  7. Task-driven dictionary learning.

    PubMed

    Mairal, Julien; Bach, Francis; Ponce, Jean

    2012-04-01

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
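
    scikit-learn ships an unsupervised dictionary learner that illustrates the sparse-coding building block the paper extends to supervised, task-driven training; the synthetic data below are hypothetical.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning

      rng = np.random.default_rng(3)
      # Hypothetical signals: sparse combinations of a random ground-truth dictionary.
      D_true = rng.standard_normal((20, 50))
      codes = rng.standard_normal((200, 50)) * (rng.random((200, 50)) < 0.05)
      X = codes @ D_true.T

      # Learn a 50-atom dictionary with an l1 penalty on the codes.
      dl = DictionaryLearning(n_components=50, alpha=0.5, max_iter=200,
                              transform_algorithm='lasso_lars', random_state=0)
      sparse_codes = dl.fit_transform(X)
      print(sparse_codes.shape, dl.components_.shape)  # (200, 50) (50, 20)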

  8. The Relationship between Students' Problem Posing and Problem Solving Abilities and Beliefs: A Small-Scale Study with Chinese Elementary School Children

    ERIC Educational Resources Information Center

    Limin, Chen; Van Dooren, Wim; Verschaffel, Lieven

    2013-01-01

    The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire,…

  9. Solving large scale unit dilemma in electricity system by applying commutative law

    NASA Astrophysics Data System (ADS)

    Legino, Supriadi; Arianto, Rakhmat

    2018-03-01

    The conventional system pools resources, with large centralized power plants interconnected as a network; it provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants need huge capital, and more problems have emerged to hinder the construction of big power plants as well as their associated transmission lines. By applying the commutative law of multiplication, ab = ba for all a, b ∈ ℝ, the problems associated with the conventional system depicted above can be reduced. The idea of having many small-unit power plants, namely "Listrik Kerakyatan" (LK), provides both social and environmental benefits that can be capitalized under proper assumptions. This study compares the cost and benefit of LK to those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly form of distributed generation, can be applied to solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate less than 11 hours per day as peaker or load-follower plants to improve the load-curve balance of the power system, and they indicate that the investment cost of an LK plant should be optimized. This study also indicates that the benefit of the economies-of-scale principle does not always apply, particularly if the portion of intangible costs and benefits is relatively high.

  10. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared- and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single- or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, keep large numbers of processors busy, and treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architectures are preferable to shared-memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  11. An Assessment of the Effect of Collaborative Groups on Students' Problem-Solving Strategies and Abilities

    ERIC Educational Resources Information Center

    Cooper, Melanie M.; Cox, Charles T., Jr.; Nammouz, Minory; Case, Edward; Stevens, Ronald

    2008-01-01

    Improving students' problem-solving skills is a major goal for most science educators. While a large body of research on problem solving exists, assessment of meaningful problem solving is very difficult, particularly for courses with large numbers of students in which one-on-one interactions are not feasible. We have used a suite of software…

  12. Neural Networks For Demodulation Of Phase-Modulated Signals

    NASA Technical Reports Server (NTRS)

    Altes, Richard A.

    1995-01-01

    Hopfield neural networks are proposed for demodulating quadrature phase-shift-keyed (QPSK) signals carrying digital information. The networks solve nonlinear integral equations that prior demodulation circuits cannot solve. A network consists of a set of N operational amplifiers connected in parallel, with weighted feedback from the output terminal of each amplifier to the input terminals of the other amplifiers, and is used to solve signal processing problems. It can be implemented as an analog very-large-scale integrated circuit that achieves rapid convergence, or alternatively as a digital simulation of such a circuit. It can also be used to improve phase estimation performance over that of a phase-locked loop.
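
    A minimal digital simulation of a Hopfield network relaxing to a stored pattern; the Hebbian weight rule is standard, while the patterns and noise level are hypothetical (the QPSK formulation in the abstract would replace them with a signal-derived energy function).

      import numpy as np

      rng = np.random.default_rng(4)
      patterns = rng.choice([-1, 1], size=(3, 64))   # hypothetical stored patterns

      # Hebbian weights: W = (1/N) sum_p x_p x_p^T, with zero diagonal.
      W = (patterns.T @ patterns) / 64.0
      np.fill_diagonal(W, 0.0)

      # Start from a corrupted copy of pattern 0 and relax asynchronously.
      state = patterns[0].copy()
      state[rng.choice(64, size=12, replace=False)] *= -1
      for _ in range(5):                             # a few full update sweeps
          for i in rng.permutation(64):
              state[i] = 1 if W[i] @ state >= 0 else -1
      print("Recovered stored pattern:", np.array_equal(state, patterns[0]))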

  13. Computer problem-solving coaches for introductory physics: Design and usability studies

    NASA Astrophysics Data System (ADS)

    Ryan, Qing X.; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Mason, Andrew

    2016-06-01

    The combination of modern computing power, the interactivity of web applications, and the flexibility of object-oriented programming may finally be sufficient to create computer coaches that can help students develop metacognitive problem-solving skills, an important competence in our rapidly changing technological society. However, no matter how effective such coaches might be, they will only be useful if they are attractive to students. We describe the design and testing of a set of web-based computer programs that act as personal coaches to students while they practice solving problems from introductory physics. The coaches are designed to supplement regular human instruction, giving students access to effective forms of practice outside class. We present results from large-scale usability tests of the computer coaches and discuss their implications for future versions of the coaches.

  14. Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem

    DOE PAGES

    Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...

    2015-10-02

    The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large-scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.

  15. Implementing and Bounding a Cascade Heuristic for Large-Scale Optimization

    DTIC Science & Technology

    2017-06-01

    solving the monolith, we develop a method for producing lower bounds to the optimal objective function value. To do this, we solve a new integer...as developing and analyzing methods for producing lower bounds to the optimal objective function value of the seminal problem monolith, which this...length of the window decreases, the end effects of the model typically increase (Zerr, 2016). There are four primary methods for correcting end

  16. Scale-Up: Improving Large Enrollment Physics Courses

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    1999-11-01

    The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.

  17. On Instability of Geostrophic Current with Linear Vertical Shear at Length Scales of Interleaving

    NASA Astrophysics Data System (ADS)

    Kuzmina, N. P.; Skorokhodov, S. L.; Zhurbas, N. V.; Lyzhkov, D. A.

    2018-01-01

    The instability of long-wave disturbances of a geostrophic current with linear velocity shear is studied with allowance for the diffusion of buoyancy. A detailed derivation of the model problem in dimensionless variables is presented, which is used for analyzing the dynamics of disturbances in a vertically bounded layer and for describing the formation of large-scale intrusions in the Arctic basin. The problem is solved numerically using a high-precision method developed for solving fourth-order differential equations. It is established that the spectrum contains an eigenvalue corresponding to unstable (growing with time) disturbances, which are characterized by a phase velocity exceeding the maximum velocity of the geostrophic flow. A discussion is presented to explain some features of the instability.

  18. Grand challenges for biological engineering

    PubMed Central

    Yoon, Jeong-Yeol; Riley, Mark R

    2009-01-01

    Biological engineering will play a significant role in solving many of the world's problems in medicine, agriculture, and the environment. Recently the U.S. National Academy of Engineering (NAE) released a document "Grand Challenges in Engineering," covering broad realms of human concern from sustainability, health, vulnerability and the joy of living. Biological engineers, having tools and techniques at the interface between living and non-living entities, will play a prominent role in forging a better future. The 2010 Institute of Biological Engineering (IBE) conference in Cambridge, MA, USA will address, in part, the roles of biological engineering in solving the challenges presented by the NAE. This letter presents a brief outline of how biological engineers are working to solve these large scale and integrated problems of our society. PMID:19772647

  19. Can compactifications solve the cosmological constant problem?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertzberg, Mark P.; Center for Theoretical Physics, Department of Physics,Massachusetts Institute of Technology,77 Massachusetts Ave, Cambridge, MA 02139; Masoumi, Ali

    2016-06-30

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  20. Solution of matrix equations using sparse techniques

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi

    1994-01-01

    The solution of large systems of matrix equations is key to the solution of a large number of scientific and engineering problems. This talk describes the sparse matrix solver developed at Langley which can routinely solve in excess of 263,000 equations in 40 seconds on one Cray C-90 processor. It appears that for large scale structural analysis applications, sparse matrix methods have a significant performance advantage over other methods.

  1. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill; Feiereisen, William (Technical Monitor)

    2000-01-01

    The term "Grid" refers to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. The vision for NASN's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks that will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: The scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user if the tool designer: The computational scientists who convert physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. This paper describes the current state of IPG (the operational testbed), the set of capabilities being put into place for the operational prototype IPG, as well as some of the longer term R&D tasks.

  2. Demonstration of quantum advantage in machine learning

    NASA Astrophysics Data System (ADS)

    Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.

    2017-04-01

    The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.

  3. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at regional, national and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system which can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases of different sizes were generated and solved. Also, different cost scenarios were designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS software failed to reach an optimal solution even within much longer times.

  5. Source localization in electromyography using the inverse potential problem

    NASA Astrophysics Data System (ADS)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2011-02-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
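
    Regularization of this kind can be sketched as Tikhonov-penalized least squares; the forward matrix and first-difference smoothing operator below are hypothetical stand-ins for the finite element volume-conduction model.

      import numpy as np

      rng = np.random.default_rng(5)
      A = rng.standard_normal((40, 120))      # hypothetical sensor-to-source forward map
      x_true = np.zeros(120)
      x_true[50:60] = 1.0                     # hypothetical compact source
      b = A @ x_true + 0.01 * rng.standard_normal(40)

      # First-difference operator encoding an a priori smoothness assumption.
      L = np.eye(120, k=1)[:-1] - np.eye(120)[:-1]

      # Tikhonov solution: argmin ||A x - b||^2 + lam * ||L x||^2, via normal equations.
      lam = 1.0
      x = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
      print(np.round(x[48:62], 2))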

  6. Large-scale brain network associated with creative insight: combined voxel-based morphometry and resting-state functional connectivity analyses.

    PubMed

    Ogawa, Takeshi; Aihara, Takatsugu; Shimokawa, Takeaki; Yamashita, Okito

    2018-04-24

    Creative insight occurs with an "Aha!" experience when solving a difficult problem. Here, we investigated large-scale networks associated with insight problem solving. We recruited 232 healthy participants aged 21-69 years old. Participants completed a magnetic resonance imaging study (MRI; structural imaging and a 10 min resting-state functional MRI) and an insight test battery (ITB) consisting of written questionnaires (matchstick arithmetic task, remote associates test, and insight problem solving task). To identify the resting-state functional connectivity (RSFC) associated with individual creative insight, we conducted an exploratory voxel-based morphometry (VBM)-constrained RSFC analysis. We identified positive correlations between ITB score and grey matter volume (GMV) in the right insula and middle cingulate cortex/precuneus, and a negative correlation between ITB score and GMV in the left cerebellum crus 1 and right supplementary motor area. We applied seed-based RSFC analysis to whole brain voxels using the seeds obtained from the VBM and identified insight-positive/negative connections, i.e. a positive/negative correlation between the ITB score and individual RSFCs between two brain regions. Insight-specific connections included motor-related regions whereas creative-common connections included a default mode network. Our results indicate that creative insight requires a coupling of multiple networks, such as the default mode, semantic and cerebral-cerebellum networks.

  7. Introducing the MCHF/OVRP/SDMP: Multicapacitated/Heterogeneous Fleet/Open Vehicle Routing Problems with Split Deliveries and Multiproducts

    PubMed Central

    Yilmaz Eroglu, Duygu; Caglar Gencosman, Burcu; Cavdur, Fatih; Ozmutlu, H. Cenk

    2014-01-01

    In this paper, we analyze a real-world OVRP problem for a production company. Considering real-world constraints, we classify our problem as a multicapacitated/heterogeneous fleet/open vehicle routing problem with split deliveries and multiple products (MCHF/OVRP/SDMP), which is a novel classification of an OVRP. We have developed a mixed integer programming (MIP) model for the problem and generated test problems of different sizes (10-90 customers) considering real-world parameters. Although MIP is able to find optimal solutions for small problems (10 customers), as the number of customers increases the problem becomes harder to solve, and MIP could not find optimal solutions for problems with more than 10 customers. Moreover, MIP fails to find any feasible solution for large-scale problems (50-90 customers) within the time limit (7200 seconds). Therefore, we have developed a genetic algorithm (GA) based solution approach for large-scale problems. The experimental results show that the GA-based approach reaches good solutions with a 9.66% gap in 392.8 s on average, instead of 7200 s, for problems with 10-50 customers. For large-scale problems (50-90 customers), GA reaches feasible solutions within the time limit. In conclusion, for real-world applications, GA is preferable to MIP for reaching feasible solutions in short time periods. PMID:25045735

  8. Linear solver performance in elastoplastic problem solution on GPU cluster

    NASA Astrophysics Data System (ADS)

    Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.

    2017-12-01

    Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in simulations of three-dimensional metal matrix composite microvolume deformation, tens to hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of these methods depends strongly on the operator spectrum of the problem's stiffness matrix, so a series of computational experiments is used to choose the appropriate method, and different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, Edmond

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then solving these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the factorization that is desired.
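
    A concrete instance of this idea is a fixed-point iteration for incomplete LU factorization in which every entry of the approximate factors satisfies one bilinear equation and all entries are updated in Jacobi-style sweeps. The dense toy sketch below (full sparsity pattern, hypothetical matrix) illustrates such a fixed-point iteration; it is not the production asynchronous algorithm.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 6
      A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant toy matrix

      # Unknowns: entries of L (unit diagonal) and U; start from a rough guess.
      L = np.tril(A, -1) / np.diag(A) + np.eye(n)
      U = np.triu(A)

      for _ in range(30):                   # Jacobi-style sweeps over all entries
          Lc, Uc = L.copy(), U.copy()       # read old values, write new ones
          for i in range(n):
              for j in range(n):
                  s = Lc[i, :min(i, j)] @ Uc[:min(i, j), j]
                  if i > j:                 # bilinear equation for l_ij
                      L[i, j] = (A[i, j] - s) / Uc[j, j]
                  else:                     # bilinear equation for u_ij
                      U[i, j] = A[i, j] - s
      print("Factorization residual:", np.max(np.abs(L @ U - A)))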

  10. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Simple substitution of Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids the formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.
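
    A matrix-free, Jacobi-preconditioned conjugate gradient loop of the kind described can be sketched on a 1D Laplacian whose action is computed on the fly; the operator is a hypothetical stand-in for the LSFEM system.

      import numpy as np

      n = 1000
      def apply_A(x):
          """Matrix-free action of a 1D Dirichlet Laplacian; no matrix is stored."""
          y = 2.0 * x
          y[:-1] -= x[1:]
          y[1:] -= x[:-1]
          return y

      diag = 2.0 * np.ones(n)        # Jacobi preconditioner: the diagonal of A
      b = np.ones(n)
      x = np.zeros(n)
      r = b - apply_A(x)
      z = r / diag
      p = z.copy()
      for _ in range(2000):
          Ap = apply_A(p)
          alpha = (r @ z) / (p @ Ap)
          x += alpha * p
          r_new = r - alpha * Ap
          if np.linalg.norm(r_new) < 1e-8:
              break
          z_new = r_new / diag
          beta = (r_new @ z_new) / (r @ z)
          p = z_new + beta * p
          r, z = r_new, z_new
      print("Final residual norm:", np.linalg.norm(b - apply_A(x)))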

  11. The Cyclic Nature of Problem Solving: An Emergent Multidimensional Problem-Solving Framework

    ERIC Educational Resources Information Center

    Carlson, Marilyn P.; Bloom, Irene

    2005-01-01

    This paper describes the problem-solving behaviors of 12 mathematicians as they completed four mathematical tasks. The emergent problem-solving framework draws on the large body of research, as grounded by and modified in response to our close observations of these mathematicians. The resulting "Multidimensional Problem-Solving Framework" has four…

  12. [Research progress on hydrological scaling].

    PubMed

    Liu, Jianmei; Pei, Tiefan

    2003-12-01

    With the development of hydrology and the growing impact of mankind on the environment, scale issues have become a great challenge to many hydrologists due to the stochasticity and complexity of hydrological phenomena and natural catchments. More and more attention has been given to scaling issues, i.e., inferring large-scale (or small-scale) hydrological characteristics from catchments known at another scale, but the problem has not been solved successfully. The first part of this paper introduces some concepts of hydrological scale, scale issues and scaling. The key problem is the spatial heterogeneity of catchments and the temporal and spatial variability of hydrological fluxes. Three approaches to scaling are put forward in the third part: distributed modeling, fractal theory, and statistical self-similarity analyses. Existing problems and future research directions are proposed in the last part.

  13. Vibration-based structural health monitoring of the aircraft large component

    NASA Astrophysics Data System (ADS)

    Pavelko, V.; Kuznetsov, S.; Nevsky, A.; Marinbah, M.

    2017-10-01

    This paper investigates the basic problems of a local SHM system for a large-scale aircraft component. Vibration-based damage detection is accepted as the basic approach, and the main attention is focused on a low-cost solution that would be attractive for practice. The conditions for small damage detection in a full-scale structural component under low-frequency excitation were defined in an analytical study and modal FEA. In the experimental study, a dynamic test of a helicopter Mi-8 tail beam was performed under harmonic excitation with a frequency close to the first natural frequency of the beam. The correlation coefficient deviation (CCD) index was used to extract the features caused by an embedded pseudo-damage. It is shown that the problem of vibration-based detection of small damage in a large-scale structure under low-frequency excitation can be solved successfully.

  14. A forward-advancing wave expansion method for numerical solution of large-scale sound propagation problems

    NASA Astrophysics Data System (ADS)

    Rolla, L. Barrera; Rice, H. J.

    2006-09-01

    In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on the interpolation of wave functions, known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward-advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one-way) parabolic equation method (PEM), usually discretized using standard finite difference or finite element methods. These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using the FWEM are presented for two propagation problems; comparisons with analytical and theoretical solutions show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.

  15. Spin determination at the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Yavin, Itay

    The quantum field theory describing the Electroweak sector demands some new physics at the TeV scale in order to unitarize the scattering of longitudinal W bosons. If this new physics takes the form of a scalar Higgs boson, then it is hard to understand the huge hierarchy between the Electroweak scale ~1 TeV and the Planck scale ~10^19 GeV. This is known as the Naturalness problem. Normally, in order to solve this problem, new particles, in addition to the Higgs boson, are required to be present in the spectrum below a few TeV. If such particles are indeed discovered at the Large Hadron Collider, it will become important to determine their spin. Several classes of models for physics beyond the Electroweak scale exist, and determining the spin of any newly discovered particle could prove to be the only means of distinguishing between these different models. In the first part of this thesis, we present a thorough discussion of such a measurement. We survey the different potentially useful channels for spin determination, and a detailed analysis of the most promising channel is performed. The Littlest Higgs model offers a way to solve the hierarchy problem by introducing heavy partners of the Standard Model particles with the same spin and quantum numbers. However, this model is only valid up to ~10 TeV. In the second part of this thesis we present an extension of this model into a strongly coupled theory above ~10 TeV. We use the celebrated AdS/CFT correspondence to calculate properties of the low-energy physics in terms of high-energy parameters. We comment on some of the tensions inherent in such a construction involving a large-N CFT (or, equivalently, an AdS space).

  16. Non-Gaussianity and Excursion Set Theory: Halo Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adshead, Peter; Baxter, Eric J.; Dodelson, Scott

    2012-09-01

    We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recapture the familiar result that the bias scales as k^{-2} on large scales for local type non-Gaussianity, but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.

  17. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  18. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  19. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and obtain a convex problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.

  20. The min-conflicts heuristic: Experimental and theoretical results

    NASA Technical Reports Server (NTRS)

    Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip

    1991-01-01

    This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
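
    As a concrete illustration of the repair-based search the abstract describes, the following minimal Python sketch applies the min-conflicts heuristic to the n-queens problem (the function name and the simple step budget are ours, not the paper's):

      import random

      def min_conflicts_nqueens(n, max_steps=100000):
          # Initial assignment: one queen per column, in a random row.
          rows = [random.randrange(n) for _ in range(n)]

          def conflicts(col, row):
              # Count queens in other columns attacking square (row, col).
              return sum(1 for c in range(n)
                         if c != col and (rows[c] == row or
                                          abs(rows[c] - row) == abs(c - col)))

          for _ in range(max_steps):
              conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
              if not conflicted:
                  return rows  # solution found
              col = random.choice(conflicted)
              # Repair step: move the queen to the row minimizing conflicts.
              rows[col] = min(range(n), key=lambda r: conflicts(col, r))
          return None  # no solution within the step budget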

  1. Designing Cognitive Complexity in Mathematical Problem-Solving Items

    ERIC Educational Resources Information Center

    Daniel, Robert C.; Embretson, Susan E.

    2010-01-01

    Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…

  2. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper, based on two modified PRP conjugate gradient methods: the first algorithm is designed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two pieces of information: the function value and the gradient value. Both methods possess some good properties: (1) βk ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; and (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
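
    The classical PRP formula on which such methods build is βk = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2. The sketch below implements the well-known PRP+ safeguard βk = max(0, βk^PRP), which enforces property (1); it is a generic baseline with an Armijo backtracking line search, not the authors' line-search-free algorithms:

      import numpy as np

      def prp_plus_cg(f, grad, x0, tol=1e-6, max_iter=1000):
          # Classical PRP+ conjugate gradient: beta = max(0, PRP formula).
          x = x0.copy()
          g = grad(x)
          d = -g
          for _ in range(max_iter):
              if np.linalg.norm(g) < tol:
                  break
              # Simple Armijo backtracking; the paper's methods reportedly
              # guarantee descent without any line search.
              t, fx = 1.0, f(x)
              while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
                  t *= 0.5
                  if t < 1e-12:
                      break
              x_new = x + t * d
              g_new = grad(x_new)
              beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # PRP+
              d = -g_new + beta * d
              x, g = x_new, g_new
          return x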

  4. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBA) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system, including a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefit of this approach is that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  5. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
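
    The core of such a formulation is a bound-constrained quadratic program in place of the plain Galerkin solve. A serial toy analogue (the paper itself works through PETSc/TAO at scale; the function name and starting guess here are ours) is:

      import numpy as np
      from scipy.optimize import minimize

      def nonnegative_diffusion_solve(K, f):
          # Bound-constrained QP: minimize 1/2 u^T K u - u^T f  s.t. u >= 0,
          # with K the (dense, SPD) stiffness matrix and f the load vector.
          obj = lambda u: 0.5 * u @ (K @ u) - f @ u
          jac = lambda u: K @ u - f
          # Clipped Galerkin solution as a feasible starting point.
          u0 = np.maximum(np.linalg.solve(K, f), 0.0)
          res = minimize(obj, u0, jac=jac, method='L-BFGS-B',
                         bounds=[(0.0, None)] * len(f))
          return res.x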

  7. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741

  8. A modified priority list-based MILP method for solving large-scale unit commitment problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Xinda; Lu, Ning; Wu, Di

    This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. In the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. In the second step, for the generators whose on/off status is predetermined, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up the UC problem with minor compromise in optimality when appropriate scale factors are selected.
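
    The binary-fixing idea is easy to express with an off-the-shelf modeling layer. Below is a toy single-period sketch using the PuLP library; the priority-list selection is assumed to have already produced the set fixed_on, and all names are ours:

      import pulp

      def unit_commitment(costs, capacities, demand, fixed_on):
          # Toy single-period UC; fixed_on holds units whose commitment was
          # predetermined by the priority list, removing their binaries.
          n = len(costs)
          prob = pulp.LpProblem("uc", pulp.LpMinimize)
          u = [pulp.LpVariable(f"u{i}", cat="Binary") for i in range(n)]
          p = [pulp.LpVariable(f"p{i}", lowBound=0) for i in range(n)]
          prob += pulp.lpSum(costs[i] * p[i] for i in range(n))  # objective
          prob += pulp.lpSum(p) >= demand                        # serve load
          for i in range(n):
              prob += p[i] <= capacities[i] * u[i]               # capacity
              if i in fixed_on:
                  # Pin the binary to 1: branch-and-bound never sees it.
                  u[i].lowBound = u[i].upBound = 1
          prob.solve()  # uses PuLP's bundled CBC solver by default
          return [pulp.value(v) for v in u], [pulp.value(v) for v in p]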

  9. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
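
    The step being accelerated is the damped normal-equations solve. A minimal sketch (in SciPy rather than the paper's Julia, and without the subspace recycling that is the paper's actual contribution) uses a Krylov least-squares solver so the Jacobian is only touched through matrix-vector products:

      import numpy as np
      from scipy.sparse.linalg import lsqr

      def lm_step(J, r, lam):
          # Solve (J^T J + lam*I) delta = -J^T r iteratively. lsqr's `damp`
          # supplies the regularization, so the normal equations are never
          # formed explicitly; only matvecs with J are needed.
          return lsqr(J, -r, damp=np.sqrt(lam))[0]

      def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, max_iter=50):
          x = x0.copy()
          for _ in range(max_iter):
              r, J = residual(x), jacobian(x)
              delta = lm_step(J, r, lam)
              if np.linalg.norm(delta) < 1e-10:
                  break
              # Standard damping update: accept and relax, or tighten.
              if np.sum(residual(x + delta)**2) < np.sum(r**2):
                  x, lam = x + delta, lam * 0.5
              else:
                  lam *= 2.0
          return x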

  10. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Pan, Yan; Dai, Xiaoying; de Gironcoli, Stefano; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-11-01

    Motivated by the recently proposed parallel orbital-updating approach in the real-space method [1], we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large-scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large-scale calculations on modern supercomputers.

  11. Fully implicit adaptive mesh refinement solver for 2D MHD

    NASA Astrophysics Data System (ADS)

    Philip, B.; Chacon, L.; Pernice, M.

    2008-11-01

    Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfven time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
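
    SciPy exposes a ready-made Jacobian-free Newton-Krylov driver, which illustrates the matrix-free idea (though not the physics-based preconditioning that makes the paper's solver scale). A small sketch on a discrete reaction-diffusion-type residual of our own choosing:

      import numpy as np
      from scipy.optimize import newton_krylov

      def residual(u):
          # Discrete 1D Laplacian plus a cubic reaction and constant source,
          # homogeneous Dirichlet boundaries (an illustrative system of ours).
          r = np.empty_like(u)
          r[0], r[-1] = u[0], u[-1]
          r[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:] + 0.1 * u[1:-1]**3 - 1.0
          return r

      u0 = np.zeros(101)
      # The Jacobian is never assembled: J*v products are approximated by
      # finite differences inside the Krylov (LGMRES) iterations.
      u = newton_krylov(residual, u0, method='lgmres')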

  12. Factors affecting the social problem-solving ability of baccalaureate nursing students.

    PubMed

    Lau, Ying

    2014-01-01

    The hospital environment is characterized by time pressure, uncertain information, conflicting goals, high stakes, stress, and dynamic conditions. These demands mean there is a need for nurses with social problem-solving skills. This study set out to (1) investigate the social problem-solving ability of Chinese baccalaureate nursing students in Macao and (2) identify the associations between communication skill, clinical interaction, interpersonal dysfunction, and social problem-solving ability. All nursing students were recruited from one public institute through the census method. The research design was exploratory, cross-sectional, and quantitative. The study used the Chinese version of the Social Problem Solving Inventory short form (C-SPSI-R), the Communication Ability Scale (CAS), the Clinical Interactive Scale (CIS), and the Interpersonal Dysfunction Checklist (IDC). Macao nursing students were more likely to use the two constructive or adaptive dimensions rather than the three dysfunctional dimensions of the C-SPSI-R to solve their problems. Multiple linear regression analysis revealed that communication ability (β=.305, p<.0001), clinical interaction (β=.129, p=.047), and interpersonal dysfunction (β=-.402, p<.0001) were associated with social problem-solving after controlling for covariates. Macao has had no problem-solving training in its educational curriculum; effective problem-solving training should be implemented as part of the curriculum. With so many changes in healthcare today, nurses must be good social problem-solvers in order to deliver holistic care. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Effects of problem-solving interventions on aggressive behaviours among primary school pupils in Ibadan, Nigeria.

    PubMed

    Abdulmalik, Jibril; Ani, Cornelius; Ajuwon, Ademola J; Omigbodun, Olayinka

    2016-01-01

    Aggressive patterns of behavior often start early in childhood and tend to remain stable into adulthood. The negative consequences include poor academic performance, disciplinary problems, and encounters with the juvenile justice system. Early school intervention programs can alter this trajectory for aggressive children. However, there are no studies evaluating the feasibility of such interventions in Africa. This study therefore assessed the effect of group-based problem-solving interventions on aggressive behaviors among primary school pupils in Ibadan, Nigeria. This was an intervention study with treatment and wait-list control groups. Two public primary schools in Ibadan, Nigeria were randomly allocated to an intervention group and a waiting-list control group. Teachers rated male Primary 5 pupils in the two schools on aggressive behaviors, and the 20 highest scorers in each school were selected. Pupils in the intervention school received six twice-weekly sessions of group-based intervention, which included problem-solving skills, calming techniques, and attribution retraining. Outcome measures were: teacher-rated aggressive behaviour (TRAB), the self-rated aggression scale (SRAS), the Strengths and Difficulties Questionnaire (SDQ), the attitude towards aggression questionnaire (ATAQ), and the social cognition and attribution scale (SCAS). The participants had a mean age of 12 years (SD = 1.2, range 9-14 years). Both groups had similar socio-demographic backgrounds and baseline measures of aggressive behaviors. Controlling for baseline scores, the intervention group had significantly lower scores on the TRAB and SRAS one week post-intervention, with large Cohen's effect sizes of 1.2 and 0.9, respectively. The other outcome measures were not significantly different between the groups post-intervention. Group-based problem-solving intervention for aggressive behaviors among primary school students produced significant reductions in both teacher- and student-rated aggressive behaviours, with large effect sizes. However, this was a small exploratory trial whose findings may not be generalizable; it nevertheless demonstrates that psychological interventions for children with high levels of aggressive behaviour are feasible and potentially effective in Nigeria.

  14. Naturalness of Electroweak Symmetry Breaking

    NASA Astrophysics Data System (ADS)

    Espinosa, J. R.

    2007-02-01

    After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine tuning problem of electroweak symmetry breaking in two main scenarios beyond the Standard Model: SUSY and Little Higgs models. The main conclusions are that New Physics should appear within the reach of the LHC; that some SUSY models can solve the hierarchy problem with acceptable residual fine tuning; and, finally, that Little Higgs models generically suffer from large tunings, often hidden.
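
    Fine tuning in this context is conventionally quantified by the Barbieri-Giudice measure,

      $$\Delta \equiv \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln p_i} \right|,$$

    where the $p_i$ are the fundamental parameters of the model; a model with $\Delta = 100$ is tuned at the 1% level. (This standard definition is quoted here for orientation; the talk's specific estimates follow its own conventions.)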

  15. Interface COMSOL-PHREEQC (iCP), an efficient numerical framework for the solution of coupled multiphysics and geochemistry

    NASA Astrophysics Data System (ADS)

    Nardi, Albert; Idiart, Andrés; Trinchero, Paolo; de Vries, Luis Manuel; Molinero, Jorge

    2014-08-01

    This paper presents the development, verification, and application of an efficient interface, denoted iCP, which couples two standalone simulation programs: the general-purpose finite element framework COMSOL Multiphysics® and the geochemical simulator PHREEQC. The main goal of the interface is to maximize the synergies between the aforementioned codes, providing a numerical platform that can efficiently simulate a wide range of multiphysics problems coupled with geochemistry. iCP is written in Java and uses the IPhreeqc C++ dynamic library and the COMSOL Java-API. Given the large computational requirements of the resulting coupled models, special emphasis has been placed on numerical robustness and efficiency. To this end, the geochemical reactions are solved in parallel by balancing the computational load over multiple threads. First, a benchmark exercise is used to test the reliability of iCP regarding flow and reactive transport. Then, a large-scale thermo-hydro-chemical (THC) problem is solved to show the code's capabilities. The results of the verification exercise compare successfully with those obtained using PHREEQC, and the application case demonstrates the scalability of a large-scale model, at least up to 32 threads.
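
    Couplings of this kind typically advance the solution by sequential operator splitting: a transport step over the whole domain, then independent geochemical solves per cell, which is where thread-level parallelism pays off. A schematic sketch (transport_step and chemistry_step are hypothetical stand-ins for the COMSOL and IPhreeqc calls, not iCP's API):

      from concurrent.futures import ThreadPoolExecutor

      def coupled_step(cells, dt, transport_step, chemistry_step, n_threads=4):
          # Sequential operator splitting: advect/diffuse the whole field,
          # then react cell by cell. The chemistry solves are independent,
          # so they parallelize trivially over a thread pool.
          cells = transport_step(cells, dt)
          with ThreadPoolExecutor(max_workers=n_threads) as pool:
              cells = list(pool.map(lambda c: chemistry_step(c, dt), cells))
          return cells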

  16. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    NASA Astrophysics Data System (ADS)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in the functioning of the soil-water system, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater, and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify, since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theory correctly describes the processes, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the objective of sustainable soil and water management. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in unsaturated zone hydrology. In this presentation, a review of current moditoring strategies and techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow identifying research needs in the interdisciplinary domain of modelling and monitoring, and improving the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.

  17. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present the detailed techniques employed for GPU implementation. The authors also use this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of the beamlet price, the first step in the PP, is accomplished using the multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and the MP are implemented on a CPU or a single GPU due to their modest problem scale and computational load. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP. A head and neck (H and N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. Results: The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
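
    The data layout described above — one COO master copy on the CPU, split into per-angle CSR submatrices for the devices — can be sketched with SciPy (the round-robin angle-to-GPU assignment and all names here are our assumptions, not the authors' code):

      import numpy as np
      from scipy.sparse import coo_matrix

      def split_ddc_by_angle(rows, cols, vals, shape, angle_of_col, n_gpus=4):
          # Partition a COO-format DDC matrix into per-GPU CSR submatrices by
          # beam angle; angle_of_col gives the beam-angle index per beamlet.
          gpu_of_col = angle_of_col % n_gpus  # assumed round-robin mapping
          parts = []
          for g in range(n_gpus):
              mask = gpu_of_col[cols] == g
              sub = coo_matrix((vals[mask], (rows[mask], cols[mask])),
                               shape=shape)
              parts.append(sub.tocsr())  # CSR for fast row-wise SpMV on device
          return parts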

  18. A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.

  19. Coupling molecular dynamics with lattice Boltzmann method based on the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Tan, Jifu; Sinno, Talid; Diamond, Scott

    2017-11-01

    The study of viscous fluid flow coupled with rigid or deformable solids has many applications in biological and engineering problems, e.g., blood cell transport, drug delivery, and particulate flow. We developed a partitioned approach to solve this coupled multiphysics problem. The fluid motion was solved by Palabos (Parallel Lattice Boltzmann Solver), while the solid displacement and deformation were simulated by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The coupling was achieved through the immersed boundary method (IBM). The code modeled both rigid and deformable solids exposed to flow. It was validated against the classic problems of rigid ellipsoid particle orbit in shear flow, blood cell stretching tests, and effective blood viscosity, and demonstrated essentially linear scaling over 16 cores. An example of the fluid-solid coupling is given for the transport of flexible filaments (drug carriers) in a flowing blood cell suspension, highlighting the advantages and capabilities of the developed code. NIH 1U01HL131053-01A1.

  20. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the global optimal solution.

  1. Physical activity problem-solving inventory for adolescents: development and initial validation.

    PubMed

    Thompson, Debbe; Bhatt, Riddhi; Watson, Kathy

    2013-08-01

    Youth encounter physical activity barriers, often called problems. The purpose of problem solving is to generate solutions to overcome the barriers. Enhancing problem-solving ability may enable youth to be more physically active. Therefore, a method for reliably assessing physical activity problem-solving ability is needed. The purpose of this research was to report the development and initial validation of the physical activity problem-solving inventory for adolescents (PAPSIA). Qualitative and quantitative procedures were used. The social problem-solving inventory for adolescents guided the development of the PAPSIA scale. Youth (14- to 17-year-olds) were recruited using standard procedures, such as distributing flyers in the community and to organizations likely to be attended by adolescents. Cognitive interviews were conducted in person. Adolescents completed pen and paper versions of the questionnaire and/or scales assessing social desirability, self-reported physical activity, and physical activity self-efficacy. An expert panel review, cognitive interviews, and a pilot study (n = 129) established content validity. Construct, concurrent, and predictive validity were also established (n = 520 youth). PAPSIA is a promising measure for assessing youth physical activity problem-solving ability. Future research will assess its validity with objectively measured physical activity.

  2. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and the complexity of the node systems/interactions increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the application scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, increasing the scalability of the approach with respect to the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
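
    The abstract poses its conditions for MATLAB's LMI toolbox; the same kind of feasibility problem can be posed in Python with cvxpy. Below is a single-node Lyapunov stabilisability check of our own construction, not the paper's distributed conditions (requires an SDP-capable solver such as SCS, which ships with cvxpy):

      import numpy as np
      import cvxpy as cp

      # LMI feasibility: find P > 0 with A^T P + P A < 0, certifying that
      # the node dynamic x' = A x is stable.
      A = np.array([[0.0, 1.0], [-2.0, -0.5]])
      P = cp.Variable((2, 2), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(2),
                     A.T @ P + P @ A << -eps * np.eye(2)]
      prob = cp.Problem(cp.Minimize(0), constraints)
      prob.solve()
      print(P.value)  # a Lyapunov certificate if the problem is feasible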

  3. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design

    PubMed Central

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei

    2016-01-01

    Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to a widely used protein design software OSPREY, to allow the original design framework to scale to the commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  4. Conducting Automated Test Assembly Using the Premium Solver Platform Version 7.0 with Microsoft Excel and the Large-Scale LP/QP Solver Engine Add-In

    ERIC Educational Resources Information Center

    Cor, Ken; Alves, Cecilia; Gierl, Mark J.

    2008-01-01

    This review describes and evaluates a software add-in created by Frontline Systems, Inc., that can be used with Microsoft Excel 2007 to solve large, complex test assembly problems. The combination of Microsoft Excel 2007 with the Frontline Systems Premium Solver Platform is significant because Microsoft Excel is the most commonly used spreadsheet…

  5. Student Performance and Attitudes in a Collaborative and Flipped Linear Algebra Course

    ERIC Educational Resources Information Center

    Murphy, Julia; Chang, Jen-Mei; Suaray, Kagba

    2016-01-01

    Flipped learning is gaining traction in K-12 for enhancing students' problem-solving skills at an early age; however, there is relatively little large-scale research showing its effectiveness in promoting better learning outcomes in higher education, especially in mathematics classes. In this study, we examined the data compiled from both…

  6. Discriminant WSRC for Large-Scale Plant Species Recognition.

    PubMed

    Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong

    2017-01-01

    In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, consisting of two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen based on the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and the leaf category is then assigned through the minimum reconstruction error. Different from traditional SRC and its improved approaches, we sparsely represent the test sample over a subdictionary whose base elements are the training samples of the selected similar class, instead of using a generic overcomplete dictionary over the entire training set. Thus, the complexity of solving the sparse representation problem is reduced. Moreover, DWSRC adapts to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and a high recognition rate, and can be clearly interpreted.
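
    For contrast with the subdictionary scheme, the plain SRC baseline that DWSRC improves on can be written in a few lines with scikit-learn's Lasso as the sparse coder (parameter choices are illustrative; this is not the paper's weighted variant):

      import numpy as np
      from sklearn.linear_model import Lasso

      def src_classify(y, D, labels, alpha=0.01):
          # D: columns are training samples; labels: class of each column.
          # Sparsely code the test sample y over D, then assign the class
          # with the smallest class-restricted reconstruction error.
          model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
          model.fit(D, y)
          x = model.coef_
          errs = {}
          for c in np.unique(labels):
              xc = np.where(labels == c, x, 0.0)  # keep class-c coefficients
              errs[c] = np.linalg.norm(y - D @ xc)
          return min(errs, key=errs.get)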

  7. The relationship between family functioning and the crime types in incarcerated children.

    PubMed

    Teker, Kamil; Topçu, Seda; Başkan, Sevgi; Orhon, Filiz Ş; Ulukol, Betül

    2017-06-01

    We investigated the relationship between family functioning and crime types in incarcerated children. One hundred eighty-two incarcerated children aged 13-18 years, confined in child-youth prisons and child correctional facilities, were enrolled into this descriptive study. Participants completed demographic questions and the McMaster Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) in face-to-face interviews. The crime types were theft, assault (bodily injury), robbery, sexual assault, drug trafficking, and murder. When socio-demographic characteristics were compared on the FAD, children who grew up in a nuclear family had significantly better scores on the problem-solving and communication subscales, and children whose parents owned their own house had significantly better problem-solving scores. When we compared crime types, on the problem-solving, communication, and general-functioning subscales the assault (bodily injury) group scored lower than the theft, sexual assault, and murder groups, and the drug trafficking group scored lower than the murder group; on the problem-solving and general-functioning subscales the drug trafficking group scored lower than the theft group; and on the problem-solving subscale the assault (bodily injury) group scored lower than the robbery and theft groups, and the drug trafficking group lower than the theft group. The communication and problem-solving subscales of the FAD are the first to be impaired for incarcerated children. These subscales were associated with unplanned and less serious crimes, which we interpret as a cry for help from the children.

  8. An Extended, Problem-Based Learning Laboratory Exercise on the Diagnosis of Infectious Diseases Suitable for Large Level 1 Undergraduate Biology Classes

    ERIC Educational Resources Information Center

    Tatner, Mary; Tierney, Anne

    2016-01-01

    The development and evaluation of a two-week laboratory class, based on the diagnosis of human infectious diseases, is described. It can easily be scaled up or down to suit class sizes from 50 to 600, completed in a shorter time scale, and adapted to different audiences as desired. Students employ a range of techniques to solve a real-life and…

  9. Radiative Natural Supersymmetry with Mixed Axion/Higgsino Cold Dark Matter

    NASA Astrophysics Data System (ADS)

    Baer, Howard

    Models of natural supersymmetry seek to solve the little hierarchy problem by positing a spectrum of light higgsinos ≲ 200 GeV and light top squarks ≲ 500 GeV along with very heavy squarks and TeV-scale gluinos. Such models have low electroweak finetuning and are safe from LHC searches. However, in the context of the MSSM, they predict too low a value of m_h, and the relic density of thermally produced higgsino-like WIMPs falls well below dark matter (DM) measurements. Allowing for a high-scale soft SUSY breaking Higgs mass m_{H_u} > m_0 leads to natural cancellations during RG running, and to radiatively induced low finetuning at the electroweak scale. This model of radiative natural SUSY (RNS), with large mixing in the top squark sector, allows for finetuning at the 5-10% level with TeV-scale top squarks and a 125 GeV light Higgs scalar h. If the strong CP problem is solved via the PQ mechanism, then we expect an axion-higgsino admixture of dark matter, where either or both of the DM particles might be directly detected.

  11. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

    The Newton method has some shortcomings, which include the computation of the Jacobian matrix, which may be difficult or even impossible, and the solution of the Newton system at every iteration. A common setback with some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse Hk of PSB is updated and its efficiency is improved, thereby requiring low memory storage, which is the main aim of this paper. The preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
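
    For reference, the standard PSB update of a Jacobian approximation B_k (the paper applies the analogous update to the approximate inverse H_k) satisfies the secant condition B_{k+1} s_k = y_k while keeping the correction symmetric. A direct transcription:

      import numpy as np

      def psb_update(B, s, y):
          # Powell-Symmetric-Broyden update, with step s = x_{k+1} - x_k
          # and yield y = F(x_{k+1}) - F(x_k).
          r = y - B @ s                  # residual of the secant condition
          ss = s @ s
          return (B + (np.outer(r, s) + np.outer(s, r)) / ss
                    - (s @ r) / ss**2 * np.outer(s, s))

    One can check directly that the updated matrix maps s to y: the second term contributes r plus s (rᵀs)/ss, and the third removes s (sᵀr)/ss, leaving B s + r = y.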

  12. A Case Study in an Integrated Development and Problem Solving Environment

    ERIC Educational Resources Information Center

    Deek, Fadi P.; McHugh, James A.

    2003-01-01

    This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…

  13. A hybrid Dantzig-Wolfe, Benders decomposition and column generation procedure for multiple diet production planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Udomsungworagul, A.; Charnsethikul, P.

    2018-03-01

    This article introduces a methodology to solve large-scale two-phase linear programming, with a case study of multiple-time-period animal diet problems under uncertainty in both the nutrient content of raw materials and the demand for finished products. The assumptions that multiple product formulas may be manufactured in the same time period and that raw materials and finished products may be held in inventory have been added. Dantzig-Wolfe decomposition, Benders decomposition, and column generation techniques have been combined and applied to solve the problem. The proposed procedure was programmed using VBA and the Solver tool in Microsoft Excel. A case study was used and tested in terms of efficiency and effectiveness trade-offs.

  14. Influence of Distributed Residential Energy Storage on Voltage in Rural Distribution Network and Capacity Configuration

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Tong, Yibin; Zhao, Zhigang; Zhang, Xuefen

    2018-03-01

    Large-scale access of distributed residential photovoltaics (PV) in rural areas has solved the voltage problem to a certain extent. However, due to the intermittency of PV and the particularities of rural residential load, the problem of low voltage during the evening peak remains to be resolved. This paper proposes to solve the problem by connecting residential energy storage. Firstly, the influence of the access location and capacity of energy storage on the voltage distribution in a rural distribution network is analyzed. Secondly, the relation between storage capacity and load capacity is derived for four typical load and energy storage cases in which the voltage deviation meets the demand. Finally, the optimal storage position and capacity are obtained using PSO and power flow simulation.

  15. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a carefully hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of graphics hardware, without compromising the accuracy of the reconstructed images.

  16. Mesoscale modeling: solving complex flows in biology and biotechnology.

    PubMed

    Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander

    2013-07-01

    Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Contribution of problem-solving skills to fear of recurrence in breast cancer survivors.

    PubMed

    Akechi, Tatuo; Momino, Kanae; Yamashita, Toshinari; Fujita, Takashi; Hayashi, Hironori; Tsunoda, Nobuyuki; Iwata, Hiroji

    2014-05-01

    Although fear of recurrence is a major concern among breast cancer survivors after surgery, no standard strategies exist to alleviate their distress. This study examined the association of patients' problem-solving skills with fear of recurrence and psychological distress among breast cancer survivors. Randomly selected, ambulatory, female patients with breast cancer participated in this study. They were asked to complete the Concerns about Recurrence Scale (CARS) and the Hospital Anxiety and Depression Scale. Multiple regression analyses were used to examine the associations. Data were obtained from 317 patients. Patients' problem-solving skills were significantly associated with all subscales of fear of recurrence and with overall worries measured by the CARS. In addition, patients' problem-solving skills were significantly associated with both their anxiety and depression. Our findings warrant clinical trials to investigate the effectiveness of psychosocial intervention programs, including enhancing patients' problem-solving skills, in reducing fear of recurrence among breast cancer survivors.

  18. Does Problem-Solving Training for Family Caregivers Benefit Their Care Recipients With Severe Disabilities? A Latent Growth Model of the Project CLUES Randomized Clinical Trial

    PubMed Central

    Berry, Jack W.; Elliott, Timothy R.; Grant, Joan S.; Edwards, Gary; Fine, Philip R.

    2012-01-01

    Objective To examine whether an individualized problem-solving intervention provided to family caregivers of persons with severe disabilities provides benefits to both caregivers and their care recipients. Design Family caregivers were randomly assigned to an education-only control group or a problem-solving training (PST) intervention group. Participants received monthly contacts for 1 year. Participants Family caregivers (129 women, 18 men) and their care recipients (81 women, 66 men) consented to participate. Main Outcome Measures Caregivers completed the Social Problem-Solving Inventory–Revised, the Center for Epidemiological Studies-Depression scale, the Satisfaction with Life scale, and a measure of health complaints at baseline and in 3 additional assessments throughout the year. Care recipient depression was assessed with a short form of the Hamilton Depression Scale. Results Latent growth modeling was used to analyze data from the dyads. Caregivers who received PST reported a significant decrease in depression over time, and they also displayed gains in constructive problem-solving abilities and decreases in dysfunctional problem-solving abilities. Care recipients displayed significant decreases in depression over time, and these decreases were significantly associated with decreases in caregiver depression in response to training. Conclusions PST significantly improved the problem-solving skills of community-residing caregivers and also lessened their depressive symptoms. Care recipients in the PST group also had reductions in depression over time, and it appears that decreases in caregiver depression may account for this effect. PMID:22686549

  20. Adaptation of Social Problem Solving for Children Questionnaire in 6 Age Groups and its Relationships with Preschool Behavior Problems

    ERIC Educational Resources Information Center

    Dereli-Iman, Esra

    2013-01-01

    The Social Problem Solving for Children scale is widely used abroad to assess children's behavioral problems in their own words and to identify how they handle conflicts encountered in daily life and in interpersonal relationships. The primary purpose of this study was to adapt the Wally Child Social Problem-Solving Detective Game Test. In order to…

  1. Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the finite element method (FEM), which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
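
    As a concrete (if much smaller) illustration of this pipeline, the sketch below sets up a sparse generalized eigenproblem and solves it with a shift-invert Lanczos-type Krylov method, in which each iteration is dominated by SPMV. The 1-D Laplacian stiffness matrix and lumped identity mass matrix are stand-ins for the cavity FEM matrices, not the SLAC model.

      # Sketch: smallest eigenpairs of the generalized problem K x = lambda M x
      # via shift-invert Lanczos; each Krylov iteration is SPMV-bound.
      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      n = 10000
      K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')  # stiffness stand-in
      M = sp.identity(n, format='csr')                                          # lumped mass

      vals, vecs = eigsh(K, k=6, M=M, sigma=0.0, which='LM')  # 6 eigenvalues nearest the shift 0
      print(vals)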

  2. On the role of minicomputers in structural design

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1977-01-01

    Results are presented of exploratory studies on the use of a minicomputer in conjunction with large-scale computers to perform structural design tasks, including data and program management, use of interactive graphics, and computations for structural analysis and design. An assessment is made of minicomputer use for the structural model definition and checking and for interpreting results. Included are results of computational experiments demonstrating the advantages of using both a minicomputer and a large computer to solve a large aircraft structural design problem.

  3. Dispositional Insight Scale: Development and Validation of a Tool That Measures Propensity toward Insight in Problem Solving

    ERIC Educational Resources Information Center

    Ovington, Linda A.; Saliba, Anthony J.; Goldring, Jeremy

    2016-01-01

    This article reports the development of a brief self-report measure of dispositional insight problem solving, the Dispositional Insight Scale (DIS). From a representative Australian database, 1,069 adults (536 women and 533 men) completed an online questionnaire. An exploratory and confirmatory factor analysis revealed a 5-item scale, with all…

  4. The solution of large multi-dimensional Poisson problems

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1974-01-01

    The Buneman algorithm for solving Poisson problems can be adapted to solve large Poisson problems on computers with a rotating drum memory so that the computation is done with very little time lost due to rotational latency of the drum.

  5. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.

  6. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also produce the greatest cost. It is therefore important to keep transportation costs as low as possible. One way to reduce transportation cost is to optimize the routing of the vehicles, which leads to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, each vehicle has a fixed capacity, and the total demand of the customers on a route must not exceed that capacity. CVRP belongs to the class of NP-hard problems, so exact algorithms become highly time-consuming as problem size increases. For large-scale problem instances, as typically found in industrial applications, finding an optimal solution is therefore not practicable. This paper applies two metaheuristic approaches, Genetic Algorithm and Particle Swarm Optimization, to CVRP and compares the performance of the two algorithms. The results show that both algorithms perform well in solving CVRP but leave room for improvement. In the algorithm tests and numerical example, Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
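
    Both metaheuristics need the same fitness evaluation; below is a minimal sketch with invented coordinates, demands, and capacity, splitting a customer permutation greedily into capacity-feasible routes and summing route distances.

      # Sketch: total-distance fitness for a CVRP chromosome (a permutation of
      # customers), split greedily into routes that respect vehicle capacity.
      import numpy as np

      rng = np.random.default_rng(0)
      coords = rng.random((11, 2))              # node 0 is the depot, 1..10 are customers
      demand = rng.integers(1, 5, size=11)
      demand[0] = 0
      CAPACITY = 10

      def route_length(route):
          path = [0] + list(route) + [0]        # depot -> customers -> depot
          return sum(np.linalg.norm(coords[a] - coords[b]) for a, b in zip(path, path[1:]))

      def fitness(perm):
          total, route, load = 0.0, [], 0
          for c in perm:
              if load + demand[c] > CAPACITY:   # capacity exceeded: start a new vehicle
                  total += route_length(route)
                  route, load = [], 0
              route.append(c)
              load += demand[c]
          return total + route_length(route)

      print(fitness(rng.permutation(np.arange(1, 11))))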

  7. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, because practical problems often involve a large number of measurements and numerous model parameters, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
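
    The damped solve at the core of each Levenberg-Marquardt iteration, (J^T J + mu I) d = -J^T r, can be carried out by a Krylov method using only matrix-vector products, which is the idea the subspace-recycling scheme above builds on. A minimal single-step sketch with a stand-in Jacobian (the recycling itself is not shown):

      # Sketch: one Levenberg-Marquardt step, (J^T J + mu I) d = -J^T r, solved
      # with conjugate gradients so only products with J and J^T are needed.
      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      rng = np.random.default_rng(1)
      J = rng.standard_normal((500, 200))   # stand-in sensitivity (Jacobian) matrix
      r = rng.standard_normal(500)          # current residual vector
      mu = 1e-2                             # damping parameter

      normal_op = LinearOperator((200, 200), matvec=lambda v: J.T @ (J @ v) + mu * v)
      d, info = cg(normal_op, -J.T @ r)     # Krylov solve of the damped normal equations
      assert info == 0                      # 0 means CG converged
      print(np.linalg.norm(d))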

  8. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because practical problems often involve a large number of measurements and numerous model parameters, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  9. Fidelity of Problem Solving in Everyday Practice: Typical Training May Miss the Mark

    ERIC Educational Resources Information Center

    Ruby, Susan F.; Crosby-Cooper, Tricia; Vanderwood, Michael L.

    2011-01-01

    With national attention on scaling up the implementation of Response to Intervention, problem solving teams remain one of the central components for development, implementation, and monitoring of school-based interventions. Studies have shown that problem solving teams evidence a sound theoretical base and demonstrated efficacy; however, limited…

  10. Functional reasoning in diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Sticklen, Jon; Bond, W. E.; Stclair, D. C.

    1988-01-01

    This work is one facet of an integrated approach to diagnostic problem solving for aircraft and space systems currently under development. The authors are applying a method of modeling and reasoning about deep knowledge based on a functional viewpoint. The approach recognizes a level of device understanding which is intermediate between a compiled level of typical Expert Systems, and a deep level at which large-scale device behavior is derived from known properties of device structure and component behavior. At this intermediate functional level, a device is modeled in three steps. First, a component decomposition of the device is defined. Second, the functionality of each device/subdevice is abstractly identified. Third, the state sequences which implement each function are specified. Given a functional representation and a set of initial conditions, the functional reasoner acts as a consequence finder. The output of the consequence finder can be utilized in diagnostic problem solving. The paper also discusses ways in which this functional approach may find application in the aerospace field.

  11. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to the groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
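
    The POD reduction named above amounts to an SVD of a snapshot matrix; a minimal sketch with synthetic low-rank snapshots standing in for groundwater-model states:

      # Sketch: Proper Orthogonal Decomposition of model snapshots, keeping the
      # leading modes so candidate designs can be evaluated in the reduced space.
      import numpy as np

      rng = np.random.default_rng(2)
      modes = rng.standard_normal((5000, 8))          # hidden low-rank structure
      coeffs = rng.standard_normal((8, 60))
      snapshots = modes @ coeffs + 0.01 * rng.standard_normal((5000, 60))

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      k = int(np.searchsorted(energy, 0.99)) + 1      # modes capturing 99% of energy
      Phi = U[:, :k]                                  # reduced basis

      x = snapshots[:, 0]                             # a full state is approximated by
      err = np.linalg.norm(x - Phi @ (Phi.T @ x)) / np.linalg.norm(x)   # Phi (Phi^T x)
      print(k, err)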

  12. Using Coaching to Improve the Teaching of Problem Solving to Year 8 Students in Mathematics

    ERIC Educational Resources Information Center

    Kargas, Christine Anestis; Stephens, Max

    2014-01-01

    This study investigated how to improve the teaching of problem solving in a large Melbourne secondary school. Coaching was used to support and equip five teachers, some with limited experiences in teaching problem solving, with knowledge and strategies to build up students' problem solving and reasoning skills. The results showed increased…

  13. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  14. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  15. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  16. Adding intelligence to scientific data management

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Treinish, Lloyd A.

    1989-01-01

    NASA's plans to solve some of the problems of handling large-scale scientific databases by turning to artificial intelligence (AI) are discussed. The growth of the information glut and the ways in which AI can help alleviate the resulting problems are reviewed. The employment of the Intelligent User Interface prototype, in which the user generates a natural language query with the assistance of the system, is examined. Spatial data management, scientific data visualization, and data fusion are discussed.

  17. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.
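
    For contrast, the direct O(N^2) evaluation that the FMM replaces looks like the following sketch (synthetic charges and positions; a production FMM would be used through a library, not hand-coded):

      # Sketch: the direct O(N^2) Coulomb-type sum that FMM accelerates to O(N).
      import numpy as np

      rng = np.random.default_rng(3)
      N = 2000
      pos = rng.random((N, 3))              # particle positions
      q = rng.standard_normal(N)            # charges / moments

      phi = np.zeros(N)
      for i in range(N):                    # O(N^2): pairwise Green's-function sum
          r = np.linalg.norm(pos - pos[i], axis=1)
          r[i] = np.inf                     # exclude self-interaction
          phi[i] = np.sum(q / r)
      print(phi[:3])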

  18. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N^2) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  19. Special issue of Computers and Fluids in honor of Cecil E. (Chuck) Leith

    DOE PAGES

    Zhou, Ye; Herring, Jackson

    2017-05-12

    Here, this special issue of Computers and Fluids is dedicated to Cecil E. (Chuck) Leith in honor of his research contributions and leadership in the areas of statistical fluid mechanics, computational fluid dynamics, and climate theory. Leith's contribution to these fields emerged from his interest in solving complex fluid flow problems--even those at high Mach numbers--in an era well before large scale supercomputing became the dominant mode of inquiry into these fields. Yet the issues raised and solved by his research effort are still of vital interest today.

  20. A Block-LU Update for Large-Scale Linear Programming

    DTIC Science & Technology

    1990-01-01

    linear programming problems. Results are given from runs on the Cray Y-MP. 1. Introduction: We wish to use the simplex method [Dan63] to solve the...standard linear program, minimize c^T x subject to Ax = b, l <= x <= u, where A is an m by n matrix and c, x, l, u, and b are of appropriate dimension. The simplex...the identity matrix. The basis is used to solve for the search direction y and the dual variables π in the following linear systems: B_k y = a_q (1.2) and
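
    The two basis solves named in the fragment above are the computational heart of each simplex iteration. A minimal sketch with invented dimensions and data (dense LU via SciPy; the block-LU update bookkeeping itself is omitted):

      # Sketch: factor the basis once, then reuse the factors for B y = a_q and
      # the transposed dual system B^T pi = c_B, as in each simplex iteration.
      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      rng = np.random.default_rng(4)
      m = 50
      B = rng.standard_normal((m, m))           # current basis matrix
      a_q = rng.standard_normal(m)              # entering column
      c_B = rng.standard_normal(m)              # basic cost coefficients

      lu, piv = lu_factor(B)
      y = lu_solve((lu, piv), a_q)              # search direction:  B y = a_q
      pi = lu_solve((lu, piv), c_B, trans=1)    # dual variables:    B^T pi = c_B
      print(np.allclose(B @ y, a_q), np.allclose(B.T @ pi, c_B))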

  1. Special issue of Computers and Fluids in honor of Cecil E. (Chuck) Leith

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ye; Herring, Jackson

    Here, this special issue of Computers and Fluids is dedicated to Cecil E. (Chuck) Leith in honor of his research contributions and leadership in the areas of statistical fluid mechanics, computational fluid dynamics, and climate theory. Leith's contribution to these fields emerged from his interest in solving complex fluid flow problems--even those at high Mach numbers--in an era well before large scale supercomputing became the dominant mode of inquiry into these fields. Yet the issues raised and solved by his research effort are still of vital interest today.

  2. Hierarchical optimal control of large-scale nonlinear chemical processes.

    PubMed

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, with a coordinator at the second level, for which a two-level hierarchical control strategy is designed. Each sub-system at the first level can be solved separately using any conventional optimization algorithm. At the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear continuous stirred tank reactor (CSTR), and its solution is compared with the one obtained using the centralized approach. The simulation results show the efficiency and capability of the proposed hierarchical approach in finding the optimal solution, compared with the centralized method.

  3. Institutionalizing Large-Scale Curricular Change: The Top 25 Project at Miami University

    ERIC Educational Resources Information Center

    Hodge, David C.; Nadler, Marjorie Keeshan; Shore, Cecilia; Taylor, Beverley A. P.

    2011-01-01

    Now more than ever, it is urgent that colleges and universities mobilize themselves to produce graduates who are capable of being productive, creative, and responsible members of a global society. Employers want clear communicators who are strong critical thinkers and who can solve real-world problems in an ethical way. To achieve these outcomes,…

  4. Changing Schools from the inside out: Small Wins in Hard Times. Third Edition

    ERIC Educational Resources Information Center

    Larson, Robert

    2011-01-01

    At any time, public schools labor under great economic, political, and social pressures that make it difficult to create large-scale, "whole school" change. But current top-down mandates require that schools close achievement gaps while teaching more problem solving, inquiry, and research skills--with fewer resources. Failure to meet test-based…

  5. Methodologies for Investigating Item- and Test-Level Measurement Equivalence in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; Olson, Brent F.; Ercikan, Kadriye; Zumbo, Bruno D.

    2012-01-01

    In this study, the Canadian English and French versions of the Problem-Solving Measure of the Programme for International Student Assessment 2003 were examined to investigate their degree of measurement comparability at the item- and test-levels. Three methods of differential item functioning (DIF) were compared: parametric and nonparametric item…

  6. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
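
    The decomposition pattern named above can be illustrated with a scalar consensus ADMM toy: each "inverter" solves a small local problem and a coordinator averages, with a price-like dual update. The local preferences here are invented, not the paper's OPF formulation.

      # Sketch: consensus ADMM for min sum_i (x - a_i)^2 subject to all agents
      # agreeing on x; the skeleton of decentralized setpoint computation.
      import numpy as np

      a = np.array([1.0, 3.0, 8.0])     # each agent's locally preferred setpoint
      x = np.zeros(3)                   # local copies
      z = 0.0                           # coordinator's consensus variable
      u = np.zeros(3)                   # scaled dual (price-like) variables
      rho = 1.0

      for _ in range(100):
          x = (2 * a + rho * (z - u)) / (2 + rho)   # local solves (run in parallel)
          z = np.mean(x + u)                        # coordinator: averaging step
          u = u + x - z                             # dual update

      print(z, np.mean(a))              # consensus converges to the average

    The same pattern, with the local quadratic replaced by each inverter's subproblem, is what lets the computation be shared between the utility and customer-owned systems.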

  7. Personality, problem solving, and adolescent substance use.

    PubMed

    Jaffee, William B; D'Zurilla, Thomas J

    2009-03-01

    The major aim of this study was to examine the role of social problem solving in the relationship between personality and substance use in adolescents. Although a number of studies have identified a relationship between personality and substance use, the precise mechanism by which this occurs is not clear. We hypothesized that problem-solving skills could be one such mechanism. More specifically, we sought to determine whether problem solving mediates, moderates, or both mediates and moderates the relationship between different personality traits and substance use. Three hundred and seven adolescents were administered the Substance Use Profile Scale, the Social Problem-Solving Inventory-Revised, and the Personality Experiences Inventory to assess substance use, social problem-solving ability, and personality, respectively. Results showed that the dimension of rational problem solving (i.e., effective problem-solving skills) significantly mediated the relationship between hopelessness and lifetime alcohol and marijuana use. The theoretical and clinical implications of these results were discussed.

  8. Social problem-solving in Chinese baccalaureate nursing students.

    PubMed

    Fang, Jinbo; Luo, Ying; Li, Yanhua; Huang, Wenxia

    2016-11-01

    To describe social problem solving in Chinese baccalaureate nursing students. A descriptive cross-sectional study was conducted with a cluster sample of 681 Chinese baccalaureate nursing students. The Chinese version of the Social Problem-Solving scale was used. Descriptive analyses, independent t-tests, and one-way analysis of variance were applied to analyze the data. The final-year nursing students presented the highest scores for positive social problem-solving skills. Students with experience of self-directed and problem-based learning presented significantly higher scores on the Positive Problem Orientation subscale. The group with critical thinking training experience, however, displayed higher negative problem-solving scores compared with the group without such experience. Social problem-solving abilities varied with teaching-learning strategies. Self-directed and problem-based learning may be recommended as effective ways to improve social problem-solving ability. © 2016 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  9. DESCRIPTION OF THE ENIAC CONVERTER CODE

    DTIC Science & Technology

    The report is intended as a working manual for personnel preparing problems for the ENIAC. It should also serve as a guide to those groups who have...computing problems that could be solved on the ENIAC. The report discusses the ENIAC from the point of view of the coder, describing its memory as well...accomplishes as well as how to use each instruction. A few remarks are made on the more general subject of problem preparation for large-scale computers, based on the experience of operating the ENIAC. (Author)

  10. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  11. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems involving gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware which exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
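
    A toy rendition of that curvature test follows; it is not the authors' algorithm (the interior-point filter machinery is omitted), just the core idea applied to a bare Newton step on an invented indefinite Hessian: test curvature along the computed step and add a multiple of the identity until the test passes.

      # Sketch: inertia-free convexification -- solve with the (possibly
      # regularized) Hessian, test curvature along the step, escalate if needed.
      import numpy as np

      def newton_step_with_curvature_test(H, g, delta0=1e-4, kappa=10.0):
          n = len(g)
          delta = 0.0
          while True:
              W = H + delta * np.eye(n)           # convexified Hessian
              d = np.linalg.solve(W, -g)
              if d @ (W @ d) > 1e-8 * (d @ d):    # curvature test along the step
                  return d, delta
              delta = delta0 if delta == 0.0 else kappa * delta   # convexify, retry

      H = np.array([[1.0, 0.0], [0.0, -2.0]])     # indefinite: inertia unknown a priori
      g = np.array([0.0, 1.0])
      d, delta = newton_step_with_curvature_test(H, g)
      print(d, delta)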

  13. Price schedules coordination for electricity pool markets

    NASA Astrophysics Data System (ADS)

    Legbedji, Alexis Motto

    2002-04-01

    We consider the optimal coordination of a class of mathematical programs with equilibrium constraints, which is formally interpreted as a resource-allocation problem. Many decomposition techniques were proposed to circumvent the difficulty of solving large systems with limited computer resources. The considerable improvement in computer architecture has allowed the solution of large-scale problems with increasing speed; consequently, interest in decomposition techniques has waned. Nonetheless, there is an important class of applications for which decomposition techniques are still relevant, among others distributed systems---the Internet, perhaps, being the most conspicuous example---and competitive economic systems. Conceptually, a competitive economic system is a collection of agents that have similar or different objectives while sharing the same system resources. In theory, such a system of agents could be optimized by constructing a large-scale mathematical program and solving it centrally with currently available computing power. In practice, however, because agents are self-interested and unwilling to reveal sensitive corporate data, one cannot solve these kinds of coordination problems by simply maximizing the sum of the agents' objective functions subject to their constraints. An iterative price decomposition or Lagrangian dual method is considered best suited because it can operate with limited information. A price-directed strategy, however, can only work successfully when coordinating or equilibrium prices exist, which is not generally the case when a duality gap is unavoidable. Showing when such prices exist and how to compute them is the main subject of this thesis. Among our results, we show that, if the Lagrangian function of a primal program is additively separable, price schedule coordination may be attained. The prices are Lagrange multipliers, and are also the decision variables of a dual program. In addition, we propose a new form of augmented or nonlinear pricing, which is an example of the use of penalty functions in mathematical programming. Applications are drawn from mathematical programming problems of the form arising in electric power system scheduling under competition.
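
    In its simplest form, the price-directed coordination discussed above is a dual (subgradient) iteration: each agent responds to a posted price, and the coordinator moves the price by the excess demand. A scalar toy with two log-utility agents and an invented shared capacity:

      # Sketch: Lagrangian price coordination. Each agent solves
      # max a*log(x) - price*x (best response x = a/price); the coordinator
      # adjusts the price by the excess demand (dual subgradient ascent).
      import numpy as np

      C = 4.0                                   # shared resource capacity

      def agent_demand(price, a):               # agent's closed-form best response
          return a / price

      price, alpha = 1.0, 0.05
      for _ in range(500):
          demand = agent_demand(price, 2.0) + agent_demand(price, 3.0)
          price = max(1e-6, price + alpha * (demand - C))   # excess demand raises price

      print(price, demand)                      # equilibrium: demand ~ C, price ~ 1.25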

  14. Solving lot-sizing problem with quantity discount and transportation cost

    NASA Astrophysics Data System (ADS)

    Lee, Amy H. I.; Kang, He-Yau; Lai, Chun-Mei

    2013-04-01

    Owing to today's increasingly competitive market and ever-changing manufacturing environment, the inventory problem is becoming more complicated to solve. The incorporation of heuristic methods has become a new trend for tackling such complex problems in the past decade. This article considers a lot-sizing problem in which the objective is to minimise total costs, where the costs include ordering, holding, purchase and transportation costs, under the requirement that no inventory shortage is allowed in the system. We first formulate the lot-sizing problem as a mixed integer programming (MIP) model. Next, an efficient genetic algorithm (GA) model is constructed for solving large-scale lot-sizing problems. An illustrative example with two cases at a touch panel manufacturer is used to illustrate the practicality of these models, and a sensitivity analysis is applied to understand the impact of changes in the parameters on the outcomes. The results demonstrate that both the MIP model and the GA model are effective and relatively accurate tools for determining replenishment for touch panel manufacturing over multiple periods with quantity discounts and batch transportation. The contributions of this article are to construct an MIP model that obtains an optimal solution when the problem is not too complicated and to present a GA model that finds a near-optimal solution efficiently when the problem is complicated.

  15. The Investigation of Social Problem Solving Abilities of University Students in Terms of Perceived Social Support

    ERIC Educational Resources Information Center

    Tras, Zeliha

    2013-01-01

    The purpose of this study is to analyze university students' perceived social support and social problem solving. The participants were 827 (474 female and 353 male) university students. Data were collected with the Perceived Social Support Scale-Revised (Yildirim, 2004) and the Social Problem Solving scale (Maydeu-Olivares and D'Zurilla, 1996), translated and…

  16. Comparison of application of various crossovers in solving inhomogeneous minimax problem modified by Goldberg model

    NASA Astrophysics Data System (ADS)

    Kobak, B. V.; Zhukovskiy, A. G.; Kuzin, A. P.

    2018-05-01

    This paper considers one of the classical NP-complete problems, the inhomogeneous minimax problem. For large-scale instances of such problems, obtaining an exact solution is difficult, so we instead seek a near-optimal solution in acceptable time. Among the wide range of genetic algorithm models, we choose the modified Goldberg model, which the authors have previously applied successfully to NP-complete problems. The classical Goldberg model uses a single-point crossover and a single-point mutation, which somewhat reduces the accuracy of the results. In this article we propose using a full two-point crossover together with the various mutations researched previously. In addition, the work studies the crossover probability required to obtain more accurate results. Computational experiments showed that the higher the crossover probability, the better the quality of both the average results and the best solutions. It was also found that larger population sizes and more repetitions bring both the average results and the best solutions closer to the optimum. The paper shows that using a full two-point crossover increases the accuracy of solving the inhomogeneous minimax problem; the solution time increases but remains polynomial.
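
    For illustration, a minimal sketch of the two operators being compared, assuming a chromosome that assigns each job to one of m machines (this encoding is our assumption; the paper's exact representation is not specified in the abstract):

      # Sketch: full two-point crossover and a point mutation for a chromosome
      # assigning each of n jobs to one of m machines (inhomogeneous minimax).
      import random
      random.seed(0)

      def two_point_crossover(p1, p2):
          i, j = sorted(random.sample(range(len(p1)), 2))   # two cut points
          return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

      def mutate(chrom, n_machines):
          k = random.randrange(len(chrom))                  # reassign one random job
          return chrom[:k] + [random.randrange(n_machines)] + chrom[k + 1:]

      p1 = [0, 0, 1, 2, 1, 0, 2, 1]
      p2 = [2, 1, 0, 0, 2, 1, 1, 0]
      c1, c2 = two_point_crossover(p1, p2)
      print(c1, c2, mutate(c1, 3))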

  17. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    NASA Astrophysics Data System (ADS)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    The program package escript has been designed for solving mathematical modeling problems using Python; see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents the user with a programming environment that is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties; see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids assembling the (in general dense) sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we discuss the mathematical framework for inversion and appropriate solution schemes in escript. We also give a brief introduction to escript's open framework for defining and solving geophysical inversion problems. Finally we show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013): Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.
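
    The cost functional described above (data defect plus a regularization term on the rock property) can be sketched independently of the escript API; the sketch below uses plain NumPy with an invented linear forward operator standing in for the PDE solve:

      # Sketch: J(m) = ||F m - d||^2 + alpha * ||D m||^2, the data-defect-plus-
      # regularization cost of the inversion, minimized by gradient descent.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 100
      F = rng.standard_normal((40, n))             # stand-in linear forward operator
      m_true = np.sin(np.linspace(0, 3, n))        # "true" rock property profile
      d = F @ m_true + 0.01 * rng.standard_normal(40)   # noisy observations

      D = np.diff(np.eye(n), axis=0)               # first-difference regularizer
      alpha = 1e-2

      def grad(m):                                 # gradient of the cost functional
          return 2 * F.T @ (F @ m - d) + 2 * alpha * D.T @ (D @ m)

      m = np.zeros(n)
      for _ in range(2000):
          m -= 1e-4 * grad(m)                      # steepest-descent update
      print(np.linalg.norm(F @ m - d))             # final data defect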

  18. On the decentralized control of large-scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chong, C.

    1973-01-01

    The decentralized control of stochastic large-scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case where each decision variable depends on different information and the constraint is only required to be satisfied on average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, in either an off-line or an on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower-level problems are all uncoupled. For on-line coordination, a distinction is made between open-loop feedback optimal coordination and closed-loop optimal coordination.

  19. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. The generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also presents a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can easily be introduced into the expansion planning problem and then solved with existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information on wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.
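
    One standard way to realize such correlated regional series (the paper's exact method is not detailed in the abstract) is to color independent innovations with a Cholesky factor of a target cross-correlation matrix; the matrix below is illustrative, not EI data:

      # Sketch: correlated hourly wind/load series for three regions, imposing
      # a target cross-correlation via a Cholesky factor.
      import numpy as np

      rng = np.random.default_rng(6)
      corr = np.array([[1.0, 0.6, 0.3],
                       [0.6, 1.0, 0.5],
                       [0.3, 0.5, 1.0]])          # illustrative regional correlation
      L = np.linalg.cholesky(corr)

      hours = 8760
      z = rng.standard_normal((3, hours))         # independent innovations
      series = L @ z                              # correlated standard-normal series
      print(np.corrcoef(series).round(2))         # recovers approximately corr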

  20. Sustainable Utilization of Traditional Chinese Medicine Resources: Systematic Evaluation on Different Production Modes

    PubMed Central

    Li, Xiwen; Chen, Yuning; Yang, Qing; Wang, Yitao

    2015-01-01

    The use of medicinal plants has increased rapidly with the development of the traditional Chinese medicine industry. Rising market demand and the shortage of wild herbal resources compel large-scale introduction and cultivation. Cultivation can ease the current imbalance between medicinal resource supply and demand, but it brings new problems such as pesticide residues, plant diseases, and pests. Researchers have recently placed high hopes on natural fostering, a new method that integrates herbal production with biodiversity protection and can mitigate the problems brought by artificial cultivation. However, no single mode can solve all the problems in current herbal production. This study evaluated different production modes, including cultivation, natural fostering, and wild collection, to guide traditional Chinese medicine production toward the sustainable utilization of herbal resources. PMID:26074987

  1. The relationship between mathematical problem-solving skills and self-regulated learning through homework behaviours, motivation, and metacognition

    NASA Astrophysics Data System (ADS)

    Çiğdem Özcan, Zeynep

    2016-04-01

    Studies highlight that using appropriate strategies during problem solving is important to improve problem-solving skills and draw attention to the fact that using these skills is an important part of students' self-regulated learning ability. Studies on this matter view self-regulated learning ability as key to improving problem-solving skills. The aim of this study is to investigate the relationship between mathematical problem-solving skills and the three dimensions of self-regulated learning (motivation, metacognition, and behaviour), and whether this relationship is of a predictive nature. The sample of this study consists of 323 students from two public secondary schools in Istanbul. In this study, the mathematics homework behaviour scale was administered to measure students' homework behaviours. For metacognition measurements, the mathematics metacognition skills test for students was administered to measure offline mathematical metacognitive skills, and the metacognitive experience scale was used to measure the online mathematical metacognitive experience. The internal and external motivational scales used in the Programme for International Student Assessment (PISA) test were administered to measure motivation. A hierarchical regression analysis was conducted to determine the relationship between the dependent and independent variables in the study. Based on the findings, a model was formed in which 24% of the total variance in students' mathematical problem-solving skills is explained by the three sub-dimensions of the self-regulated learning model: internal motivation (13%), willingness to do homework (7%), and post-problem retrospective metacognitive experience (4%).
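
    A sketch of the blockwise (hierarchical) regression pattern behind those incremental percentages, on synthetic data with invented effect sizes:

      # Sketch: hierarchical regression -- add predictor blocks one at a time
      # and report each block's increment in R^2 (cf. the 13% / 7% / 4% steps).
      import numpy as np

      rng = np.random.default_rng(7)
      n = 323
      motivation = rng.standard_normal((n, 1))
      homework = rng.standard_normal((n, 1))
      metacog = rng.standard_normal((n, 1))
      y = (0.4 * motivation + 0.3 * homework + 0.2 * metacog).ravel() \
          + rng.standard_normal(n)

      def r2(X, y):
          X1 = np.column_stack([np.ones(len(y)), X])     # add intercept
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

      X, prev = None, 0.0
      for block in (motivation, homework, metacog):
          X = block if X is None else np.column_stack([X, block])
          cur = r2(X, y)
          print(round(cur - prev, 3))                    # increment for this block
          prev = cur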

  2. Effects of traumatic brain injury on a virtual reality social problem solving task and relations to cortical thickness in adolescence.

    PubMed

    Hanten, Gerri; Cook, Lori; Orsten, Kimberley; Chapman, Sandra B; Li, Xiaoqi; Wilde, Elisabeth A; Schnelle, Kathleen P; Levin, Harvey S

    2011-02-01

    Social problem solving was assessed in 28 youth ages 12-19 years (15 with moderate to severe traumatic brain injury (TBI), 13 uninjured) using a naturalistic, computerized virtual reality (VR) version of the Interpersonal Negotiations Strategy interview (Yeates, Schultz, & Selman, 1991). In each scenario, processing load condition was varied in terms of number of characters and amount of information. Adolescents viewed animated scenarios depicting social conflict in a virtual microworld environment from an avatar's viewpoint, and were questioned on four problem solving steps: defining the problem, generating solutions, selecting solutions, and evaluating the likely outcome. Scoring was based on a developmental scale in which responses were judged as impulsive, unilateral, reciprocal, or collaborative, in order of increasing score. Adolescents with TBI were significantly impaired on the summary VR-Social Problem Solving (VR-SPS) score in Condition A (2 speakers, no irrelevant information), p=0.005; in Condition B (2 speakers+irrelevant information), p=0.035; and Condition C (4 speakers+irrelevant information), p=0.008. Effect sizes (Cohen's d) were large (A=1.40, B=0.96, C=1.23). Significant group differences were strongest and most consistent for defining the problems and evaluating outcomes. The relation of task performance to cortical thickness of specific brain regions was also explored, with significant relations found with orbitofrontal regions, the frontal pole, the cuneus, and the temporal pole. Results are discussed in the context of specific cognitive and neural mechanisms underlying social problem solving deficits after childhood TBI. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Effects of Traumatic Brain Injury on a Virtual Reality Social Problem Solving Task and Relations to Cortical Thickness in Adolescence

    PubMed Central

    Hanten, Gerri; Cook, Lori; Orsten, Kimberley; Chapman, Sandra B.; Li, Xiaoqi; Wilde, Elisabeth A.; Schnelle, Kathleen P.; Levin, Harvey S.

    2011-01-01

    Social problem solving was assessed in 28 youth ages 12–19 years (15 with moderate to severe traumatic brain injury (TBI), 13 uninjured) using a naturalistic, computerized virtual reality (VR) version of the Interpersonal Negotiations Strategy interview (Yeates, Schultz, & Selman, 1991). In each scenario, processing load condition was varied in terms of number of characters and amount of information. Adolescents viewed animated scenarios depicting social conflict in a virtual microworld environment from an avatar’s viewpoint, and were questioned on four problem solving steps: defining the problem, generating solutions, selecting solutions, and evaluating the likely outcome. Scoring was based on a developmental scale in which responses were judged as impulsive, unilateral, reciprocal, or collaborative, in order of increasing score. Adolescents with TBI were significantly impaired on the summary VR-Social Problem Solving (VR-SPS) score in Condition A (2 speakers, no irrelevant information), p = 0.005; in Condition B (2 speakers + irrelevant information), p = 0.035; and Condition C (4 speakers + irrelevant information), p = 0.008. Effect sizes (Cohen’s d) were large (A = 1.40, B = 0.96, C = 1.23). Significant group differences were strongest and most consistent for defining the problems and evaluating outcomes. The relation of task performance to cortical thickness of specific brain regions was also explored, with significant relations found with orbitofrontal regions, the frontal pole, the cuneus, and the temporal pole. Results are discussed in the context of specific cognitive and neural mechanisms underlying social problem solving deficits after childhood TBI. PMID:21147137

  4. Associations of Patient Health-Related Problem Solving with Disease Control, Emergency Department Visits, and Hospitalizations in HIV and Diabetes Clinic Samples

    PubMed Central

    Gemmell, Leigh; Kulkarni, Babul; Klick, Brendan; Brancati, Frederick L.

    2007-01-01

    Background Patient problem solving and decision making are recognized as essential to effective self-management across multiple chronic diseases. However, a health-related problem-solving instrument that demonstrates sensitivity to disease control parameters in multiple diseases has not been established. Objectives To determine, in two disease samples, internal consistency and associations with disease control of the Health Problem-Solving Scale (HPSS), a 50-item measure with 7 subscales assessing effective and ineffective problem-solving approaches, learning from past experiences, and motivation/orientation. Design Cross-sectional study. Participants Outpatients from university-affiliated medical center HIV (N = 111) and diabetes mellitus (DM, N = 78) clinics. Measurements HPSS, CD4, hemoglobin A1c (HbA1c), and number of hospitalizations in the previous year and Emergency Department (ED) visits in the previous 6 months. Results Administration time for the HPSS ranged from 5 to 10 minutes. Cronbach’s alpha for the total HPSS was 0.86 and 0.89 for HIV and DM, respectively. Higher total scores (better problem solving) were associated with higher CD4 and fewer hospitalizations in HIV and lower HbA1c and fewer ED visits in DM. Health Problem-Solving Scale subscales representing negative problem-solving approaches were consistently associated with more hospitalizations (HIV, DM) and ED visits (DM). Conclusions The HPSS may identify problem-solving difficulties with disease self-management and assess effectiveness of interventions targeting patient decision making in self-care. PMID:17443373
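
    For reference, the internal-consistency statistic quoted above is computed as follows; the item responses here are simulated (50 items as in the HPSS, with an invented common trait), not study data.

      # Sketch: Cronbach's alpha for a k-item scale:
      # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
      import numpy as np

      def cronbach_alpha(items):                   # items: (n_respondents, k) array
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(8)
      latent = rng.standard_normal((111, 1))                   # shared trait
      items = latent + 0.8 * rng.standard_normal((111, 50))    # 50 noisy items
      print(round(cronbach_alpha(items), 2))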

  5. More Reasons to be Straightforward: Findings and Norms for Two Scales Relevant to Social Anxiety

    PubMed Central

    Rodebaugh, Thomas L.; Heimberg, Richard G.; Brown, Patrick J.; Fernandez, Katya C.; Blanco, Carlos; Schneier, Franklin R.; Liebowitz, Michael R.

    2011-01-01

    The validity of both the Social Interaction Anxiety Scale and Brief Fear of Negative Evaluation scale has been well-supported, yet the scales have a small number of reverse-scored items that may detract from the validity of their total scores. The current study investigates two characteristics of participants that may be associated with compromised validity of these items: higher age and lower levels of education. In community and clinical samples, the validity of each scale's reverse-scored items was moderated by age, years of education, or both. The straightforward items did not show this pattern. To encourage the use of the straightforward items of these scales, we provide normative data from the same samples as well as two large student samples. We contend that although response bias can be a substantial problem, the reverse-scored questions of these scales do not solve that problem and instead decrease overall validity. PMID:21388781

  6. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems.

    PubMed

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

    Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although a lot of studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems, and then solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set but did not solve it. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way.

  7. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures, examines the details of the formulation, and discusses some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system-level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  8. A Combined Adaptive Tabu Search and Set Partitioning Approach for the Crew Scheduling Problem with an Air Tanker Crew Application

    DTIC Science & Technology

    2002-08-15

    Report documentation fragment; no abstract was recovered. Cited references include: Gelman, E., Patty, B., and R. Tanga. 1991. Recent Advances in Crew-Pairing Optimization at American Airlines, Interfaces, 21(1):62-74. Baker, E.K. ... Operations Research, 25(11):887-894. Chu, H.D., Gelman, E., and E.L. Johnson. 1997. Solving Large Scale Crew Scheduling Problems, European ...

  9. How do Rumination and Social Problem Solving Intensify Depression? A Longitudinal Study.

    PubMed

    Hasegawa, Akira; Kunisato, Yoshihiko; Morimoto, Hiroshi; Nishimura, Haruki; Matsuda, Yuko

    2018-01-01

    In order to examine how rumination and social problem solving intensify depression, the present study investigated longitudinal associations among each dimension of rumination and social problem solving and evaluated aspects of these constructs that predicted subsequent depression. A three-wave longitudinal study, with an interval of 4 weeks between waves, was conducted. Japanese university students completed the Beck Depression Inventory-Second Edition, Ruminative Responses Scale, Social Problem-Solving Inventory-Revised Short Version, and Interpersonal Stress Event Scale on three occasions 4 weeks apart (n = 284 at Time 1, 198 at Time 2, 165 at Time 3). Linear mixed models were analyzed to test whether each variable predicted subsequent depression, rumination, and each dimension of social problem solving. Rumination and negative problem orientation demonstrated a mutually enhancing relationship. Because these two variables were not associated with interpersonal conflict during the subsequent 4 weeks, rumination and negative problem orientation appear to strengthen each other without environmental change. Rumination and impulsivity/carelessness style were associated with subsequent depressive symptoms, after controlling for the effect of initial depression. Because rumination and impulsivity/carelessness style were not concurrently and longitudinally associated with each other, rumination and impulsive/careless problem solving style appear to be independent processes that serve to intensify depression.

  10. Anomalous leptonic U(1) symmetry: Syndetic origin of the QCD axion, weak-scale dark matter, and radiative neutrino mass

    NASA Astrophysics Data System (ADS)

    Ma, Ernest; Restrepo, Diego; Zapata, Óscar

    2018-01-01

    The well-known leptonic U(1) symmetry of the Standard Model (SM) of quarks and leptons is extended to include a number of new fermions and scalars. The resulting theory has an invisible QCD axion (thereby solving the strong CP problem), a candidate for weak-scale dark matter (DM), as well as radiative neutrino masses. A possible key connection is a color-triplet scalar, which may be produced and detected at the Large Hadron Collider.

  11. Some Cognitive Characteristics of Night-Sky Watchers: Correlations between Social Problem-Solving, Need for Cognition, and Noctcaelador

    ERIC Educational Resources Information Center

    Kelly, William E.

    2005-01-01

    This study explored the relationship between night-sky watching and self-reported cognitive variables: need for cognition and social problem-solving. University students (N = 140) completed the Noctcaelador Inventory, the Need for Cognition Scale, and the Social Problem Solving Inventory. The results indicated that an interest in the night-sky was…

  12. Investigating Prospective Teachers' Perceived Problem-Solving Abilities in Relation to Gender, Major, Place Lived, and Locus of Control

    ERIC Educational Resources Information Center

    Çakir, Mustafa

    2017-01-01

    The purpose of this study is to investigate prospective teachers' perceived personal problem-solving competencies in relation to gender, major, place lived, and internal-external locus of control. The Personal Problem-Solving Inventory and Rotter's Internal-External Locus of Control Scale were used to collect data from freshman teacher candidates…

  13. An Academic Survey Concerning High School and University Students' Attitudes and Approaches to Problem Solving in Chemistry

    ERIC Educational Resources Information Center

    Duran, Muharrem

    2016-01-01

    The aim of this study is to reveal differences between attitudes and approaches of students from different types of high school and the first grade of university towards problem solving in chemistry. For this purpose, the scale originally developed by Mason and Singh (2010) to measure students' attitude and approaches towards problem solving in…

  14. Elementary School Students Perception Levels of Problem Solving Skills

    ERIC Educational Resources Information Center

    Yavuz, Günes; Yasemin, Deringöl; Arslan, Çigdem

    2017-01-01

    The purpose of this study is to reveal the perception levels of problem solving skills of elementary school students. The sample of the study consists of 264 elementary students attending 5th, 6th, 7th, and 8th grade in a big city in Turkey. Data were collected by means of the "Perception Scale for Problem Solving Skills" which…

  15. Structural dynamics payload loads estimates

    NASA Technical Reports Server (NTRS)

    Engels, R. C.

    1982-01-01

    Methods for the prediction of loads on large space structures are discussed. Existing approaches to the problem of loads calculation are surveyed. A full-scale version of an alternate numerical integration technique to solve the response part of a load cycle is presented, and a set of short-cut versions of the algorithm is developed. The implementation of these techniques using the software package developed is discussed.

  16. Implementation and Performance of GaAs Digital Signal Processing ASICs

    NASA Technical Reports Server (NTRS)

    Whitaker, William D.; Buchanan, Jeffrey R.; Burke, Gary R.; Chow, Terrance W.; Graham, J. Scott; Kowalski, James E.; Lam, Barbara; Siavoshi, Fardad; Thompson, Matthew S.; Johnson, Robert A.

    1993-01-01

    The feasibility of performing high speed digital signal processing in GaAs gate array technology has been demonstrated with the successful implementation of a VLSI communications chip set for NASA's Deep Space Network. This paper describes the techniques developed to solve some of the technology and implementation problems associated with large scale integration of GaAs gate arrays.

  17. Application of NASA General-Purpose Solver to Large-Scale Computations in Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Storaasli, Olaf O.

    2004-01-01

    Of several iterative and direct equation solvers evaluated previously for computations in aeroacoustics, the most promising was the NASA-developed General-Purpose Solver (winner of NASA's 1999 software of the year award). This paper presents detailed, single-processor statistics of the performance of this solver, which has been tailored and optimized for large-scale aeroacoustic computations. The statistics, compiled using an SGI ORIGIN 2000 computer with 12 GB available memory (RAM) and eight available processors, are the central processing unit time, RAM requirements, and solution error. The equation solver is capable of solving 10 thousand complex unknowns in as little as 0.01 sec using 0.02 GB RAM, and 8.4 million complex unknowns in slightly less than 3 hours using all 12 GB. This latter solution is the largest aeroacoustics problem solved to date with this technique. The study was unable to detect any noticeable error in the solution, since noise levels predicted from these solution vectors are in excellent agreement with the noise levels computed from the exact solution. The equation solver provides a means for obtaining numerical solutions to aeroacoustics problems in three dimensions.
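
    The General-Purpose Solver itself is NASA software and is not shown here; a minimal sketch of the same class of task, a direct solve of a sparse complex linear system, assuming SciPy as a stand-in (the tridiagonal matrix below is synthetic, not an aeroacoustic discretization):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 10_000                                   # number of complex unknowns
        # Synthetic sparse complex system standing in for an aeroacoustic model.
        main = 4.0 + 1.0j * np.ones(n)
        off = -1.0 * np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
        b = np.ones(n, dtype=complex)

        x = spla.spsolve(A, b)                       # direct sparse solve
        print(f"residual = {np.linalg.norm(A @ x - b):.2e}")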

  18. In search of the 'Aha!' experience: Elucidating the emotionality of insight problem-solving.

    PubMed

    Shen, Wangbing; Yuan, Yuan; Liu, Chang; Luo, Jing

    2016-05-01

    Although the experience of insight has long been noted, the essence of the 'Aha!' experience, reflecting a sudden change in the brain that accompanies an insight solution, remains largely unknown. This work aimed to uncover the mystery of the 'Aha!' experience through three studies. In Study 1, participants were required to solve a set of verbal insight problems and then subjectively report their affective experience when solving the problem. The participants were found to have experienced many types of emotions, with happiness the most frequently reported one. Multidimensional scaling was employed in Study 2 to simplify the dimensions of these reported emotions. The results showed that these different types of emotions could be clearly placed in two-dimensional space and that components constituting the 'Aha!' experience mainly reflected positive emotion and approached cognition. To validate previous findings, in Study 3, participants were asked to select the most appropriate emotional item describing their feelings at the time the problem was solved. The results of this study replicated the multidimensional construct consisting of approached cognition and positive affect. These three studies provide the first direct evidence of the essence of the 'Aha!' experience. The potential significance of the findings was discussed. © 2015 The British Psychological Society.

  19. Improving extreme-scale problem solving: assessing electronic brainstorming effectiveness in an industrial setting.

    PubMed

    Dornburg, Courtney C; Stevens, Susan M; Hendrickson, Stacey M L; Davidson, George S

    2009-08-01

    An experiment was conducted to compare the effectiveness of individual versus group electronic brainstorming to address difficult, real-world challenges. Although industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research using short-term laboratory experiments has engaged small groups of students in answering questions irrelevant to an industrial setting. The present experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges during the course of 4 days. Employees and contractors at a national laboratory participated, either in a group setting or individually, in an electronic brainstorm to pose solutions to a real-world problem. The data demonstrate that (for this design) individuals perform at least as well as groups in producing quantity of electronic ideas, regardless of brainstorming duration. However, when judged with respect to quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p < .05) outperformed the group. When quality is used to benchmark success, these data indicate that work-relevant challenges are better solved by aggregating electronic individual responses rather than by electronically convening a group. This research suggests that industrial reliance on electronic problem-solving groups should be tempered, and large nominal groups may be more appropriate corporate problem-solving vehicles.

  20. EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.

    PubMed

    Tal-Ezer, Hillel

    2016-05-19

    Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of the molecule also requires knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of Hilbert space is huge. Practically, only a small subset of eigenvalues is required. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or Eigs of Matlab) but significantly simpler.
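
    EvArnoldi itself is not reproduced here; since the abstract states it is mathematically equivalent to ARPACK (MATLAB's eigs), a minimal sketch of the same task with SciPy's ARPACK wrapper, extracting a small subset of eigenvalues from a large sparse symmetric matrix (the matrix is a synthetic stand-in for a molecular Hamiltonian):

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigsh

        n = 100_000                                  # huge Hilbert-space dimension
        # Synthetic sparse symmetric matrix standing in for a vibrational Hamiltonian.
        diag = np.linspace(0.0, 10.0, n)
        off = 0.1 * np.ones(n - 1)
        H = sp.diags([off, diag, off], [-1, 0, 1], format="csc")

        # Only a small subset of eigenvalues is needed; shift-invert targets
        # the eigenvalues nearest sigma and converges quickly.
        vals, vecs = eigsh(H, k=6, sigma=0.0, which="LM")
        print(vals)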

  1. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium’s fine-scale heterogeneity and the source’s frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  2. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium’s fine-scale heterogeneity and the source’s frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
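
    The multiscale basis construction is specific to the paper; for contrast, the conventional fine-mesh approach it avoids, assembling and solving one large linear system, can be sketched in one dimension. A minimal finite-difference Helmholtz solve, assuming a constant wavenumber, a point source, and Dirichlet boundaries (all synthetic choices, not the paper's setup):

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        n, L, k = 2000, 1.0, 40.0              # grid points, domain length, wavenumber
        h = L / (n + 1)

        # Interior discretization of u'' + k^2 u = f with u(0) = u(L) = 0.
        main = (-2.0 / h**2 + k**2) * np.ones(n)
        off = (1.0 / h**2) * np.ones(n - 1)
        A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

        f = np.zeros(n)
        f[n // 2] = 1.0 / h                    # crude point source

        u = spsolve(A, f)                      # the large fine-mesh linear system
        print(u[:5])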

  3. Maximizing algebraic connectivity in air transportation networks

    NASA Astrophysics Data System (ADS)

    Wei, Peng

    In air transportation networks the robustness of a network regarding node and link failures is a key factor for its design. An experiment based on the real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is first formulated. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near optimal solution with longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case study on the two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied in real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which considers the traveler's tolerance for the total number of connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small-scale networks and the near-optimal solution in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the weight assignment cannot be studied separately for the problem with the operating cost constraint. Therefore a relaxed SDP method with golden section search is developed to solve both at the same time. The cluster decomposition is utilized to solve large-scale networks.
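
    As a concrete illustration of the robustness measure above: the algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian. A minimal sketch on a made-up five-airport network (not either airline's actual network):

        import numpy as np

        # Toy undirected route network: 5 airports, edges are flight routes.
        A = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0],
                      [1, 1, 0, 0, 1],
                      [0, 1, 0, 0, 1],
                      [0, 0, 1, 1, 0]], dtype=float)

        L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
        eigvals = np.linalg.eigvalsh(L)      # ascending order for symmetric matrices
        fiedler = eigvals[1]                 # second-smallest: algebraic connectivity
        print(f"lambda_2 = {fiedler:.4f}")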

  4. Inversion of very large matrices encountered in large scale problems of photogrammetry and photographic astrometry

    NASA Technical Reports Server (NTRS)

    Brown, D. C.

    1971-01-01

    The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and solves such a system in a highly efficient manner. Numerical results suggest that when a well-constructed spherical net is subjected to a rigorous, simultaneous adjustment, the use of independently established control points is required neither for determinacy nor for the production of accurate results.
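
    The bordered-banded solution procedure itself is specific to the paper; the payoff of banded structure can be sketched with a generic banded solver. A minimal example, assuming SciPy's solve_banded on a synthetic tridiagonal system (band width and data are made up, not the photogrammetric normal equations):

        import numpy as np
        from scipy.linalg import solve_banded

        n = 1_000_000                       # a banded solve this size is cheap; dense is not
        # Tridiagonal system stored in LAPACK banded form: rows are the diagonals.
        ab = np.zeros((3, n))
        ab[0, 1:] = -1.0                    # superdiagonal
        ab[1, :] = 4.0                      # main diagonal
        ab[2, :-1] = -1.0                   # subdiagonal
        b = np.ones(n)

        x = solve_banded((1, 1), ab, b)     # O(n) work instead of O(n^3)
        print(x[:3])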

  5. Consensus properties and their large-scale applications for the gene duplication problem.

    PubMed

    Moon, Jucheol; Lin, Harris T; Eulenstein, Oliver

    2016-06-01

    Solving the gene duplication problem is a classical approach for species tree inference from gene trees that are confounded by gene duplications. This problem takes a collection of gene trees and seeks a species tree that implies the minimum number of gene duplications. Wilkinson et al. posed the conjecture that the gene duplication problem satisfies the desirable Pareto property for clusters. That is, for every instance of the problem, all clusters that are commonly present in the input gene trees of this instance, called strict consensus, will also be found in every solution to this instance. We prove that this conjecture does not generally hold. Despite this negative result we show that the gene duplication problem satisfies a weaker version of the Pareto property where the strict consensus is found in at least one solution (rather than all solutions). This weaker property contributes to our design of an efficient scalable algorithm for the gene duplication problem. We demonstrate the performance of our algorithm in analyzing large-scale empirical datasets. Finally, we utilize the algorithm to evaluate the accuracy of standard heuristics for the gene duplication problem using simulated datasets.

  6. An exploratory study of the relationship between changes in emotion and cognitive processes and treatment outcome in borderline personality disorder.

    PubMed

    McMain, Shelley; Links, Paul S; Guimond, Tim; Wnuk, Susan; Eynan, Rahel; Bergmans, Yvonne; Warwar, Serine

    2013-01-01

    This exploratory study examined specific emotion processes and cognitive problem-solving processes in individuals with borderline personality disorder (BPD), and assessed the relationship of these changes to treatment outcome. Emotion and cognitive problem-solving processes were assessed using the Toronto Alexithymia Scale, the Linguistic Inquiry Word Count, the Derogatis Affect Balance Scale, and the Problem Solving Inventory. Participants who showed greater improvements in affect balance, problem solving, and the ability to identify and describe emotions showed greater improvements on treatment outcome, with affect balance remaining statistically significant under the most conservative conditions. The results provide preliminary evidence to support the theory that specific improvements in emotion and cognitive processes are associated with positive treatment outcomes (symptom distress, interpersonal functioning) in BPD. The implications for treatment are discussed.

  7. Autobiographical memory, interpersonal problem solving, and suicidal behavior in adolescent inpatients.

    PubMed

    Arie, Miri; Apter, Alan; Orbach, Israel; Yefet, Yael; Zalsman, Gil; Zalzman, Gil

    2008-01-01

    The aim of the study was to test Williams' (Williams JMG. Depression and the specificity of autobiographical memory. In: Rubin D, ed. Remembering Our Past: Studies in Autobiographical Memory. London: Cambridge University Press; 1996:244-267.) theory of suicidal behavior in adolescents and young adults by examining the relationship among suicidal behaviors, defective ability to retrieve specific autobiographical memories, impaired interpersonal problem solving, negative life events, repression, and hopelessness. Twenty-five suicidal adolescent and young adult inpatients (16.5 y +/- 2.5) were compared with 25 nonsuicidal adolescent and young adult inpatients (16.5 y +/- 2.5) and 25 healthy controls. Autobiographical memory was tested by a word association test; problem solving by the means-ends problem solving technique; negative life events by the Coddington scale; repression by the Life Style Index; hopelessness by the Beck scale; suicidal risk by the Plutchik scale, and suicide attempt by clinical history. Impairment in the ability to produce specific autobiographical memories, difficulties with interpersonal problem solving, negative life events, and repression were all associated with hopelessness and suicidal behavior. There were significant correlations among all the variables except for repression and negative life events. These findings support Williams' notion that generalized autobiographical memory is associated with deficits in interpersonal problem solving, negative life events, hopelessness, and suicidal behavior. The finding that defects in autobiographical memory are associated with suicidal behavior in adolescents and young adults may lead to improvements in the techniques of cognitive behavioral therapy in this age group.

  8. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    PubMed

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition, text classification, etc. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and the Kernel-Adatron, for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
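
    The paper trains the Kernel-Adatron with evolutionary algorithms; the Kernel-Adatron update itself is compact enough to sketch. A minimal plain (non-evolutionary) version with an RBF kernel on synthetic two-class data; the evolutionary training loop and protein features are not reproduced here:

        import numpy as np

        def rbf_kernel(X, Y, gamma=0.5):
            # Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)
            d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d)

        def kernel_adatron(X, y, eta=0.1, epochs=100):
            # y in {-1, +1}; alpha are non-negative multipliers as in the SVM dual.
            K = rbf_kernel(X, X)
            alpha = np.zeros(len(y))
            for _ in range(epochs):
                for i in range(len(y)):
                    margin = y[i] * (K[i] @ (alpha * y))     # y_i * f(x_i)
                    alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
            return alpha

        # Tiny synthetic two-class problem.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(+1, 0.5, (20, 2))])
        y = np.array([-1.0] * 20 + [1.0] * 20)

        alpha = kernel_adatron(X, y)
        pred = np.sign(rbf_kernel(X, X) @ (alpha * y))
        print("training accuracy:", (pred == y).mean())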

  9. Coping, problem solving, depression, and health-related quality of life in patients receiving outpatient stroke rehabilitation.

    PubMed

    Visser, Marieke M; Heijenbrok-Kal, Majanka H; Spijker, Adriaan Van't; Oostra, Kristine M; Busschbach, Jan J; Ribbers, Gerard M

    2015-08-01

    To investigate whether patients with high and low depression scores after stroke use different coping strategies and problem-solving skills and whether these variables are related to psychosocial health-related quality of life (HRQOL) independent of depression. Cross-sectional study. Two rehabilitation centers. Patients participating in outpatient stroke rehabilitation (N=166; mean age, 53.06±10.19y; 53% men; median time poststroke, 7.29mo). Not applicable. Coping strategy was measured using the Coping Inventory for Stressful Situations; problem-solving skills were measured using the Social Problem Solving Inventory-Revised: Short Form; depression was assessed using the Center for Epidemiologic Studies Depression Scale; and HRQOL was measured using the five-level EuroQol five-dimensional questionnaire and the Stroke-Specific Quality of Life Scale. Independent samples t tests and multivariable regression analyses, adjusted for patient characteristics, were performed. Compared with patients with low depression scores, patients with high depression scores used less positive problem orientation (P=.002) and emotion-oriented coping (P<.001) and more negative problem orientation (P<.001) and avoidance style (P<.001). Depression score was related to all domains of both general HRQOL (visual analog scale: β=-.679; P<.001; utility: β=-.009; P<.001) and stroke-specific HRQOL (physical HRQOL: β=-.020; P=.001; psychosocial HRQOL: β=-.054, P<.001; total HRQOL: β=-.037; P<.001). Positive problem orientation was independently related to psychosocial HRQOL (β=.086; P=.018) and total HRQOL (β=.058; P=.031). Patients with high depression scores use different coping strategies and problem-solving skills than do patients with low depression scores. Independent of depression, positive problem-solving skills appear to be most significantly related to better HRQOL. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  10. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly alongside network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, a distributed and load-balanced intrusion detection system can be realized. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then describes how grid computing characteristics can be applied in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security model that can reduce processing delay and assure detection rates.

  11. Technology and testing.

    PubMed

    Quellmalz, Edys S; Pellegrino, James W

    2009-01-02

    Large-scale testing of educational outcomes already benefits from technological applications that address logistics such as the development, administration, and scoring of tests, as well as the reporting of results. Innovative applications of technology also provide rich, authentic tasks that challenge the sorts of integrated knowledge, critical thinking, and problem solving seldom well addressed in paper-based tests. Such tasks can be used on both large-scale and classroom-based assessments. Balanced assessment systems can be developed that integrate curriculum-embedded, benchmark, and summative assessments across classroom, district, state, national, and international levels. We discuss here the potential of technology to launch a new era of integrated, learning-centered assessment systems.

  12. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
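
    A minimal sketch of the self-scheduling pattern for the first example, one serial linear solve per angle of attack, using a Python multiprocessing pool whose dynamic dispatch stands in for the paper's self-scheduling scheme (the matrix and right-hand sides are synthetic, not a real panel code):

        import numpy as np
        from multiprocessing import Pool

        N = 500                                  # panel system size (synthetic)

        def solve_case(angle_deg):
            # Each job is a complete serial solve: one right-hand side per angle.
            rng = np.random.default_rng(42)      # same "geometry" matrix every case
            A = rng.normal(size=(N, N)) + N * np.eye(N)
            b = np.full(N, np.sin(np.radians(angle_deg)))
            x = np.linalg.solve(A, b)
            return angle_deg, float(np.linalg.norm(x))

        if __name__ == "__main__":
            angles = range(0, 360, 5)            # 72 independent serial jobs
            with Pool() as pool:                 # workers pull jobs as they finish
                for angle, norm in pool.imap_unordered(solve_case, angles):
                    print(f"alpha = {angle:3d} deg, |x| = {norm:.4f}")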

  13. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems for sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network which can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the nonunique solutions of the linear L1 estimation problems but also needs much less computational time than the related algorithms in solving both linear L1 estimation and image restoration problems.
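
    The proposed network is specified in the paper and not reproduced here; for reference, the linear L1 estimation problem it targets, min_x ||Ax - b||_1, has a classical linear-programming reformulation. A minimal baseline sketch with SciPy on synthetic data (this is the LP baseline, not the neural network):

        import numpy as np
        from scipy.optimize import linprog

        # min_x ||Ax - b||_1 via: min sum(t) subject to -t <= Ax - b <= t.
        rng = np.random.default_rng(1)
        m, n = 60, 5
        A = rng.normal(size=(m, n))
        x_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
        b = A @ x_true + rng.laplace(scale=0.1, size=m)   # heavy-tailed noise favors L1

        c = np.concatenate([np.zeros(n), np.ones(m)])     # objective: sum of t
        A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
        b_ub = np.concatenate([b, -b])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n + [(0, None)] * m)
        print(np.round(res.x[:n], 3))                     # recovered estimate of x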

  14. Gyrodampers for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1979-01-01

    The problem of controlling the vibrations of a large space structure by the use of actively augmented damping devices distributed throughout the structure is addressed. The gyrodamper, which consists of a set of single-gimbal control moment gyros that are actively controlled to extract the structural vibratory energy through the local rotational deformations of the structure, is described and analyzed. Various linear and nonlinear dynamic simulations of gyrodamped beams are shown, including results on self-induced vibrations due to sensor noise and rotor imbalance. The complete nonlinear dynamic equations are included. The problem of designing and sizing a system of gyrodampers for a given structure, or extrapolating results from one gyrodamped structure to another, is solved in terms of scaling laws. Novel scaling laws for gyro systems are derived, based upon fundamental physical principles, and various examples are given.

  15. [Methods of high-throughput plant phenotyping for large-scale breeding and genetic experiments].

    PubMed

    Afonnikov, D A; Genaev, M A; Doroshkov, A V; Komyshev, E G; Pshenichnikova, T A

    2016-07-01

    Phenomics is a field of science at the junction of biology and informatics which solves the problem of rapid, accurate estimation of the plant phenotype; it has developed rapidly because of the need to analyze phenotypic characteristics in large-scale genetic and breeding experiments in plants. It is based on methods of computer image analysis and the integration of biological data. Owing to automation, new approaches make it possible to considerably accelerate the process of estimating the characteristics of a phenotype, to increase its accuracy, and to remove the subjectivity inherent to human assessment. The main technologies of high-throughput plant phenotyping in both controlled and field conditions, their advantages and disadvantages, and the prospects of their use for the efficient solution of problems of plant genetics and breeding are presented in the review.

  16. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through an analysis of the detection process, the CNN parameter relationship is mapped as an optimization problem, which an improved particle swarm optimization algorithm based on bubble sort is used to solve. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that the new approach is amenable to parallel and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631

  17. Global detection of live virtual machine migration based on cellular neural networks.

    PubMed

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through an analysis of the detection process, the CNN parameter relationship is mapped as an optimization problem, which an improved particle swarm optimization algorithm based on bubble sort is used to solve. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that the new approach is amenable to parallel and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better.

  18. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian, and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public-domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported basis functions that exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy, and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts, and/or narrow transition zones. According to our recent results, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained by predefined formulas equalizing the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid in each time step is obtained from the solution of the last time step (or the initial conditions) and the advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we apply the explicit stabilized routine SERK2 to the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. Moreover, this new Eulerian-Lagrangian collocation scheme resolves all the aforementioned numerical problems due to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of a large number of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach enables not only an accurate velocity field, but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  19. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap problem between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
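
    A minimal sketch of the central step, non-negative sparse coding against a learned dictionary, assuming scikit-learn's DictionaryLearning with positivity constraints; the multi-scale decomposition, Fisher discriminative term, and SVM stage from the paper are omitted, and the features are synthetic:

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        # Rows are (synthetic) patch features from one scale layer of an image.
        rng = np.random.default_rng(0)
        X = np.abs(rng.normal(size=(200, 64)))

        coder = DictionaryLearning(n_components=32, alpha=1.0,
                                   fit_algorithm="cd",
                                   transform_algorithm="lasso_cd",
                                   positive_code=True,   # non-negative sparse codes
                                   positive_dict=True,   # non-negative dictionary atoms
                                   max_iter=20, random_state=0)
        codes = coder.fit_transform(X)                    # sparse non-negative representation

        # Pool the codes into a histogram-like descriptor for this scale layer.
        descriptor = codes.sum(axis=0)
        print(descriptor.shape, float((codes > 0).mean()))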

  20. Association Between Anticipatory Grief and Problem Solving Among Family Caregivers of Persons with Cognitive Impairment

    PubMed Central

    Fowler, Nicole R.; Hansen, Alexandra S.; Barnato, Amber E.; Garand, Linda

    2013-01-01

    Objective Measure perceived involvement in medical decision making and determine if anticipatory grief is associated with problem solving among family caregivers of older adults with cognitive impairment. Method Retrospective analysis of baseline data from a caregiver intervention (n=73). Multivariable regression models tested the association of caregivers’ anticipatory grief, measured by the Anticipatory Grief Scale (AGS), with problem solving abilities, measured by the Social Problem Solving Inventory – Revised: Short Form (SPSI-R:S). Results 47/73 (64%) of caregivers reported involvement in medical decision making. Mean AGS was 70.1 (± 14.8) and mean SPSI-R:S was 107.2 (± 11.6). Higher AGS scores were associated with lower positive problem orientation (P=0.041) and higher negative problem orientation scores (P=0.001) but not with other components of problem solving (rational problem solving, avoidance style, and impulsivity/carelessness style). Discussion Higher anticipatory grief among family caregivers impaired problem solving, which could have negative consequences for their medical decision making responsibilities. PMID:23428394

  1. The Investigation of Problem Solving Skill of the Mountaineers in Terms of Demographic Variables

    ERIC Educational Resources Information Center

    Gürer, Burak

    2015-01-01

    The aim of this research is to investigate the problem solving skills of individuals involved in mountaineering. 315 volunteers participated in the study. The research data were collected with the problem solving scale developed by Heppner and Peterson, the Turkish adaptation of which was prepared by Sahin et al. The scale has 35 items in total, and only 3…

  2. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems

    PubMed Central

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

    Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although a lot of studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems, and then solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set but did not solve it. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way. PMID:28848467

  3. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  4. Naturalness of Electroweak Symmetry Breaking while Waiting for the LHC

    NASA Astrophysics Data System (ADS)

    Espinosa, J. R.

    2007-06-01

    After revisiting the hierarchy problem of the Standard Model and its implications for the scale of New Physics, I consider the fine-tuning problem of electroweak symmetry breaking in several scenarios beyond the Standard Model: SUSY, Little Higgs and "improved naturalness" models. The main conclusions are that: New Physics should appear within the reach of the LHC; some SUSY models can solve the hierarchy problem with acceptable residual tuning; Little Higgs models generically suffer from large tunings, often hidden; and, finally, "improved naturalness" models do not generically improve the naturalness of the SM.

  5. Portable parallel portfolio optimization in the Aurora Financial Management System

    NASA Astrophysics Data System (ADS)

    Laure, Erwin; Moritsch, Hans

    2001-07-01

    Financial planning problems are formulated as large-scale, stochastic, multiperiod, tree-structured optimization problems. An efficient technique for solving this kind of problem is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we selected the programming language Java for our implementation and used a high-level Java-based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.

  6. An Investigation of Taiwanese Early Adolescents' Self-Evaluations Concerning the Big 6 Information Problem-Solving Approach

    ERIC Educational Resources Information Center

    Chang, Chiung-Sui

    2007-01-01

    The study developed a Big 6 Information Problem-Solving Scale (B6IPS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the…

  7. The Role of the Goal in Solving Hard Computational Problems: Do People Really Optimize?

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Stege, Ulrike; Masson, Michael E. J.

    2018-01-01

    The role that the mental, or internal, representation plays when people are solving hard computational problems has largely been overlooked to date, despite the reality that this internal representation drives problem solving. In this work we investigate how performance on versions of two hard computational problems differs based on what internal…

  8. A majorized Newton-CG augmented Lagrangian-based finite element method for 3D restoration of geological models

    NASA Astrophysics Data System (ADS)

    Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia

    2016-04-01

    In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it to a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.
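
    The majorization and the contact-mechanics FEM are specific to the paper; the generic augmented Lagrangian outer loop with a Newton-CG inner solve can be sketched on a toy equality-constrained problem (all problem data below are made up):

        import numpy as np
        from scipy.optimize import minimize

        # Toy problem: min (x0-2)^2 + (x1-1)^2  subject to  x0 + x1 - 1 = 0.
        def f(x):  return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
        def gf(x): return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
        def c(x):  return x[0] + x[1] - 1.0
        grad_c = np.array([1.0, 1.0])               # constant constraint gradient

        x, lam, rho = np.zeros(2), 0.0, 10.0
        for _ in range(20):                          # augmented Lagrangian outer loop
            L = lambda x: f(x) + lam * c(x) + 0.5 * rho * c(x) ** 2
            gL = lambda x: gf(x) + (lam + rho * c(x)) * grad_c
            x = minimize(L, x, jac=gL, method="Newton-CG").x   # inner Newton-CG solve
            lam += rho * c(x)                        # multiplier update
            if abs(c(x)) < 1e-10:
                break
        print(x, lam)                                # expect x close to [1, 0]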

  9. Problem Solving and the Development of Expertise in Management.

    ERIC Educational Resources Information Center

    Lash, Fredrick B.

    This study investigated novice and expert problem solving behavior in management to examine the role of domain-specific knowledge in problem solving processes. Forty-one middle-level marketing managers in a large petrochemical organization provided think-aloud protocols in response to two hypothetical management scenarios. Protocol analysis…

  10. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  11. Planning meals: Problem-solving on a real data-base

    ERIC Educational Resources Information Center

    Byrne, Richard

    1977-01-01

    Planning the menu for a dinner party, which involves problem-solving with a large body of knowledge, is used to study the daily operation of human memory. Verbal protocol analysis, a technique devised to investigate formal problem-solving, is examined theoretically and adapted for analysis of this task. (Author/MV)

  12. P-Hint-Hunt: a deep parallelized whole genome DNA methylation detection tool.

    PubMed

    Peng, Shaoliang; Yang, Shunyun; Gao, Ming; Liao, Xiangke; Liu, Jie; Yang, Canqun; Wu, Chengkun; Yu, Wenqiang

    2017-03-14

    An increasing number of studies have used whole-genome DNA methylation detection, one of the most important parts of epigenetics research, to find significant relationships between DNA methylation and several typical diseases, such as cancers and diabetes. In many of those studies, mapping bisulfite-treated sequence to the whole genome has been the main method of studying DNA cytosine methylation. However, most existing tools suffer from inaccuracy and long running times. In our study, we designed a new DNA methylation prediction tool ("Hint-Hunt") to solve the problem. Through an optimized complex alignment computation and Smith-Waterman matrix dynamic programming, Hint-Hunt can analyze and predict DNA methylation status. But when Hint-Hunt predicts DNA methylation status on large-scale datasets, slow speed and low temporal-spatial efficiency remain problems. In order to solve the problems of Smith-Waterman dynamic programming and low temporal-spatial efficiency, we further designed a deeply parallelized whole-genome DNA methylation detection tool ("P-Hint-Hunt") on the Tianhe-2 (TH-2) supercomputer. To the best of our knowledge, P-Hint-Hunt is the first parallel DNA methylation detection tool with a high speed-up for processing large-scale datasets, and it runs on both CPUs and Intel Xeon Phi coprocessors. Moreover, we deploy and evaluate Hint-Hunt and P-Hint-Hunt on the TH-2 supercomputer at different scales. The experimental results show that our tools eliminate the deviation caused by bisulfite treatment in the mapping procedure and that the multi-level parallel program yields a 48-fold speed-up with 64 threads. P-Hint-Hunt gains a deep acceleration on the CPU and Intel Xeon Phi heterogeneous platform, giving full play to the advantages of multi-core (CPU) and many-core (Phi) processors.
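
    The Smith-Waterman matrix dynamic programming at the core of Hint-Hunt is compact enough to sketch. A minimal scoring-only local alignment (no traceback and no bisulfite-aware C/T handling, which the real tool would add):

        import numpy as np

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            # H[i, j] is the best score of any local alignment ending
            # at a[i-1], b[j-1]; scores are clamped at zero.
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    diag = H[i-1, j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    H[i, j] = max(0, diag, H[i-1, j] + gap, H[i, j-1] + gap)
            return int(H.max())

        # Bisulfite treatment converts unmethylated C to T, so a methylation-aware
        # scorer would treat C/T mismatches specially (not modeled in this sketch).
        print(smith_waterman("ACACACTA", "AGCACACA"))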

  13. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    When a genetic algorithm (GA) is used to solve the winner determination problem (WDP) with many bids and items under different distributions, the large search space and complex constraints make it easy to produce infeasible solutions, degrading the efficiency and solution quality of the algorithm. This paper presents an improved Monkey-King Genetic Algorithm (MKGA) with three operators (preprocessing, bid insertion, and exchange recombination) and a Monkey-King elite-preservation strategy. Experimental results show that the improved MKGA outperforms a standard GA in required population size and computation, and that it can effectively solve instances that are hard for traditional branch-and-bound algorithms.
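
    For context, here is a minimal statement of the WDP with a greedy, feasibility-preserving heuristic of the kind a GA can use for seeding or repairing infeasible chromosomes. The bids and prices are made up; this is not the paper's operators.

    ```python
    # Each bid is (set of items wanted, price offered). Feasibility: no item
    # may be sold twice; objective: maximise total revenue.
    bids = [({"a", "b"}, 10), ({"b", "c"}, 8), ({"c"}, 5), ({"d"}, 4)]

    def greedy_seed(bids):
        """Greedy feasible solution, favouring high price per item."""
        sold, revenue, chosen = set(), 0, []
        for items, price in sorted(bids, key=lambda b: -b[1] / len(b[0])):
            if not items & sold:            # accept only non-overlapping bids
                sold |= items
                revenue += price
                chosen.append((items, price))
        return chosen, revenue

    print(greedy_seed(bids))   # picks {a,b}, {c}, {d} for revenue 19
    ```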

  14. Neural Network Solves "Traveling-Salesman" Problem

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.

    1990-01-01

    Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.

  15. The accurate particle tracer code

    NASA Astrophysics Data System (ADS)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms to particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  16. The Time on Task Effect in Reading and Problem Solving Is Moderated by Task Difficulty and Skill: Insights from a Computer-Based Large-Scale Assessment

    ERIC Educational Resources Information Center

    Goldhammer, Frank; Naumann, Johannes; Stelter, Annette; Tóth, Krisztina; Rölke, Heiko; Klieme, Eckhard

    2014-01-01

    Computer-based assessment can provide new insights into behavioral processes of task completion that cannot be uncovered by paper-based instruments. Time presents a major characteristic of the task completion process. Psychologically, time on task has 2 different interpretations, suggesting opposing associations with task outcome: Spending more…

  17. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is a key requirement for the robust interpretation of tomographic applications. However, this information is often missing from large-scale linearized inversions, where only the results at convergence are shown despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have been proposed in the literature, standard FWI workflows still do not include any systematic uncertainty quantification method; instead, result quality is often assessed through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks and surveys, the increase in computational power, and the increasingly systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows that address uncertainty quantification for near-surface targets, crustal exploration, and regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach that takes advantage of the Ensemble Transform Kalman Filter (ETKF) of Bishop et al. (2001) to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, giving access to some uncertainty information on the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we combine a conventional FWI, based on local optimization, with the ETKF strategy. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with a sampling of the local solution space, made affordable by the embarrassingly parallel nature of the filter. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June, 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.
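
    A compact sketch of the ETKF analysis step that yields the low-rank posterior covariance approximation (toy dimensions, a linear observation operator, and random data; not an FWI code):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, Ne = 50, 10, 20               # state size, data size, ensemble size
    X = rng.normal(size=(n, Ne))        # prior ensemble of model parameters
    H = rng.normal(size=(m, n))         # (linearised) observation operator
    R = np.eye(m)                       # data-error covariance
    y = rng.normal(size=m)              # observed data

    xm = X.mean(axis=1)
    A = (X - xm[:, None]) / np.sqrt(Ne - 1)        # ensemble anomalies
    S = H @ A
    C = S.T @ np.linalg.solve(R, S)                # Ne x Ne: cheap for small Ne
    w, V = np.linalg.eigh(C)
    T = V @ np.diag(1.0 / np.sqrt(1.0 + w)) @ V.T  # transform (I + C)^(-1/2)

    K = A @ np.linalg.solve(np.eye(Ne) + C, S.T @ np.linalg.inv(R))  # gain
    xa = xm + K @ (y - H @ xm)                     # analysis (posterior) mean
    Xa = xa[:, None] + np.sqrt(Ne - 1) * (A @ T)   # analysis ensemble
    P_post = (A @ T) @ (A @ T).T                   # low-rank posterior covariance
    ```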

  18. Quantum algorithm for solving some discrete mathematical problems by probing their energy spectra

    NASA Astrophysics Data System (ADS)

    Wang, Hefeng; Fan, Heng; Li, Fuli

    2014-01-01

    When a probe qubit is coupled to a quantum register that represents a physical system, the probe qubit will exhibit a dynamical response only when it is resonant with a transition in the system. Using this principle, we propose a quantum algorithm for solving discrete mathematical problems based on the circuit model. Our algorithm has favorable scaling properties in solving some discrete mathematical problems.

  19. Relations of social problem solving with interpersonal competence in Japanese students.

    PubMed

    Sumi, Katsunori

    2011-12-01

    To clarify the relations of the dimensions of social problem solving with those of interpersonal competence in a sample of 234 Japanese college students, Japanese versions of the Social Problem-solving Inventory-Revised and the Social Skill Scale were administered. Pearson correlations between the two sets of variables were low, but higher within each set of subscales. Cronbach's alpha was low for four subscales assessing interpersonal competence.

  20. Social problem-solving among adolescents treated for depression.

    PubMed

    Becker-Weidman, Emily G; Jacobs, Rachel H; Reinecke, Mark A; Silva, Susan G; March, John S

    2010-01-01

    Studies suggest that deficits in social problem-solving may be associated with increased risk of depression and suicidality in children and adolescents. It is unclear, however, which specific dimensions of social problem-solving are related to depression and suicidality among youth. Moreover, rational problem-solving strategies and problem-solving motivation may moderate or predict change in depression and suicidality among children and adolescents receiving treatment. The effect of social problem-solving on acute treatment outcomes was explored in a randomized controlled trial of 439 clinically depressed adolescents enrolled in the Treatment for Adolescents with Depression Study (TADS). Measures included the Children's Depression Rating Scale-Revised (CDRS-R), the Suicidal Ideation Questionnaire--Grades 7-9 (SIQ-Jr), and the Social Problem-Solving Inventory-Revised (SPSI-R). A random coefficients regression model was conducted to examine main and interaction effects of treatment and SPSI-R subscale scores on outcomes during the 12-week acute treatment stage. Negative problem orientation, positive problem orientation, and avoidant problem-solving style were nonspecific predictors of depression severity. In terms of suicidality, avoidant problem-solving style and impulsiveness/carelessness style were predictors, whereas negative problem orientation and positive problem orientation were moderators of treatment outcome. Implications of these findings, limitations, and directions for future research are discussed. Copyright 2009 Elsevier Ltd. All rights reserved.

  1. Problem-Solving: Scaling the "Brick Wall"

    ERIC Educational Resources Information Center

    Benson, Dave

    2011-01-01

    Across the primary and secondary phases, pupils are encouraged to use and apply their knowledge, skills, and understanding of mathematics to solve problems in a variety of forms, ranging from single-stage word problems to the challenge of extended rich tasks. Amongst many others, Cockcroft (1982) emphasised the importance and relevance of…

  2. Impact of spatio-temporal scale of adjustment on variational assimilation of hydrologic and hydrometeorological data in operational distributed hydrologic models

    NASA Astrophysics Data System (ADS)

    Lee, H.; Seo, D.; McKee, P.; Corby, R.

    2009-12-01

    One of the major challenges in data assimilation (DA) into distributed hydrologic models is reducing the large number of degrees of freedom involved in the inverse problem to avoid overfitting. To assess the sensitivity of DA performance to the dimensionality of the inverse problem, we design and carry out real-world experiments in which the control vector in variational DA (VAR) is solved at different scales in space and time, e.g., lumped, semi-distributed, and fully distributed in space, and hourly, 6-hourly, etc., in time. The size of the control vector is related to the degrees of freedom in the inverse problem. For the assessment, we use the prototype 4-dimensional variational data assimilator (4DVAR) that assimilates streamflow, precipitation, and potential evaporation data into the NWS Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM). In this talk, we present the initial results for a number of basins in Oklahoma and Texas.

  3. Implementation of an effective hybrid GA for large-scale traveling salesman problems.

    PubMed

    Nguyen, Hung Dinh; Yoshihara, Ikuo; Yamamori, Kunihito; Yasunaga, Moritoshi

    2007-02-01

    This correspondence describes a hybrid genetic algorithm (GA) to find high-quality solutions for the traveling salesman problem (TSP). The proposed method is based on a parallel implementation of a multipopulation steady-state GA involving local search heuristics. It uses a variant of the maximal preservative crossover and the double-bridge move mutation. An effective implementation of the Lin-Kernighan heuristic (LK) is incorporated into the method to compensate for the GA's lack of local search ability. The method is validated by comparing it with the LK-Helsgaun method (LKH), which is one of the most effective methods for the TSP. Experimental results with benchmarks having up to 316228 cities show that the proposed method works more effectively and efficiently than LKH when solving large-scale problems. Finally, the method is used together with the implementation of the iterated LK to find a new best tour (as of June 2, 2003) for a 1904711-city TSP challenge.
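
    For illustration, here is the double-bridge move mentioned above in minimal Python: it cuts the tour into four segments A|B|C|D and reconnects them as A-C-B-D, a 4-opt perturbation that LK-style local search cannot cheaply undo. The list-of-cities tour representation is an assumption.

    ```python
    import random

    def double_bridge(tour):
        """4-opt double-bridge move: split tour into A|B|C|D, return A C B D."""
        i, j, k = sorted(random.sample(range(1, len(tour)), 3))
        return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

    random.seed(0)
    print(double_bridge(list(range(12))))
    ```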

  4. Production of black holes and their angular momentum distribution in models with split fermions

    NASA Astrophysics Data System (ADS)

    Dai, De-Chang; Starkman, Glenn D.; Stojkovic, Dejan

    2006-05-01

    In models with TeV-scale gravity it is expected that mini black holes will be produced in near-future accelerators. On the other hand, TeV-scale gravity is plagued with many problems like fast proton decay, unacceptably large n-n¯ oscillations, flavor changing neutral currents, large mixing between leptons, etc. Most of these problems can be solved if different fermions are localized at different points in the extra dimensions. We study the cross section for the production of black holes and their angular momentum distribution in these models with “split” fermions. We find that, for a fixed value of the fundamental mass scale, the total production cross section is reduced compared with models where all the fermions are localized at the same point in the extra dimensions. Fermion splitting also implies that the bulk component of the black hole angular momentum must be taken into account in studies of the black hole decay via Hawking radiation.

  5. Pre-service mathematics teachers’ ability in solving well-structured problem

    NASA Astrophysics Data System (ADS)

    Paradesa, R.

    2018-01-01

    This study aimed to describe the mathematical problem-solving ability of undergraduate students of mathematics education in solving well-structured problems. This was a qualitative descriptive study. The subjects were 100 undergraduate students of mathematics education at a private university in Palembang city. Data were collected through two essay test items. The results showed that only 8% of students could solve the first problem, and they did not check back to validate the process; based on a scoring rubric following Polya's strategy, their answers followed a 2-4-2-0 pattern. On the second problem, 45% of students succeeded, because the second problem imitated an example given during the learning process. The average mathematical problem-solving score on well-structured problems was 56.00 (SD = 13.22) on a 0-100 scale, which can be categorized as low. The conclusion is that undergraduate students of mathematics education in Palembang still have difficulty solving well-structured mathematics problems.

  6. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  7. A problem-solving education intervention in caregivers and patients during allogeneic hematopoietic stem cell transplantation.

    PubMed

    Bevans, Margaret; Wehrlen, Leslie; Castro, Kathleen; Prince, Patricia; Shelburne, Nonniekaye; Soeken, Karen; Zabora, James; Wallen, Gwenyth R

    2014-05-01

    The aim of this study was to determine the effect of problem-solving education on self-efficacy and distress in informal caregivers of allogeneic hematopoietic stem cell transplantation patients. Patient/caregiver teams attended three 1-hour problem-solving education sessions to help them cope with problems during hematopoietic stem cell transplantation. Primary measures included the Cancer Self-Efficacy Scale-transplant and the Brief Symptom Inventory-18. Active caregivers reported improvements in self-efficacy (p < 0.05) and distress (p < 0.01) after problem-solving education; caregiver responders also reported improvements in health outcomes such as fatigue. The effect of problem-solving education on self-efficacy and distress in hematopoietic stem cell transplantation caregivers supports its inclusion in future interventions to meet the multifaceted needs of this population.

  8. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves fault tolerance: a task can be completed even in the presence of partial agent failures. Through problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems solved in sequence or in parallel, so the computational complexity is greatly reduced. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, overcoming the bandwidth limitations of multicast. Distributed algorithms have been applied to a variety of real-world problems; our research focuses on framework and local-optimizer design in practical engineering applications. First, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body that improves estimation accuracy and convergence speed. Second, we develop a cyber-physical system with distributed computation devices to optimize in-building evacuation paths when a hazard occurs; the proposed Bellman-Ford dual-subgradient path planning method relieves congestion in corridor and exit areas. Third, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time, with optimal control strategies designed through both centralized and distributed algorithms based on a convex problem formulation. A hybrid control scheme is also presented for minimizing travel time over a highway network. Compared with the uncontrolled case or conventional highway traffic control, the proposed hybrid strategy greatly reduces total travel time on the test network.

  9. Parallel Computing for Probabilistic Response Analysis of High Temperature Composites

    NASA Technical Reports Server (NTRS)

    Sues, R. H.; Lua, Y. J.; Smith, M. D.

    1994-01-01

    The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.

  10. Beam wavefront and farfield control for ICF laser driver

    NASA Astrophysics Data System (ADS)

    Dai, Wanjun; Deng, Wu; Zhang, Xin; Jiang, Xuejun; Zhang, Kun; Zhou, Wei; Zhao, Junpu; Hu, Dongxia

    2010-10-01

    Five main problems of beam wavefront and far-field control in ICF laser drivers are discussed together: control requirements, beam propagation principles, control of distortion sources, system design and adjustment optimization, and active wavefront correction technology. We demonstrate that, by solving these problems, the beam propagates well and the divergence angle of the TIL pulses can be reduced to less than 60 μrad, which meets the TIL requirements. The results provide theoretical and experimental support for the wavefront and far-field control design requirements of the next large-scale ICF driver.

  11. The large area crop inventory experiment: An experiment to demonstrate how space-age technology can contribute to solving critical problems here on earth

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The large area crop inventory experiment is being developed to predict crop production through satellite photographs. This experiment demonstrates how space age technology can contribute to solving practical problems of agriculture management.

  12. Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.

    2010-12-01

    Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion, including non-approximated physics and solving for the probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte Carlo style methods that are computationally intractable for most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically, incorporating any available prior information, using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth's crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality, and bias. This provides far greater confidence in the results, and in decisions made on their basis.
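
    As a sketch of the mixture-density output stage (not the authors' trained networks): a network maps observed data to mixture weights, means, and widths, and the posterior pdf over a model parameter is evaluated as the resulting Gaussian mixture. The numbers below are hypothetical network outputs.

    ```python
    import numpy as np

    def mdn_pdf(m, logits, means, sigmas):
        """Posterior pdf value at model parameter m for one datum."""
        w = np.exp(logits - logits.max())
        w /= w.sum()                                  # softmax mixture weights
        comp = np.exp(-0.5 * ((m - means) / sigmas) ** 2) / (
            np.sqrt(2.0 * np.pi) * sigmas)
        return float((w * comp).sum())

    # hypothetical network outputs for one observed datum
    print(mdn_pdf(0.3, np.array([0.1, -0.2]),
                  np.array([0.0, 0.5]), np.array([0.2, 0.1])))
    ```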

  13. Reconstructing high-dimensional two-photon entangled states via compressive sensing

    PubMed Central

    Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan

    2014-01-01

    Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
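
    The singular-value-thresholding ingredient can be stated in a few lines: soft-thresholding the singular values shrinks toward a low-rank estimate, which is the prior the reconstruction exploits. The sketch below uses a toy real matrix; a density-matrix version would additionally enforce Hermiticity, positivity, and unit trace.

    ```python
    import numpy as np

    def svt(M, tau):
        """Soft-threshold the singular values of M (low-rank shrinkage)."""
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vh

    rng = np.random.default_rng(0)
    low_rank = rng.normal(size=(17, 3)) @ rng.normal(size=(3, 17))
    noisy = low_rank + 0.1 * rng.normal(size=(17, 17))
    print(np.linalg.matrix_rank(svt(noisy, tau=1.5)))   # close to 3
    ```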

  14. An investigation of Taiwanese early adolescents' self-evaluations concerning the Big 6 information problem-solving approach.

    PubMed

    Chang, Chiung-Sui

    2007-01-01

    The study developed a Big 6 Information Problem-Solving Scale (B6IPS), including the subscales of task definition and information-seeking strategies, information access and synthesis, and evaluation. More than 1,500 fifth and sixth graders in Taiwan responded. The study revealed that the scale showed adequate reliability in assessing the adolescents' perceptions of the Big 6 information problem-solving approach. In addition, the adolescents responded quite differently to the different subscales of the approach. Moreover, females tended to have higher-quality information-searching skills than their male counterparts. Adolescents of different grades also displayed varying views toward the approach. Other results are also provided.

  15. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.
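
    A minimal shooting-method sketch of the kind used for the in-plane sub-problem: guess the unknown initial condition, integrate the ODE, and adjust the guess until the terminal boundary condition holds. The toy boundary-value problem below stands in for the aircraft dynamics.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def residual(s):
        """Integrate y'' = -y with y(0) = 0, y'(0) = s; want y(pi/2) = 1."""
        sol = solve_ivp(lambda t, z: [z[1], -z[0]], (0.0, np.pi / 2),
                        [0.0, s], rtol=1e-8)
        return sol.y[0, -1] - 1.0

    s_star = brentq(residual, 0.0, 5.0)   # analytic answer: y = sin(t), s* = 1
    print(s_star)
    ```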

  16. Gauging the Gaps in Student Problem-Solving Skills: Assessment of Individual and Group Use of Problem-Solving Strategies Using Online Discussions

    ERIC Educational Resources Information Center

    Anderson, William L.; Mitchell, Steven M.; Osgood, Marcy P.

    2008-01-01

    For the past 3 yr, faculty at the University of New Mexico, Department of Biochemistry and Molecular Biology have been using interactive online Problem-Based Learning (PBL) case discussions in our large-enrollment classes. We have developed an illustrative tracking method to monitor student use of problem-solving strategies to provide targeted…

  17. More reasons to be straightforward: findings and norms for two scales relevant to social anxiety.

    PubMed

    Rodebaugh, Thomas L; Heimberg, Richard G; Brown, Patrick J; Fernandez, Katya C; Blanco, Carlos; Schneier, Franklin R; Liebowitz, Michael R

    2011-06-01

    The validity of both the Social Interaction Anxiety Scale and Brief Fear of Negative Evaluation scale has been well-supported, yet the scales have a small number of reverse-scored items that may detract from the validity of their total scores. The current study investigates two characteristics of participants that may be associated with compromised validity of these items: higher age and lower levels of education. In community and clinical samples, the validity of each scale's reverse-scored items was moderated by age, years of education, or both. The straightforward items did not show this pattern. To encourage the use of the straightforward items of these scales, we provide normative data from the same samples as well as two large student samples. We contend that although response bias can be a substantial problem, the reverse-scored questions of these scales do not solve that problem and instead decrease overall validity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Federated learning of predictive models from federated Electronic Health Records.

    PubMed

    Brisimi, Theodora S; Chen, Ruidi; Mela, Theofanie; Olshevsky, Alex; Paschalidis, Ioannis Ch; Shi, Wei

    2018-04-01

    In an era of "big data," computationally efficient and privacy-aware solutions for large-scale machine learning problems become crucial, especially in the healthcare domain, where large amounts of data are stored in different locations and owned by different entities. Past research has focused on centralized algorithms, which assume the existence of a central data repository (database) that stores and can process the data from all participants. Such an architecture, however, can be impractical when data are not centrally located, does not scale well to very large datasets, and introduces single-point-of-failure risks which could compromise the integrity and privacy of the data. Given scores of data widely spread across hospitals/individuals, a decentralized, computationally scalable methodology is very much in need. We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We seek to develop a general decentralized optimization framework enabling multiple data holders to collaborate and converge to a common predictive model, without explicitly exchanging raw data. We focus on the soft-margin l1-regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the data holders to collaborate while keeping every participant's data private. We test cPDS on the problem of predicting hospitalizations due to heart diseases within a calendar year based on information in the patients' Electronic Health Records prior to that year. cPDS converges faster than centralized methods at the cost of some communication between agents. It also converges faster and with less communication overhead compared to an alternative distributed algorithm. In both cases, it achieves similar prediction accuracy measured by the Area Under the Receiver Operating Characteristic Curve (AUC) of the classifier. We extract important features discovered by the algorithm that are predictive of future hospitalizations, thus providing a way to interpret the classification results and inform prevention efforts. Copyright © 2018 Elsevier B.V. All rights reserved.
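
    For reference, here is the centralized objective that cPDS solves in distributed form, with a plain subgradient loop standing in for the primal-dual splitting iterations; the data, lambda, and step size are arbitrary.

    ```python
    import numpy as np

    def ssvm_objective(w, b, X, y, lam):
        """Soft-margin hinge loss with an l1 penalty (sparse SVM)."""
        return np.maximum(1 - y * (X @ w + b), 0).mean() + lam * np.abs(w).sum()

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))

    w, b, lam, lr = np.zeros(5), 0.0, 0.01, 0.05
    for _ in range(1000):                       # subgradient descent
        act = (1 - y * (X @ w + b)) > 0         # samples inside the margin
        gw = -(y[act, None] * X[act]).sum(axis=0) / len(y) + lam * np.sign(w)
        gb = -y[act].sum() / len(y)
        w, b = w - lr * gw, b - lr * gb
    print(ssvm_objective(w, b, X, y, lam), np.round(w, 3))
    ```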

  19. Towards a Cross-Domain MapReduce Framework

    DTIC Science & Technology

    2013-11-01

    These Big Data applications typically run as a set of MapReduce jobs to take advantage of Hadoop's ease of service deployment and large-scale parallelism. Yet, Hadoop has not been adapted for multilevel secure (MLS) environments where data of different security classifications co-exist. To solve [...] multilevel security. The US Department of Defense (DoD) and US Intelligence Community (IC) recognize they have a Big Data problem.

  20. An electromagnetism-like metaheuristic for open-shop problems with no buffer

    NASA Astrophysics Data System (ADS)

    Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi

    2012-12-01

    This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness, a problem that occurs in many production settings, such as the plastic molding, chemical, and food processing industries. The paper formulates the problem as a mixed-integer linear program, which can solve instances to optimality. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve large-sized problems. Two computational experiments are conducted: the first uses small-sized instances to evaluate the mathematical model and the general performance of the proposed metaheuristic; the second evaluates the metaheuristic's performance on large-sized instances. The results show that the model and algorithm are effective for this problem.

  1. Reasoning by analogy as an aid to heuristic theorem proving.

    NASA Technical Reports Server (NTRS)

    Kling, R. E.

    1972-01-01

    When heuristic problem-solving programs are faced with large data bases that contain numbers of facts far in excess of those needed to solve any particular problem, their performance rapidly deteriorates. In this paper, the correspondence between a new unsolved problem and a previously solved analogous problem is computed and invoked to tailor large data bases to manageable sizes. This paper outlines the design of an algorithm for generating and exploiting analogies between theorems posed to a resolution-logic system. These algorithms are believed to be the first computationally feasible development of reasoning by analogy to be applied to heuristic theorem proving.

  2. Emergence of distributed coordination in the Kolkata Paise Restaurant problem with finite information

    NASA Astrophysics Data System (ADS)

    Ghosh, Diptesh; Chakrabarti, Anindya S.

    2017-10-01

    In this paper, we study a large-scale distributed coordination problem and propose efficient adaptive strategies to solve it. The basic problem is to allocate a finite number of resources to individual agents in the absence of a central planner, such that there is as little congestion as possible and the fraction of unutilized resources is reduced as far as possible. Without a central planner and global information, agents can employ adaptive strategies that use only finite knowledge about their competitors. We show that a combination of finite information sets and reinforcement learning can increase the utilization fraction of resources substantially.
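
    A toy version of the Kolkata Paise Restaurant setup with a simple reinforcement rule (the update rule and parameters are illustrative; the paper's strategies condition on a finite window of past information):

    ```python
    import numpy as np

    # N agents pick among N restaurants each evening; a restaurant serves one
    # random arrival, the rest go unserved. Agents reinforce choices that fed them.
    rng = np.random.default_rng(0)
    N, T = 100, 2000
    scores = np.ones((N, N))                   # agent-by-restaurant propensities

    for _ in range(T):
        probs = scores / scores.sum(axis=1, keepdims=True)
        choice = np.array([rng.choice(N, p=p) for p in probs])
        served = np.zeros(N, dtype=bool)
        for r in np.unique(choice):
            served[rng.choice(np.flatnonzero(choice == r))] = True
        scores[np.arange(N), choice] += np.where(served, 1.0, -0.5)
        scores = np.clip(scores, 0.1, None)    # keep propensities positive

    print("utilisation fraction:", len(np.unique(choice)) / N)
    ```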

  3. The evolution and practical application of machine translation system (1)

    NASA Astrophysics Data System (ADS)

    Tominaga, Isao; Sato, Masayuki

    This paper describes the development, practical application, and problems of machine translation systems, the evaluation of practical systems, and development trends in machine translation. Most recent systems face the following four problems: 1) the ambiguity of text, 2) differences in the definition of terminology between languages, 3) the preparation of a large-scale translation dictionary, and 4) the development of software for logical inference. Machine translation systems are already in practical use in many industrial fields, but many problems remain unsolved, and an ideal system is not expected for another 15 years. The paper also describes seven evaluation items in detail. (This English abstract was produced by the Mu system.)

  4. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow-water equations on a limited-area domain. This application of nonlinear constrained optimization is large-dimensional, and the conjugate-gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared under increasing accuracy requirements, with robustness and computational efficiency as the principal criteria.

  5. A comparative study of the effects of problem-solving skills training and relaxation on the score of self-esteem in women with postpartum depression

    PubMed Central

    Nasiri, Saeideh; Kordi, Masoumeh; Gharavi, Morteza Modares

    2015-01-01

    Background: Self-esteem is a determinant factor of mental health. Individuals with low self-esteem are prone to depression, and low self-esteem is one of the main symptoms of depression. The aim of this study was to compare the effects of problem-solving skills training and relaxation on the self-esteem scores of women with postpartum depression. Materials and Methods: This clinical trial was performed on 80 women. Sampling was done in Mashhad health centers from December 2009 to June 2010. Women were randomly assigned to problem-solving skills (n = 26), relaxation (n = 26), and control groups (n = 28). Interventions were implemented for 6 weeks, and the subjects completed the Eysenck self-esteem scale again 9 weeks after delivery. Data analysis was done using descriptive statistics, the Kruskal–Wallis test, and analysis of variance (ANOVA) in SPSS. Results: The findings showed that the mean self-esteem score after the intervention was 117.9 ± 9.7 in the problem-solving group, 117.0 ± 11.8 in the relaxation group, and 113.5 ± 10.4 in the control group; there was a significant difference between the relaxation and problem-solving groups, and also between the intervention groups and the control group. Conclusions: According to the results, problem-solving skills and relaxation can be used to prevent and recover from postpartum depression. PMID:25709699

  6. Solving the flatness problem with an anisotropic instanton in Hořava-Lifshitz gravity

    NASA Astrophysics Data System (ADS)

    Bramberger, Sebastian F.; Coates, Andrew; Magueijo, João; Mukohyama, Shinji; Namba, Ryo; Watanabe, Yota

    2018-02-01

    In Hořava-Lifshitz gravity a scaling isotropic in space but anisotropic in spacetime, often called "anisotropic scaling," with the dynamical critical exponent z = 3, lies at the base of its renormalizability. This scaling also leads to a novel mechanism of generating scale-invariant cosmological perturbations, solving the horizon problem without inflation. In this paper we propose a possible solution to the flatness problem, in which we assume that the initial condition of the Universe is set by a small instanton respecting the same scaling. We argue that the mechanism may be more general than the concrete model presented here. We rely simply on the deformed dispersion relations of the theory, and on equipartition of the various forms of energy at the starting point.

  7. Cosmological signatures of a UV-conformal standard model.

    PubMed

    Dorsch, Glauber C; Huber, Stephan J; No, Jose Miguel

    2014-09-19

    Quantum scale invariance in the UV has been recently advocated as an attractive way of solving the gauge hierarchy problem arising in the standard model. We explore the cosmological signatures at the electroweak scale when the breaking of scale invariance originates from a hidden sector and is mediated to the standard model by gauge interactions (gauge mediation). These scenarios, while being hard to distinguish from the standard model at LHC, can give rise to a strong electroweak phase transition leading to the generation of a large stochastic gravitational wave signal in possible reach of future space-based detectors such as eLISA and BBO. This relic would be the cosmological imprint of the breaking of scale invariance in nature.

  8. Efficacy of an internet-based problem-solving training for teachers: results of a randomized controlled trial.

    PubMed

    Ebert, David Daniel; Lehr, Dirk; Boß, Leif; Riper, Heleen; Cuijpers, Pim; Andersson, Gerhard; Thiart, Hanne; Heber, Elena; Berking, Matthias

    2014-11-01

    The primary purpose of this randomized controlled trial (RCT) was to evaluate the efficacy of internet-based problem-solving training (iPST) for employees in the educational sector (teachers) with depressive symptoms. The results of training were compared to those of a waitlist control group (WLC). One-hundred and fifty teachers with elevated depressive symptoms (Center for Epidemiologic Studies Depression Scale, CES-D ≥16) were assigned to either the iPST or WLC group. The iPST consisted of five lessons, including problem-solving and rumination techniques. Symptoms were assessed before the intervention began and in follow-up assessments after seven weeks, three months, and six months. The primary outcome was depressive symptom severity (CES-D). Secondary outcomes included general and work-specific self-efficacy, perceived stress, pathological worries, burnout symptoms, general physical and mental health, and absenteeism. iPST participants displayed a significantly greater reduction in depressive symptoms after the intervention (d=0.59, 95% CI 0.26-0.92), after three months (d=0.37, 95% CI 0.05-0.70) and after six months (d=0.38, 95% CI 0.05-0.70) compared to the control group. The iPST participants also displayed significantly higher improvements in secondary outcomes. However, workplace absenteeism was not significantly affected. iPST is effective in reducing symptoms of depression among teachers. Disseminated on a large scale, iPST could contribute to reducing the burden of stress-related mental health problems among teachers. Future studies should evaluate iPST approaches for use in other working populations.

  9. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  10. Development and validation of the hypoglycaemia problem-solving scale for people with diabetes mellitus.

    PubMed

    Wu, Fei-Ling; Juang, Jyuhn-Huarng; Lin, Chia-Hung

    2016-06-01

    To develop and psychometrically test a new instrument, the hypoglycaemia problem-solving scale (HPSS), which was designed to measure how well people with diabetes mellitus manage their hypoglycaemia-related problems. A cross-sectional survey design approach was used to validate the performance assessment instrument. Patients who had a diagnosis of type 1 or type 2 diabetes mellitus for at least 1 year, who were being treated with insulin and who had experienced at least one hypoglycaemic episode within the previous 6 months were eligible for inclusion in the study. A total of 313 patients were included in the study. The initial draft of the HPSS included 28 items. After exploratory factor analysis, the 24-item HPSS consisted of seven factors: problem-solving perception, detection control, identifying problem attributes, setting problem-solving goals, seeking preventive strategies, evaluating strategies, and immediate management. The Cronbach's α for the total HPSS was 0.83. The HPSS was verified as being valid and reliable. Future studies should further test and improve the instrument to increase its effectiveness in helping people with diabetes manage their hypoglycaemia-related problems. © The Author(s) 2016.
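
    For reference, the reliability statistic reported above (Cronbach's α = 0.83) is computed from the item-response matrix as follows; the data here are random placeholders shaped like the study (313 respondents × 24 items).

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (respondents, k) matrix of item scores."""
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(313, 1))            # shared underlying trait
    data = latent + rng.normal(size=(313, 24))    # 24 noisy items
    print(round(cronbach_alpha(data), 2))
    ```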

  11. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues stem from the computational and memory requirements of working with a large matrix, and have been addressed over the years by using low-rank kernel approximations or by improving solver scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real-world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
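
    The key property behind the O(n log n) claim, shown in a one-level toy form: a circulant kernel matrix is diagonalised by the DFT, so the regularised solve at the heart of LS-SVM/kernel ridge regression reduces to FFTs. The paper's construction is multi-level; this single-level sketch uses made-up data.

    ```python
    import numpy as np

    n, lam = 512, 0.1
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dist = np.minimum(x, 1.0 - x)                 # circular distance to x[0]
    c = np.exp(-dist**2 / 0.01)                   # first column of circulant C
    y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(0).normal(size=n)

    eig = np.fft.fft(c)                           # eigenvalues of C
    alpha = np.fft.ifft(np.fft.fft(y) / (eig + lam)).real  # (C + lam I)^-1 y

    # verify against the O(n^3) dense solve
    C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
    assert np.allclose(alpha, np.linalg.solve(C + lam * np.eye(n), y))
    ```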

  12. An optimal control strategy for hybrid actuator systems: Application to an artificial muscle with electric motor assist.

    PubMed

    Ishihara, Koji; Morimoto, Jun

    2018-03-01

    Humans use multiple muscles to generate joint movements such as elbow motion. With multiple lightweight and compliant actuators, joint movements can also be generated efficiently. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement, for which the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is optimal control. However, solving the optimal control problem at each control time step has not been considered practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and we applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while coping with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
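
    Vohradský's rate model itself is compact: each expression level is produced through a sigmoid of weighted inputs and decays linearly. A sketch with toy parameters follows; the inference task is the inverse problem of recovering W, b, k1, and k2 from expression time series.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    def vohradsky(x, t, W, b, k1, k2):
        """dx_i/dt = k1_i * sigmoid(sum_j W_ij x_j + b_i) - k2_i * x_i."""
        return k1 / (1.0 + np.exp(-(W @ x + b))) - k2 * x

    n = 3
    W = np.array([[0.0, -2.0, 0.0],    # toy regulatory weights
                  [3.0,  0.0, 0.0],
                  [0.0,  4.0, 0.0]])
    b = np.zeros(n)
    k1, k2 = np.ones(n), 0.5 * np.ones(n)
    t = np.linspace(0.0, 10.0, 50)
    traj = odeint(vohradsky, 0.1 * np.ones(n), t, args=(W, b, k1, k2))
    print(traj[-1])                    # late-time expression levels
    ```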

  14. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE PAGES

    Liu, Jianfeng; Laird, Carl Damon

    2017-09-22

    Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem, and the encouraging numerical results indicate that our solution framework is promising for solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  15. Improvement in Generic Problem-Solving Abilities of Students by Use of Tutor-less Problem-Based Learning in a Large Classroom Setting

    PubMed Central

    Klegeris, Andis; Bahniwal, Manpreet; Hurren, Heather

    2013-01-01

    Problem-based learning (PBL) was originally introduced in medical education programs as a form of small-group learning, but its use has now spread to large undergraduate classrooms in various other disciplines. Introduction of new teaching techniques, including PBL-based methods, needs to be justified by demonstrating the benefits of such techniques over classical teaching styles. Previously, we demonstrated that introduction of tutor-less PBL in a large third-year biochemistry undergraduate class increased student satisfaction and attendance. The current study assessed the generic problem-solving abilities of students from the same class at the beginning and end of the term, and compared student scores with similar data obtained in three classes not using PBL. Two generic problem-solving tests of equal difficulty were administered such that students took different tests at the beginning and the end of the term. Blinded marking showed a statistically significant 13% increase in the test scores of the biochemistry students exposed to PBL, while no trend toward significant change in scores was observed in any of the control groups not using PBL. Our study is among the first to demonstrate that use of tutor-less PBL in a large classroom leads to statistically significant improvement in generic problem-solving skills of students. PMID:23463230

  16. Classical boson sampling algorithms with superior performance to near-term experiments

    NASA Astrophysics Data System (ADS)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
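
    The Metropolised independence sampler at the heart of the classical algorithm, in generic form: proposals are drawn from a fixed distribution q independent of the current state and accepted with probability min(1, p(x')q(x)/(p(x)q(x'))). The 1-D toy target below stands in for the discrete, high-dimensional boson sampling distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def p(x):   # unnormalised target: a two-mode toy density
        return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

    def q(x):   # proposal density N(0, 3^2), up to a constant
        return np.exp(-0.5 * (x / 3.0) ** 2)

    x, samples = 0.0, []
    for _ in range(20000):
        xp = 3.0 * rng.normal()                       # independent proposal
        if rng.random() < (p(xp) * q(x)) / (p(x) * q(xp)):
            x = xp                                    # Metropolis accept
        samples.append(x)
    print(np.mean(samples), np.std(samples))
    ```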

  19. Predicting protein structures with a multiplayer online game.

    PubMed

    Cooper, Seth; Khatib, Firas; Treuille, Adrien; Barbero, Janos; Lee, Jeehyung; Beenen, Michael; Leaver-Fay, Andrew; Baker, David; Popović, Zoran; Players, Foldit

    2010-08-05

    People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully 'crowd-sourced' through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems.

  20. Effect of Tutorial Giving on The Topic of Special Theory of Relativity in Modern Physics Course Towards Students’ Problem-Solving Ability

    NASA Astrophysics Data System (ADS)

    Hartatiek; Yudyanto; Haryoto, Dwi

    2017-05-01

    A Special Theory of Relativity handbook has been successfully developed to guide students' tutorial activity in the Modern Physics course. Students' low problem-solving ability was addressed by offering tutorials in addition to the lecture class, because class time during the course was too limited for problem-solving exercises. The explicit problem-solving-based tutorial handbook was written to emphasize five problem-solving strategies: (1) focus on the problem, (2) picture the physical facts, (3) plan the solution, (4) solve the problem, and (5) check the result. This research and development (R&D) effort consisted of three main steps: (1) preliminary study, (2) development of the draft product, and (3) product validation. The draft product was validated by experts, who assessed the feasibility of the material and predicted the effect of the tutorials by means of questionnaires on a scale of 1 to 4. Students' problem-solving ability in the Special Theory of Relativity showed very good qualification, implying that the tutorials, supported by the handbook, increased students' problem-solving ability. The empirical test revealed that the developed handbook was significantly effective in improving students' concept mastery and problem-solving ability; both were in the middle category, with gains of 0.31 and 0.41, respectively.

  1. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, problems involving rough faults are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observation. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently, on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, in terms of the rupture's inherent length scales, below which the rupture ceases to be sensitive to the roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  2. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes

    PubMed Central

    Wade, Shari L.; Cassedy, Amy E.; Fulks, Lauren E.; Taylor, H. Gerry; Stancin, Terry; Kirkwood, Michael W.; Yeates, Keith O.; Kurowski, Brad G.

    2017-01-01

    Objective To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Design Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Setting Four children’s hospitals and 1 general hospital, with level 1 trauma units. Participants Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Main Outcome Measures Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. Results The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23–.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Conclusions Problem-solving after TBI differs from normative samples and is associated with functional impairments. The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes. PMID:28389109

  3. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes.

    PubMed

    Wade, Shari L; Cassedy, Amy E; Fulks, Lauren E; Taylor, H Gerry; Stancin, Terry; Kirkwood, Michael W; Yeates, Keith O; Kurowski, Brad G

    2017-08-01

    To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Four children's hospitals and 1 general hospital, with level 1 trauma units. Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23-.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Problem-solving after TBI differs from normative samples and is associated with functional impairments. The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  5. Skills of U.S. Unemployed, Young, and Older Adults in Sharper Focus: Results from the Program for the International Assessment of Adult Competencies (PIAAC) 2012/2014. First Look. NCES 2016-039

    ERIC Educational Resources Information Center

    Rampey, Bobby D.; Finnegan, Robert; Goodman, Madeline; Mohadjer, Leyla; Krenzke, Tom; Hogan, Jacquie; Provasnik, Stephen

    2016-01-01

    The "Program for the International Assessment of Adult Competencies" (PIAAC) is a cyclical, large-scale study of adult skills and life experiences focusing on education and employment. Nationally representative samples of adults between the ages of 16 and 65 are administered an assessment of literacy, numeracy, and problem solving in…

  6. Ellipsoidal universe can solve the cosmic microwave background quadrupole problem.

    PubMed

    Campanelli, L; Cea, P; Tedesco, L

    2006-09-29

    The recent 3 yr Wilkinson Microwave Anisotropy Probe data have confirmed the anomaly concerning the low quadrupole amplitude compared to the best-fit Lambda-cold dark matter prediction. We show that by allowing the large-scale spatial geometry of our universe to be plane symmetric, with eccentricity at decoupling of order 10^-2, the quadrupole amplitude can be drastically reduced without affecting higher multipoles of the angular power spectrum of the temperature anisotropy.

  7. VALIDATION OF ANSYS FINITE ELEMENT ANALYSIS SOFTWARE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HAMM, E.R.

    2003-06-27

    This document provides a record of the verification and validation of the ANSYS Version 7.0 software installed on selected CH2M HILL computers. The issues addressed include software verification, installation, validation, configuration management, and error reporting. The ANSYS® computer program is a large-scale, multi-purpose finite element program which may be used to solve several classes of engineering analysis problems. The analysis capabilities of ANSYS Full Mechanical Version 7.0 installed on selected CH2M Hill Hanford Group (CH2M HILL) Intel-processor-based computers include the ability to solve static and dynamic structural analyses, steady-state and transient heat transfer problems, mode-frequency and buckling eigenvalue problems, static or time-varying magnetic analyses, and various types of field and coupled-field applications. The program contains many special features which allow nonlinearities or secondary effects to be included in the solution, such as plasticity, large strain, hyperelasticity, creep, swelling, large deflections, contact, stress stiffening, temperature dependency, material anisotropy, and thermal radiation. The ANSYS program has been in commercial use since 1970, and has been used extensively in the aerospace, automotive, construction, electronic, energy services, manufacturing, nuclear, plastics, oil, and steel industries.

  8. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem. For a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
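
    The quadratic-cost argument can be made concrete: with cost sum_ij W_ij (x_i - x_j)^2 = x^T L x and zero-mean, unit-norm constraints, the optimal one-dimensional layout is the Laplacian eigenvector with the smallest nonzero eigenvalue. A short sketch of that standard relaxation (ours, not the paper's code):

```python
import numpy as np

def optimal_layout_1d(W):
    """Minimize sum_ij W_ij (x_i - x_j)^2 = x^T L x subject to
    sum(x) = 0 and ||x|| = 1: the minimizer is the eigenvector of the
    Laplacian L with the smallest nonzero eigenvalue."""
    L = np.diag(W.sum(axis=1)) - W           # Laplacian of the wiring graph
    vals, vecs = np.linalg.eigh(L)           # ascending eigenvalues
    return vecs[:, 1]                        # skip the constant (zero) mode

# Toy wiring: a chain of five neurons plus one weaker long-range wire.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
W[0, 4] = W[4, 0] = 0.5
print(optimal_layout_1d(W))    # positions fall roughly in chain order
```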

  9. The accurate particle tracer code

    DOE PAGES

    Wang, Yulei; Liu, Jian; Qin, Hong; ...

    2017-07-20

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  10. The accurate particle tracer code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yulei; Liu, Jian; Qin, Hong

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world’s fastest computer, the Sunway TaihuLight supercomputer, by supporting the master–slave architecture of Sunway many-core processors. Here, based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  11. Transient analysis of 1D inhomogeneous media by dynamic inhomogeneous finite element method

    NASA Astrophysics Data System (ADS)

    Yang, Zailin; Wang, Yao; Hei, Baoping

    2013-12-01

    The dynamic inhomogeneous finite element method is studied for use in the transient analysis of one-dimensional inhomogeneous media. The general formula of the inhomogeneous consistent mass matrix is established based on the shape function. In order to research the advantages of this method, it is compared with the general finite element method. A linear bar element is chosen for the discretization tests of material parameters with two fictitious distributions, and a numerical example is solved to observe the differences in the results between the two methods. Some characteristics of the dynamic inhomogeneous finite element method that demonstrate its advantages are obtained through comparison with the general finite element method. It is found that the method can be used to solve elastic wave motion problems with a large element scale and a large number of iteration steps.
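
    The inhomogeneous consistent mass matrix follows the usual finite element recipe, M_ab = integral of rho(x) N_a(x) N_b(x) dx, except that the density varies inside the element instead of being averaged. A hedged sketch for a two-node linear bar, using Gauss-Legendre quadrature (our illustration, not the authors' code):

```python
import numpy as np

def consistent_mass_linear_bar(rho, x0, x1, ngauss=3):
    """Element mass matrix M_ab = int rho(x) N_a(x) N_b(x) dx for a
    two-node linear bar on [x0, x1], by Gauss-Legendre quadrature.
    The density rho(x) varies inside the element."""
    pts, wts = np.polynomial.legendre.leggauss(ngauss)   # nodes on [-1, 1]
    h = x1 - x0
    M = np.zeros((2, 2))
    for xi, w in zip(pts, wts):
        x = x0 + 0.5 * h * (xi + 1.0)                    # physical coordinate
        N = np.array([0.5 * (1 - xi), 0.5 * (1 + xi)])   # linear shape functions
        M += w * rho(x) * np.outer(N, N) * 0.5 * h       # Jacobian = h/2
    return M

# Example: density growing linearly across the element.
print(consistent_mass_linear_bar(lambda x: 1.0 + 2.0 * x, 0.0, 1.0))
```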

  12. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to add considerable time to data transfers between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
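
    The contiguous-buffer fix reported above is easy to illustrate: instead of transferring each strided ghost slice separately, pack the slices into one contiguous buffer, transfer once, and unpack on the other side. Below is a NumPy stand-in for the device arrays (illustrative only; the actual code operates on GPU memory with CUDA copies):

```python
import numpy as np

def pack_ghost_z(field, width):
    """Gather the z-direction ghost layers (strided, non-contiguous in C
    order) into one contiguous buffer for a single transfer."""
    lo = field[:, :, :width]
    hi = field[:, :, -width:]
    return np.concatenate([np.ascontiguousarray(lo).ravel(),
                           np.ascontiguousarray(hi).ravel()])

def unpack_ghost_z(buf, halo, width):
    """Scatter a received buffer into the neighbour subdomain's halo."""
    n = buf.size // 2
    shape = halo.shape[:2] + (width,)
    halo[:, :, :width] = buf[:n].reshape(shape)
    halo[:, :, -width:] = buf[n:].reshape(shape)

field = np.arange(6 * 4 * 4, dtype=np.float32).reshape(6, 4, 4)
buf = pack_ghost_z(field, width=1)   # one contiguous transfer, not many strided ones
```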

  13. New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1998-01-01

    Subspace and Lanczos iterations have been developed, well documented, and widely accepted as efficient methods for obtaining p-lowest eigen-pair solutions of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancements. Numerical performance, in terms of accuracy and efficiency of the proposed sparse strategies for Subspace and Lanczos algorithm, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on the IBM-R6000/590 and SunSparc 20 workstations.
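
    In modern terms, the shift-invert Lanczos iteration for the p lowest eigenpairs of K x = lambda M x is a few lines with a sparse solver. A toy sketch, with SciPy's eigsh standing in for the tailored sparse technology described above:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# p-lowest eigenpairs of a generalized problem K x = lambda M x via
# shift-invert Lanczos. Toy 1-D stiffness/mass matrices stand in for a
# real structural model.
n, p = 1000, 6
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
K = sp.diags([off, main, off], [-1, 0, 1], format="csc")
M = sp.identity(n, format="csc")

vals, vecs = eigsh(K, k=p, M=M, sigma=0.0, which="LM")  # shift-invert about 0
print(np.sort(vals))     # the p lowest eigenvalues
```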

  14. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2012-01-01

    In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarming optimization, and genetic algorithm to seek a near optimal solution among a list of feasible initial populations. The final optimal solution can be found by using the solution of the first phase as the initial condition to the SQP algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
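
    The two-phase idea is straightforward to sketch: a global heuristic supplies a near-optimal point that seeds a local SQP solve. Below, SciPy's differential evolution stands in for the ant colony/PSO/GA phase and SLSQP for the SQP phase, on a small constrained toy problem (our illustration, not the DSN scheduling model):

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + np.sin(3.0 * x[0]) * x[1]

# One inequality constraint, x0^2 + x1^2 <= 4, in SLSQP's dict form.
# Phase 1 searches only the box; phase 2 enforces the constraint.
cons = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2}]
bounds = [(-3.0, 3.0), (-3.0, 3.0)]

seed = differential_evolution(objective, bounds, seed=0)     # phase 1: heuristic
final = minimize(objective, seed.x, method="SLSQP",          # phase 2: SQP polish
                 bounds=bounds, constraints=cons)
print(seed.x, "->", final.x, final.fun)
```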

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Zhaojun; Yang, Chao

    What is common among electronic structure calculation, design of MEMS devices, vibrational analysis of high speed railways, and simulation of the electromagnetic field of a particle accelerator? The answer: they all require solving large-scale nonlinear eigenvalue problems. In fact, these are just a handful of examples in which solving nonlinear eigenvalue problems accurately and efficiently is becoming increasingly important. Recognizing the importance of this class of problems, an invited minisymposium dedicated to nonlinear eigenvalue problems was held at the 2005 SIAM Annual Meeting. The purpose of the minisymposium was to bring together numerical analysts and application scientists to showcase some of the cutting edge results from both communities and to discuss the challenges they are still facing. The minisymposium consisted of eight talks divided into two sessions. The first three talks focused on a type of nonlinear eigenvalue problem arising from electronic structure calculations. In this type of problem, the matrix Hamiltonian H depends, in a non-trivial way, on the set of eigenvectors X to be computed. The invariant subspace spanned by these eigenvectors also minimizes a total energy function that is highly nonlinear with respect to X on a manifold defined by a set of orthonormality constraints. In other applications, the nonlinearity of the matrix eigenvalue problem is restricted to the dependency of the matrix on the eigenvalues to be computed. These problems are often called polynomial or rational eigenvalue problems. In the second session, Christian Mehl from Technical University of Berlin described numerical techniques for solving a special type of polynomial eigenvalue problem arising from vibration analysis of rail tracks excited by high-speed trains.

  16. Acceleration of aircraft-level Traffic Flow Management

    NASA Astrophysics Data System (ADS)

    Rios, Joseph Lucio

    This dissertation describes novel approaches to solving large-scale, high-fidelity, aircraft-level Traffic Flow Management scheduling problems. Depending on the methods employed, solving these problems to optimality can take longer than the length of the planning horizon in question. Research in this domain typically focuses on the quality of the modeling used to describe the problem and the benefits achieved from the optimized solution, often treating computational aspects as secondary or tertiary. The work presented here takes the complementary view and considers the computational aspect as the primary concern. To this end, a previously published model for solving this Traffic Flow Management scheduling problem is used as the starting point for this study. The model proposed by Bertsimas and Stock Patterson is a binary integer program taking into account all major resource capacities and the trajectories of each flight to decide which flights should be held in which resource for what amount of time in order to satisfy all capacity requirements. For large instances, the solve time using state-of-the-art solvers is prohibitive for use within a potential decision support tool. With this dissertation, however, it will be shown that solving can be achieved in reasonable time for instances of real-world size. Five other techniques developed and tested for this dissertation will be described in detail. These are heuristic methods that provide good results; performance is measured in terms of runtime and "optimality gap." We then describe the most successful method presented in this dissertation: Dantzig-Wolfe Decomposition. Results indicate that a parallel implementation of Dantzig-Wolfe Decomposition optimally solves the original problem in much reduced time and with better integrality and a smaller optimality gap than any of the heuristic methods or state-of-the-art commercial solvers. The solution quality improves in every measurable way as the number of subproblems solved in parallel increases. A maximal decomposition provides the best results of any method tested. The convergence qualities of Dantzig-Wolfe Decomposition have been criticized in the past, so we examine what makes the Bertsimas-Stock Patterson model so amenable to this method. These mathematical qualities of the model are generalized to provide guidance on other problems that may benefit from massively parallel Dantzig-Wolfe Decomposition. This result, together with the development of the software and the experimental results indicating the feasibility of real-time, nationwide Traffic Flow Management scheduling, represents the major contributions of this dissertation.
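
    A minimal Dantzig-Wolfe column-generation loop makes the master/subproblem split concrete. The toy below couples a box-constrained subproblem (priced by inspection) to a restricted master LP with artificial variables for feasibility; in the dissertation's setting each flight/resource block would be its own subproblem, priced in parallel. This is a generic sketch, not the Bertsimas-Stock Patterson model.

```python
import numpy as np
from scipy.optimize import linprog

BIG_M = 1e6   # artificial-variable penalty keeping the master feasible

def dantzig_wolfe(c, A, b, u, max_cols=100, tol=1e-9):
    """min c@x  s.t.  A@x = b (coupling rows), 0 <= x <= u (easy subproblem)."""
    m = len(b)
    cols = [np.zeros_like(c, dtype=float)]        # start from the vertex x = 0
    while True:
        k = len(cols)
        Acols = np.column_stack([A @ v for v in cols])
        Aeq = np.block([[Acols, np.eye(m), -np.eye(m)],
                        [np.ones((1, k)), np.zeros((1, 2 * m))]])
        cost = np.concatenate([[c @ v for v in cols], BIG_M * np.ones(2 * m)])
        rm = linprog(cost, A_eq=Aeq, b_eq=np.append(b, 1.0), method="highs")
        lam = rm.eqlin.marginals[:m]              # duals of coupling rows
        mu = rm.eqlin.marginals[m]                # dual of convexity row
        red = c - lam @ A                         # pricing: min red@x over the
        v = np.where(red < 0, u, 0.0)             # box, solved by inspection
        if red @ v - mu >= -tol or k >= max_cols: # no improving extreme point
            x = sum(w * vk for w, vk in zip(rm.x[:k], cols))
            return x, rm.fun
        cols.append(v)                            # add the new column

rng = np.random.default_rng(0)
A = rng.uniform(size=(3, 8))
u = np.ones(8)
b = A @ rng.uniform(0.2, 0.8, size=8)             # feasible right-hand side
c = rng.normal(size=8)
x, obj = dantzig_wolfe(c, A, b, u)
print(obj, np.allclose(A @ x, b, atol=1e-6))
```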

  17. Numerical simulation using vorticity-vector potential formulation

    NASA Technical Reports Server (NTRS)

    Tokunaga, Hiroshi

    1993-01-01

    An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. For this reason, the pseudo-spectral method has so far been the method of choice. The finite difference and finite element methods, however, are widely applied to flows of practical importance, since they are easily adapted to complex geometric configurations. Several problems arise in applying the finite difference method to direct and large-eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. To obtain high efficiency, a multi-grid Poisson solver is combined with a higher-order accurate finite difference method. The formulation is another important issue in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have traditionally been solved in the primitive-variables formulation, whose major difficulty is the rigorous satisfaction of the equation of continuity; in general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, by contrast, the velocity field satisfies the equation of continuity automatically. For this reason, the vorticity-vector potential method was extended to the generalized coordinate system. In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and a 4th-order accurate difference method. We present the computational method and apply it to flows in a square cavity at large Reynolds number in order to investigate its effectiveness.

  18. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.

  19. Personal and parental problem drinking: effects on problem-solving performance and self-appraisal.

    PubMed

    Slavkin, S L; Heimberg, R G; Winning, C D; McCaffrey, R J

    1992-01-01

    This study examined the problem-solving performances and self-appraisals of problem-solving ability of college-age subjects with and without parental history of problem drinking. Contrary to our predictions, children of problem drinkers (COPDs) were rated as somewhat more effective in their problem-solving skills than non-COPDs, undermining prevailing assumptions about offspring from alcoholic households. While this difference was not large and was qualified by other variables, subjects' own alcohol abuse did exert a detrimental effect on problem-solving performance, regardless of parental history of problem drinking. However, a different pattern was evident for problem-solving self-appraisals. Alcohol-abusing non-COPDs saw themselves as effective problem-solvers while alcohol-abusing COPDs appraised themselves as poor problem-solvers. In addition, the self-appraisals of alcohol-abusing COPDs were consistent with objective ratings of solution effectiveness (i.e., they were both negative) while alcohol-abusing non-COPDs were overly positive in their appraisals, opposing the judgments of trained raters. This finding suggests that the relationship between personal alcohol abuse and self-appraised problem-solving abilities may differ as a function of parental history of problem drinking. Limitations on the generalizability of findings are addressed.

  20. Efficient ICCG on a shared memory multiprocessor

    NASA Technical Reports Server (NTRS)

    Hammond, Steven W.; Schreiber, Robert

    1989-01-01

    Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
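
    The static dependence analysis mentioned above is usually called level scheduling: rows of the triangular factor are grouped into levels, and all rows within one level can be solved in parallel. A serial NumPy/SciPy sketch of the idea (ours, not the Sequent implementation):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def levels_lower(Lcsr):
    """level[i] = 1 + max level among the earlier unknowns row i touches."""
    n = Lcsr.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        s, e = Lcsr.indptr[i], Lcsr.indptr[i + 1]
        deps = [j for j in Lcsr.indices[s:e] if j < i]
        level[i] = 1 + max((level[j] for j in deps), default=0)
    return level

def solve_by_levels(Lcsr, b):
    lev = levels_lower(Lcsr)
    x = np.zeros_like(b, dtype=float)
    for l in range(1, lev.max() + 1):
        for i in np.where(lev == l)[0]:   # rows within one level are
            s, e = Lcsr.indptr[i], Lcsr.indptr[i + 1]   # independent, so this
            idx, vals = Lcsr.indices[s:e], Lcsr.data[s:e]   # loop parallelizes
            off = sum(v * x[j] for j, v in zip(idx, vals) if j != i)
            x[i] = (b[i] - off) / vals[idx == i][0]
    return x

L = sp.csr_matrix(np.tril(np.random.default_rng(1).uniform(1.0, 2.0, (6, 6))))
b = np.arange(6.0)
print(np.allclose(solve_by_levels(L, b), spsolve_triangular(L, b)))
```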

  1. Motion-based prediction is sufficient to solve the aperture problem

    PubMed Central

    Perrinet, Laurent U; Masson, Guillaume S

    2012-01-01

    In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated for the detection of motion in the aperture problem: as the luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away while coherent information diffuses progressively to the global scale. Most previous models included ad-hoc mechanisms such as end-stopped cells or a selection layer to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have proved that motion-based predictive coding, as it is implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction underlying a large class of sensory computations. PMID:22734489

  2. Some Problems of Industrial Scale-Up.

    ERIC Educational Resources Information Center

    Jackson, A. T.

    1985-01-01

    Scientific ideas of the biological laboratory are turned into economic realities in industry only after several problems are solved. Economics of scale, agitation, heat transfer, sterilization of medium and air, product recovery, waste disposal, and future developments are discussed using aerobic respiration as the example in the scale-up…

  3. Parallel block schemes for large scale least squares computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
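
    The parallel structure of block-angular least squares can be sketched directly: each diagonal block is factored independently, leaving a small reduced problem in the coupling unknowns. A dense NumPy illustration of that elimination, assuming full-rank blocks (our sketch, not the Geodetic Survey code):

```python
import numpy as np

def block_angular_lstsq(blocks):
    """min over (x_i, y) of sum_i ||A_i x_i + B_i y - b_i||^2.
    blocks: list of (A_i, B_i, b_i); assumes each A_i has full column rank."""
    rows, rhs = [], []
    for A, B, b in blocks:                        # independent per block -> parallel
        Q, _ = np.linalg.qr(A, mode="reduced")
        proj = lambda M, Q=Q: M - Q @ (Q.T @ M)   # project out range(A_i)
        rows.append(proj(B))
        rhs.append(proj(b))
    # Small reduced problem in the coupling unknowns y:
    y, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    # Back-substitution, again independent per block:
    xs = [np.linalg.lstsq(A, b - B @ y, rcond=None)[0] for A, B, b in blocks]
    return y, xs

rng = np.random.default_rng(2)
blocks = [(rng.normal(size=(10, 3)), rng.normal(size=(10, 2)), rng.normal(size=10))
          for _ in range(4)]
y, xs = block_angular_lstsq(blocks)
print(y)
```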

  4. Optimal network modification for spectral radius dependent phase transitions

    NASA Astrophysics Data System (ADS)

    Rosen, Yonatan; Kirsch, Lior; Louzoun, Yoram

    2016-09-01

    The dynamics of contact processes on networks is often determined by the spectral radius of the networks' adjacency matrices. A decrease of the spectral radius can prevent the outbreak of an epidemic, or impact the synchronization among systems of coupled oscillators. The spectral radius is thus tightly linked to network dynamics and function. As such, finding the minimal change in network structure necessary to reach the intended spectral radius is important theoretically and practically. Given contemporary big-data resources such as large-scale communication or social networks, this problem should be solved with a low runtime complexity. We introduce a novel method for finding the minimal decrease in edge weights required to reach a given spectral radius. The problem is formulated as a convex optimization problem, where a global optimum is guaranteed. The method can easily be adjusted to an efficient discrete removal of edges. We introduce a variant of the method which finds the optimal decrease with a focus on the weights of vertices. The proposed algorithm is exceptionally scalable, solving the problem for real networks of tens of millions of edges in a short time.
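
    The sensitivity behind such methods is simple: for a nonnegative matrix, the derivative of the spectral radius with respect to an entry is d rho / d A_ij = u_i v_j / (u.v), with u and v the left and right Perron eigenvectors. The sketch below uses that gradient in a naive projected-descent loop; the paper formulates a convex program, so this is only the underlying intuition, not their algorithm.

```python
import numpy as np

def perron(A, iters=300):
    """Power iteration; assumes A is nonnegative and irreducible enough
    for the Perron vector to dominate."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v                    # Rayleigh quotient, unit eigenvector

def reduce_spectral_radius(A, target, step=0.05, max_iter=2000):
    A = A.astype(float).copy()
    support = A > 0                        # only shrink existing edges
    for _ in range(max_iter):
        rho, v = perron(A)
        if rho <= target:
            break
        _, u = perron(A.T)                 # left Perron vector
        grad = np.outer(u, v) / (u @ v)    # d rho / d A_ij = u_i v_j / (u.v)
        A[support] = np.maximum(A[support] - step * grad[support], 0.0)
    return A

A = (np.random.default_rng(3).uniform(size=(8, 8)) < 0.4).astype(float)
B = reduce_spectral_radius(A, target=1.5)
print(perron(A)[0], "->", perron(B)[0])
```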

  5. Key aspects of coronal heating

    PubMed Central

    Klimchuk, James A.

    2015-01-01

    We highlight 10 key aspects of coronal heating that must be understood before we can consider the problem to be solved. (1) All coronal heating is impulsive. (2) The details of coronal heating matter. (3) The corona is filled with elemental magnetic strands. (4) The corona is densely populated with current sheets. (5) The strands must reconnect to prevent an infinite build-up of stress. (6) Nanoflares repeat with different frequencies. (7) What is the characteristic magnitude of energy release? (8) What causes the collective behaviour responsible for loops? (9) What are the onset conditions for energy release? (10) Chromospheric nanoflares are not a primary source of coronal plasma. Significant progress in solving the coronal heating problem will require coordination of approaches: observational studies, field-aligned hydrodynamic simulations, large-scale and localized three-dimensional magnetohydrodynamic simulations, and possibly also kinetic simulations. There is a unique value to each of these approaches, and the community must strive to coordinate better. PMID:25897094

  6. Exploration versus exploitation in space, mind, and society

    PubMed Central

    Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.

    2015-01-01

    Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope. PMID:25487706

  7. Prospects for mirage mediation

    NASA Astrophysics Data System (ADS)

    Pierce, Aaron; Thaler, Jesse

    2006-09-01

    Mirage mediation reduces the fine-tuning in the minimal supersymmetric standard model by dynamically arranging a cancellation between anomaly-mediated and modulus-mediated supersymmetry breaking. We explore the conditions under which a mirage "messenger scale" is generated near the weak scale and the little hierarchy problem is solved. We do this by explicitly including the dynamics of the SUSY-breaking sector needed to cancel the cosmological constant. The most plausible scenario for generating a low mirage scale does not readily admit an extra-dimensional interpretation. We also review the possibilities for solving the μ/Bμ problem in such theories, a potential hidden source of fine-tuning.

  8. Algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations with the use of parallel computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moryakov, A. V., E-mail: sailor@orc.ru

    2016-12-15

    An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and by the possibility of solving nonlinear problems by correcting the operator in accordance with the solution obtained in the previous iteration.
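
    The series representation can be illustrated on x'(t) = A x(t): expand the solution in Legendre polynomials on [0, 1] and fit the vector coefficients by least squares at collocation points. This is a sketch of the representation only, not the EDELWEISS algorithm or its parallel structure.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_series_solve(A, x0, K=12, npts=40):
    """Least-squares Legendre-series solution of x'(t) = A@x(t), x(0) = x0,
    on t in [0, 1]. Returns a callable for scalar t."""
    A = np.asarray(A, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    s = np.linspace(-1.0, 1.0, npts)             # collocation points on [-1, 1]
    eye = np.eye(K)
    P = np.stack([leg.legval(s, eye[k]) for k in range(K)], axis=1)
    dP = np.stack([2.0 * leg.legval(s, leg.legder(eye[k])) for k in range(K)],
                  axis=1)                        # chain rule: d/dt = 2 d/ds
    E = np.kron(dP, np.eye(n)) - np.kron(P, A)   # ODE residual rows
    P0 = np.array([(-1.0) ** k for k in range(K)])[None, :]   # P_k at t = 0
    M = np.vstack([E, npts * np.kron(P0, np.eye(n))])         # weighted IC
    rhs = np.concatenate([np.zeros(npts * n), npts * x0])
    z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    C = z.reshape(K, n)                          # vector coefficients c_k

    def x(t):                                    # evaluate the series at scalar t
        basis = np.array([leg.legval(2.0 * t - 1.0, eye[k]) for k in range(K)])
        return basis @ C
    return x

A = np.array([[0.0, 1.0], [-4.0, 0.0]])          # x'' = -4x as a first-order system
x = legendre_series_solve(A, [1.0, 0.0])
print(x(1.0))    # close to [cos(2), -2*sin(2)]
```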

  9. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to reduce the difficulty of finding and using data, reduce processing costs, and minimize incompatibility between data sources. Artificial Intelligence (AI) techniques have been suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem-solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem-solving systems, AI problem-solving paradigms, query processing, and distributed data bases.

  10. Learning biology through connecting mathematics to scientific mechanisms: Student outcomes and teacher supports

    NASA Astrophysics Data System (ADS)

    Schuchardt, Anita

    Integrating mathematics into science classrooms has been part of the conversation in science education for a long time. However, studies on student learning after incorporating mathematics into the science classroom have shown mixed results. Understanding the mixed effects of including mathematics in science has been hindered by a historical focus on characteristics of integration tangential to student learning (e.g., shared elements, extent of integration). A new framework is presented emphasizing the epistemic role of mathematics in science. An epistemic role of mathematics missing from the current literature is identified: use of mathematics to represent scientific mechanisms, Mechanism Connected Mathematics (MCM). Building on prior theoretical work, it is proposed that having students develop mathematical equations that represent scientific mechanisms could elevate their conceptual understanding and quantitative problem solving. Following design and implementation of an MCM unit in inheritance, a large-scale quantitative analysis of pre- and post-implementation test results showed that MCM students, compared to traditionally instructed students, had significantly greater gains in conceptual understanding of mathematically modeled scientific mechanisms, and in their ability to solve complex quantitative problems. To gain insight into the mechanism behind the gain in quantitative problem solving, a small-scale qualitative study was conducted of two contrasting groups: 1) within-MCM instruction: competent versus struggling problem solvers, and 2) within-competent problem solvers: MCM instructed versus traditionally instructed. Competent MCM students tended to connect their mathematical inscriptions to the scientific phenomenon and to switch between mathematical and scientifically productive approaches during problem solving in potentially productive ways. The other two groups did not. To address concerns about teacher capacity presenting barriers to scalability of MCM approaches, the types and amount of teacher support needed to achieve these types of student learning gains were investigated. In the context of providing teachers with access to educative materials, students achieved learning gains in both areas in the absence of face-to-face teacher professional development. However, maximal student learning gains required the investment of face-to-face professional development. This finding can govern distribution of scarce resources, but does not preclude implementation of MCM instruction even where resource availability does not allow for face-to-face professional development.

  11. Application of computational aero-acoustics to real world problems

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    The application of computational aeroacoustics (CAA) to real-world problems is discussed, with the aim of assessing the applicability of the various techniques. It is considered that the applications are limited by the inability of the computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. It is considered that problems remain to be solved in relation to the efficient use of the power of parallel computers and in the development of turbulence modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.

  12. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    PubMed

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
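
    The core of a Chebyshev expansion of a matrix function takes only a three-term recurrence in matrix-matrix products, which is what makes the approach sparse-friendly. Below is a dense toy sketch of the generic technique, applied to a Fermi-type function (illustrative only; not the CheSS implementation):

```python
import numpy as np

def cheb_coeffs(f, K):
    """Chebyshev coefficients of f on [-1, 1] via Chebyshev-Gauss nodes."""
    k = np.arange(K)
    x = np.cos(np.pi * (k + 0.5) / K)
    fx = f(x)
    c = 2.0 / K * np.array([np.sum(fx * np.cos(np.pi * j * (k + 0.5) / K))
                            for j in range(K)])
    c[0] *= 0.5
    return c

def matrix_function(H, f, K=60):
    """f(H) by Chebyshev expansion: scale the spectrum into [-1, 1], then
    apply the three-term recurrence T_j = 2 Hs T_{j-1} - T_{j-2}."""
    lo, hi = np.linalg.eigvalsh(H)[[0, -1]]    # spectral bounds (toy version)
    a, b = (hi - lo) / 2.0, (hi + lo) / 2.0
    Hs = (H - b * np.eye(len(H))) / a
    c = cheb_coeffs(lambda x: f(a * x + b), K)
    T0, T1 = np.eye(len(H)), Hs
    F = c[0] * T0 + c[1] * T1
    for j in range(2, K):
        T0, T1 = T1, 2.0 * Hs @ T1 - T0
        F += c[j] * T1
    return F

# Fermi operator of a toy tridiagonal Hamiltonian:
H = np.diag(np.linspace(-2, 2, 8)) + 0.1 * np.eye(8, k=1) + 0.1 * np.eye(8, k=-1)
beta, mu = 4.0, 0.0
D = matrix_function(H, lambda e: 1.0 / (1.0 + np.exp(beta * (e - mu))))
print(np.trace(D))    # electron count of the toy system
```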

  13. The Development of a Small-World Network of Higher Education Students, Using a Large-Group Problem-Solving Method

    ERIC Educational Resources Information Center

    Sousa, Fernando Cardoso; Monteiro, Ileana Pardal; Pellissier, René

    2014-01-01

    This article presents the development of a small-world network using an adapted version of the large-group problem-solving method "Future Search." Two management classes in a higher education setting were selected and required to plan a project. The students completed a survey focused on the frequency of communications before and after…

  14. PROSPECTIVE ASSOCIATIONS OF DEPRESSIVE RUMINATION AND SOCIAL PROBLEM SOLVING WITH DEPRESSION: A 6-MONTH LONGITUDINAL STUDY.

    PubMed

    Hasegawa, Akira; Hattori, Yosuke; Nishimura, Haruki; Tanno, Yoshihiko

    2015-06-01

    The main purpose of this study was to examine whether depressive rumination and social problem solving are prospectively associated with depressive symptoms. Nonclinical university students (N = 161, 64 men, 97 women; M age = 19.7 yr., SD = 3.6, range = 18-61) recruited from three universities in Japan completed the Beck Depression Inventory-Second Edition (BDI-II), the Ruminative Responses Scale, Social Problem-Solving Inventory-Revised Short Version (SPSI-R:S), and the Means-Ends Problem-Solving Procedure at baseline, and the BDI-II again at 6 mo. later. A stepwise multiple regression analysis with the BDI-II and all subscales of the rumination and social problem solving measures as independent variables indicated that only the BDI-II scores and the Impulsivity/carelessness style subscale of the SPSI-R:S at Time 1 were significantly associated with BDI-II scores at Time 2 (β = 0.73, 0.12, respectively; independent variables accounted for 58.8% of the variance). These findings suggest that in Japan an impulsive and careless problem-solving style was prospectively associated with depressive symptomatology 6 mo. later, as contrasted with previous findings of a cycle of rumination and avoidance problem-solving style.

  15. Problem Solving in Electricity.

    ERIC Educational Resources Information Center

    Caillot, Michel; Chalouhi, Elias

    Two studies were conducted to describe how students perform direct current (D-C) circuit problems. It was hypothesized that problem solving in the electricity domain depends largely on good visual processing of the circuit diagram and that this processing depends on the ability to recognize when two or more electrical components are in series or…

  16. An evolving effective stress approach to anisotropic distortional hardening

    DOE PAGES

    Lester, B. T.; Scherzinger, W. M.

    2018-03-11

    A new yield surface with an evolving effective stress definition is proposed for consistently and efficiently describing anisotropic distortional hardening. Specifically, a new internal state variable is introduced to capture the thermodynamic evolution between different effective stress definitions. The corresponding yield surface and evolution equations of the internal variables are derived from thermodynamic considerations enabling satisfaction of the second law. A closest point projection return mapping algorithm for the proposed model is formulated and implemented for use in finite element analyses. Finally, select constitutive and larger-scale boundary value problems are solved to explore the capabilities of the model and examine the impact of distortional hardening on constitutive and structural responses. Importantly, these simulations demonstrate the tractability of the proposed formulation in investigating large-scale problems of interest.
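
    For the classical special case of J2 plasticity with linear isotropic hardening, the closest point projection reduces to the well-known radial return. The sketch below shows that textbook update (not the distortional-hardening model of the paper); material parameters are illustrative.

```python
import numpy as np

def radial_return(eps, state, E=200e3, nu=0.3, sigma_y=250.0, Hiso=1e3):
    """One strain-driven stress update for small-strain J2 plasticity with
    linear isotropic hardening. eps: total strain (3x3);
    state: (plastic strain tensor, hardening variable)."""
    mu = E / (2.0 * (1.0 + nu))
    kappa = E / (3.0 * (1.0 - 2.0 * nu))
    eps_p, alpha = state
    e = eps - np.trace(eps) / 3.0 * np.eye(3)      # deviatoric strain
    s = 2.0 * mu * (e - eps_p)                     # trial deviatoric stress
    norm_s = np.linalg.norm(s)
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + Hiso * alpha)
    if f_trial > 0.0:                              # plastic correction
        dgamma = f_trial / (2.0 * mu + 2.0 / 3.0 * Hiso)
        n = s / norm_s                             # return direction
        eps_p = eps_p + dgamma * n
        alpha = alpha + np.sqrt(2.0 / 3.0) * dgamma
        s = s - 2.0 * mu * dgamma * n              # project back to the surface
    sigma = s + kappa * np.trace(eps) * np.eye(3)
    return sigma, (eps_p, alpha)

state = (np.zeros((3, 3)), 0.0)
eps = np.zeros((3, 3)); eps[0, 0] = 0.01           # 1% uniaxial strain
sigma, state = radial_return(eps, state)
print(sigma[0, 0])
```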

  17. An evolving effective stress approach to anisotropic distortional hardening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, B. T.; Scherzinger, W. M.

    A new yield surface with an evolving effective stress definition is proposed for consistently and efficiently describing anisotropic distortional hardening. Specifically, a new internal state variable is introduced to capture the thermodynamic evolution between different effective stress definitions. The corresponding yield surface and evolution equations of the internal variables are derived from thermodynamic considerations enabling satisfaction of the second law. A closest point projection return mapping algorithm for the proposed model is formulated and implemented for use in finite element analyses. Finally, select constitutive and larger-scale boundary value problems are solved to explore the capabilities of the model and examine the impact of distortional hardening on constitutive and structural responses. Importantly, these simulations demonstrate the tractability of the proposed formulation in investigating large-scale problems of interest.

  18. Correction of Excessive Precipitation over Steep Mountains in a General Circulation Model (GCM)

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.

    2012-01-01

    Excessive precipitation over steep and high mountains (EPSM) is a well-known problem in GCMs and regional climate models, even at resolutions as high as 19 km. The affected regions include the Andes, the Himalayas, the Sierra Madre, New Guinea and others. This problem also shows up in some data assimilation products. Among the possible causes investigated in this study, we found that the most important one, by far, is a missing upward transport of heat out of the boundary layer due to the vertical circulations forced by the daytime subgrid-scale upslope winds, which are in turn forced by the heated boundary layer on the slopes. These upslope winds are associated with large subgrid-scale topographic variance, which is found over steep mountains. Without such subgrid-scale heat ventilation, the resolvable-scale upslope flow in the boundary layer generated by surface sensible heat flux along the mountain slopes is excessive. Such an excessive resolvable-scale upslope flow in the boundary layer, combined with the high moisture content in the boundary layer, results in excessive moisture transport toward mountaintops, which in turn gives rise to excessive precipitation over the affected regions. We have parameterized the effects of subgrid-scale heated-slope-induced vertical circulation (SHVC) by removing heat from the boundary layer and depositing it in the layers higher up when the topographic variance exceeds a critical value. Test results using NASA/Goddard's GEOS-5 GCM have shown that the EPSM problem is largely solved.

  19. Reconciling large- and small-scale structure in Twin Higgs models

    DOE PAGES

    Prilepina, Valentina; Tsai, Yuhsin

    2017-09-08

    Here, we study possible extensions of the Twin Higgs model that solve the Hierarchy problem and simultaneously address problems of the large- and small-scale structures of the Universe. Besides naturally providing dark matter (DM) candidates as the lightest charged twin fermions, the twin sector contains a light photon and neutrinos, which can modify structure formation relative to the prediction from the ΛCDM paradigm. We focus on two viable scenarios. First, we study a Fraternal Twin Higgs model in which the spin-3/2 baryon $\widehat{Ω}$ ($\widehat{b}\widehat{b}\widehat{b}$) and the lepton twin tau $\widehat{τ}$ contribute to the dominant and subcomponent dark matter densities. A non-decoupled scattering between the twin tau and twin neutrino arising from a gauged twin lepton number symmetry provides a drag force that damps the density inhomogeneity of a dark matter subcomponent. Next, we consider the possibility of introducing a twin hydrogen atom $\widehat{H}$ as the dominant DM component. After recombination, a small fraction of the twin protons and leptons remains ionized during structure formation, and their scattering to twin neutrinos through a gauged U(1)_{B-L} force provides the mechanism that damps the density inhomogeneity. Both scenarios realize the Partially Acoustic dark matter (PAcDM) scenario and explain the σ_8 discrepancy between the CMB and weak lensing results. Moreover, the self-scattering neutrino behaves as a dark fluid that enhances the size of the Hubble rate H_0 to accommodate the local measurement result while satisfying the CMB constraint. For the small-scale structure, the scattering of $\widehat{Ω}$'s and $\widehat{H}$'s through the twin photon exchange generates a self-interacting dark matter (SIDM) model that solves the mass deficit problem from dwarf galaxy to galaxy cluster scales. Furthermore, when varying general choices of the twin photon coupling, bounds from the dwarf galaxy and the cluster merger observations can set an upper limit on the twin electric coupling.

  20. Reconciling large- and small-scale structure in Twin Higgs models

    NASA Astrophysics Data System (ADS)

    Prilepina, Valentina; Tsai, Yuhsin

    2017-09-01

    We study possible extensions of the Twin Higgs model that solve the hierarchy problem and simultaneously address problems of the large- and small-scale structure of the Universe. Besides naturally providing dark matter (DM) candidates as the lightest charged twin fermions, the twin sector contains a light photon and neutrinos, which can modify structure formation relative to the prediction of the ΛCDM paradigm. We focus on two viable scenarios. First, we study a Fraternal Twin Higgs model in which the spin-3/2 baryon $\hat{\Omega} \sim (\hat{b}\hat{b}\hat{b})$ and the twin tau $\hat{\tau}$ contribute to the dominant and subcomponent dark matter densities. A non-decoupled scattering between the twin tau and twin neutrino, arising from a gauged twin lepton number symmetry, provides a drag force that damps the density inhomogeneity of a dark matter subcomponent. Next, we consider the possibility of introducing a twin hydrogen atom $\hat{H}$ as the dominant DM component. After recombination, a small fraction of the twin protons and leptons remains ionized during structure formation, and their scattering to twin neutrinos through a gauged $U(1)_{B-L}$ force provides the mechanism that damps the density inhomogeneity. Both scenarios realize the Partially Acoustic dark matter (PAcDM) scenario and explain the $\sigma_8$ discrepancy between the CMB and weak lensing results. Moreover, the self-scattering neutrino behaves as a dark fluid that enhances the Hubble rate $H_0$ to accommodate the local measurement result while satisfying the CMB constraint. For the small-scale structure, the scattering of $\hat{\Omega}$'s and $\hat{H}$'s through twin photon exchange generates a self-interacting dark matter (SIDM) model that solves the mass deficit problem from dwarf galaxy to galaxy cluster scales. Furthermore, when varying general choices of the twin photon coupling, bounds from dwarf galaxy and cluster merger observations can set an upper limit on the twin electric coupling.

  1. A brief historical introduction to Euler's formula for polyhedra, topology, graph theory and networks

    NASA Astrophysics Data System (ADS)

    Debnath, Lokenath

    2010-09-01

    This article is essentially devoted to a brief historical introduction to Euler's formula for polyhedra, topology, the theory of graphs, and networks, with many examples from the real world. The celebrated Königsberg seven-bridge problem and some of the basic properties of graphs and networks, useful for understanding the macroscopic behaviour of real physical systems, are included. We also mention some important and modern applications of graph theory or network problems, from transportation to telecommunications. Graphs or networks are effectively used as powerful tools in industrial, electrical and civil engineering, and in communication networks for the planning of business and industry. Graph theory and combinatorics can be used to understand the changes that occur in many large and complex scientific, technical and medical systems. With the advent of fast large computers and the ubiquitous Internet, consisting of a very large network of computers, large-scale complex optimization problems can be modelled in terms of graphs or networks and then solved by algorithms available in graph theory. Many large and more complex combinatorial problems dealing with the possible arrangements of situations of various kinds, and computing the number and properties of such arrangements, can be formulated in terms of networks. The Knight's tour problem, Hamilton's tour problem, the problem of magic squares, the Euler Graeco-Latin squares problem and their modern developments in the twentieth century are also included.
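
    Two of the classical results mentioned in this record are easy to check numerically. The sketch below verifies Euler's polyhedron formula V - E + F = 2 for the cube and tests the Königsberg degree condition (an Eulerian walk requires zero or two odd-degree vertices); the bridge list is the standard four-landmass, seven-bridge multigraph, written out here for illustration.

    ```python
    from collections import Counter

    # Cube: V - E + F = 2 (Euler's polyhedron formula)
    V, E, F = 8, 12, 6
    assert V - E + F == 2

    # Königsberg bridges: four land masses, seven bridges (multigraph edges)
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
    degree = Counter()
    for u, v in bridges:
        degree[u] += 1
        degree[v] += 1

    odd = [n for n, d in degree.items() if d % 2 == 1]
    # An Eulerian walk exists only with 0 or 2 odd-degree vertices;
    # Königsberg has 4, so no walk crosses each bridge exactly once.
    print(sorted(degree.items()), "odd-degree vertices:", len(odd))
    ```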

  2. Skills of U.S. Unemployed, Young, and Older Adults in Sharper Focus: Results from the Program for the International Assessment of Adult Competencies (PIAAC) 2012/2014. First Look. NCES 2016-039rev

    ERIC Educational Resources Information Center

    Rampey, Bobby D.; Finnegan, Robert; Mohadjer, Leyla; Krenzke, Tom; Hogan, Jacquie; Provasnik, Stephen

    2016-01-01

    The Program for the International Assessment of Adult Competencies (PIAAC) is a cyclical, large-scale study of adult skills and life experiences focusing on education and employment. Nationally representative samples of adults between the ages of 16 and 65 are administered an assessment of literacy, numeracy, and problem solving in technology rich…

  3. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
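
    For readers who want a concrete baseline for the estimators discussed above, here is a minimal sketch of the simplest pair-count estimator, ξ(r) ≈ DD/RR - 1, on a toy 3D sample. The bin edges, catalog sizes, and brute-force pair counting are assumptions for the example; real survey work uses tree codes plus the mean-density, weighting, and selection-function corrections the paper develops.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)

    def pair_counts(points, bins):
        """Histogram of pairwise separations."""
        return np.histogram(pdist(points), bins=bins)[0].astype(float)

    # toy "data" and a larger random catalog in the same unit box
    data = rng.random((500, 3))
    rand = rng.random((2000, 3))
    bins = np.linspace(0.05, 0.5, 10)

    dd = pair_counts(data, bins)
    rr = pair_counts(rand, bins)
    nd, nr = len(data), len(rand)

    # normalize each by its number of pairs, then form the natural estimator
    xi = (dd / (nd * (nd - 1) / 2)) / (rr / (nr * (nr - 1) / 2)) - 1.0
    print(np.round(xi, 3))   # ~0 for an unclustered Poisson sample
    ```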

  4. The workshop. [use and application of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Wake, W. H.

    1981-01-01

    The plan is presented for a two-day workshop held to provide educational and training experience in the reading, interpretation, and application of LANDSAT and correlated larger-scale imagery, digital printout maps, and other collateral material for a large number of participants with widely diverse levels of expertise, backgrounds, and occupations in government, industry, and education. The need for using surface-truth field studies with correlated aerial imagery in solving real-world problems was demonstrated.

  5. Literacy, Numeracy, and Problem Solving in Technology-Rich Environments among U.S. Adults: Results from the Program for the International Assessment of Adult Competencies 2012. First Look. NCES 2014-008

    ERIC Educational Resources Information Center

    Goodman, Madeline; Finnegan, Robert; Mohadjer, Leyla; Krenzke, Tom; Hogan, Jacquie

    2013-01-01

    The Program for the International Assessment of Adult Competencies (PIAAC) is a cyclical, large-scale study of adult skills and life experiences focusing on education and employment that was developed and organized by the Organization for Economic Cooperation and Development (OECD). In the United States, the study was conducted in 2011-12 with a…

  6. The Next Frontier in Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarrao, John

    2016-11-16

    Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion calculations per second (10^18). That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer that came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.
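
    The arithmetic in this record is easy to check: one exaflop is 10^18 floating-point operations per second, a thousand times the 10^15 of the first petascale system (the "50 times faster" figure is taken from the record itself).

    ```python
    exa, peta = 1e18, 1e15
    assert exa / peta == 1000.0        # thousand-fold over petascale (2008)
    print(f"{exa:.0e} flop/s = {exa / peta:.0f}x petascale")
    ```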

  7. A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method

    NASA Astrophysics Data System (ADS)

    Fu, Shubin; Gao, Kai

    2017-11-01

    Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerable for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system and markedly reduce the computational time.
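
    For context, the sketch below sets up the kind of fine-mesh discrete Helmholtz system whose dimension the multiscale method is designed to reduce: a 1D finite-difference model of -u'' - k²u = f with Dirichlet boundaries, solved directly on the fine grid. The grid size, wavenumber, and source are illustrative assumptions; the paper's solver works on multidimensional heterogeneous media with multiscale basis functions.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n, L, k = 2000, 1.0, 40.0        # fine-grid points, domain, wavenumber
    h = L / (n + 1)
    x = np.linspace(h, L - h, n)

    # -u'' - k^2 u = f with u(0) = u(L) = 0 on the fine grid
    main = 2.0 / h**2 - k**2
    A = sp.diags([main * np.ones(n), -np.ones(n - 1) / h**2,
                  -np.ones(n - 1) / h**2], [0, 1, -1], format="csc")
    f = np.exp(-200.0 * (x - 0.5) ** 2)   # localized source

    u = spla.spsolve(A, f)
    print("fine-grid unknowns:", n, "  max|u| =", float(np.abs(u).max()))
    ```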

  8. Analysis of mathematical problem-solving ability based on metacognition on problem-based learning

    NASA Astrophysics Data System (ADS)

    Mulyono; Hadiyanti, R.

    2018-03-01

    Problem-solving is a primary purpose of the mathematics curriculum, and problem-solving ability is influenced by beliefs and metacognition. Metacognition, as a superordinate capability, can direct and regulate cognition, motivation, and hence the problem-solving process. This study aims to (1) test and analyze the quality of problem-based learning and (2) investigate problem-solving ability based on metacognition level. The research uses a mixed-method design; the subjects are class XI Mathematics and Science students at Kesatrian 2 Senior High School, Semarang, classified into the tacit-use, aware-use, strategic-use and reflective-use levels of metacognition. Data were collected using a scale, interviews, and tests, and were processed with a proportion test, t-test, and paired-samples t-test. The results show that students at the tacit-use level were able to complete the problems given but did not understand what strategy they used or why. Students at the aware-use level were able to solve problems and build new knowledge through problem-solving, understood the problem, and could determine the strategies to use, although not always correctly. Students at the strategic-use level could apply and adapt a wide variety of appropriate strategies to solve problems and re-examined the process and outcome. No student at the reflective-use level was found in this study. Based on the results, it is suggested that the identification of metacognition in problem-solving be studied with a larger sample, so that the characteristics of each metacognition level become clearer. Teachers also need in-depth knowledge of students' metacognitive activity and its relationship to mathematical problem-solving.

  9. Examining the Epistemological Beliefs and Problem Solving Skills of Preservice Teachers during Teaching Practice

    ERIC Educational Resources Information Center

    Erdamar, Gurcu; Alpan, Gulgun

    2013-01-01

    This study aims to examine the development of preservice teachers' epistemological beliefs and problem solving skills in the process of teaching practice. Participants of this descriptive study were senior students from Gazi University's Faculty of Vocational Education ("n" = 189). They completed the Epistemological Belief Scale and…

  10. ADHD and Problem-Solving in Play

    ERIC Educational Resources Information Center

    Borg, Suzanne

    2009-01-01

    This paper reports a small-scale study to determine whether there is a difference in problem-solving abilities, from a play perspective, between individuals who are diagnosed as ADHD and are on medication and those not on medication. Ten children, five of whom where on medication and five not, diagnosed as ADHD predominantly inattentive type, were…

  11. Validation of the Solving Problems Scale with Teachers

    ERIC Educational Resources Information Center

    Ryan, Mary Elizabeth

    2011-01-01

    Rapid advancements in technology, global competitiveness, and an increasing demand for 21st-century skills, such as problem-solving, underscore the pivotal role teachers play to prepare our youth for an era of exponential change. Those at the forefront of education are challenged to equip students with skills and strategies necessary to think…

  12. The Strengthening Families Program 10-14: influence on parent and youth problem-solving skill.

    PubMed

    Semeniuk, Y; Brown, R L; Riesch, S K; Zywicki, M; Hopper, J; Henriques, J B

    2010-06-01

    The aim of this paper is to report the results of a preliminary examination of the efficacy of the Strengthening Families Program (SFP) 10-14 in improving parent and youth problem-solving skill. The hypotheses were: (1) youth and parents who participated in SFP would have lower mean scores immediately (T2) and 6 months (T3) post-intervention on indicators of hostile and negative problem-solving strategies; (2) they would have higher mean scores on positive problem-solving strategies; and (3) youth who participated in SFP would have higher mean scores at T2 and T3 on indicators of individual problem-solving and problem-solving efficacy than youth in the comparison group. The dyads were recruited from elementary schools that had been stratified by race and assigned randomly to intervention or comparison conditions. The mean age of the youth was 11 years (SD = 1.04). Fifty-seven dyads (34 intervention and 23 control) were videotaped discussing a frequently occurring problem. The videotapes were analysed using the Iowa Family Interaction Rating Scale (IFIRS), and the data were analysed using the Dyadic Assessment Intervention Model. Most mean scores on the IFIRS did not change. One score changed as predicted: youth hostility decreased at T3. Two scores changed contrary to prediction: parent hostility increased at T3 and parent positive problem-solving decreased at T2. SFP demonstrated questionable efficacy for problem-solving skill in this study.

  13. Dynamic Flow Management Problems in Air Transportation

    NASA Technical Reports Server (NTRS)

    Patterson, Sarah Stock

    1997-01-01

    In 1995, over six hundred thousand licensed pilots flew nearly thirty-five million flights into over eighteen thousand U.S. airports, logging more than 519 billion passenger miles. Since demand for air travel has increased by more than 50% in the last decade while capacity has stagnated, congestion is a problem of undeniable practical significance. In this thesis, we will develop optimization techniques that reduce the impact of congestion on the national airspace. We start by determining the optimal release times for flights into the airspace and the optimal speed adjustment while airborne, taking into account the capacitated airspace. This is called the Air Traffic Flow Management Problem (TFMP). We address the complexity, showing that it is NP-hard. We build an integer programming formulation that is quite strong, as some of the proposed inequalities are facet defining for the convex hull of solutions. For practical problems, the solutions of the LP relaxation of the TFMP are very often integral. In essence, we reduce the problem to efficiently solving large-scale linear programming problems. Thus, the computation times are reasonably small for large-scale, practical problems involving thousands of flights. Next, we address the problem of determining how to reroute aircraft in the airspace system when faced with dynamically changing weather conditions. This is called the Air Traffic Flow Management Rerouting Problem (TFMRP). We present an integrated mathematical programming approach for the TFMRP, which utilizes several methodologies, in order to minimize delay costs. In order to address the high dimensionality, we present an aggregate model, in which we formulate the TFMRP as a multicommodity, integer, dynamic network flow problem with certain side constraints. Using Lagrangian relaxation, we generate aggregate flows that are decomposed into a collection of flight paths using a randomized rounding heuristic. This collection of paths is used in a packing integer programming formulation, the solution of which generates feasible and near-optimal routes for individual flights. The algorithm, termed the Lagrangian Generation Algorithm, is used to solve practical problems in the southwestern portion of the United States in which the solutions are within 1% of the corresponding lower bounds.
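
    The thesis's observation that the LP relaxation of the TFMP is very often integral can be illustrated on a toy ground-holding instance: each flight picks a release slot, each slot has capacity, and total delay cost is minimized. The sketch below solves only the LP relaxation with scipy.optimize.linprog; the three-flight instance, costs, and capacities are invented for illustration, not data from the thesis.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # 3 flights, 3 time slots; cost[f, t] = delay cost of releasing f in slot t
    cost = np.array([[0, 1, 3],
                     [0, 2, 5],
                     [0, 1, 2]], dtype=float)
    cap = [1, 1, 1]                   # at most one release per slot
    F, T = cost.shape

    # variables x[f, t] flattened row-major: assignment + capacity constraints
    A_eq = np.zeros((F, F * T)); b_eq = np.ones(F)
    for f in range(F):
        A_eq[f, f * T:(f + 1) * T] = 1.0      # each flight gets one slot
    A_ub = np.zeros((T, F * T)); b_ub = np.array(cap, dtype=float)
    for t in range(T):
        A_ub[t, t::T] = 1.0                   # slot capacity

    res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    x = res.x.reshape(F, T)
    print(np.round(x, 3))  # for this instance the LP optimum is already 0/1
    ```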

  14. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  15. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
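
    A minimal example of the preconditioned-Krylov pattern described in these two records, using a 2D Poisson matrix and SciPy's conjugate gradient. A real run would apply an algebraic multigrid preconditioner (e.g., from the pyamg package) to each Newton-step linear system; the Jacobi preconditioner below is a deliberately simple stand-in so the sketch stays self-contained.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # 2D Poisson (5-point stencil) on an n x n grid: a stand-in for one
    # linear system produced by the Newton solver in the records above
    n = 64
    I = sp.identity(n)
    T = sp.diags([2 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
                 [0, 1, -1])
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
    b = np.ones(A.shape[0])

    # Jacobi (diagonal) preconditioner as a LinearOperator; AMG replaces this
    Dinv = 1.0 / A.diagonal()
    M = spla.LinearOperator(A.shape, matvec=lambda r: Dinv * r)

    iters = 0
    def count(xk):
        global iters
        iters += 1

    x, info = spla.cg(A, b, M=M, callback=count)
    print("converged" if info == 0 else "failed", "after", iters, "iterations")
    ```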

  16. Model and Data Reduction for Control, Identification and Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Boris

    This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solving AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples: a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system. We obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced-order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new compressed-sensing-based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes as well as a Boussinesq flow application.
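
    The eigensystem realization algorithm (ERA) step mentioned in Chapter 4 can be sketched compactly: stack impulse-response (Markov) parameters into a Hankel matrix, take a truncated SVD, and read off a reduced (A, B, C). The example below identifies a known 2-state SISO system from its own impulse response; the true system, horizon, and model order are chosen for the illustration, and the chapter's actual contribution (tangential projection for many inputs/outputs) is omitted.

    ```python
    import numpy as np

    # ground-truth discrete-time SISO system generating the Markov parameters
    A = np.array([[0.9, 0.2], [0.0, 0.7]])
    B = np.array([[1.0], [0.5]])
    C = np.array([[1.0, 0.0]])
    h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(1, 41)]

    # Hankel matrices: H0 from h_1, h_2, ..., H1 shifted by one sample
    m = 20
    H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

    # truncated SVD at order r, then the ERA realization
    r = 2
    U, s, Vt = np.linalg.svd(H0)
    Ur, Sr, Vr = U[:, :r], np.diag(np.sqrt(s[:r])), Vt[:r, :]
    Ar = np.linalg.inv(Sr) @ Ur.T @ H1 @ Vr.T @ np.linalg.inv(Sr)
    Br = (Sr @ Vr)[:, :1]
    Cr = (Ur @ Sr)[:1, :]

    print("true eigenvalues:", np.linalg.eigvals(A))
    print("ERA  eigenvalues:", np.linalg.eigvals(Ar))   # ~0.9 and 0.7
    ```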

  17. Problem-solving skills and hardiness as protective factors against stress in Iranian nurses.

    PubMed

    Abdollahi, Abbas; Talib, Mansor Abu; Yaacob, Siti Nor; Ismail, Zanariah

    2014-02-01

    Nursing is a stressful occupation, even when compared with other health professions; therefore, it is necessary to advance our knowledge about the protective factors that can help reduce stress among nurses. The present study investigated the associations of problem-solving skills and hardiness with perceived stress in nurses. The participants, 252 nurses from six private hospitals in Tehran, completed the Personal Views Survey, the Perceived Stress Scale, and the Problem-Solving Inventory. Structural equation modeling (SEM) was used to analyse the data and test the research hypotheses. As expected, greater hardiness was associated with lower levels of perceived stress, and nurses low in perceived stress were more likely to be considered approachable, to rely on their own sense of internal personal control, and to demonstrate effective problem-solving confidence. These findings reinforce the importance of hardiness and problem-solving skills as protective factors against perceived stress among nurses, and could be important in training future nurses so that hardiness and problem-solving skills can be imparted, giving nurses more ability to control their perceived stress.

  18. Neural networks for continuous online learning and control.

    PubMed

    Choy, Min Chee; Srinivasan, Dipti; Cheu, Ruey Long

    2006-11-01

    This paper proposes a new hybrid neural network (NN) model that employs a multistage online learning process to solve the distributed control problem with an infinite horizon. Various techniques such as reinforcement learning and evolutionary algorithms are used to design the multistage online learning process. For this paper, the infinite horizon distributed control problem is implemented in the form of real-time distributed traffic signal control for intersections in a large-scale traffic network. The hybrid neural network model is used to design each of the local traffic signal controllers at the respective intersections. As the state of the traffic network changes due to random fluctuation of traffic volumes, the NN-based local controllers will need to adapt to the changing dynamics in order to provide effective traffic signal control and to prevent the traffic network from becoming overcongested. Such a problem is especially challenging if the local controllers are used for an infinite horizon problem where online learning has to take place continuously once the controllers are implemented into the traffic network. A comprehensive simulation model of a section of the Central Business District (CBD) of Singapore has been developed using the PARAMICS microscopic simulation program. As the complexity of the simulation increases, results show that the hybrid NN model provides significant improvement in traffic conditions when evaluated against an existing traffic signal control algorithm as well as a new, continuously updated simultaneous perturbation stochastic approximation-based neural network (SPSA-NN). Using the hybrid NN model, the total mean delay of each vehicle has been reduced by 78% and the total mean stoppage time of each vehicle has been reduced by 84% compared to the existing traffic signal control algorithm. This shows the efficacy of the hybrid NN model in solving the large-scale traffic signal control problem in a distributed manner. It also indicates the possibility of using the hybrid NN model for other applications that are similar in nature to the infinite horizon distributed control problem.

  19. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    NASA Astrophysics Data System (ADS)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals to sensors and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high-dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high-dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for the SSA problem can be high dimensional, even for the tasking of a single telescope: since the number of SOs in space is relatively high, each sensor has a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high-dimensional data; for example, DRL methods have been applied to image processing in autonomous driving, where a 256x256 RGB image has 196,608 input dimensions (256*256*3), and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high-dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
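
    The dimensionality argument in this abstract is easy to make concrete. Below is a hedged sketch of a softmax tasking policy over the full catalog: a network maps a state vector to one probability per space object, and the telescope observes the sampled object. The network sizes, the state encoding, and the single-telescope setup are assumptions for illustration; the paper's A3C agent adds a critic and asynchronous training.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_objects = 22_000            # catalog size from the abstract
    state_dim = 128               # assumed encoding (tracking uncertainty etc.)

    # one hidden layer, softmax over all catalog objects (the action space)
    W1 = rng.normal(0, 0.05, (state_dim, 256))
    W2 = rng.normal(0, 0.05, (256, n_objects))

    def policy(state):
        hidden = np.tanh(state @ W1)
        logits = hidden @ W2
        p = np.exp(logits - logits.max())   # numerically stable softmax
        return p / p.sum()

    state = rng.normal(size=state_dim)
    probs = policy(state)
    action = rng.choice(n_objects, p=probs)   # which object to observe next
    print("action space:", n_objects, " chosen object:", action)
    ```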

  20. Viscous decay of nonlinear oscillations of a spherical bubble at large Reynolds number

    NASA Astrophysics Data System (ADS)

    Smith, W. R.; Wang, Q. X.

    2017-08-01

    The long-time viscous decay of large-amplitude bubble oscillations is considered in an incompressible Newtonian fluid, based on the Rayleigh-Plesset equation. At large Reynolds numbers, this is a multi-scaled problem with a short time scale associated with inertial oscillation and a long time scale associated with viscous damping. A multi-scaled perturbation method is thus employed to solve the problem. The leading-order analytical solution of the bubble radius history is obtained to the Rayleigh-Plesset equation in a closed form including both viscous and surface tension effects. Some important formulae are derived including the following: the average energy loss rate of the bubble system during each cycle of oscillation, an explicit formula for the dependence of the oscillation frequency on the energy, and an implicit formula for the amplitude envelope of the bubble radius as a function of the energy. Our theory shows that the energy of the bubble system and the frequency of oscillation do not change on the inertial time scale at leading order, the energy loss rate on the long viscous time scale being inversely proportional to the Reynolds number. These asymptotic predictions remain valid during each cycle of oscillation whether or not compressibility effects are significant. A systematic parametric analysis is carried out using the above formulae for the energy of the bubble system, frequency of oscillation, and minimum/maximum bubble radii in terms of the Reynolds number, the dimensionless initial pressure of the bubble gases, and the Weber number. Our results show that the frequency and the decay rate have substantial variations over the lifetime of a decaying oscillation. The results also reveal that large-amplitude bubble oscillations are very sensitive to small changes in the initial conditions through large changes in the phase shift.
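
    The Rayleigh-Plesset dynamics underlying this analysis can also be integrated directly for comparison with the asymptotic formulae. A minimal sketch in dimensional form, R R̈ + (3/2)Ṙ² = (p_B - p_∞ - 2σ/R - 4μṘ/R)/ρ with a polytropic bubble gas; all parameter values below are illustrative, not those of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # water-like fluid and bubble parameters (values illustrative only)
    rho, mu, sigma = 1000.0, 1.0e-3, 0.072   # density, viscosity, surf. tension
    p_inf, R0, kappa = 101325.0, 1.0e-5, 1.4 # ambient pressure, rest radius
    p_g0 = p_inf + 2 * sigma / R0            # gas pressure at equilibrium

    def rayleigh_plesset(t, y):
        R, Rdot = y
        p_gas = p_g0 * (R0 / R) ** (3 * kappa)   # polytropic bubble gas
        Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
                 - 1.5 * Rdot ** 2) / R
        return [Rdot, Rddot]

    # start from an expanded bubble; watch the viscously damped oscillation
    sol = solve_ivp(rayleigh_plesset, (0.0, 2.0e-5), [2 * R0, 0.0],
                    rtol=1e-9, atol=1e-12)
    print("final radius / R0:", sol.y[0, -1] / R0)
    ```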

  1. Internationalisation and Economic Growth: The Portuguese Case

    ERIC Educational Resources Information Center

    da Costa, Renato J. Lopes; António, Nélson J. Santos; Miguel, Maria Isabel

    2017-01-01

    Historically, a policy of enforcement in internationalisation processes is still seen by many as an approach to solving certain economic crises. However, Portugal's solution for this problem is part of a greater problem, namely trying to solve a European problem that has recently worsened and is largely uncontrolled. This paper aims to contribute,…

  2. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).
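
    The structure-exploiting idea above, trust-region-regularized nonlinear least squares with a sparse residual Jacobian, can be sketched with SciPy's 'trf' (trust region reflective) solver, which accepts a Jacobian sparsity pattern. The tiny two-output model below stands in for a multiple-output NN; the data and the block structure are contrived for the example, not the paper's block-angular MLP Jacobian.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    # two outputs sharing parameter a, with private parameters b1 and b2
    a_true, b1_true, b2_true = 1.5, -2.0, 0.7
    y1 = a_true * x + b1_true + 0.01 * rng.normal(size=x.size)
    y2 = a_true * x ** 2 + b2_true + 0.01 * rng.normal(size=x.size)

    def residuals(p):      # p = [a, b1, b2]
        a, b1, b2 = p
        return np.concatenate([a * x + b1 - y1, a * x ** 2 + b2 - y2])

    # sparsity of the residual Jacobian: each block touches (a, its own b)
    S = np.zeros((2 * x.size, 3))
    S[:x.size, [0, 1]] = 1     # first output block: columns a, b1
    S[x.size:, [0, 2]] = 1     # second output block: columns a, b2

    fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], method="trf",
                        jac_sparsity=S)
    print(np.round(fit.x, 3))  # ~ [1.5, -2.0, 0.7]
    ```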

  3. Implicit and explicit subgrid-scale modeling in discontinuous Galerkin methods for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime

    2017-11-01

    Over the past few years, high-order discontinuous Galerkin (DG) methods for Large-Eddy Simulation (LES) have emerged as a promising approach to solve complex turbulent flows. Despite the significant research investment, the relation between the discretization scheme, the Riemann flux, the subgrid-scale (SGS) model and the accuracy of the resulting LES solver remains unclear. In this talk, we investigate the role of the Riemann solver and the SGS model in the ability to predict a variety of flow regimes, including transition to turbulence, wall-free turbulence, wall-bounded turbulence, and turbulence decay. The Taylor-Green vortex problem and the turbulent channel flow at various Reynolds numbers are considered. Numerical results show that DG methods implicitly introduce numerical dissipation in under-resolved turbulence simulations and, even in the high Reynolds number limit, this implicit dissipation provides a more accurate representation of the actual subgrid-scale dissipation than that by explicit models.

  4. Strategy for large-scale isolation of enantiomers in drug discovery.

    PubMed

    Leek, Hanna; Thunberg, Linda; Jonson, Anna C; Öhlén, Kristina; Klarqvist, Magnus

    2017-01-01

    A strategy for large-scale chiral resolution is illustrated by the isolation of pure enantiomer from a 5kg batch. Results from supercritical fluid chromatography will be presented and compared with normal phase liquid chromatography. Solubility of the compound in the supercritical mobile phase was shown to be the limiting factor. To circumvent this, extraction injection was used but shown not to be efficient for this compound. Finally, a method for chiral resolution by crystallization was developed and applied to give diastereomeric salt with an enantiomeric excess of 99% at a 91% yield. Direct access to a diverse separation tool box will be shown to be essential for solving separation problems in the most cost and time efficient way. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. A new solution method for wheel/rail rolling contact.

    PubMed

    Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei

    2016-01-01

    To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed in the explicit software ANSYS/LS-DYNA. To improve solution speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of the implicit and explicit algorithms. The method first calculates the pre-loading of wheel/rail rolling contact with the explicit algorithm; the results then become the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.

  6. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can, in principle, be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that become far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. However, this direct approach is not feasible: at the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. Since the large scales are more efficient than the small scales in radiating sound, the emphasis is thus on calculating the sound radiated by the large scales.

  7. Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults

    ERIC Educational Resources Information Center

    Rubin, Allen; Yu, Miao

    2017-01-01

    This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…

  8. Impulsive-Analytic Disposition in Mathematical Problem Solving: A Survey and a Mathematics Test

    ERIC Educational Resources Information Center

    Lim, Kien H.; Wagler, Amy

    2012-01-01

    The Likelihood-to-Act (LtA) survey and a mathematics test were used in this study to assess students' impulsive-analytic disposition in the context of mathematical problem solving. The results obtained from these two instruments were compared to those obtained using two widely-used scales: Need for Cognition (NFC) and Barratt Impulsivity Scale…

  9. Large-scale particle acceleration by magnetic reconnection during solar flares

    NASA Astrophysics Data System (ADS)

    Li, X.; Guo, F.; Li, H.; Li, G.; Li, S.

    2017-12-01

    Magnetic reconnection that triggers explosive magnetic energy release has been widely invoked to explain large-scale particle acceleration during solar flares. While great effort has been spent studying the acceleration mechanism in small-scale kinetic simulations, few studies make predictions for acceleration on scales comparable to the flare reconnection region. Here we present a new approach to this problem. We solve the large-scale energetic-particle transport equation in the fluid velocity and magnetic fields from high-Lundquist-number MHD simulations of reconnection layers. This approach is based on examining the dominant acceleration mechanism and pitch-angle scattering in kinetic simulations. Due to the fluid compression in reconnection outflows and merging magnetic islands, particles are accelerated to high energies and develop power-law energy distributions. We find that the acceleration efficiency and power-law index depend critically on the upstream plasma beta and the magnitude of the guide field (the magnetic field component perpendicular to the reconnecting component), as they influence the compressibility of the reconnection layer. We also find that the accelerated high-energy particles are mostly concentrated in large magnetic islands, making the islands a source of energetic particles and high-energy emissions. These findings may provide explanations for the acceleration process in large-scale magnetic reconnection during solar flares and the temporal and spatial emission properties observed in different flare events.

  10. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types.

    PubMed

    Webb, Margaret E; Little, Daniel R; Cropper, Simon J

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions.

  11. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types

    PubMed Central

    Webb, Margaret E.; Little, Daniel R.; Cropper, Simon J.

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions. PMID:27725805

  12. Human-Assisted Machine Information Exploitation: a crowdsourced investigation of information-based problem solving

    NASA Astrophysics Data System (ADS)

    Kase, Sue E.; Vanni, Michelle; Caylor, Justine; Hoye, Jeff

    2017-05-01

    The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical operational environment. These types of environments are characteristic of intelligence workflow processes conducted during human-geo-political unrest situations when the ability to make the best decision at the right time ensures strategic overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of crowds to model the interaction of the information consumer and the information required to solve a problem at different levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems (DSS) research represent the experimental conditions in this online single-player against-the-clock game where the player, acting in the role of an intelligence analyst, is tasked with a Commander's Critical Information Requirement (CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks (annotation, relation identification, and link diagram formation) with the assistance of `HAMIE the robot' who offers varying levels of information understanding dependent on question complexity. We provide preliminary results from a pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.

  13. Hybrid fully nonlinear BEM-LBM numerical wave tank with applications in naval hydrodynamics

    NASA Astrophysics Data System (ADS)

    Mivehchi, Amin; Grilli, Stephan T.; Dahl, Jason M.; O'Reilly, Chris M.; Harris, Jeffrey C.; Kuznetsov, Konstantin; Janssen, Christian F.

    2017-11-01

    The simulation of the complex dynamic response of ships in waves is typically modeled by nonlinear potential flow theory, usually solved with a higher-order BEM. In some cases, the viscous/turbulent effects around a structure and in its wake need to be accurately modeled to capture the salient physics of the problem. Here, we present a fully 3D model based on a hybrid perturbation method. In this method, the velocity and pressure are decomposed as the sum of an inviscid flow and a viscous perturbation. The inviscid part is solved over the whole domain using a BEM based on cubic spline elements. These inviscid results are then used to force a near-field perturbation solution on a smaller domain, which is solved with a NS model based on LBM-LES and implemented on GPUs. The BEM solution for large grids is greatly accelerated by using a parallelized FMM, which is efficiently implemented on large and small clusters, yielding an almost linear scaling with the number of unknowns. A new representation of corners and edges is implemented, which improves the global accuracy of the BEM solver, particularly for moving boundaries. We present model results and the recent improvements of the BEM, alongside results of the hybrid model for applications to naval hydrodynamics problems. Office of Naval Research Grants N000141310687 and N000141612970.

  14. Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  15. Modal test/analysis correlation of Space Station structures using nonlinear sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlations. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
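
    The nonlinear sensitivity at the core of these two records starts from the standard eigenvalue-derivative formula: for K φ = λ M φ with mass-normalized modes, ∂λ/∂p = φᵀ(∂K/∂p - λ ∂M/∂p)φ. Below is a small sketch checking that formula against a finite difference on a 2-DOF system; the matrices and the parameter (one spring stiffness) are invented for the example.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def kmat(k2):
        # 2-DOF spring-mass chain; the parameter p = k2 is the second spring
        return np.array([[2.0 + k2, -k2], [-k2, k2]])

    M = np.diag([1.0, 2.0])
    k2 = 3.0
    lam, phi = eigh(kmat(k2), M)      # eigh returns M-orthonormal modes

    dK = np.array([[1.0, -1.0], [-1.0, 1.0]])   # dK/dk2; dM/dk2 = 0 here
    dlam = np.array([phi[:, i] @ dK @ phi[:, i] for i in range(2)])

    # finite-difference check of the analytic sensitivities
    eps = 1e-6
    lam_p = eigh(kmat(k2 + eps), M, eigvals_only=True)
    print("analytic:   ", np.round(dlam, 6))
    print("finite diff:", np.round((lam_p - lam) / eps, 6))
    ```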

  16. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information is available at www.clawpack.org/geoclaw. © 2011.
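
    As a taste of the depth-averaged, shock-capturing finite-volume machinery GeoClaw builds on, here is a minimal 1D shallow-water step using the Lax-Friedrichs flux on a dam-break initial condition. This toy omits everything that makes GeoClaw useful in practice (wave-propagation Riemann solvers, dry states, AMR, bathymetry); the grid, gravity, and time step are illustrative.

    ```python
    import numpy as np

    g = 9.81
    n, dx, dt = 200, 1.0 / 200, 5e-4

    # conserved variables q = (h, hu): dam-break initial condition
    h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
    hu = np.zeros(n)

    def flux(h, hu):
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h ** 2])

    for _ in range(200):
        q = np.array([h, hu])
        f = flux(h, hu)
        # local Lax-Friedrichs interface flux between cells i and i+1
        a = np.abs(hu / h) + np.sqrt(g * h)
        amax = np.maximum(a[:-1], a[1:])
        fi = 0.5 * (f[:, :-1] + f[:, 1:]) - 0.5 * amax * (q[:, 1:] - q[:, :-1])
        # conservative update for interior cells; ends held fixed for brevity
        q[:, 1:-1] -= dt / dx * (fi[:, 1:] - fi[:, :-1])
        h, hu = q
    print("depth range after 200 steps:", float(h.min()), float(h.max()))
    ```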

  17. Modal Analysis for Grid Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The MANGO software provides a solution for improving the small-signal stability of power systems by adjusting operator-controllable variables based on PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations and lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The software addresses this problem by automatically generating recommended operation procedures, termed Modal Analysis for Grid Operation (MANGO), to improve the damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small-signal stability problems, implementing operating-point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement this procedure.
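
    Step 1 of the MANGO procedure, recognizing a poorly damped inter-area mode from PMU measurements, can be approximated offline by fitting the decay of a ringdown signal. The sketch below estimates the frequency and damping ratio of a synthetic 0.3 Hz oscillation via the analytic signal (Hilbert transform); the signal model and numbers are illustrative, not MANGO's actual mode-meter algorithms.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs, T = 30.0, 40.0                # PMU-like sampling rate, window [s]
    t = np.arange(0, T, 1 / fs)
    f0, zeta = 0.30, 0.03             # inter-area mode: 0.3 Hz, 3% damping
    wn = 2 * np.pi * f0
    sig = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)

    analytic = hilbert(sig)
    env = np.abs(analytic)            # exponential envelope exp(-zeta*wn*t)
    phase = np.unwrap(np.angle(analytic))

    # linear fits: log-envelope slope -> decay, phase slope -> damped freq.
    keep = slice(int(2 * fs), int(35 * fs))   # trim Hilbert edge effects
    decay = -np.polyfit(t[keep], np.log(env[keep]), 1)[0]
    wd = np.polyfit(t[keep], phase[keep], 1)[0]
    zeta_est = decay / np.hypot(decay, wd)
    print(f"f ~ {wd / (2 * np.pi):.3f} Hz, damping ratio ~ {zeta_est:.3f}")
    ```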

  18. Time-domain finite elements in optimal control with application to launch-vehicle guidance. PhD. Thesis

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.

    1991-01-01

    A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.

  19. Best candidates for cognitive treatment of illness perceptions in chronic low back pain: results of a theory-driven predictor study.

    PubMed

    Siemonsma, Petra C; Stuvie, Ilse; Roorda, Leo D; Vollebregt, Joke A; Lankhorst, Gustaaf J; Lettinga, Ant T

    2011-04-01

    The aim of this study was to identify treatment-specific predictors of the effectiveness of a method of evidence-based treatment: cognitive treatment of illness perceptions. This study focuses on what treatment works for whom, whereas most prognostic studies focusing on chronic non-specific low back pain rehabilitation aim to reduce the heterogeneity of the population of patients who are suitable for rehabilitation treatment in general. Three treatment-specific predictors were studied in patients with chronic non-specific low back pain receiving cognitive treatment of illness perceptions: a rational approach to problem-solving, discussion skills and verbal skills. Hierarchical linear regression analysis was used to assess their predictive value. Short-term changes in physical activity, measured with the Patient-Specific Functioning List, were the outcome measure for cognitive treatment of illness perceptions effect. A total of 156 patients with chronic non-specific low back pain participated in the study. Rational problem-solving was found to be a significant predictor for the change in physical activity. Discussion skills and verbal skills were non-significant. Rational problem-solving explained 3.9% of the total variance. The rational problem-solving scale results are encouraging, because chronic non-specific low back pain problems are complex by nature and can be influenced by a variety of factors. A minimum score of 44 points on the rational problem-solving scale may assist clinicians in selecting the most appropriate candidates for cognitive treatment of illness perceptions.

  20. Frames of reference for helicopter electronic maps - The relevance of spatial cognition and componential analysis

    NASA Technical Reports Server (NTRS)

    Harwood, Kelly; Wickens, Christopher D.

    1991-01-01

    Computer-generated map displays for NOE and low-level helicopter flight were designed according to prior research on maps, navigational problem solving, and spatial cognition in large-scale environments. The north-up map emphasized consistency of object location, whereas the track-up map emphasized map-terrain congruency. A componential analysis indicates that different cognitive components, e.g., orienting and absolute object location, are supported to varying degrees by the properties of different frames of reference.

  1. Energy and technology review: Engineering modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.

    1986-10-01

    This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.

  2. Self-adjusting wind turbine rotors: a concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, P.F.

    A conceptual design is described for wind turbine rotor blades that can react to changing wind conditions. Studies indicate that self-adjusting rotors will be more economical to operate with large rotors, although there are still mechanical scaling-up problems to be solved. Details of the design specifications, accompanied by a schematic drawing, are explained in terms of the aerodynamic test performance data obtained and the expected effect on overall performance. The segmented design concept will make the turbine blades easier to manufacture, transport, erect, and maintain.

  3. Real World Cognitive Multi-Tasking and Problem Solving: A Large Scale Cognitive Architecture Simulation Through High Performance Computing-Project Casie

    DTIC Science & Technology

    2008-03-01

    computational version of the CASIE architecture serves to demonstrate the functionality of our primary theories. However, implementation of several other...following facts. First, based on Theorem 3 and Theorem 5, the objective function is non-increasing under updating rule (6); second, by the criteria for...reassignment in updating rule (7), it is trivial to show that the objective function is non-increasing under updating rule (7). A Unified View to Graph

  4. The Next Frontier in Computing

    ScienceCinema

    Sarrao, John

    2018-06-13

    Exascale computing refers to computing systems capable of at least one exaflop, or a billion billion calculations per second (10^18). That is 50 times faster than the most powerful supercomputers being used today and represents a thousand-fold increase over the first petascale computer that came into operation in 2008. How we use these large-scale simulation resources is the key to solving some of today's most pressing problems, including clean energy production, nuclear reactor lifetime extension and nuclear stockpile aging.

  5. Advanced Artificial Intelligence Technology Testbed

    NASA Technical Reports Server (NTRS)

    Anken, Craig S.

    1993-01-01

    The Advanced Artificial Intelligence Technology Testbed (AAITT) is a laboratory testbed for the design, analysis, integration, evaluation, and exercising of large-scale, complex, software systems, composed of both knowledge-based and conventional components. The AAITT assists its users in the following ways: configuring various problem-solving application suites; observing and measuring the behavior of these applications and the interactions between their constituent modules; gathering and analyzing statistics about the occurrence of key events; and flexibly and quickly altering the interaction of modules within the applications for further study.

  6. A Low Collision and High Throughput Data Collection Mechanism for Large-Scale Super Dense Wireless Sensor Networks.

    PubMed

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk

    2016-07-18

    Super dense wireless sensor networks (WSNs) have become popular with the development of Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicular-to-Vehicular (V2V) networks. While highly dense wireless networks provide efficient and sustainable solutions for collecting precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utilization efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2% in large-scale WSNs and, in comparison with other channel access schemes for WSNs, can double the average network throughput.
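
    As a back-of-the-envelope illustration of why directional (sectored) access reduces collisions, the toy Monte Carlo below splits contending nodes across independent sectors before they pick a slot; the node, slot, and sector counts are arbitrary stand-ins, not the paper's protocol parameters.

```python
# Toy Monte Carlo: collisions occur only when two nodes pick the same
# slot within the same sector, so k sectors behave like k lighter-loaded
# independent channels. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def loss_rate(n_nodes, n_slots, n_sectors, trials=2000):
    lost = 0
    for _ in range(trials):
        sectors = rng.integers(0, n_sectors, n_nodes)
        slots = rng.integers(0, n_slots, n_nodes)
        keys = sectors * n_slots + slots          # (sector, slot) pairs
        _, counts = np.unique(keys, return_counts=True)
        lost += counts[counts > 1].sum()          # all packets in collided pairs
    return lost / (trials * n_nodes)

print("omnidirectional:", loss_rate(200, 64, 1))
print("4-sector       :", loss_rate(200, 64, 4))
```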

  7. [Computer-assisted education in problem-solving in neurology; a randomized educational study].

    PubMed

    Weverling, G J; Stam, J; ten Cate, T J; van Crevel, H

    1996-02-24

    To determine the effect of computer-based medical teaching (CBMT) as a supplementary method to teach clinical problem-solving during the clerkship in neurology. Randomized controlled blinded study. Academic Medical Centre, Amsterdam, the Netherlands. A total of 103 students were assigned at random to a group with access to CBMT and a control group. CBMT consisted of 20 computer-simulated patients with neurological diseases and was permanently available during five weeks to students in the CBMT group. The ability to recognize and solve neurological problems was assessed with two free-response tests, scored by two blinded observers. The CBMT students scored significantly better on the test related to the CBMT cases (mean score 7.5 on a zero to 10 point scale; control group 6.2; p < 0.001). There was no significant difference on the control test not related to the problems practised with CBMT. CBMT can be an effective method for teaching clinical problem-solving when used as a supplementary teaching facility during a clinical clerkship. The increased ability to solve problems learned by CBMT had no demonstrable effect on the performance with other neurological problems.

  8. Innovative problem solving by wild spotted hyenas

    PubMed Central

    Benson-Amram, Sarah; Holekamp, Kay E.

    2012-01-01

    Innovative animals are those able to solve novel problems or invent novel solutions to existing problems. Despite the important ecological and evolutionary consequences of innovation, we still know very little about the traits that vary among individuals within a species to make them more or less innovative. Here we examine innovative problem solving by spotted hyenas (Crocuta crocuta) in their natural habitat, and demonstrate for the first time in a non-human animal that those individuals exhibiting a greater diversity of initial exploratory behaviours are more successful problem solvers. Additionally, as in earlier work, we found that neophobia was a critical inhibitor of problem-solving success. Interestingly, although juveniles and adults were equally successful in solving the problem, juveniles were significantly more diverse in their initial exploratory behaviours, more persistent and less neophobic than were adults. We found no significant effects of social rank or sex on success, the diversity of initial exploratory behaviours, behavioural persistence or neophobia. Our results suggest that the diversity of initial exploratory behaviours, akin to some measures of human creativity, is an important, but largely overlooked, determinant of problem-solving success in non-human animals. PMID:22874748

  9. The Association of DRD2 with Insight Problem Solving.

    PubMed

    Zhang, Shun; Zhang, Jinghuan

    2016-01-01

    Although the insight phenomenon has attracted great attention from psychologists, it is still largely unknown whether its variation in well-functioning human adults has a genetic basis. Several lines of evidence suggest that genes involved in dopamine (DA) transmission might be potential candidates. The present study explored for the first time the association of the dopamine D2 receptor gene (DRD2) with insight problem solving. Fifteen single-nucleotide polymorphisms (SNPs) covering DRD2 were genotyped in 425 unrelated healthy Chinese undergraduates and were further tested for association with insight problem solving. Both single-SNP and haplotype analyses revealed several associations of DRD2 SNPs and haplotypes with insight problem solving. In conclusion, the present study provides the first evidence for the involvement of DRD2 in insight problem solving; future studies are necessary to validate these findings.

  11. Investigating the role of future thinking in social problem solving.

    PubMed

    Noreen, Saima; Whyte, Katherine E; Dritschel, Barbara

    2015-03-01

    There is well-established evidence that both rumination and depressed mood negatively impact the ability to solve social problems. A preliminary stage of the social problem solving process may be the process of catapulting oneself forward in time to think about the consequences of a problem before attempting to solve it. The aim of the present study was to examine how thinking about the consequences of a social problem being resolved or unresolved prior to solving it influences the solution of the problem as a function of levels of rumination and dysphoric mood. Eighty-six participants initially completed the Beck Depression Inventory-II (BDI-II) and the Ruminative Response Scale (RRS). They were then presented with six social problems and generated consequences for half of the problems being resolved and half of the problems remaining unresolved. Participants then solved some of the problems and, following a delay, were asked to recall all of the consequences previously generated. Participants reporting higher levels of depressed mood and rumination were less effective at generating problem solutions. Specifically, those reporting higher levels of rumination produced less effective solutions for social problems for which they had previously generated unresolved rather than resolved consequences. We also found that individuals higher in rumination, irrespective of depressed mood, recalled more of the unresolved consequences in a subsequent memory test. As participants did not solve problems for scenarios where no consequences were generated, no baseline measure of problem solving was obtained. Our results suggest that thinking about the consequences of a problem remaining unresolved may impair the generation of effective solutions in individuals with higher levels of rumination. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Architecture independent environment for developing engineering software on MIMD computers

    NASA Technical Reports Server (NTRS)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  13. A computational study of the use of an optimization-based method for simulating large multibody systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petra, C.; Gavrea, B.; Anitescu, M.

    2009-01-01

    The present work aims at comparing the performance of several quadratic programming (QP) solvers for simulating large-scale frictional rigid-body systems. Traditional time-stepping schemes for simulation of multibody systems are formulated as linear complementarity problems (LCPs) with copositive matrices. Such LCPs are generally solved by means of Lemke-type algorithms, and solvers such as PATH have proved robust. However, for large systems, the PATH solver or any other pivotal algorithm becomes impractical from a computational point of view. The convex relaxation proposed by one of the authors allows the integration step to be formulated as a QP, for which a wide variety of state-of-the-art solvers are available. In what follows we report the results obtained solving that subproblem when using the QP solvers MOSEK, OOQP, TRON, and BLMVM. OOQP is presented with both the symmetric indefinite solver MA27 and our Cholesky reformulation using the CHOLMOD package. We investigate computational performance and address the correctness of the results from a modeling point of view. We conclude that the OOQP solver, particularly with the CHOLMOD linear algebra solver, has predictable performance and memory use patterns and is far more competitive for these problems than are the other solvers.
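
    For readers unfamiliar with the convex-relaxation step, the sketch below solves a toy instance of the kind of QP that replaces the LCP at each time step (a quadratic in the velocities subject to linear contact constraints). The matrices are illustrative stand-ins, and SciPy's trust-constr is used in place of the specialized solvers benchmarked in the paper.

```python
# Toy convex QP of the form: minimize 0.5 v^T M v - q^T v  s.t.  G v >= 0,
# a stand-in for one relaxed integration step (data are illustrative).
import numpy as np
from scipy.optimize import LinearConstraint, minimize

M = np.array([[2.0, 0.3],
              [0.3, 1.5]])          # SPD mass-like matrix (toy)
q = np.array([-1.0, -0.5])
G = np.array([[1.0, 0.0]])          # one contact constraint row: v[0] >= 0

res = minimize(lambda v: 0.5 * v @ M @ v - q @ v,
               x0=np.zeros(2),
               jac=lambda v: M @ v - q,
               constraints=[LinearConstraint(G, 0, np.inf)],
               method="trust-constr")
print(res.x)   # constraint is active: v[0] = 0, v[1] = -1/3
```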

  14. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem

    PubMed Central

    Ezugwu, Absalom E.; Akutsah, Francis; Olusanya, Micheal O.; Adewumi, Aderemi O.

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic algorithm inspired by the characteristics of water drops in a river and the environmental changes resulting from the action of the flowing river. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm is introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, some potentially problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. Because the exponential calculation of the acceptance probability in simulated annealing-based techniques is computationally expensive, a better way of calculating it is considered in order to maximize the performance of the intelligent water drop algorithm with simulated annealing. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, with the results compared against the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime. In addition, the proposed algorithm is suitable for solving large-scale problems. PMID:29554662

  16. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, near-optimal solutions for multiple instances were found efficiently.
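
    A minimal sketch of the 2-level full factorial idea applied to GA tuning: run the GA at every corner of the parameter box, fit a first-order effects model by regression, and read off the suggested levels. The evaluate_ga response surface below is a hypothetical stand-in for actually running the GA on the benchmark instances.

```python
# 2^k full factorial tuning sketch; evaluate_ga is a stand-in for running
# the GA and returning mean total weighted tardiness (lower is better).
import itertools
import numpy as np

factors = {                       # (low, high) levels, illustrative only
    "pop_size": (50, 200),
    "crossover_rate": (0.6, 0.9),
    "mutation_rate": (0.01, 0.10),
}

def evaluate_ga(params):
    # Synthetic response surface standing in for real GA runs
    return (abs(params["pop_size"] - 150) / 100
            + abs(params["crossover_rate"] - 0.85)
            + 5 * abs(params["mutation_rate"] - 0.05))

names = list(factors)
runs, costs = [], []
for levels in itertools.product((0, 1), repeat=len(names)):  # 2^k corners
    params = {n: factors[n][lv] for n, lv in zip(names, levels)}
    runs.append(levels)
    costs.append(evaluate_ga(params))

# First-order effects model: cost ~ b0 + sum_i b_i * x_i with x_i in {-1,+1}
X = np.hstack([np.ones((len(runs), 1)), 2.0 * np.array(runs) - 1.0])
coef, *_ = np.linalg.lstsq(X, np.array(costs), rcond=None)
# A negative effect means the high level lowers cost; pick levels accordingly
best = {n: factors[n][1 if b < 0 else 0] for n, b in zip(names, coef[1:])}
print("main effects:", dict(zip(names, np.round(coef[1:], 3))))
print("suggested setting:", best)
```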

  17. [Prevalence of and factors related to depression in high school students].

    PubMed

    Eskin, Mehmet; Ertekin, Kamil; Harlak, Hacer; Dereboy, Ciğdem

    2008-01-01

    The study aimed at investigating the prevalence of and factors related to depression in high school students. A total of 805 (n = 367 girls; n = 438 boys) first-year students from three high schools in the city of Aydin filled in a self-report questionnaire that contained questions about socio-demographics, academic achievement and religious belief. It also included a depression rating scale, a social support scale, a problem solving inventory and an assertiveness scale. T-tests, chi-square tests, Pearson product-moment correlation coefficients, and logistic regression analysis were used to analyze the data. 141 students (17.5%) scored at or above the cut-off point on the Children's Depression Inventory (CDI). In the first regression analyses, low self-esteem, low grade point average (GPA) and low perceived social support from friends were the predictors of depression in boys; low self-esteem, low paternal educational level and low social support from friends were the predictors in girls. When self-esteem scores were excluded, low GPA, low perceived social support from friends and family, and inefficient problem solving skills were predictors of depression in boys; low perceived social support from friends and family, low paternal educational level, and inefficient problem solving skills were the independent predictors of depression in girls. Depression is prevalent in high school students. Low self-esteem, low perceived social support from peers and family, and inefficient problem solving skills appear to be risk factors for adolescent depression. Low GPA for boys and low paternal education for girls were gender-specific risk factors. Psychosocial interventions geared toward increasing self-esteem, social support and problem solving skills may be effective in the prevention and treatment of adolescent depression.

  18. Working Memory and Reasoning Benefit from Different Modes of Large-scale Brain Dynamics in Healthy Older Adults.

    PubMed

    Lebedev, Alexander V; Nilsson, Jonna; Lövdén, Martin

    2018-07-01

    Researchers have proposed that solving complex reasoning problems, a key indicator of fluid intelligence, involves the same cognitive processes as solving working memory tasks. This proposal is supported by an overlap of the functional brain activations associated with the two types of tasks and by high correlations between interindividual differences in performance. We replicated these findings in 53 older participants but also showed that solving reasoning and working memory problems benefits from different configurations of the functional connectome and that this dissimilarity increases with a higher difficulty load. Specifically, superior performance in a typical working memory paradigm (n-back) was associated with upregulation of modularity (increased between-network segregation), whereas performance in the reasoning task was associated with effective downregulation of modularity. We also showed that working memory training promotes task-invariant increases in modularity. Because superior reasoning performance is associated with downregulation of modular dynamics, training may thus have fostered an inefficient way of solving the reasoning tasks. This could help explain why working memory training does little to promote complex reasoning performance. The study concludes that complex reasoning abilities cannot be reduced to working memory and suggests the need to reconsider the feasibility of using working memory training interventions to attempt to achieve effects that transfer to broader cognition.

  19. Recent progress in multi-electrode spike sorting methods

    PubMed Central

    Lefebvre, Baptiste; Yger, Pierre; Marre, Olivier

    2017-01-01

    In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed with a high density. These recordings should allow neuroscientists to reconstruct the individual activity of the neurons spiking in the vicinity of these electrodes, with the help of signal processing algorithms. Algorithms need to solve a source separation problem, also known as spike sorting. However, these new devices challenge the classical way to do spike sorting. Here we review different methods that have been developed to sort spikes from these large-scale recordings. We describe the common properties of these algorithms, as well as their main differences. Finally, we outline the issues that remain to be solved by future spike sorting algorithms. PMID:28263793

  20. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-05-01

    Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid cells in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
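
    The sketch below illustrates the core ingredient in one dimension: a (first-order) multiscale basis function obtained by solving a local elliptic problem with a rapidly varying coefficient on the fine grid of a single coarse element. The coefficient and grid sizes are illustrative; the paper's method builds higher-order bases tied to Gauss-Lobatto-Legendre points.

```python
# 1D multiscale basis sketch: solve (c(x) u')' = 0 on one coarse element
# with boundary values u(0)=1, u(1)=0 on a fine grid, so the basis embeds
# the fine-scale variation of the medium. Coefficient c(x) is illustrative.
import numpy as np

n = 200                                    # fine cells in one coarse element
x = np.linspace(0.0, 1.0, n + 1)
c = 1.0 + 0.9 * np.sin(40 * np.pi * x)     # rapidly varying coefficient
h = 1.0 / n

# Tridiagonal finite-difference system for the n-1 interior unknowns
ce = 0.5 * (c[:-1] + c[1:])                # coefficient at cell edges
main = (ce[:-1] + ce[1:]) / h
off = -ce[1:-1] / h
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(n - 1)
rhs[0] += ce[0] / h * 1.0                  # enforce u(0) = 1

u = np.linalg.solve(A, rhs)
basis = np.concatenate(([1.0], u, [0.0]))  # multiscale basis on this element
print("endpoint values:", basis[0], basis[-1])
```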

  3. The Effect of Training Problem-Solving Skills on Coping Skills of Depressed Nursing and Midwifery Students

    PubMed Central

    Ebrahimi, Hossein; Barzanjeh Atri, Shirin; Ghavipanjeh, Somayeh; Farnam, Alireza; Gholizadeh, Leyla

    2013-01-01

    Introduction: Nurses have a considerable role in care and health promotion. Depressed nurses are deficient in the coping skills that are important for mental health. This study evaluated the effectiveness of training in problem-solving skills on the coping skills of depressed nursing and midwifery students. Methods: The Beck Depression Scale and a coping skills questionnaire were administered in the Tabriz and Urmia nursing and midwifery schools. Ninety-two students who had scored above 10 on the Beck Depression Scale were selected and randomly assigned to a study group (n = 46) and a control group (n = 46). The intervention group received six sessions of problem-solving training within three weeks. After the end of the sessions, the coping skills and depression scales were administered again and analyzed for both groups. Results: Before the intervention there were no significant differences in mean coping skills between the control and study groups. After the intervention, a significant difference was observed between the control and study groups, and within the study group the mean coping skills differed significantly from before to after the intervention. Conclusion: Training in problem-solving skills increased the coping skills of depressed students. Given the role of coping skills in people's mental health, increasing coping skills can promote mental health, provide a basis for caring skills, and improve the quality of nurses' care. PMID:25276704

  4. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm that guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems which have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
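
    A QUBO instance is just a quadratic form over binary variables; the toy below makes that concrete by brute-forcing a 3-variable problem. The Q matrix is an arbitrary stand-in, not the output of the PDECCO mapping.

```python
# Brute-force a tiny QUBO: minimize x^T Q x over binary x. An AQO would
# search the same energy landscape; Q here is illustrative only.
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

energy, assignment = min(
    (float(np.array(x) @ Q @ np.array(x)), x)
    for x in itertools.product((0, 1), repeat=3))
print("minimum energy:", energy, "at x =", assignment)
```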

  5. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
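
    The column-generation core of such an approach can be sketched compactly for the classical single-stock-length case (the paper's PSG handles multiple stock sizes): solve a restricted master LP over the current patterns, then price a new pattern with a knapsack over the stock length. The data below are illustrative.

```python
# Column generation for 1D cutting stock with a single stock length
# (simplified relative to the paper); item data are illustrative.
import numpy as np
from scipy.optimize import linprog

stock_len = 100
sizes = np.array([45, 36, 31, 14])
demand = np.array([97, 610, 395, 211])

# Trivial starting patterns: as many of one item type as fits
patterns = [np.eye(len(sizes), dtype=int)[i] * (stock_len // s)
            for i, s in enumerate(sizes)]

def price_pattern(duals):
    """Knapsack pricing: the single large object placement problem."""
    best = np.zeros(stock_len + 1)
    take = [None] * (stock_len + 1)
    for cap in range(1, stock_len + 1):
        best[cap], take[cap] = best[cap - 1], None
        for i, s in enumerate(sizes):
            if s <= cap and best[cap - s] + duals[i] > best[cap]:
                best[cap], take[cap] = best[cap - s] + duals[i], i
    pat, cap = np.zeros(len(sizes), dtype=int), stock_len
    while cap > 0:
        if take[cap] is None:
            cap -= 1
        else:
            pat[take[cap]] += 1
            cap -= sizes[take[cap]]
    return best[stock_len], pat

for _ in range(50):
    A = np.array(patterns, dtype=float).T            # items x patterns
    res = linprog(c=np.ones(len(patterns)),
                  A_ub=-A, b_ub=-demand.astype(float),
                  bounds=(0, None), method="highs")  # restricted master LP
    duals = -res.ineqlin.marginals                   # dual prices y >= 0
    value, pat = price_pattern(duals)
    if value <= 1 + 1e-9:                            # no improving pattern left
        break
    patterns.append(pat)

print("LP lower bound on stock pieces:", res.fun)
```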

  6. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
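
    Under an independence assumption across sensors, fusing into a likelihood map is just a pointwise sum of per-sensor log-likelihood surfaces over the surveillance grid. The toy below fuses two hypothetical range sensors; positions, measurements, and the noise scale are made up.

```python
# Fused likelihood map on a 2D grid: each sensor contributes a
# log-likelihood surface, and independent fusion is an elementwise sum.
import numpy as np

xs = np.linspace(0.0, 1.0, 100)
grid = np.stack(np.meshgrid(xs, xs), axis=-1)         # (100, 100, 2) cells

def range_loglik(sensor_pos, measured_range, sigma=0.05):
    # Gaussian log-likelihood of each cell given one range measurement
    d = np.linalg.norm(grid - sensor_pos, axis=-1)
    return -0.5 * ((d - measured_range) / sigma) ** 2

logmap = (range_loglik(np.array([0.2, 0.3]), 0.40)    # hypothetical sensor 1
          + range_loglik(np.array([0.8, 0.7]), 0.50)) # hypothetical sensor 2
post = np.exp(logmap - logmap.max())
post /= post.sum()                                    # posterior over cells
print("most likely cell:", np.unravel_index(post.argmax(), post.shape))
```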

  7. Multi scales based sparse matrix spectral clustering image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features on different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
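
    A minimal sketch of the sparse-similarity idea, assuming generic feature vectors in place of the paper's multiscale image features: build a k-NN graph so the affinity matrix stays sparse, then cluster the leading Laplacian eigenvectors. The scaling parameter and cluster count are illustrative.

```python
# Spectral clustering with a sparse k-NN similarity matrix: memory is
# roughly O(n*k) instead of O(n^2) for n data points.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

X = np.random.default_rng(0).random((2000, 5))   # stand-in feature vectors
A = kneighbors_graph(X, n_neighbors=10, mode="distance")
A.data = np.exp(-A.data ** 2 / (2 * 0.2 ** 2))   # Gaussian similarity, scale 0.2
A = 0.5 * (A + A.T)                              # symmetrize the k-NN graph
L = laplacian(A, normed=True)

# Eigenvectors for the smallest eigenvalues span the clustering embedding
vals, vecs = eigsh(L, k=4, which="SM")
labels = KMeans(n_clusters=4, n_init=10).fit_predict(vecs)
print(np.bincount(labels))
```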

  8. An Implicit Solver on A Parallel Block-Structured Adaptive Mesh Grid for FLASH

    NASA Astrophysics Data System (ADS)

    Lee, D.; Gopal, S.; Mohapatra, P.

    2012-07-01

    We introduce a fully implicit solver for FLASH based on a Jacobian-Free Newton-Krylov (JFNK) approach with an appropriate preconditioner. The main goal of developing this JFNK-type implicit solver is to provide efficient high-order numerical algorithms and methodology for simulating stiff systems of differential equations on large-scale parallel computer architectures. A large number of natural problems in nonlinear physics involve a wide range of spatial and time scales of interest. A system that encompasses such a wide magnitude of scales is described as "stiff." A stiff system can arise in many different fields of physics, including fluid dynamics/aerodynamics, laboratory/space plasma physics, low Mach number flows, reactive flows, radiation hydrodynamics, and geophysical flows. One of the big challenges in solving such a stiff system using current-day computational resources lies in resolving time and length scales varying by several orders of magnitude. We introduce a preliminary implementation of a time-accurate JFNK-based implicit solver within the framework of FLASH's unsplit hydro solver.
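
    The JFNK kernel itself is compact: Newton iterations in which GMRES solves each linear system using only finite-difference approximations of Jacobian-vector products, so the Jacobian is never assembled. The two-equation system below is a stand-in for FLASH's discretized equations, and no preconditioner is applied.

```python
# Minimal Jacobian-free Newton-Krylov: J(u)v is approximated by a
# directional finite difference inside GMRES. F is a toy stand-in system.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + np.exp(u[1]) - 2.0])

def jfnk(F, u0, tol=1e-10, eps=1e-7, max_newton=50):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # J(u) v ~ (F(u + eps*v) - F(u)) / eps: the Jacobian is never formed
        Jv = LinearOperator((u.size, u.size),
                            matvec=lambda v: (F(u + eps * v) - F(u)) / eps)
        du, info = gmres(Jv, -r)
        u = u + du
    return u

print(jfnk(F, np.array([-1.0, 1.0])))
```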

  9. Routing design and fleet allocation optimization of freeway service patrol: Improved results using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Xiuqiao; Wang, Jian

    2018-07-01

    Freeway service patrol (FSP) is considered to be an effective method for incident management and can help transportation agency decision-makers alter existing route coverage and fleet allocation. This paper investigates the FSP problem of patrol routing design and fleet allocation, with the objective of minimizing the overall average incident response time. While the simulated annealing (SA) algorithm and its improvements have been applied to this problem, they often become trapped in a local optimum. Moreover, the issue of search efficiency remains to be addressed. In this paper, we employ the genetic algorithm (GA) and SA to solve the FSP problem. To maintain population diversity and avoid premature convergence, a niche strategy is incorporated into the traditional genetic algorithm. We also employ an elitist strategy to speed up convergence. Numerical experiments have been conducted on the Sioux Falls network. Results show that the GA slightly outperforms the dual-based greedy (DBG) algorithm, the very large-scale neighborhood search (VLNS) algorithm, the SA algorithm and the scenario algorithm.
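
    The two GA ingredients named above can be sketched generically; the encoding and objective below are hypothetical stand-ins for the actual routing/allocation model, with fitness sharing as one common realization of a niche strategy.

```python
# GA skeleton with fitness sharing (niche strategy) and elitism.
# response_time and the real-vector encoding are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def response_time(plan):                 # stand-in objective (minimize)
    return np.sum((plan - 0.3) ** 2)

def shared_fitness(pop, raw, sigma=0.5):
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    share = np.clip(1 - d / sigma, 0, None)   # triangular sharing kernel
    return raw * share.sum(axis=1)            # penalize crowded niches

pop = rng.random((40, 8))
for gen in range(100):
    raw = np.array([response_time(p) for p in pop])
    elite = pop[raw.argmin()].copy()          # elitist strategy: keep the best
    fit = shared_fitness(pop, raw)
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])  # tournament
    cut = rng.integers(1, 8, len(pop))        # one-point crossover
    kids = np.where(np.arange(8) < cut[:, None],
                    parents, np.roll(parents, 1, axis=0))
    kids += rng.normal(0, 0.02, kids.shape)   # mutation
    kids[0] = elite                           # reinsert elite unchanged
    pop = np.clip(kids, 0, 1)

print("best objective:", response_time(elite))
```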

  10. u-w formulation for dynamic problems in large deformation regime solved through an implicit meshfree scheme

    NASA Astrophysics Data System (ADS)

    Navas, Pedro; Sanavia, Lorenzo; López-Querol, Susana; Yu, Rena C.

    2017-12-01

    Solving dynamic problems for fluid-saturated porous media in the large deformation regime is an interesting but complex issue. An implicit time integration scheme is developed herein within the framework of the u-w (solid displacement-relative fluid displacement) formulation of Biot's equations. In particular, liquid-water-saturated porous media are considered, and the linearization of the linear momentum equations, taking into account all the inertia terms for both the solid and fluid phases, is presented for the first time. The spatial discretization is carried out through a meshfree method in which the shape functions are based on the principle of local maximum entropy (LME). The methodology is first validated on the dynamic consolidation of a soil column and the plastic shear band formation in a square domain loaded by a rigid footing. The feasibility of this new numerical approach for solving large deformation dynamic problems is finally demonstrated through application to an embankment problem subjected to an earthquake.

  11. An Observational Study for Evaluating the Effects of Interpersonal Problem-Solving Skills Training on Behavioural Dimensions

    ERIC Educational Resources Information Center

    Anliak, Sakire; Sahin, Derya

    2010-01-01

    The present observational study was designed to evaluate the effectiveness of the I Can Problem Solve (ICPS) programme on behavioural change from aggression to pro-social behaviours by using the DECB rating scale. Non-participant observation method was used to collect data in pretest-training-posttest design. It was hypothesised that the ICPS…

  12. Lesion mapping of social problem solving

    PubMed Central

    Colom, Roberto; Paul, Erick J.; Chau, Aileen; Solomon, Jeffrey; Grafman, Jordan H.

    2014-01-01

    Accumulating neuroscience evidence indicates that human intelligence is supported by a distributed network of frontal and parietal regions that enable complex, goal-directed behaviour. However, the contributions of this network to social aspects of intellectual function remain to be well characterized. Here, we report a human lesion study (n = 144) that investigates the neural bases of social problem solving (measured by the Everyday Problem Solving Inventory) and examine the degree to which individual differences in performance are predicted by a broad spectrum of psychological variables, including psychometric intelligence (measured by the Wechsler Adult Intelligence Scale), emotional intelligence (measured by the Mayer, Salovey, Caruso Emotional Intelligence Test), and personality traits (measured by the Neuroticism-Extraversion-Openness Personality Inventory). Scores for each variable were obtained, followed by voxel-based lesion–symptom mapping. Stepwise regression analyses revealed that working memory, processing speed, and emotional intelligence predict individual differences in everyday problem solving. A targeted analysis of specific everyday problem solving domains (involving friends, home management, consumerism, work, information management, and family) revealed psychological variables that selectively contribute to each. Lesion mapping results indicated that social problem solving, psychometric intelligence, and emotional intelligence are supported by a shared network of frontal, temporal, and parietal regions, including white matter association tracts that bind these areas into a coordinated system. The results support an integrative framework for understanding social intelligence and make specific recommendations for the application of the Everyday Problem Solving Inventory to the study of social problem solving in health and disease. PMID:25070511

  13. GPU Accelerated DG-FDF Large Eddy Simulator

    NASA Astrophysics Data System (ADS)

    Inkarbekov, Medet; Aitzhan, Aidyn; Sammak, Shervin; Givi, Peyman; Kaltayev, Aidarkhan

    2017-11-01

    A GPU-accelerated simulator is developed and implemented for large eddy simulation (LES) of turbulent flows. The filtered density function (FDF) is utilized for modeling the subgrid-scale quantities. The filtered transport equations are solved via a discontinuous Galerkin (DG) method, and the FDF is simulated via a particle-based Lagrangian Monte Carlo (MC) method. It is demonstrated that the GPU simulations are of the order of 100 times faster than the CPU-based calculations. This brings LES of turbulent flows to a new level, facilitating efficient simulation of more complex problems. The work at Al-Farabi Kazakh National University is sponsored by MoES of RK under Grant 3298/GF-4.

  14. Robust visual tracking via multiscale deep sparse networks

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract more intrinsic and robust features, and it has had significant success in addressing tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process and effectively exploits robust and powerful features through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales according to the target's profile type. During tracking, the tracker adaptively selects the matched tracking network in accordance with the initial target's profile type, preserving the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  15. Investigation of double-beta decay at the Institute of Theoretical and Experimental Physics (ITEP, Moscow)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeldovich, O. Ya.; Kirpichnikov, I. V.

    Investigation of neutrinoless double-beta (2β0ν) decay is presently considered one of the most important problems in particle physics and cosmology. Interest in the problem was quickened by the observation of neutrino oscillations. The results of oscillation experiments determine the mass differences between different neutrino flavors, and the observation of neutrinoless decay may fix the absolute scale and the hierarchy of the neutrino masses. Investigation of 2β0ν decay is the most efficient method for solving the problem of whether the neutrino is a Dirac or a Majorana particle. Physicists from the Institute of Theoretical and Experimental Physics (ITEP, Moscow) have been participating actively in solving this problem. They initiated and pioneered the application of semiconductor detectors manufactured from enriched germanium to searches for the double-beta decay of ⁷⁶Ge. Investigations with ⁷⁶Ge provided the most important results. At present, ITEP physicists are taking an active part in four very large projects, GERDA, Majorana, EXO, and NEMO, which are capable of recording 2β0ν decay at a Majorana neutrino mass of ~10⁻² eV.

  16. Efficient combination of a 3D Quasi-Newton inversion algorithm and a vector dual-primal finite element tearing and interconnecting method

    NASA Astrophysics Data System (ADS)

    Voznyuk, I.; Litman, A.; Tortel, H.

    2015-08-01

    A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted to handling large-scale electromagnetic problems while keeping the memory requirements and time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping subdomains in order to solve local subproblems simultaneously. Various strategies are proposed to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database.

  17. Do students benefit from drawing productive diagrams themselves while solving introductory physics problems? The case of two electrostatics problems

    NASA Astrophysics Data System (ADS)

    Maries, Alexandru; Singh, Chandralekha

    2018-01-01

    An appropriate diagram is a required element of a solution building process in physics problem solving and it can transform a given problem into a representation that is easier to exploit for solving the problem. A major focus while helping introductory physics students learn problem solving is to help them appreciate that drawing diagrams facilitates problem solving. We conducted an investigation in which two different interventions were implemented during recitation quizzes throughout the semester in a large enrolment, algebra-based introductory physics course. Students were either (1) asked to solve problems in which the diagrams were drawn for them or (2) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed a rubric to score the problem solving performance of students in different intervention groups. We investigated two problems involving electric field and electric force and found that students who drew productive diagrams were more successful problem solvers and that a higher level of relevant detail in a student’s diagram corresponded to a better score. We also conducted think-aloud interviews with nine students who were at the time taking an equivalent introductory algebra-based physics course in order to gain insight into how drawing diagrams affects the problem solving process. These interviews supported some of the interpretations of the quantitative results. We end by discussing instructional implications of the findings.

  18. A Chess-Like Game for Teaching Engineering Students to Solve Large System of Simultaneous Linear Equations

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Mohammed, Ahmed Ali; Kadiam, Subhash

    2010-01-01

    Solving large (and sparse) systems of simultaneous linear equations has been (and continues to be) a major challenge for many real-world engineering/science applications [1-2]. For many practical, large-scale problems, the sparse, Symmetrical and Positive Definite (SPD) system of linear equations can be conveniently represented in matrix notation as [A]{x} = {b}, where the square coefficient matrix [A] and the Right-Hand-Side (RHS) vector {b} are known. The unknown solution vector {x} can be solved efficiently by the following step-by-step procedures [1-2]: a reordering phase, a matrix factorization phase, a forward solution phase, and a backward solution phase. In this research work, a Game-Based Learning (GBL) approach has been developed to help engineering students understand crucial details of the matrix reordering and factorization phases. A "chess-like" game has been developed that can be played by either a single player or two players. Through this open-ended game, the players/learners will not only understand the key concepts involved in reordering algorithms (based on existing algorithms), but also have the opportunity to "discover new algorithms" that are better than existing ones. Implementing the proposed game for the matrix reordering and factorization phases can be enhanced by FLASH [3] computer environments, where computer simulation with animated human voice, sound effects, visual/graphical/colorful displays of matrix tables, score (or monetary) awards for the best game players, etc. can all be exploited. Preliminary demonstrations of the developed GBL approach can be viewed by anyone who has access to the internet web-site [4]!
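
    For reference, the four phases map directly onto standard sparse linear algebra. A small SciPy sketch follows, with an LU factorization via splu standing in for a Cholesky factorization of the SPD matrix:

```python
# The four solution phases on a small SPD sparse system: reordering
# (reverse Cuthill-McKee), factorization, then forward/backward solves.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import splu

A = sp.csr_matrix(np.array([[4.0, 1, 0, 0],
                            [1, 3, 1, 0],
                            [0, 1, 2, 1],
                            [0, 0, 1, 2]]))
b = np.array([1.0, 2, 3, 4])

perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # phase 1: reordering
Ap = sp.csc_matrix(A[perm, :][:, perm])
lu = splu(Ap, permc_spec="NATURAL")                   # phase 2: factorization
xp = lu.solve(b[perm])                                # phases 3-4: fwd/back solve
x = np.empty_like(xp)
x[perm] = xp                                          # undo the reordering
print(np.allclose(A @ x, b))
```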

  19. Toward an optimal solver for time-spectral fluid-dynamic and aeroelastic solutions on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Mundis, Nathan L.; Mavriplis, Dimitri J.

    2017-09-01

    The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability to rapidly solve the large non-linear system resulting from time-spectral discretizations, which becomes larger and stiffer as more time instances are employed or as the period of the flow becomes especially short (i.e., as the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves their efficiency. In previous work, a wave-number-independent preconditioner was developed that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances; as a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and can therefore be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number-independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
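
    The frequency-space inversion referred to above can be demonstrated in isolation: the time-spectral derivative matrix over N time instances is circulant, so the DFT diagonalizes it, and a shifted system is solved with two FFTs and a pointwise divide. The scalar shift a below is a stand-in for the decoupled spatial part, and the parameters are illustrative.

```python
# Invert (D + a*I) in frequency space, where D is the time-spectral
# derivative matrix over N periodic time instances (odd N avoids an
# unpaired Nyquist mode). Verified against the dense matrix.
import numpy as np

N, a, omega = 17, 3.0, 2.0
k = np.fft.fftfreq(N, d=1.0 / N)        # harmonic indices
b = np.random.default_rng(0).standard_normal(N)

# Two FFTs plus a pointwise divide replace a dense linear solve
x = np.fft.ifft(np.fft.fft(b) / (1j * omega * k + a)).real

# Cross-check against the dense time-spectral derivative matrix D
F = np.fft.fft(np.eye(N))
D = np.real(np.linalg.inv(F) @ np.diag(1j * omega * k) @ F)
print(np.allclose((D + a * np.eye(N)) @ x, b))
```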

  20. On the relationship between math anxiety and math achievement in early elementary school: The role of problem solving strategies.

    PubMed

    Ramirez, Gerardo; Chang, Hyesang; Maloney, Erin A; Levine, Susan C; Beilock, Sian L

    2016-01-01

    Even at young ages, children self-report experiencing math anxiety, which negatively relates to their math achievement. Leveraging a large dataset of first and second grade students' math achievement scores, math problem solving strategies, and math attitudes, we explored the possibility that children's math anxiety (i.e., a fear or apprehension about math) negatively relates to their use of more advanced problem solving strategies, which in turn relates to their math achievement. Our results confirm our hypothesis and, moreover, demonstrate that the relation between math anxiety and math problem solving strategies is strongest in children with the highest working memory capacity. Ironically, children who have the highest cognitive capacity avoid using advanced problem solving strategies when they are high in math anxiety and, as a result, underperform in math compared with their lower working memory peers. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under the different criteria is presented; it includes both the functions that have empirically proved best and new functions that may be worth trying.
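
    One concrete member of the family of scaling functions such an analysis compares is Goldberg-style linear scaling, sketched below: scale fitnesses so the mean is preserved and the best individual receives c_mult times the mean, with a fallback that avoids negative scaled values. The parameter choices are illustrative.

```python
# Linear fitness scaling f' = a*f + b: preserve the mean fitness and map
# the best individual to c_mult * mean; fall back to mapping the worst
# individual to zero if the primary scaling would go negative.
import numpy as np

def linear_scale(f, c_mult=2.0):
    f = np.asarray(f, dtype=float)
    avg, fmax, fmin = f.mean(), f.max(), f.min()
    if fmax == avg:                       # population already uniform
        return np.full_like(f, avg)
    a = (c_mult - 1.0) * avg / (fmax - avg)
    b = avg * (1.0 - a)
    scaled = a * f + b
    if scaled.min() < 0:                  # fallback: worst -> 0, mean kept
        a = avg / (avg - fmin)
        b = -a * fmin
        scaled = a * f + b
    return scaled

print(linear_scale([1.0, 2.0, 3.0, 10.0]))
```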

  2. Determination of macro-scale soil properties from pore-scale structures: model derivation.

    PubMed

    Daly, K R; Roose, T

    2018-01-01

    In this paper, we use homogenization to derive a set of macro-scale poro-elastic equations for soils composed of rigid solid particles, air-filled pore space and a poro-elastic mixed phase. We consider the derivation in the limit of large deformation and show that by solving representative problems on the micro-scale we can parametrize the macro-scale equations. To validate the homogenization procedure, we compare the predictions of the homogenized equations with those of the full equations for a range of different geometries and material properties. We show that the results differ by [Formula: see text] for all cases considered. The success of the homogenization scheme means that it can be used to determine the macro-scale poro-elastic properties of soils from the underlying structure. Hence, it will prove a valuable tool in both characterization and optimization.

  3. Application of high-performance computing to numerical simulation of human movement

    NASA Technical Reports Server (NTRS)

    Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.

    1995-01-01

    We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.

  4. Interaction Network Estimation: Predicting Problem-Solving Diversity in Interactive Environments

    ERIC Educational Resources Information Center

    Eagle, Michael; Hicks, Drew; Barnes, Tiffany

    2015-01-01

    Intelligent tutoring systems and computer aided learning environments aimed at developing problem solving produce large amounts of transactional data which make it a challenge for both researchers and educators to understand how students work within the environment. Researchers have modeled student-tutor interactions using complex networks in…

  5. Solving Large Problems with a Small Working Memory

    ERIC Educational Resources Information Center

    Pizlo, Zygmunt; Stefanov, Emil

    2013-01-01

    We describe an important elaboration of our multiscale/multiresolution model for solving the Traveling Salesman Problem (TSP). Our previous model emulated the non-uniform distribution of receptors on the human retina and the shifts of visual attention. This model produced near-optimal solutions of TSP in linear time by performing hierarchical…

  6. Problem Solving: Physics Modeling-Based Interactive Engagement

    ERIC Educational Resources Information Center

    Ornek, Funda

    2009-01-01

    The purpose of this study was to investigate how modeling-based instruction combined with an interactive-engagement teaching approach promotes students' problem solving abilities. I focused on students in a calculus-based introductory physics course, based on the matter and interactions curriculum of Chabay & Sherwood (2002) at a large state…

  7. Integrating Study Skills and Problem Solving into Remedial Mathematics

    ERIC Educational Resources Information Center

    Cornick, Jonathan; Guy, G. Michael; Beckford, Ian

    2015-01-01

    Students at a large urban community college enrolled in seven classes of an experimental remedial algebra programme, which integrated study skills instruction and collaborative problem solving. A control group of seven classes was taught in a traditional lecture format without study skills instruction. Student performance in the course was…

  8. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved by identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast in terms of the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. The reduced subspace is then used to solve realistic, large scale forward (UQ) and inverse (DA and TAA) problems. Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended to large scale multi-physics coupled problems with feedback effects. Moreover, a non-linear surrogate based UQ approach is developed and its performance compared with the KL approach and a brute-force Monte Carlo (MC) approach. In addition, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes, and predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty, and experimental effort can subsequently be directed to further reduce the uncertainty from these sources. To this end, a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem 9), modeled with VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
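
    The subspace construction described above can be illustrated with a snapshot-based Karhunen-Loeve (proper orthogonal decomposition) step: run the model a modest number of times, then keep the leading left singular vectors of the centered snapshot matrix. The Python below is a minimal sketch under that assumption, not the dissertation's ROMUSE implementation; the energy threshold and random snapshot data are illustrative.

        import numpy as np

        def reduced_subspace(snapshots, energy=0.99):
            """Karhunen-Loeve / POD style reduction: retain the leading left
            singular vectors capturing `energy` of the snapshot variance."""
            X = snapshots - snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(X, full_matrices=False)
            frac = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(frac, energy)) + 1
            return U[:, :r]                        # basis for the active subspace

        # propagate uncertainty through the reduced coordinates only
        rng = np.random.default_rng(0)
        Y = rng.standard_normal((500, 64))         # 500-dim state, 64 model runs
        U = reduced_subspace(Y)
        z = U.T @ Y[:, 0]                          # reduced coordinates of one sample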

  9. [The application of new technologies to solving maths problems for students with learning disabilities: the 'underwater school'].

    PubMed

    Miranda-Casas, A; Marco-Taverner, R; Soriano-Ferrer, M; Melià de Alba, A; Simó-Casañ, P

    2008-01-01

    Different procedures have demonstrated efficacy in teaching cognitive and metacognitive strategies for problem solving in mathematics. Some studies have used computer-based problem solving instructional programs. The aim was to analyze, in students with learning disabilities, the efficacy of cognitive strategies training for problem solving under three instructional delivery formats: a teacher-directed (T-D) program, a computer-assisted instructional (CAI) program, and a combined program (T-D + CAI). Forty-four children with mathematics learning disabilities, between 8 and 10 years old, participated in this study. The children were randomly assigned to one of the three instructional formats or to a control group without cognitive strategies training. In the three instructional conditions compared, all the students learned linguistic and visual cognitive strategies for problem solving through a self-instructional procedure. Several types of measurements were used to analyze the possible differential efficacy of the three instructional methods: problem-solving tests, marks in mathematics, an internal achievement responsibility scale, and teacher ratings of school behaviours. Our findings show that the T-D training group and the T-D + CAI group improved significantly on math word problem solving and on marks in mathematics from pre- to post-testing. In addition, the results indicated that the students of the T-D + CAI group solved more real-life problems and developed more internal attributions compared with both the control and CAI groups. Finally, with regard to school behaviours, improvements in school adjustment and learning problems were observed in the students of the group with a combined instructional format (T-D + CAI).

  10. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
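
    Random polling, one of the two load balancing schemes evaluated, can be conveyed with a toy serial simulation: an idle processor picks a random victim and takes half of its task queue. The Python below is a simplified illustration only; a real implementation is message-driven across distributed memory and needs the termination-detection schemes the record describes.

        import random

        def random_polling(tasks, n_procs=4):
            """Toy serial simulation of random-polling dynamic load balancing:
            an idle processor asks a randomly chosen peer for half its queue."""
            queues = [list(tasks[i::n_procs]) for i in range(n_procs)]
            done = 0
            while done < len(tasks):
                for p in range(n_procs):
                    if queues[p]:
                        queues[p].pop()            # "execute" one task
                        done += 1
                    else:                          # idle: poll a random victim
                        v = random.randrange(n_procs)
                        if v != p and len(queues[v]) > 1:
                            half = len(queues[v]) // 2
                            queues[p], queues[v] = queues[v][:half], queues[v][half:]
            return done

        assert random_polling(list(range(100))) == 100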

  11. Tensor Factorization for Low-Rank Tensor Completion.

    PubMed

    Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao

    2018-03-01

    Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing tensor singular value decomposition (t-SVD), which costs much computation and thus cannot efficiently handle tensor data, due to its natural large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be more efficiently conducted than computing t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm can converge to a Karush-Kuhn-Tucker point. Experimental results on the synthetic data recovery, image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state-of-the-arts including the TNN and matricization methods.
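
    The factorization idea can be shown in its simplest, matrix form: keep two small factors and update them alternately using only the observed entries, so no full SVD (or t-SVD) is ever formed. The sketch below is a hedged matrix analogue of the record's 3-way tensor-product method; the ridge term, rank, and random initialization are illustrative assumptions.

        import numpy as np

        def complete_lowrank(M, mask, rank=3, iters=200, lam=1e-3):
            """Alternating least squares for low-rank completion, M ~= A @ B,
            updating only the two small factors (mask marks observed entries)."""
            m, n = M.shape
            rng = np.random.default_rng(1)
            A = rng.standard_normal((m, rank))
            B = rng.standard_normal((rank, n))
            I = lam * np.eye(rank)
            for _ in range(iters):
                for i in range(m):                 # update rows of A
                    cols = mask[i]
                    Bi = B[:, cols]
                    A[i] = np.linalg.solve(Bi @ Bi.T + I, Bi @ M[i, cols])
                for j in range(n):                 # update columns of B
                    rows = mask[:, j]
                    Aj = A[rows]
                    B[:, j] = np.linalg.solve(Aj.T @ Aj + I, Aj.T @ M[rows, j])
            return A, B

        rng = np.random.default_rng(0)
        truth = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15))
        mask = rng.random(truth.shape) < 0.5       # half the entries observed
        A, B = complete_lowrank(truth, mask)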

  12. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  13. Efficient Parallelization of a Dynamic Unstructured Application on the Tera MTA

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1999-01-01

    The success of parallel computing in solving real-life computationally-intensive problems relies on their efficient mapping and execution on large-scale multiprocessor architectures. Many important applications are both unstructured and dynamic in nature, making their efficient parallel implementation a daunting task. This paper presents the parallelization of a dynamic unstructured mesh adaptation algorithm using three popular programming paradigms on three leading supercomputers. We examine an MPI message-passing implementation on the Cray T3E and the SGI Origin2000, a shared-memory implementation using cache coherent nonuniform memory access (CC-NUMA) of the Origin2000, and a multi-threaded version on the newly-released Tera Multi-threaded Architecture (MTA). We compare several critical factors of this parallel code development, including runtime, scalability, programmability, and memory overhead. Our overall results demonstrate that multi-threaded systems offer tremendous potential for quickly and efficiently solving some of the most challenging real-life problems on parallel computers.

  14. Nonlinear Dynamic Model-Based Multiobjective Sensor Network Design Algorithm for a Plant with an Estimator-Based Control System

    DOE PAGES

    Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...

    2017-06-06

    Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.

  15. Psychometrics behind Computerized Adaptive Testing.

    PubMed

    Chang, Hua-Hua

    2015-03-01

    The paper provides a survey of 18 years' progress that my colleagues, students (both former and current) and I have made in a prominent research area in psychometrics: computerized adaptive testing (CAT). We start with a historical review of the establishment of a large sample foundation for CAT. It is worth noting that the asymptotic results were derived under the framework of martingale theory, a very theoretical perspective of probability theory which may seem unrelated to educational and psychological testing. In addition, we address a number of issues that emerged from large scale implementation and show how theoretical work can help solve these problems. Finally, we propose that CAT technology can be very useful to support individualized instruction on a mass scale. We show that even paper-and-pencil-based tests can be made adaptive to support classroom teaching.

  16. Large-scale computations in fluid mechanics; Proceedings of the Fifteenth Summer Seminar on Applied Mathematics, University of California, La Jolla, CA, June 27-July 8, 1983. Parts 1 & 2

    NASA Technical Reports Server (NTRS)

    Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)

    1985-01-01

    Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.

  17. The Relationship of Social Problem-Solving Skills and Dysfunctional Attitudes with Risk of Drug Abuse among Dormitory Students at Isfahan University of Medical Sciences

    PubMed Central

    Nasrazadani, Ehteram; Maghsoudi, Jahangir; Mahrabi, Tayebeh

    2017-01-01

    Background: Dormitory students encounter multiple social factors which cause pressure, such as new social relationships, fear of the future, and separation from family, which could cause serious problems such as tendency toward drug abuse. This research was conducted with the goal to determine social problem-solving skills, dysfunctional attitudes, and risk of drug abuse among dormitory students of Isfahan University of Medical Sciences, Iran. Materials and Methods: This was a descriptive-analytical, correlational, and cross-sectional research. The research sample consisted of 211 students living in dormitories. The participants were selected using randomized quota sampling method. The data collection tools included the Social Problem-Solving Inventory (SPSI), Dysfunctional Attitude Scale (DAS), and Identifying People at Risk of Addiction Questionnaire. Results: The results indicated an inverse relationship between social problem-solving skills and risk of drug abuse (P = 0.0002), a direct relationship between dysfunctional attitude and risk of drug abuse (P = 0.030), and an inverse relationship between social problem-solving skills and dysfunctional attitude among students (P = 0.0004). Conclusions: Social problem-solving skills have a correlation with dysfunctional attitudes. As a result, teaching these skills and the way to create efficient attitudes should be considered in dormitory students. PMID:28904539

  18. The Relationship of Social Problem-Solving Skills and Dysfunctional Attitudes with Risk of Drug Abuse among Dormitory Students at Isfahan University of Medical Sciences.

    PubMed

    Nasrazadani, Ehteram; Maghsoudi, Jahangir; Mahrabi, Tayebeh

    2017-01-01

    Dormitory students encounter multiple social factors which cause pressure, such as new social relationships, fear of the future, and separation from family, which could cause serious problems such as tendency toward drug abuse. This research was conducted with the goal to determine social problem-solving skills, dysfunctional attitudes, and risk of drug abuse among dormitory students of Isfahan University of Medical Sciences, Iran. This was a descriptive-analytical, correlational, and cross-sectional research. The research sample consisted of 211 students living in dormitories. The participants were selected using randomized quota sampling method. The data collection tools included the Social Problem-Solving Inventory (SPSI), Dysfunctional Attitude Scale (DAS), and Identifying People at Risk of Addiction Questionnaire. The results indicated an inverse relationship between social problem-solving skills and risk of drug abuse ( P = 0.0002), a direct relationship between dysfunctional attitude and risk of drug abuse ( P = 0.030), and an inverse relationship between social problem-solving skills and dysfunctional attitude among students ( P = 0.0004). Social problem-solving skills have a correlation with dysfunctional attitudes. As a result, teaching these skills and the way to create efficient attitudes should be considered in dormitory students.

  19. Coping Behavior of International Late Adolescent Students in Selected Australian Educational Institutions

    PubMed Central

    Shahrill, Masitah; Mundia, Lawrence

    2014-01-01

    Using the Adolescent Coping Scale, ACS (Frydenberg & Lewis, 1993), we surveyed 45 randomly selected foreign adolescents in Australian schools. The coping strategies used most by the participants were: focus on solving the problem; seeking relaxing diversions; focusing on the positive; seeking social support; worry; seeking to belong; investing in close friends; wishful thinking; and keeping to self (Table 4). With regard to coping styles, the most widely used was productive coping, followed by non-productive coping, while the least used style was reference to others (Table 4). For both genders, the four coping strategies used most often were: work hard to achieve; seeking relaxing diversions; focus on solving the problem; and focus on the positive (Table 5). The most noticeable gender difference was in the use of the physical recreation coping strategy, in which male students engaged more (Fig 1). The usage of four coping strategies (solving problem; work hard; focus on positive; and social support) was higher for students who had been away from family more than once than for those who had been away once only, while the usage of seeking relaxing diversions was higher for the first timers (Table 6). No significant differences were obtained on the sample's performance on the ACS subscales by gender (Table 7), frequency of leaving own country (Table 8), country of origin (Table 9), or length of stay in Australia (Table 11). However, foundation students scored significantly higher on the reference to others variable than their secondary school peers (Table 10). We recommend counseling for students with high support needs and further large-scale mixed-methods research to gain additional insights. PMID:24373267

  20. Social problem-solving, perceived stress, negative life events, depression and life satisfaction in psoriasis.

    PubMed

    Eskin, M; Savk, E; Uslu, M; Küçükaydoğan, N

    2014-11-01

    Psoriasis is a chronic dermatosis which may cause significant impairment of the patient's quality of life. The purpose of this study was to investigate the social problem-solving skills, perceived stress, negative life events, depression and life satisfaction in psoriasis patients. Data were gathered by means of questionnaires and clinical evaluations from 51 psoriatic patients and 51 matched healthy controls. Average disease duration was 16.47 years and average Psoriasis Area and Severity Index score was 3.67. Compared with the controls, the patients displayed lower social problem-solving skills. They displayed higher negative problem orientation and impulsive-careless problem-solving style scores than the controls. Patients tended also to show more avoidant problem-solving style and lower life satisfaction than controls. There was no difference between psoriatic patients and controls in terms of depression, perceived stress and negative life events. Higher social problem-solving skills were associated with lower depression, perceived stress and fewer numbers of negative life events but higher level of life satisfaction. The patient group largely included mild and moderate psoriatic cases. The findings of the study suggest that problem-solving training or therapy may be a suitable option for alleviating levels of psychological distress in patients suffering from psoriasis. © 2014 European Academy of Dermatology and Venereology.

  1. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We created parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combined them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We developed the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we created OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus was on fundamental mathematical/computational methods and algorithms, we assessed our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
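
    The randomized maximum likelihood method mentioned in area 1 can be sketched compactly: each posterior sample is the minimizer of a misfit in which both the data and the prior mean are independently perturbed. The Python below is a hedged illustration for a small linear forward operator G with a generic quasi-Newton solver, not the project's PDE-scale implementation; all names are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def rml_samples(G, d_obs, C_d, C_m, m_prior, n_samples=50):
            """Randomized maximum likelihood: each sample solves a deterministic
            regularized least-squares problem with perturbed data and prior."""
            rng = np.random.default_rng(0)
            Ld, Lm = np.linalg.cholesky(C_d), np.linalg.cholesky(C_m)
            samples = []
            for _ in range(n_samples):
                d_k = d_obs + Ld @ rng.standard_normal(len(d_obs))
                m_k = m_prior + Lm @ rng.standard_normal(len(m_prior))
                obj = lambda m: (np.linalg.norm(np.linalg.solve(Ld, G @ m - d_k))**2
                                 + np.linalg.norm(np.linalg.solve(Lm, m - m_k))**2)
                samples.append(minimize(obj, m_prior, method="L-BFGS-B").x)
            return np.array(samples)

        G = np.array([[1.0, 0.0], [1.0, 1.0]])
        samples = rml_samples(G, d_obs=np.array([1.0, 2.0]),
                              C_d=0.01 * np.eye(2), C_m=np.eye(2),
                              m_prior=np.zeros(2), n_samples=20)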

  2. nu-TRLan User Guide Version 1.0: A High-Performance Software Package for Large-Scale Hermitian Eigenvalue Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Ichitaro; Wu, Kesheng; Simon, Horst

    2008-10-27

    The original software package TRLan, [TRLan User Guide], page 24, implements the thick restart Lanczos method, [Wu and Simon 2001], page 24, for computing eigenvalues λ and their corresponding eigenvectors v of a symmetric matrix A: Av = λv. Its effectiveness in computing the exterior eigenvalues of a large matrix has been demonstrated, [LBNL-42982], page 24. However, its performance strongly depends on the user-specified dimension of a projection subspace. If the dimension is too small, TRLan suffers from slow convergence. If it is too large, the computational and memory costs become expensive. Therefore, to balance the solution convergence and costs, users must select an appropriate subspace dimension for each eigenvalue problem at hand. To free users from this difficult task, nu-TRLan, [LBNL-1059E], page 23, adjusts the subspace dimension at every restart such that optimal performance in solving the eigenvalue problem is automatically obtained. This document provides a user guide to the nu-TRLan software package. The original TRLan software package was implemented in Fortran 90 to solve symmetric eigenvalue problems using static projection subspace dimensions. nu-TRLan was developed in C and extended to solve Hermitian eigenvalue problems. It can be invoked using either a static or an adaptive subspace dimension. In order to simplify its use for TRLan users, nu-TRLan has interfaces and features similar to those of TRLan: (1) Solver parameters are stored in a single data structure called trl-info, Chapter 4 [trl-info structure], page 7. (2) Most of the numerical computations are performed by BLAS, [BLAS], page 23, and LAPACK, [LAPACK], page 23, subroutines, which allow nu-TRLan to achieve optimized performance across a wide range of platforms. (3) To solve eigenvalue problems on distributed memory systems, the message passing interface (MPI), [MPI forum], page 23, is used. The rest of this document is organized as follows. In Chapter 2 [Installation], page 2, we provide an installation guide of the nu-TRLan software package. In Chapter 3 [Example], page 3, we present a simple nu-TRLan example program. In Chapter 4 [trl-info structure], page 7, and Chapter 5 [trlan subroutine], page 14, we describe the solver parameters and interfaces in detail. In Chapter 6 [Solver parameters], page 21, we discuss the selection of the user-specified parameters. In Chapter 7 [Contact information], page 22, we give the acknowledgements and contact information of the authors. In Chapter 8 [References], page 23, we list references to related works.
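
    The Lanczos kernel underneath TRLan/nu-TRLan is compact: build a Krylov basis, form a tridiagonal matrix, and read off Ritz values. The sketch below shows only the plain, unrestarted iteration with full reorthogonalization; the thick restart (keeping a few Ritz vectors) and nu-TRLan's adaptive choice of the subspace dimension are deliberately omitted, and the example matrix is an assumption.

        import numpy as np

        def lanczos_ritz(A, k):
            """Plain Lanczos sketch: k-step Krylov basis for symmetric A,
            returning Ritz values (approximate eigenvalues)."""
            n = A.shape[0]
            v = np.random.default_rng(0).standard_normal(n)
            v /= np.linalg.norm(v)
            V, alpha, beta = [v], [], []
            for j in range(k):
                w = A @ V[j]
                a = V[j] @ w
                w -= a * V[j] + (beta[-1] * V[j - 1] if beta else 0)
                for u in V:                        # full reorthogonalization
                    w -= (u @ w) * u
                b = np.linalg.norm(w)
                alpha.append(a); beta.append(b)
                if b < 1e-12:
                    break                          # invariant subspace found
                V.append(w / b)
            m = len(alpha)
            T = (np.diag(alpha) + np.diag(beta[:m - 1], 1)
                 + np.diag(beta[:m - 1], -1))
            return np.linalg.eigvalsh(T)

        A = np.diag(np.arange(1.0, 101.0))
        print(lanczos_ritz(A, 30)[-3:])            # approximates largest eigenvalues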

  3. Parallel Simulation of Three-Dimensional Free Surface Fluid Flow Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BAER,THOMAS A.; SACKINGER,PHILIP A.; SUBIA,SAMUEL R.

    1999-10-14

    Simulation of viscous three-dimensional fluid flow typically involves a large number of unknowns. When free surfaces are included, the number of unknowns increases dramatically. Consequently, this class of problem is an obvious application of parallel high performance computing. We describe parallel computation of viscous, incompressible, free surface, Newtonian fluid flow problems that include dynamic contact lines. The Galerkin finite element method was used to discretize the fully-coupled governing conservation equations and a ''pseudo-solid'' mesh mapping approach was used to determine the shape of the free surface. In this approach, the finite element mesh is allowed to deform to satisfy quasi-static solid mechanics equations subject to geometric or kinematic constraints on the boundaries. As a result, nodal displacements must be included in the set of unknowns. Other issues discussed are the proper constraints appearing along the dynamic contact line in three dimensions. Issues affecting efficient parallel simulations include problem decomposition to equally distribute computational work across the processors of an SPMD computer and determination of robust, scalable preconditioners for the distributed matrix systems that must be solved. Solution continuation strategies important for serial simulations have an enhanced relevance in a parallel computing environment due to the difficulty of solving large scale systems. Parallel computations will be demonstrated on an example taken from the coating flow industry: flow in the vicinity of a slot coater edge. This is a three dimensional free surface problem possessing a contact line that advances at the web speed in one region but transitions to static behavior in another region. As such, a significant fraction of the computational time is devoted to processing boundary data. Discussion focuses on parallel speed ups for fixed problem size, a class of problems of immediate practical importance.

  4. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness, giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
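
    The bound-constrained flavor of the second example can be illustrated with a generic, hedged sketch: minimize a data misfit plus a roughness penalty subject to box bounds on the model. The forward function, regularization weight, and solver choice below are illustrative assumptions, not the PLTMG formulation used in the record.

        import numpy as np
        from scipy.optimize import minimize

        def invert_bounded(forward, d_obs, m0, lo, hi, alpha=1e-2):
            """Misfit minimization with box bounds and first-difference
            roughness regularization (the PDE solve hides inside `forward`)."""
            def objective(m):
                misfit = forward(m) - d_obs
                rough = np.diff(m)
                return misfit @ misfit + alpha * (rough @ rough)
            res = minimize(objective, m0, method="L-BFGS-B",
                           bounds=list(zip(lo, hi)))
            return res.x

        # toy demo: recover a smooth model from noisy linear data
        rng = np.random.default_rng(0)
        G = rng.standard_normal((40, 10))
        m_true = np.linspace(0.5, 2.0, 10)
        data = G @ m_true + 0.01 * rng.standard_normal(40)
        m_hat = invert_bounded(lambda m: G @ m, data, np.ones(10),
                               lo=np.zeros(10), hi=3 * np.ones(10))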

  5. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable and in some cases is shown to outperform the widely used CPLEX algorithms. The proposed formulation and NDS based solver is also easily parallelizable enabling further computational improvement.
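
    The auction core can be pictured as a linear program: maximize bid-weighted FTR awards subject to line-flow (PTDF) limits in both directions. The toy bids, sensitivities, and bounds below are invented for illustration and use a generic LP solver rather than the record's reformulation or NDS method.

        import numpy as np
        from scipy.optimize import linprog

        bids = np.array([30.0, 22.0, 15.0])        # $/MW offered per FTR path
        ptdf = np.array([[0.5, -0.2, 0.3],         # line sensitivities to awards
                         [0.1,  0.6, -0.4]])
        limit = np.array([100.0, 80.0])            # line ratings, MW

        res = linprog(c=-bids,                     # linprog minimizes, so negate
                      A_ub=np.vstack([ptdf, -ptdf]),
                      b_ub=np.concatenate([limit, limit]),
                      bounds=[(0, 120)] * 3)       # per-bid MW caps
        awards = res.x                             # welfare-maximizing awards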

  6. Natural SUSY and the Higgs boson

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Peisi

    2014-01-01

    Supersymmetry (SUSY) solves the hierarchy problem by introducing a super partner to each Standard Model (SM) particle. SUSY must be broken in nature, which means the fine-tuning is reintroduced at some level. Natural SUSY models enjoy low fine-tuning by featuring a small superpotential parameter μ ~ 125 GeV, while the third generation squarks have mass less than 1.5 TeV. First and second generation sfermions can be at the multi-TeV level, which yields a decoupling solution to the SUSY flavor and CP problem. However, models of Natural SUSY have difficulties in predicting an m_h at 125 GeV, because the third generation is too light to give large radiative corrections to the Higgs mass. The models of Radiative Natural SUSY (RNS) address this problem by allowing for a high scale soft SUSY breaking Higgs mass m_Hu > m_0, which leads to automatic cancellation by the Renormalization Group (RG) running effect. Coupled with the large mixing in the stop sector, RNS allows low fine-tuning at the 3-10% level and a 125 GeV SM-like Higgs. RNS can be reached at the LHC and at a linear collider. If the strong CP problem is solved by the Peccei-Quinn mechanism, then RNS accommodates mixed axion-Higgsino cold dark matter, where the Higgsino-like WIMPs, which in this case make up only a fraction of the relic abundance, can be detectable at future WIMP detectors.

  7. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
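
    Two of the record's concrete recommendations, tournament selection and a mutation rate near half the inverse of the population size, are easy to state in code. The fragment below is a generic bit-string GA sketch written for illustration; the chromosome length, population size, and fitness function are arbitrary assumptions.

        import random

        def tournament(pop, fitness, k=2):
            """Tournament selection: pick k individuals at random and keep the
            fittest (no fitness scaling needed, per the record's recommendation)."""
            picks = random.sample(range(len(pop)), k)
            return pop[max(picks, key=lambda i: fitness[i])]

        def mutate(bits, pop_size):
            # record's guidance: mutation rate about 1 / (2 * population size)
            rate = 1.0 / (2.0 * pop_size)
            return [b ^ (random.random() < rate) for b in bits]

        pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(40)]
        fit = [sum(ind) for ind in pop]            # toy fitness: count of ones
        parent = tournament(pop, fit)
        child = mutate(parent, len(pop))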

  8. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  9. A fast, parallel algorithm to solve the basic fluvial erosion/transport equations

    NASA Astrophysics Data System (ADS)

    Braun, J.

    2012-04-01

    Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predicting the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation which limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation. The algorithm is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape), and it is fully parallelizable (the computational cost decreases in direct inverse proportion to the number of processors used to solve the problem). The algorithm is ideally suited for use on the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 X 10,000 nodes) while keeping the computational cost reasonable (order 1 sec per time step). Furthermore, I will show that the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. I will demonstrate that such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes on geological time scales. It can also be used to model surface processes at the continental or planetary scale and be linked to lithospheric or mantle flow models to predict the potential interactions between tectonics driving surface uplift in orogenic areas, mantle flow producing dynamic topography on continental scales and surface processes.
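
    The non-local quantity at the heart of the record, drainage area, can be accumulated in O(n) once each node's flow receiver is known. The Python below is a hedged, serial illustration of that single-receiver accumulation idea (the record's parallel and implicit-time machinery is not shown); the receiver array and unit cell area are toy assumptions.

        import numpy as np

        def drainage_area(receiver, cell_area=1.0):
            """O(n) drainage-area accumulation: each node drains to exactly one
            receiver; a node with receiver[i] == i is an outlet. Nodes are
            visited once via an explicit stack, then summed downstream."""
            n = len(receiver)
            donors = [[] for _ in range(n)]
            for i, r in enumerate(receiver):
                if r != i:
                    donors[r].append(i)
            area = np.full(n, cell_area, dtype=float)
            for outlet in (i for i in range(n) if receiver[i] == i):
                order, stack = [], [outlet]
                while stack:                       # downstream-to-upstream ordering
                    node = stack.pop()
                    order.append(node)
                    stack.extend(donors[node])
                for node in reversed(order):       # add each area to its receiver
                    if receiver[node] != node:
                        area[receiver[node]] += area[node]
            return area

        # tiny chain: 2 -> 1 -> 0 (node 0 is the outlet)
        print(drainage_area([0, 0, 1]))            # [3. 2. 1.]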

  10. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residuals (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
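
    The element-by-element idea pairs naturally with GMRES because the Krylov solver only needs matrix-vector products, never the assembled matrix. The sketch below illustrates that matrix-free pattern with a stand-in tridiagonal operator; it is not the AIE/GEBE formulation itself, and the operator is an invented example.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 1000
        diag = 4.0 + np.arange(n) / n

        def matvec(x):
            # stand-in for a sum of per-element contributions (never assembled)
            y = diag * x
            y[1:] -= x[:-1]
            y[:-1] -= x[1:]
            return y

        A = LinearOperator((n, n), matvec=matvec)
        b = np.ones(n)
        x, info = gmres(A, b)                      # Krylov solve, matrix-free
        assert info == 0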

  11. Perfect quantum multiple-unicast network coding protocol

    NASA Astrophysics Data System (ADS)

    Li, Dan-Dan; Gao, Fei; Qin, Su-Juan; Wen, Qiao-Yan

    2018-01-01

    In order to realize long-distance and large-scale quantum communication, it is natural to utilize quantum repeaters. For a general quantum multiple-unicast network, it has remained puzzling how to complete communication tasks perfectly with fewer resources such as registers. In this paper, we solve this problem. By applying quantum repeaters to the multiple-unicast communication problem, we give encoding-decoding schemes for source nodes, internal nodes and target nodes, respectively. Source-target node pairs share EPR pairs by using our encoding-decoding schemes over the quantum multiple-unicast network. Furthermore, quantum communication can be accomplished perfectly via teleportation. Compared with existing schemes, our schemes can reduce resource consumption and realize long-distance transmission of quantum information.

  12. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.
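
    The dispatching-rule heuristic can be pictured as a greedy loop: sort candidate activities by a priority rule and admit each into the earliest timeline segment with enough remaining capacity. The sketch below is an invented illustration of that pattern, not the paper's actual procedure; the activity tuples and the per-segment capacity are toy assumptions.

        def schedule_segments(activities, segments, capacity=24.0):
            """Dispatch-rule sketch: activities are (name, duration, priority)
            tuples; higher priority is considered first, and each activity is
            placed in the earliest segment with enough remaining hours."""
            plan = {seg: [] for seg in range(segments)}
            remaining = {seg: capacity for seg in range(segments)}
            for name, dur, _ in sorted(activities, key=lambda a: -a[2]):
                for seg in range(segments):        # earliest feasible slot
                    if remaining[seg] >= dur:
                        plan[seg].append(name)
                        remaining[seg] -= dur
                        break
            return plan

        plan = schedule_segments([("obs-1", 6, 3), ("exp-2", 10, 5),
                                  ("maint", 12, 1)], segments=2)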

  13. Relaxion: A landscape without anthropics

    NASA Astrophysics Data System (ADS)

    Nelson, Ann; Prescod-Weinstein, Chanda

    2017-12-01

    The relaxion mechanism provides a potentially elegant solution to the hierarchy problem without resorting to anthropic or other fine-tuning arguments. This mechanism introduces an axion-like field, dubbed the relaxion, whose expectation value determines the electroweak hierarchy as well as the QCD strong CP-violating θ̄ parameter. During an inflationary period, the Higgs mass squared is selected to be negative and hierarchically small in a theory which is consistent with 't Hooft's technical naturalness criteria. However, in the original model proposed by Graham, Kaplan, and Rajendran [Phys. Rev. Lett. 115, 221801 (2015), 10.1103/PhysRevLett.115.221801], the relaxion does not solve the strong CP problem, and in fact contributes to it, as the coupling of the relaxion to the Higgs field and the introduction of a linear potential for the relaxion produce large strong CP violation. We resolve this tension by considering inflation with a Hubble scale which is above the QCD scale but below the weak scale, and estimating the Hubble temperature dependence of the axion mass. The relaxion potential is thus very different during inflation than it is today. We find that provided the inflationary Hubble scale is between the weak scale and about 3 GeV, the relaxion resolves the hierarchy, strong CP, and dark matter problems in a way that is technically natural.

  14. Case of two electrostatics problems: Can providing a diagram adversely impact introductory physics students' problem solving performance?

    NASA Astrophysics Data System (ADS)

    Maries, Alexandru; Singh, Chandralekha

    2018-06-01

    Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an investigation in which two different interventions were implemented during recitation quizzes in a large enrollment algebra-based introductory physics course. Students were either (i) asked to solve problems in which the diagrams were drawn for them or (ii) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed rubrics to score the problem solving performance of students in different intervention groups and investigated ten problems. We found that students who were provided diagrams never performed better and actually performed worse than the other students on three problems, one involving standing sound waves in a tube (discussed elsewhere) and two problems in electricity which we focus on here. These two problems were the only problems in electricity that involved considerations of initial and final conditions, which may partly account for why students provided with diagrams performed significantly worse than students who were not provided with diagrams. In order to explore potential reasons for this finding, we conducted interviews with students and found that some students provided with diagrams may have spent less time on the conceptual analysis and planning stage of the problem solving process. In particular, those provided with the diagram were more likely to jump into the implementation stage of problem solving early without fully analyzing and understanding the problem, which can increase the likelihood of mistakes in solutions.

  15. Applications of remote sensing to estuarine problems. [estuaries of Chesapeake Bay

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.

    1975-01-01

    A variety of siting problems for the estuaries of the lower Chesapeake Bay have been solved with cost-beneficial remote sensing techniques. Principal techniques used were repetitive 1:30,000 color photography of dye-emitting buoys to map circulation patterns, and investigation of water color boundaries via color and color-infrared imagery at scales of 1:120,000. Problems solved included sewage outfall siting, shoreline preservation and enhancement, oil pollution risk assessment, and protection of shellfish beds from dredge operations.

  16. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to ||Ax - b|| <= τ||e||, where e is the noise in b and L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
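
    The TRSVD building block is easy to sketch: compute a randomized range finder of dimension k+q, form the small projected matrix, take its SVD, and keep the leading k triplets. The Python below is a hedged sketch of that generic step only (the general-form regularization and LSQR inner solves of the record are omitted); the oversampling default is an assumption.

        import numpy as np

        def trsvd(A, k, q=10, rng=None):
            """Truncated randomized SVD: form a rank-(k+q) randomized SVD of A
            and truncate it to rank k."""
            rng = rng or np.random.default_rng(0)
            m, n = A.shape
            Omega = rng.standard_normal((n, k + q))
            Q, _ = np.linalg.qr(A @ Omega)         # range finder for A
            B = Q.T @ A                            # small (k+q) x n matrix
            Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
            U = Q @ Ub
            return U[:, :k], s[:k], Vt[:k]         # keep leading k triplets

        A = np.random.default_rng(1).standard_normal((200, 120))
        U, s, Vt = trsvd(A, k=10)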

  17. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
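
    The alternating structure described here resembles consensus ADMM: component problems are solved separately, then multipliers steer their solutions toward a common model. The sketch below illustrates that skeleton with scalar quadratic components; the solver interface, penalty value, and demo numbers are invented for illustration, not the authors' formulation.

        import numpy as np

        def consensus_solve(component_solvers, n, iters=100):
            """Consensus skeleton: solver i returns argmin of
            phi_i(m) + (rho/2)*||m - t||^2 given target t = z - u_i;
            multipliers u_i push the component models toward a common z."""
            K = len(component_solvers)
            z = np.zeros(n)
            U = [np.zeros(n) for _ in range(K)]
            for _ in range(iters):
                M = [solve(z - U[i]) for i, solve in enumerate(component_solvers)]
                z = np.mean([M[i] + U[i] for i in range(K)], axis=0)
                U = [U[i] + M[i] - z for i in range(K)]
            return z

        # two quadratics with minima at 0 and 2 (rho = 1): consensus -> 1
        s0 = lambda t: (2 * 0.0 + t) / 3.0         # argmin (m-0)^2 + 0.5*(m-t)^2
        s1 = lambda t: (2 * 2.0 + t) / 3.0         # argmin (m-2)^2 + 0.5*(m-t)^2
        print(consensus_solve([s0, s1], n=1))      # approximately [1.0]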

  18. Partially-Averaged Navier Stokes Model for Turbulence: Implementation and Validation

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Abdol-Hamid, Khaled S.

    2005-01-01

    Partially-averaged Navier Stokes (PANS) is a suite of turbulence closure models of various modeled-to-resolved scale ratios ranging from Reynolds-averaged Navier Stokes (RANS) to Navier-Stokes (direct numerical simulations). The objective of PANS, like hybrid models, is to resolve large scale structures at reasonable computational expense. The modeled-to-resolved scale ratio or the level of physical resolution in PANS is quantified by two parameters: the unresolved-to-total ratios of kinetic energy (f(sub k)) and dissipation (f(sub epsilon)). The unresolved-scale stress is modeled with the Boussinesq approximation and modeled transport equations are solved for the unresolved kinetic energy and dissipation. In this paper, we first present a brief discussion of the PANS philosophy, followed by a description of the implementation procedure, and finally perform a preliminary evaluation on benchmark problems.
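
    The two resolution-control parameters named in the abstract admit a one-line definition (standard PANS notation, restated here rather than quoted from the paper):

      \[ f_k = \frac{k_u}{k}, \qquad f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}, \]

    where k_u and ε_u are the unresolved kinetic energy and dissipation and k and ε the totals; f_k = f_ε = 1 recovers RANS, while f_k → 0 approaches direct numerical simulation.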

  19. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., realtime interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment (instead of just with instrument control); (5) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g. the rotocraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g. 
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provide a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.

  20. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
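
    The constraint-aggregation step described above has a simple linear-algebra core: each group of rows of A x <= b is replaced by one convex combination, which any point feasible for the originals still satisfies, so the coarse model is a relaxation. A minimal NumPy sketch (function names and the uniform-weight choice are illustrative assumptions, not the authors' code):

      import numpy as np

      def aggregate_constraints(A, b, groups):
          # Replace each group of rows of A x <= b with one convex
          # combination (uniform weights here); the aggregated system
          # is a relaxation of the original.
          agg_A = np.vstack([A[g].mean(axis=0) for g in groups])
          agg_b = np.array([b[g].mean() for g in groups])
          return agg_A, agg_b

      def violated_rows(A, b, x, tol=1e-9):
          # Original constraints violated by a coarse solution x; these
          # are added back until the semi-coarse model is satisfied.
          return np.where(A @ x > b + tol)[0]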

  1. Poisson-Nernst-Planck equations for simulating biomolecular diffusion-reaction processes I: Finite element solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu Benzhuo; Holst, Michael J.; Center for Theoretical Biological Physics, University of California San Diego, La Jolla, CA 92093

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for simulating electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.
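
    For reference, the coupled system the abstract refers to can be written, in a standard steady-state form (textbook notation, not quoted from the paper), as

      \[ -\nabla\cdot\bigl(\varepsilon\,\nabla\phi\bigr) = \rho^{f} + \sum_i q_i c_i, \qquad \nabla\cdot\Bigl[ D_i \Bigl( \nabla c_i + \frac{q_i}{k_B T}\, c_i \nabla\phi \Bigr) \Bigr] = 0, \]

    where φ is the electrostatic potential, ρ^f the fixed (permanent) charge density inside the biomolecule, and c_i, q_i, D_i the concentration, charge and diffusion coefficient of ionic species i; the Poisson equation holds in both the biomolecule and the solvent, while the Nernst-Planck equations hold only in the solvent.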

  2. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions

    PubMed Central

    Lu, Benzhuo; Holst, Michael J.; McCammon, J. Andrew; Zhou, Y. C.

    2010-01-01

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems. PMID:21709855

  3. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions.

    PubMed

    Lu, Benzhuo; Holst, Michael J; McCammon, J Andrew; Zhou, Y C

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.

  4. Quantum communication and information processing

    NASA Astrophysics Data System (ADS)

    Beals, Travis Roland

    Quantum computers enable dramatically more efficient algorithms for solving certain classes of computational problems, but, in doing so, they create new problems. In particular, Shor's Algorithm allows for efficient cryptanalysis of many public-key cryptosystems. As public key cryptography is a critical component of present-day electronic commerce, it is crucial that a working, secure replacement be found. Quantum key distribution (QKD), first developed by C.H. Bennett and G. Brassard, offers a partial solution, but many challenges remain, both in terms of hardware limitations and in designing cryptographic protocols for a viable large-scale quantum communication infrastructure. In Part I, I investigate optical lattice-based approaches to quantum information processing. I look at details of a proposal for an optical lattice-based quantum computer, which could potentially be used for both quantum communications and for more sophisticated quantum information processing. In Part III, I propose a method for converting and storing photonic quantum bits in the internal state of periodically-spaced neutral atoms by generating and manipulating a photonic band gap and associated defect states. In Part II, I present a cryptographic protocol which allows for the extension of present-day QKD networks over much longer distances without the development of new hardware. I also present a second, related protocol which effectively solves the authentication problem faced by a large QKD network, thus making QKD a viable, information-theoretic secure replacement for public key cryptosystems.

  5. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Fu; Leyffer, Sven; Munson, Todd

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.

  6. Ways of problem solving as predictors of relapse in alcohol dependent male inpatients.

    PubMed

    Demirbas, Hatice; Ilhan, Inci Ozgur; Dogan, Yildirim Beyatli

    2012-01-01

    The purpose of this study was to identify how remitters and relapsers view their everyday problem solving strategies. A total of 128 alcohol dependent male inpatients who were hospitalized at the Ankara University Psychiatry Clinic, Alcohol and Substance Abuse Treatment Unit were recruited for the study. Subjects' demographic status and alcohol use histories were assessed by a self-report questionnaire. Also, patients were evaluated with the Coopersmith Self-esteem Inventory (CSI), the Spielberger State-Trait Anxiety Scale (STAI-I-II), and the Problem Solving Inventory (PSI). Patients were followed for six months at monthly intervals after hospital discharge. Drinking status was assessed in terms of abstinence and relapse. Data were assessed with Student's t-test, and univariate and multivariate analyses. In the logistic regression analysis, age, marital status, employment status and PSI subscores were taken as the independent variables and drinking state at the end of six months as the dependent variable. There were significant differences in the reflective, avoidant and monitoring styles of problem solving between abstainers and relapsers. It was found that subjects who perceived their problem solving style as less avoidant and less reflective were at greater risk of relapse. The findings demonstrated that active engagement in problem solving, such as utilizing avoidant and reflective styles, enhances abstinence. In treatment, expanding the behavior repertoire and increasing the variety of problem-solving strategies that can be utilized in daily life should be one of the major goals of the treatment program. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. The roles of emotional competence and social problem-solving in the relationship between physical abuse and adolescent suicidal ideation in China.

    PubMed

    Kwok, Sylvia Y C L; Yeung, Jerf W K; Low, Andrew Y T; Lo, Herman H M; Tam, Cherry H L

    2015-06-01

    The study investigated the relationship among physical abuse, positive psychological factors including emotional competence and social problem-solving, and suicidal ideation among adolescents in China. The possible moderating effects of emotional competence and social problem-solving on the association between physical abuse and adolescent suicidal ideation were also studied. A cross-sectional survey employing convenience sampling was conducted, and self-administered questionnaires were collected from 527 adolescents with a mean age of 14 years from schools in Shanghai. Results showed that physical abuse was significantly and positively related to suicidal ideation in both male and female adolescents. Emotional competence was not found to be significantly associated with adolescent suicidal ideation, but rational problem-solving, a sub-scale of social problem-solving, was shown to be significantly and negatively associated with suicidal ideation for males, but not for females. However, emotional competence and rational problem-solving were shown to be a significant and a marginally significant moderator, respectively, of the relationship between physical abuse and suicidal ideation in females, but not in males. High rational problem-solving buffered the negative impact of physical abuse on suicidal ideation for females. Interestingly, females with higher empathy who reported being physically abused by their parents had higher suicidal ideation. Findings are discussed and implications are stated. It is suggested that interventions change parents' attitudes toward physical abuse, guide them in appropriate parenting attitudes, knowledge and skills, and enhance adolescents' skills in rational problem-solving. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Biomimetics: its practice and theory.

    PubMed

    Vincent, Julian F V; Bogatyreva, Olga A; Bogatyrev, Nikolaj R; Bowyer, Adrian; Pahl, Anja-Karina

    2006-08-22

    Biomimetics, a name coined by Otto Schmitt in the 1950s for the transfer of ideas and analogues from biology to technology, has produced some significant and successful devices and concepts in the past 50 years, but is still empirical. We show that TRIZ, the Russian system of problem solving, can be adapted to illuminate and manipulate this process of transfer. Analysis using TRIZ shows that there is only 12% similarity between biology and technology in the principles which solutions to problems illustrate, and while technology solves problems largely by manipulating usage of energy, biology uses information and structure, two factors largely ignored by technology.

  9. Recent progress in multi-electrode spike sorting methods.

    PubMed

    Lefebvre, Baptiste; Yger, Pierre; Marre, Olivier

    2016-11-01

    In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed with a high density. These recordings should allow neuroscientists to reconstruct the individual activity of the neurons spiking in the vicinity of these electrodes, with the help of signal processing algorithms. Algorithms need to solve a source separation problem, also known as spike sorting. However, these new devices challenge the classical way to do spike sorting. Here we review different methods that have been developed to sort spikes from these large-scale recordings. We describe the common properties of these algorithms, as well as their main differences. Finally, we outline the issues that remain to be solved by future spike sorting algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Zero-temperature quantum annealing bottlenecks in the spin-glass phase.

    PubMed

    Knysh, Sergey

    2016-08-05

    A promising approach to solving hard binary optimization problems is quantum adiabatic annealing in a transverse magnetic field. An instantaneous ground state, initially a symmetric superposition of all possible assignments of N qubits, is closely tracked as it becomes more and more localized near the global minimum of the classical energy. Regions where the energy gap to excited states is small (for instance, at the phase transition) are the algorithm's bottlenecks. Here I show how for large problems the complexity becomes dominated by O(log N) bottlenecks inside the spin-glass phase, where the gap scales as a stretched exponential. For smaller N, only the gap at the critical point is relevant, where it scales polynomially, as long as the phase transition is second order. This phenomenon is demonstrated rigorously for the two-pattern Gaussian Hopfield model. Qualitative comparison with the Sherrington-Kirkpatrick model leads to similar conclusions.

  11. NASA/Drexel program. [research effort in large-scale technical programs management for application to urban problems]

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The results are reported of the NASA/Drexel research effort which was conducted in two separate phases. The initial phase stressed exploration of the problem from the point of view of three primary research areas and the building of a multidisciplinary team. The final phase consisted of a clinical demonstration program in which the research associates consulted with the County Executive of New Castle County, Delaware, to aid in solving actual problems confronting the County Government. The three primary research areas of the initial phase are identified as technology, management science, and behavioral science. Five specific projects which made up the research effort are treated separately. A final section contains the conclusions drawn from total research effort as well as from the specific projects.

  12. Lexical Problems in Large Distributed Information Systems.

    ERIC Educational Resources Information Center

    Berkovich, Simon Ya; Shneiderman, Ben

    1980-01-01

    Suggests a unified concept of a lexical subsystem as part of an information system to deal with lexical problems in local and network environments. The linguistic and control functions of the lexical subsystems in solving problems for large computer systems are described, and references are included. (Author/BK)

  13. Experimental realization of a one-way quantum computer algorithm solving Simon's problem.

    PubMed

    Tame, M S; Bell, B A; Di Franco, C; Wadsworth, W J; Rarity, J G

    2014-11-14

    We report an experimental demonstration of a one-way implementation of a quantum algorithm solving Simon's problem, a black-box period-finding problem that has an exponential gap between the classical and quantum runtime. Using an all-optical setup and modifying the bases of single-qubit measurements on a five-qubit cluster state, key representative functions of the logical two-qubit version's black box can be queried and solved. To the best of our knowledge, this work represents the first experimental realization of the quantum algorithm solving Simon's problem. The experimental results are in excellent agreement with the theoretical model, demonstrating the successful performance of the algorithm. With a view to scaling up to larger numbers of qubits, we analyze the resource requirements for an n-qubit version. This work helps highlight how one-way quantum computing provides a practical route to experimentally investigating the quantum-classical gap in the query complexity model.
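
    To make the classical-quantum gap concrete, here is a small classical illustration of the black-box promise behind Simon's problem (purely illustrative; it has nothing to do with the paper's optical implementation, and all names are hypothetical). An oracle with f(x) = f(x XOR s) must be queried roughly 2^(n/2) times classically to find a collision revealing s, whereas Simon's quantum algorithm needs only O(n) oracle calls.

      import random

      def make_simon_oracle(n, s):
          # Black-box f with the Simon promise: f(x) == f(y) iff y == x ^ s.
          keys = sorted({min(x, x ^ s) for x in range(2 ** n)})
          vals = random.sample(range(2 ** n), len(keys))  # distinct output per pair
          table = dict(zip(keys, vals))
          return lambda x: table[min(x, x ^ s)]

      def classical_find_s(f, n):
          # Collision search: a collision {x, x'} reveals s = x ^ x'.
          seen = {}
          for x in range(2 ** n):
              y = f(x)
              if y in seen:
                  return seen[y] ^ x
              seen[y] = x
          return 0  # no collision: f is one-to-one, so s = 0

    For example, classical_find_s(make_simon_oracle(5, 0b10110), 5) returns 0b10110 after scanning at most 2^5 inputs.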

  14. [Problem-solving strategies and marital satisfaction].

    PubMed

    Kriegelewicz, Olga

    2006-01-01

    This study investigated the relation between problem-solving strategies in marital conflict and marital satisfaction. Four problem-solving strategies (Dialogue, Loyalty, Escalation of conflict and Withdrawal) were measured by the Problem-Solving Strategies Inventory, in two versions: self-report and report of the partner's perceived behaviour. This measure refers to the concept of Rusbult, Johnson and Morrow, and meets high standards of reliability (Cronbach's alpha from 0.78 to 0.94) and validity. Marital satisfaction was measured by the Marriage Success Scale. The sample was composed of 147 married couples. The study revealed that satisfied couples, in comparison with non-satisfied couples, tend to use constructive problem-solving strategies (Dialogue and Loyalty). They rarely use destructive strategies like Escalation of conflict or Withdrawal. Dialogue is the strategy most positively connected with satisfaction. These might be very important guidelines for couples' psychotherapy. One's own Loyalty is also a significant positive predictor of male satisfaction. The study shows that constructive attitudes are the most significant predictors of marriage satisfaction. It is therefore worth concentrating mostly on them in the psychotherapeutic process instead of eliminating destructive attitudes.

  15. Cross-syndrome comparison of real-world executive functioning and problem solving using a new problem-solving questionnaire.

    PubMed

    Camp, Joanne S; Karmiloff-Smith, Annette; Thomas, Michael S C; Farran, Emily K

    2016-12-01

    Individuals with neurodevelopmental disorders like Williams syndrome and Down syndrome exhibit executive function impairments on experimental tasks (Lanfranchi, Jerman, Dal Pont, Alberti, & Vianello, 2010; Menghini, Addona, Costanzo, & Vicari, 2010), but the way that they use executive functioning for problem solving in everyday life has not hitherto been explored. The study aim is to understand cross-syndrome characteristics of everyday executive functioning and problem solving. Parents/carers of individuals with Williams syndrome (n=47) or Down syndrome (n=31) of a similar chronological age (m=17 years 4 months and 18 years respectively) as well as those of a group of younger typically developing children (n=34; m=8 years 3 months) completed two questionnaires: the Behavior Rating Inventory of Executive Function (BRIEF; Gioia, Isquith, Guy, & Kenworthy, 2000) and a novel Problem-Solving Questionnaire. The rated likelihood of reaching a solution in a problem solving situation was lower for both syndromic groups than the typical group, and lower still for the Williams syndrome group than the Down syndrome group. The proportion of group members meeting the criterion for clinical significance on the BRIEF was also highest for the Williams syndrome group. While changing response, avoiding losing focus and maintaining perseverance were important for problem-solving success in all groups, asking for help and avoiding becoming emotional were also important for the Down syndrome and Williams syndrome groups respectively. Keeping possessions in order was a relative strength amongst BRIEF scales for the Down syndrome group. Results suggest that individuals with Down syndrome tend to use compensatory strategies for problem solving (asking for help and, potentially, keeping items well ordered), while for individuals with Williams syndrome, emotional reactions disrupt their problem-solving skills. This paper highlights the importance of identifying syndrome-specific problem-solving strengths and difficulties to improve effective functioning in everyday life. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Students' Explanations in Complex Learning of Disciplinary Programming

    ERIC Educational Resources Information Center

    Vieira, Camilo

    2016-01-01

    Computational Science and Engineering (CSE) has been denominated as the third pillar of science and as a set of important skills to solve the problems of a global society. Along with the theoretical and the experimental approaches, computation offers a third alternative to solve complex problems that require processing large amounts of data, or…

  17. Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application

    ERIC Educational Resources Information Center

    Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim

    2013-01-01

    Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…

  18. Show, Don't Tell: Using Photographic "Snapsignments" to Advance and Assess Creative Problem Solving

    ERIC Educational Resources Information Center

    Machin, Jane E.

    2016-01-01

    Traditional assignments that aim to develop and evaluate creative problem solving skills are frequently foregone in large marketing classes due to the daunting grading prospect they present. Here, a new assessment method is introduced: the "snapsignment." Through photography, individual projects can be assigned that promote higher order…

  19. The Effects of Feedback during Exploratory Mathematics Problem Solving: Prior Knowledge Matters

    ERIC Educational Resources Information Center

    Fyfe, Emily R.; Rittle-Johnson, Bethany; DeCaro, Marci S.

    2012-01-01

    Providing exploratory activities prior to explicit instruction can facilitate learning. However, the level of guidance provided during the exploration has largely gone unstudied. In this study, we examined the effects of 1 form of guidance, feedback, during exploratory mathematics problem solving for children with varying levels of prior domain…

  20. Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture

    NASA Technical Reports Server (NTRS)

    Desai, Prasun N.; Conway, Bruce A.

    2005-01-01

    Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in its governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimal 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.
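
    In generic trapezoidal-collocation form (a standard scheme, stated here as an assumption rather than the paper's exact discretization), the two-timescale idea amounts to enforcing defect constraints on two grids:

      \[ \zeta_j = x_{j+1} - x_j - \tfrac{H}{2}\bigl[f(x_j,u_j) + f(x_{j+1},u_{j+1})\bigr] = 0, \qquad \eta_i = \omega_{i+1} - \omega_i - \tfrac{h}{2}\bigl[g(\omega_i,u_i) + g(\omega_{i+1},u_{i+1})\bigr] = 0, \]

    with the slowly varying translational states x collocated on coarse nodes of spacing H and the high-frequency rotational states ω on a fine subgrid of spacing h ≪ H, which is what shrinks the overall problem size relative to a single fine grid for all states.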
