Sample records for explicit problem solving

  1. Effects of the SOLVE Strategy on the Mathematical Problem Solving Skills of Secondary Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Freeman-Green, Shaqwana M.; O'Brien, Chris; Wood, Charles L.; Hitt, Sara Beth

    2015-01-01

    This study examined the effects of explicit instruction in the SOLVE Strategy on the mathematical problem solving skills of six Grade 8 students with specific learning disabilities. The SOLVE Strategy is an explicit instruction, mnemonic-based learning strategy designed to help students in solving mathematical word problems. Using a multiple probe…

  2. The Effect of Using an Explicit General Problem Solving Teaching Approach on Elementary Pre-Service Teachers' Ability to Solve Heat Transfer Problems

    ERIC Educational Resources Information Center

    Mataka, Lloyd M.; Cobern, William W.; Grunert, Megan L.; Mutambuki, Jacinta; Akom, George

    2014-01-01

    This study investigated the effectiveness of adding an "explicit general problem solving teaching strategy" (EGPS) to guided inquiry (GI) on pre-service elementary school teachers' ability to solve heat transfer problems. The pre-service elementary teachers in this study were enrolled in two sections of a chemistry course for pre-service…

  3. Implicit Runge-Kutta Methods with Explicit Internal Stages

    NASA Astrophysics Data System (ADS)

    Skvortsov, L. M.

    2018-03-01

    The main computational costs of implicit Runge-Kutta methods are caused by solving a system of algebraic equations at every step. By introducing explicit stages, it is possible to increase the stage (or pseudo-stage) order of the method, which makes it possible to increase the accuracy and avoid order reduction when solving stiff problems, without additional cost in solving algebraic equations. The paper presents implicit methods with an explicit first stage and one or two explicit internal stages. The results of solving test problems are compared with similar methods having no explicit internal stages.
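
The role of an explicit first stage can be illustrated with the implicit trapezoidal rule, read as a two-stage scheme whose first stage reuses f(t_n, y_n) at no algebraic cost; only the second stage requires an implicit solve. A minimal sketch for a scalar ODE (a generic A-stable scheme, not the specific methods of this paper):

```python
# Implicit trapezoidal rule viewed as a two-stage scheme with an explicit
# first stage: k1 = f(t_n, y_n) costs nothing algebraic, and only the second
# stage requires a (scalar Newton) solve.  Illustrative sketch only.
import math

def esdirk_trapezoid(f, dfdy, t0, y0, h, steps):
    """Integrate the scalar ODE y' = f(t, y) with the trapezoidal rule."""
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)                      # explicit first stage
        k2 = k1                           # initial guess for the implicit stage
        for _ in range(50):               # Newton: solve k2 = f(t+h, y + h/2*(k1+k2))
            z = y + 0.5 * h * (k1 + k2)
            g = k2 - f(t + h, z)
            dg = 1.0 - 0.5 * h * dfdy(t + h, z)
            step = g / dg
            k2 -= step
            if abs(step) < 1e-14:
                break
        y += 0.5 * h * (k1 + k2)
        t += h
    return y

if __name__ == "__main__":
    # Test on y' = -2y, y(0) = 1, integrated to t = 1; exact value exp(-2).
    approx = esdirk_trapezoid(lambda t, y: -2.0 * y, lambda t, y: -2.0,
                              0.0, 1.0, 0.01, 100)
    print(approx, math.exp(-2.0))
```

For stiff problems the payoff of the explicit first stage is that the per-step algebraic work is confined to the remaining stages; methods with higher stage order exploit the same structure.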

  4. Effects of an explicit problem-solving skills training program using a metacomponential approach for outpatients with acquired brain injury.

    PubMed

    Fong, Kenneth N K; Howie, Dorothy R

    2009-01-01

    We investigated the effects of an explicit problem-solving skills training program using a metacomponential approach with 33 outpatients with moderate acquired brain injury, in the Hong Kong context. We compared an experimental training intervention with this explicit problem-solving approach, which taught metacomponential strategies, with a conventional cognitive training approach that did not have this explicit metacognitive training. We found significant advantages for the experimental group on the Metacomponential Interview measure in association with the explicit metacomponential training, but transfer to the real-life problem-solving measures did not reach statistical significance. Small sample size, limited time of intervention, and some limitations with these tools may have contributed to these results. The training program was demonstrated to have a significantly greater effect than the conventional training approach on metacomponential functioning and the component of problem representation. However, these benefits were not transferable to real-life situations.

  5. EXPECT: Explicit Representations for Flexible Acquisition

    NASA Technical Reports Server (NTRS)

    Swartout, Bill; Gil, Yolanda

    1995-01-01

    To create more powerful knowledge acquisition systems, we not only need better acquisition tools, but we need to change the architecture of the knowledge based systems we create so that their structure will provide better support for acquisition. Current acquisition tools permit users to modify factual knowledge but they provide limited support for modifying problem solving knowledge. In this paper, we argue that these limitations stem from the use of incomplete models of problem-solving knowledge and inflexible specification of the interdependencies between problem-solving and factual knowledge. We describe the EXPECT architecture, which addresses these problems by providing an explicit representation of problem-solving knowledge and intent. Using this more explicit representation, EXPECT can automatically derive the interdependencies between problem-solving and factual knowledge. By deriving these interdependencies from the structure of the knowledge-based system itself, EXPECT supports more flexible and powerful knowledge acquisition.

  6. Interfaces Leading Groups of Learners to Make Their Shared Problem-Solving Organization Explicit

    ERIC Educational Resources Information Center

    Moguel, P.; Tchounikine, P.; Tricot, A.

    2012-01-01

    In this paper, we consider collective problem-solving challenges and a particular structuring objective: lead groups of learners to make their shared problem-solving organization explicit. Such an objective may be considered as a way to lead learners to consider building and maintaining a shared organization, and/or as a way to provide a basis for…

  7. Using Explicit C-R-A Instruction to Teach Fraction Word Problem Solving to Low-Performing Asian English Learners

    ERIC Educational Resources Information Center

    Kim, Sun A.; Wang, Peishi; Michaels, Craig A.

    2015-01-01

    This article investigates the effects of fraction word problem-solving instruction involving explicit teaching of the concrete-representational-abstract sequence with culturally relevant teaching examples for 3 low-performing Asian immigrant English learners who spoke a language other than English at home. We used a multiple probe design across…

  8. A family of approximate solutions and explicit error estimates for the nonlinear stationary Navier-Stokes problem

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Karel, S.

    1975-01-01

    An algorithm for solving the nonlinear stationary Navier-Stokes problem is developed. Explicit error estimates are given. This mathematical technique is potentially adaptable to the separation problem.

  9. Effects of a Research-Based Intervention to Improve Seventh-Grade Students' Proportional Problem Solving: A Cluster Randomized Trial

    ERIC Educational Resources Information Center

    Jitendra, Asha K.; Harwell, Michael R.; Dupuis, Danielle N.; Karl, Stacy R.; Lein, Amy E.; Simonson, Gregory; Slater, Susan C.

    2015-01-01

    This experimental study evaluated the effectiveness of a research-based intervention, schema-based instruction (SBI), on students' proportional problem solving. SBI emphasizes the underlying mathematical structure of problems, uses schematic diagrams to represent information in the problem text, provides explicit problem-solving and metacognitive…

  11. A new solution method for wheel/rail rolling contact.

    PubMed

    Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei

    2016-01-01

    To solve the problem of wheel/rail rolling contact of nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed in the explicit software ANSYS/LS-DYNA. To improve solution speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of implicit and explicit algorithms. The solution method was first applied to calculate the pre-loading of wheel/rail rolling contact with the explicit algorithm, and the results then served as the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models with large scale and high nonlinearity.

  12. Intuitive Tip of the Tongue Judgments Predict Subsequent Problem Solving One Day Later

    ERIC Educational Resources Information Center

    Collier, Azurii K.; Beeman, Mark

    2012-01-01

    Often when failing to solve problems, individuals report some idea of the solution, but cannot explicitly access the idea. We investigated whether such intuition would relate to improvements in solving and to the manner in which a problem was solved after a 24-hour delay. On Day 1, participants attempted to solve Compound Remote Associate…

  13. Incubation, Insight, and Creative Problem Solving: A Unified Theory and a Connectionist Model

    ERIC Educational Resources Information Center

    Helie, Sebastien; Sun, Ron

    2010-01-01

    This article proposes a unified framework for understanding creative problem solving, namely, the explicit-implicit interaction theory. This new theory of creative problem solving constitutes an attempt at providing a more unified explanation of relevant phenomena (in part by reinterpreting/integrating various fragmentary existing theories of…

  14. A Case Study in an Integrated Development and Problem Solving Environment

    ERIC Educational Resources Information Center

    Deek, Fadi P.; McHugh, James A.

    2003-01-01

    This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…

  15. Model Drawing Strategy for Fraction Word Problem Solving of Fourth-Grade Students with Learning Disabilities

    ERIC Educational Resources Information Center

    Sharp, Emily; Shih Dennis, Minyi

    2017-01-01

    This study used a multiple probe across participants design to examine the effects of a model drawing strategy (MDS) intervention package on fraction comparing and ordering word problem-solving performance of three Grade 4 students. MDS is a form of cognitive strategy instruction for teaching word problem solving that includes explicit instruction…

  16. Enhancing chemistry problem-solving achievement using problem categorization

    NASA Astrophysics Data System (ADS)

    Bunce, Diane M.; Gabel, Dorothy L.; Samuel, John V.

    The enhancement of chemistry students' skill in problem solving through problem categorization is the focus of this study. Twenty-four students in a freshman chemistry course for health professionals are taught how to solve problems using the explicit method of problem solving (EMPS) (Bunce & Heikkinen, 1986). The EMPS is an organized approach to problem analysis which includes encoding the information given in a problem (Given, Asked For), relating this to what is already in long-term memory (Recall), and planning a solution (Overall Plan) before a mathematical solution is attempted. In addition to the EMPS training, treatment students receive three 40-minute sessions following achievement tests in which they are taught how to categorize problems. Control students use this time to review the EMPS solutions of test questions. Although problem categorization is involved in one section of the EMPS (Recall), treatment students who received specific training in problem categorization demonstrate significantly higher achievement on combination problems (those requiring the use of more than one chemical topic for their solution) (p = 0.01) than their counterparts. Significantly higher achievement for treatment students is also measured on an unannounced test (p = 0.02). Analysis of interview transcripts of both treatment and control students illustrates a Rolodex approach to problem solving employed by all students in this study. The Rolodex approach involves organizing the equations used to solve problems on mental index cards and flipping through them, matching the units given, when a new problem is to be solved. A second phenomenon observed during student interviews is the absence of a link between students' conceptual understanding of the chemical concepts involved in a problem and the problem-solving skills employed to correctly solve those problems.
This study shows that explicit training in categorization skills and the EMPS can lead to higher achievement in complex problem-solving situations (combination problems and unannounced test). However, such achievement may be limited by the lack of linkages between students' conceptual understanding and improved problem-solving skill.

  17. Soft Systems Methodology and Problem Framing: Development of an Environmental Problem Solving Model Respecting a New Emergent Reflexive Paradigm.

    ERIC Educational Resources Information Center

    Gauthier, Benoit; And Others

    1997-01-01

    Identifies the more representative problem-solving models in environmental education. Suggests the addition of a strategy for defining a problem situation using Soft Systems Methodology to environmental education activities explicitly designed for the development of critical thinking. Contains 45 references. (JRH)

  18. Learning problem-solving skills in a distance education physics course

    NASA Astrophysics Data System (ADS)

    Rampho, G. J.; Ramorola, M. Z.

    2017-10-01

    In this paper we present the results of a study on the effectiveness of combinations of delivery modes of distance education in learning problem-solving skills in a distance education introductory physics course. Problem-solving instruction with explicit teaching of a problem-solving strategy and worked-out examples was implemented in the course. The study used the ex post facto research design with stratified sampling to investigate the effect of the learning of a problem-solving strategy on problem-solving performance. The number of problems attempted and the mean frequency of using a strategy in solving problems in the three course presentation modes were compared. The finding of the study indicated that combining the different course presentation modes had no statistically significant effect on the learning of problem-solving skills in the distance education course.

  19. A dependency-based modelling mechanism for problem solving

    NASA Technical Reports Server (NTRS)

    London, P.

    1978-01-01

    The paper develops a technique of dependency net modeling which relies on an explicit representation of justifications for beliefs held by the problem solver. Using these justifications, the modeling mechanism is able to determine the relevant lines of inference to pursue during problem solving. Three particular problem-solving difficulties which may be handled by the dependency-based technique are discussed: (1) subgoal violation detection, (2) description binding, and (3) maintaining a consistent world model.

  20. A numerical scheme to solve unstable boundary value problems

    NASA Technical Reports Server (NTRS)

    Kalnay de Rivas, E.

    1975-01-01

    A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integration (false transient schemes). In particular, it can attain a steady state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.
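
The forward (false transient) half of such schemes can be sketched for a stable model problem: to solve u'' = f with homogeneous boundary conditions, introduce an artificial time and march u_t = u_xx - f explicitly to steady state. A hedged sketch (the backward-in-time sweeps that handle unstable problems are not reproduced here):

```python
# False-transient (artificial time) iteration for the model BVP
#   u''(x) = f(x),  u(0) = u(1) = 0.
# March u_t = u_xx - f with explicit forward Euler until steady state;
# the steady state satisfies the original boundary value problem.
import math

def false_transient(f, n=50, dt=None, iters=10000):
    h = 1.0 / n
    dt = dt or 0.4 * h * h            # explicit stability limit: dt <= h^2 / 2
    x = [i * h for i in range(n + 1)]
    u = [0.0] * (n + 1)               # boundary values stay zero
    for _ in range(iters):
        unew = u[:]
        for i in range(1, n):
            lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h)
            unew[i] = u[i] + dt * (lap - f(x[i]))
        u = unew
    return x, u

if __name__ == "__main__":
    # Test problem: u'' = -pi^2 sin(pi x) has exact solution u = sin(pi x).
    x, u = false_transient(lambda s: -math.pi ** 2 * math.sin(math.pi * s))
    print(u[len(u) // 2])             # value at x = 0.5, exact value 1
```

The price of the explicit march is the dt <= h^2/2 restriction; the attraction, as the abstract notes, is that each sweep is trivially simple and requires no linear solves.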

  1. Decision-Making and Problem-Solving Approaches in Pharmacy Education

    PubMed Central

    Martin, Lindsay C.; Holdford, David A.

    2016-01-01

    Domain 3 of the Center for the Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes recommends that pharmacy school curricula prepare students to be better problem solvers, but is silent on the type of problems they should be prepared to solve. We identified five basic approaches to problem solving in the curriculum at a pharmacy school: clinical, ethical, managerial, economic, and legal. These approaches were compared to determine a generic process that could be applied to all pharmacy decisions. Although there were similarities in the approaches, generic problem solving processes may not work for all problems. Successful problem solving requires identification of the problems faced and application of the right approach to the situation. We also advocate that the CAPE Outcomes make explicit the importance of different approaches to problem solving. Future pharmacists will need multiple approaches to problem solving to adapt to the complexity of health care. PMID:27170823

  2. Decision-Making and Problem-Solving Approaches in Pharmacy Education.

    PubMed

    Martin, Lindsay C; Donohoe, Krista L; Holdford, David A

    2016-04-25

    Domain 3 of the Center for the Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes recommends that pharmacy school curricula prepare students to be better problem solvers, but is silent on the type of problems they should be prepared to solve. We identified five basic approaches to problem solving in the curriculum at a pharmacy school: clinical, ethical, managerial, economic, and legal. These approaches were compared to determine a generic process that could be applied to all pharmacy decisions. Although there were similarities in the approaches, generic problem solving processes may not work for all problems. Successful problem solving requires identification of the problems faced and application of the right approach to the situation. We also advocate that the CAPE Outcomes make explicit the importance of different approaches to problem solving. Future pharmacists will need multiple approaches to problem solving to adapt to the complexity of health care.

  3. Can False Memories Prime Problem Solutions?

    ERIC Educational Resources Information Center

    Howe, Mark L.; Garner, Sarah R.; Dewhurst, Stephen A.; Ball, Linden J.

    2010-01-01

    Previous research has suggested that false memories can prime performance on related implicit and explicit memory tasks. The present research examined whether false memories can also be used to prime higher order cognitive processes, namely, insight-based problem solving. Participants were asked to solve a number of compound remote associate task…

  4. Eleventh-Grade High School Students' Accounts of Mathematical Metacognitive Knowledge: Explicitness and Systematicity

    ERIC Educational Resources Information Center

    van Velzen, Joke H.

    2016-01-01

    Theoretically, it has been argued that a conscious understanding of metacognitive knowledge requires that this knowledge is explicit and systematic. The purpose of this descriptive study was to obtain a better understanding of explicitness and systematicity in knowledge of the mathematical problem-solving process. Eighteen 11th-grade…

  5. Improving Creative Problem-Solving in a Sample of Third Culture Kids

    ERIC Educational Resources Information Center

    Lee, Young Ju; Bain, Sherry K.; McCallum, R. Steve

    2007-01-01

    We investigated the effects of divergent thinking training (with explicit instruction) on problem-solving tasks in a sample of Third Culture Kids (Useem and Downie, 1976). We were specifically interested in whether the children's originality and fluency in responding increased following instruction, not only on classroom-based worksheets and the…

  6. An Alternative Time for Telling: When Conceptual Instruction Prior to Problem Solving Improves Mathematical Knowledge

    ERIC Educational Resources Information Center

    Fyfe, Emily R.; DeCaro, Marci S.; Rittle-Johnson, Bethany

    2014-01-01

    Background: The sequencing of learning materials greatly influences the knowledge that learners construct. Recently, learning theorists have focused on the sequencing of instruction in relation to solving related problems. The general consensus suggests explicit instruction should be provided; however, when to provide instruction remains unclear.…

  7. The Effects of Feedback during Exploratory Mathematics Problem Solving: Prior Knowledge Matters

    ERIC Educational Resources Information Center

    Fyfe, Emily R.; Rittle-Johnson, Bethany; DeCaro, Marci S.

    2012-01-01

    Providing exploratory activities prior to explicit instruction can facilitate learning. However, the level of guidance provided during the exploration has largely gone unstudied. In this study, we examined the effects of 1 form of guidance, feedback, during exploratory mathematics problem solving for children with varying levels of prior domain…

  8. Grasp of Consciousness and Performance in Mathematics Making Explicit the Ways of Thinking in Solving Cartesian Product Problems

    ERIC Educational Resources Information Center

    Soares, Maria Tereza Carneiro; Moro, Maria Lucia Faria; Spinillo, Alina Galvao

    2012-01-01

    This study examines the relationship between the grasp of consciousness of the reasoning process in Grades 5 and 8 pupils from a public and a private school, and their performance in mathematical problems of Cartesian product. Forty-two participants aged from 10 to 16 solved four problems in writing and explained their solution procedures by…

  9. Explicit solutions for exit-only radioactive decay chains

    NASA Astrophysics Data System (ADS)

    Yuan, Ding; Kernan, Warnick

    2007-05-01

    In this study, we extended Bateman's [Proc. Cambridge Philos. Soc. 15, 423 (1910)] original work for solving radioactive decay chains and explicitly derived analytic solutions for generic exit-only radioactive decay problems under given initial conditions. Instead of using the conventional Laplace transform for solving Bateman's equations, we used a much simpler algebraic approach. Finally, we discuss methods of breaking down certain classes of large decay chains into collections of simpler chains for easy handling.
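
For a linear chain N_1 -> N_2 -> ... with only the first species present initially and distinct decay constants, Bateman's closed-form solution can be evaluated directly. A small illustrative sketch (not the algebraic derivation of this paper):

```python
# Bateman (1910) closed-form solution for a linear decay chain
#   N1 -> N2 -> ... -> Nk, with only N1 present at t = 0.
# Assumes all decay constants are distinct (repeated constants make the
# denominators vanish and need a separate limiting form).
import math

def bateman(lams, n1_0, t):
    """Return [N_1(t), ..., N_k(t)] for decay constants lams = [lam_1, ..., lam_k]."""
    out = []
    for n in range(1, len(lams) + 1):
        prefactor = n1_0
        for j in range(n - 1):                 # lam_1 * ... * lam_{n-1}
            prefactor *= lams[j]
        total = 0.0
        for i in range(n):
            denom = 1.0
            for j in range(n):
                if j != i:                     # prod_{j != i} (lam_j - lam_i)
                    denom *= lams[j] - lams[i]
            total += math.exp(-lams[i] * t) / denom
        out.append(prefactor * total)
    return out

if __name__ == "__main__":
    # Sanity check: a one-member "chain" is plain exponential decay.
    print(bateman([0.5], 100.0, 2.0), 100.0 * math.exp(-1.0))
```

For a two-member chain the formula reduces to the familiar N_2(t) = N_1(0) lam_1/(lam_2 - lam_1) (e^{-lam_1 t} - e^{-lam_2 t}), which is a convenient hand check.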

  10. Modeling of outgassing and matrix decomposition in carbon-phenolic composites

    NASA Technical Reports Server (NTRS)

    Mcmanus, Hugh L.

    1994-01-01

    Work done in the period January to June 1994 is summarized. Two threads of research have been followed. First, the thermodynamics approach was used to model the chemical and mechanical responses of composites exposed to high temperatures. The thermodynamics approach lends itself easily to the use of variational principles. This thermodynamic-variational approach has been applied to the transpiration cooling problem. The second thread is the development of a better algorithm to solve the governing equations resulting from the modeling. An explicit finite difference method is explored for solving the governing nonlinear partial differential equations. The method allows detailed material models to be included and permits solution on massively parallel supercomputers. To demonstrate the feasibility of the explicit scheme in solving nonlinear partial differential equations, a transpiration cooling problem was solved. Some interesting transient behaviors were captured, such as stress waves and small spatial oscillations of the transient pressure distribution.

  11. Experiences with explicit finite-difference schemes for complex fluid dynamics problems on STAR-100 and CYBER-203 computers

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Rudy, D. H.; Drummond, J. P.; Harris, J. E.

    1982-01-01

    Several two- and three-dimensional external and internal flow problems solved on the STAR-100 and CYBER-203 vector processing computers are described. The flow field was described by the full Navier-Stokes equations, which were then solved by explicit finite-difference algorithms. Problem results and computer system requirements are presented. Program organization and database structure for three-dimensional computer codes, designed to eliminate or reduce page faulting, are discussed. Storage requirements for three-dimensional codes are reduced by calculating transformation metric data at each step. As a result, in-core grid points were increased in number by 50% to 150,000, with a 10% execution time increase. An assessment of current and future machine requirements shows that even on the CYBER-205 computer only a few problems can be solved realistically. Estimates reveal that the present situation is more storage limited than compute rate limited, but advancements in both storage and speed are essential to realistically calculate three-dimensional flow.

  12. Impact of Teachers' Planned Questions on Opportunities for Students to Reason Mathematically in Whole-Class Discussions around Mathematical Problem-Solving Tasks

    ERIC Educational Resources Information Center

    Enoch, Sarah Elizabeth

    2013-01-01

    While professional developers have been encouraging teachers to plan for discourse around problem solving tasks as a way to orchestrate mathematically productive discourse (Stein, Engle, Smith, & Hughes, 2008; Stein, Smith, Henningsen, & Silver, 2009) no research has been conducted explicitly examining the relationship between the plans…

  13. An improved risk-explicit interval linear programming model for pollution load allocation for watershed management.

    PubMed

    Xia, Bisheng; Qian, Xin; Yao, Hong

    2017-11-01

    Although the risk-explicit interval linear programming (REILP) model has solved the problem of having interval solutions, it has an equity problem, which can lead to unbalanced allocation between different decision variables. Therefore, an improved REILP model is proposed. This model adds an equity objective function and three constraint conditions to overcome this equity problem. In this case, pollution reduction is in proportion to pollutant load, which supports balanced development between different regional economies. The model is used to solve the problem of pollution load allocation in a small transboundary watershed. Compared with the original REILP model result, our model achieves equity between the upstream and downstream pollutant loads; it also avoids assigning the greatest pollution reduction to the sources nearest the control section. The model provides a better solution to the problem of pollution load allocation than previous versions.

  14. Some new results on the central overlap problem in astrometry

    NASA Astrophysics Data System (ADS)

    Rapaport, M.

    1998-07-01

    The central overlap problem in astrometry was revisited in recent years by Eichhorn (1988), who explicitly inverted the matrix of a constrained least squares problem. In this paper, the general explicit solution of the unconstrained central overlap problem is given. We also give the explicit solution for another set of constraints; this result confirms a conjecture expressed by Eichhorn (1988). We also consider the use of iterative methods to solve the central overlap problem. A surprising result is obtained when the classical Gauss-Seidel method is used: the iterations converge immediately to the general solution of the equations. We explain this property by writing the central overlap problem in a new set of variables.

  15. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen as based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
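
The explicit iterative idea referred to here can be illustrated by the classical Chebyshev iteration for A x = b with symmetric positive definite A: given bounds on the spectrum, the iteration parameters come from Chebyshev polynomials and no inner solves are needed. A minimal sketch following the standard recurrence (not the specific scheme of the paper):

```python
# Chebyshev (explicit) iteration for A x = b, with A symmetric positive
# definite and given bounds lo <= lambda_min(A), hi >= lambda_max(A).
# Only matrix-vector products are required; the step parameters are the
# optimal ones derived from Chebyshev polynomials on [lo, hi].

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def chebyshev_solve(A, b, lo, hi, iters=50):
    theta = 0.5 * (hi + lo)            # center of the spectrum interval
    delta = 0.5 * (hi - lo)            # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    x = [0.0] * len(b)
    r = b[:]                           # residual b - A x for x = 0
    d = [ri / theta for ri in r]
    for _ in range(iters):
        x = [xi + di for xi, di in zip(x, d)]
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = [rho_new * rho * di + 2.0 * rho_new / delta * ri
             for di, ri in zip(d, r)]
        rho = rho_new
    return x

if __name__ == "__main__":
    A = [[2.0, -1.0], [-1.0, 2.0]]     # eigenvalues 1 and 3
    print(chebyshev_solve(A, [1.0, 1.0], 1.0, 3.0))   # exact solution [1, 1]
```

As the abstract notes, the number of steps and the parameters can be fixed in advance from the spectral bounds, which is what makes the scheme attractive as an explicit smoother or stand-alone solver.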

  16. Reduction of the two dimensional stationary Navier-Stokes problem to a sequence of Fredholm integral equations of the second kind

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.

    1981-01-01

    Present approaches to solving the stationary Navier-Stokes equations are of limited value; however, there does exist an equivalent representation of the problem that has significant potential in solving such problems. This is due to the fact that the equivalent representation consists of a sequence of Fredholm integral equations of the second kind, and methods for solving this type of problem are well developed. For the problem in this form, there is also an excellent chance of determining explicit error estimates, since bounded, rather than unbounded, linear operators are dealt with.
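
Why second-kind Fredholm equations count as well-developed territory can be seen from a Nystrom-type discretization: a quadrature rule turns u(x) = f(x) + integral of K(x,t) u(t) dt into a finite system, solvable here by fixed-point iteration because the example kernel is a contraction. An illustrative sketch with a manufactured solution (not the Navier-Stokes reduction itself):

```python
# Nystrom-style discretization of a Fredholm equation of the second kind,
#   u(x) = f(x) + \int_0^1 K(x, t) u(t) dt,
# using the trapezoidal rule and fixed-point iteration (valid here because
# the example kernel has norm < 1, so the map is a contraction).

def solve_fredholm2(K, f, n=100, sweeps=60):
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    w = [h] * (n + 1)
    w[0] = w[-1] = 0.5 * h                 # trapezoidal quadrature weights
    u = [f(x) for x in xs]                 # initial guess u = f
    for _ in range(sweeps):
        u = [f(x) + sum(wj * K(x, t) * uj for wj, t, uj in zip(w, xs, u))
             for x in xs]
    return xs, u

if __name__ == "__main__":
    # Manufactured problem: K(x,t) = x*t and f(x) = (2/3)x give exact u(x) = x.
    xs, u = solve_fredholm2(lambda x, t: x * t, lambda x: 2.0 * x / 3.0)
    print(u[-1])                           # u(1), exact value 1
```

The boundedness of the integral operator is exactly what makes the discrete system well conditioned and the error of the quadrature rule carry over directly to the computed solution.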

  17. Supporting Organizational Problem Solving with a Workstation.

    DTIC Science & Technology

    1982-07-01

    …and Sussman, G. J., "AMORD: Explicit Control of Reasoning," in Proceedings of the Symposium on Artificial Intelligence and Programming Languages. Performing organization: Artificial Intelligence Laboratory. … By extending ideas from the field of Artificial Intelligence (AI), we describe office work as a problem-solving activity. A knowledge embedding language called…

  18. Problem-solving rubrics revisited: Attending to the blending of informal conceptual and formal mathematical reasoning

    NASA Astrophysics Data System (ADS)

    Hull, Michael M.; Kuo, Eric; Gupta, Ayush; Elby, Andrew

    2013-06-01

    Much research in engineering and physics education has focused on improving students’ problem-solving skills. This research has led to the development of step-by-step problem-solving strategies and grading rubrics to assess a student’s expertise in solving problems using these strategies. These rubrics value “communication” between the student’s qualitative description of the physical situation and the student’s formal mathematical descriptions (usually equations) at two points: when initially setting up the equations, and when evaluating the final mathematical answer for meaning and plausibility. We argue that (i) neither the rubrics nor the associated problem-solving strategies explicitly value this kind of communication during mathematical manipulations of the chosen equations, and (ii) such communication is an aspect of problem-solving expertise. To make this argument, we present a case study of two students, Alex and Pat, solving the same kinematics problem in clinical interviews. We argue that Pat’s solution, which connects manipulation of equations to their physical interpretation, is more expertlike than Alex’s solution, which uses equations more algorithmically. We then show that the types of problem-solving rubrics currently available do not discriminate between these two types of solutions. We conclude that problem-solving rubrics should be revised or repurposed to more accurately assess problem-solving expertise.

  19. Sleep Does Not Promote Solving Classical Insight Problems and Magic Tricks

    PubMed Central

    Schönauer, Monika; Brodt, Svenja; Pöhlchen, Dorothee; Breßmer, Anja; Danek, Amory H.; Gais, Steffen

    2018-01-01

    During creative problem solving, initial solution attempts often fail because of self-imposed constraints that prevent us from thinking out of the box. In order to solve a problem successfully, the problem representation has to be restructured by combining elements of available knowledge in novel and creative ways. It has been suggested that sleep supports the reorganization of memory representations, ultimately aiding problem solving. In this study, we systematically tested the effect of sleep and time on problem solving, using classical insight tasks and magic tricks. Solving these tasks explicitly requires a restructuring of the problem representation and may be accompanied by a subjective feeling of insight. In two sessions, 77 participants had to solve classical insight problems and magic tricks. The two sessions either occurred consecutively or were spaced 3 h apart, with the time in between spent either sleeping or awake. We found that sleep affected neither general solution rates nor the number of solutions accompanied by sudden subjective insight. Our study thus adds to accumulating evidence that sleep does not provide an environment that facilitates the qualitative restructuring of memory representations and enables problem solving. PMID:29535620

  20. Conjugate gradient based projection - A new explicit methodology for frictional contact

    NASA Technical Reports Server (NTRS)

    Tamma, Kumar K.; Li, Maocheng; Sha, Desong

    1993-01-01

    With special attention towards applicability to parallel computation or vectorization, a new and effective explicit approach for linear complementarity formulations involving a conjugate gradient based projection methodology is proposed in this study for contact problems with Coulomb friction. The overall objective is to provide an explicit methodology of computation for the complete contact problem with friction. The primary idea for solving the linear complementarity formulations stems from an established search direction which is projected onto a feasible region determined by the non-negativity constraint condition; this direction is then applied in the Fletcher-Reeves conjugate gradient method, resulting in a powerful explicit methodology which possesses high accuracy, excellent convergence characteristics, and fast computational speed, and is relatively simple to implement for contact problems involving Coulomb friction.

  1. Analysing student written solutions to investigate if problem-solving processes are evident throughout

    NASA Astrophysics Data System (ADS)

    Kelly, Regina; McLoughlin, Eilish; Finlayson, Odilla E.

    2016-07-01

    An interdisciplinary science course has been implemented at a university with the intention of providing students the opportunity to develop a range of key skills in relation to: real-world connections of science, problem-solving, information and communications technology use, and teamwork, while linking subject knowledge in each of the science disciplines. One of the problems used in this interdisciplinary course has been selected to evaluate whether it affords students the opportunity to explicitly display problem-solving processes. While the benefits of implementing problem-based learning have been well reported, far less research has been devoted to methods of assessing student problem-solving solutions. A problem-solving theoretical framework was used as a tool to assess student written solutions to indicate whether problem-solving processes were present. In two academic years, student problem-solving processes were satisfactory for exploring and understanding, representing and formulating, and planning and executing, indicating that student collaboration on problems is a good initiator of developing these processes. In both academic years, students displayed poor monitoring and reflecting (MR) processes at the intermediate level. A key impact of evaluating student work in this way is that it facilitated meaningful feedback about the students' problem-solving process rather than solely assessing the correctness of problem solutions.

  2. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation

    NASA Technical Reports Server (NTRS)

    Layton, Charles; Smith, Philip J.; Mc Coy, C. Elaine

    1994-01-01

    Both optimization techniques and expert systems technologies are popular approaches for developing tools to assist in complex problem-solving tasks. Because of the underlying complexity of many such tasks, however, the models of the world implicitly or explicitly embedded in such tools are often incomplete and the problem-solving methods fallible. The result can be 'brittleness' in situations that were not anticipated by the system designers. To deal with this weakness, it has been suggested that 'cooperative' rather than 'automated' problem-solving systems be designed. Such cooperative systems are proposed to explicitly enhance the collaboration of the person (or a group of people) and the computer system. This study evaluates the impact of alternative design concepts on the performance of 30 airline pilots interacting with such a cooperative system designed to support en-route flight planning. The results clearly demonstrate that different system design concepts can strongly influence the cognitive processes and resultant performances of users. Based on think-aloud protocols, cognitive models are proposed to account for how features of the computer system interacted with specific types of scenarios to influence exploration and decision making by the pilots. The results are then used to develop recommendations for guiding the design of cooperative systems.

  3. Design of a cooperative problem-solving system for en-route flight planning: An empirical evaluation

    NASA Technical Reports Server (NTRS)

    Layton, Charles; Smith, Philip J.; McCoy, C. Elaine

    1994-01-01

    Both optimization techniques and expert systems technologies are popular approaches for developing tools to assist in complex problem-solving tasks. Because of the underlying complexity of many such tasks, however, the models of the world implicitly or explicitly embedded in such tools are often incomplete and the problem-solving methods fallible. The result can be 'brittleness' in situations that were not anticipated by the system designers. To deal with this weakness, it has been suggested that 'cooperative' rather than 'automated' problem-solving systems be designed. Such cooperative systems are proposed to explicitly enhance the collaboration of the person (or a group of people) and the computer system. This study evaluates the impact of alternative design concepts on the performance of 30 airline pilots interacting with such a cooperative system designed to support enroute flight planning. The results clearly demonstrate that different system design concepts can strongly influence the cognitive processes and resultant performances of users. Based on think-aloud protocols, cognitive models are proposed to account for how features of the computer system interacted with specific types of scenarios to influence exploration and decision making by the pilots. The results are then used to develop recommendations for guiding the design of cooperative systems.

  4. Solving Set Cover with Pairs Problem using Quantum Annealing

    NASA Astrophysics Data System (ADS)

    Cao, Yudong; Jiang, Shuxian; Perouli, Debbie; Kais, Sabre

    2016-09-01

    Here we consider using quantum annealing to solve Set Cover with Pairs (SCP), an NP-hard combinatorial optimization problem that plays an important role in networking, computational biology, and biochemistry. We show an explicit construction of Ising Hamiltonians whose ground states encode the solution of SCP instances. We numerically simulate the time-dependent Schrödinger equation in order to test the performance of quantum annealing for random instances and compare with that of simulated annealing. We also discuss explicit embedding strategies for realizing our Hamiltonian construction on the D-Wave type restricted Ising Hamiltonian based on Chimera graphs. Our embedding on the Chimera graph preserves the structure of the original SCP instance and, in particular, the embedding for general complete bipartite graphs and logical disjunctions may be of broader use than the specific problem we deal with.
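
    The paper's Ising construction is specific to SCP, but the classical simulated-annealing baseline it compares against can be sketched generically. The following toy Metropolis loop (function names and parameter values are our own, not the authors') minimizes an Ising energy H(s) = -Σ J_ij s_i s_j - Σ h_i s_i by single spin flips under a geometric cooling schedule:

```python
import math
import random

def ising_energy(s, J, h):
    """Ising energy H(s) = -sum_ij J_ij s_i s_j - sum_i h_i s_i, spins s_i in {-1, +1}."""
    E = -sum(h[i] * s[i] for i in range(len(s)))
    E -= sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return E

def simulated_annealing(J, h, n, steps=5000, T0=2.0, T1=0.01, seed=0):
    """Single-spin-flip Metropolis annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    E = ising_energy(s, J, h)
    for k in range(steps):
        T = T0 * (T1 / T0) ** (k / steps)   # geometric cooling from T0 down to T1
        i = rng.randrange(n)
        s[i] = -s[i]                        # propose flipping one spin
        E_new = ising_energy(s, J, h)
        if E_new <= E or rng.random() < math.exp((E - E_new) / T):
            E = E_new                       # accept the move
        else:
            s[i] = -s[i]                    # reject: undo the flip
    return s, E
```

    On a small ferromagnetic chain this reliably finds a low-energy aligned state; the paper's point is that the same kind of energy function, encoded in hardware couplings, can instead be minimized by quantum annealing.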

  5. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  6. The Labeling Strategy: Moving beyond Order in Counting Problems

    ERIC Educational Resources Information Center

    CadwalladerOlsker, Todd

    2013-01-01

    Permutations and combinations are used to solve certain kinds of counting problems, but many students have trouble distinguishing which of these concepts applies to a given problem. An "order heuristic" is usually used to distinguish the two, but this heuristic can cause confusion when problems do not explicitly mention order. This…

  7. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  8. Effect of Tutorial Giving on The Topic of Special Theory of Relativity in Modern Physics Course Towards Students’ Problem-Solving Ability

    NASA Astrophysics Data System (ADS)

    Hartatiek; Yudyanto; Haryoto, Dwi

    2017-05-01

    A Special Theory of Relativity handbook has been successfully developed to guide students' tutorial activity in the Modern Physics course. Students' low problem-solving ability was addressed by giving tutorials in addition to the lecture class, because the limited class time during the course left little opportunity for students to practice exercises that develop problem-solving ability. The explicit problem-solving based tutorial handbook was written by emphasizing these five problem-solving strategies: (1) focus on the problem, (2) picture the physical facts, (3) plan the solution, (4) solve the problem, and (5) check the result. This research and development (R&D) study consisted of three main steps: (1) preliminary study, (2) development of the draft product, and (3) product validation. The developed draft product was validated by experts to measure the feasibility of the material and predict the effect of the tutorial, by means of questionnaires with a scale of 1 to 4. The students' problem-solving ability in the Special Theory of Relativity showed very good qualification, implying that the tutorial, with the help of the tutorial handbook, increased students' problem-solving ability. The empirical test revealed that the developed handbook significantly improved students' concept mastery and problem-solving ability; both were in the middle category, with gains of 0.31 and 0.41, respectively.

  9. On a comparison of two schemes in sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Grishina, Anastasiia A.; Penenko, Alexey V.

    2017-11-01

    This paper is focused on variational data assimilation as an approach to mathematical modeling. Realization of the approach requires solving a sequence of connected inverse problems with different sets of observational data. Two variational data assimilation schemes, "implicit" and "explicit", are considered in the article. Their equivalence is shown, and numerical results are given on the basis of the non-linear Robertson system. To avoid the "inverse problem crime", different schemes were used to produce the synthetic measurements and to solve the data assimilation problem.

  10. Do students benefit from drawing productive diagrams themselves while solving introductory physics problems? The case of two electrostatics problems

    NASA Astrophysics Data System (ADS)

    Maries, Alexandru; Singh, Chandralekha

    2018-01-01

    An appropriate diagram is a required element of a solution building process in physics problem solving and it can transform a given problem into a representation that is easier to exploit for solving the problem. A major focus while helping introductory physics students learn problem solving is to help them appreciate that drawing diagrams facilitates problem solving. We conducted an investigation in which two different interventions were implemented during recitation quizzes throughout the semester in a large enrolment, algebra-based introductory physics course. Students were either (1) asked to solve problems in which the diagrams were drawn for them or (2) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed a rubric to score the problem solving performance of students in different intervention groups. We investigated two problems involving electric field and electric force and found that students who drew productive diagrams were more successful problem solvers and that a higher level of relevant detail in a student’s diagram corresponded to a better score. We also conducted think-aloud interviews with nine students who were at the time taking an equivalent introductory algebra-based physics course in order to gain insight into how drawing diagrams affects the problem solving process. These interviews supported some of the interpretations of the quantitative results. We end by discussing instructional implications of the findings.

  11. Discrete Time McKean–Vlasov Control Problem: A Dynamic Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, Huyên, E-mail: pham@math.univ-paris-diderot.fr; Wei, Xiaoli, E-mail: tyswxl@gmail.com

    We consider the stochastic optimal control problem of nonlinear mean-field systems in discrete time. We reformulate the problem into a deterministic control problem with the marginal distribution as the controlled state variable, and prove that the dynamic programming principle holds in its general form. We apply our method to solve explicitly the mean-variance portfolio selection problem and the multivariate linear-quadratic McKean–Vlasov control problem.

  12. Variational estimate method for solving autonomous ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2018-04-01

    In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier which is chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate is an integral form, which can be computed using computer software. As the variational estimate is an explicit formula, the solution is easy to compute, which is a great advantage of this formulation.

  13. Pre-Service Teacher Scientific Behavior: Comparative Study of Paired Science Project Assignments

    ERIC Educational Resources Information Center

    Bulunuz, Mizrap; Tapan Broutin, Menekse Seden; Bulunuz, Nermin

    2016-01-01

    Problem Statement: University students usually lack the skills to rigorously define a multi-dimensional real-life problem and its limitations in an explicit, clear and testable way, which prevents them from forming a reliable method, obtaining relevant results and making balanced judgments to solve a problem. Purpose of the Study: The study…

  14. A Research Methodology for Studying What Makes Some Problems Difficult to Solve

    ERIC Educational Resources Information Center

    Gulacar, Ozcan; Fynewever, Herb

    2010-01-01

    We present a quantitative model for predicting the level of difficulty subjects will experience with specific problems. The model explicitly accounts for the number of subproblems a problem can be broken into and the difficulty of each subproblem. Although the model builds on previously published models, it is uniquely suited for blending with…

  15. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  16. Solving delay differential equations in S-ADAPT by method of steps.

    PubMed

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps, which allows one to solve virtually any DDE system by transforming it into an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT-generated solutions for DDE problems agreed with both the explicit solutions and the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates from using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. Published by Elsevier Ireland Ltd.
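
    The method of steps can be illustrated with a short sketch (ours, not S-ADAPT code): on each delay interval the lagged term x(t - τ) is already known from the previous interval, so the DDE reduces to a plain ODE that a standard solver can integrate. The test equation x'(t) = -x(t - 1) with constant history x ≡ 1 is our own choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of steps for x'(t) = -x(t - tau), with history x(t) = 1 for t <= 0.
# On each interval [k*tau, (k+1)*tau] the delayed term is known from the
# previous interval, so the DDE reduces to an ordinary ODE.
tau = 1.0
n_intervals = 4
ts, xs = [np.array([0.0])], [np.array([1.0])]

for k in range(n_intervals):
    t0, t1 = k * tau, (k + 1) * tau
    if k == 0:
        prev = lambda t: 1.0  # constant history on t <= 0
    else:
        # interpolate the numerical solution from the previous interval
        prev = lambda t, tk=ts[-1], xk=xs[-1]: float(np.interp(t, tk, xk))
    sol = solve_ivp(lambda t, x: [-prev(t - tau)], (t0, t1),
                    [xs[-1][-1]], max_step=0.01)
    ts.append(sol.t)
    xs.append(sol.y[0])

t_all, x_all = np.concatenate(ts), np.concatenate(xs)
```

    For this equation the exact solution is x = 1 - t on [0, 1] and x = t²/2 - 2t + 3/2 on [1, 2], so x(1) = 0 and x(2) = -0.5, which the stepwise solution reproduces.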

  17. Explicit integration with GPU acceleration for large kinetic networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, Benjamin; Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37830; Belt, Andrew

    2015-12-01

    We demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
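
    The stiffness obstacle alluded to here is easy to demonstrate (this toy is not the paper's stabilized explicit algorithm): forward Euler on y' = -λy is stable only when hλ < 2, which is why stiff networks have traditionally forced implicit solvers.

```python
def explicit_euler(lam, h, steps, y0=1.0):
    """Forward Euler for y' = -lam*y: stable only if h*lam < 2."""
    y = y0
    for _ in range(steps):
        y = y + h * (-lam * y)
    return y

def implicit_euler(lam, h, steps, y0=1.0):
    """Backward Euler for y' = -lam*y: unconditionally stable."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + h * lam)
    return y
```

    With λ = 1000 and h = 0.01 (hλ = 10), forward Euler multiplies the solution by -9 each step and explodes, while backward Euler decays monotonically; shrinking h below 2/λ restores explicit stability at the cost of many more steps.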

  18. Control problem for a system of linear loaded differential equations

    NASA Astrophysics Data System (ADS)

    Barseghyan, V. R.; Barseghyan, T. V.

    2018-04-01

    The problem of control and optimal control for a system of linear loaded differential equations is considered. Necessary and sufficient conditions for complete controllability and conditions for the existence of a program control and the corresponding motion are formulated. The explicit form of control action for the control problem is constructed and a method for solving the problem of optimal control is proposed.

  19. Effect of Goal Setting on the Strategies Used to Solve a Block Design Task

    ERIC Educational Resources Information Center

    Rozencwajg, Paulette; Fenouillet, Fabien

    2012-01-01

    In this experiment we studied the effect of goal setting on the strategies used to perform a block design task called SAMUEL. SAMUEL can measure many indicators, which are then combined to determine the strategies used by participants when solving SAMUEL problems. Two experimental groups were created: one group was given an explicit, difficult…

  20. FAST TRACK COMMUNICATION Solving the ultradiscrete KdV equation

    NASA Astrophysics Data System (ADS)

    Willox, Ralph; Nakata, Yoichi; Satsuma, Junkichi; Ramani, Alfred; Grammaticos, Basile

    2010-12-01

    We show that a generalized cellular automaton, exhibiting solitonic interactions, can be explicitly solved by means of techniques first introduced in the context of the scattering problem for the KdV equation. We apply this method to calculate the phase-shifts caused by interactions between the solitonic and non-solitonic parts into which arbitrary initial states separate in time.
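
    The cellular automata in question are closely related to the box-ball system, the standard soliton cellular automaton obtained by ultradiscretizing KdV. A minimal sketch of one time step (assuming enough empty boxes on the right):

```python
def bbs_step(state):
    """One time step of the box-ball system: each ball moves exactly once,
    leftmost first, to the nearest empty box on its right.
    `state` must have enough trailing zeros for every ball to land."""
    state = list(state)
    moved = [False] * len(state)
    for i in range(len(state)):
        if state[i] == 1 and not moved[i]:
            state[i] = 0
            j = i + 1
            while state[j] == 1:   # skip occupied boxes
                j += 1
            state[j] = 1
            moved[j] = True        # a ball moves at most once per step
    return state
```

    A block of k adjacent balls travels k boxes per step, so larger solitons overtake smaller ones and re-emerge intact after the interaction, shifted in phase, mirroring the KdV soliton phase shifts computed in the paper.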

  1. Teaching High School Students with Learning Disabilities to Use Model Drawing Strategy to Solve Fraction and Percentage Word Problems

    ERIC Educational Resources Information Center

    Dennis, Minyi Shih; Knight, Jacqueline; Jerman, Olga

    2016-01-01

    This article describes how to teach fraction and percentage word problems using a model-drawing strategy. This cognitive strategy places emphasis on explicitly teaching students how to draw a schematic diagram to represent the qualitative relations described in the problem, and how to formulate the solution based on the schematic diagram. The…

  2. Computer-Aided Group Problem Solving for Unified Life Cycle Engineering (ULCE)

    DTIC Science & Technology

    1989-02-01

    defining the problem, generating alternative solutions, evaluating alternatives, selecting alternatives, and implementing the solution. Systems… a specialist in group dynamics assists the group in formulating the problem and selecting a model framework. The analyst provides the group with computer… allocating resources, evaluating and selecting options, making judgments explicit, and analyzing dynamic systems. c. University of Rhode Island: Drs. Geoffery…

  3. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing

    PubMed Central

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-01-01

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination. PMID:29565313
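
    The paper's formulation jointly optimizes recommendation, caching, and delivery; as a much-simplified illustration of the caching sub-problem alone, here is a knapsack-style greedy heuristic that ranks contents by expected QoE gain per byte (all names and numbers are hypothetical, not from the paper):

```python
def greedy_cache(contents, capacity):
    """Toy heuristic: fill a base-station cache by expected QoE gain per byte.
    contents: list of (name, size_bytes, expected_qoe) tuples."""
    ranked = sorted(contents, key=lambda c: c[2] / c[1], reverse=True)
    cache, used, qoe = [], 0, 0.0
    for name, size, gain in ranked:
        if used + size <= capacity:   # admit while the cache has room
            cache.append(name)
            used += size
            qoe += gain
    return cache, qoe
```

    Such density-ordered greedy rules are a common baseline for cache-placement problems; the paper's optimal and heuristic algorithms additionally account for cross-layer information and user mobility.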

  4. Explicit Content Caching at Mobile Edge Networks with Cross-Layer Sensing.

    PubMed

    Chen, Lingyu; Su, Youxing; Luo, Wenbin; Hong, Xuemin; Shi, Jianghong

    2018-03-22

    The deployment density and computational power of small base stations (BSs) are expected to increase significantly in the next generation mobile communication networks. These BSs form the mobile edge network, which is a pervasive and distributed infrastructure that can empower a variety of edge/fog computing applications. This paper proposes a novel edge-computing application called explicit caching, which stores selective contents at BSs and exposes such contents to local users for interactive browsing and download. We formulate the explicit caching problem as a joint content recommendation, caching, and delivery problem, which aims to maximize the expected user quality-of-experience (QoE) with varying degrees of cross-layer sensing capability. Optimal and effective heuristic algorithms are presented to solve the problem. The theoretical performance bounds of the explicit caching system are derived in simplified scenarios. The impacts of cache storage space, BS backhaul capacity, cross-layer information, and user mobility on the system performance are simulated and discussed in realistic scenarios. Results suggest that, compared with conventional implicit caching schemes, explicit caching can better exploit the mobile edge network infrastructure for personalized content dissemination.

  5. Case of two electrostatics problems: Can providing a diagram adversely impact introductory physics students' problem solving performance?

    NASA Astrophysics Data System (ADS)

    Maries, Alexandru; Singh, Chandralekha

    2018-06-01

    Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an investigation in which two different interventions were implemented during recitation quizzes in a large enrollment algebra-based introductory physics course. Students were either (i) asked to solve problems in which the diagrams were drawn for them or (ii) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed rubrics to score the problem solving performance of students in different intervention groups and investigated ten problems. We found that students who were provided diagrams never performed better and actually performed worse than the other students on three problems, one involving standing sound waves in a tube (discussed elsewhere) and two problems in electricity which we focus on here. These two problems were the only problems in electricity that involved considerations of initial and final conditions, which may partly account for why students provided with diagrams performed significantly worse than students who were not provided with diagrams. In order to explore potential reasons for this finding, we conducted interviews with students and found that some students provided with diagrams may have spent less time on the conceptual analysis and planning stage of the problem solving process. In particular, those provided with the diagram were more likely to jump into the implementation stage of problem solving early without fully analyzing and understanding the problem, which can increase the likelihood of mistakes in solutions.

  6. Can History Succeed at School? Problems of Knowledge in the Australian History Curriculum

    ERIC Educational Resources Information Center

    Gilbert, Rob

    2011-01-01

    Successful curriculum development in any school subject requires a clear and established set of elements: agreed and widely appreciated goals; effective criteria for the selection of important knowledge content; and an explicit and well-integrated explanatory base for authentic problem-solving related to the subject goals. The article shows that…

  7. Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm

    NASA Technical Reports Server (NTRS)

    LeTallec, Patrick; Tidriri, Moulay D.

    1996-01-01

    In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.

  8. Exploring Essential Conditions: A Commentary on Bull et al. (2008)

    ERIC Educational Resources Information Center

    Borthwick, Arlene; Hansen, Randall; Gray, Lucy; Ziemann, Irina

    2008-01-01

    The editorial by Bull et al. (2008) on connections between informal and formal learning made explicit one element of solving what Koehler and Mishra (2008) termed a "wicked problem." This wicked (complex, ill-structured) problem involves working with teachers for effective integration of technology in support of student learning. The…

  9. Preliminary study of the use of the STAR-100 computer for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Keller, J. D.; Jameson, A.

    1977-01-01

    An explicit method for solving the transonic small-disturbance potential equation is presented. This algorithm, which is suitable for the new vector-processor computers such as the CDC STAR-100, is compared to successive line over-relaxation (SLOR) on a simple test problem. The convergence rate of the explicit scheme is slower than that of SLOR; however, the efficiency of the explicit scheme on the STAR-100 computer is sufficient to overcome the slower convergence rate and allow an overall speedup compared to SLOR on the CYBER 175 computer.
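
    As a hedged illustration only (a generic 1D Laplace analogue, not the paper's transonic small-disturbance solver), the trade-off between the two schemes can be sketched in Python: an explicit Jacobi-style sweep updates every point from old values, so it vectorizes the way machines like the STAR-100 favor, while over-relaxation converges in fewer sweeps but updates points sequentially.

    ```python
    import numpy as np

    def jacobi_steps(n=64, tol=1e-6, max_it=100000):
        # Explicit (Jacobi) relaxation of u'' = 0 with u(0)=0, u(1)=1.
        # Every point updates from old values, so each sweep vectorizes.
        u = np.zeros(n); u[-1] = 1.0
        for it in range(1, max_it + 1):
            new = u.copy()
            new[1:-1] = 0.5 * (u[:-2] + u[2:])
            if np.max(np.abs(new - u)) < tol:
                return it
            u = new
        return max_it

    def sor_steps(n=64, omega=1.9, tol=1e-6, max_it=100000):
        # Successive over-relaxation: a sequential in-place sweep that
        # converges in far fewer iterations but resists vectorization.
        u = np.zeros(n); u[-1] = 1.0
        for it in range(1, max_it + 1):
            diff = 0.0
            for i in range(1, n - 1):
                gs = 0.5 * (u[i - 1] + u[i + 1])
                du = omega * (gs - u[i])
                u[i] += du
                diff = max(diff, abs(du))
            if diff < tol:
                return it
        return max_it

    print(jacobi_steps() > sor_steps())  # → True: SOR needs fewer sweeps
    ```

    Fewer sweeps do not automatically mean less wall-clock time: if vector hardware makes each explicit sweep cheap enough, the explicit scheme can still win overall, which is the effect reported in the abstract.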

  10. Some observations on a new numerical method for solving Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1981-01-01

    An explicit-implicit technique for solving the Navier-Stokes equations is described which is much less complex than other implicit methods. It is used to solve a complex, two-dimensional, steady-state, supersonic-flow problem. The computational efficiency of the method and the quality of the solution obtained from it at high Courant-Friedrichs-Lewy (CFL) numbers are discussed. Modifications are discussed and certain observations are made about the method which may be helpful in using it successfully.

  11. The testing effect and analogical problem-solving.

    PubMed

    Peterson, Daniel J; Wissman, Kathryn T

    2018-06-25

    Researchers generally agree that retrieval practice of previously learned material facilitates subsequent recall of the same material, a phenomenon known as the testing effect. There is debate, however, about when such benefits transfer to related (though not identical) material. The current study examines the phenomenon of transfer in the domain of analogical problem-solving. In Experiments 1 and 2, learners read a source text describing a problem and its solution, which was subsequently either restudied or recalled. Following a short (Experiment 1) or long (Experiment 2) delay, learners were given a new target text and asked to solve a problem. The two texts shared a common structure such that the provided solution for the source text could be applied to solve the problem in the target text. In a combined analysis of both experiments, learners in the retrieval practice condition were more successful at solving the problem than those in the restudy condition. Experiment 3 explored the degree to which retrieval practice promotes cued versus spontaneous transfer by manipulating whether participants were provided with an explicit hint that the source and target texts were related. Results revealed no effect of retrieval practice.

  12. The Efficacy and Development of Students' Problem-Solving Strategies During Compulsory Schooling: Logfile Analyses

    PubMed Central

    Molnár, Gyöngyvér; Csapó, Benő

    2018-01-01

    The purpose of this study was to examine the role of exploration strategies students used in the first phase of problem solving. The sample for the study was drawn from 3rd- to 12th-grade students (aged 9–18) in Hungarian schools (n = 4,371). Problems designed in the MicroDYN approach with different levels of complexity were administered to the students via the eDia online platform. Logfile analyses were performed to ascertain the impact of strategy use on the efficacy of problem solving. Students' exploration behavior was coded and clustered through Latent Class Analyses. Several theoretically effective strategies were identified, including the vary-one-thing-at-a-time (VOTAT) strategy and its sub-strategies. The results of the analyses indicate that the use of a theoretically effective strategy, which extracts all the information required to solve the problem, did not always lead to high performance. Conscious VOTAT strategy users proved to be the best problem solvers, followed by non-conscious VOTAT strategy users and non-VOTAT strategy users. In the primary school sub-sample, six qualitatively different strategy class profiles were distinguished. The results shed new light on and provide a new interpretation of previous analyses of the processes involved in complex problem solving. They also highlight the importance of explicit enhancement of problem-solving skills and problem-solving strategies as a tool for knowledge acquisition in new contexts during and beyond school lessons. PMID:29593606
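
    As an illustrative sketch only (a hypothetical linear toy system, far simpler than the MicroDYN tasks used in the study), the VOTAT idea can be made concrete: activating one input variable at a time lets every observed output change be attributed to that single variable.

    ```python
    import numpy as np

    # Hypothetical MicroDYN-style linear system: outputs respond to inputs
    # through a weight matrix W that is unknown to the "student".
    rng = np.random.default_rng(0)
    W = rng.integers(-2, 3, size=(3, 3)).astype(float)

    def system_response(x):
        return W @ x

    # VOTAT exploration: probe with one input active at a time, so each
    # output change can be attributed to a single input variable.
    recovered = np.zeros_like(W)
    baseline = system_response(np.zeros(3))
    for j in range(3):
        probe = np.zeros(3)
        probe[j] = 1.0          # vary one thing at a time
        recovered[:, j] = system_response(probe) - baseline

    print(np.allclose(recovered, W))  # → True
    ```

    Probing all inputs at once would confound the effects; the sketch shows why VOTAT is "theoretically effective" in the sense the abstract uses: it extracts all the information needed to identify the system.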

  14. Explicit integration with GPU acceleration for large kinetic networks

    DOE PAGES

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; ...

    2015-09-15

    In this study, we demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. In addition, this orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.

  15. Dual methods and approximation concepts in structural synthesis

    NASA Technical Reports Server (NTRS)

    Fleury, C.; Schmit, L. A., Jr.

    1980-01-01

    Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.

  16. Solution Methods for Certain Evolution Equations

    NASA Astrophysics Data System (ADS)

    Vega-Guzman, Jose Manuel

    Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which remain challenging to solve despite the numerical and symbolic software currently available. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations yield solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows this problem to be solved formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model, and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for certain inhomogeneous Burgers-type equations. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution.
It is shown that the product of the variances attains the required minimum value only at the instants when one variance is a minimum and the other is a maximum, that is, when squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.

  17. Block clustering based on difference of convex functions (DC) programming and DC algorithms.

    PubMed

    Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai

    2013-10-01

    We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.

  18. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    NASA Astrophysics Data System (ADS)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.

  19. Prompting Children to Reason Proportionally: Processing Discrete Units as Continuous Amounts

    ERIC Educational Resources Information Center

    Boyer, Ty W.; Levine, Susan C.

    2015-01-01

    Recent studies reveal that children can solve proportional reasoning problems presented with continuous amounts that enable intuitive strategies by around 6 years of age but have difficulties with problems presented with discrete units that tend to elicit explicit count-and-match strategies until at least 10 years of age. The current study tests…

  20. Sierra/Solid Mechanics 4.48 User's Guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose

    Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic, and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments, enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.

  1. Overcoming Geometry-Induced Stiffness with IMplicit-Explicit (IMEX) Runge-Kutta Algorithms on Unstructured Grids with Applications to CEM, CFD, and CAA

    NASA Technical Reports Server (NTRS)

    Kanevsky, Alex

    2004-01-01

    My goal is to develop and implement efficient, accurate, and robust Implicit-Explicit Runge-Kutta (IMEX RK) methods [9] for overcoming geometry-induced stiffness, with applications to computational electromagnetics (CEM), computational fluid dynamics (CFD), and computational aeroacoustics (CAA). IMEX algorithms solve the non-stiff portions of the domain using explicit methods, and isolate and solve the more expensive stiff portions using implicit methods. Current algorithms in CEM can only simulate purely harmonic (up to 10 GHz plane wave) EM scattering by fighter aircraft, which are assumed to be pure metallic shells, and cannot handle the inclusion of coatings, penetration into, and radiation out of the aircraft. Efficient IMEX RK methods could potentially increase current CEM capabilities by 1-2 orders of magnitude, allowing scientists and engineers to attack more challenging and realistic problems.

  2. A localized model of spatial cognition in chemistry

    NASA Astrophysics Data System (ADS)

    Stieff, Mike

    This dissertation challenges the assumption that spatial cognition, particularly visualization, is the key component of problem solving in chemistry. In contrast to this assumption, I posit a localized, or task-specific, model of spatial cognition in chemistry problem solving to locate the exact tasks in a traditional organic chemistry curriculum that require students to use visualization strategies to solve problems. Instead of assuming that visualization is required for most chemistry tasks simply because chemistry concerns invisible three-dimensional entities, I instead use the framework of the localized model to identify how students do and do not make use of visualization strategies on a wide variety of assessment tasks, regardless of each task's explicit demand for spatial cognition. I establish the dimensions of the localized model with five studies. First, I designed two novel psychometrics to reveal how students selectively use visualization strategies to interpret and analyze molecular structures. The third study comprised a document analysis of organic chemistry assessments that empirically determined that only 12% of these tasks explicitly require visualization. The fourth study concerned a series of correlation analyses between measures of visuo-spatial ability and chemistry performance to clarify the impact of individual differences. Finally, I performed a series of micro-genetic analyses of student problem solving that confirmed the earlier findings and revealed that students prefer to visualize molecules from alternative perspectives without using mental rotation. The results of each study reveal that occurrences of sophisticated spatial cognition are relatively infrequent in chemistry, despite instructors' ostensible emphasis on the visualization of three-dimensional structures. To the contrary, students eschew visualization strategies and instead rely on the use of molecular diagrams to scaffold spatial cognition. 
Visualization does play a key role, however, in problem solving on a select group of chemistry tasks that require students to translate molecular representations or fundamentally alter the morphology of a molecule. Ultimately, this dissertation calls into question the assumption that individual differences in visuo-spatial ability play a critical role in determining who succeeds in chemistry. The results of this work establish a foundation for defining the precise manner in which visualization tools can best support problem solving.

  3. Helping students learn effective problem solving strategies by reflecting with peers

    NASA Astrophysics Data System (ADS)

    Mason, Andrew; Singh, Chandralekha

    2010-07-01

    We study how introductory physics students engage in reflection with peers about problem solving. The recitations for an introductory physics course with 200 students were broken into a "peer reflection" (PR) group and a traditional group. Each week in recitation, small teams of students in the PR group reflected on selected problems from the homework and discussed why the solutions of some students employed better problem solving strategies than others. The graduate and undergraduate teaching assistants in the PR recitations provided guidance and coaching to help students learn effective problem solving heuristics. In the traditional group recitations, students could ask the graduate TA questions about the homework before they took a weekly quiz. The traditional group recitation quiz questions were similar to the homework questions selected for peer reflection in the PR group recitations. As one measure of the impact of this intervention, we investigated how likely students were to draw diagrams to help with problem solving on the final exam, which had only multiple-choice questions. We found that the PR group drew diagrams on more problems than the traditional group, even when there was no explicit reward for doing so. Also, students who drew more diagrams for the multiple-choice questions outperformed those who did not, regardless of which group they belonged to.

  4. Virtual-pulse time integral methodology: A new explicit approach for computational dynamics - Theoretical developments for general nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Different from existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives on methodology development and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) using nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.

  5. Prospects for mirage mediation

    NASA Astrophysics Data System (ADS)

    Pierce, Aaron; Thaler, Jesse

    2006-09-01

    Mirage mediation reduces the fine-tuning in the minimal supersymmetric standard model by dynamically arranging a cancellation between anomaly-mediated and modulus-mediated supersymmetry breaking. We explore the conditions under which a mirage ``messenger scale'' is generated near the weak scale and the little hierarchy problem is solved. We do this by explicitly including the dynamics of the SUSY-breaking sector needed to cancel the cosmological constant. The most plausible scenario for generating a low mirage scale does not readily admit an extra-dimensional interpretation. We also review the possibilities for solving the μ/Bμ problem in such theories, a potential hidden source of fine-tuning.

  6. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  7. A point implicit time integration technique for slow transient flow problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.

    2015-05-01

    We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information, related to the same or other variables, is handled explicitly. The method does not require implicit iteration; instead it advances the solution in time in a similar spirit to explicit methods, except that it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integration with very large time steps. Because the method can be time-inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
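
    A minimal single-variable sketch of the idea (assuming a linear sink term for illustration; the paper's flow equations are far richer): treating the solution variable implicitly in an otherwise explicit-looking update yields a closed-form step that needs no iteration yet remains stable at very large time steps.

    ```python
    # Point-implicit update for du/dt = -k*u + s: the sink term is taken
    # at the new time level, and because it is linear in u the update can
    # be solved for u_new in closed form (no iteration); the source s is
    # handled explicitly.
    def point_implicit_step(u, dt, k, s):
        # u_new = u + dt * (-k * u_new + s)  =>  solve directly for u_new
        return (u + dt * s) / (1.0 + dt * k)

    # Very large time steps remain stable and relax toward the steady
    # state s/k, as expected of a slow-transient integrator.
    u, k, s = 0.0, 5.0, 10.0
    for _ in range(50):
        u = point_implicit_step(u, dt=100.0, k=k, s=s)
    print(abs(u - s / k) < 1e-9)  # → True
    ```

    An explicit step of the same equation would be unstable for dt > 2/k; the point-implicit form pays only a division per update for its unconditional stability.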

  8. Building Arguments: Key to Collaborative Scaffolding

    ERIC Educational Resources Information Center

    Cáceres, M.; Nussbaum, M.; Marroquín, M.; Gleisner, S.; Marquínez, J. T.

    2018-01-01

    Collaborative problem-solving in the classroom is a student-centred pedagogical practice that looks to improve learning. However, collaboration does not occur spontaneously; instead it needs to be guided by appropriate scaffolding. In this study we explore whether a script that explicitly incorporates constructing arguments in collaborative…

  9. Explicit solutions of a gravity-induced film flow along a convectively heated vertical wall.

    PubMed

    Raees, Ammarah; Xu, Hang

    2013-01-01

    The gravity-driven film flow has been analyzed along a vertical wall subjected to a convective boundary condition. The Boussinesq approximation is applied to simplify the buoyancy term, and similarity transformations are used on the mathematical model of the problem under consideration, to obtain a set of coupled ordinary differential equations. Then the reduced equations are solved explicitly by using homotopy analysis method (HAM). The resulting solutions are investigated for heat transfer effects on velocity and temperature profiles.

  10. Bethe-Salpeter Eigenvalue Solver Package (BSEPACK) v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SHAO, MEIYEU; YANG, CHAO

    2017-04-25

    The BSEPACK package contains a set of subroutines for solving the Bethe-Salpeter eigenvalue (BSE) problem. This type of problem arises in the study of optical excitation of nanoscale materials. The BSE problem is a structured non-Hermitian eigenvalue problem. The BSEPACK software can be used to compute all or a subset of the eigenpairs of a BSE Hamiltonian. It can also be used to compute the optical absorption spectrum without computing BSE eigenvalues and eigenvectors explicitly. The package makes use of ScaLAPACK, LAPACK, and BLAS.

  11. It Is More about Telling Interesting Stories: Use Explicit Hints in Storytelling to Help College Students Solve Ill-defined Problems

    ERIC Educational Resources Information Center

    Hseih, Wen-Lan; Smith, Brian K.; Stephanou, Spiro E.

    2004-01-01

    A team consisting of three faculty members from Agricultural Economics, Agribusiness Management, and Food Science, with two research assistants, at Penn State University has been working for three years on creating a food product case library for a problem-based learning and case-based instruction course. With the assistance of experts from the food…

  12. Time and band limiting for matrix valued functions: an integral and a commuting differential operator

    NASA Astrophysics Data System (ADS)

    Grünbaum, F. A.; Pacharoni, I.; Zurrián, I.

    2017-02-01

    The problem of recovering a signal of finite duration from a piece of its Fourier transform was solved at Bell Labs in the 1960s by exploiting a ‘miracle’: a certain naturally appearing integral operator commutes with an explicit differential one. Here we show that this same miracle holds in a matrix valued version of the same problem.

  13. The piecewise-linear predictor-corrector code - A Lagrangian-remap method for astrophysical flows

    NASA Technical Reports Server (NTRS)

    Lufkin, Eric A.; Hawley, John F.

    1993-01-01

    We describe a time-explicit finite-difference algorithm for solving the nonlinear fluid equations. The method is similar to existing Eulerian schemes in its use of operator-splitting and artificial viscosity, except that we solve the Lagrangian equations of motion with a predictor-corrector and then remap onto a fixed Eulerian grid. The remap is formulated to eliminate errors associated with coordinate singularities, with a general prescription for remaps of arbitrary order. We perform a comprehensive series of tests on standard problems. Self-convergence tests show that the code has a second-order rate of convergence in smooth, two-dimensional flow, with pressure forces, gravity, and curvilinear geometry included. While not as accurate on idealized problems as high-order Riemann-solving schemes, the predictor-corrector Lagrangian-remap code has great flexibility for application to a variety of astrophysical problems.

  14. Modifying a Research-Based Problem-Solving Intervention to Improve the Problem-Solving Performance of Fifth and Sixth Graders With and Without Learning Disabilities.

    PubMed

    Krawec, Jennifer; Huang, Jia

    The purpose of the present study was to test the efficacy of a modified cognitive strategy instructional intervention originally developed to improve the mathematical problem solving of middle and high school students with learning disabilities (LD). Fifth and sixth grade general education mathematics teachers and their students of varying ability (i.e., average-achieving [AA] students, low-achieving [LA] students, and students with LD) participated in the research study. Several features of the intervention were modified, including (a) explicitness of instruction, (b) emphasis on meta-cognition, (c) focus on problem-solving prerequisites, (d) extended duration of initial intervention, and (e) addition of visual supports. General education math teachers taught all instructional sessions to their inclusive classrooms. Curriculum-based measures (CBMs) of math problem solving were administered five times over the course of the year. A multilevel model (repeated measures nested within students, and students nested within schools) was used to analyze student progress on CBMs. Though CBM scores in the intervention group were initially lower than those of the comparison group, intervention students improved significantly more in the first phase, with no differences in the second phase. Implications for instruction are discussed as well as directions for future research.

  15. Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given, and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (the Goddard problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical plane, also in the presence of a dynamic pressure limit. In the second problem, singular control appears along arcs with an active dynamic pressure limit, which, in the context of optimal control, represents a first-order state inequality constraint. An extension of the generalized Legendre-Clebsch condition to the case of singular control along state/control constrained arcs is presented and applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi necessary conditions is made by giving a new proof of the non-optimality of conjugate paths in the accessory minimum problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's necessary condition to the case of trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector-valued control functions.

  16. Minimization of the root of a quadratic functional under a system of affine equality constraints with application to portfolio management

    NASA Astrophysics Data System (ADS)

    Landsman, Zinoviy

    2008-10-01

    We present an explicit closed-form solution of the problem of minimizing the root of a quadratic functional subject to a system of affine constraints. The result generalizes Z. Landsman, Minimization of the root of a quadratic functional under an affine equality constraint, J. Comput. Appl. Math. (2007, in press), where the optimization problem was solved under only one linear constraint. This is of interest for solving significant problems pertaining to financial economics as well as some classes of feasibility and optimization problems which frequently occur in tomography and other fields. The results are illustrated in the problem of optimal portfolio selection, and the particular case in which the expected return of the portfolio is certain is discussed.
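As a much simplified sketch of this kind of problem (not the paper's general formula): for the plain case of minimizing sqrt(x' Σ x) subject to A x = b, the square root is monotone, so the minimizer coincides with that of the equality-constrained quadratic program, which has the textbook closed form x = Σ⁻¹A'(A Σ⁻¹A')⁻¹b. The covariance matrix, returns, and constraints below are hypothetical.

```python
import numpy as np

# Hypothetical 3-asset covariance matrix and two affine constraints:
# portfolio weights sum to 1, expected return equals 0.10.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
A = np.array([[1.00, 1.00, 1.00],      # budget constraint
              [0.08, 0.10, 0.14]])     # expected asset returns
b = np.array([1.0, 0.10])

# Because sqrt is monotone, minimizing sqrt(x' Sigma x) s.t. A x = b has the
# same minimizer as the quadratic program: x = Sigma^{-1} A' (A Sigma^{-1} A')^{-1} b.
Si_At = np.linalg.solve(Sigma, A.T)          # Sigma^{-1} A'
x = Si_At @ np.linalg.solve(A @ Si_At, b)    # optimal weights

risk = np.sqrt(x @ Sigma @ x)                # minimized root of the functional
print(x, risk)
```

The closed form avoids any iterative optimizer: two linear solves give the exact constrained minimizer.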

  17. Learning to forget: continual prediction with LSTM.

    PubMed

    Gers, F A; Schmidhuber, J; Cummins, F

    2000-10-01

    Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive "forget gate" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way.
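The forget-gate mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustrative cell, with hypothetical weight shapes and random parameters, not the authors' trained networks; the key line is the state update, where the forget gate f scales the old cell state and can drive it toward zero to "reset" the cell.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """One step of an LSTM cell with a forget gate.
    W: (4*H, X+H) stacked weights for input, forget, cell, and output gates."""
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2*H])          # forget gate: learns when to reset the state
    g = np.tanh(z[2*H:3*H])        # candidate cell update
    o = sigmoid(z[3*H:4*H])        # output gate
    c_new = f * c + i * g          # forget gate scales the old cell state
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
X, H = 3, 4                        # hypothetical input and hidden sizes
W = rng.normal(scale=0.1, size=(4 * H, X + H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):                 # run the cell over a short random stream
    h, c = lstm_step(rng.normal(size=X), h, c, W, b)
print(h)
```

Without the factor f (i.e., f fixed at 1, as in the original LSTM), c accumulates i * g indefinitely on a continual stream, which is exactly the breakdown the abstract describes.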

  18. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1992-01-01

    The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arises at each time step in unsteady Navier-Stokes solvers.
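For readers unfamiliar with GMRES, the core of the algorithm is an Arnoldi process that builds an orthonormal Krylov basis, followed by a small least-squares solve that minimizes the residual over that basis. The sketch below is a generic unrestarted toy on a tiny dense system, not the flow-solver implementation discussed in the report.

```python
import numpy as np

def gmres(A, b, m=20):
    """Minimal unrestarted GMRES: Arnoldi basis + least-squares residual
    minimization over the Krylov subspace (illustrative helper only)."""
    n = b.size
    Q = np.zeros((n, m + 1))
    Hh = np.zeros((m + 1, m))              # upper-Hessenberg projection of A
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta                     # zero initial guess assumed
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            Hh[i, j] = Q[:, i] @ v
            v -= Hh[i, j] * Q[:, i]
        Hh[j + 1, j] = np.linalg.norm(v)
        if Hh[j + 1, j] < 1e-14:           # happy breakdown: exact solution found
            m = j + 1
            break
        Q[:, j + 1] = v / Hh[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(Hh[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
b = np.array([1., 2., 3.])
x = gmres(A, b)
print(np.linalg.norm(A @ x - b))   # residual is tiny
```

In an unsteady solver, the matrix-vector product A @ v would be replaced by the (possibly matrix-free) Jacobian action of the discretized equations at the current time step.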

  19. Validation of a High-Order Prefactored Compact Scheme on Nonlinear Flows with Complex Geometries

    NASA Technical Reports Server (NTRS)

    Hixon, Ray; Mankbadi, Reda R.; Povinelli, L. A. (Technical Monitor)

    2000-01-01

    Three benchmark problems are solved using a sixth-order prefactored compact scheme employing an explicit 10th-order filter with optimized fourth-order Runge-Kutta time stepping. The problems solved are the following: (1) propagation of sound waves through a transonic nozzle; (2) shock-sound interaction; and (3) single airfoil gust response. In the first two problems, the spatial accuracy of the scheme is tested on a stretched grid, and the effectiveness of boundary conditions is shown. The solution stability and accuracy near a shock discontinuity are shown as well. Also, 1-D nonlinear characteristic boundary conditions are evaluated. In the third problem, a nonlinear Euler solver is used that solves the equations in generalized curvilinear coordinates using the chain-rule transformation. This work, continuing earlier work on flat-plate cascades and Joukowski airfoils, focuses mainly on the effect of the grid and boundary conditions on the accuracy of the solution. The grids were generated using a commercially available grid generator, GridPro/az3000.

  20. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residuals (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.

  1. Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests

    NASA Astrophysics Data System (ADS)

    Toth, G.; Keppens, R.; Botchev, M. A.

    1998-04-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock-capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus reducing the computational cost. The test problems cover one- and two-dimensional, steady-state and time-accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
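The time-step restriction mentioned above is easiest to see on the scalar linear test equation y' = -λy. The toy comparison below (not from the paper) uses a step well outside the explicit Euler stability bound h < 2/λ; backward Euler, being unconditionally stable, handles the same step without difficulty.

```python
# Stiff test problem y' = -50 y, y(0) = 1, integrated with step h = 0.1.
# Explicit Euler requires h < 2/50 for stability; backward (implicit) Euler
# is unconditionally stable, so the same large step still decays correctly.
lam, h, steps = 50.0, 0.1, 20
y_exp = y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + h * (-lam * y_exp)   # explicit: y_{n+1} = (1 - h*lam) y_n
    y_imp = y_imp / (1.0 + h * lam)      # implicit: y_{n+1} = y_n / (1 + h*lam)
print(y_exp, y_imp)   # explicit iterate blows up; implicit decays toward 0
```

For the linear problem the implicit update is a single division; for nonlinear conservation laws it becomes the (semi-)implicit system solve whose cost the paper weighs against the larger allowed time step.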

  2. Partitioning problems in parallel, pipelined and distributed computing

    NASA Technical Reports Server (NTRS)

    Bokhari, S.

    1985-01-01

    The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple-satellite system: partitioning multiple chain-structured parallel programs, multiple arbitrarily structured serial programs, and single tree-structured parallel programs. In addition, the problems of partitioning chain-structured parallel programs across chain-connected systems and across shared-memory (or shared-bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
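To make the flavor of chain partitioning concrete, here is a simple bottleneck variant: split a chain of module weights into p contiguous blocks so that the heaviest block is as light as possible. This is an illustrative textbook formulation solved by binary search plus a greedy feasibility check, not Bokhari's Sum-Bottleneck path algorithm.

```python
def partition_chain(weights, p):
    """Partition a chain of module weights into at most p contiguous blocks,
    minimizing the heaviest block (illustrative bottleneck variant)."""
    def feasible(cap):
        blocks, cur = 1, 0
        for w in weights:
            if cur + w > cap:          # start a new block when cap is exceeded
                blocks, cur = blocks + 1, 0
            cur += w
        return blocks <= p
    lo, hi = max(weights), sum(weights)
    while lo < hi:                     # binary search on the bottleneck value
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(partition_chain([2, 3, 7, 1, 4, 5], 3))   # → 9, e.g. [2,3] [7,1] [4,5]
```

Contiguity is what makes the problem tractable here; the paper's harder variants handle multiple programs, trees, and interprocessor communication costs.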

  3. Three-dimensional implicit lambda methods

    NASA Technical Reports Server (NTRS)

    Napolitano, M.; Dadone, A.

    1983-01-01

    This paper derives the three-dimensional lambda-formulation equations for a general orthogonal curvilinear coordinate system and provides various block-explicit and block-implicit methods for solving them numerically. Three model problems, characterized by subsonic, supersonic and transonic flow conditions, are used to assess the reliability and compare the efficiency of the proposed methods.

  4. Successfully Carrying out Complex Learning-Tasks through Guiding Teams' Qualitative and Quantitative Reasoning

    ERIC Educational Resources Information Center

    Slof, B.; Erkens, G.; Kirschner, P. A.; Janssen, J.; Jaspers, J. G. M.

    2012-01-01

    This study investigated whether and how scripting learners' use of representational tools in a computer supported collaborative learning (CSCL)-environment fostered their collaborative performance on a complex business-economics task. Scripting the problem-solving process sequenced and made its phase-related part-task demands explicit, namely…

  5. The Use of Screencasting to Transform Traditional Pedagogy in a Preservice Mathematics Content Course

    ERIC Educational Resources Information Center

    Guerrero, Shannon; Baumgartel, Drew; Zobott, Maren

    2013-01-01

    Screencasting, or digital recordings of computer screen outputs, can be used to promote pedagogical transformation in the mathematics classroom by moving explicit, procedural-based instruction to the online environment, thus freeing classroom time for more student-centered investigations, problem solving, communication, and collaboration. This…

  6. Effortful Control, Explicit Processing, and the Regulation of Human Evolved Predispositions

    ERIC Educational Resources Information Center

    MacDonald, Kevin B.

    2008-01-01

    This article analyzes the effortful control of automatic processing related to social and emotional behavior, including control over evolved modules designed to solve problems of survival and reproduction that were recurrent over evolutionary time. The inputs to effortful control mechanisms include a wide range of nonrecurrent…

  7. Effects of Blended Instructional Models on Math Performance

    ERIC Educational Resources Information Center

    Bottge, Brian A.; Ma, Xin; Gassaway, Linda; Toland, Michael D.; Butler, Mark; Cho, Sun-Joo

    2014-01-01

    A pretest-posttest cluster-randomized trial involving 31 middle schools and 335 students with disabilities tested the effects of combining explicit and anchored instruction on fraction computation and problem solving. Results of standardized and researcher-developed tests showed that students who were taught with the blended units outscored…

  8. Using High-Probability Instructional Sequences and Explicit Instruction to Teach Multiplication Facts

    ERIC Educational Resources Information Center

    Leach, Debra

    2016-01-01

    Students with learning disabilities often struggle with math fact fluency and require specialized interventions to recall basic facts. Deficits in math fact fluency can result in later difficulties when learning higher-level mathematical computation, concepts, and problem solving. The response-to-intervention (RTI) and…

  9. Suboptimal Tradeoffs in Information Seeking

    ERIC Educational Resources Information Center

    Fu, Wai-Tat; Gray, Wayne D.

    2006-01-01

    Explicit information-seeking actions are needed to evaluate alternative actions in problem-solving tasks. Information-seeking costs are often traded off against the utility of information. We present three experiments that show how subjects adapt to the cost and information structures of environments in a map-navigation task. We found that…

  10. Direct SQP-methods for solving optimal control problems with delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goellmann, L.; Bueskens, C.; Maurer, H.

    The maximum principle for optimal control problems with delays leads to a boundary value problem (BVP) which is retarded in the state and advanced in the costate function. Based on shooting techniques, solution methods for this type of BVP have been proposed. In recent years, direct optimization methods have been favored for solving control problems without delays. Direct methods approximate the control and the state over a fixed mesh and solve the resulting NLP-problem with SQP-methods. These methods dispense with the costate function and have been shown to be robust and efficient. In this paper, we propose a direct SQP-method for retarded control problems. In contrast to conventional direct methods, only the control variable is approximated by, e.g., spline functions. The state is computed via a high-order Runge-Kutta type algorithm and does not enter the NLP-problem explicitly through an equation. This approach reduces the number of optimization variables considerably and is implementable even on a PC. Our method is illustrated by the numerical solution of retarded control problems with constraints. In particular, we consider the control of a continuous stirred tank reactor which has been solved by dynamic programming. This example illustrates the robustness and efficiency of the proposed method. Open questions concerning sufficient conditions and convergence of discretized NLP-problems are discussed.

  11. Transient Finite Element Computations on a Variable Transputer System

    NASA Technical Reports Server (NTRS)

    Smolinski, Patrick J.; Lapczyk, Ireneusz

    1993-01-01

    A parallel program to analyze transient finite element problems was written and implemented on a system of transputer processors. The program uses the explicit time integration algorithm which eliminates the need for equation solving, making it more suitable for parallel computations. An interprocessor communication scheme was developed for arbitrary two dimensional grid processor configurations. Several 3-D problems were analyzed on a system with a small number of processors.
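The property the abstract relies on, that explicit time integration with a lumped (diagonal) mass matrix needs no equation solving, can be sketched on a tiny example. The two-DOF spring chain and time-stepping loop below are hypothetical and use a simple symplectic (leapfrog-style) update rather than the program's actual finite element kernel.

```python
import numpy as np

# Explicit time integration of M a = -K u with a lumped (diagonal) mass
# matrix: each step is a cheap per-DOF division, no linear system solve,
# which is why the approach parallelizes well across processors.
K = np.array([[ 2.0, -1.0],
              [-1.0,  1.0]])          # stiffness of a 2-DOF spring chain
m = np.array([1.0, 1.0])              # lumped masses (diagonal M)
u = np.array([0.0, 0.0])              # initial displacements
v = np.array([0.0, 1.0])              # initial velocities
dt = 0.01                             # must respect the explicit stability limit
for _ in range(1000):
    a = (-K @ u) / m                  # accelerations: a diagonal "solve" only
    v += dt * a                       # symplectic Euler velocity update
    u += dt * v                       # displacement update with new velocity
print(u)
```

The stability price is the usual one: dt is bounded by the highest structural frequency, so stiff meshes force small steps even though each step is trivially cheap.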

  12. Gas evolution from spheres

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.

    1991-04-01

    Gas evolution from spherical solids or liquids where no convective processes are active is analyzed. Three problem classes are considered: (1) constant-concentration boundary, (2) Henry's law (first-order) boundary, and (3) Sieverts' law (second-order) boundary. General expressions are derived for dimensionless times and transport parameters appropriate to each of the classes considered. However, in the second-order case, the nonlinearities of the problem require the presence of explicit dimensional variables in the solution. Sample problems are solved to illustrate the method.
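For the first problem class, a uniformly loaded sphere whose surface is held at zero concentration, the fractional release has the classical Crank-type series solution F(τ) = 1 − (6/π²) Σ exp(−n²π²τ)/n², with τ = Dt/a². The sketch below (with hypothetical parameter values) evaluates that series; it is textbook diffusion theory, not the report's general expressions.

```python
import math

def fractional_release(D, a, t, terms=200):
    """Fraction of gas released from a uniformly loaded sphere of radius a
    held at zero surface concentration (constant-concentration class).
    Classical truncated-series solution; D, a, t are illustrative values."""
    tau = D * t / a**2                         # dimensionless time
    s = sum(math.exp(-n * n * math.pi**2 * tau) / (n * n)
            for n in range(1, terms + 1))
    return 1.0 - (6.0 / math.pi**2) * s

print(fractional_release(D=1e-12, a=1e-4, t=1e4))   # near-complete release
```

Because everything depends on t only through τ = Dt/a², the dimensionless-time formulation the abstract mentions falls out naturally for this class; the second-order (Sieverts' law) boundary breaks that scaling.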

  13. Implicit memory. Retention without remembering.

    PubMed

    Roediger, H L

    1990-09-01

    Explicit measures of human memory, such as recall or recognition, reflect conscious recollection of the past. Implicit tests of retention measure transfer (or priming) from past experience on tasks that do not require conscious recollection of recent experiences for their performance. The article reviews research on the relation between explicit and implicit memory. The evidence points to substantial differences between standard explicit and implicit tests, because many variables create dissociations between these tests. For example, although pictures are remembered better than words on explicit tests, words produce more priming than do pictures on several implicit tests. These dissociations may implicate different memory systems that subserve distinct memorial functions, but the present argument is that many dissociations can be understood by appealing to general principles that apply to both explicit and implicit tests. Phenomena studied under the rubric of implicit memory may have important implications in many other fields, including social cognition, problem solving, and cognitive development.

  14. Increasing Explanatory Behaviour, Problem-Solving, and Reasoning within Classes Using Cooperative Group Work

    ERIC Educational Resources Information Center

    Gillies, Robyn M.; Haynes, Michele

    2011-01-01

    The present study builds on research that indicates that teachers play a key role in promoting those interactional behaviours that challenge children's thinking and scaffold their learning. It does this by seeking to determine whether teachers who implement cooperative learning and receive training in explicit strategic questioning strategies…

  15. Language Learning of Children with Typical Development Using a Deductive Metalinguistic Procedure

    ERIC Educational Resources Information Center

    Finestack, Lizbeth H.

    2014-01-01

    Purpose: In the current study, the author aimed to determine whether 4- to 6-year-old typically developing children possess requisite problem-solving and language abilities to produce, generalize, and retain a novel verb inflection when taught using an explicit, deductive teaching procedure. Method: Study participants included a cross-sectional…

  16. Do Pre-Service Science Teachers Have Understanding of the Nature of Science?: Explicit-Reflective Approach

    ERIC Educational Resources Information Center

    Örnek, Funda

    2014-01-01

    Current approaches in Science Education attempt to enable students to develop an understanding of the nature of science, develop fundamental scientific concepts, and develop the ability to structure, analyze, reason, and communicate effectively. Students pose, solve, and interpret scientific problems, and eventually set goals and regulate their…

  17. Scattering on two Aharonov-Bohm vortices

    NASA Astrophysics Data System (ADS)

    Bogomolny, E.

    2016-12-01

    The problem of two Aharonov-Bohm (AB) vortices for the Helmholtz equation is examined in detail. It is demonstrated that the method proposed by Myers (1963 J. Math. Phys. 6 1839) for slit diffraction can be generalised to obtain an explicit solution for AB vortices. Due to the singular nature of the AB interaction, the Green function and scattering amplitude for two AB vortices obey a series of partial differential equations. Coefficients entering these equations fulfil ordinary non-linear differential equations whose solutions can be obtained by solving the Painlevé III equation. The asymptotics of the necessary functions for very large and very small vortex separations are calculated explicitly. Taken together, this means that the problem of two AB vortices is exactly solvable.

  18. A Solution Adaptive Structured/Unstructured Overset Grid Flow Solver with Applications to Helicopter Rotor Flows

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.

    1995-01-01

    This paper summarizes a method that solves both the three-dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution-adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations, while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.

  19. Coupling Conceptual and Quantitative Problems to Develop Expertise in Introductory Physics Students

    NASA Astrophysics Data System (ADS)

    Singh, Chandralekha

    2008-10-01

    We discuss the effect of administering conceptual and quantitative isomorphic problem pairs (CQIPP) back to back vs. asking students to solve only one of the problems in the CQIPP in introductory physics courses. Students who answered both questions in a CQIPP often performed better on the conceptual questions than those who answered the corresponding conceptual questions only. Although students often took advantage of the quantitative counterpart to answer a conceptual question of a CQIPP correctly, when only given the conceptual question, students seldom tried to convert it into a quantitative question, solve it and then reason about the solution conceptually. Even in individual interviews, when students who were only given conceptual questions had difficulty and the interviewer explicitly encouraged them to convert the conceptual question into the corresponding quantitative problem by choosing appropriate variables, a majority of students were reluctant and preferred to guess the answer to the conceptual question based upon their gut feeling.

  20. SDG Fermion-Pair Algebraic SO(12) and Sp(10) Models and Their Boson Realizations

    NASA Astrophysics Data System (ADS)

    Navratil, P.; Geyer, H. B.; Dobes, J.; Dobaczewski, J.

    1995-11-01

    It is shown how the boson mapping formalism may be applied as a useful many-body tool to solve a fermion problem. This is done in the context of generalized Ginocchio models for which we introduce S-, D-, and G-pairs of fermions and subsequently construct the sdg-boson realizations of the generalized Dyson type. The constructed SO(12) and Sp(10) fermion models are solved beyond the explicit symmetry limits. Phase transitions to rotational structures are obtained also in situations where there is no underlying SU(3) symmetry.

  1. Large-N-approximated field theory for multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Parisi, G.; Pascazio, S.; Scardicchio, A.

    2015-12-01

    We try to characterize the statistics of multipartite entanglement of the random states of an n-qubit system. Unable to solve the problem exactly, we generalize it, replacing complex numbers with real vectors with Nc components (the original problem is recovered for Nc=2). Studying the leading diagrams in the large-Nc approximation, we unearth the presence of a phase transition and, in an explicit example, show that the so-called entanglement frustration disappears in the large-Nc limit.

  2. Jump phenomena. [large amplitude responses of nonlinear systems

    NASA Technical Reports Server (NTRS)

    Reiss, E. L.

    1980-01-01

    The paper considers jump phenomena composed of large amplitude responses of nonlinear systems caused by small amplitude disturbances. Physical problems where large jumps in the solution amplitude are important features of the response are described, including snap buckling of elastic shells, chemical reactions leading to combustion and explosion, and long-term climatic changes of the earth's atmosphere. A new method of rational functions is then developed, which consists of representing the solutions of the jump problems as rational functions of the small disturbance parameter; this method can solve jump problems explicitly.

  3. Computing an upper bound on contact stress with surrogate duality

    NASA Astrophysics Data System (ADS)

    Xuan, Zhaocheng; Papadopoulos, Panayiotis

    2016-07-01

    We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator a linear function and the denominator a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation in terms of the matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is obtained by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum, since the fraction function is pseudo-concave in a neighborhood of the solution. These two bounds are computed with the problem dimension being only the number of contact nodes or node pairs, which is much smaller than the dimension of the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through examples involving both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
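Dinkelbach's algorithm, mentioned above, solves max f(x)/g(x) by repeatedly solving the parametric problem max f(x) − λ g(x) and updating λ with the current ratio, stopping when the parametric optimum reaches zero. The toy one-dimensional fractional program below (linear numerator, quadratic convex denominator, as in the abstract) is purely illustrative; the subproblem solver is an assumed closed-form helper.

```python
def dinkelbach(f, g, argmax_sub, x0, tol=1e-10, iters=100):
    """Dinkelbach's algorithm for maximizing f(x)/g(x), g > 0.
    argmax_sub(lam) must return a maximizer of f(x) - lam*g(x) (assumed)."""
    x = x0
    for _ in range(iters):
        lam = f(x) / g(x)                   # current ratio
        x = argmax_sub(lam)                 # solve the parametric subproblem
        if f(x) - lam * g(x) < tol:         # optimal when parametric value ~ 0
            break
    return x, f(x) / g(x)

# Toy fractional program: maximize (2x + 1) / (x^2 + 1) on [0, 3].
f = lambda x: 2.0 * x + 1.0
g = lambda x: x * x + 1.0
argmax_sub = lambda lam: min(max(1.0 / lam, 0.0), 3.0)  # stationary point, clipped
x_opt, val = dinkelbach(f, g, argmax_sub, x0=1.0)
print(x_opt, val)   # converges to x = (sqrt(5)-1)/2, ratio = golden ratio
```

The update λ ← f(x)/g(x) is monotone increasing and converges superlinearly, which is why it is attractive when, as in the paper, each subproblem is itself cheap to solve.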

  4. Improving Science Scores of Middle School Students with Learning Disabilities through Engineering Problem Solving Activities

    ERIC Educational Resources Information Center

    Starling, A. Leyf Peirce; Lo, Ya-Yu; Rivera, Christopher J.

    2015-01-01

    This study evaluated the differential effects of three different science teaching methods, namely engineering teaching kit (ETK), explicit instruction (EI), and a combination of the two methods (ETK+EI), in two sixth-grade science classrooms. Twelve students with learning disabilities (LD) and/or attention deficit hyperactivity disorder (ADHD)…

  5. A Synergy between the Technological Process and a Methodology for Web Design: Implications for Technological Problem Solving and Design

    ERIC Educational Resources Information Center

    Jakovljevic, Maria; Ankiewicz, Piet; De swardt, Estelle; Gross, Elna

    2004-01-01

    Traditional instructional methodology in the Information System Design (ISD) environment lacks explicit strategies for promoting the cognitive skills of prospective system designers. This contributes to the fragmented knowledge and low motivational and creative involvement of learners in system design tasks. In addition, present ISD methodologies,…

  6. Who Is Granted Authority in the Mathematics Classroom? An Analysis of the Observed and Perceived Distribution of Authority

    ERIC Educational Resources Information Center

    Depaepe, Fien; De Corte, Erik; Verschaffel, Lieven

    2012-01-01

    The article deals with the way in which authority was established and interpreted by teachers and students in two Flemish sixth-grade mathematics classrooms. Problem-solving lessons during a seven-month observation period were analysed regarding three aspects of teacher-student interactions that explicitly or implicitly reflect who bears…

  7. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  8. Algebraic criteria for positive realness relative to the unit circle.

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1973-01-01

    A definition is presented of the circle positive realness of real rational functions relative to the unit circle in the complex variable plane. The problem of testing this kind of positive reality is reduced to the algebraic problem of determining the distribution of zeros of a real polynomial with respect to and on the unit circle. Such reformulation of the problem avoids the search for explicit information about imaginary poles of rational functions. The stated algebraic problem is solved by applying the polynomial criteria of Marden (1966) and Jury (1964), and a completely recursive algorithm for circle positive realness is obtained.
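The zero-distribution step described above can be checked numerically for small cases. The helper below simply counts roots inside, on, and outside the unit circle via the companion-matrix roots; it is a numerical stand-in for illustration only, whereas the Marden/Jury criteria in the paper do this algebraically, without computing the roots.

```python
import numpy as np

def zero_distribution(coeffs, tol=1e-8):
    """Count zeros of a real polynomial (highest degree first) that lie
    inside, on, and outside the unit circle (numerical sketch)."""
    r = np.abs(np.roots(coeffs))
    inside = int(np.sum(r < 1 - tol))
    on = int(np.sum(np.abs(r - 1) <= tol))
    return inside, on, len(r) - inside - on

# p(z) = (z - 0.5)(z - 1)(z + 2) = z^3 + 0.5 z^2 - 2.5 z + 1
print(zero_distribution([1.0, 0.5, -2.5, 1.0]))   # → (1, 1, 1)
```

The algebraic criteria are preferable in practice precisely because root-finding is numerically delicate for zeros near the unit circle, which is the boundary case the paper treats explicitly.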

  9. QCD axion dark matter from long-lived domain walls during matter domination

    NASA Astrophysics Data System (ADS)

    Harigaya, Keisuke; Kawasaki, Masahiro

    2018-07-01

    The domain wall problem of the Peccei-Quinn mechanism can be solved if the Peccei-Quinn symmetry is explicitly broken by a small amount. Domain walls decay into axions, which may account for dark matter of the universe. This scheme is however strongly constrained by overproduction of axions unless the phase of the explicit breaking term is tuned. We investigate the case where the universe is matter-dominated around the temperature of the MeV scale and domain walls decay during this matter dominated epoch. We show how the viable parameter space is expanded.

  10. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into solve-for'' and consider'' parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and textita priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the variance sandpile'' and the sensitivity mosaic,'' and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  11. Fast Eigensolver for Computing 3D Earth's Normal Modes

    NASA Astrophysics Data System (ADS)

    Shi, J.; De Hoop, M. V.; Li, R.; Xi, Y.; Saad, Y.

    2017-12-01

    We present a novel parallel computational approach to compute Earth's normal modes. We discretize Earth via an unstructured tetrahedral mesh and apply the continuous Galerkin finite element method to the elasto-gravitational system. To resolve the eigenvalue pollution issue, following the analysis separating the seismic point spectrum, we explicitly utilize a representation of the displacement for describing the oscillations of the non-seismic modes in the fluid outer core. Effectively, we separate out the essential spectrum, which is naturally related to the Brunt-Väisälä frequency. We introduce two Lanczos approaches with polynomial and rational filtering for solving this generalized eigenvalue problem in prescribed intervals. The polynomial filtering technique only accesses the matrix pair through matrix-vector products and is an ideal candidate for solving three-dimensional large-scale eigenvalue problems. The matrix-free scheme allows us to deal with fluid separation and self-gravitation in an efficient way, while the standard shift-and-invert method typically needs an explicit shifted matrix and its factorization. The rational filtering method converges much faster than the standard shift-and-invert procedure when computing all the eigenvalues inside an interval. Both Lanczos approaches solve for the interior eigenvalues extremely accurately compared with the standard eigensolver. In our computational experiments, we compare our results with the radial earth model benchmark, and visualize the normal modes using vector plots to illustrate the properties of the displacements in different modes.
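The shift-and-invert idea the abstract contrasts with filtered Lanczos can be shown on a tiny symmetric matrix: iterating with (A − σI)⁻¹ amplifies the eigenvector whose eigenvalue is closest to the shift σ. This is a generic textbook sketch, not the authors' solver; in a real code the explicit inverse below would be an LU (or Cholesky) factorization reused across iterations, which is exactly the cost the matrix-free polynomial filtering avoids.

```python
import numpy as np

def shift_invert_eig(A, sigma, iters=100):
    """Power iteration on (A - sigma*I)^{-1}: converges to the eigenvalue
    of symmetric A closest to the shift sigma (illustrative sketch)."""
    n = A.shape[0]
    Ainv = np.linalg.inv(A - sigma * np.eye(n))  # stands in for a factorization
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        v = Ainv @ v                             # one back-substitution per step
        v /= np.linalg.norm(v)
    return v @ A @ v                             # Rayleigh quotient estimate

A = np.diag([1.0, 2.0, 5.0, 10.0])               # hypothetical spectrum
print(shift_invert_eig(A, sigma=4.0))            # eigenvalue closest to 4 is 5
```

Convergence is governed by the gap between the two eigenvalues nearest σ, so a well-placed shift inside the prescribed interval converges very quickly.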

  12. Eighth-order explicit two-step hybrid methods with symmetric nodes and weights for solving orbital and oscillatory IVPs

    NASA Astrophysics Data System (ADS)

    Franco, J. M.; Rández, L.

    The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (among the lowest costs reported in the literature), and they are optimal among TSH methods in the sense that they reach a given order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) so as to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.
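For readers unfamiliar with the problem class, the simplest explicit two-step scheme for special second-order IVPs y'' = f(t, y) is the Störmer/Verlet method (order 2); the TSH methods above generalize this two-step pattern up to order eight. A sketch on a made-up test problem:

```python
import numpy as np

# Stormer/Verlet: y_{n+1} = 2 y_n - y_{n-1} + dt^2 f(t_n, y_n),
# the minimal explicit two-step method for y'' = f(t, y).
# Test problem (illustrative): harmonic oscillator y'' = -y,
# y(0) = 1, y'(0) = 0, exact solution y(t) = cos(t).
f = lambda t, y: -y
dt, nsteps = 0.01, 628

# Taylor starter step for y(dt) from y(0) and y'(0)
y_prev, y = 1.0, 1.0 + 0.5 * dt ** 2 * f(0.0, 1.0)
for n in range(1, nsteps):
    y_prev, y = y, 2.0 * y - y_prev + dt ** 2 * f(n * dt, y)
t_end = nsteps * dt
```

Each step costs one f-evaluation; the hybrid methods in the abstract spend six to eight evaluations per step to buy six additional orders of accuracy.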

  13. A Navier-Stokes Chimera Code on the Connection Machine CM-5: Design and Performance

    NASA Technical Reports Server (NTRS)

    Jespersen, Dennis C.; Levit, Creon; Kwak, Dochan (Technical Monitor)

    1994-01-01

    We have implemented a three-dimensional compressible Navier-Stokes code on the Connection Machine CM-5. The code is set up for implicit time-stepping on single or multiple structured grids. For multiple grids and geometrically complex problems, we follow the 'chimera' approach, where flow data on one zone is interpolated onto another in the region of overlap. We will describe our design philosophy and give some timing results for the current code. A parallel machine like the CM-5 is well-suited for finite-difference methods on structured grids. The regular pattern of connections of a structured mesh maps well onto the architecture of the machine. So the first design choice, finite differences on a structured mesh, is natural. We use centered differences in space, with added artificial dissipation terms. When numerically solving the Navier-Stokes equations, there are liable to be some mesh cells near a solid body that are small in at least one direction. This mesh cell geometry can impose a very severe CFL (Courant-Friedrichs-Lewy) condition on the time step for explicit time-stepping methods. Thus, though explicit time-stepping is well-suited to the architecture of the machine, we have adopted implicit time-stepping. We have further taken the approximate factorization approach. This creates the need to solve large banded linear systems and creates the first possible barrier to an efficient algorithm. To overcome this first possible barrier we have considered two options. The first is just to solve the banded linear systems with data spread over the whole machine, using whatever fast method is available. This option is adequate for solving scalar tridiagonal systems, but for scalar pentadiagonal or block tridiagonal systems it is somewhat slower than desired. The second option is to 'transpose' the flow and geometry variables as part of the time-stepping process: Start with x-lines of data in-processor. 
Form explicit terms in x, then transpose so y-lines of data are in-processor. Form explicit terms in y, then transpose so z-lines are in processor. Form explicit terms in z, then solve linear systems in the z-direction. Transpose to the y-direction, then solve linear systems in the y-direction. Finally transpose to the x direction and solve linear systems in the x-direction. This strategy avoids inter-processor communication when differencing and solving linear systems, but requires a large amount of communication when doing the transposes. The transpose method is more efficient than the non-transpose strategy when dealing with scalar pentadiagonal or block tridiagonal systems. For handling geometrically complex problems the chimera strategy was adopted. For multiple zone cases we compute on each zone sequentially (using the whole parallel machine), then send the chimera interpolation data to a distributed data structure (array) laid out over the whole machine. This information transfer implies an irregular communication pattern, and is the second possible barrier to an efficient algorithm. We have implemented these ideas on the CM-5 using CMF (Connection Machine Fortran), a data parallel language which combines elements of Fortran 90 and certain extensions, and which bears a strong similarity to High Performance Fortran. We make use of the Connection Machine Scientific Software Library (CMSSL) for the linear solver and array transpose operations.
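The in-processor scalar line solves described above reduce to tridiagonal systems, classically solved by the Thomas algorithm. A hedged sketch (not the CMSSL routine):

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system with sub-, main-, and
    super-diagonals a, b, c and right-hand side d (a[0] and c[-1] are
    unused). Illustrative of the scalar line solves done in-processor
    after each transpose; not the CMSSL implementation."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# diagonally dominant test system (made-up numbers)
a = np.array([0.0, 1, 1, 1, 1, 1])
b = np.full(6, 4.0)
c = np.array([1.0, 1, 1, 1, 1, 0])
d = np.arange(1.0, 7.0)
x = thomas(a, b, c, d)
```

Because the forward/backward sweeps are inherently sequential along a line, keeping whole lines in-processor (the transpose strategy) avoids the inter-processor communication these recurrences would otherwise require.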

  14. Clinical Reasoning Terms Included in Clinical Problem Solving Exercises?

    PubMed Central

    Musgrove, John L.; Morris, Jason; Estrada, Carlos A.; Kraemer, Ryan R.

    2016-01-01

    Background Published clinical problem solving exercises have emerged as a common tool to illustrate aspects of the clinical reasoning process. The specific clinical reasoning terms mentioned in such exercises are unknown. Objective We identified which clinical reasoning terms are mentioned in published clinical problem solving exercises and compared them to clinical reasoning terms given high priority by clinician educators. Methods A convenience sample of clinician educators prioritized a list of clinical reasoning terms (whether to include, weight percentage of top 20 terms). The authors then electronically searched the terms in the text of published reports of 4 internal medicine journals between January 2010 and May 2013. Results The top 5 clinical reasoning terms ranked by educators were dual-process thinking (weight percentage = 24%), problem representation (12%), illness scripts (9%), hypothesis generation (7%), and problem categorization (7%). The top clinical reasoning terms mentioned in the text of 79 published reports were context specificity (n = 20, 25%), bias (n = 13, 17%), dual-process thinking (n = 11, 14%), illness scripts (n = 11, 14%), and problem representation (n = 10, 13%). Context specificity and bias were not ranked highly by educators. Conclusions Some core concepts of modern clinical reasoning theory ranked highly by educators are mentioned explicitly in published clinical problem solving exercises. However, some highly ranked terms were not used, and some terms used were not ranked by the clinician educators. Efforts to teach clinical reasoning to trainees may benefit from a common nomenclature of clinical reasoning terms. PMID:27168884

  15. Clinical Reasoning Terms Included in Clinical Problem Solving Exercises?

    PubMed

    Musgrove, John L; Morris, Jason; Estrada, Carlos A; Kraemer, Ryan R

    2016-05-01

    Background Published clinical problem solving exercises have emerged as a common tool to illustrate aspects of the clinical reasoning process. The specific clinical reasoning terms mentioned in such exercises are unknown. Objective We identified which clinical reasoning terms are mentioned in published clinical problem solving exercises and compared them to clinical reasoning terms given high priority by clinician educators. Methods A convenience sample of clinician educators prioritized a list of clinical reasoning terms (whether to include, weight percentage of top 20 terms). The authors then electronically searched the terms in the text of published reports of 4 internal medicine journals between January 2010 and May 2013. Results The top 5 clinical reasoning terms ranked by educators were dual-process thinking (weight percentage = 24%), problem representation (12%), illness scripts (9%), hypothesis generation (7%), and problem categorization (7%). The top clinical reasoning terms mentioned in the text of 79 published reports were context specificity (n = 20, 25%), bias (n = 13, 17%), dual-process thinking (n = 11, 14%), illness scripts (n = 11, 14%), and problem representation (n = 10, 13%). Context specificity and bias were not ranked highly by educators. Conclusions Some core concepts of modern clinical reasoning theory ranked highly by educators are mentioned explicitly in published clinical problem solving exercises. However, some highly ranked terms were not used, and some terms used were not ranked by the clinician educators. Efforts to teach clinical reasoning to trainees may benefit from a common nomenclature of clinical reasoning terms.

  16. Computational strategy for the solution of large strain nonlinear problems using the Wilkins explicit finite-difference approach

    NASA Technical Reports Server (NTRS)

    Hofmann, R.

    1980-01-01

    The STEALTH code system, which solves large strain, nonlinear continuum mechanics problems, was rigorously structured in both overall design and programming standards. The design is based on the theoretical elements of analysis while the programming standards attempt to establish a parallelism between physical theory, programming structure, and documentation. These features have made it easy to maintain, modify, and transport the codes. It has also guaranteed users a high level of quality control and quality assurance.

  17. High performance techniques for space mission scheduling

    NASA Technical Reports Server (NTRS)

    Smith, Stephen F.

    1994-01-01

    In this paper, we summarize current research at Carnegie Mellon University aimed at development of high performance techniques and tools for space mission scheduling. Similar to prior research in opportunistic scheduling, our approach assumes the use of dynamic analysis of problem constraints as a basis for heuristic focusing of problem solving search. This methodology, however, is grounded in representational assumptions more akin to those adopted in recent temporal planning research, and in a problem solving framework which similarly emphasizes constraint posting in an explicitly maintained solution constraint network. These more general representational assumptions are necessitated by the predominance of state-dependent constraints in space mission planning domains, and the consequent need to integrate resource allocation and plan synthesis processes. First, we review the space mission problems we have considered to date and indicate the results obtained in these application domains. Next, we summarize recent work in constraint posting scheduling procedures, which offer the promise of better future solutions to this class of problems.

  18. Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations

    USGS Publications Warehouse

    Casulli, V.; Cheng, R.T.

    1990-01-01

    In this paper stability and error analyses are discussed for some finite difference methods when applied to the one-dimensional shallow-water equations. Two finite difference formulations, which are based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) show that the method is unconditionally stable. This method, which is a generalized fixed grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of the time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM: at each time step a single tridiagonal system of linear equations is solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step. The semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support the properties given by the stability and error analyses.
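The unconditional stability of the explicit ELM comes from interpolating at characteristic departure points rather than differencing across cells. A minimal 1-D sketch (constant advection speed; all parameter values are made up):

```python
import numpy as np

# One explicit Eulerian-Lagrangian (semi-Lagrangian) step for 1-D
# advection u_t + c u_x = 0 on a fixed grid: trace each node back along
# its characteristic and interpolate the old field there.
def elm_step(u, c, dt, x):
    xd = x - c * dt             # departure points of the characteristics
    return np.interp(xd, x, u)  # interpolate old field at departure points

nx = 101
x = np.linspace(0.0, 1.0, nx)           # dx = 0.01
u0 = np.exp(-200.0 * (x - 0.3) ** 2)    # initial pulse centred at x = 0.3
c, dt = 1.0, 0.02                        # Courant number c*dt/dx = 2
u1 = elm_step(u0, c, dt, x)              # stable even though CFL > 1
```

The step remains stable at Courant number 2, where a conventional explicit upwind or centered scheme would blow up; the price is the interpolation's artificial viscosity noted in the abstract.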

  19. L1-Based Approximations of PDEs and Applications

    DTIC Science & Technology

    2012-09-05

    the analysis of the Navier-Stokes equations. The early versions of artificial viscosities being overly dissipative, the interest for these techniques ...Guermond, and B. Popov. Stability analysis of explicit entropy viscosity methods for non-linear scalar conservation equations. Math. Comp., 2012... methods for solving mathematical models of nonlinear phenomena such as nonlinear conservation laws, surface/image/data reconstruction problems

  20. Integrating planning, execution, and learning

    NASA Technical Reports Server (NTRS)

    Kuokka, Daniel R.

    1989-01-01

    To achieve the goal of building an autonomous agent, the usually disjoint capabilities of planning, execution, and learning must be used together. An architecture, called MAX, within which cognitive capabilities can be purposefully and intelligently integrated is described. The architecture supports the codification of capabilities as explicit knowledge that can be reasoned about. In addition, specific problem solving, learning, and integration knowledge is developed.

  1. Comprehension and computation in Bayesian problem solving

    PubMed Central

    Johnson, Eric D.; Tubau, Elisabet

    2015-01-01

    Humans have long been characterized as poor probabilistic reasoners when presented with explicit numerical information. Bayesian word problems provide a well-known example of this, where even highly educated and cognitively skilled individuals fail to adhere to mathematical norms. It is widely agreed that natural frequencies can facilitate Bayesian inferences relative to normalized formats (e.g., probabilities, percentages), both by clarifying logical set-subset relations and by simplifying numerical calculations. Nevertheless, between-study performance on “transparent” Bayesian problems varies widely, and generally remains rather unimpressive. We suggest there has been an over-focus on this representational facilitator (i.e., transparent problem structures) at the expense of the specific logical and numerical processing requirements and the corresponding individual abilities and skills necessary for providing Bayesian-like output given specific verbal and numerical input. We further suggest that understanding this task-individual pair could benefit from considerations from the literature on mathematical cognition, which emphasizes text comprehension and problem solving, along with contributions of online executive working memory, metacognitive regulation, and relevant stored knowledge and skills. We conclude by offering avenues for future research aimed at identifying the stages in problem solving at which correct vs. incorrect reasoners depart, and how individual differences might influence this time point. PMID:26283976
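The contrast between normalized probability formats and natural frequencies can be made concrete with the classic screening problem (the numbers below are the standard textbook illustration, not data from this article):

```python
from fractions import Fraction

# The same Bayesian word problem in two formats: a normalized-probability
# Bayes computation and its natural-frequency restatement yield the same
# posterior, but the frequency version exposes the set-subset structure.
base_rate = Fraction(1, 100)        # P(disease)
sens = Fraction(80, 100)            # P(positive | disease)
false_pos = Fraction(96, 1000)      # P(positive | no disease)

# probability format: Bayes' rule over normalized quantities
posterior = (sens * base_rate) / (sens * base_rate
                                  + false_pos * (1 - base_rate))

# natural-frequency format: imagine 1000 people
n = 1000
true_pos = sens * base_rate * n                     # 8 diseased test positive
false_pos_count = false_pos * (1 - base_rate) * n   # ~95 healthy test positive
posterior_freq = true_pos / (true_pos + false_pos_count)
```

The arithmetic in the frequency version is a single ratio of counts, which is the computational simplification the facilitation literature emphasizes.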

  2. Replicating the benefits of Deutschian closed timelike curves without breaking causality

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Assad, Syed M.; Thompson, Jayne; Haw, Jing Yan; Vedral, Vlatko; Ralph, Timothy C.; Lam, Ping Koy; Weedbrook, Christian; Gu, Mile

    2015-11-01

    In general relativity, closed timelike curves can break causality with remarkable and unsettling consequences. At the classical level, they induce causal paradoxes disturbing enough to motivate conjectures that explicitly prevent their existence. At the quantum level such problems can be resolved through the Deutschian formalism, however this induces radical benefits—from cloning unknown quantum states to solving problems intractable to quantum computers. Instinctively, one expects these benefits to vanish if causality is respected. Here we show that in harnessing entanglement, we can efficiently solve NP-complete problems and clone arbitrary quantum states—even when all time-travelling systems are completely isolated from the past. Thus, the many defining benefits of Deutschian closed timelike curves can still be harnessed, even when causality is preserved. Our results unveil a subtle interplay between entanglement and general relativity, and significantly improve the potential of probing the radical effects that may exist at the interface between relativity and quantum theory.

  3. Application of an enriched FEM technique in thermo-mechanical contact problems

    NASA Astrophysics Data System (ADS)

    Khoei, A. R.; Bahmani, B.

    2018-02-01

    In this paper, an enriched FEM technique is employed for thermo-mechanical contact problems based on the extended finite element method. A fully coupled thermo-mechanical contact formulation is presented in the framework of the X-FEM technique that takes into account deformable continuum mechanics and transient heat transfer analysis. The Coulomb frictional law is applied for the mechanical contact problem, and a pressure-dependent thermal contact model is employed through an explicit formulation in the weak form of the X-FEM method. The equilibrium equations are discretized by the Newmark time splitting method, and the final set of non-linear equations is solved by the Newton-Raphson method using a staggered algorithm. Finally, in order to illustrate the capability of the proposed computational model, several numerical examples are solved and the results are compared with those reported in the literature.
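The Newmark time-stepping named above can be sketched for a single-DOF linear system (average-acceleration variant; for a linear problem the Newton-Raphson loop collapses to one effective-stiffness solve per step, and all constants below are illustrative):

```python
import numpy as np

# Newmark time stepping for m*u'' + c*u' + k*u = f(t),
# average-acceleration variant (beta = 1/4, gamma = 1/2).
def newmark(m, c, k, f, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m          # consistent initial acceleration
    us = [u]
    keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    for n in range(1, nsteps + 1):
        t = n * dt
        feff = (f(t)
                + m * (u / (beta * dt ** 2) + v / (beta * dt)
                       + (0.5 / beta - 1.0) * a)
                + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                       + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = feff / keff                   # effective-stiffness solve
        v_new = (gamma / (beta * dt) * (u_new - u)
                 + (1.0 - gamma / beta) * v
                 + dt * (1.0 - gamma / (2.0 * beta)) * a)
        a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return np.array(us)

# undamped oscillator, omega = 2: exact solution u(t) = cos(2 t)
us = newmark(m=1.0, c=0.0, k=4.0, f=lambda t: 0.0,
             u0=1.0, v0=0.0, dt=0.01, nsteps=314)
```

In the nonlinear contact setting of the paper, the effective-stiffness solve is replaced by Newton-Raphson iterations within each Newmark step.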

  4. Sleep does not facilitate insight in older adults.

    PubMed

    Debarnot, Ursula; Rossi, Marta; Faraguna, Ugo; Schwartz, Sophie; Sebastiani, Laura

    2017-04-01

    Sleep has been shown to foster the process of insight generation in young adults during problem solving activities. Aging is characterized by substantial changes in sleep architecture that alter memory consolidation. Whether sleep might also promote the occurrence of insight in older adults has not yet been tested experimentally. To address this issue, we tested healthy young and old volunteers on an insight problem solving task, involving both explicit and implicit features, before and after a night of sleep or a comparable wakefulness period. Data showed that insight emerged significantly less frequently after a night of sleep in older adults compared to young adults. Moreover, there was no difference in the magnitude of insight occurrence following sleep and daytime consolidation in aged participants. We further found that acquisition of implicit knowledge in the task before sleep potentiated the gain of insight in young participants, but this effect was not observed in aged participants. Overall, the present findings demonstrate that a period of sleep does not significantly promote insight in problem solving in older adults. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. The Dreyfus model of clinical problem-solving skills acquisition: a critical perspective

    PubMed Central

    Peña, Adolfo

    2010-01-01

    Context The Dreyfus model describes how individuals progress through various levels in their acquisition of skills and subsumes ideas with regard to how individuals learn. Such a model is being accepted almost without debate from physicians to explain the ‘acquisition’ of clinical skills. Objectives This paper reviews such a model, discusses several controversial points, clarifies what kind of knowledge the model is about, and examines its coherence in terms of problem-solving skills. Dreyfus' main idea that intuition is a major aspect of expertise is also discussed in some detail. Relevant scientific evidence from cognitive science, psychology, and neuroscience is reviewed to accomplish these aims. Conclusions Although the Dreyfus model may partially explain the ‘acquisition’ of some skills, it is debatable if it can explain the acquisition of clinical skills. The complex nature of clinical problem-solving skills and the rich interplay between the implicit and explicit forms of knowledge must be taken into consideration when we want to explain ‘acquisition’ of clinical skills. The idea that experts work from intuition, not from reason, should be evaluated carefully. PMID:20563279

  6. A linked simulation-optimization model for solving the unknown groundwater pollution source identification problems.

    PubMed

    Ayvaz, M Tamer

    2010-09-20

    This study proposes a linked simulation-optimization model for solving unknown groundwater pollution source identification problems. In the proposed model, the MODFLOW and MT3DMS packages are used to simulate the flow and transport processes in the groundwater system. These models are then integrated with an optimization model based on the heuristic harmony search (HS) algorithm. In the proposed simulation-optimization model, the locations and release histories of the pollution sources are treated as the explicit decision variables and determined through the optimization model. Also, an implicit solution procedure is proposed to determine the optimum number of pollution sources, which is an advantage of this model. The performance of the proposed model is evaluated on two hypothetical examples for simple and complex aquifer geometries, measurement error conditions, and different HS solution parameter sets. The identified results indicate that the proposed simulation-optimization model is effective and may be used to solve inverse pollution source identification problems. Copyright (c) 2010 Elsevier B.V. All rights reserved.
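A minimal harmony search loop of the kind the optimization model is built on might look as follows (generic sphere objective; the HMCR, PAR, and bandwidth values are illustrative defaults, not the study's calibrated parameter sets):

```python
import numpy as np

rng = np.random.default_rng(0)

def harmony_search(f, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimal harmony search sketch. hms = harmony memory size,
    hmcr = memory considering rate, par = pitch adjusting rate,
    bw = bandwidth as a fraction of the variable range."""
    dim = len(lb)
    hm = rng.uniform(lb, ub, size=(hms, dim))       # harmony memory
    cost = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                 # recall from memory
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * rng.uniform(-1.0, 1.0) * (ub[j] - lb[j])
            else:                                   # random consideration
                new[j] = rng.uniform(lb[j], ub[j])
        new = np.clip(new, lb, ub)
        worst = int(np.argmax(cost))
        c = f(new)
        if c < cost[worst]:                         # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = int(np.argmin(cost))
    return hm[best], cost[best]

x_best, f_best = harmony_search(lambda v: float(np.sum(v ** 2)),
                                lb=np.array([-5.0, -5.0]),
                                ub=np.array([5.0, 5.0]))
```

In the linked model, each objective evaluation would be a full MODFLOW/MT3DMS forward simulation rather than a cheap analytic function, which is why a derivative-free heuristic like HS is attractive.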

  7. Teaching Creativity and Inventive Problem Solving in Science

    PubMed Central

    2009-01-01

    Engaging learners in the excitement of science, helping them discover the value of evidence-based reasoning and higher-order cognitive skills, and teaching them to become creative problem solvers have long been goals of science education reformers. But the means to achieve these goals, especially methods to promote creative thinking in scientific problem solving, have not become widely known or used. In this essay, I review the evidence that creativity is not a single hard-to-measure property. The creative process can be explained by reference to increasingly well-understood cognitive skills such as cognitive flexibility and inhibitory control that are widely distributed in the population. I explore the relationship between creativity and the higher-order cognitive skills, review assessment methods, and describe several instructional strategies for enhancing creative problem solving in the college classroom. Evidence suggests that instruction to support the development of creativity requires inquiry-based teaching that includes explicit strategies to promote cognitive flexibility. Students need to be repeatedly reminded and shown how to be creative, to integrate material across subject areas, to question their own assumptions, and to imagine other viewpoints and possibilities. Further research is required to determine whether college students' learning will be enhanced by these measures. PMID:19723812

  8. Teaching creativity and inventive problem solving in science.

    PubMed

    DeHaan, Robert L

    2009-01-01

    Engaging learners in the excitement of science, helping them discover the value of evidence-based reasoning and higher-order cognitive skills, and teaching them to become creative problem solvers have long been goals of science education reformers. But the means to achieve these goals, especially methods to promote creative thinking in scientific problem solving, have not become widely known or used. In this essay, I review the evidence that creativity is not a single hard-to-measure property. The creative process can be explained by reference to increasingly well-understood cognitive skills such as cognitive flexibility and inhibitory control that are widely distributed in the population. I explore the relationship between creativity and the higher-order cognitive skills, review assessment methods, and describe several instructional strategies for enhancing creative problem solving in the college classroom. Evidence suggests that instruction to support the development of creativity requires inquiry-based teaching that includes explicit strategies to promote cognitive flexibility. Students need to be repeatedly reminded and shown how to be creative, to integrate material across subject areas, to question their own assumptions, and to imagine other viewpoints and possibilities. Further research is required to determine whether college students' learning will be enhanced by these measures.

  9. Cognitive development in introductory physics: A research-based approach to curriculum reform

    NASA Astrophysics Data System (ADS)

    Teodorescu, Raluca Elena

    This project describes the research on a classification of physics problems in the context of introductory physics courses. This classification, called the Taxonomy of Introductory Physics Problems (TIPP), relates physics problems to the cognitive processes required to solve them. TIPP was created for designing and clarifying educational objectives, for developing assessments that can evaluate individual component processes of the problem-solving process, and for guiding curriculum design in introductory physics courses, specifically within the context of a "thinking-skills" curriculum. TIPP relies on the following resources: (1) cognitive research findings adopted by physics education research, (2) expert-novice research discoveries acknowledged by physics education research, (3) an educational psychology taxonomy for educational objectives, and (4) various collections of physics problems created by physics education researchers or developed by textbook authors. TIPP was used in the years 2006--2008 to reform the first semester of the introductory algebra-based physics course (called Phys 11) at The George Washington University. The reform sought to transform our curriculum into a "thinking-skills" curriculum that trades "breadth for depth" by focusing on fewer topics while targeting the students' cognitive development. We employed existing research on the physics problem-solving expert-novice behavior, cognitive science and behavioral science findings, and educational psychology recommendations. Our pedagogy relies on didactic constructs such as the GW-ACCESS problem-solving protocol, learning progressions and concept maps that we have developed and implemented in our introductory physics course. These tools were designed based on TIPP. 
Their purpose is: (1) to help students build local and global coherent knowledge structures, (2) to develop more context-independent problem-solving abilities, (3) to gain confidence in problem solving, and (4) to establish connections between everyday phenomena and underlying physics concepts. We organize traditional and research-based physics problems such that students experience a gradual increase in complexity related to problem context, problem features and cognitive processes needed to solve the problem. The instructional environment that we designed allows for explicit monitoring, control and measurement of the cognitive processes exercised during the instruction period. It is easily adaptable to any kind of curriculum and can be readily adjusted throughout the semester. To assess the development of students' problem-solving abilities, we created rubrics that measure specific aspects of the thinking involved in physics problem solving. The Colorado Learning Attitudes about Science Survey (CLASS) was administered pre- and post-instruction to determine students' shift in dispositions towards learning physics. The Force Concept Inventory (FCI) was administered pre- and post-instruction to determine students' level of conceptual understanding. The results feature improvements in students' problem-solving abilities and in their attitudes towards learning physics.

  10. Two alternative ways for solving the coordination problem in multilevel optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1991-01-01

    Two techniques are presented for formulating the coupling between levels in multilevel optimization by linear decomposition. They are proposed as improvements over the original formulation, now several years old, which relied on explicit equality constraints that application experience showed to occasionally cause numerical difficulties. The two new techniques represent the coupling without using explicit equality constraints, thus avoiding those difficulties and also reducing the computational cost of the procedure. The old and new formulations are presented in detail and illustrated by an example of a structural optimization. A generic version of the improved algorithm is also developed for applications to multidisciplinary systems not limited to structures.

  11. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
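To illustrate the mixed-integer setting only (GAMBIT's discrete/continuous model building is far more sophisticated than this), a toy (1+1) evolutionary search that mutates an integer and a continuous variable jointly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy (1+1) evolutionary search over a mixed-integer space: one integer
# variable z and one continuous variable x. The objective and all
# constants are made up for illustration.
def f(z, x):
    return (z - 3) ** 2 + (x - 0.5) ** 2

z, x = 0, 0.0
best = f(z, x)
for _ in range(500):
    z_new = z + int(rng.integers(-1, 2))   # integer mutation in {-1, 0, +1}
    x_new = x + rng.normal(0.0, 0.2)       # continuous Gaussian mutation
    c = f(z_new, x_new)
    if c < best:                           # elitist acceptance
        z, x, best = z_new, x_new, c
```

Here the two variable types mutate independently; the dependence between z and x that this ignores is exactly what GAMBIT's explicit mixed-dependence processing is designed to capture.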

  12. A three-dimensional method-of-characteristics solute-transport model (MOC3D)

    USGS Publications Warehouse

    Konikow, Leonard F.; Goode, D.J.; Hornberger, G.Z.

    1996-01-01

    This report presents a model, MOC3D, that simulates three-dimensional solute transport in flowing ground water. The model computes changes in concentration of a single dissolved chemical constituent over time that are caused by advective transport, hydrodynamic dispersion (including both mechanical dispersion and diffusion), mixing (or dilution) from fluid sources, and mathematically simple chemical reactions (including linear sorption, which is represented by a retardation factor, and decay). The transport model is integrated with MODFLOW, a three-dimensional ground-water flow model that uses implicit finite-difference methods to solve the transient flow equation. MOC3D uses the method of characteristics to solve the transport equation on the basis of the hydraulic gradients computed with MODFLOW for a given time step. This implementation of the method of characteristics uses particle tracking to represent advective transport and explicit finite-difference methods to calculate the effects of other processes. However, the explicit procedure has several stability criteria that may limit the size of time increments for solving the transport equation; these are automatically determined by the program. For improved efficiency, the user can apply MOC3D to a subgrid of the primary MODFLOW grid that is used to solve the flow equation. However, the transport subgrid must have uniform grid spacing along rows and columns. The report includes a description of the theoretical basis of the model, a detailed description of input requirements and output options, and the results of model testing and evaluation. The model was evaluated for several problems for which exact analytical solutions are available and by benchmarking against other numerical codes for selected complex problems for which no exact solutions are available. 
These test results indicate that the model is very accurate for a wide range of conditions and yields minimal numerical dispersion for advection-dominated problems. Mass-balance errors are generally less than 10 percent, and tend to decrease and stabilize with time.
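
The split described above, particle tracking along characteristics for advection plus an explicit finite-difference step for dispersion, can be illustrated in one dimension. This is a hypothetical sketch, not MOC3D itself: a concentration pulse is carried by tracer particles, mapped back to the grid, and diffused, with the time step chosen from both explicit stability criteria.

```python
import numpy as np

# 1D sketch of the method-of-characteristics idea (not the MOC3D code).
nx, L = 100, 1.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
v, D = 1.0, 1e-3                          # uniform velocity, dispersion coeff.

# Time step limited by both explicit stability criteria
# (analogous to MOC3D determining these automatically):
dt = 0.4 * min(dx / v, dx**2 / (2 * D))

# Initial condition: a pulse, sampled by one tracer particle per cell.
parts_x = x.copy()
parts_c = np.exp(-((x - 0.2) / 0.05) ** 2)

t = 0.0
while t < 0.3:
    # 1) Advection: move particles along characteristics dx/dt = v.
    parts_x = (parts_x + v * dt) % L
    # 2) Map particle concentrations back to the grid (cell averages).
    idx = (parts_x / dx).astype(int) % nx
    c = np.zeros(nx)
    counts = np.zeros(nx)
    np.add.at(c, idx, parts_c)
    np.add.at(counts, idx, 1.0)
    c = np.where(counts > 0, c / np.maximum(counts, 1), c)
    # 3) Dispersion: explicit central-difference diffusion on the grid.
    c += D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))
    # 4) Particles pick up the updated grid concentrations (simplified).
    parts_c = c[idx]
    t += dt

print("peak after transport:", c.max())
```

Because advection is handled by the particles rather than a grid difference, the scheme exhibits the minimal numerical dispersion the report describes for advection-dominated problems.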

  13. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    NASA Astrophysics Data System (ADS)

    de Almeida, Valmor F.

    2017-07-01

    A phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  14. Effects of Singapore Model Method with Explicit Instruction on Math Problem Solving Skills of Students at Risk for or Identified with Learning Disabilities

    ERIC Educational Resources Information Center

    Preston, Angela Irene

    2016-01-01

    Over the last two decades, students in Singapore consistently scored above students from other nations on the Trends in International Mathematics and Science Study (TIMSS; Provasnik et al., 2012). In contrast, students in the United States have not performed as well on international and national mathematics assessments and students with…

  15. Solutions of the Taylor-Green Vortex Problem Using High-Resolution Explicit Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2013-01-01

A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central-differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailley's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
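
The explicit central-differencing that such codes build on can be demonstrated with a simple derivative test. This sketch (not the WRLES code) shows why wider stencils, culminating in schemes like the 13-point DRP scheme, pay off in accuracy:

```python
import numpy as np

# Compare 2nd- and 4th-order explicit central differences on a sine wave.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
f = np.sin(x)
exact = np.cos(x)

def ddx_2nd(f, dx):
    # 3-point, 2nd-order central difference (periodic)
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def ddx_4th(f, dx):
    # 5-point, 4th-order central difference (periodic)
    return (8 * (np.roll(f, -1) - np.roll(f, 1))
            - (np.roll(f, -2) - np.roll(f, 2))) / (12 * dx)

err2 = np.abs(ddx_2nd(f, dx) - exact).max()
err4 = np.abs(ddx_4th(f, dx) - exact).max()
print(err2, err4)   # the wider stencil is markedly more accurate
```

DRP schemes go a step further: their coefficients are tuned to minimize dispersion error over a range of wavenumbers rather than to maximize formal order.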

  16. Inductive System for Reliable Magnesium Level Detection in a Titanium Reduction Reactor

    NASA Astrophysics Data System (ADS)

    Krauter, Nico; Eckert, Sven; Gundrum, Thomas; Stefani, Frank; Wondrak, Thomas; Frick, Peter; Khalilov, Ruslan; Teimurazov, Andrei

    2018-05-01

The determination of the magnesium level in a titanium reduction retort by inductive methods is often hampered by the formation of titanium sponge rings, which disturb the propagation of electromagnetic signals between excitation and receiver coils. We present a new method for the reliable identification of the magnesium level which explicitly takes into account the presence of sponge rings with unknown geometry and conductivity. The inverse problem is solved by a look-up-table method, based on the solution of the inductive forward problem for several tens of thousands of parameter combinations.
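
The look-up-table inversion can be sketched generically: precompute a forward model over a grid of parameter combinations, then select the entry whose predicted signals best match the measurement. The forward function below is a stand-in, not the authors' inductive model, and all parameter names are hypothetical.

```python
import numpy as np

def forward(level, ring_sigma, freqs):
    # Stand-in forward model (NOT the authors' inductive solver).
    return level * np.exp(-ring_sigma * freqs)

freqs = np.linspace(0.1, 1.0, 8)
levels = np.linspace(0.1, 2.0, 200)      # candidate fill levels
sigmas = np.linspace(0.0, 3.0, 150)      # candidate ring conductivities

# Build the table: one predicted response per parameter combination.
table = np.array([[forward(h, s, freqs) for s in sigmas] for h in levels])

# "Measure" a noisy signal for known parameters, then look it up.
true_h, true_s = 1.3, 1.1
rng = np.random.default_rng(0)
meas = forward(true_h, true_s, freqs) + 0.001 * rng.standard_normal(len(freqs))

misfit = np.sum((table - meas) ** 2, axis=-1)
i, j = np.unravel_index(np.argmin(misfit), misfit.shape)
print(levels[i], sigmas[j])   # close to (1.3, 1.1)
```

The expensive forward solves happen once, offline; the online identification is a cheap table search, which is what makes the approach attractive for monitoring.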

  17. Solving time-dependent two-dimensional eddy current problems

    NASA Technical Reports Server (NTRS)

    Lee, Min Eig; Hariharan, S. I.; Ida, Nathan

    1990-01-01

    Transient eddy current calculations are presented for an EM wave-scattering and field-penetrating case in which a two-dimensional transverse magnetic field is incident on a good (i.e., not perfect) and infinitely long conductor. The problem thus posed is of initial boundary-value interface type, where the boundary of the conductor constitutes the interface. A potential function is used for time-domain modeling of the situation, and finite difference-time domain techniques are used to march the potential function explicitly in time. Attention is given to the case of LF radiation conditions.

  18. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain including the air and earth, with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, to compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 is sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain to the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transformation (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to get the final full-waveform, full-time response of the system in the time domain.
In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.
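
Step (2) of that workflow, explicit time stepping of the fictitious wave equation under a CFL limit, can be sketched in one dimension. This is a minimal leapfrog update; the CPML boundaries, 3D grid, and the diffusion-wave transforms of steps (1) and (3)-(5) are omitted.

```python
import numpy as np

# Explicit leapfrog stepping of a 1D fictitious wave equation u_tt = c^2 u_xx.
nx = 200
dx = 1.0
c = np.ones(nx)          # fictitious wave speed (resistivity-dependent)
dt = 0.9 * dx / c.max()  # CFL condition for the explicit scheme

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0         # impulsive source at the center

for _ in range(50):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next

print("cells reached by the wavefront:", np.count_nonzero(np.abs(u) > 1e-12))
```

The fictitious time step scales with the local wave speed, which is why the method keeps air resistivity (and hence the speed contrast) as modest as accuracy allows.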

  19. User's guide for NASCRIN: A vectorized code for calculating two-dimensional supersonic internal flow fields

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1984-01-01

A computer program, NASCRIN, has been developed for analyzing two-dimensional flow fields in high-speed inlets. It solves the two-dimensional Euler or Navier-Stokes equations in conservation form by an explicit, two-step finite-difference method. An explicit-implicit method can also be used at the user's discretion for viscous flow calculations. For turbulent flow, an algebraic, two-layer eddy-viscosity model is used. The code is operational on the CDC CYBER 203 computer system and is highly vectorized to take full advantage of the vector-processing capability of the system. It is highly user-oriented and is structured in such a way that for most supersonic flow problems, the user has to make only a few changes. Although the code is primarily written for supersonic internal flow, it can be used, with suitable changes in the boundary conditions, for a variety of other problems.
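
An explicit two-step finite-difference method of the kind mentioned above can be sketched with a MacCormack-type predictor-corrector applied to linear advection. Whether NASCRIN uses exactly this variant is an assumption; the real code solves the Euler/Navier-Stokes equations in conservation form.

```python
import numpy as np

# MacCormack-type explicit two-step scheme for u_t + a u_x = 0 (periodic).
nx, a = 200, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a          # CFL-limited explicit time step
x = np.arange(nx) * dx
u = np.exp(-((x - 0.3) / 0.05) ** 2)
u0 = u.copy()

for _ in range(100):
    # predictor: forward difference
    up = u - a * dt / dx * (np.roll(u, -1) - u)
    # corrector: backward difference on the predicted state
    u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))

print("mass before:", u0.sum() * dx, "after:", u.sum() * dx)
```

For this linear problem the two-step scheme reduces to Lax-Wendroff; the predictor-corrector form vectorizes naturally, which suits machines like the CYBER 203.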

  20. Local Minima Free Parameterized Appearance Models

    PubMed Central

    Nguyen, Minh Hoai; De la Torre, Fernando

    2010-01-01

    Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750

  1. The pulsating orb: solving the wave equation outside a ball

    PubMed Central

    2016-01-01

    Transient acoustic waves are generated by the oscillations of an object or are scattered by the object. This leads to initial-boundary value problems (IBVPs) for the wave equation. Basic properties of this equation are reviewed, with emphasis on characteristics, wavefronts and compatibility conditions. IBVPs are formulated and their properties reviewed, with emphasis on weak solutions and the constraints imposed by the underlying continuum mechanics. The use of the Laplace transform to treat the IBVPs is also reviewed, with emphasis on situations where the solution is discontinuous across wavefronts. All these notions are made explicit by solving simple IBVPs for a sphere in some detail. PMID:27279773

  2. High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2015-03-01

In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund, NPRP grant #5-674-1-114.

  3. Cognitive skill learning and schizophrenia: implications for cognitive remediation.

    PubMed

    Michel, L; Danion, J M; Grangé, D; Sandner, G

    1998-10-01

    The ability to acquire a motor and cognitive skill was investigated in 26 patients with schizophrenia and 26 normal participants using repeated testing on the Tower of Toronto puzzle. Seven patients with defective performance were retested using additional trials and immediate feedback designed to facilitate problem solving. A component analysis of performance was used based on J. R. Anderson's (1987) model of cognitive skill learning. Patients exhibited a performance deficit on both motor and cognitive skills. However, their acquisition rate was similar to that of normal participants on most parameters, indicating that skill learning suffered little or no impairment. Performance deficit was accounted for by poor problem-solving ability, explicit memory, and general intellectual capacities. It was remediable in some, but not all, patients. Remediation failure was also related to severe defects of cognitive functions.

  4. Optimal Control of Stochastic Systems Driven by Fractional Brownian Motions

    DTIC Science & Technology

    2014-10-09

problems for stochastic partial differential equations driven by fractional Brownian motions are explicitly solved. For the control of a continuous time… linear systems with Brownian motion or a discrete time linear system with a white Gaussian noise and costs… Keywords: stochastic optimal control, fractional Brownian motion, stochastic…

  5. Subsurface Electromagnetic Induction Imaging for Unexploded Ordnance Detection

    DTIC Science & Technology

    2012-01-01

Baum, 1999; Pasion and Oldenburg, 2001). The EMI-response problem has been solved analytically for spheroids (Ao et al., 2002; Barrowes et al., 2004)… components. We also have made explicit the fact that the polarizabilities are always positive (Pasion et al., 2008); we impose this constraint in the… Wiley-Blackwell, Chichester, UK. Pasion, L.R., Oldenburg, D.W., 2001. A discrimination algorithm for UXO using time-domain electromagnetic induction…

  6. Electromagnetic van Kampen waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ignatov, A. M., E-mail: aign@fpl.gpi.ru

    2017-01-15

    The theory of van Kampen waves in plasma with an arbitrary anisotropic distribution function is developed. The obtained solutions are explicitly expressed in terms of the permittivity tensor. There are three types of perturbations, one of which is characterized by the frequency dependence on the wave vector, while for the other two, the dispersion relation is lacking. Solutions to the conjugate equations allowing one to solve the initial value problem are analyzed.

  7. Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zuchowski, Loïc; Brun, Michael; De Martin, Florent

    2018-05-01

The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in a soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.

  8. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
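
The unconstrained core of this formulation can be sketched: Euler's rigid-body equation tau = I*wdot + w x (I*w) is linear in the six unique entries of I, so stacked test data yield an ordinary least-squares problem. The paper's constrained version additionally imposes LMI bounds and solves an SDP, which needs a semidefinite solver and is omitted here; the data below are synthetic.

```python
import numpy as np

def K(w):
    # Maps the 6-vector p = [Ixx, Iyy, Izz, Ixy, Ixz, Iyz] to I @ w.
    wx, wy, wz = w
    return np.array([[wx, 0, 0, wy, wz, 0],
                     [0, wy, 0, wx, 0, wz],
                     [0, 0, wz, 0, wx, wy]])

def skew(w):
    # Cross-product matrix: skew(w) @ v == np.cross(w, v).
    wx, wy, wz = w
    return np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])

rng = np.random.default_rng(1)
I_true = np.array([[5.0, 0.1, 0.2], [0.1, 4.0, 0.3], [0.2, 0.3, 3.0]])

rows, rhs = [], []
for _ in range(50):
    w = rng.standard_normal(3)       # angular velocity sample
    wdot = rng.standard_normal(3)    # angular acceleration sample
    tau = I_true @ wdot + np.cross(w, I_true @ w)   # Euler's equation
    rows.append(K(wdot) + skew(w) @ K(w))           # tau is linear in p
    rhs.append(tau)

A = np.vstack(rows)
b = np.concatenate(rhs)
p, *_ = np.linalg.lstsq(A, b, rcond=None)
I_est = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
print(np.round(I_est, 6))
```

With noisy data this unconstrained estimate can violate physical bounds (e.g. positive definiteness and the triangle inequalities on principal moments), which is precisely what the LMI-constrained SDP formulation prevents.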

  9. Solving time-dependent two-dimensional eddy current problems

    NASA Technical Reports Server (NTRS)

    Lee, Min Eig; Hariharan, S. I.; Ida, Nathan

    1988-01-01

    Results of transient eddy current calculations are reported. For simplicity, a two-dimensional transverse magnetic field which is incident on an infinitely long conductor is considered. The conductor is assumed to be a good but not perfect conductor. The resulting problem is an interface initial boundary value problem with the boundary of the conductor being the interface. A finite difference method is used to march the solution explicitly in time. The method is shown. Treatment of appropriate radiation conditions is given special consideration. Results are validated with approximate analytic solutions. Two stringent test cases of high and low frequency incident waves are considered to validate the results.

  10. An integral equation method for the homogenization of unidirectional fibre-reinforced media; antiplane elasticity and other potential problems.

    PubMed

    Joyce, Duncan; Parnell, William J; Assier, Raphaël C; Abrahams, I David

    2017-05-01

    In Parnell & Abrahams (2008 Proc. R. Soc. A 464 , 1461-1482. (doi:10.1098/rspa.2007.0254)), a homogenization scheme was developed that gave rise to explicit forms for the effective antiplane shear moduli of a periodic unidirectional fibre-reinforced medium where fibres have non-circular cross section. The explicit expressions are rational functions in the volume fraction. In that scheme, a (non-dilute) approximation was invoked to determine leading-order expressions. Agreement with existing methods was shown to be good except at very high volume fractions. Here, the theory is extended in order to determine higher-order terms in the expansion. Explicit expressions for effective properties can be derived for fibres with non-circular cross section, without recourse to numerical methods. Terms appearing in the expressions are identified as being associated with the lattice geometry of the periodic fibre distribution, fibre cross-sectional shape and host/fibre material properties. Results are derived in the context of antiplane elasticity but the analogy with the potential problem illustrates the broad applicability of the method to, e.g. thermal, electrostatic and magnetostatic problems. The efficacy of the scheme is illustrated by comparison with the well-established method of asymptotic homogenization where for fibres of general cross section, the associated cell problem must be solved by some computational scheme.

  11. An integral equation method for the homogenization of unidirectional fibre-reinforced media; antiplane elasticity and other potential problems

    PubMed Central

    Joyce, Duncan

    2017-01-01

    In Parnell & Abrahams (2008 Proc. R. Soc. A 464, 1461–1482. (doi:10.1098/rspa.2007.0254)), a homogenization scheme was developed that gave rise to explicit forms for the effective antiplane shear moduli of a periodic unidirectional fibre-reinforced medium where fibres have non-circular cross section. The explicit expressions are rational functions in the volume fraction. In that scheme, a (non-dilute) approximation was invoked to determine leading-order expressions. Agreement with existing methods was shown to be good except at very high volume fractions. Here, the theory is extended in order to determine higher-order terms in the expansion. Explicit expressions for effective properties can be derived for fibres with non-circular cross section, without recourse to numerical methods. Terms appearing in the expressions are identified as being associated with the lattice geometry of the periodic fibre distribution, fibre cross-sectional shape and host/fibre material properties. Results are derived in the context of antiplane elasticity but the analogy with the potential problem illustrates the broad applicability of the method to, e.g. thermal, electrostatic and magnetostatic problems. The efficacy of the scheme is illustrated by comparison with the well-established method of asymptotic homogenization where for fibres of general cross section, the associated cell problem must be solved by some computational scheme. PMID:28588412

  12. Development and Implementation of a Transport Method for the Transport and Reaction Simulation Engine (TaRSE) based on the Godunov-Mixed Finite Element Method

    USGS Publications Warehouse

    James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael

    2009-01-01

    A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
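
The time-splitting idea can be sketched with a simplified finite-difference analogue (the real engine uses a Godunov finite-volume advection step and mixed finite elements): an explicit upwind advection step limited by a CFL condition, followed by an implicit dispersion step that is unconditionally stable.

```python
import numpy as np

# Operator splitting: explicit upwind advection + implicit (backward Euler)
# dispersion, on a periodic 1D grid with a sharp solute front.
nx, v, D = 100, 1.0, 1e-3
dx = 1.0 / nx
dt = 0.8 * dx / v          # only the explicit advection step limits dt
x = (np.arange(nx) + 0.5) * dx
c = np.where(np.abs(x - 0.2) < 0.05, 1.0, 0.0)   # sharp front

# Implicit dispersion operator (periodic Laplacian): (I - dt*D*L) c_new = c
L = (np.roll(np.eye(nx), -1, 1) - 2 * np.eye(nx)
     + np.roll(np.eye(nx), 1, 1)) / dx**2
Ainv = np.linalg.inv(np.eye(nx) - dt * D * L)

for _ in range(60):
    c = c - v * dt / dx * (c - np.roll(c, 1))   # explicit upwind advection
    c = Ainv @ c                                # implicit dispersion
print("front max:", c.max(), "total mass:", c.sum() * dx)
```

As the abstract notes, the expensive implicit step could be taken less often, spanning several cheap explicit advection steps, without changing the structure above.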

  13. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE PAGES

    Liu, Jianfeng; Laird, Carl Damon

    2017-09-22

Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored solution approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem and the encouraging numerical results indicate that our solution framework is promising in solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  14. A global stochastic programming approach for the optimal placement of gas detectors with nonuniform unavailabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jianfeng; Laird, Carl Damon

Optimal design of a gas detection system is challenging because of the numerous sources of uncertainty, including weather and environmental conditions, leak location and characteristics, and process conditions. Rigorous CFD simulations of dispersion scenarios combined with stochastic programming techniques have been successfully applied to the problem of optimal gas detector placement; however, rigorous treatment of sensor failure and nonuniform unavailability has received less attention. To improve reliability of the design, this paper proposes a problem formulation that explicitly considers nonuniform unavailabilities and all backup detection levels. The resulting sensor placement problem is a large-scale mixed-integer nonlinear programming (MINLP) problem that requires a tailored solution approach for efficient solution. We have developed a multitree method which depends on iteratively solving a sequence of upper-bounding master problems and lower-bounding subproblems. The tailored global solution strategy is tested on a real data problem and the encouraging numerical results indicate that our solution framework is promising in solving sensor placement problems. This study was selected for the special issue in JLPPI from the 2016 International Symposium of the MKO Process Safety Center.

  15. HEATING 7.1 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1991-07-01

HEATING is a FORTRAN program designed to solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-boundary or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General graybody radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING is variably dimensioned and utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct solution (for one-dimensional or two-dimensional problems), and conjugate gradient. Transient problems may be solved using one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method (which for some circumstances allows a time step greater than the CEP stability criterion). The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
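
The Classical Explicit Procedure mentioned above, together with its stability criterion on the time step, can be sketched for 1D transient conduction. This is a generic sketch, not HEATING itself:

```python
import numpy as np

# Classical Explicit Procedure (forward-time, centered-space) for 1D
# conduction, with the explicit stability limit dt <= dx^2 / (2 * alpha)
# that schemes like the Levy method are designed to relax.
nx = 50
dx = 0.01                         # grid spacing, m
alpha = 1e-5                      # thermal diffusivity, m^2/s
dt = 0.5 * dx**2 / (2 * alpha)    # half the CEP stability limit

T = np.full(nx, 20.0)             # initial temperature, deg C
T[0], T[-1] = 100.0, 100.0        # fixed boundary temperatures

for _ in range(2000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print("midpoint temperature:", T[nx // 2])
```

Exceeding the stability limit makes this update diverge, which is why implicit schemes such as Crank-Nicolson or CIP are preferred when large time steps are needed.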

  16. Solving the dynamic ambulance relocation and dispatching problem using approximate dynamic programming

    PubMed Central

    Schmid, Verena

    2012-01-01

Emergency service providers are supposed to locate ambulances such that, in case of emergency, patients can be reached in a time-efficient manner. Two fundamental decisions and choices need to be made in real time. First of all, immediately after a request emerges, an appropriate vehicle needs to be dispatched and sent to the request's site. After having served a request, the vehicle needs to be relocated to its next waiting location. We are going to propose a model and solve the underlying optimization problem using approximate dynamic programming (ADP), an emerging and powerful tool for solving stochastic and dynamic problems typically arising in the field of operations research. Empirical tests based on real data from the city of Vienna indicate that by deviating from the classical dispatching rules the average response time can be decreased from 4.60 to 4.01 minutes, which corresponds to an improvement of 12.89%. Furthermore, we show that it is essential to explicitly consider time-dependent information such as travel times and changes with respect to the request volume. Ignoring the current time and its consequences during the stages of modeling and optimization leads to suboptimal decisions. PMID:25540476

  17. An empirical evaluation of graphical interfaces to support flight planning

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Mccoy, Elaine; Layton, Chuck; Bihari, Tom

    1995-01-01

    Whether optimization techniques or expert systems technologies are used, the underlying inference processes and the model or knowledge base for a computerized problem-solving system are likely to be incomplete for any given complex, real-world task. To deal with the resultant brittleness, it has been suggested that 'cooperative' rather than 'automated' problem-solving systems be designed. Such cooperative systems are proposed to explicitly enhance the collaboration of people and the computer system when working in partnership to solve problems. This study evaluates the impact of alternative design concepts on the performance of airline pilots interacting with such a cooperative system designed to support enroute flight planning. Thirty pilots were studied using three different versions of the system. The results clearly demonstrate that different system design concepts can strongly influence the cognitive processes of users. Indeed, one of the designs studied caused four times as many pilots to accept a poor flight amendment. Based on think-aloud protocols, cognitive models are proposed to account for how features of the computer system interacted with specific types of scenarios to influence exploration and decision-making by the pilots. The results are then used to develop recommendations for guiding the design of cooperative systems.

  18. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, Valmor F.

In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres, were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  19. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    DOE PAGES

    de Almeida, Valmor F.

    2017-04-19

    In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  20. Evaluation of the eigenvalue method in the solution of transient heat conduction problems

    NASA Astrophysics Data System (ADS)

    Landry, D. W.

    1985-01-01

    The eigenvalue method is evaluated to determine its advantages and disadvantages as compared to fully explicit, fully implicit, and Crank-Nicolson methods. Time and accuracy comparisons are made in an effort to rank the eigenvalue method against the comparison schemes. The eigenvalue method is used to solve the parabolic heat equation in multiple dimensions with transient temperatures. Extensions into three dimensions are made to determine the method's feasibility for large-geometry problems requiring large numbers of internal mesh points. The eigenvalue method proves to be slightly more accurate than the comparison routines because it treats the time derivative in the heat equation exactly rather than by numerical approximation. It has the potential of being a very powerful routine for solving long transient problems. The method is not well suited to finely meshed grids or large regions because of the time and memory required to calculate large sets of eigenvalues and eigenvectors.
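    The trade-off described above can be sketched in a few lines: for the semi-discrete 1-D heat equation, diagonalizing the symmetric difference operator gives an exact-in-time solution, which a time-marching scheme such as Crank-Nicolson only approximates. The grid size, diffusivity, and step count below are illustrative choices, not values from the report.

    ```python
    import numpy as np

    def laplacian_1d(n, dx):
        """Second-difference matrix for zero Dirichlet boundaries."""
        return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                + np.diag(np.ones(n - 1), -1)) / dx**2

    def eigen_solution(u0, A, alpha, t):
        """Exact-in-time solution of du/dt = alpha*A*u via eigendecomposition;
        A is symmetric, so its eigenvectors are orthonormal."""
        lam, V = np.linalg.eigh(A)
        return V @ (np.exp(alpha * lam * t) * (V.T @ u0))

    def crank_nicolson(u0, A, alpha, t, steps):
        """Reference time-marching solution (trapezoidal rule)."""
        dt = t / steps
        I = np.eye(len(u0))
        M = np.linalg.solve(I - 0.5 * dt * alpha * A, I + 0.5 * dt * alpha * A)
        u = u0.copy()
        for _ in range(steps):
            u = M @ u
        return u

    n, L, alpha, t = 50, 1.0, 1.0e-2, 5.0   # illustrative values
    dx = L / (n + 1)
    x = np.linspace(dx, L - dx, n)
    u0 = np.sin(np.pi * x)                  # lowest mode of the discrete operator
    A = laplacian_1d(n, dx)

    u_eig = eigen_solution(u0, A, alpha, t)
    u_cn = crank_nicolson(u0, A, alpha, t, 2000)
    print(np.max(np.abs(u_eig - u_cn)))     # tiny: only the CN time error remains
    ```

    The eigendecomposition is done once and any output time follows at the cost of a matrix-vector product, which is the report's "exact treatment of the time derivative"; the cost of `eigh` on a large 3-D mesh is exactly the drawback noted above.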

  1. Beyond input-output computings: error-driven emergence with parallel non-distributed slime mold computer.

    PubMed

    Aono, Masashi; Gunji, Yukio-Pegio

    2003-10-01

    Emergence derived from errors is of key importance for both novel computing and novel usage of the computer. In this paper, we propose an implementable experimental plan for biological computing intended to elicit the emergent properties of complex systems. An individual plasmodium of the true slime mold Physarum polycephalum acts as the slime mold computer. Modifying the Elementary Cellular Automaton so that it entails the global synchronization problem of parallel computing yields the NP-complete problem to be solved by the slime mold computer. The possibility of solving the problem while supplying neither all possible results nor an explicit prescription for solution-seeking is discussed. In slime mold computing, the distributivity of the local computing logic can change dynamically, and its parallel non-distributed computing cannot be reduced to the spatial addition of multiple serial computings. The computing system, based on the exhaustive absence of a super-system, may produce something more than a filling of the vacancy.

  2. Facility Composer Design Wizards: A Method for Extensible Codified Design Logic Based on Explicit Facility Criteria

    DTIC Science & Technology

    2004-11-01

    institutionalized approaches to solving problems, company/client specific mission priorities (for example, State Department vs. Army Reserve and... independent variables that let the user leave a particular step before finishing all the items, and to return at a later time without any data loss. One... Sales, Main Exchange, Miscellaneous Shops, Post Office, Restaurant, and Theater.) Authorized customers served 04 Other criteria provided by the

  3. Mathematical modelling of the beam under axial compression force applied at any point – the buckling problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnucka-Blandzi, Ewa

    The study is devoted to the stability of a simply supported beam under axial compression. The beam is subjected to an axial load located at any point along its axis. The buckling problem has been described and solved mathematically, and critical loads have been calculated. In a particular case, Euler's buckling load is recovered. Explicit solutions are given. The values of critical loads are collected in tables and shown in a figure. The relation between the point of load application and the critical load is presented.
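    The classical limit recovered in the paper is easy to check numerically: for a pinned-pinned column satisfying EI u'' + P u = 0 with u(0) = u(L) = 0, the critical load is EI times the smallest eigenvalue of the discretized operator -d^2/dx^2. The material and section values below are hypothetical.

    ```python
    import numpy as np

    # Pinned-pinned column: EI*u'' + P*u = 0, u(0) = u(L) = 0, so P_cr is EI
    # times the smallest Dirichlet eigenvalue of -d^2/dx^2 on (0, L).
    E, I_sec, L, n = 210e9, 1.0e-6, 2.0, 200   # hypothetical steel column (SI)
    dx = L / (n + 1)
    D = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / dx**2

    mu = np.linalg.eigvalsh(D)[0]              # smallest eigenvalue, ~ (pi/L)^2
    P_cr = E * I_sec * mu
    P_euler = np.pi**2 * E * I_sec / L**2      # Euler's buckling load
    print(P_cr, P_euler)                       # agree to well under 0.1% here
    ```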

  4. QA4, a language for artificial intelligence.

    NASA Technical Reports Server (NTRS)

    Derksen, J. A. C.

    1973-01-01

    Introduction of a language for problem solving, specifically robot planning, program verification and synthesis, and theorem proving. This language, called question-answerer 4 (QA4), embodies many features that have been found useful for constructing problem solvers but that must be programmed explicitly by the user of a conventional language. The most important features of QA4 are described, and examples are provided for most of the material introduced. Language features include backtracking, parallel processing, pattern matching, set manipulation, and pattern-triggered function activation. The language is most convenient for interactive use and has extensive trace and edit facilities.

  5. A tensor Banach algebra approach to abstract kinetic equations

    NASA Astrophysics Data System (ADS)

    Greenberg, W.; van der Mee, C. V. M.

    The study deals with a concrete algebraic construction providing the existence theory for abstract kinetic equation boundary-value problems, when the collision operator A is an accretive finite-rank perturbation of the identity operator in a Hilbert space H. An algebraic generalization of the Bochner-Phillips theorem is utilized to study solvability of the abstract boundary-value problem without any regularity condition. A Banach algebra in which the convolution kernel acts is obtained explicitly, and this result is used to prove a perturbation theorem for bisemigroups, which then plays a vital role in solving the original equations.

  6. Stability of mixing layers

    NASA Technical Reports Server (NTRS)

    Tam, Christopher; Krothapalli, A

    1993-01-01

    The research program for the first year of this project (see the original research proposal) consists of developing an explicit marching scheme for solving the parabolized stability equations (PSE). Implicit in this task is mathematical analysis of the computational algorithm, including numerical stability analysis and determination of the proper conditions needed at the boundary of the computational domain. Before one can solve the parabolized stability equations for high-speed mixing layers, the mean flow must first be found. In the past, instability analysis of high-speed mixing layers has mostly been performed on mean flow profiles calculated from the boundary layer equations. In carrying out this project, it is believed that the boundary layer equations might not give an accurate enough nonparallel, nonlinear mean flow for parabolized stability analysis. A more accurate mean flow can, however, be found by solving the parabolized Navier-Stokes equations, whose accuracy is consistent with the PSE method. Furthermore, the method of solution is similar. Hence, the major part of this year's effort has been devoted to the development of an explicit numerical marching scheme for the solution of the parabolized Navier-Stokes equations as applied to the high-speed mixing layer problem.

  7. High-order finite-volume solutions of the steady-state advection-diffusion equation with nonlinear Robin boundary conditions

    NASA Astrophysics Data System (ADS)

    Lin, Zhi; Zhang, Qinghai

    2017-09-01

    We propose high-order finite-volume schemes for numerically solving the steady-state advection-diffusion equation with nonlinear Robin boundary conditions. Although the original motivation comes from a mathematical model of blood clotting, the nonlinear boundary conditions may also apply to other scientific problems. The main contribution of this work is a generic algorithm for generating third-order, fourth-order, and even higher-order explicit ghost-filling formulas to enforce nonlinear Robin boundary conditions in multiple dimensions. Under the framework of finite volume methods, this appears to be the first algorithm of its kind. Numerical experiments on boundary value problems show that the proposed fourth-order formula can be much more accurate and efficient than a simple second-order formula. Furthermore, the proposed ghost-filling formulas may also be useful for solving other partial differential equations.
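    The paper's third- and fourth-order formulas are not reproduced here, but the idea of ghost filling can be illustrated with a simple second-order, linear-Robin version in 1-D (a nonlinear condition would additionally require a Newton solve at the face). The coefficients and grid spacing below are arbitrary.

    ```python
    def robin_ghost(u0, g, a, b, dx):
        """Second-order ghost-cell value for a Robin condition a*u + b*du/dn = g
        at the left boundary of a 1-D finite-volume grid (outward normal is -x):
        the face value and the normal derivative are both centered between the
        ghost cell and the first interior cell, then solved for the ghost value."""
        return (g - (a / 2.0 - b / dx) * u0) / (a / 2.0 + b / dx)

    # Verify on a manufactured linear profile u(x) = 1 + 4x, for which a
    # centered second-order formula is exact; coefficients are made up.
    a, b, dx = 2.0, 3.0, 0.1
    u_exact = lambda x: 1.0 + 4.0 * x
    g = a * u_exact(0.0) + b * (-4.0)          # du/dn = -u'(0) at the left face
    ghost = robin_ghost(u_exact(0.5 * dx), g, a, b, dx)
    print(ghost, u_exact(-0.5 * dx))           # both 0.8
    ```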

  8. Quantum integrability and functional equations

    NASA Astrophysics Data System (ADS)

    Volin, Dmytro

    2010-03-01

    In this thesis a general procedure to represent the integral Bethe Ansatz equations in the form of the Reimann-Hilbert problem is given. This allows us to study in simple way integrable spin chains in the thermodynamic limit. Based on the functional equations we give the procedure that allows finding the subleading orders in the solution of various integral equations solved to the leading order by the Wiener-Hopf technics. The integral equations are studied in the context of the AdS/CFT correspondence, where their solution allows verification of the integrability conjecture up to two loops of the strong coupling expansion. In the context of the two-dimensional sigma models we analyze the large-order behavior of the asymptotic perturbative expansion. Obtained experience with the functional representation of the integral equations allowed us also to solve explicitly the crossing equations that appear in the AdS/CFT spectral problem.

  9. Looking for Creativity: Where Do We Look When We Look for New Ideas?

    PubMed Central

    Salvi, Carola; Bowden, Edward M.

    2016-01-01

    Recent work using the eye movement monitoring technique has demonstrated that when people are engaged in thought they tend to disengage from the external world by blinking or fixating on an empty portion of the visual field, such as a blank wall, or out the window at the sky. This ‘looking at nothing’ behavior has been observed during thinking that does not explicitly involve visual imagery (mind wandering, insight in problem solving, memory encoding and search) and it is associated with reduced analysis of the external visual environment. Thus, it appears to indicate (and likely facilitate) a shift of attention from external to internal stimuli that benefits creativity and problem solving by reducing the cognitive load and enhancing attention to internally evolving activation. We briefly mention some possible reasons to collect eye movement data in future studies of creativity. PMID:26913018

  10. Stuck in the moment: cognitive inflexibility in preschoolers following an extended time period

    PubMed Central

    Garcia, Carolina; Dick, Anthony Steven

    2013-01-01

    Preschoolers display surprising inflexibility in problem solving, but seem to approach new challenges with a fresh slate. We provide evidence that while the former is true the latter is not. Here, we examined whether brief exposure to stimuli can influence children’s problem solving following several weeks after first exposure to the stimuli. We administered a common executive function task, the Dimensional Change Card Sort, which requires children to sort picture cards by one dimension (e.g., color) and then switch to sort the same cards by a conflicting dimension (e.g., shape). After a week or after a month delay, we administered the second rule again. We found that 70% of preschoolers continued to sort by the initial sorting rule, even after a month delay, and even though they are explicitly told what to do. We discuss implications for theories of executive function development, and for classroom learning. PMID:24399978

  11. Integer-ambiguity resolution in astronomy and geodesy

    NASA Astrophysics Data System (ADS)

    Lannes, A.; Prieur, J.-L.

    2014-02-01

    Recent theoretical developments in astronomical aperture synthesis have revealed the existence of integer-ambiguity problems. Those problems, which appear in the self-calibration procedures of radio imaging, have been shown to be similar to the nearest-lattice point (NLP) problems encountered in high-precision geodetic positioning and in global navigation satellite systems. In this paper we analyse the theoretical aspects of the matter and propose new methods for solving those NLP problems. The related optimization aspects concern both the preconditioning stage, and the discrete-search stage in which the integer ambiguities are finally fixed. Our algorithms, which are described in an explicit manner, can easily be implemented. They lead to substantial gains in the processing time of both stages. Their efficiency was shown via intensive numerical tests.
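    The authors' algorithms are not reproduced here, but the underlying nearest-lattice-point problem is easy to state in code: given a basis B and a target y, find the integer vector z minimizing ||Bz - y||. A minimal sketch contrasts cheap coordinate rounding (Babai's rounding technique) with exhaustive search on a small 2-D example; the basis and target are made up.

    ```python
    import itertools
    import numpy as np

    def babai_rounding(B, y):
        """Cheap approximate nearest-lattice-point: round the real coordinates
        of y in the basis B. Reliable only when B is reasonably orthogonal."""
        return np.round(np.linalg.solve(B, y)).astype(int)

    def brute_force_nlp(B, y, radius=5):
        """Exact nearest lattice point by exhaustive search over a small box."""
        best, best_d = None, np.inf
        for z in itertools.product(range(-radius, radius + 1), repeat=B.shape[1]):
            d = np.linalg.norm(B @ np.array(z) - y)
            if d < best_d:
                best, best_d = np.array(z), d
        return best

    B = np.array([[2.0, 0.5],
                  [0.0, 1.0]])   # made-up, well-conditioned basis
    y = np.array([3.4, 1.7])
    print(babai_rounding(B, y), brute_force_nlp(B, y))   # both [1 2] here
    ```

    Preconditioning (lattice basis reduction) matters because rounding degrades as the basis becomes skewed, which is why practical solvers such as those discussed above spend effort on it before the discrete search.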

  12. A unified monolithic approach for multi-fluid flows and fluid-structure interaction using the Particle Finite Element Method with fixed mesh

    NASA Astrophysics Data System (ADS)

    Becker, P.; Idelsohn, S. R.; Oñate, E.

    2015-06-01

    This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.

  13. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit, adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by explicitly preventing contact. Introducing contact constraints results in a significant increase in the stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  14. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
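    A minimal sketch of Tikhonov regularization for a first-kind integral equation, in the spirit of the method described: the kernel below is a generic Gaussian smoothing kernel standing in for the Abel kernel, and the noise level and regularization parameter are illustrative, not the paper's choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Discretized first-kind integral equation K f = g with a noisy right-hand
    # side; a Gaussian smoothing kernel stands in for the Abel kernel.
    n = 80
    t = np.linspace(0.0, 1.0, n)
    K = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) * (t[1] - t[0])
    f_true = np.sin(2.0 * np.pi * t) ** 2
    g = K @ f_true + 1e-4 * rng.standard_normal(n)

    def tikhonov(K, g, lam):
        """Solve min ||K f - g||^2 + lam*||f||^2 via the normal equations."""
        return np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ g)

    f_naive = np.linalg.solve(K, g)    # unregularized: noise is wildly amplified
    f_reg = tikhonov(K, g, 1e-5)

    err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
    err_reg = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
    print(err_naive, err_reg)          # regularized error is far smaller
    ```

    The identity penalty corresponds to the simplest smoothness target; the "various target degrees of smoothness" mentioned above would replace it with a derivative operator, and the compact-set constraints would be imposed on top of this least-squares core.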

  15. On the regularization of impact without collision: the Painlevé paradox and compliance

    NASA Astrophysics Data System (ADS)

    Hogan, S. J.; Kristiansen, K. Uldall

    2017-06-01

    We consider the problem of a rigid body, subject to a unilateral constraint, in the presence of Coulomb friction. We regularize the problem by assuming compliance (with both stiffness and damping) at the point of contact, for a general class of normal reaction forces. Using a rigorous mathematical approach, we recover impact without collision (IWC) in both the inconsistent and the indeterminate Painlevé paradoxes, in the latter case giving an exact formula for conditions that separate IWC and lift-off. We solve the problem for arbitrary values of the compliance damping and give explicit asymptotic expressions in the limiting cases of small and large damping, all for a large class of rigid bodies.

  16. ROENTGEN: case-based reasoning and radiation therapy planning.

    PubMed Central

    Berger, J.

    1992-01-01

    ROENTGEN is a design assistant for radiation therapy planning which uses case-based reasoning, an artificial intelligence technique. It learns both from specific problem-solving experiences and from direct instruction from the user. The first sort of learning is the normal case-based method of storing problem solutions so that they can be reused. The second sort is necessary because ROENTGEN does not, initially, have an internal model of the physics of its problem domain. This dependence on explicit user instruction brings to the forefront representational questions regarding indexing, failure definition, failure explanation and repair. This paper presents the techniques used by ROENTGEN in its knowledge acquisition and design activities. PMID:1482869

  17. The nonlinear modified equation approach to analyzing finite difference schemes

    NASA Technical Reports Server (NTRS)

    Klopfer, G. H.; Mcrae, D. S.

    1981-01-01

    The nonlinear modified equation approach is taken in this paper to analyze the generalized Lax-Wendroff explicit scheme approximation to the unsteady one- and two-dimensional equations of gas dynamics. Three important applications of the method are demonstrated. The nonlinear modified equation analysis is used to (1) generate higher order accurate schemes, (2) obtain more accurate estimates of the discretization error for nonlinear systems of partial differential equations, and (3) generate an adaptive mesh procedure for the unsteady gas dynamic equations. Results are obtained for all three areas. For the adaptive mesh procedure, mesh point requirements for equal resolution of discontinuities were reduced by a factor of five for a 1-D shock tube problem solved by the explicit MacCormack scheme.

  18. The Power of Implicit Social Relation in Rating Prediction of Social Recommender Systems

    PubMed Central

    Reafee, Waleed; Salim, Naomie; Khan, Atif

    2016-01-01

    The explosive growth of social networks in recent times has presented a powerful source of information to be utilized as an extra source for assisting in social recommendation problems. The social recommendation methods that are based on probabilistic matrix factorization improved the recommendation accuracy and partly solved the cold-start and data sparsity problems. However, these methods only exploited the explicit social relations and almost completely ignored the implicit social relations. In this article, we first propose an algorithm to extract the implicit relations in the undirected graphs of social networks by exploiting link prediction techniques. Furthermore, we propose a new probabilistic matrix factorization method to alleviate the data sparsity problem through incorporating explicit friendship and implicit friendship. We evaluate our proposed approach on two real datasets, Last.Fm and Douban. The experimental results show that our method performs much better than the state-of-the-art approaches, which indicates the importance of incorporating implicit social relations into the recommendation process to improve prediction accuracy. PMID:27152663

  19. Efficient dynamic modeling of manipulators containing closed kinematic loops

    NASA Astrophysics Data System (ADS)

    Ferretti, Gianni; Rocco, Paolo

    An approach to efficiently solve the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: the dynamics of the equivalent open-loop tree structures (any closed loop can in general be modeled by imposing additional kinematic constraints on a suitable tree structure) is computed through an efficient Newton-Euler formulation; and the constraint equations for the closed chains most commonly adopted in industrial manipulators are explicitly solved, thus overcoming the redundancy of the Lagrange multiplier method while avoiding the inefficiency of a numerical solution of the implicit constraint equations. The constraint equations considered for explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph-type structures). Articulated gear mechanisms are used in all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of manipulators and their load capacity, as well as to reduce the kinematic coupling of joint axes. The accuracy and efficiency of the proposed approach are shown through a simulation test.

  20. Estimating the Inertia Matrix of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Keim, Jason; Shields, Joel

    2007-01-01

    A paper presents a method of utilizing some flight data, aboard a spacecraft that includes reaction wheels for attitude control, to estimate the inertia matrix of the spacecraft. The required data are digitized samples of (1) the spacecraft attitude in an inertial reference frame as measured, for example, by use of a star tracker and (2) speeds of rotation of the reaction wheels, the moments of inertia of which are deemed to be known. Starting from the classical equations for conservation of angular momentum of a rigid body, the inertia-matrix-estimation problem is formulated as a constrained least-squares minimization problem with explicit bounds on the inertia matrix incorporated as linear matrix inequalities. The explicit bounds reflect physical bounds on the inertia matrix and reduce the volume of data that must be processed to obtain a solution. The resulting minimization problem is a semidefinite optimization problem that can be solved efficiently, with guaranteed convergence to the global optimum, by use of readily available algorithms. In a test case involving a model attitude platform rotating on an air bearing, it is shown that, relative to a prior method, the present method produces better estimates from fewer data.
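    Dropping the LMI bounds and the semidefinite machinery, the core least-squares step can be sketched as follows: each angular-velocity sample contributes three linear equations in the six independent entries of the symmetric inertia matrix. The inertia values, sample count, and noise level below are hypothetical, and real flight data would of course require the attitude-derived momentum described in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def omega_matrix(w):
        """Rows of h = J @ w written as M(w) @ p for the 6 independent entries
        p = (Jxx, Jyy, Jzz, Jxy, Jxz, Jyz) of a symmetric inertia matrix J."""
        wx, wy, wz = w
        return np.array([[wx, 0.0, 0.0, wy, wz, 0.0],
                         [0.0, wy, 0.0, wx, 0.0, wz],
                         [0.0, 0.0, wz, 0.0, wx, wy]])

    J_true = np.array([[10.0, 0.5, 0.2],
                       [0.5, 12.0, 0.1],
                       [0.2, 0.1, 8.0]])      # hypothetical inertia (kg m^2)

    # Simulated body rates and matching angular momenta with a little noise.
    W = rng.standard_normal((30, 3))
    H = W @ J_true + 1e-3 * rng.standard_normal((30, 3))   # J is symmetric

    A = np.vstack([omega_matrix(w) for w in W])
    p, *_ = np.linalg.lstsq(A, H.ravel(), rcond=None)
    J_est = np.array([[p[0], p[3], p[4]],
                      [p[3], p[1], p[5]],
                      [p[4], p[5], p[2]]])
    print(np.max(np.abs(J_est - J_true)))     # small: the fit recovers J
    ```

    The paper's contribution is precisely what this sketch omits: physical bounds on J expressed as linear matrix inequalities, which turn the fit into a small semidefinite program and reduce the data needed.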

  1. Heating 7.2 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1993-02-01

    HEATING is a general-purpose conduction heat transfer program written in Fortran 77. HEATING can solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may also be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat-generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-environment or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General gray-body radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING uses a runtime memory allocation scheme to avoid having to recompile to match memory requirements for each specific problem. HEATING utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct-solution, and conjugate gradient. Transient problems may be solved using any one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method. The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
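    The choice HEATING offers between explicit and implicit transient schemes reflects the classical stability restriction on explicit conduction updates. A minimal sketch (not HEATING itself) shows FTCS, the scheme behind a Classical Explicit Procedure, in 1-D: it decays for mesh Fourier number r <= 1/2 and blows up above it, which is what implicit schemes avoid.

    ```python
    import numpy as np

    def ftcs_max(r, steps=400, n=20, seed=0):
        """March FTCS for u_t = alpha*u_xx on a rod with zero-temperature ends;
        r = alpha*dt/dx^2 is the mesh Fourier number. Returns the final max |u|
        from rough random initial data."""
        u = np.zeros(n + 2)
        u[1:-1] = np.random.default_rng(seed).random(n)
        for _ in range(steps):
            u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return float(np.max(np.abs(u)))

    print(ftcs_max(0.4))   # below the r <= 1/2 limit: the solution decays
    print(ftcs_max(0.6))   # above the limit: the highest modes blow up
    ```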

  3. Non-Gaussianity and Excursion Set Theory: Halo Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adshead, Peter; Baxter, Eric J.; Dodelson, Scott

    2012-09-01

    We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recapture the familiar result that the bias scales as $$k^{-2}$$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.

  4. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; Mcmanus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.

  5. An implicit dispersive transport algorithm for the US Geological Survey MOC3D solute-transport model

    USGS Publications Warehouse

    Kipp, K.L.; Konikow, Leonard F.; Hornberger, G.Z.

    1998-01-01

    This report documents an extension to the U.S. Geological Survey MOC3D transport model that incorporates an implicit-in-time difference approximation for the dispersive transport equation, including source/sink terms. The original MOC3D transport model (Version 1) uses the method of characteristics to solve the transport equation on the basis of the velocity field. The original MOC3D solution algorithm incorporates particle tracking to represent advective processes and an explicit finite-difference formulation to calculate dispersive fluxes. The new implicit procedure eliminates several stability criteria required for the previous explicit formulation. This allows much larger transport time increments to be used in dispersion-dominated problems. The decoupling of advective and dispersive transport in MOC3D, however, is unchanged. With the implicit extension, the MOC3D model is upgraded to Version 2. A description of the numerical method of the implicit dispersion calculation, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. Version 2 of MOC3D was evaluated for the same set of problems used for verification of Version 1. These test results indicate that the implicit calculation of Version 2 matches the accuracy of Version 1, yet is more efficient than the explicit calculation for transport problems that are characterized by a grid Peclet number less than about 1.0.

  6. An efficient method for solving the steady Euler equations

    NASA Technical Reports Server (NTRS)

    Liou, M. S.

    1986-01-01

    An efficient numerical procedure for solving a set of nonlinear partial differential equations is given, specifically for the steady Euler equations. Solutions of the equations were obtained by Newton's linearization procedure, commonly used to find the roots of nonlinear algebraic equations. In applying the same procedure to a set of differential equations we give a theorem showing that a quadratic convergence rate can be achieved. While the domain of quadratic convergence depends on the problems studied and is unknown a priori, we show that first- and second-order derivatives of the flux vectors determine whether the condition for quadratic convergence is satisfied. The first derivatives enter as an implicit operator for yielding new iterates, and the second derivatives indicate the smoothness of the flows considered. Consequently, flows involving shocks are expected to require a larger number of iterations. First-order upwind discretization in conjunction with the Steger-Warming flux-vector splitting is employed in the implicit operator, and a diagonally dominant matrix results. The explicit operator, however, is represented by first- and second-order upwind differencings, using both Steger-Warming's and van Leer's splittings. We discuss the treatment of boundary conditions and solution procedures for solving the resulting block matrix system. With a set of test problems for one- and two-dimensional flows, we present a detailed study of the efficiency, accuracy, and convergence of the present method.
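Newton's linearization and its quadratic convergence can be sketched on a small algebraic system (a hypothetical example, not the Euler equations): once the iterate enters the convergence domain, the residual norm roughly squares from one iteration to the next.

```python
import numpy as np

def newton(F, J, x, iters=6):
    """Newton's linearization: solve J(x) dx = -F(x) and update x."""
    errs = []
    for _ in range(iters):
        x = x - np.linalg.solve(J(x), F(x))
        errs.append(np.linalg.norm(F(x)))  # residual history
    return x, errs

# Illustrative smooth system with root (1, 1).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x, errs = newton(F, J, np.array([2.0, 0.5]))
# errs decays roughly quadratically: each residual is about the square of the last.
```

For smooth residuals this reaches machine precision in a handful of iterations; near a shock, where the flux derivatives are less smooth, more iterations are to be expected, as the abstract notes.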

  7. Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-01-25

    The phase appearance and disappearance issue presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied to the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and numerical treatments. As a result, the volume fraction of the absent phase can be calculated effectively as zero.
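The core JFNK idea, never assembling the Jacobian and instead approximating Jacobian-vector products with a finite difference inside a Krylov solver, can be sketched on an illustrative nonlinear system. Everything below is a toy: the residual is invented, and conjugate gradients stands in for the GMRES solver typically used in JFNK, which is legitimate here only because this toy Jacobian is symmetric positive definite.

```python
import numpy as np

def F(u, b):
    """Nonlinear residual of an illustrative system: A u + u**3/100 = b."""
    Au = 4.0 * u
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au + 0.01 * u**3 - b

def jacvec(u, v, b, eps=1e-7):
    """Jacobian-free Jacobian-vector product via a finite difference."""
    return (F(u + eps * v, b) - F(u, b)) / eps

def cg(matvec, rhs, tol=1e-8, maxit=200):
    """Matrix-free conjugate gradients: only matvec is needed, never a matrix."""
    x = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(20)
u = np.zeros_like(b)
for _ in range(10):                              # outer Newton loop
    res = F(u, b)
    if np.linalg.norm(res) < 1e-8:
        break
    u += cg(lambda v: jacvec(u, v, b), -res)     # inner matrix-free Krylov solve
```

The outer loop is plain Newton; the inner solver touches the Jacobian only through `jacvec`, which is the "Jacobian-free" part of JFNK.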

  8. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The nonlinear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to a highly complex structure of the basins of attraction of the iterative method.
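The explicit-versus-implicit time-step trade-off can be sketched on a stiff scalar model problem (the Prothero-Robinson test equation, chosen here purely for illustration; the λ and Δt values are arbitrary): at a step size 25 times the explicit stability limit, second-order Runge-Kutta diverges while Crank-Nicolson remains accurate, consistent with the step-size ratios of 20 to 80 reported above.

```python
import numpy as np

lam = -1000.0                                # stiffness parameter (illustrative)
g, dg = np.cos, lambda t: -np.sin(t)
f = lambda t, y: lam * (y - g(t)) + dg(t)    # exact solution is y(t) = cos(t)

dt, T = 0.05, 1.0                            # 25x the explicit limit |lam|*dt <= 2
n = int(round(T / dt))

# Explicit second-order Runge-Kutta (Heun): unstable at this step size.
y = g(0.0)
for k in range(n):
    t = k * dt
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    y = y + dt * (k1 + k2) / 2.0
y_explicit = y

# Implicit Crank-Nicolson: for this f the update is linear in the new value.
y = g(0.0)
for k in range(n):
    t = k * dt
    rhs = y + 0.5 * dt * (f(t, y) - lam * g(t + dt) + dg(t + dt))
    y = rhs / (1.0 - 0.5 * dt * lam)
y_implicit = y
```

In a PDE setting the Crank-Nicolson update requires a nonlinear solve each step, which is exactly where the pseudo-time quasi-Newton iteration of the abstract comes in.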

  9. A weak-coupling immersed boundary method for fluid-structure interaction with low density ratio of solid to fluid

    NASA Astrophysics Data System (ADS)

    Kim, Woojin; Lee, Injae; Choi, Haecheon

    2018-04-01

    We present a weak-coupling approach for fluid-structure interaction with a low density ratio (ρ) of solid to fluid. For accurate and stable solutions, we introduce two predictors, an explicit two-step method and the implicit Euler method, to obtain the provisional velocity and position of the fluid-structure interface at each time step, respectively. The incompressible Navier-Stokes equations, together with the provisional velocity and position at the fluid-structure interface, are solved in an Eulerian coordinate system using an immersed-boundary finite-volume method on a staggered mesh. The dynamic equation of elastic solid-body motion, together with the hydrodynamic force at the provisional position of the interface, is solved in a Lagrangian coordinate system using a finite element method. Each governing equation for fluid and structure is implicitly solved using second-order time integrators. The overall second-order temporal accuracy is preserved even with the use of lower-order predictors. A linear stability analysis is also conducted for an ideal case to find the optimal explicit two-step method that provides stable solutions down to the lowest density ratio. With the present weak coupling, three different fluid-structure interaction problems were simulated: flow around an elastically mounted rigid circular cylinder, an elastic beam attached to the base of a stationary circular cylinder, and a flexible plate. The lowest density ratios providing stable solutions were sought for the first two problems and are much lower than 1 (ρmin = 0.21 and 0.31, respectively). The simulation results agree well with those from the strong coupling suggested here and also with previous numerical and experimental studies, indicating the efficiency and accuracy of the present weak coupling.

  10. Solving quantum optimal control problems using Clebsch variables and Lin constraints

    NASA Astrophysics Data System (ADS)

    Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.

    2018-01-01

    Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem will be modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories, the classical theory defined by the objective functional and the quantum system, is established by using a suitable version of the Lagrange multiplier theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. This yields a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional). One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. This procedure can be presented as an algorithm applicable to a large class of systems. Finally, some simple examples illustrating the main features of the theory will be discussed: spin control, a simple quantum Hamiltonian with an ‘Elroy beanie’ type classical model, and a controlled one-dimensional quantum harmonic oscillator.

  11. GILA User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHRISTON, MARK A.

    2003-06-01

    GILA is a finite element code that has been developed specifically to attack the class of transient, incompressible, viscous, fluid dynamics problems that are predominant in the world that surrounds us. The purpose for this document is to provide sufficient information for an experienced analyst to use GILA in an effective way. The GILA User's Manual presents a technical outline of the governing equations for time-dependent incompressible flow, and the explicit and semi-implicit projection methods used in GILA to solve the equations. This manual also presents a brief overview of some of GILA's capabilities along with the keyword input syntax and sample problems.

  12. The uniform asymptotic swallowtail approximation - Practical methods for oscillating integrals with four coalescing saddle points

    NASA Technical Reports Server (NTRS)

    Connor, J. N. L.; Curtis, P. R.; Farrelly, D.

    1984-01-01

    Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for the approximation is presented to lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives. Isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytical function of its argument is considered, and two methods of solving this problem are described. The asymptotic evaluation of the butterfly canonical integral is addressed.

  13. Fast optimization algorithms and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.

  14. Newton-like methods for Navier-Stokes solution

    NASA Astrophysics Data System (ADS)

    Qin, N.; Xu, X.; Richards, B. E.

    1992-12-01

    The paper reports on two Newton-like methods, SFDN-alpha-GMRES and SQN-alpha-GMRES, which have been devised and proven to be powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied using a partially converged solution from a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system, respectively, to solve a hypersonic vortical flow.

  15. A Novel Connectionist Network for Solving Long Time-Lag Prediction Tasks

    NASA Astrophysics Data System (ADS)

    Johnson, Keith; MacNish, Cara

    Traditional Recurrent Neural Networks (RNNs) perform poorly on learning tasks involving long time-lag dependencies. More recent approaches such as LSTM and its variants significantly improve on RNNs' ability to learn this type of problem. We present an alternative approach to encoding temporal dependencies that associates temporal features with nodes rather than state values, where the nodes explicitly encode dependencies over variable time delays. We show promising results comparing the network's performance to LSTM variants on an extended Reber grammar task.

  16. Exact solution of two collinear cracks normal to the boundaries of a 1D layered hexagonal piezoelectric quasicrystal

    NASA Astrophysics Data System (ADS)

    Zhou, Y.-B.; Li, X.-F.

    2018-07-01

    The electroelastic problem related to two collinear cracks of equal length and normal to the boundaries of a one-dimensional hexagonal piezoelectric quasicrystal layer is analysed. By using the finite Fourier transform, a mixed boundary value problem is solved when antiplane mechanical loading and inplane electric loading are applied. The problem is reduced to triple series equations, which are then transformed into a singular integral equation. For uniform remote loading, an exact solution is obtained in closed form, and explicit expressions for the electroelastic field are determined. The intensity factors of the electroelastic field and the energy release rate at the inner and outer crack tips are given and presented graphically.

  17. University Students' Strategies for Constructing Hypothesis when Tackling Paper-and-Pencil Tasks in Physics

    NASA Astrophysics Data System (ADS)

    Guisasola, Jenaro; Ceberio, Mikel; Zubimendi, José Luis

    2006-09-01

    This study explores how first-year engineering students formulate hypotheses in order to construct their own problem-solving structure when confronted with problems in physics. Under the constructivist perspective of the teaching-learning process, the formulation of hypotheses plays a key role in contrasting the coherence of the students' ideas with the theoretical frame. The main research instrument used to identify students' reasoning was the students' written reports on how they attempted four problem-solving tasks in which they were asked explicitly to formulate hypotheses. The protocols used in the assessment of the solutions consisted of a semi-quantitative study based on grids designed for the analysis of written answers. In this paper we include two of the tasks used and the corresponding scheme for the categorisation of the answers. Details of the other two tasks are also outlined. According to our findings, the majority of students judge a hypothesis to be plausible if it is congruent with their previous knowledge, without rigorously checking it against the theoretical framework explained in class.

  18. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology to concurrently find the material spatial distribution and the material anisotropy repartition is proposed for orthotropic, linear, elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by the invariants of its elasticity tensor under change of frame. A global structural stiffness maximization problem, written as a compliance minimization problem, is treated, and a volume constraint is applied. The compliance minimization can be recast as a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of the density and anisotropy distribution of a cantilever beam and a bridge is presented.
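The alternate-directions idea, iterating between subproblems that are each solved in closed form, can be sketched on a toy convex objective (the function and variables here are purely illustrative, not the paper's complementary energy):

```python
# Alternating (block-coordinate) minimization on
#   f(x, y) = (x - y)**2 + (x - 3)**2 + (y - 1)**2,
# where each block minimization has an explicit analytical solution,
# mirroring the local/global split described in the abstract.

def argmin_x(y):
    # d/dx [(x - y)**2 + (x - 3)**2] = 0  ->  x = (y + 3) / 2
    return (y + 3.0) / 2.0

def argmin_y(x):
    # d/dy [(x - y)**2 + (y - 1)**2] = 0  ->  y = (x + 1) / 2
    return (x + 1.0) / 2.0

x, y = 0.0, 0.0
for _ in range(100):
    x = argmin_x(y)   # "local" minimization, closed form
    y = argmin_y(x)   # "global" minimization (here also closed form)
# Converges to the joint minimizer x = 7/3, y = 5/3 of this convex objective.
```

For this convex quadratic, alternating minimization contracts toward the joint minimizer; in the paper's setting the analogous local step is explicit thanks to the polar parameterization, while the global step requires a finite element solve.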

  19. Speed selection for traveling-wave solutions to the diffusion-reaction equation with cubic reaction term and Burgers nonlinear convection.

    PubMed

    Sabelnikov, V A; Lipatnikov, A N

    2014-09-01

    The problem of traveling wave (TW) speed selection for solutions to a generalized Murray-Burgers-KPP-Fisher parabolic equation with a strictly positive cubic reaction term is considered theoretically, and the initial boundary value problem is numerically solved in order to support the analytical results obtained. Depending on the magnitude of a parameter inherent in the reaction term, (i) the term is either a concave function or a function with an inflection point, and (ii) the transition from a pulled to a pushed TW solution occurs due to the interplay of two nonlinear terms: the reaction term and the Burgers convection term. Explicit pushed TW solutions are derived. It is shown that physically observable TW solutions, i.e., solutions obtained by solving the initial boundary value problem with a sufficiently steep initial condition, can be determined by seeking the TW solution characterized by the maximum decay rate at its leading edge. In the Appendix, the developed approach is applied to a nonlinear diffusion-reaction equation that is widely used to model premixed turbulent combustion.

  20. Explicit solutions from eigenfunction symmetry of the Korteweg-de Vries equation.

    PubMed

    Hu, Xiao-Rui; Lou, Sen-Yue; Chen, Yong

    2012-05-01

    In nonlinear science, it is very difficult to find exact interaction solutions among solitons and other kinds of complicated waves such as cnoidal waves and Painlevé waves. Indeed, even for the most well-known prototypical models such as the Korteweg-de Vries (KdV) equation and the Kadomtsev-Petviashvili (KP) equation, this kind of problem has not yet been solved. In this paper, the explicit analytic interaction solutions between solitary waves and cnoidal waves are obtained through the localization procedure of nonlocal symmetries which are related to Darboux transformation for the well-known KdV equation. The same approach also yields some other types of interaction solutions among different types of solutions such as solitary waves, rational solutions, Bessel function solutions, and/or general Painlevé II solutions.

  1. SCOUT: simultaneous time segmentation and community detection in dynamic networks

    PubMed Central

    Hulovatyy, Yuriy; Milenković, Tijana

    2016-01-01

    Many evolving complex real-world systems can be modeled via dynamic networks. An important problem in dynamic network research is community detection, which finds groups of topologically related nodes. Typically, this problem is approached by assuming either that each time point has a distinct community organization or that all time points share a single community organization. The reality likely lies between these two extremes. To find the compromise, we consider community detection in the context of the problem of segment detection, which identifies contiguous time periods with consistent network structure. Consequently, we formulate a combined problem of segment community detection (SCD), which simultaneously partitions the network into contiguous time segments with consistent community organization and finds this community organization for each segment. To solve SCD, we introduce SCOUT, an optimization framework that explicitly considers both segmentation quality and partition quality. SCOUT addresses limitations of existing methods that can be adapted to solve SCD, which consider only one of segmentation quality or partition quality. In a thorough evaluation, SCOUT outperforms the existing methods in terms of both accuracy and computational complexity. We apply SCOUT to biological network data to study human aging. PMID:27881879

  2. Solution procedure of dynamical contact problems with friction

    NASA Astrophysics Data System (ADS)

    Abdelhakim, Lotfi

    2017-07-01

    Dynamical contact is a common research topic because of its wide applications in engineering. The main goal of this work is to develop a time-stepping algorithm for dynamic contact problems. We propose a finite element approach for elastodynamic contact problems [1]. Sticking, sliding and frictional contact can be taken into account. Lagrange multipliers are used to enforce the non-penetration condition. For the time discretization, we propose a scheme equivalent to the explicit Newmark scheme. Each time step requires solving a nonlinear problem similar to a static friction problem. The nonlinearity of the system of equations requires an iterative solution procedure based on Uzawa's algorithm [2][3]. The applicability of the algorithm is illustrated by selected sample numerical solutions to static and dynamic contact problems. Results obtained with the model have been compared and verified against results from an independent numerical method.
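Uzawa's algorithm for contact with Lagrange multipliers can be sketched on a toy two-degree-of-freedom spring system with a rigid obstacle (all matrices and values below are illustrative, not from the paper): the displacement solve and the multiplier projection alternate until the non-penetration condition holds.

```python
import numpy as np

# Toy contact problem: two masses linked by springs, with a rigid obstacle
# limiting the displacement of the second mass (B @ x <= gap).
K = np.array([[2.0, -1.0], [-1.0, 1.0]])   # stiffness matrix
f = np.array([0.0, 1.0])                   # applied load pushes mass 2 right
B = np.array([0.0, 1.0])                   # constraint row: x[1] <= gap
gap = 0.4

lam = 0.0                                  # Lagrange multiplier (contact force)
rho = 0.5                                  # Uzawa step size
for _ in range(200):
    x = np.linalg.solve(K, f - lam * B)            # minimize energy, lam fixed
    lam = max(0.0, lam + rho * (B @ x - gap))      # project onto lam >= 0
# At convergence the contact is active: x[1] == gap, and lam is the contact force.
```

The `max(0.0, ...)` projection encodes the complementarity of contact: the multiplier is nonnegative and vanishes when the gap is open, which is the inequality-constrained structure the abstract's static-friction-like subproblem shares.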

  3. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    NASA Astrophysics Data System (ADS)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. 
This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.

  4. Recovery Discontinuous Galerkin Jacobian-Free Newton-Krylov Method for All-Speed Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HyeongKae Park; Robert Nourgaliev; Vincent Mousseau

    2008-07-01

    A novel numerical algorithm (rDG-JFNK) for all-speed fluid flows with heat conduction and viscosity is introduced. The rDG-JFNK combines the Discontinuous Galerkin spatial discretization with implicit Runge-Kutta time integration under the Jacobian-free Newton-Krylov framework. We solve the fully-compressible Navier-Stokes equations without operator-splitting of the hyperbolic, diffusion and reaction terms, which enables fully-coupled high-order temporal discretization. The stability constraint is removed due to the L-stable Explicit, Singly Diagonal Implicit Runge-Kutta (ESDIRK) scheme. The governing equations are solved in conservative form, which allows one to accurately compute shock dynamics as well as low-speed flows. For spatial discretization, we develop a “recovery” family of DG, exhibiting nearly-spectral accuracy. To precondition the Krylov-based linear solver (GMRES), we developed an “Operator-Split” (OS) Physics-Based Preconditioner (PBP), in which we transform/simplify the fully-coupled system into a sequence of segregated scalar problems, each of which can be solved efficiently with a multigrid method. Each scalar problem is designed to target/cluster the eigenvalues of the Jacobian matrix associated with a specific physics.

  5. A finite element solver for 3-D compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Reddy, K. C.; Reddy, J. N.; Nayani, S.

    1990-01-01

    Computation of the flow field inside a space shuttle main engine (SSME) requires the application of state-of-the-art computational fluid dynamics (CFD) technology. Several computer codes are under development to solve 3-D flow through the hot gas manifold. Some algorithms were designed to solve the unsteady compressible Navier-Stokes equations, either by implicit or explicit factorization methods, using several hundred or thousands of time steps to reach a steady state solution. A new iterative algorithm is being developed for the solution of the implicit finite element equations without assembling global matrices. It is an efficient iteration scheme based on a modified nonlinear Gauss-Seidel iteration with symmetric sweeps. The algorithm is analyzed for a model equation and is shown to be unconditionally stable. Results from a series of test problems are presented. The finite element code was tested for Couette flow, which is flow under a pressure gradient between two parallel plates in relative motion. Another problem that was solved is viscous laminar flow over a flat plate. The general 3-D finite element code was used to compute the flow in an axisymmetric turnaround duct at low Mach numbers.
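A minimal sketch of Gauss-Seidel iteration with symmetric (forward-then-backward) sweeps, shown on an illustrative diagonally dominant system rather than the finite element equations of the abstract:

```python
import numpy as np

def sym_gauss_seidel(A, b, x, sweeps=50):
    """Gauss-Seidel with symmetric sweeps: a forward pass over the unknowns
    followed by a backward pass, using the freshest values as they appear."""
    n = len(b)
    for _ in range(sweeps):
        for i in list(range(n)) + list(range(n - 1, -1, -1)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Illustrative diagonally dominant system (Gauss-Seidel converges for such A).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sym_gauss_seidel(A, b, np.zeros(3))
```

Note that no global matrix need be formed in a finite element setting: each row's contribution can be accumulated element by element, which is the matrix-free property the abstract emphasizes.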

  6. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.

  7. The role of competing knowledge structures in undermining learning: Newton's second and third laws

    NASA Astrophysics Data System (ADS)

    Low, David J.; Wilson, Kate F.

    2017-01-01

    We investigate the development of student understanding of Newton's laws using a pre-instruction test (the Force Concept Inventory), followed by a series of post-instruction tests and interviews. While some students' somewhat naive, pre-existing models of Newton's third law are largely eliminated following a semester of teaching, we find that a particular inconsistent model is highly resilient to, and may even be strengthened by, instruction. If test items contain words that cue students to think of Newton's second law, then students are more likely to apply a "net force" approach to solving problems, even if it is inappropriate to do so. Additional instruction, reinforcing physical concepts in multiple settings and from multiple sources, appears to help students develop a more connected and consistent level of understanding. We recommend explicitly encouraging students to check their work for consistency with physical principles, along with the standard checks for dimensionality and order of magnitude, to encourage reflective and rigorous problem solving.

  8. Numerical simulation of phase transition problems with explicit interface tracking

    DOE PAGES

    Hu, Yijing; Shi, Qiangqiang; de Almeida, Valmor F.; ...

    2015-12-19

    Phase change is ubiquitous in nature and industrial processes. Starting from the Stefan problem, it is a topic with a long history in applied mathematics and the sciences, and it continues to generate outstanding mathematical problems. For instance, the explicit tracking of the Gibbs dividing surface between phases is still a grand challenge. Our work has been motivated by this challenge, and here we report on progress made in solving the governing equations of continuum transport in the presence of a moving interface by the front tracking method. The most pressing issue is the accounting of topological changes suffered by the interface between phases when break-up and/or merging takes place. The underlying physics of topological changes requires the incorporation of space-time subscales not yet within reach. Therefore we use heuristic geometrical arguments to reconnect phases in space. This heuristic approach provides new insight in various applications and is extensible to include subscale physics and chemistry in the future. We demonstrate the method on applications such as simulating freezing, melting, dissolution, and precipitation. The latter examples also include coupling of the phase transition solution with the Navier-Stokes equations for the effect of flow convection.

  9. A three-dimensional finite-volume Eulerian-Lagrangian Localized Adjoint Method (ELLAM) for solute-transport modeling

    USGS Publications Warehouse

    Heberton, C.I.; Russell, T.F.; Konikow, Leonard F.; Hornberger, G.Z.

    2000-01-01

    This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.

  10. Enforcing realizability in explicit multi-component species transport

    PubMed Central

    McDermott, Randall J.; Floyd, Jason E.

    2015-01-01

    We propose a strategy to guarantee realizability of species mass fractions in explicit time integration of the partial differential equations governing fire dynamics, which is a multi-component transport problem (realizability requires that all mass fractions be nonnegative and that they sum to unity). For a mixture of n species, the conventional strategy is to solve for n − 1 species mass fractions and to obtain the nth (or “background”) species mass fraction from one minus the sum of the others. The numerical difficulties inherent in the background species approach are discussed and the potential for realizability violations is illustrated. The new strategy solves all n species transport equations and obtains density from the sum of the species mass densities. To guarantee realizability the species mass densities must remain nonnegative. A scalar boundedness correction is proposed that is based on a minimal diffusion operator. The overall scheme is implemented in a publicly available large-eddy simulation code called the Fire Dynamics Simulator. A set of test cases is presented to verify that the new strategy enforces realizability, does not generate spurious mass, and maintains second-order accuracy for transport. PMID:26692634
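    The solve-all-n strategy can be illustrated with a minimal sketch (our own construction, not code from the Fire Dynamics Simulator): a one-dimensional upwind advection step is applied to all n species mass densities on a periodic grid, density is recovered as their sum, and a simple clip stands in for the paper's diffusion-based boundedness correction. All names and parameters are illustrative assumptions.

```python
import numpy as np

def advect_species(rho_Y, u, dx, dt):
    """One explicit upwind step (u > 0, periodic grid) for all n species
    mass densities rho*Y_i. Solving all n species, rather than n-1 plus a
    background species, lets density be recovered as the sum, so the mass
    fractions sum to one by construction."""
    flux = u * rho_Y                           # upwind flux for u > 0
    new = rho_Y - dt / dx * (flux - np.roll(flux, 1, axis=1))
    new = np.maximum(new, 0.0)                 # crude positivity clip; the paper
                                               # uses a diffusion-based correction
    rho = new.sum(axis=0)                      # density from sum of species
    Y = new / rho                              # realizable mass fractions
    return new, rho, Y
```

    Because every species is transported and density is defined as the sum, the recovered mass fractions are nonnegative and sum to one wherever density is positive.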

  11. Schrödinger and Dirac solutions to few-body problems

    NASA Astrophysics Data System (ADS)

    Muolo, Andrea; Reiher, Markus

    We elaborate on the variational solution of the Schrödinger and Dirac equations for small atomic and molecular systems without relying on the Born-Oppenheimer approximation. The all-particle equations of motion are solved in a numerical procedure that relies on the variational principle, Cartesian coordinates, and parametrized explicitly correlated Gaussian functions. A stochastic optimization of the variational parameters allows the calculation of accurate wave functions for ground and excited states. Expectation values such as the radial and angular distribution functions or the dipole moment can be calculated. We developed a simple strategy for the elimination of the global translation that allows laboratory-fixed Cartesian coordinates to be adopted generally. Simple expressions for the coordinates and operators are then preserved throughout the formalism. For relativistic calculations we devised a kinetic-balance condition for explicitly correlated basis functions. We demonstrate that the kinetic-balance condition can be obtained from the row-reduction process commonly applied to solve systems of linear equations. The resulting form of kinetic balance establishes a relation between all components of the spinor of an N-fermion system. ETH Zürich, Laboratorium für Physikalische Chemie, CH-8093 Zürich, Switzerland.

  12. Free vibration of functionally graded beams and frameworks using the dynamic stiffness method

    NASA Astrophysics Data System (ADS)

    Banerjee, J. R.; Ananthapuvirajah, A.

    2018-05-01

    The free vibration analysis of functionally graded beams (FGBs) and frameworks containing FGBs is carried out by applying the dynamic stiffness method and deriving the elements of the dynamic stiffness matrix in explicit algebraic form. The usually adopted rule that the material properties of the FGB vary continuously through the thickness according to a power law forms the fundamental basis of the governing differential equations of motion in free vibration. The differential equations are solved in closed analytical form when the free vibratory motion is harmonic. The dynamic stiffness matrix is then formulated by relating the amplitudes of forces to those of the displacements at the two ends of the beam. Next, the explicit algebraic expressions for the dynamic stiffness elements are derived with the help of symbolic computation. Finally, the Wittrick-Williams algorithm is applied as the solution technique to solve the free vibration problems of FGBs with uniform cross-section, stepped FGBs, and frameworks consisting of FGBs. Some numerical results are validated against published results, but in the absence of published results for frameworks containing FGBs, consistency checks on the reliability of results are performed. The paper closes with a discussion of results and conclusions.

  13. Investigating the predictive validity of implicit and explicit measures of motivation in problem-solving behavioural tasks.

    PubMed

    Keatley, David; Clarke, David D; Hagger, Martin S

    2013-09-01

    Research into the effects of individuals' autonomous motivation on behaviour has traditionally adopted explicit measures and self-reported outcome assessment. Recently, there has been increased interest in the effects of implicit motivational processes underlying behaviour from a self-determination theory (SDT) perspective. The aim of the present research was to provide support for the predictive validity of an implicit measure of autonomous motivation on behavioural persistence on two objectively measurable tasks. SDT and a dual-systems model were adopted as frameworks to explain the unique effects offered by explicit and implicit autonomous motivational constructs on behavioural persistence. In both studies, implicit autonomous motivation significantly predicted unique variance in time spent on each task. Several explicit measures of autonomous motivation also significantly predicted persistence. Results provide support for the proposed model and the inclusion of implicit measures in research on motivated behaviour. In addition, implicit measures of autonomous motivation appear to be better suited to explaining variance in behaviours that are more spontaneous or unplanned. Future implications for research examining implicit motivation from dual-systems models and SDT approaches are outlined. © 2012 The British Psychological Society.

  14. Solution of the finite Milne problem in stochastic media with RVT Technique

    NASA Astrophysics Data System (ADS)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

    This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, the linear extrapolation distance, reflectivity, and transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity, and transmissivity are calculated. For illustration, numerical results with conclusions are provided.

  15. Learning Problem-Solving Rules as Search Through a Hypothesis Space.

    PubMed

    Lee, Hee Seung; Betts, Shawn; Anderson, John R

    2016-07-01

    Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem property such as computational difficulty of the rules biased the search process and so affected learning. Experiment 2 examined the impact of examples as instructional tools and found that their effectiveness was determined by whether they uniquely pointed to the correct rule. Experiment 3 compared verbal directions with examples and found that both could guide search. The final experiment tried to improve learning by using more explicit verbal directions or by adding scaffolding to the example. While both manipulations improved learning, learning still took the form of a search through a hypothesis space of possible rules. We describe a model that embodies two assumptions: (1) the instruction can bias the rules participants hypothesize rather than directly be encoded into a rule; (2) participants do not have memory for past wrong hypotheses and are likely to retry them. These assumptions are realized in a Markov model that fits all the data by estimating two sets of probabilities. First, the learning condition induced one set of Start probabilities of trying various rules. Second, should this first hypothesis prove wrong, the learning condition induced a second set of Choice probabilities of considering various rules. These findings broaden our understanding of effective instruction and provide implications for instructional design. Copyright © 2015 Cognitive Science Society, Inc.
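    The two sets of probabilities described above can be read as a simple memoryless search process. The sketch below is our illustrative reading of such a Markov model, not the authors' code; the function and parameter names are hypothetical.

```python
import numpy as np

def trials_to_learn(start_p, choice_p, correct, rng):
    """One simulated learner: sample a first hypothesis from the Start
    probabilities; after every failure, resample from the Choice
    probabilities with no memory for previously rejected rules."""
    rule = rng.choice(len(start_p), p=start_p)
    trials = 1
    while rule != correct:                 # wrong rule: try again, memoryless
        rule = rng.choice(len(choice_p), p=choice_p)
        trials += 1
    return trials
```

    For example, with a Start probability of 0.2 and a Choice probability of 0.5 for the correct rule, the expected number of trials is 0.2·1 + 0.8·(1 + 1/0.5) = 2.6.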

  16. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. 
Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed, and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.

  17. Assessing Cognitive Learning of Analytical Problem Solving

    NASA Astrophysics Data System (ADS)

    Billionniere, Elodie V.

    Introductory programming courses, also known as CS1, have a specific set of expected outcomes related to the learning of the most basic and essential computational concepts in computer science (CS). However, two of the most often heard complaints in such courses are that (1) they are divorced from the reality of application and (2) they make the learning of the basic concepts tedious. The concepts introduced in CS1 courses are highly abstract and not easily comprehensible. In general, the difficulty is intrinsic to the field of computing, often described as "too mathematical or too abstract." This dissertation presents a small-scale mixed method study conducted during the fall 2009 semester of CS1 courses at Arizona State University. This study explored and assessed students' comprehension of three core computational concepts---abstraction, arrays of objects, and inheritance---in both algorithm design and problem solving. Through this investigation students' profiles were categorized based on their scores and based on their mistakes categorized into instances of five computational thinking concepts: abstraction, algorithm, scalability, linguistics, and reasoning. It was shown that even though the notion of computational thinking is not explicit in the curriculum, participants possessed and/or developed this skill through the learning and application of the CS1 core concepts. Furthermore, problem-solving experiences had a direct impact on participants' knowledge skills, explanation skills, and confidence. Implications for teaching CS1 and for future research are also considered.

  18. A Typology for Modeling Processes in Clinical Guidelines and Protocols

    NASA Astrophysics Data System (ADS)

    Tu, Samson W.; Musen, Mark A.

    We analyzed the graphical representations that are used by various guideline-modeling methods to express process information embodied in clinical guidelines and protocols. From this analysis, we distilled four modeling formalisms and the processes they typically model: (1) flowcharts for capturing problem-solving processes, (2) disease-state maps that link decision points in managing patient problems over time, (3) plans that specify sequences of activities that contribute toward a goal, (4) workflow specifications that model care processes in an organization. We characterized the four approaches and showed that each captures some aspect of what a guideline may specify. We believe that a general guideline-modeling system must provide explicit representation for each type of process.

  19. A family of four stages embedded explicit six-step methods with eliminated phase-lag and its derivatives for the numerical solution of the second order problems

    NASA Astrophysics Data System (ADS)

    Simos, T. E.

    2017-11-01

    A family of four-stage, high algebraic order, embedded explicit six-step methods for the numerical solution of second order initial- or boundary-value problems with periodic and/or oscillating solutions is studied in this paper. The free parameters of the newly proposed methods are calculated by solving the linear system of equations produced by requiring that the phase-lag of the methods and its derivatives vanish. For the new methods we investigate:
    • The local truncation error (LTE) of the methods.
    • The asymptotic form of the LTE, obtained using the radial Schrödinger equation as the model problem.
    • The comparison of the asymptotic forms of the LTEs for several methods of the same family. This comparison leads to conclusions on the efficiency of each method of the family.
    • The stability and the interval of periodicity of the obtained methods of the new family of embedded finite difference pairs.
    • The application of the new family of embedded finite difference pairs to the numerical solution of several second order problems such as the radial Schrödinger equation and astronomical problems. These applications lead to conclusions on the efficiency of the methods of the new family.

  20. A comparative study and validation of upwind and central-difference Navier-Stokes codes for high-speed flows

    NASA Technical Reports Server (NTRS)

    Rudy, David H.; Kumar, Ajay; Thomas, James L.; Gnoffo, Peter A.; Chakravarthy, Sukumar R.

    1988-01-01

    A comparative study was made using four different computer codes for solving the compressible Navier-Stokes equations. Three different test problems were used, each of which has features typical of high-speed internal flow problems of practical importance in the design and analysis of propulsion systems for advanced hypersonic vehicles. These problems are the supersonic flow between two walls, one of which contains a 10 deg compression ramp; the flow through a hypersonic inlet; and the flow in a 3-D corner formed by the intersection of two symmetric wedges. Three of the computer codes use similar, recently developed implicit upwind differencing technology, while the fourth uses a well-established explicit method. The computed results were compared with experimental data where available.

  1. Penalty methods for the numerical solution of American multi-asset option problems

    NASA Astrophysics Data System (ADS)

    Nielsen, Bjørn Fredrik; Skavhaug, Ola; Tveito, Aslak

    2008-12-01

    We derive and analyze a penalty method for solving American multi-asset option problems. A small, non-linear penalty term is added to the Black-Scholes equation. This approach gives a fixed solution domain, removing the free and moving boundary imposed by the early exercise feature of the contract. Explicit, implicit and semi-implicit finite difference schemes are derived, and in the case of independent assets, we prove that the approximate option prices satisfy some basic properties of the American option problem. Several numerical experiments are carried out in order to investigate the performance of the schemes. We give examples indicating that our results are sharp. Finally, the experiments indicate that in the case of correlated underlying assets, the same properties are valid as in the independent case.
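    A minimal sketch of the penalty idea (our simplification, not the authors' scheme): a linear penalty λ·max(payoff − V, 0) is added to an explicit finite-difference discretization of the Black-Scholes equation for a single-asset American put. All parameter values are illustrative.

```python
import numpy as np

def american_put_penalty(E=10.0, r=0.05, sigma=0.2, T=1.0,
                         S_max=30.0, M=150, N=20000, lam=1e4):
    """Explicit finite differences for an American put with a linear
    penalty lam*max(payoff - V, 0) enforcing the early-exercise
    constraint on a fixed domain (no free-boundary tracking)."""
    S = np.linspace(0.0, S_max, M + 1)
    dS, dt = S[1] - S[0], T / N
    payoff = np.maximum(E - S, 0.0)
    V = payoff.copy()                          # terminal condition at expiry
    for _ in range(N):                         # march backwards in time
        V_ss = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dS**2
        V_s = (V[2:] - V[:-2]) / (2.0 * dS)
        rhs = (0.5 * sigma**2 * S[1:-1]**2 * V_ss
               + r * S[1:-1] * V_s - r * V[1:-1])
        rhs += lam * np.maximum(payoff[1:-1] - V[1:-1], 0.0)  # penalty term
        V[1:-1] += dt * rhs
        V[0], V[-1] = E, 0.0                   # boundary conditions
    return S, V
```

    The penalty pushes the solution back toward the payoff wherever the early-exercise constraint would be violated, so the problem is solved on a fixed domain with no explicit tracking of the free boundary.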

  2. Estimation of Surface Temperature and Heat Flux by Inverse Heat Transfer Methods Using Internal Temperatures Measured While Radiantly Heating a Carbon/Carbon Specimen up to 1920 F

    NASA Technical Reports Server (NTRS)

    Pizzo, Michelle; Daryabeigi, Kamran; Glass, David

    2015-01-01

    The ability to solve the heat conduction equation is needed when designing materials to be used on vehicles exposed to extremely high temperatures, e.g., vehicles used for atmospheric entry or hypersonic flight. When using test and flight data, computational methods such as finite difference schemes may be used to solve both the direct heat conduction problem, i.e., solving between internal temperature measurements, and the inverse heat conduction problem, i.e., using the direct solution to march forward in space to the surface of the material to estimate both surface temperature and heat flux. This research first discusses the methods used in developing a computational code to solve both the direct and inverse heat transfer problems, using one-dimensional, centered, implicit finite volume schemes and one-dimensional, centered, explicit space-marching techniques. The code assumes specified time-varying temperatures as boundary conditions and accounts for temperature-dependent thermal properties. The research then discusses the results of analyzing temperature data measured while radiantly heating a carbon/carbon specimen up to 1920 F. The temperature was measured using thermocouple (TC) plugs (small carbon/carbon material specimens), with four embedded TC plugs inserted into the larger carbon/carbon specimen. The purpose of analyzing the test data was to estimate the surface heat flux and temperature values from the internal temperature measurements using direct and inverse heat transfer methods, thus aiding in the thermal and structural design and analysis of high-temperature vehicles.
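    The inverse step described above, marching from interior temperature measurements outward to the surface, can be sketched in one dimension for constant properties (a simplification of the study's variable-property finite-volume code; all names and values are illustrative):

```python
import numpy as np

def march_to_surface(T_meas, dx, alpha, dt, n_steps):
    """Explicit space marching for the 1-D inverse heat conduction problem.
    Rearranging the heat equation dT/dt = alpha * d2T/dx2 gives
        T[i-1] = 2*T[i] - T[i+1] + (dx**2 / alpha) * dT[i]/dt,
    so interior temperature histories can be stepped outward toward the
    surface. T_meas has shape (2, nt): row 0 is the shallower sensor,
    row 1 the deeper one."""
    T_i, T_ip1 = T_meas[0].copy(), T_meas[1].copy()
    for _ in range(n_steps):
        dTdt = np.gradient(T_i, dt)        # time derivative from the history
        T_im1 = 2.0 * T_i - T_ip1 + dx**2 / alpha * dTdt
        T_ip1, T_i = T_i, T_im1            # shift one node toward the surface
    return T_i
```

    In practice the measured histories are noisy, and space marching amplifies that noise, which is why the inverse problem is considerably harder than the direct one.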

  3. Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.

    PubMed

    Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews

    2015-03-01

    This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.

  4. Quantitative description of realistic wealth distributions by kinetic trading models

    NASA Astrophysics Data System (ADS)

    Lammoglia, Nelson; Muñoz, Víctor; Rogan, José; Toledo, Benjamín; Zarama, Roberto; Valdivia, Juan Alejandro

    2008-10-01

    Data on wealth distributions in trading markets show a power-law behavior x^{-(1+α)} at the high end, where, in general, α is greater than 1 (Pareto's law). Models based on kinetic theory, in which a set of interacting agents trade money, yield power-law tails if agents are assigned a saving propensity. In this paper we solve the inverse problem, that is, we find the saving propensity distribution that yields a given wealth distribution over all wealth ranges. This is done explicitly for two recently published and comprehensive wealth datasets.
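    The forward direction of this problem, generating a wealth distribution from a given saving-propensity distribution, can be sketched with a standard kinetic exchange simulation (our illustrative construction in the spirit of the models the paper inverts; parameters are arbitrary):

```python
import numpy as np

def simulate_wealth(n_agents=2000, n_steps=200000, seed=1):
    """Kinetic exchange sketch: each agent saves a fraction lam of its
    wealth in every trade and the rest is randomly redistributed between
    the two trading partners. Quenched, widely distributed saving
    propensities produce a heavy (Pareto-like) upper tail."""
    rng = np.random.default_rng(seed)
    w = np.ones(n_agents)                         # equal initial wealth
    lam = rng.uniform(0.0, 1.0, n_agents)         # quenched saving propensities
    for _ in range(n_steps):
        i, j = rng.integers(n_agents, size=2)
        if i == j:
            continue
        eps = rng.random()
        pot = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]  # money on the table
        w[i] = lam[i] * w[i] + eps * pot
        w[j] = lam[j] * w[j] + (1 - eps) * pot    # trade conserves w[i] + w[j]
    return w, lam
```

    Total wealth is conserved exactly in every trade; the inverse problem studied in the paper asks which distribution of lam reproduces an observed wealth distribution.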

  5. Green operators for low regularity spacetimes

    NASA Astrophysics Data System (ADS)

    Sanchez Sanchez, Yafet; Vickers, James

    2018-02-01

    In this paper we define and construct advanced and retarded Green operators for the wave operator on spacetimes with low regularity. In order to do so we require that the spacetime satisfies the condition of generalised hyperbolicity which is equivalent to well-posedness of the classical inhomogeneous problem with zero initial data where weak solutions are properly supported. Moreover, we provide an explicit formula for the kernel of the Green operators in terms of an arbitrary eigenbasis of H^1 and a suitable Green matrix that solves a system of second order ODEs.

  6. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes: they have better stability characteristics, and the extra work they require can be designed to perform efficiently on current and future-generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, however, the number of Newton iterations needed per step to solve the discretized system of equations can vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, its suitability is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing a reasonably small number of vectors N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
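    For reference, the core of the GMRES algorithm described above, Arnoldi orthogonalization followed by a small least-squares solve, can be sketched in a few lines (a textbook, dense-matrix version without restarts or preconditioning, not the flow-solver implementation):

```python
import numpy as np

def gmres(A, b, n_vec=20, tol=1e-12):
    """GMRES without restarts: build an orthonormal Krylov basis by
    Arnoldi iteration, then minimize the L2 norm of the residual over
    that subspace via a small least-squares problem (initial guess 0)."""
    m = len(b)
    Q = np.zeros((m, n_vec + 1))           # orthonormal Krylov vectors
    H = np.zeros((n_vec + 1, n_vec))       # upper Hessenberg projection
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    k = n_vec
    for j in range(n_vec):
        v = A @ Q[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < tol:              # happy breakdown: exact solution
            k = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return Q[:, :k] @ y                    # minimizer of ||b - A x||
```

    Keeping n_vec small (the N of 5 to 20 mentioned above) bounds both the storage for Q and the work per time step.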

  7. Documentation of computer program VS2D to solve the equations of fluid flow in variably saturated porous media

    USGS Publications Warehouse

    Lappala, E.G.; Healy, R.W.; Weeks, E.P.

    1987-01-01

    This report documents FORTRAN computer code for solving problems involving variably saturated single-phase flow in porous media. The flow equation is written with total hydraulic potential as the dependent variable, which allows straightforward treatment of both saturated and unsaturated conditions. The spatial derivatives in the flow equation are approximated by central differences, and time derivatives are approximated either by a fully implicit backward or by a centered-difference scheme. Nonlinear conductance and storage terms may be linearized using either an explicit method or an implicit Newton-Raphson method. Relative hydraulic conductivity is evaluated at cell boundaries by using either full upstream weighting, the arithmetic mean, or the geometric mean of values from adjacent cells. Nonlinear boundary conditions treated by the code include infiltration, evaporation, and seepage faces. Extraction by plant roots that is caused by atmospheric demand is included as a nonlinear sink term. These nonlinear boundary and sink terms are linearized implicitly. The code has been verified for several one-dimensional linear problems for which analytical solutions exist and against two nonlinear problems that have been simulated with other numerical models. A complete listing of data-entry requirements and data entry and results for three example problems are provided. (USGS)

  8. Guided waves dispersion equations for orthotropic multilayered pipes solved using standard finite elements code.

    PubMed

    Predoi, Mihai Valentin

    2014-09-01

    The dispersion curves for hollow multilayered cylinders are prerequisites in any practical guided-wave application on such structures. The equations for homogeneous isotropic materials were established more than 120 years ago. The difficulties in finding numerical solutions to the analytic expressions remain considerable, especially if the materials are orthotropic and visco-elastic, as in the composites used for pipes in recent decades. Among other numerical techniques, the semi-analytical finite element method has proven its capability of solving this problem. Two possibilities exist to model a finite element eigenvalue problem: a two-dimensional cross-section model of the pipe, or a model of a radial segment intersecting the layers between the inner and outer radius of the pipe. The latter possibility is adopted here, and distinct differential problems are deduced for longitudinal L(0,n), torsional T(0,n) and flexural F(m,n) modes. Eigenvalue problems are deduced for the three mode classes, offering explicit forms of each coefficient for the matrices used in an available general-purpose finite element code. Comparisons with existing solutions for pipes filled with non-linear viscoelastic fluid or with visco-elastic coatings, as well as for a fully orthotropic hollow cylinder, all prove the reliability and ease of use of this method. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    Up to now, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful at accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric linear systems which avoids both difficulties. We explicitly compute an approximate inverse of our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, possibly with a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
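    The column-by-column least-squares construction described in the last sentences can be sketched as follows (a dense toy version of the approximate-inverse idea; a production code would store M sparsely and solve the n small problems in parallel):

```python
import numpy as np

def approximate_inverse(A, pattern):
    """Approximate inverse M of A, one column at a time: for column j,
    minimize ||A[:, idx] m - e_j|| over the allowed nonzero rows idx.
    The n column problems are independent, hence trivially parallel."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        idx = pattern[j]                   # prescribed sparsity of column j
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, idx], e, rcond=None)
        M[idx, j] = m
    return M
```

    Applying M is then a single (sparse) matrix-vector product per iteration, with none of the recursive triangular solves that make ILU hard to parallelize.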

  10. [Current problems in the data acquisition of digitized virtual human and the countermeasures].

    PubMed

    Zhong, Shi-zhen; Yuan, Lin

    2003-06-01

    As a relatively new field of medical science research that has attracted attention from researchers worldwide, the study of the digitized virtual human still awaits long-term dedicated effort for its full development. In the full array of research projects of the integrated Virtual Chinese Human project, the virtual visible human, virtual physical human, virtual physiome, and intellectualized virtual human must be included as the four essential constituent components. Primary importance should be given to solving the problems concerning data acquisition for the dataset of this immense project. Currently nine virtual human datasets have been established worldwide; these are subjected to critical analyses in this paper, with special attention given to the problems in data storage and the techniques employed in these datasets. On the basis of the current research status of the Virtual Chinese Human project, the authors propose some countermeasures for solving the problems in data acquisition for the dataset, which include (1) giving priority to quality control instead of merely racing for quantity and speed, (2) improving the setting up of markers specific to the tissues and organs to meet the requirements of information technology, and (3) attending to the development potential of the dataset, which should have explicit pertinence to specific actual applications.

  11. Comptonization of X-rays by low-temperature electrons. [photon wavelength redistribution in cosmic sources

    NASA Technical Reports Server (NTRS)

    Illarionov, A.; Kallman, T.; Mccray, R.; Ross, R.

    1979-01-01

    A method is described for calculating the spectrum that results from the Compton scattering of a monochromatic source of X-rays by low-temperature electrons, both for initial-value relaxation problems and for steady-state spatial diffusion problems. The method gives an exact solution of the initial-value problem for evolution of the spectrum in an infinite homogeneous medium if Klein-Nishina corrections to the Thomson cross section are neglected. This, together with approximate solutions for problems in which Klein-Nishina corrections are significant and/or spatial diffusion occurs, shows spectral structure near the original photon wavelength that may be used to infer physical conditions in cosmic X-ray sources. Explicit results, shown for examples of time relaxation in an infinite medium and spatial diffusion through a uniform sphere, are compared with results obtained by Monte Carlo calculations and by solving the appropriate Fokker-Planck equation.

  12. A SEMI-LAGRANGIAN TWO-LEVEL PRECONDITIONED NEWTON-KRYLOV SOLVER FOR CONSTRAINED DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Biros, George

    2017-01-01

    We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.
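
    The semi-Lagrangian idea used above for the transport equation (trace characteristics backward, then interpolate) can be sketched in one dimension. The constant velocity and linear interpolation below are simplifying assumptions; the paper uses a second-order scheme with a pseudospectral discretization.

```python
import numpy as np

def semi_lagrangian_step(u, v, dt, x, period):
    """One step for u_t + v u_x = 0 on a periodic grid: trace each grid
    point back along its characteristic, then interpolate the old field
    at the departure points."""
    depart = x - v * dt                        # departure points (constant velocity)
    return np.interp(depart, x, u, period=period)

# Illustrative use: advect a Gaussian bump one grid spacing to the right.
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)
u1 = semi_lagrangian_step(u0, v=1.0, dt=1.0 / n, x=x, period=1.0)
```

    Because the departure points are found by following characteristics, the step remains stable even when `v * dt` exceeds the grid spacing, which is the main attraction over the explicit Runge-Kutta time integration mentioned in the comparison.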

  13. A Spectral Multi-Domain Penalty Method for Elliptic Problems Arising From a Time-Splitting Algorithm For the Incompressible Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Diamantopoulos, Theodore; Rowe, Kristopher; Diamessis, Peter

    2017-11-01

    The Collocation Penalty Method (CPM) solves a PDE on the interior of a domain, while weakly enforcing boundary conditions at domain edges via penalty terms, and naturally lends itself to high-order and multi-domain discretization. Such spectral multi-domain penalty methods (SMPM) have been used to solve the Navier-Stokes equations. Bounds for penalty coefficients are typically derived using the energy method to guarantee stability for time-dependent problems. The choice of collocation points and penalty parameter can greatly affect the conditioning and accuracy of a solution. Effort has been made in recent years to relate various high-order methods on multiple elements or domains under the umbrella of the Correction Procedure via Reconstruction (CPR). Most applications of CPR have focused on solving the compressible Navier-Stokes equations using explicit time-stepping procedures. A particularly important aspect which is still missing in the context of the SMPM is a study of the Helmholtz equation arising in many popular time-splitting schemes for the incompressible Navier-Stokes equations. Stability and convergence results for the SMPM for the Helmholtz equation will be presented. Emphasis will be placed on the efficiency and accuracy of high-order methods.

  14. Acoustic streaming: an arbitrary Lagrangian-Eulerian perspective.

    PubMed

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-08-25

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid-structure interaction problems in microacoustofluidic devices. After the formulation's exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches.

  15. Acoustic streaming: an arbitrary Lagrangian–Eulerian perspective

    PubMed Central

    Nama, Nitesh; Huang, Tony Jun; Costanzo, Francesco

    2017-01-01

    We analyse acoustic streaming flows using an arbitrary Lagrangian Eulerian (ALE) perspective. The formulation stems from an explicit separation of time scales resulting in two subproblems: a first-order problem, formulated in terms of the fluid displacement at the fast scale, and a second-order problem, formulated in terms of the Lagrangian flow velocity at the slow time scale. Following a rigorous time-averaging procedure, the second-order problem is shown to be intrinsically steady, and with exact boundary conditions at the oscillating walls. Also, as the second-order problem is solved directly for the Lagrangian velocity, the formulation does not need to employ the notion of Stokes drift, or any associated post-processing, thus facilitating a direct comparison with experiments. Because the first-order problem is formulated in terms of the displacement field, our formulation is directly applicable to more complex fluid–structure interaction problems in microacoustofluidic devices. After the formulation’s exposition, we present numerical results that illustrate the advantages of the formulation with respect to current approaches. PMID:29051631

  16. Object-Image Correspondence for Algebraic Curves under Projections

    NASA Astrophysics Data System (ADS)

    Burdis, Joseph M.; Kogan, Irina A.; Hong, Hoon

    2013-03-01

    We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction in the number of real parameters that need to be eliminated in order to establish the existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  17. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tupek, Michael R.

    2016-06-30

    In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.
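
    The adjoint method mentioned above can be illustrated on a scalar toy problem: forward Euler applied to y' = -θy with objective J = y_N. The backward sweep reuses the stored trajectory, so the gradient cost is independent of the number of parameters. This is a generic sketch, not the paper's C++ framework.

```python
import numpy as np

def forward(theta, y0, h, steps):
    """Forward Euler on y' = -theta * y, storing the trajectory for the adjoint."""
    ys = [y0]
    for _ in range(steps):
        ys.append(ys[-1] * (1.0 - h * theta))
    return np.array(ys)

def adjoint_gradient(theta, ys, h):
    """Backward (adjoint) sweep for the objective J = y_N: start from
    lambda_N = dJ/dy_N = 1 and accumulate each step's explicit
    theta-dependence, d y_{n+1} / d theta = -h * y_n."""
    lam, grad = 1.0, 0.0
    for n in range(len(ys) - 2, -1, -1):
        grad += lam * (-h * ys[n])
        lam *= (1.0 - h * theta)               # d y_{n+1} / d y_n
    return grad

theta, y0, h, steps = 2.0, 1.0, 0.01, 100
ys = forward(theta, y0, h, steps)
g = adjoint_gradient(theta, ys, h)
```

    For this linear problem the discrete gradient can be checked against the closed form dJ/dθ = -h N y0 (1 - hθ)^(N-1).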

  18. Implicit and explicit subgrid-scale modeling in discontinuous Galerkin methods for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime

    2017-11-01

    Over the past few years, high-order discontinuous Galerkin (DG) methods for Large-Eddy Simulation (LES) have emerged as a promising approach to solve complex turbulent flows. Despite the significant research investment, the relation between the discretization scheme, the Riemann flux, the subgrid-scale (SGS) model and the accuracy of the resulting LES solver remains unclear. In this talk, we investigate the role of the Riemann solver and the SGS model in the ability to predict a variety of flow regimes, including transition to turbulence, wall-free turbulence, wall-bounded turbulence, and turbulence decay. The Taylor-Green vortex problem and the turbulent channel flow at various Reynolds numbers are considered. Numerical results show that DG methods implicitly introduce numerical dissipation in under-resolved turbulence simulations and, even in the high Reynolds number limit, this implicit dissipation provides a more accurate representation of the actual subgrid-scale dissipation than that by explicit models.

  19. Stabilisation of time-varying linear systems via Lyapunov differential equations

    NASA Astrophysics Data System (ADS)

    Zhou, Bin; Cai, Guang-Bin; Duan, Guang-Ren

    2013-02-01

    This article studies the stabilisation problem for time-varying linear systems via state feedback. Two types of controllers are designed by utilising solutions to Lyapunov differential equations. The first type of feedback controller involves the unique positive-definite solution to a parametric Lyapunov differential equation, which can be solved when either the state transition matrix of the open-loop system is exactly known, or the future information of the system matrices is accessible in advance. Unlike the first class of controllers, which may be difficult to implement in practice, the second type of controller can be easily implemented by solving a state-dependent Lyapunov differential equation with a given positive-definite initial condition. In both cases, explicit conditions are obtained to guarantee the exponentially asymptotic stability of the associated closed-loop systems. Numerical examples show the effectiveness of the proposed approaches.
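
    One common form of a Lyapunov differential equation, dP/dt = A(t)ᵀP + P A(t) + Q, can be marched with any explicit integrator. The RK4 sketch below, with a time-invariant test matrix, is an illustrative assumption rather than the parametric equation studied in the article; for Hurwitz A the solution settles to the algebraic Lyapunov solution of AᵀP + PA + Q = 0.

```python
import numpy as np

def lyap_rhs(P, A, Q):
    # Lyapunov differential equation right-hand side: dP/dt = A(t)^T P + P A(t) + Q
    return A.T @ P + P @ A + Q

def integrate_lyap(P0, A_of_t, Q, t0, t1, steps):
    """Classical RK4 march of the matrix ODE; A_of_t may be time-varying."""
    P, t = P0.copy(), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = lyap_rhs(P, A_of_t(t), Q)
        k2 = lyap_rhs(P + 0.5 * h * k1, A_of_t(t + 0.5 * h), Q)
        k3 = lyap_rhs(P + 0.5 * h * k2, A_of_t(t + 0.5 * h), Q)
        k4 = lyap_rhs(P + h * k3, A_of_t(t + h), Q)
        P = P + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return P

# Time-invariant Hurwitz test matrix: the transient decays and P approaches
# the solution of the algebraic Lyapunov equation.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
Q = np.eye(2)
P = integrate_lyap(np.zeros((2, 2)), lambda t: A, Q, 0.0, 30.0, 3000)
```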

  20. A Note on Substructuring Preconditioning for Nonconforming Finite Element Approximations of Second Order Elliptic Problems

    NASA Technical Reports Server (NTRS)

    Maliassov, Serguei

    1996-01-01

    In this paper an algebraic substructuring preconditioner is considered for nonconforming finite element approximations of second order elliptic problems in 3D domains with a piecewise constant diffusion coefficient. Using a substructuring idea and a block Gauss elimination, part of the unknowns is eliminated and the Schur complement obtained is preconditioned by a spectrally equivalent very sparse matrix. In the case of quasiuniform tetrahedral mesh an appropriate algebraic multigrid solver can be used to solve the problem with this matrix. Explicit estimates of condition numbers and implementation algorithms are established for the constructed preconditioner. It is shown that the condition number of the preconditioned matrix does not depend on either the mesh step size or the jump of the coefficient. Finally, numerical experiments are presented to illustrate the theory being developed.
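
    The substructuring step described above (eliminate part of the unknowns by block Gauss elimination, then solve the Schur complement system) can be illustrated with dense NumPy blocks; an actual implementation keeps everything sparse and only preconditions the Schur complement rather than forming it.

```python
import numpy as np

def solve_via_schur(A11, A12, A21, A22, f1, f2):
    """Block Gauss elimination: eliminate the first block of unknowns,
    solve the Schur complement system S x2 = g, then back-substitute."""
    A11_inv_A12 = np.linalg.solve(A11, A12)
    A11_inv_f1 = np.linalg.solve(A11, f1)
    S = A22 - A21 @ A11_inv_A12                # Schur complement
    g = f2 - A21 @ A11_inv_f1
    x2 = np.linalg.solve(S, g)
    x1 = A11_inv_f1 - A11_inv_A12 @ x2
    return x1, x2

# Illustrative use on a random, strongly diagonally shifted (well-conditioned) matrix.
rng = np.random.default_rng(0)
n1, n2 = 5, 3
A = rng.standard_normal((n1 + n2, n1 + n2)) + (n1 + n2) * np.eye(n1 + n2)
f = rng.standard_normal(n1 + n2)
x1, x2 = solve_via_schur(A[:n1, :n1], A[:n1, n1:],
                         A[n1:, :n1], A[n1:, n1:], f[:n1], f[n1:])
```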

  1. Implementation of Rosenbrock methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shampine, L. F.

    1980-11-01

    Rosenbrock formulas have shown promise in research codes for the solution of initial-value problems for stiff systems of ordinary differential equations (ODEs). To help assess their practical value, the author wrote an item of mathematical software based on such a formula. This required a variety of algorithmic and software developments. Those of general interest are reported in this paper. Among them is a way to select automatically, at every step, an explicit Runge-Kutta formula or a Rosenbrock formula according to the stiffness of the problem. Solving linear systems is important to methods for stiff ODEs, and is rather special for Rosenbrock methods. A cheap, effective estimate of the condition of the linear systems is derived. Some numerical results are presented to illustrate the developments.
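
    The flavor of a Rosenbrock formula can be conveyed by its simplest member, the first-order linearly implicit Euler method: one linear solve with the Jacobian per step, and no Newton iteration. The stiff test problem below is illustrative; the software described in the abstract uses higher-order formulas.

```python
import numpy as np

def rosenbrock1_step(f, jac, y, h):
    """One step of the simplest (first-order) Rosenbrock formula, i.e. the
    linearly implicit Euler method: solve (I - h J) k = h f(y), y_new = y + k.
    The linear solve with the Jacobian replaces a Newton iteration."""
    n = y.size
    k = np.linalg.solve(np.eye(n) - h * jac(y), h * f(y))
    return y + k

# Stiff scalar test problem y' = -1000 y: the method stays stable at a step
# size far beyond the explicit Euler limit h < 2/1000.
f = lambda y: -1000.0 * y
jac = lambda y: np.array([[-1000.0]])
y = np.array([1.0])
for _ in range(5):
    y = rosenbrock1_step(f, jac, y, h=0.01)
```

    Each step here multiplies the solution by 1/(1 + 1000h) = 1/11, mimicking the true exponential decay instead of blowing up as an explicit formula would.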

  2. Solution to the Problems of the Sustainable Development Management

    NASA Astrophysics Data System (ADS)

    Rusko, Miroslav; Procházková, Dana

    2011-01-01

    The paper shows that the environment is one of the basic public assets of a human system, and it must therefore be specially protected. According to our present knowledge, sustainability is necessary for all human systems, and it is necessary to invoke the sustainable development principles in all human system assets. Sustainable development is understood as development that does not erode the ecological, social or political systems on which it depends, but explicitly accepts ecological limits on economic activity while fully supporting human needs. The paper summarises the conditions for sustainable development, the tools, methods and techniques to solve environmental problems, and the tasks of executive governance in the environmental segment.

  3. Progress with multigrid schemes for hypersonic flow problems

    NASA Technical Reports Server (NTRS)

    Radespiel, R.; Swanson, R. C.

    1991-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm uses upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving three different hypersonic flow problems. Some new multigrid schemes based on semicoarsening strategies are shown to be quite effective in relieving the stiffness caused by the high aspect ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6 and Mach numbers up to 25.
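
    The coarse-grid-correction idea behind all such multigrid schemes can be sketched for the 1D Poisson equation with weighted-Jacobi smoothing. This two-level model problem is only an illustration; the schemes in the abstract use upwind discretizations and multistage smoothers for hypersonic flow.

```python
import numpy as np

def laplacian(n, h):
    """Standard 3-point matrix for -u'' with zero Dirichlet boundaries."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps (damps the oscillatory error modes)."""
    for _ in range(sweeps):
        up = np.pad(u, 1)                      # zero Dirichlet boundary values
        u = (1 - omega) * u + omega * 0.5 * (up[:-2] + up[2:] + h**2 * f)
    return u

def two_grid(u, f, h, sweeps=3):
    """One two-level cycle: pre-smooth, restrict the residual (full weighting),
    solve exactly on the coarse grid, prolong the correction, post-smooth."""
    n = u.size                                 # n = 2m + 1 interior points
    m = (n - 1) // 2
    u = jacobi(u, f, h, sweeps)
    r = f - laplacian(n, h) @ u
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])
    ec = np.linalg.solve(laplacian(m, 2 * h), rc)
    e = np.zeros(n)
    e[1::2] = ec                               # coarse nodes: direct injection
    ecp = np.pad(ec, 1)
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])       # in-between nodes: interpolation
    return jacobi(u + e, f, h, sweeps)

# Illustrative use: -u'' = pi^2 sin(pi x) on (0, 1).
n, h = 31, 1.0 / 32.0
x = np.arange(1, n + 1) * h
fvec = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, fvec, h)
```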

  4. ɛ-subgradient algorithms for bilevel convex optimization

    NASA Astrophysics Data System (ADS)

    Helou, Elias S.; Simões, Lucas E. A.

    2017-05-01

    This paper introduces and studies the convergence properties of a new class of explicit ɛ-subgradient methods for the task of minimizing a convex function over a set of minimizers of another convex minimization problem. The general algorithm specializes to some important cases, such as first-order methods applied to a varying objective function, which have computationally cheap iterations. We present numerical experimentation concerning certain applications where the theoretical framework encompasses efficient algorithmic techniques, enabling the use of the resulting methods to solve very large practical problems arising in tomographic image reconstruction. ES Helou was supported by FAPESP grants 2013/07375-0 and 2013/16508-3 and CNPq grant 311476/2014-7. LEA Simões was supported by FAPESP grants 2011/02219-4 and 2013/14615-7.
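
    The bilevel task (minimize f over the set of minimizers of g) can be approached with a simple explicit scheme that combines a gradient of g with a vanishing multiple of a gradient of f. The step sizes, weights, and test functions below are illustrative assumptions, not the ɛ-subgradient methods analyzed in the paper.

```python
import numpy as np

# Bilevel toy problem: minimize f over the minimizers of g.
f = lambda x: x[0]**2 + (x[1] - 3.0)**2
grad_f = lambda x: np.array([2.0 * x[0], 2.0 * (x[1] - 3.0)])
g = lambda x: (x[0] - x[1])**2                 # argmin g = the line x1 = x2
grad_g = lambda x: 2.0 * (x[0] - x[1]) * np.array([1.0, -1.0])

x = np.zeros(2)
alpha = 0.2                                    # fixed step on the inner objective
for k in range(1, 100001):
    eta = 1.0 / k                              # vanishing weight on the outer objective
    x = x - alpha * (grad_g(x) + eta * grad_f(x))
# x drifts along the feasible line x1 = x2 toward the point minimizing f on
# that line, which is (1.5, 1.5).
```

    The vanishing weight is the key design choice: the g-step keeps iterates near argmin g, while the slowly decaying f-step (with nonsummable combined steps) selects among those minimizers.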

  5. On integrability of the Killing equation

    NASA Astrophysics Data System (ADS)

    Houri, Tsuyoshi; Tomoda, Kentaro; Yasui, Yukinori

    2018-04-01

    Killing tensor fields have been thought of as describing the hidden symmetry of space(-time) since they are in one-to-one correspondence with polynomial first integrals of geodesic equations. Since many problems in classical mechanics can be formulated as geodesic problems in curved space and spacetime, solving the defining equation for Killing tensor fields (the Killing equation) is a powerful way to integrate equations of motion. Thus it has been desirable to formulate the integrability conditions of the Killing equation, which serve to determine the number of linearly independent solutions and also to restrict the possible forms of solutions tightly. In this paper, we show the prolongation for the Killing equation in a manner that uses Young symmetrizers. Using the prolonged equations, we provide the integrability conditions explicitly.

  6. Fingerprints selection for topological localization

    NASA Astrophysics Data System (ADS)

    Popov, Vladimir

    2017-07-01

    Problems of visual navigation are extensively studied in contemporary robotics. In particular, we can mention different problems of visual landmark selection, the problem of selecting a minimal set of visual landmarks, selection of partially distinguishable guards, and the problem of placement of visual landmarks. In this paper, we consider one-dimensional color panoramas. Such panoramas can be used for creating fingerprints. Fingerprints give us unique identifiers for visually distinct locations by recovering statistically significant features. Fingerprints can be used as visual landmarks for the solution of various problems of mobile robot navigation. In this paper, we consider a method for automatic generation of fingerprints. In particular, we consider the bounded Post correspondence problem and applications of the problem to consensus fingerprints and topological localization. We propose an efficient approach to solve the bounded Post correspondence problem. In particular, we use an explicit reduction from the decision version of the problem to the satisfiability problem. We present the results of computational experiments for different satisfiability algorithms. In robotic experiments, we consider the average accuracy of reaching the target point for different lengths of routes and types of fingerprints.

  7. Unstructured Finite Elements and Dynamic Meshing for Explicit Phase Tracking in Multiphase Problems

    NASA Astrophysics Data System (ADS)

    Chandra, Anirban; Yang, Fan; Zhang, Yu; Shams, Ehsan; Sahni, Onkar; Oberai, Assad; Shephard, Mark

    2017-11-01

    Multi-phase processes involving phase change at interfaces, such as evaporation of a liquid or combustion of a solid, represent an interesting class of problems with varied applications. Large density ratio across phases, discontinuous fields at the interface and rapidly evolving geometries are some of the inherent challenges which influence the numerical modeling of multi-phase phase change problems. In this work, a mathematically consistent and robust computational approach to address these issues is presented. We use stabilized finite element methods on mixed topology unstructured grids for solving the compressible Navier-Stokes equations. Appropriate jump conditions derived from conservations laws across the interface are handled by using discontinuous interpolations, while the continuity of temperature and tangential velocity is enforced using a penalty parameter. The arbitrary Lagrangian-Eulerian (ALE) technique is utilized to explicitly track the interface motion. Mesh at the interface is constrained to move with the interface while elsewhere it is moved using the linear elasticity analogy. Repositioning is applied to the layered mesh that maintains its structure and normal resolution. In addition, mesh modification is used to preserve the quality of the volumetric mesh. This work is supported by the U.S. Army Grants W911NF1410301 and W911NF16C0117.

  8. Fractional cable model for signal conduction in spiny neuronal dendrites

    NASA Astrophysics Data System (ADS)

    Vitali, Silvia; Mainardi, Francesco

    2017-06-01

    The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing some fractional component into the classical cable model. The Cauchy problem associated with these kinds of models has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we propose how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable, which satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
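
    Although the paper's contribution is an analytic solution via the Efros theorem, fractional derivatives of Caputo type can also be checked numerically. The Grünwald-Letnikov sketch below approximates the order-α derivative of f(t) = t², whose Caputo derivative is 2t^(2-α)/Γ(3-α); for functions with zero initial conditions the Grünwald-Letnikov, Riemann-Liouville, and Caputo definitions coincide.

```python
import math
import numpy as np

def gl_fractional_derivative(f, t, alpha, n=2000):
    """Grünwald-Letnikov approximation of the order-alpha derivative at time t.
    For f with f(0) = f'(0) = 0 this also approximates the Caputo derivative."""
    h = t / n
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):                  # recursive binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    values = f(t - h * np.arange(n + 1))
    return (w @ values) / h**alpha

alpha = 0.5
approx = gl_fractional_derivative(lambda s: s**2, t=1.0, alpha=alpha)
exact = 2.0 / math.gamma(3.0 - alpha)          # Caputo derivative of t^2 at t = 1
```

    The approximation converges at first order in the step size, which is ample for a sanity check of analytic formulas like those derived in the paper.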

  9. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-based smoothed finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphics processing unit (GPU) using a special edge-based smoothed triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulations, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  10. Learning stoichiometry: A comparison of text and multimedia instructional formats

    NASA Astrophysics Data System (ADS)

    Evans, Karen L.

    Even after multiple instructional opportunities, first-year college chemistry students are often unable to apply stoichiometry knowledge in equilibrium and acid-base chemistry problem solving. Cognitive research findings suggest that for learning to be meaningful, learners need to actively construct their own knowledge by integrating new information into, and reorganizing, their prior understandings. Scaffolded inquiry in which facts, procedures, and principles are introduced as needed within the context of authentic problem solving may provide the practice and encoding opportunities necessary for construction of a memorable and usable knowledge base. The dynamic and interactive capabilities of online technology may facilitate stoichiometry instruction that promotes this meaningful learning. Entering college freshmen were randomly assigned to either a technology-rich or text-only set of cognitively informed stoichiometry review materials. Analysis of posttest scores revealed a significant but small difference in the performance of the two treatment groups, with the technology-rich group having the advantage. Both SAT and gender, however, explained more of the variability in the scores. Analysis of the posttest scores from the technology-rich treatment group revealed that the degree of interaction with the Virtual Lab simulation was significantly related to posttest performance and subsumed any effect of prior knowledge as measured by SAT scores. Future users of the online course should be encouraged to engage with the problem-solving opportunities provided by the Virtual Lab simulation through explicit instruction and/or implementation of some level of program control within the course's navigational features.

  11. Multiagent optimization system for solving the traveling salesman problem (TSP).

    PubMed

    Xie, Xiao-Feng; Liu, Jiming

    2009-04-01

    The multiagent optimization system (MAOS) is a nature-inspired method, which supports cooperative search by the self-organization of a group of compact agents situated in an environment with certain shared public knowledge. Moreover, each agent in MAOS is an autonomous entity with personal declarative memory and behavioral components. In this paper, MAOS is refined for solving the traveling salesman problem (TSP), which is a classic hard computational problem. Based on a simplified MAOS version, in which each agent operates on extremely limited declarative knowledge, some simple and efficient components for solving TSP, including two improving heuristics based on a generalized edge assembly recombination, are implemented. Compared with metaheuristics in adaptive memory programming, MAOS is particularly suitable for supporting cooperative search. The experimental results on two TSP benchmark data sets show that MAOS is competitive with some state-of-the-art algorithms, including Lin-Kernighan-Helsgaun, IBGLK, and PHGA, although MAOS does not use any explicit local search during the runtime. The contributions of the MAOS components are investigated. The results indicate that certain clues can be positive for making suitable selections before time-consuming computation. More importantly, they show that the cooperative search of agents can achieve overall good performance with a macro rule in the switch mode, which deploys certain alternate search rules whose offline performances are negatively correlated. Using simple alternate rules may prevent the high difficulty of seeking an omnipotent rule that is efficient for a large data set.

  12. A Lagrangian discontinuous Galerkin hydrodynamic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaodong; Morgan, Nathaniel Ray; Burton, Donald E.

    Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for solving the two-dimensional gas dynamic equations on unstructured hybrid meshes. The physical conservation laws for the momentum and total energy are discretized using a DG method based on linear Taylor expansions. Three different approaches are investigated for calculating the density variation over the element. The first approach evolves a Taylor expansion of the specific volume field. The second approach follows certain finite element methods and uses the strong mass conservation to calculate the density field at a location inside the element or on the element surface. The third approach evolves a Taylor expansion of the density field. The nodal velocity, and the corresponding forces, are explicitly calculated by solving a multidirectional approximate Riemann problem. An effective limiting strategy is presented that ensures monotonicity of the primitive variables. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. Results from a suite of test problems are presented to demonstrate the robustness and expected second-order accuracy of this new method.

  13. A three dimensional immersed smoothed finite element method (3D IS-FEM) for fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong

    2013-02-01

    A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral elements is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of the nonlinear solids placed within the incompressible viscous fluid governed by the Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flows and smoothed finite element methods to calculate the transient dynamic responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. In comparisons with the referenced works, including experiments, it is clear that the proposed 3D IS-FEM ensures stability of the scheme with the second-order spatial convergence property, and the IS-FEM is fairly insensitive to the mesh size ratio over a wide range.

  14. Three-dimensional, ten-moment multifluid simulation of the solar wind interaction with Mercury

    NASA Astrophysics Data System (ADS)

    Dong, Chuanfei; Hakim, Ammar; Wang, Liang; Bhattacharjee, Amitava; Germaschewski, Kai; Dibraccio, Gina

    2017-10-01

    We investigate Mercury's magnetosphere using the Gkeyll ten-moment multifluid code, which solves the continuity, momentum, and pressure tensor equations of both protons and electrons, as well as the full Maxwell equations. Non-ideal effects like the Hall effect, inertia, and tensorial pressures are self-consistently embedded without the need to explicitly solve a generalized Ohm's law. Previously, we have benchmarked this approach on classical test problems like the Orszag-Tang vortex and the GEM reconnection challenge problem. We first validate the model against MESSENGER magnetic field data through data-model comparisons. Both day- and night-side magnetic reconnection are studied in detail. In addition, we include a mantle layer (with a resistivity profile) and a perfectly conducting core inside the planet body to accurately represent Mercury's interior. The intrinsic dipole magnetic field may be modified inside the planetary body due to the weak magnetic moment of Mercury. By including the planetary interior, we can capture the correct plasma boundary locations (e.g., bow shock and magnetopause), especially during a space weather event.

  15. Substructure of fuzzy dark matter haloes

    NASA Astrophysics Data System (ADS)

    Du, Xiaolong; Behrens, Christoph; Niemeyer, Jens C.

    2017-02-01

    We derive the halo mass function (HMF) for fuzzy dark matter (FDM) by solving the excursion set problem explicitly with a mass-dependent barrier function, which has not been done before. Compared to the naive approach of applying the Sheth-Tormen HMF to FDM, our approach yields a higher cutoff mass, and the cutoff mass changes less strongly with redshift. Using merger trees constructed with a modified version of the Lacey & Cole formalism, which accounts for the suppressed small-scale power and scale-dependent growth of FDM haloes, together with the semi-analytic GALACTICUS code, we study the statistics of halo substructure, including the effects of dynamical friction and tidal stripping. We find that if the dark matter is a mixture of cold dark matter (CDM) and FDM, halo substructure is suppressed on small scales, which may solve the missing satellites problem faced by the pure CDM model. The suppression becomes stronger with increasing FDM fraction or decreasing FDM mass, and may therefore be used to constrain the FDM model.
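    The first-crossing problem at the heart of the excursion set formalism can be illustrated with a short Monte Carlo sketch. A flat barrier is used here purely so the result can be checked against a closed form; the paper's barrier is mass-dependent, and all numbers below are illustrative:

```python
import math, random

def crossing_fraction(B, S_max, n_steps=200, n_walks=5000, seed=1):
    """Monte Carlo excursion set: fraction of sharp-k random walks delta(S)
    that cross a flat barrier B before reaching variance S_max."""
    rng = random.Random(seed)
    dS = S_max / n_steps
    crossed = 0
    for _ in range(n_walks):
        delta = 0.0
        for _ in range(n_steps):
            delta += rng.gauss(0.0, math.sqrt(dS))  # independent increment per dS
            if delta >= B:
                crossed += 1
                break
    return crossed / n_walks

# For a flat barrier the exact first-crossing fraction is erfc(B / sqrt(2 * S_max)).
print(crossing_fraction(1.0, 1.0), math.erfc(1.0 / math.sqrt(2.0)))
```

The discrete walk slightly under-counts crossings between steps, so the Monte Carlo value sits a little below the analytic one; refining `n_steps` shrinks that bias.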

  16. A Lagrangian discontinuous Galerkin hydrodynamic method

    DOE PAGES

    Liu, Xiaodong; Morgan, Nathaniel Ray; Burton, Donald E.

    2017-12-11

    Here, we present a new Lagrangian discontinuous Galerkin (DG) hydrodynamic method for solving the two-dimensional gas dynamic equations on unstructured hybrid meshes. The physical conservation laws for the momentum and total energy are discretized using a DG method based on linear Taylor expansions. Three different approaches are investigated for calculating the density variation over the element. The first approach evolves a Taylor expansion of the specific volume field. The second approach follows certain finite element methods and uses the strong mass conservation to calculate the density field at a location inside the element or on the element surface. The third approach evolves a Taylor expansion of the density field. The nodal velocity, and the corresponding forces, are explicitly calculated by solving a multidirectional approximate Riemann problem. An effective limiting strategy is presented that ensures monotonicity of the primitive variables. This new Lagrangian DG hydrodynamic method conserves mass, momentum, and total energy. Results from a suite of test problems are presented to demonstrate the robustness and expected second-order accuracy of this new method.

  17. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    PubMed Central

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve this high sampling cost, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm uses the crowdsourced information provided by a large number of users walking through the buildings as the source of location fingerprint data. From the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are taken as reference positions for the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints and obtain representative fingerprints. Based on the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction while preserving localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem of building a location fingerprint database. PMID:27070623
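    The online matching step can be sketched as a nearest-fingerprint lookup. The radio-map values and locations below are invented placeholders, and the actual algorithm builds its representative fingerprints by clustering crowdsourced data rather than from a hand-made table:

```python
import math

# Hypothetical radio-map: representative fingerprints (RSSI in dBm per access
# point) keyed by physical location, as a clustering step might produce.
radio_map = {
    (0.0, 0.0): [-40, -70, -85],
    (5.0, 0.0): [-60, -50, -80],
    (5.0, 5.0): [-75, -55, -45],
}

def locate(observed_rssi):
    """Return the location whose fingerprint is closest (Euclidean) to the observation."""
    def dist(fp):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp, observed_rssi)))
    return min(radio_map, key=lambda loc: dist(radio_map[loc]))

print(locate([-58, -52, -79]))  # (5.0, 0.0) — nearest stored fingerprint
```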

  18. FBILI method for multi-level line transfer

    NASA Astrophysics Data System (ADS)

    Kuzmanovska, O.; Atanacković, O.; Faurobert, M.

    2017-07-01

    Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve this non-linear and non-local problem be as fast as possible. Several multilevel codes based on efficient iterative schemes provide very high convergence rates, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme characterized by a very high convergence rate without the need for additional acceleration techniques. In this paper we describe in greater detail the implementation of the FBILI method for multilevel atom line transfer in 1D. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of Ca II line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well-known code MULTI.
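    The defining feature of a Gauss-Seidel-type scheme — using updated values as soon as they become available within a sweep — can be illustrated on a small linear system. This is a generic sketch of the iteration principle, not the FBILI code itself:

```python
import numpy as np

# Diagonally dominant test system A x = b, for which Gauss-Seidel converges.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])

def gauss_seidel(A, b, iters=25):
    """Sweep through the unknowns, reusing freshly updated components at once
    (the trait FBILI shares, applied there to the radiation field)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))  # True
```

Because each sweep folds in the newest information immediately, Gauss-Seidel typically converges in far fewer sweeps than a Jacobi iteration that freezes the old values for a whole sweep.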

  19. Grounded for life: creative symbol-grounding for lexical invention

    NASA Astrophysics Data System (ADS)

    Veale, Tony; Al-Najjar, Khalid

    2016-04-01

    One of the challenges of linguistic creativity is to use words in a way that is novel and striking and even whimsical, to convey meanings that remain stubbornly grounded in the very same world of familiar experiences as serves to anchor the most literal and unimaginative language. The challenge remains unmet by systems that merely shuttle or arrange words to achieve novel arrangements without concern as to how those arrangements are to spur the processes of meaning construction in a listener. In this paper we explore a problem of lexical invention that cannot be solved without a model - explicit or implicit - of the perceptual grounding of language: the invention of apt new names for colours. To solve this problem here we shall call upon the notion of a linguistic readymade, a phrase that is wrenched from its original context of use to be given new meaning and new resonance in new settings. To ensure that our linguistic readymades - which owe a great deal to Marcel Duchamp's notion of found art - are anchored in a consensus model of perception, we introduce the notion of a lexicalised colour stereotype.

  20. General relativistic radiative transfer code in rotating black hole space-time: ARTIST

    NASA Astrophysics Data System (ADS)

    Takahashi, Rohta; Umemura, Masayuki

    2017-02-01

    We present a general relativistic radiative transfer code, ARTIST (Authentic Radiative Transfer In Space-Time), a perfectly causal scheme for following the propagation of radiation with absorption and scattering around a Kerr black hole. The code explicitly solves for the invariant radiation intensity along null geodesics in Kerr-Schild coordinates, and therefore properly includes light bending, Doppler boosting, frame dragging, and gravitational redshift. A notable aspect of ARTIST is that it conserves the radiative energy with high accuracy and is not subject to numerical diffusion, since the transfer is solved on long characteristics along null geodesics. We first solve the wavefront propagation around a Kerr black hole originally explored by Hanni. This demonstrates repeated wavefront collisions, light bending, and causal propagation of radiation at the speed of light. We show that the late-phase decay rate of the total energy of wavefronts near a black hole is determined solely by the black hole spin, in agreement with analytic expectations. ARTIST thus correctly solves the general relativistic radiation fields up to late phases, t ≈ 90 M. We also explore the effects of absorption and scattering, and apply the code to a photon wall problem and an orbiting hotspot problem. All simulations in this study are performed in the equatorial plane around a Kerr black hole. ARTIST is a first step toward general relativistic radiation hydrodynamics.

  1. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or more of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines and are likewise compactly supported; they exactly describe algebraic polynomials and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones.
Our recent results show that there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution of the previous time step (or the initial conditions) and the advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we implement the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the resulting spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. This new Eulerian-Lagrangian collocation scheme also resolves all of the aforementioned numerical problems, owing to its adaptive nature and its ability to control numerical errors in space and time. The proposed method solves advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the use of large numbers of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach yields not only an accurate velocity field but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.

  2. Multigrid calculation of three-dimensional turbomachinery flows

    NASA Technical Reports Server (NTRS)

    Caughey, David A.

    1989-01-01

    Research was performed in the general area of computational aerodynamics, with particular emphasis on the development of efficient techniques for the solution of the Euler and Navier-Stokes equations for transonic flows through the complex blade passages associated with turbomachines. In particular, multigrid methods were developed, using both explicit and implicit time-stepping schemes as smoothing algorithms. The specific accomplishments of the research have included: (1) the development of an explicit multigrid method to solve the Euler equations for three-dimensional turbomachinery flows based upon the multigrid implementation of Jameson's explicit Runge-Kutta scheme (Jameson 1983); (2) the development of an implicit multigrid scheme for the three-dimensional Euler equations based upon lower-upper factorization; (3) the development of a multigrid scheme using a diagonalized alternating direction implicit (ADI) algorithm; (4) the extension of the diagonalized ADI multigrid method to solve the Euler equations of inviscid flow for three-dimensional turbomachinery flows; and also (5) the extension of the diagonalized ADI multigrid scheme to solve the Reynolds-averaged Navier-Stokes equations for two-dimensional turbomachinery flows.

  3. Positive attitudinal shifts with the Physics by Inquiry curriculum across multiple implementations

    NASA Astrophysics Data System (ADS)

    Lindsey, Beth A.; Hsu, Leonardo; Sadaghiani, Homeyra; Taylor, Jack W.; Cummings, Karen

    2012-06-01

    Recent publications have documented positive attitudinal shifts on the Colorado Learning Attitudes about Science Survey (CLASS) among students enrolled in courses with an explicit epistemological focus. We now report positive attitudinal shifts in classes using the Physics by Inquiry (PbI) curriculum, which has only an implicit focus on student epistemologies and nature of science issues. These positive shifts have occurred in several different implementations of the curriculum, across multiple institutions and multiple semesters. In many classes, students experienced significant attitudinal shifts in the problem-solving categories of the CLASS, despite the conceptual focus of most PbI courses.

  4. Concise calculation of the scaling function, exponents, and probability functional of the Edwards-Wilkinson equation with correlated noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y.; Pang, N.; Halpin-Healy, T.

    1994-12-01

    The linear Langevin equation proposed by Edwards and Wilkinson [Proc. R. Soc. London A 381, 17 (1982)] is solved in closed form for noise of arbitrary space and time correlation. Furthermore, the temporal development of the full probability functional describing the height fluctuations is derived exactly, exhibiting an interesting evolution between two distinct Gaussian forms. We determine explicitly the dynamic scaling function for the interfacial width for any given initial condition, isolate the early-time behavior, and discover an invariance that was unsuspected in this problem of arbitrary spatiotemporal noise.

  5. Thermally stratified flow of second grade fluid with non-Fourier heat flux and temperature dependent thermal conductivity

    NASA Astrophysics Data System (ADS)

    Khan, M. Ijaz; Zia, Q. M. Zaigham; Alsaedi, A.; Hayat, T.

    2018-03-01

    This study explores stagnation point flow of a second grade material towards an impermeable stretched cylinder. Non-Fourier heat flux and thermal stratification are considered, and the thermal conductivity depends on temperature. The governing non-linear differential system is solved using a homotopic procedure, and the interval of convergence for the obtained series solutions is explicitly determined. Physical quantities of interest are examined for the influential variables entering the problem. It is found that the curvature parameter leads to an enhancement in velocity and temperature. Furthermore, the temperature for the non-Fourier heat flux model is lower than for Fourier's heat conduction law.

  6. OTIS 3.2 Software Released

    NASA Technical Reports Server (NTRS)

    Riehl, John P.; Sjauw, Waldy K.

    2004-01-01

    Trajectory, mission, and vehicle engineers concern themselves with finding the best way for an object to get from one place to another. These engineers rely upon special software to assist them in this. For a number of years, many engineers have used the OTIS program for this assistance. With OTIS, an engineer can fully optimize trajectories for airplanes, launch vehicles like the space shuttle, interplanetary spacecraft, and orbital transfer vehicles. OTIS provides four modes of operation, with each mode providing successively stronger optimization capability. The most powerful mode uses a mathematical method called implicit integration to solve what engineers and mathematicians call the optimal control problem. OTIS 3.2, which was developed at the NASA Glenn Research Center, is the latest release of this industry workhorse and features new capabilities for parameter optimization and mission design. OTIS stands for Optimal Trajectories by Implicit Simulation, and it is implicit integration that makes OTIS so powerful at solving trajectory optimization problems. Why is this so important? The optimization process not only determines how to get from point A to point B, but it can also determine how to do this with the least amount of propellant, with the lightest starting weight, or in the fastest time possible while avoiding certain obstacles along the way. There are numerous conditions that engineers can use to define optimal, or best. OTIS provides a framework for defining the starting and ending points of the trajectory (point A and point B), the constraints on the trajectory (requirements like "avoid these regions where obstacles occur"), and what is being optimized (e.g., minimize propellant). The implicit integration method can find solutions to very complicated problems when there is not a lot of information available about what the optimal trajectory might be. 
The method was first developed for solving two-point boundary value problems and was adapted for use in OTIS. Implicit integration usually allows OTIS to find solutions to problems much faster than programs that use explicit integration and parametric methods. Consequently, OTIS is best suited to solving very complicated and highly constrained problems.
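    The contrast with explicit marching can be sketched on a two-point boundary value problem, where an implicit treatment couples all unknowns into one system solved at once rather than stepping forward from one end. This is a minimal illustration of the idea, not OTIS itself:

```python
import numpy as np

# Two-point BVP u'' = -u, u(0) = 0, u(pi/2) = 1 (exact solution: sin x).
# All interior unknowns are assembled into one linear system and solved
# simultaneously — the "implicit" viewpoint, as opposed to explicit marching.
n = 101
x = np.linspace(0.0, np.pi / 2, n)
h = x[1] - x[0]

# Central differences: (u[i-1] - 2 u[i] + u[i+1]) / h**2 + u[i] = 0 at interior nodes.
A = np.diag((-2.0 / h**2 + 1.0) * np.ones(n - 2)) \
  + np.diag(np.ones(n - 3) / h**2, 1) + np.diag(np.ones(n - 3) / h**2, -1)
rhs = np.zeros(n - 2)
rhs[-1] = -1.0 / h**2          # boundary value u(pi/2) = 1 moved to the right side

u = np.linalg.solve(A, rhs)
print(np.max(np.abs(u - np.sin(x[1:-1]))))  # small discretization error
```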

  7. Time-fractional Cahn-Allen and time-fractional Klein-Gordon equations: Lie symmetry analysis, explicit solutions and convergence analysis

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Yusuf, Abdullahi; Isa Aliyu, Aliyu; Baleanu, Dumitru

    2018-03-01

    This work presents the Lie symmetry analysis, explicit solutions and convergence analysis for the time-fractional Cahn-Allen (CA) and time-fractional Klein-Gordon (KG) equations with the Riemann-Liouville (RL) derivative. The time-fractional CA and KG equations are reduced to nonlinear ordinary differential equations of fractional order, which we solve using an explicit power series method. The convergence of the obtained explicit solutions is investigated, and some figures for these solutions are presented.

  8. Analysis of composite ablators using massively parallel computation

    NASA Technical Reports Server (NTRS)

    Shia, David

    1995-01-01

    In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) an ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions, and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. 
The gas storage term is included in the explicit pressure calculation of both problems. Results from the ablative composite plate problem are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions, differ from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius-type rate equations and (2) pressure-dependent Arrhenius-type rate equations. The numerical results are compared to experimental results, and the pressure-dependent model captures the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. Good parallel speedup is found on the CM-5: for 32 CPUs, the speedup is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. It also appears that there is an optimum number of CPUs to use for a given problem.
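    The stability constraint that forces explicit schemes to take very small time steps can be seen in a minimal sketch for the 1D heat equation, a stand-in for the ablation equations; all parameters below are illustrative:

```python
import numpy as np

# Explicit FTCS step for u_t = alpha * u_xx on [0, 1] with u = 0 at both ends.
# The scheme is stable only if dt <= dx**2 / (2 * alpha) — the "very small
# time steps" restriction mentioned in the abstract.
alpha, nx = 1.0, 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha                       # safely inside the stability limit

u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))  # initial temperature profile
for _ in range(200):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# The exact solution decays as exp(-pi**2 * alpha * t); compare at the midpoint.
t = 200 * dt
print(abs(u[nx // 2] - np.exp(-np.pi**2 * alpha * t)))  # small
```

Doubling `dt` past the limit makes the highest-frequency mode grow without bound, which is exactly why an implicit or hybrid treatment pays off for stiff problems.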

  9. Automatic partitioning of unstructured meshes for the parallel solution of problems in computational mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Lesoinne, Michel

    1993-01-01

    Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.
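    A minimal geometric partitioner conveys the flavor of such algorithms. Recursive coordinate bisection, sketched below on a toy point set, is one classical member of this family, though the paper's algorithms are considerably more elaborate:

```python
def rcb(points, levels, axis=0):
    """Recursive coordinate bisection: sort along one axis, split at the
    median, and recurse with alternating axes. Yields 2**levels subdomains
    with balanced point counts (a proxy for load balancing)."""
    if levels == 0:
        return [points]
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    nxt = (axis + 1) % 2
    return rcb(pts[:mid], levels - 1, nxt) + rcb(pts[mid:], levels - 1, nxt)

# Partition a 4x4 grid of node coordinates into 4 subdomains of 4 nodes each.
grid = [(i, j) for i in range(4) for j in range(4)]
parts = rcb(grid, 2)
print([len(p) for p in parts])  # [4, 4, 4, 4]
```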

  10. Developing an approach for teaching and learning about Lewis structures

    NASA Astrophysics Data System (ADS)

    Kaufmann, Ilana; Hamza, Karim M.; Rundgren, Carl-Johan; Eriksson, Lars

    2017-08-01

    This study explores first-year university students' reasoning as they learn to draw Lewis structures. We also present a theoretical account of the formal procedure commonly taught for drawing these structures. Students' discussions during problem-solving activities were video recorded and detailed analyses of the discussions were made through the use of practical epistemology analysis (PEA). Our results show that the formal procedure was central for drawing Lewis structures, but its use varied depending on situational aspects. Commonly, the use of individual steps of the formal procedure was contingent on experiences of chemical structures, and other information such as the characteristics of the problem given. The analysis revealed a number of patterns in how students constructed, checked and modified the structure in relation to the formal procedure and the situational aspects. We suggest that explicitly teaching the formal procedure as a process of constructing, checking and modifying might be helpful for students learning to draw Lewis structures. By doing so, the students may learn to check the accuracy of the generated structure not only in relation to the octet rule and formal charge, but also to other experiences that are not explicitly included in the formal procedure.

  11. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used to assess model performance, and the models are ranked by an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximations to the GA model for a variety of hydrological problems.
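    For reference, the implicit GA relation that the nine models approximate can be solved directly by Newton iteration. The sketch below is illustrative (with made-up parameter values) and is not one of the compared approximations:

```python
import math

def ga_cumulative_infiltration(K, S, t, tol=1e-10):
    """Solve the implicit Green-Ampt relation K*t = F - S*ln(1 + F/S) for the
    cumulative infiltration F by Newton iteration, where S = psi * delta_theta
    (wetting-front suction head times moisture deficit)."""
    F = K * t + S                    # starting guess to the right of the root
    while True:
        g = F - S * math.log(1.0 + F / S) - K * t
        dg = F / (S + F)             # dg/dF; positive for F > 0
        F_new = F - g / dg
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

# Hypothetical parameter values (K in cm/h, S in cm, t in h).
F = ga_cumulative_infiltration(K=1.0, S=5.0, t=2.0)
print(abs(F - 5.0 * math.log(1.0 + F / 5.0) - 2.0))  # residual ~ 0
```

An explicit approximation replaces this iteration with a single closed-form evaluation of F(t), trading a little accuracy for speed, which is what the study's comparison quantifies.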

  12. Normalization in Lie algebras via mould calculus and applications

    NASA Astrophysics Data System (ADS)

    Paul, Thierry; Sauzin, David

    2017-11-01

    We establish Écalle's mould calculus in an abstract Lie-theoretic setting and use it to solve a normalization problem, which covers several formal normal form problems in the theory of dynamical systems. The mould formalism allows us to reduce the Lie-theoretic problem to a mould equation, the solutions of which are remarkably explicit and can be fully described by means of a gauge transformation group. The dynamical applications include the construction of Poincaré-Dulac formal normal forms for a vector field around an equilibrium point, a formal infinite-order multiphase averaging procedure for vector fields with fast angular variables (Hamiltonian or not), or the construction of Birkhoff normal forms both in classical and quantum situations. As a by-product we obtain, in the case of harmonic oscillators, the convergence of the quantum Birkhoff form to the classical one, without any Diophantine hypothesis on the frequencies of the unperturbed Hamiltonians.

  13. Numerical solution of the full potential equation using a chimera grid approach

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    1995-01-01

    A numerical scheme utilizing a chimera zonal grid approach for solving the full potential equation in two spatial dimensions is described. Within each grid zone a fully-implicit approximate factorization scheme is used to advance the solution one iteration. This is followed by the explicit advance of all common zonal grid boundaries using a bilinear interpolation of the velocity potential. The presentation is highlighted with numerical results simulating the flow about a two-dimensional, nonlifting, circular cylinder. For this problem, the flow domain is divided into two parts: an inner portion covered by a polar grid and an outer portion covered by a Cartesian grid. Both incompressible and compressible (transonic) flow solutions are included. Comparisons made with an analytic solution as well as single grid results indicate that the chimera zonal grid approach is a viable technique for solving the full potential equation.
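    The zonal-boundary update rests on bilinear interpolation, which can be sketched as follows (a generic node-centered version on a unit-spaced grid; the code's actual donor-grid bookkeeping between the polar and Cartesian zones is not shown):

```python
def bilinear(phi, x, y):
    """Interpolate a node-centered field phi[i][j] (unit grid spacing) at (x, y)
    by weighting the four surrounding nodes — the kind of donor-cell
    interpolation used to fill chimera zonal-boundary values."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * phi[i][j]     + fx * (1 - fy) * phi[i + 1][j]
          + (1 - fx) * fy       * phi[i][j + 1] + fx * fy       * phi[i + 1][j + 1])

# A linear field is reproduced exactly by bilinear interpolation.
phi = [[x + 2 * y for y in range(4)] for x in range(4)]
print(bilinear(phi, 1.25, 2.5))  # 1.25 + 2*2.5 = 6.25
```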

  14. Finite difference methods for transient signal propagation in stratified dispersive media

    NASA Technical Reports Server (NTRS)

    Lam, D. H.

    1975-01-01

    Explicit difference equations are presented for the solution of a signal of arbitrary waveform propagating in an ohmic dielectric, a cold plasma, a Debye model dielectric, and a Lorentz model dielectric. These difference equations are derived from the governing time-dependent integro-differential equations for the electric fields by a finite difference method. A special difference equation is derived for the grid point at the boundary of two different media. Employing this difference equation, transient signal propagation in an inhomogeneous medium can be solved, provided that the medium is approximated in a step-wise fashion. The solutions are generated simply by marching forward in time. It is concluded that while the classical transform methods will remain useful in certain cases, the finite difference methods described here allow an extensive class of problems of transient signal propagation in stratified dispersive media to be solved effectively by numerical methods.

  15. Regularization destriping of remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle

    2017-07-01

    We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set representing the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
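    The explicit minimization of a regularized functional can be sketched in 1D. The quadratic smoothness penalty below is a simplified stand-in for the paper's directionally weighted target functional, and all values are illustrative:

```python
import numpy as np

# Gradient descent on E(u) = ||u - f||^2 + lam * ||u'||^2, i.e. an explicit
# scheme for the Euler-Lagrange equation (u - f) - lam * u'' = 0.
f = np.array([0., 0., 5., 0., 0., 0., 5., 0., 0.])  # signal with spikes ("stripes")
u, lam, tau = f.copy(), 2.0, 0.2                    # tau chosen inside the stable range
for _ in range(500):
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]      # discrete u''
    u = u - tau * ((u - f) - lam * lap)             # explicit descent step

# Regularization spreads the spikes out: the peaks drop below their input height.
print(u.max() < f.max())  # True
```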

  16. The Astrocentric Hypothesis: proposed role of astrocytes in consciousness and memory formation.

    PubMed

    Robertson, James M

    2002-01-01

Consciousness is self-awareness. This process is closely associated with attention and working memory, a special form of short-term memory that is vital when solving explicit tasks. Edelman has equated consciousness with the "remembered present" to highlight the importance of this form of memory (G.M. Edelman, Bright Air, Brilliant Fire, Basic Books, New York, 1992). The majority of other memories are recollections of past events that are encoded, stored, and brought back into consciousness if appropriate for solving new problems. Encoding prior experiences into memories is based on the salience of each event (A.R. Damasio, Descartes' Error, G.P. Putnam's Sons, New York, 1994; G.M. Edelman, Bright Air, Brilliant Fire, Basic Books, New York, 1992). It is proposed that protoplasmic astrocytes bind attended sensory information into consciousness and store encoded memories. This conclusion is supported by research conducted by gliobiologists over the past 15 years. Copyright 2002 Elsevier Science Ltd.

  17. Modification of the nuclear landscape in the inverse problem framework using the generalized Bethe-Weizsäcker mass formula

    NASA Astrophysics Data System (ADS)

    Mavrodiev, S. Cht.; Deliyergiyev, M. A.

We formalized the nuclear mass problem in the inverse problem framework. This approach allows us to infer the underlying model parameters from experimental observations, rather than to predict the observations from the model parameters. The inverse problem was formulated for the numerically generalized semi-empirical mass formula of Bethe and von Weizsäcker. It was solved step by step based on the AME2012 nuclear database. The established parametrization describes the measured nuclear masses of 2564 isotopes with a maximum deviation of less than 2.6 MeV, starting from proton and neutron numbers equal to 1. The explicit form of the unknown functions in the generalized mass formula was discovered step by step using a modified least-χ² procedure, implemented in the algorithms developed by Lubomir Aleksandrov for solving nonlinear systems of equations via the Gauss-Newton method; the procedure also lets us choose the better of two candidate functions with the same χ². In the obtained generalized model, the corrections to the binding energy depend on nine proton (2, 8, 14, 20, 28, 50, 82, 108, 124) and ten neutron (2, 8, 14, 20, 28, 50, 82, 124, 152, 202) magic numbers, as well as on the asymptotic boundaries of their influence. The obtained results were compared with the predictions of other models.
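The kind of least-χ² Gauss-Newton search described above can be illustrated on a toy problem (the nuclear-mass fit itself is far larger): fit y = a·exp(b·x) by repeatedly linearizing the residuals, with step-halving damping so the iteration always improves:

```python
import numpy as np

def gauss_newton(x, y, a, b, n_iter=50):
    """Damped Gauss-Newton sketch for nonlinear least squares:
    linearize the residuals of the toy model y = a*exp(b*x),
    solve the resulting linear least-squares problem, and halve
    the step until the chi^2 decreases."""
    for _ in range(n_iter):
        r = y - a * np.exp(b * x)                      # residuals
        J = np.column_stack([np.exp(b * x),            # d(model)/da
                             a * x * np.exp(b * x)])   # d(model)/db
        da, db = np.linalg.lstsq(J, r, rcond=None)[0]
        step = 1.0
        while step > 1e-8:                             # damping loop
            a_t, b_t = a + step * da, b + step * db
            if np.sum((y - a_t * np.exp(b_t * x)) ** 2) <= np.sum(r ** 2):
                a, b = a_t, b_t
                break
            step *= 0.5
    return a, b

x = np.linspace(0.0, 2.0, 40)
y = 3.0 * np.exp(0.7 * x)          # noise-free synthetic data
a_fit, b_fit = gauss_newton(x, y, a=1.0, b=0.1)
```

On noise-free data the iteration recovers the generating parameters; with real data the residual χ² is what allows ranking two candidate functional forms, as in the mass-formula search.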

  18. A single network adaptive critic (SNAC) architecture for optimal control synthesis for a class of nonlinear systems.

    PubMed

    Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N

    2006-12-01

Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)", is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load, and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life micro-electro-mechanical systems (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.

  19. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  20. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
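The core IMEX idea — advance the stiff (acoustic) part implicitly and the rest explicitly — can be sketched with a first-order IMEX Euler step on a scalar model problem; the ARS and ARK schemes evaluated above are higher-order, multi-stage versions of this additive splitting:

```python
import numpy as np

def imex_euler(y0, t0, t1, dt, lam, g):
    """First-order IMEX Euler sketch: the stiff linear term lam*y is
    taken implicitly (backward Euler), the nonstiff remainder g(t, y)
    explicitly (forward Euler).  One step solves
      (y_new - y)/dt = lam*y_new + g(t, y)   for y_new."""
    t, y = t0, y0
    n = int(round((t1 - t0) / dt))
    for _ in range(n):
        y = (y + dt * g(t, y)) / (1.0 - dt * lam)
        t += dt
    return y

# stiff relaxation toward cos(t): y' = -50*(y - cos t), dt*50 = 5,
# far beyond the fully explicit stability limit
a = 50.0
y_end = imex_euler(0.0, 0.0, 5.0, 0.1, -a, lambda t, y: a * np.cos(t))
```

At this step size a fully explicit Euler integration diverges (the amplification factor is |1 - 5| = 4 per step), while the IMEX step remains stable and tracks the slow forcing.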

  1. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  2. Nonlinear vibration of viscoelastic beams described using fractional order derivatives

    NASA Astrophysics Data System (ADS)

    Lewandowski, Roman; Wielentejczyk, Przemysław

    2017-07-01

The problem of nonlinear, steady-state vibration of beams excited by harmonic forces is investigated in the paper. The viscoelastic material of the beams is described using the Zener rheological model with fractional derivatives. The constitutive equation, which contains derivatives of both stress and strain, significantly complicates the solution to the problem. The von Karman theory is applied to take into account geometric nonlinearities. Amplitude equations are obtained using the finite element method together with the harmonic balance method, and solved using the continuation method. The tangent matrix of the amplitude equations is determined in an explicit form. The stability of the steady-state solution is also examined. A parametric study is carried out to determine the influence of the viscoelastic properties of the material on the beam's responses.
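The harmonic balance step can be illustrated on the simplest nonlinear oscillator; this single-harmonic, integer-order-damping sketch is only an analogue of the paper's method, which couples such amplitude equations with finite elements, fractional damping, and a continuation method to trace multivalued branches:

```python
import numpy as np

def hb_amplitude(F, omega, c, gamma, a_max=10.0, n=200001):
    """Single-harmonic balance for the Duffing oscillator
    x'' + c x' + x + gamma x^3 = F cos(omega t).  Substituting
    x = A cos(omega t + phi) and balancing the fundamental gives
      ((1 - omega^2) A + 0.75 gamma A^3)^2 + (c omega A)^2 = F^2,
    solved here by a brute-force scan over A."""
    A = np.linspace(0.0, a_max, n)
    res = ((1.0 - omega**2) * A + 0.75 * gamma * A**3) ** 2 \
        + (c * omega * A) ** 2 - F**2
    return A[np.argmin(np.abs(res))]

amp = hb_amplitude(F=0.5, omega=1.2, c=0.1, gamma=0.0)
```

With gamma = 0 the amplitude equation reduces to the exact linear frequency response, which gives a direct check of the balance algebra.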

  3. Optical systolic solutions of linear algebraic equations

    NASA Technical Reports Server (NTRS)

    Neuman, C. P.; Casasent, D.

    1984-01-01

The philosophy of, and the data encoding possible in, the systolic array optical processor (SAOP) are reviewed. The multitude of linear algebraic operations achievable on this architecture is examined. These operations include such linear algebraic algorithms as matrix decomposition, direct and indirect solutions, implicit and explicit methods for partial differential equations, eigenvalue and eigenvector calculations, and singular value decomposition. This architecture can be utilized to realize general techniques for solving matrix linear and nonlinear algebraic equations, least mean square error solutions, FIR filters, and nested-loop algorithms for control engineering applications. The data flow and pipelining of operations, design of parallel algorithms and flexible architectures, application of these architectures to computationally intensive physical problems, error source modeling of optical processors, and matching of the computational needs of practical engineering problems to the capabilities of optical processors are emphasized.

  4. Histories approach to general relativity: I. The spacetime character of the canonical description

    NASA Astrophysics Data System (ADS)

    Savvidou, Ntina

    2004-01-01

    The problem of time in canonical quantum gravity is related to the fact that the canonical description is based on the prior choice of a spacelike foliation, hence making a reference to a spacetime metric. However, the metric is expected to be a dynamical, fluctuating quantity in quantum gravity. We show how this problem can be solved in the histories formulation of general relativity. We implement the 3 + 1 decomposition using metric-dependent foliations which remain spacelike with respect to all possible Lorentzian metrics. This allows us to find an explicit relation of covariant and canonical quantities which preserves the spacetime character of the canonical description. In this new construction, we also have the coexistence of the spacetime diffeomorphisms group, Diff(M), and the Dirac algebra of constraints.

  5. Energy dissipation in a friction-controlled slide of a body excited by random motions of the foundation

    NASA Astrophysics Data System (ADS)

    Berezin, Sergey; Zayats, Oleg

    2018-01-01

We study a friction-controlled slide of a body excited by random motions of the foundation it is placed on. Specifically, we are interested in such quantities as displacement, traveled distance, and energy loss due to friction. We assume that the random excitation is switched off at some time (possibly infinite) and show that the problem can be treated in an analytic, explicit manner. Particularly, we derive formulas for the moments of the displacement and distance, and also for the average energy loss. To accomplish that we use the Pugachev-Sveshnikov equation for the characteristic function of a continuous random process given by a system of SDEs. This equation is solved by reduction to a parametric Riemann boundary value problem of complex analysis.
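The quantities of interest can also be estimated by brute-force Monte Carlo on a toy version of the setting; this Euler-Maruyama sketch (Coulomb friction plus white-noise excitation switched off at a finite time) is only an illustration of displacement, distance, and friction energy loss, not the paper's analytic solution:

```python
import numpy as np

def friction_slide(mu=0.3, g=9.81, m=1.0, sigma=1.0, t_off=5.0,
                   t_end=8.0, dt=1e-3, n_paths=2000, seed=0):
    """Toy model: relative velocity v feels Coulomb friction
    -mu*g*sign(v) plus white-noise excitation until t_off.  We
    accumulate displacement, traveled distance, and the friction
    energy loss  E = mu*m*g * distance  along each path."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_paths)
    disp = np.zeros(n_paths)
    dist = np.zeros(n_paths)
    for k in range(int(t_end / dt)):
        t = k * dt
        noise = (sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
                 if t < t_off else 0.0)
        v += -mu * g * np.sign(v) * dt + noise
        disp += v * dt
        dist += np.abs(v) * dt
    return disp, dist, mu * m * g * dist, v

disp, dist, energy, v_final = friction_slide()
```

After the excitation is switched off, friction brings every path to rest, so the sample means of distance and energy stabilize; per path the traveled distance always bounds the net displacement.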

  6. Cognitive conflict without explicit conflict monitoring in a dynamical agent.

    PubMed

    Ward, Robert; Ward, Ronnie

    2006-11-01

    We examine mechanisms for resolving cognitive conflict in an embodied, situated, and dynamic agent, developed through an evolutionary learning process. The agent was required to solve problems of response conflict in a dual-target "catching" task, focusing response on one of the targets while ignoring the other. Conflict in the agent was revealed at the behavioral level in terms of increased latencies to the second target. This behavioral interference was correlated to peak violations of the network's stable state equation. At the level of the agent's neural network, peak violations were also correlated to periods of disagreement in source inputs to the agent's motor effectors. Despite observing conflict at these numerous levels, we did not find any explicit conflict monitoring mechanisms within the agent. We instead found evidence of a distributed conflict management system, characterized by competitive sources within the network. In contrast to the conflict monitoring hypothesis [Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001). Conflict monitoring and cognitive control. Psychological Review, 108(3), 624-652], this agent demonstrates that resolution of cognitive conflict does not require explicit conflict monitoring. We consider the implications of our results for the conflict monitoring hypothesis.

  7. Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jianyuan; Qin, Hong; Liu, Jian

    2015-11-01

Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems; high-order structure-preserving algorithms follow by combination. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified by two physics problems, i.e., the nonlinear Landau damping and the electron Bernstein wave. (C) 2015 AIP Publishing LLC.

  8. A geostatistical approach to the change-of-support problem and variable-support data fusion in spatial analysis

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Wang, Yang; Zeng, Hui

    2016-01-01

A key issue to address in synthesizing spatial data with variable support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems. This approach is based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions of a target support and the associated prediction uncertainties, based on one or more measurements, while honoring those measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity and can pose computational challenges due to the large data size. We developed a local-window geostatistical inverse modeling approach to accommodate spatial nonstationarity and alleviate the computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated, aggregated to multiple supports, and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. It is shown that the suggested local-window geostatistical inverse modeling approach offers a practical way to solve the well-known change-of-support and variable-support data fusion problems in spatial analysis and modeling.

  9. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
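The dual Newton solve at the heart of such closures can be sketched in one dimension: given target moments, find the Lagrange multipliers of the maximum-entropy density by Newton iteration on the (convex) dual, evaluating the moment integrals by quadrature. This toy version recovers a Gaussian from its first three moments; the 35-moment closure above works the same way in a much larger multiplier space:

```python
import numpy as np

def max_entropy_dual(rho, v, n_iter=50):
    """Newton iteration on the dual of a 1-D maximum-entropy moment
    problem: find multipliers alpha so that p(v) = exp(alpha . m(v)),
    with basis m(v) = (1, v, v^2), reproduces the target moments rho.
    The dual gradient is the moment mismatch and the dual Hessian is
    the second-moment matrix of p, so each step is a linear solve."""
    m = np.vstack([np.ones_like(v), v, v**2])      # 3 x N basis
    w = np.gradient(v)                             # quadrature weights
    alpha = np.array([-0.5 * np.log(2 * np.pi), 0.0, -0.5])  # N(0,1) guess
    for _ in range(n_iter):
        p = np.exp(alpha @ m)
        grad = (m * (w * p)).sum(axis=1) - rho     # moment mismatch
        hess = (m * (w * p)) @ m.T                 # dual Hessian
        alpha -= np.linalg.solve(hess, grad)
        if np.max(np.abs(grad)) < 1e-12:
            break
    return alpha, np.exp(alpha @ m)

v = np.linspace(-10.0, 10.0, 4001)
rho = np.array([1.0, 0.5, 1.25])                   # mean 0.5, variance 1
alpha, p = max_entropy_dual(rho, v)
```

The quadrature loop over `m * p` is exactly the fine-grained, embarrassingly parallel kernel that the paper offloads to graphics cards.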

  10. A method of boundary equations for unsteady hyperbolic problems in 3D

    NASA Astrophysics Data System (ADS)

    Petropavlovsky, S.; Tsynkov, S.; Turkel, E.

    2018-07-01

    We consider interior and exterior initial boundary value problems for the three-dimensional wave (d'Alembert) equation. First, we reduce a given problem to an equivalent operator equation with respect to unknown sources defined only at the boundary of the original domain. In doing so, the Huygens' principle enables us to obtain the operator equation in a form that involves only finite and non-increasing pre-history of the solution in time. Next, we discretize the resulting boundary equation and solve it efficiently by the method of difference potentials (MDP). The overall numerical algorithm handles boundaries of general shape using regular structured grids with no deterioration of accuracy. For long simulation times it offers sub-linear complexity with respect to the grid dimension, i.e., is asymptotically cheaper than the cost of a typical explicit scheme. In addition, our algorithm allows one to share the computational cost between multiple similar problems. On multi-processor (multi-core) platforms, it benefits from what can be considered an effective parallelization in time.

  11. Investigation of High School Students' Online Science Information Searching Performance: The Role of Implicit and Explicit Strategies

    NASA Astrophysics Data System (ADS)

    Tsai, Meng-Jung; Hsu, Chung-Yuan; Tsai, Chin-Chung

    2012-04-01

    Due to a growing trend of exploring scientific knowledge on the Web, a number of studies have been conducted to highlight examination of students' online searching strategies. The investigation of online searching generally employs methods including a survey, interview, screen-capturing, or transactional logs. The present study firstly intended to utilize a survey, the Online Information Searching Strategies Inventory (OISSI), to examine users' searching strategies in terms of control, orientation, trial and error, problem solving, purposeful thinking, selecting main ideas, and evaluation, which is defined as implicit strategies. Second, this study conducted screen-capturing to investigate the students' searching behaviors regarding the number of keywords, the quantity and depth of Web page exploration, and time attributes, which is defined as explicit strategies. Ultimately, this study explored the role that these two types of strategies played in predicting the students' online science information searching outcomes. A total of 103 Grade 10 students were recruited from a high school in northern Taiwan. Through Pearson correlation and multiple regression analyses, the results showed that the students' explicit strategies, particularly the time attributes proposed in the present study, were more successful than their implicit strategies in predicting their outcomes of searching science information. The participants who spent more time on detailed reading (explicit strategies) and had better skills of evaluating Web information (implicit strategies) tended to have superior searching performance.

  12. Inverse scattering in 1-D nonhomogeneous media and recovery of the wave speed

    NASA Astrophysics Data System (ADS)

    Aktosun, Tuncay; Klaus, Martin; van der Mee, Cornelis

    1992-04-01

The inverse scattering problem for the 1-D Schrödinger equation d²ψ/dx² + k²ψ = k²P(x)ψ + Q(x)ψ is studied. This equation is equivalent to the 1-D wave equation with speed 1/√(1 − P(x)) in a nonhomogeneous medium where Q(x) acts as a restoring force. When Q(x) is integrable with a finite first moment, P(x) < 1 and bounded below and satisfies two integrability conditions, P(x) is recovered uniquely when the scattering data and Q(x) are known. Some explicitly solved examples are provided.

  13. COMOC: Three dimensional boundary region variant, programmer's manual

    NASA Technical Reports Server (NTRS)

    Orzechowski, J. A.; Baker, A. J.

    1974-01-01

    The three-dimensional boundary region variant of the COMOC computer program system solves the partial differential equation system governing certain three-dimensional flows of a viscous, heat conducting, multiple-species, compressible fluid including combustion. The solution is established in physical variables, using a finite element algorithm for the boundary value portion of the problem description in combination with an explicit marching technique for the initial value character. The computational lattice may be arbitrarily nonregular, and boundary condition constraints are readily applied. The theoretical foundation of the algorithm, a detailed description on the construction and operation of the program, and instructions on utilization of the many features of the code are presented.

  14. Comet composition and density analyzer

    NASA Technical Reports Server (NTRS)

    Clark, B. C.

    1982-01-01

Distinctions between cometary material and other extraterrestrial materials (meteorite suites and stratospherically-captured cosmic dust) are addressed. The technique of X-ray fluorescence (XRF) is employed for analysis of elemental composition. Concomitant with these investigations, the problem of collecting representative samples of comet dust (for rendezvous missions) was solved, and several related techniques such as mineralogic analysis (X-ray diffraction), direct analysis of the nucleus without docking (electron macroprobe), dust flux rate measurement, and test sample preparation were evaluated. An explicit experiment concept based upon X-ray fluorescence analysis of biased and unbiased sample collections was scoped and proposed for a future rendezvous mission with a short-period comet.

  15. Perspectives on Industrial Innovation from Agilent, HP, and Bell Labs

    NASA Astrophysics Data System (ADS)

    Hollenhorst, James

    2014-03-01

    Innovation is the life blood of technology companies. I will give perspectives gleaned from a career in research and development at Bell Labs, HP Labs, and Agilent Labs, from the point of view of an individual contributor and a manager. Physicists bring a unique set of skills to the corporate environment, including a desire to understand the fundamentals, a solid foundation in physical principles, expertise in applied mathematics, and most importantly, an attitude: namely, that hard problems can be solved by breaking them into manageable pieces. In my experience, hiring managers in industry seldom explicitly search for physicists, but they want people with those skills.

  16. Modal Logics with Counting

    NASA Astrophysics Data System (ADS)

    Areces, Carlos; Hoffmann, Guillaume; Denis, Alexandre

    We present a modal language that includes explicit operators to count the number of elements that a model might include in the extension of a formula, and we discuss how this logic has been previously investigated under different guises. We show that the language is related to graded modalities and to hybrid logics. We illustrate a possible application of the language to the treatment of plural objects and queries in natural language. We investigate the expressive power of this logic via bisimulations, discuss the complexity of its satisfiability problem, define a new reasoning task that retrieves the cardinality bound of the extension of a given input formula, and provide an algorithm to solve it.

  17. Three-dimensional, ten-moment multifluid simulation of the solar wind interaction with Mercury

    NASA Astrophysics Data System (ADS)

    Dong, C.; Hakim, A.; Wang, L.; Bhattacharjee, A.; Germaschewski, K.; DiBraccio, G. A.

    2017-12-01

We investigate Mercury's magnetosphere using the Gkeyll ten-moment multifluid code, which solves the continuity, momentum, and pressure tensor equations of both protons and electrons, as well as the full Maxwell equations. Non-ideal effects like the Hall effect, inertia, and tensorial pressures are self-consistently embedded without the need to explicitly solve a generalized Ohm's law. We previously benchmarked this approach on classical test problems such as the Orszag-Tang vortex and the GEM reconnection challenge problem. We first validate the model against MESSENGER magnetic field data through data-model comparisons. Both day- and night-side magnetic reconnection are studied in detail. In addition, we include a mantle layer (with a resistivity profile) and a perfectly conducting core inside the planet body to accurately represent Mercury's interior. The intrinsic dipole magnetic fields may be modified inside the planetary body due to the weak magnetic moment of Mercury. By including the planetary interior, we can capture the correct plasma boundary locations (e.g., bow shock and magnetopause), especially during a space weather event. This study has the potential to enhance the science returns of both the MESSENGER mission and the upcoming BepiColombo mission (to be launched to Mercury in 2018).

  18. Dynamic Relaxation: A Technique for Detailed Thermo-Elastic Structural Analysis of Transportation Structures

    NASA Astrophysics Data System (ADS)

    Shoukry, Samir N.; William, Gergis W.; Riad, Mourad Y.; McBride, Kevyn C.

    2006-08-01

Dynamic relaxation is a technique developed to solve static problems through explicit time integration in a finite element framework. The main advantage of such a technique is the ability to solve a large problem in a relatively short time compared with traditional implicit techniques, especially when using nonlinear material models. This paper describes the use of such a technique in analyzing large transportation structures, such as dowel-jointed concrete pavements and a 306-m-long reinforced concrete bridge superstructure, under the effect of temperature variations. The main feature of the pavement model is the detailed modeling of dowel bars and their interfaces with the surrounding concrete using an extremely fine mesh of solid elements, while in the bridge structure it is the detailed modeling of the girder-deck interface as well as the bracing members between the girders. The 3DFE results were found to be in good agreement with experimentally measured data obtained from instrumented pavement sections and a highway bridge constructed in West Virginia. Thus, such a technique provides a good tool for analyzing the response of large structures to static loads in a fraction of the time required by traditional, implicit finite element methods.
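The basic dynamic relaxation loop can be sketched on a small linear system: march a damped fictitious dynamic system explicitly in pseudo-time until the out-of-balance forces vanish, at which point the static solution K u = f remains. Fictitious nodal masses are chosen from a Gershgorin bound so the explicit integration stays stable (mass scaling and damping choices vary between implementations):

```python
import numpy as np

def dynamic_relaxation(K, f, dt=1.0, damp=0.95, n_steps=20000, tol=1e-10):
    """Solve the static problem K u = f by explicit pseudo-time
    marching of a damped fictitious dynamic system."""
    m = dt * dt * np.abs(K).sum(axis=1)    # Gershgorin-bound masses
    u = np.zeros_like(f)
    v = np.zeros_like(f)
    for _ in range(n_steps):
        r = f - K @ u                      # out-of-balance forces
        if np.linalg.norm(r) < tol:
            break                          # statics recovered
        v = damp * v + dt * r / m          # damped velocity update
        u = u + dt * v                     # explicit position update
    return u

# 5-DOF fixed-fixed spring chain under unit nodal loads
K = 2.0 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
f = np.ones(5)
u = dynamic_relaxation(K, f)
```

The payoff in the paper's setting is that this loop only ever needs matrix-vector products (internal force evaluations), never a factorization of K, which is what makes explicit codes attractive for very large nonlinear models.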

  19. Computer model of two-dimensional solute transport and dispersion in ground water

    USGS Publications Warehouse

    Konikow, Leonard F.; Bredehoeft, J.D.

    1978-01-01

This report presents a model that simulates solute transport in flowing ground water. The model is both general and flexible in that it can be applied to a wide range of problem types. It is applicable to one- or two-dimensional problems involving steady-state or transient flow. The model computes changes in concentration over time caused by the processes of convective transport, hydrodynamic dispersion, and mixing (or dilution) from fluid sources. The model assumes that the solute is non-reactive and that gradients of fluid density, viscosity, and temperature do not affect the velocity distribution. However, the aquifer may be heterogeneous and (or) anisotropic. The model couples the ground-water flow equation with the solute-transport equation. The digital computer program uses an alternating-direction implicit procedure to solve a finite-difference approximation to the ground-water flow equation, and it uses the method of characteristics to solve the solute-transport equation. The latter uses a particle-tracking procedure to represent convective transport and a two-step explicit procedure to solve a finite-difference equation that describes the effects of hydrodynamic dispersion, fluid sources and sinks, and divergence of velocity. This explicit procedure has several stability criteria, but the consequent time-step limitations are automatically determined by the program. The report includes a listing of the computer program, which is written in FORTRAN IV and contains about 2,000 lines. The model is based on a rectangular, block-centered, finite difference grid. It allows the specification of any number of injection or withdrawal wells and of spatially varying diffuse recharge or discharge, saturated thickness, transmissivity, boundary conditions, and initial heads and concentrations.
The program also permits the designation of up to five nodes as observation points, for which a summary table of head and concentration versus time is printed at the end of the calculations. The data input formats for the model require three data cards and from seven to nine data sets to describe the aquifer properties, boundaries, and stresses. The accuracy of the model was evaluated for two idealized problems for which analytical solutions could be obtained. In the case of one-dimensional flow the agreement was nearly exact, but in the case of plane radial flow a small amount of numerical dispersion occurred. An analysis of several test problems indicates that the error in the mass balance will be generally less than 10 percent. The test problems demonstrated that the accuracy and precision of the numerical solution is sensitive to the initial number of particles placed in each cell and to the size of the time increment, as determined by the stability criteria. Mass balance errors are commonly the greatest during the first several time increments, but tend to decrease and stabilize with time.

  20. Prompting children to reason proportionally: Processing discrete units as continuous amounts.

    PubMed

    Boyer, Ty W; Levine, Susan C

    2015-05-01

Recent studies reveal that children can solve proportional reasoning problems presented with continuous amounts that enable intuitive strategies by around 6 years of age but have difficulties with problems presented with discrete units that tend to elicit explicit count-and-match strategies until at least 10 years of age. The current study tests whether performance on discrete unit problems might be improved by prompting intuitive reasoning with continuous-format problems. Participants were kindergarten, second-grade, and fourth-grade students (N = 194) assigned to either an experimental condition, where they were given continuous amount proportion problems before discrete unit proportion problems, or a control condition, where they were given all discrete unit problems. Results of a three-way mixed-model analysis of variance examining school grade, experimental condition, and block of trials indicated that fourth-grade students in the experimental condition outperformed those in the control condition on discrete unit problems in the second half of the experiment, but kindergarten and second-grade students did not differ by condition. This suggests that older children can be prompted to use intuitive strategies to reason proportionally. (c) 2015 APA, all rights reserved.

  1. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    NASA Astrophysics Data System (ADS)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of the stages of Polya's problem-solving framework. The research method was a mixed method with a sequential explanatory design. The subjects were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects was also good on each indicator. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with a computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps. The W-BPS (Weak Bottom Problem Solving) subject failed to meet almost all of the problem-solving indicators and could not precisely construct the initial completion table, so the completion phase following Polya's steps was constrained.

  2. Solving the Self-Interaction Problem in Kohn-Sham Density Functional Theory. Application to Atoms

    DOE PAGES

    Daene, M.; Gonis, A.; Nicholson, D. M.; ...

    2014-10-14

Previously, we proposed a computational methodology that addresses the elimination of the self-interaction error from the Kohn–Sham formulation of the density functional theory. We demonstrated how the exchange potential can be obtained, and presented results of calculations for atomic systems up to Kr carried out within a Cartesian coordinate system. In this paper, we provide complete details of this self-interaction free method formulated in spherical coordinates based on the explicit equidensity basis ansatz. We also prove analytically that derivatives obtained using this method satisfy the Virial theorem for spherical orbitals, where the problem can be reduced to one dimension. We present the results of calculations of ground-state energies of atomic systems throughout the periodic table carried out within the exchange-only mode.

  3. Understanding wheel dynamics.

    PubMed

    Proffitt, D R; Kaiser, M K; Whelan, S M

    1990-07-01

    In five experiments, assessments were made of people's understandings about the dynamics of wheels. It was found that undergraduates make highly erroneous dynamical judgments about the motions of this commonplace event, both in explicit problem-solving contexts and when viewing ongoing events. These problems were also presented to bicycle racers and high-school physics teachers; both groups were found to exhibit misunderstandings similar to those of naive undergraduates. Findings were related to our account of dynamical event complexity. The essence of this account is that people encounter difficulties when evaluating the dynamics of any mechanical system that has more than one dynamically relevant object parameter. A rotating wheel is multidimensional in this respect: in addition to the motion of its center of mass, its mass distribution is also of dynamical relevance. People do not spontaneously form the essential multidimensional quantities required to adequately evaluate wheel dynamics.

  4. Photon scattering from a system of multilevel quantum emitters. I. Formalism

    NASA Astrophysics Data System (ADS)

    Das, Sumanta; Elfving, Vincent E.; Reiter, Florentin; Sørensen, Anders S.

    2018-04-01

    We introduce a formalism to solve the problem of photon scattering from a system of multilevel quantum emitters. Our approach provides a direct solution of the scattering dynamics. As such the formalism gives the scattered fields' amplitudes in the limit of a weak incident intensity. Our formalism is equipped to treat both multiemitter and multilevel emitter systems, and is applicable to a plethora of photon-scattering problems, including conditional state preparation by photodetection. In this paper, we develop the general formalism for an arbitrary geometry. In the following paper (part II) S. Das et al. [Phys. Rev. A 97, 043838 (2018), 10.1103/PhysRevA.97.043838], we reduce the general photon-scattering formalism to a form that is applicable to one-dimensional waveguides and show its applicability by considering explicit examples with various emitter configurations.

  5. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  6. Geometric multigrid for an implicit-time immersed boundary method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.

    2014-10-12

The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
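The smoother-plus-coarse-grid-correction structure underlying geometric multigrid can be sketched on a much simpler problem than the IB equations. The following toy (not the paper's solver) is a two-grid cycle for the 1-D Poisson problem -u'' = f with homogeneous Dirichlet ends, using weighted-Jacobi smoothing, full-weighting restriction, an exact coarse solve, and linear-interpolation prolongation; all parameters are illustrative.

```python
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with u = 0 at both ends."""
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2.0 * u[1:-1])
    return u

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                          # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)             # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    n = len(rc) - 2                              # exact solve on the 2h grid
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h) ** 2
    e = np.zeros_like(u)
    e[2:-2:2] = np.linalg.solve(A, rc[1:-1])     # inject coarse correction
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])       # linear interpolation
    return smooth(u + e, f, h)                   # post-smooth

n = 64                                           # 65 fine-grid points, h = 1/64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution sin(pi*x)
u = np.zeros(n + 1)
for _ in range(20):
    u = two_grid_cycle(u, f, h)
```

A handful of cycles drives the algebraic residual to round-off; the remaining error against sin(pi*x) is the O(h^2) discretization error.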

  7. Symbolic programming language in molecular multicenter integral problem

    NASA Astrophysics Data System (ADS)

    Safouhi, Hassan; Bouferguene, Ahmed

    It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which the computation of three-center nuclear attraction and Coulomb integrals is the most frequently encountered. As the molecular system becomes larger, computation of these integrals becomes one of the most laborious and time-consuming steps in molecular systems calculation. Improvement of the computational methods of molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations for improving convergence of highly oscillatory integrals. These methods form the basis of new methods for solving various problems that were unsolvable otherwise and have many applications as well. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in their turn should satisfy some limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied. Differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.
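The idea of accelerating a slowly convergent oscillatory integral by a sequence transformation can be shown in a heavily simplified form. The sketch below is a stand-in for, not an implementation of, the nonlinear transformations used for B-function integrals: it integrates sin(x)/x between consecutive zeros of the oscillation and accelerates the resulting alternating series of partial sums by repeated averaging (Euler's transformation).

```python
import numpy as np

def simpson(f, a, b, n=64):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2], w[2:-1:2] = 4.0, 2.0
    return (b - a) / (3.0 * n) * (w @ f(x))

f = lambda x: np.sinc(x / np.pi)        # sin(x)/x, finite at x = 0

# Integrate between consecutive zeros of the oscillation ...
terms = np.array([simpson(f, k * np.pi, (k + 1) * np.pi) for k in range(12)])
partial = np.cumsum(terms)              # slowly converging partial sums

# ... then accelerate the alternating sequence by repeated averaging
for _ in range(8):
    partial = 0.5 * (partial[:-1] + partial[1:])
estimate = partial[-1]                  # approaches the exact value pi/2
```

Twelve oscillation half-periods leave an error of a few percent in the raw partial sums; the averaged sequence recovers pi/2 to far higher accuracy from the same data.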

  8. Neophilia Ranking of Scientific Journals.

    PubMed

    Packalen, Mikko; Bhattacharya, Jay

    2017-01-01

The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations); these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work.
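The core of such an index can be illustrated on a toy corpus. The data, the window length, and the scoring rule below are all invented for illustration; the paper's actual pipeline text-mines every biomedical paper since 1946.

```python
# (journal, year, terms appearing in the paper) -- purely illustrative data
papers = [
    ("A", 1995, {"pcr", "cloning"}),
    ("B", 1996, {"pcr"}),
    ("A", 2001, {"rna-interference"}),
    ("B", 2002, {"rna-interference", "pcr"}),
    ("B", 2003, {"cloning"}),
]

# First year each term appears anywhere in the corpus
first_seen = {}
for journal, year, terms in sorted(papers, key=lambda p: p[1]):
    for t in terms:
        first_seen.setdefault(t, year)

def neophilia(journal, window=3):
    """Fraction of a journal's papers using at least one idea first
    seen within `window` years of publication."""
    own = [(y, ts) for j, y, ts in papers if j == journal]
    hits = sum(any(y - first_seen[t] <= window for t in ts) for y, ts in own)
    return hits / len(own)

ranking = sorted({j for j, _, _ in papers}, key=neophilia, reverse=True)
```

Here journal A builds on recently coined terms in every paper, while journal B publishes one paper on an old idea, so A ranks above B regardless of how often either is cited.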

  9. Toward Solving the Problem of Problem Solving: An Analysis Framework

    ERIC Educational Resources Information Center

    Roesler, Rebecca A.

    2016-01-01

    Teaching is replete with problem solving. Problem solving as a skill, however, is seldom addressed directly within music teacher education curricula, and research in music education has not examined problem solving systematically. A framework detailing problem-solving component skills would provide a needed foundation. I observed problem solving…

  10. An application of Social Values for Ecosystem Services (SolVES) to three national forests in Colorado and Wyoming

    USGS Publications Warehouse

    Sherrouse, Benson C.; Semmens, Darius J.; Clement, Jessica M.

    2014-01-01

    Despite widespread recognition that social-value information is needed to inform stakeholders and decision makers regarding trade-offs in environmental management, it too often remains absent from ecosystem service assessments. Although quantitative indicators of social values need to be explicitly accounted for in the decision-making process, they need not be monetary. Ongoing efforts to map such values demonstrate how they can also be made spatially explicit and relatable to underlying ecological information. We originally developed Social Values for Ecosystem Services (SolVES) as a tool to assess, map, and quantify nonmarket values perceived by various groups of ecosystem stakeholders. With SolVES 2.0 we have extended the functionality by integrating SolVES with Maxent maximum entropy modeling software to generate more complete social-value maps from available value and preference survey data and to produce more robust models describing the relationship between social values and ecosystems. The current study has two objectives: (1) evaluate how effectively the value index, a quantitative, nonmonetary social-value indicator calculated by SolVES, reproduces results from more common statistical methods of social-survey data analysis and (2) examine how the spatial results produced by SolVES provide additional information that could be used by managers and stakeholders to better understand more complex relationships among stakeholder values, attitudes, and preferences. To achieve these objectives, we applied SolVES to value and preference survey data collected for three national forests, the Pike and San Isabel in Colorado and the Bridger–Teton and the Shoshone in Wyoming. Value index results were generally consistent with results found through more common statistical analyses of the survey data such as frequency, discriminant function, and correlation analyses. 
In addition, spatial analysis of the social-value maps produced by SolVES provided information that was useful for explaining relationships between stakeholder values and forest uses. Our results suggest that SolVES can effectively reproduce information derived from traditional statistical analyses while adding spatially explicit, social-value information that can contribute to integrated resource assessment, planning, and management of forests and other ecosystems.

  11. Goals and everyday problem solving: examining the link between age-related goals and problem-solving strategy use.

    PubMed

    Hoppmann, Christiane A; Coats, Abby Heckman; Blanchard-Fields, Fredda

    2008-07-01

    Qualitative interviews on family and financial problems from 332 adolescents, young, middle-aged, and older adults, demonstrated that developmentally relevant goals predicted problem-solving strategy use over and above problem domain. Four focal goals concerned autonomy, generativity, maintaining good relationships with others, and changing another person. We examined both self- and other-focused problem-solving strategies. Autonomy goals were associated with self-focused instrumental problem solving and generative goals were related to other-focused instrumental problem solving in family and financial problems. Goals of changing another person were related to other-focused instrumental problem solving in the family domain only. The match between goals and strategies, an indicator of problem-solving adaptiveness, showed that young individuals displayed the greatest match between autonomy goals and self-focused problem solving, whereas older adults showed a greater match between generative goals and other-focused problem solving. Findings speak to the importance of considering goals in investigations of age-related differences in everyday problem solving.

  12. Incompressible spectral-element method: Derivation of equations

    NASA Technical Reports Server (NTRS)

    Deanna, Russell G.

    1993-01-01

A fractional-step splitting scheme breaks the full Navier-Stokes equations into explicit and implicit portions amenable to the calculus of variations. Beginning with the functional forms of the Poisson and Helmholtz equations, we substitute finite expansion series for the dependent variables and derive the matrix equations for the unknown expansion coefficients. This method employs a new splitting scheme which differs from conventional three-step (nonlinear, pressure, viscous) schemes. The nonlinear step appears in the conventional, explicit manner; the difference occurs in the pressure step. Instead of solving for the pressure gradient using the nonlinear velocity, we add the viscous portion of the Navier-Stokes equation from the previous time step to the velocity before solving for the pressure gradient. By combining this 'predicted' pressure gradient with the nonlinear velocity in an explicit term, and the Crank-Nicolson method for the viscous terms, we develop a Helmholtz equation for the final velocity.
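The Crank-Nicolson treatment of the viscous terms is what produces a Helmholtz solve at each time step. A minimal 1-D periodic sketch of that viscous step (illustrative parameters; dense matrices for clarity rather than the paper's spectral-element matrices):

```python
import numpy as np

nu, dt, n = 0.1, 0.01, 64
h = 1.0 / n
x = np.arange(n) * h
u = np.sin(2.0 * np.pi * x)                     # periodic initial profile

# Periodic second-difference (Laplacian) matrix
L = (np.roll(np.eye(n), 1, axis=0) - 2.0 * np.eye(n)
     + np.roll(np.eye(n), -1, axis=0)) / h**2

A = np.eye(n) - 0.5 * nu * dt * L               # Helmholtz operator
B = np.eye(n) + 0.5 * nu * dt * L
for _ in range(100):                            # advance to t = 1
    u = np.linalg.solve(A, B @ u)               # Crank-Nicolson viscous step
```

The sine mode decays at close to the exact viscous rate exp(-nu*(2*pi)^2*t), and the scheme remains stable for any dt because the implicit half of the operator is unconditionally stable.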

  13. Generalized elimination of the global translation from explicitly correlated Gaussian functions

    NASA Astrophysics Data System (ADS)

    Muolo, Andrea; Mátyus, Edit; Reiher, Markus

    2018-02-01

This paper presents the multi-channel generalization of the center-of-mass kinetic energy elimination approach [B. Simmen et al., Mol. Phys. 111, 2086 (2013)] when the Schrödinger equation is solved variationally with explicitly correlated Gaussian functions. The approach has immediate relevance in many-particle systems which are handled without the Born-Oppenheimer approximation and can be employed also for Dirac-type Hamiltonians. The practical realization and numerical properties of solving the Schrödinger equation in laboratory-frame Cartesian coordinates are demonstrated for the ground rovibronic state of the H2+ = {p+, p+, e-} ion and the H2 = {p+, p+, e-, e-} molecule.

  14. Generalized elimination of the global translation from explicitly correlated Gaussian functions.

    PubMed

    Muolo, Andrea; Mátyus, Edit; Reiher, Markus

    2018-02-28

This paper presents the multi-channel generalization of the center-of-mass kinetic energy elimination approach [B. Simmen et al., Mol. Phys. 111, 2086 (2013)] when the Schrödinger equation is solved variationally with explicitly correlated Gaussian functions. The approach has immediate relevance in many-particle systems which are handled without the Born-Oppenheimer approximation and can be employed also for Dirac-type Hamiltonians. The practical realization and numerical properties of solving the Schrödinger equation in laboratory-frame Cartesian coordinates are demonstrated for the ground rovibronic state of the H2+ = {p+, p+, e-} ion and the H2 = {p+, p+, e-, e-} molecule.

  15. Resources in Technology: Problem-Solving.

    ERIC Educational Resources Information Center

    Technology Teacher, 1986

    1986-01-01

    This instructional module examines a key function of science and technology: problem solving. It studies the meaning of problem solving, looks at techniques for problem solving, examines case studies that exemplify the problem-solving approach, presents problems for the reader to solve, and provides a student self-quiz. (Author/CT)

  16. HFEM3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Chester J

Software solves the three-dimensional Poisson equation div(k grad(u)) = f by the finite element method for the case when the material properties, k, are distributed over a hierarchy of edges, facets, and tetrahedra in the finite element mesh. The method is described in Weiss, C. J., Finite element analysis for model parameters distributed on a hierarchy of geometric simplices, Geophysics, v. 82, E155-E167, doi:10.1190/GEO2017-0058.1 (2017). A standard finite element method for solving Poisson's equation is augmented by including in the 3D stiffness matrix additional 2D and 1D stiffness matrices representing the contributions from material properties associated with mesh faces and edges, respectively. The resulting linear system is solved iteratively using the conjugate gradient method with Jacobi preconditioning. To minimize computer storage during program execution, the linear solver computes matrix-vector contractions element-by-element over the mesh, without explicit storage of the global stiffness matrix. Program output is VTK-compliant for visualization and rendering by third-party software. The program uses dynamic memory allocation, so there are no hard limits on problem size beyond those imposed by the operating system and configuration on which the software is run. The dimension, N, of the finite element solution vector is constrained by the addressable space of 32- versus 64-bit operating systems; the total working space required by the program is approximately 13*N double-precision words.
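The matrix-free idea behind that solver, accumulating the matrix-vector product element-by-element so the global stiffness matrix is never assembled, can be sketched on a toy 1-D mesh. This is not the HFEM3D code; the element matrix below (stiffness plus a small mass term, which keeps the assembled system positive definite) and the mesh are invented for illustration.

```python
import numpy as np

# Toy 1-D "mesh": 2-node elements sharing nodes with their neighbours
n_nodes = 50
elements = [(i, i + 1) for i in range(n_nodes - 1)]
k_local = (np.array([[1.0, -1.0], [-1.0, 1.0]])          # element stiffness
           + np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0)   # + element mass

def matvec(x):
    """y = A @ x accumulated element-by-element; A is never assembled."""
    y = np.zeros_like(x)
    for a, b in elements:
        y[[a, b]] += k_local @ x[[a, b]]
    return y

diag = np.zeros(n_nodes)            # Jacobi preconditioner: diag(A)
for a, b in elements:
    diag[[a, b]] += np.diag(k_local)

def pcg(b, tol=1e-10, maxit=200):
    """Jacobi-preconditioned conjugate gradients using only matvec()."""
    x = np.zeros_like(b)
    r = b.copy()
    z = r / diag
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

rhs = np.ones(n_nodes)
sol = pcg(rhs)
```

Only the per-element matrix and the diagonal are stored, so the memory footprint scales with the number of unknowns rather than with the number of matrix entries.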

  17. Time-periodic solutions of the Benjamin-Ono equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Ambrose, D.M.; Wilkening, Jon

    2008-04-01

We present a spectrally accurate numerical method for finding non-trivial time-periodic solutions of non-linear partial differential equations. The method is based on minimizing a functional (of the initial condition and the period) that is positive unless the solution is periodic, in which case it is zero. We solve an adjoint PDE to compute the gradient of this functional with respect to the initial condition. We include additional terms in the functional to specify the free parameters, which, in the case of the Benjamin-Ono equation, are the mean, a spatial phase, a temporal phase and the real part of one of the Fourier modes at t = 0. We use our method to study global paths of non-trivial time-periodic solutions connecting stationary and traveling waves of the Benjamin-Ono equation. As a starting guess for each path, we compute periodic solutions of the linearized problem by solving an infinite dimensional eigenvalue problem in closed form. We then use our numerical method to continue these solutions beyond the realm of linear theory until another traveling wave is reached (or until the solution blows up). By experimentation with data fitting, we identify the analytical form of the solutions on the path connecting the one-hump stationary solution to the two-hump traveling wave. We then derive exact formulas for these solutions by explicitly solving the system of ODEs governing the evolution of solitons using the ansatz suggested by the numerical simulations.
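The "penalty functional that vanishes exactly on periodic solutions" can be demonstrated on an ODE whose period is known. This is a heavily simplified analogue of the paper's method: a harmonic oscillator in place of the Benjamin-Ono equation, and a brute-force scan over the period in place of adjoint-based gradient minimization; all parameters are illustrative.

```python
import numpy as np

def rk4_final(y0, T, steps=600):
    """Integrate the harmonic oscillator x'' = -x, written as y' = f(y)
    with y = [x, x'], from t = 0 to t = T with classical RK4; return y(T)."""
    f = lambda y: np.array([y[1], -y[0]])
    y, dt = np.array(y0, dtype=float), T / steps
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

y0 = np.array([1.0, 0.0])
G = lambda T: np.sum((rk4_final(y0, T) - y0) ** 2)   # zero iff T-periodic

Ts = np.arange(5.0, 7.5, 0.01)                       # scan candidate periods
best_T = Ts[np.argmin([G(T) for T in Ts])]
```

The functional G(T) = |y(T) - y(0)|^2 dips to (numerically) zero at the true period 2*pi, which is how minimizing such a functional singles out time-periodic solutions.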

  18. Consideration of learning orientations as an application of achievement goals in evaluating life science majors in introductory physics

    NASA Astrophysics Data System (ADS)

    Mason, Andrew J.; Bertram, Charles A.

    2018-06-01

    When considering performing an Introductory Physics for Life Sciences course transformation for one's own institution, life science majors' achievement goals are a necessary consideration to ensure the pedagogical transformation will be effective. However, achievement goals are rarely an explicit consideration in physics education research topics such as metacognition. We investigate a sample population of 218 students in a first-semester introductory algebra-based physics course, drawn from 14 laboratory sections within six semesters of course sections, to determine the influence of achievement goals on life science majors' attitudes towards physics. Learning orientations that, respectively, pertain to mastery goals and performance goals, in addition to a learning orientation that does not report a performance goal, were recorded from students in the specific context of learning a problem-solving framework during an in-class exercise. Students' learning orientations, defined within the context of students' self-reported statements in the specific context of a problem-solving-related research-based course implementation, are compared to pre-post results on physics problem-solving items in a well-established attitudinal survey instrument, in order to establish the categories' validity. In addition, mastery-related and performance-related orientations appear to extend to overall pre-post attitudinal shifts, but not to force and motion concepts or to overall course grade, within the scope of an introductory physics course. There also appears to be differentiation regarding overall course performance within health science majors, but not within biology majors, in terms of learning orientations; however, health science majors generally appear to fare less well on all measurements in the study than do biology majors, regardless of learning orientations.

  19. Te Ira Tangata: a Zelen randomised controlled trial of a treatment package including problem solving therapy compared to treatment as usual in Maori who present to hospital after self harm.

    PubMed

    Hatcher, Simon; Coupe, Nicole; Durie, Mason; Elder, Hinemoa; Tapsell, Rees; Wikiriwhi, Karen; Parag, Varsha

    2011-05-11

Maori, the indigenous people of New Zealand, who present to hospital after intentionally harming themselves, do so at a higher rate than non-Maori. There have been no previous treatment trials in Maori who self harm, and previous reviews of interventions in other populations have been inconclusive as existing trials have been underpowered and done on unrepresentative populations. These reviews have however indicated that problem solving therapy and sending regular postcards after the self harm attempt may be an effective treatment. There is also a small literature on sense of belonging in self harm and the importance of culture. This protocol describes a pragmatic trial of a package of measures which includes problem solving therapy, postcards, patient support, cultural assessment, improved access to primary care and a risk management strategy in Maori who present to hospital after self harm, using a novel design. We propose to use a double consent Zelen design where participants are randomised prior to giving consent to enrol a representative cohort of patients. The main outcome will be the number of Maori scoring below nine on the Beck Hopelessness Scale. Secondary outcomes will be hospital repetition at one year; self reported self harm; anxiety; depression; quality of life; social function; and hospital use at three months and one year. A strength of the study is that it is a pragmatic trial which aims to recruit Maori using a Maori clinical team and protocol. It does not exclude people if English is not their first language. A potential limitation is the analysis of the results, which is complex and may underestimate any effect if a large number of people refuse their consent in the group randomised to problem solving therapy, as they will effectively cross over to the treatment as usual group. This study is the first randomised controlled trial to explicitly use cultural assessment and management. 
Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12609000952246.

  20. Te Ira Tangata: A Zelen randomised controlled trial of a treatment package including problem solving therapy compared to treatment as usual in Maori who present to hospital after self harm

    PubMed Central

    2011-01-01

Background Maori, the indigenous people of New Zealand, who present to hospital after intentionally harming themselves, do so at a higher rate than non-Maori. There have been no previous treatment trials in Maori who self harm, and previous reviews of interventions in other populations have been inconclusive as existing trials have been underpowered and done on unrepresentative populations. These reviews have however indicated that problem solving therapy and sending regular postcards after the self harm attempt may be an effective treatment. There is also a small literature on sense of belonging in self harm and the importance of culture. This protocol describes a pragmatic trial of a package of measures which includes problem solving therapy, postcards, patient support, cultural assessment, improved access to primary care and a risk management strategy in Maori who present to hospital after self harm, using a novel design. Methods We propose to use a double consent Zelen design where participants are randomised prior to giving consent to enrol a representative cohort of patients. The main outcome will be the number of Maori scoring below nine on the Beck Hopelessness Scale. Secondary outcomes will be hospital repetition at one year; self reported self harm; anxiety; depression; quality of life; social function; and hospital use at three months and one year. Discussion A strength of the study is that it is a pragmatic trial which aims to recruit Maori using a Maori clinical team and protocol. It does not exclude people if English is not their first language. A potential limitation is the analysis of the results, which is complex and may underestimate any effect if a large number of people refuse their consent in the group randomised to problem solving therapy, as they will effectively cross over to the treatment as usual group. This study is the first randomised controlled trial to explicitly use cultural assessment and management. 
Trial registration Australia and New Zealand Clinical Trials Register (ANZCTR): ACTRN12609000952246 PMID:21569300

  1. A fast immersed boundary method for external incompressible viscous flows using lattice Green's functions

    NASA Astrophysics Data System (ADS)

    Liska, Sebastian; Colonius, Tim

    2017-02-01

    A new parallel, computationally efficient immersed boundary method for solving three-dimensional, viscous, incompressible flows on unbounded domains is presented. Immersed surfaces with prescribed motions are generated using the interpolation and regularization operators obtained from the discrete delta function approach of the original (Peskin's) immersed boundary method. Unlike Peskin's method, boundary forces are regarded as Lagrange multipliers that are used to satisfy the no-slip condition. The incompressible Navier-Stokes equations are discretized on an unbounded staggered Cartesian grid and are solved in a finite number of operations using lattice Green's function techniques. These techniques are used to automatically enforce the natural free-space boundary conditions and to implement a novel block-wise adaptive grid that significantly reduces the run-time cost of solutions by limiting operations to grid cells in the immediate vicinity and near-wake region of the immersed surface. These techniques also enable the construction of practical discrete viscous integrating factors that are used in combination with specialized half-explicit Runge-Kutta schemes to accurately and efficiently solve the differential algebraic equations describing the discrete momentum equation, incompressibility constraint, and no-slip constraint. Linear systems of equations resulting from the time integration scheme are efficiently solved using an approximation-free nested projection technique. The algebraic properties of the discrete operators are used to reduce projection steps to simple discrete elliptic problems, e.g. discrete Poisson problems, that are compatible with recent parallel fast multipole methods for difference equations. Numerical experiments on low-aspect-ratio flat plates and spheres at Reynolds numbers up to 3700 are used to verify the accuracy and physical fidelity of the formulation.

  2. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
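    The matrix-free idea in this record can be sketched on a hypothetical one-dimensional "image row": scribbled pixels act as fixed data and the remaining pixels repeatedly relax toward the average of their neighbours, which is one Jacobi sweep of the same kind of sparse smoothness system, applied without ever assembling the matrix. Grid size and scribble values below are made up for illustration.

```python
# Matrix-free Jacobi relaxation for scribble-based propagation on a 1-D row.
# Known (scribbled) pixels stay fixed; unknown pixels move toward the average
# of their neighbours. No sparse matrix is ever constructed.
def propagate(colors, known, sweeps=500):
    c = list(colors)
    n = len(c)
    for _ in range(sweeps):
        nxt = list(c)
        for i in range(n):
            if known[i]:
                continue  # scribbled pixels are Dirichlet data
            left = c[i - 1] if i > 0 else c[i + 1]
            right = c[i + 1] if i < n - 1 else c[i - 1]
            nxt[i] = 0.5 * (left + right)
        c = nxt
    return c

row = [1.0, 0.0, 0.0, 0.0, 0.0]                # one chrominance channel
scribbled = [True, False, False, False, True]  # endpoints are user scribbles
print(propagate(row, scribbled))               # tends to [1.0, 0.75, 0.5, 0.25, 0.0]
```

    The same sweep extends directly to 2-D neighbourhoods with smoothness weights; the point is only that each iteration touches the image arrays, not an explicit sparse matrix.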

  3. A Cognitive Analysis of Students’ Mathematical Problem Solving Ability on Geometry

    NASA Astrophysics Data System (ADS)

    Rusyda, N. A.; Kusnandi, K.; Suhendra, S.

    2017-09-01

    The purpose of this research is to analyze the mathematical problem solving ability of students in one secondary school on geometry. The research used a quantitative approach with a descriptive method. The population was all students of that school, and the sample comprised twenty-five students chosen by purposive sampling. Data on mathematical problem solving were collected through an essay test. The results showed the following percentages of achievement on the mathematical problem solving indicators: 1) solving closed mathematical problems with contexts in mathematics, 50%; 2) solving closed mathematical problems with contexts beyond mathematics, 24%; 3) solving open mathematical problems with contexts in mathematics, 35%; and 4) solving open mathematical problems with contexts outside mathematics, 44%. Based on these percentages, it can be concluded that the students' mathematical problem solving ability in geometry is still low. This is because students are not used to solving problems that measure mathematical problem solving ability, are weak at recalling previous knowledge, and lack a problem solving framework. The students' mathematical problem solving ability therefore needs to be improved by implementing appropriate learning strategies.

  4. Metacognitive gimmicks and their use by upper level physics students

    NASA Astrophysics Data System (ADS)

    White, Gary; Sikorski, Tiffany-Rose; Landay, Justin

    2017-01-01

    We report on the initial phases of a study of three particular metacognitive gimmicks that upper-level physics students can use as tools in their problem-solving kit, namely: checking units for consistency, discerning whether limiting cases match physical intuition, and computing numerical values to check for reasonableness. Students in a one-semester Griffiths electromagnetism course at a small private urban university campus are asked to respond to explicit prompts that encourage adopting these three methods for checking answers to physics problems, especially those problems for which an algebraic expression is part of the final answer. We explore how, and to what extent, these students adopt these gimmicks, as well as the time development of their use. While the term ``gimmick'' carries with it some pejorative baggage, we feel it describes the essential nature of the pedagogical idea adequately in that it gets attention, is easy for the students to remember, and represents, albeit perhaps in a surface way, some key ideas about which professional physicists care.

  5. Study of flow over object problems by a nodal discontinuous Galerkin-lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Shen, Meng; Liu, Chen

    2018-04-01

    The flow over object problems are studied by a nodal discontinuous Galerkin-lattice Boltzmann method (NDG-LBM) in this work. Different from the standard lattice Boltzmann method, the current method applies the nodal discontinuous Galerkin method to the streaming process in LBM to solve the resultant pure convection equation, in which the spatial discretization is completed on unstructured grids and a low-storage explicit Runge-Kutta scheme is used for time marching. The present method thus overcomes the standard LBM's dependence on uniform meshes. Moreover, the collision process in the LBM is completed by using the multiple-relaxation-time scheme. After validation of the NDG-LBM by simulating the lid-driven cavity flow, simulations of flows over a fixed circular cylinder, a stationary airfoil and rotating-stationary cylinders are performed. Good agreement of the present results with previous results is achieved, which indicates that the current NDG-LBM is accurate and effective for flow over object problems.
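    The low-storage time marching mentioned here keeps only two registers per unknown. A minimal sketch, using Williamson's classical three-stage, third-order 2N-storage coefficients (an assumption; the paper's scheme may differ):

```python
import math

# Williamson-style 2N-storage explicit Runge-Kutta: only y and the running
# increment dy are stored, regardless of the number of stages.
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]

def lsrk3_step(f, y, t, dt):
    dy = 0.0
    for a, b in zip(A, B):
        dy = a * dy + dt * f(t, y)  # overwrite the single increment register
        y = y + b * dy              # overwrite the single solution register
    return y

# integrate dy/dt = -y from y(0) = 1 to t = 1 in 100 steps
y, dt = 1.0, 0.01
for _ in range(100):
    y = lsrk3_step(lambda t, y: -y, y, 0.0, dt)
print(abs(y - math.exp(-1.0)))  # global error is O(dt**3)
```

    For a DG discretization, `y` would be the vector of nodal coefficients and `f` the spatial residual; the storage pattern is identical.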

  6. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to that of Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by exploiting its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
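    The dual Newton iteration mentioned above can be illustrated on a toy problem: recover the Lagrange multipliers of a discrete maximum-entropy density so that its first two moments match prescribed values. The velocity grid, weights, and target moments below are illustrative, not the paper's 35-moment setup.

```python
import math

# Newton's method on the dual of the entropy maximization problem:
# f_i = exp(l0 + l1*v_i); gradient = moment residuals, Hessian = moment matrix.
v = [-1.0 + 2.0 * i / 19 for i in range(20)]  # quadrature nodes (weights = 1)
m0, m1 = 1.0, 0.2                             # target density and momentum

l0, l1 = 0.0, 0.0
for _ in range(50):
    f = [math.exp(l0 + l1 * vi) for vi in v]
    s0 = sum(f)
    s1 = sum(fi * vi for fi, vi in zip(f, v))
    s2 = sum(fi * vi * vi for fi, vi in zip(f, v))
    r0, r1 = s0 - m0, s1 - m1                 # moment residuals (gradient)
    det = s0 * s2 - s1 * s1                   # Hessian determinant (SPD)
    d0 = (s2 * r0 - s1 * r1) / det            # solve 2x2 Newton system
    d1 = (s0 * r1 - s1 * r0) / det
    l0, l1 = l0 - d0, l1 - d1

f = [math.exp(l0 + l1 * vi) for vi in v]
print(sum(f), sum(fi * vi for fi, vi in zip(f, v)))  # ~ (1.0, 0.2)
```

    Because the dual is strictly convex, the Hessian is the (positive definite) moment matrix and Newton converges quadratically near the solution; in the 35-moment setting the same structure holds with a much larger moment vector.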

  7. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    NASA Astrophysics Data System (ADS)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.
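    The temporal coupling that TSM introduces comes from a spectral derivative in time: for an odd number N of equispaced samples over one period, the exact derivative of the trigonometric interpolant is a dense matrix-vector product. The sketch below uses the standard Fourier differentiation matrix for odd N (a textbook construction, not taken from the paper).

```python
import math

# Spectral time-derivative for N (odd) equispaced samples on [0, 2*pi):
# (D v)_j = sum_k 0.5 * (-1)**(j-k) / sin((j-k)*h/2) * v_k,  h = 2*pi/N.
def spectral_derivative(values):
    n = len(values)
    assert n % 2 == 1, "odd sample count assumed"
    h = 2.0 * math.pi / n
    out = []
    for j in range(n):
        s = 0.0
        for k in range(n):
            if k != j:
                s += 0.5 * (-1) ** (j - k) / math.sin((j - k) * h / 2.0) * values[k]
        out.append(s)
    return out

n = 9
ts = [2.0 * math.pi * j / n for j in range(n)]
d = spectral_derivative([math.sin(t) for t in ts])
err = max(abs(di - math.cos(t)) for di, t in zip(d, ts))
print(err)  # band-limited input, so the derivative is exact to rounding error
```

    Every time level couples to every other through this dense matrix, which is exactly why diagonal dominance of the implicit operator becomes an issue for the solvers discussed above.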

  8. Extending fields in a level set method by solving a biharmonic equation

    NASA Astrophysics Data System (ADS)

    Moroney, Timothy J.; Lusmore, Dylan R.; McCue, Scott W.; McElwain, D. L. Sean

    2017-08-01

    We present an approach for computing extensions of velocities or other fields in level set methods by solving a biharmonic equation. The approach differs from other commonly used approaches to velocity extension because it deals with the interface fully implicitly through the level set function. No explicit properties of the interface, such as its location or the velocity on the interface, are required in computing the extension. These features lead to a particularly simple implementation using either a sparse direct solver or a matrix-free conjugate gradient solver. Furthermore, we propose a fast Poisson preconditioner that can be used to accelerate the convergence of the latter. We demonstrate the biharmonic extension on a number of test problems that serve to illustrate its effectiveness at producing smooth and accurate extensions near interfaces. A further feature of the method is the natural way in which it deals with symmetry and periodicity, ensuring through its construction that the extension field also respects these symmetries.
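    The solver choice described here (a matrix-free Krylov iteration on a biharmonic system) can be illustrated on a 1-D toy problem: apply the discrete biharmonic operator as two nested second differences with zero values outside the grid, and solve with unpreconditioned conjugate gradients. The grid size and right-hand side are arbitrary illustrations.

```python
# Matrix-free CG solve of a 1-D discrete biharmonic system A u = b,
# where A = D2(D2(.)) and D2 is the standard second difference with
# homogeneous Dirichlet values outside the grid (so A is SPD).
def second_difference(v):
    n = len(v)
    return [(v[i - 1] if i > 0 else 0.0) - 2.0 * v[i] + (v[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

def apply_biharmonic(v):
    return second_difference(second_difference(v))

def conjugate_gradient(apply_op, b, max_iter=200, tol=1e-12):
    x = [0.0] * len(b)
    r = list(b)          # residual b - A x with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = apply_op(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

b = [1.0] * 8
x = conjugate_gradient(apply_biharmonic, b)
residual = max(abs(ai - bi) for ai, bi in zip(apply_biharmonic(x), b))
print(residual)  # small
```

    In the paper's setting the operator is the 2-D/3-D biharmonic restricted to the exterior of the interface, and the fast Poisson preconditioner would replace the plain CG loop; the matrix-free structure is the same.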

  9. Path integral solution for a Klein-Gordon particle in vector and scalar deformed radial Rosen-Morse-type potentials

    NASA Astrophysics Data System (ADS)

    Khodja, A.; Kadja, A.; Benamira, F.; Guechi, L.

    2017-12-01

    The problem of a Klein-Gordon particle moving in equal vector and scalar Rosen-Morse-type potentials is solved in the framework of Feynman's path integral approach. Explicit path integration leads to a closed form for the radial Green's function associated with different shapes of the potentials. For q ≤ -1 and (1/(2α)) ln|q| < r < ∞, it is shown that the quantization conditions for the bound state energy levels E_{n_r} are transcendental equations which can be solved numerically. Three special cases, namely the standard radial Manning-Rosen potential (|q| = 1), the standard radial Rosen-Morse potential (V_2 → -V_2, q = 1) and the radial Eckart potential (V_1 → -V_1, q = 1), are also briefly discussed.

  10. An efficient model for coupling structural vibrations with acoustic radiation

    NASA Technical Reports Server (NTRS)

    Frendi, Abdelkader; Maestrello, Lucio; Ting, LU

    1993-01-01

    The scattering of an incident wave by a flexible panel is studied. The panel vibration is governed by the nonlinear plate equations while the loading on the panel, which is the pressure difference across the panel, depends on the reflected and transmitted waves. Two models are used to calculate this structural-acoustic interaction problem. One solves the three dimensional nonlinear Euler equations for the flow-field coupled with the plate equations (the fully coupled model). The second uses the linear wave equation for the acoustic field and expresses the load as a double integral involving the panel oscillation (the decoupled model). The panel oscillation governed by a system of integro-differential equations is solved numerically and the acoustic field is then defined by an explicit formula. Numerical results are obtained using the two models for linear and nonlinear panel vibrations. The predictions given by these two models are in good agreement but the computational time needed for the 'fully coupled model' is 60 times longer than that for 'the decoupled model'.

  11. Implicit high-order discontinuous Galerkin method with HWENO type limiters for steady viscous flow simulations

    NASA Astrophysics Data System (ADS)

    Jiang, Zhen-Hua; Yan, Chao; Yu, Jian

    2013-08-01

    Two types of implicit algorithms have been improved for high order discontinuous Galerkin (DG) method to solve compressible Navier-Stokes (NS) equations on triangular grids. A block lower-upper symmetric Gauss-Seidel (BLU-SGS) approach is implemented as a nonlinear iterative scheme. And a modified LU-SGS (LLU-SGS) approach is suggested to reduce the memory requirements while retain the good convergence performance of the original LU-SGS approach. Both implicit schemes have the significant advantage that only the diagonal block matrix is stored. The resulting implicit high-order DG methods are applied, in combination with Hermite weighted essentially non-oscillatory (HWENO) limiters, to solve viscous flow problems. Numerical results demonstrate that the present implicit methods are able to achieve significant efficiency improvements over explicit counterparts and for viscous flows with shocks, and the HWENO limiters can be used to achieve the desired essentially non-oscillatory shock transition and the designed high-order accuracy simultaneously.

  12. [Tacit and explicit knowledge: comparative analysis of the prioritization of maternal health problems in Mexico].

    PubMed

    Moreno Zegbe, Estephania; Becerril Montekio, Víctor; Alcalde Rabanal, Jacqueline

    To identify coincidences and differences in the identification and prioritization of maternal healthcare service problems in Mexico based on the perspective of tacit knowledge and explicit knowledge that may offer evidence that can contribute to attaining the Sustainable Development Goals. Mixed study performed in three stages: 1) systematization of maternal healthcare service problems identified by tacit knowledge (derived from professional experience); 2) identification of maternal healthcare service problems in Latin America addressed by explicit knowledge (scientific publications); 3) comparison between the problems identified by tacit and explicit knowledge. The main problems of maternal health services identified by tacit knowledge are related to poor quality of care, while the predominant problems studied in the scientific literature are related to access barriers to health services. Approximately 70% of the problems identified by tacit knowledge are also mentioned in the explicit knowledge. Conversely, 70% of the problems identified in the literature are also considered by tacit knowledge. Nevertheless, when looking at the problems taken one by one, no statistically significant similarities were found. The study found that the identification of maternal health service problems by tacit knowledge and explicit knowledge is fairly comparable, according to the comparability index used in the study, and highlights the interest of integrating both approaches in order to improve prioritization and decision making towards the Sustainable Development Goals. Copyright © 2017 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.

  13. First-arrival traveltime computation for quasi-P waves in 2D transversely isotropic media using Fermat’s principle-based fast marching

    NASA Astrophysics Data System (ADS)

    Hu, Jiangtao; Cao, Junxing; Wang, Huazhong; Wang, Xingjian; Jiang, Xudong

    2017-12-01

    First-arrival traveltime computation for quasi-P waves in transversely isotropic (TI) media is the key component of tomography and depth migration. It is appealing to use the fast marching method in isotropic media as it efficiently computes traveltime along an expanding wavefront. It uses the finite difference method to solve the eikonal equation. However, applying the fast marching method in anisotropic media faces challenges because the anisotropy introduces additional nonlinearity in the eikonal equation and solving this nonlinear eikonal equation with the finite difference method is challenging. To address this problem, we present a Fermat’s principle-based fast marching method to compute traveltime in two-dimensional TI media. This method is applicable in both vertical and tilted TI (VTI and TTI) media. It computes traveltime along an expanding wavefront using Fermat’s principle instead of the eikonal equation. Thus, it does not suffer from the nonlinearity of the eikonal equation in TI media. To compute traveltime using Fermat’s principle, the explicit expression of group velocity in TI media is required to describe the ray propagation. The moveout approximation is adopted to obtain the explicit expression of group velocity. Numerical examples on both VTI and TTI models show that the traveltime contour obtained by the proposed method matches well with the wavefront from the wave equation. This shows that the proposed method could be used in depth migration and tomography.

  14. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    NASA Astrophysics Data System (ADS)

    de Renstrom, Pawel Brückman; Haywood, Stephen

    2006-04-01

    A least squares method to solve a generic alignment problem of a high granularity tracking system is presented. The algorithm is based on an analytical linear expansion and allows for multiple nested fits; e.g. imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe to impose constraints on either implicit or explicit parameters. The method has been applied to the full simulation of a subset of the ATLAS silicon tracking system. The ultimate goal is to determine ≈35,000 degrees of freedom (DoFs). We present a limited-scale exercise exploring various aspects of the solution.

  15. Some Investigations Relating to the Elastostatics of a Tapered Tube

    DTIC Science & Technology

    1978-03-01

    regularity of the solution on the Z axis. Indeed the assumption of such regularity is stated explicitly by Heins (p. 789) and the problems solved (e.g. a…) rest on these assumptions. … becomes … where the integrand is evaluated at (+i, 0). This is a form of the integral representation of the solution. Now let us look at the assumptions on Q. First of all, in order to be sure that our operations are legi…

  16. Synchronization error estimation and controller design for delayed Lur'e systems with parameter mismatches.

    PubMed

    He, Wangli; Qian, Feng; Han, Qing-Long; Cao, Jinde

    2012-10-01

    This paper investigates the problem of master-slave synchronization of two delayed Lur'e systems in the presence of parameter mismatches. First, by analyzing the corresponding synchronization error system, synchronization with an error level, which is referred to as quasi-synchronization, is established. Some delay-dependent quasi-synchronization criteria are derived. An estimation of the synchronization error bound is given, and an explicit expression of error levels is obtained. Second, sufficient conditions on the existence of feedback controllers under a predetermined error level are provided. The controller gains are obtained by solving a set of linear matrix inequalities. Finally, a delayed Chua's circuit is chosen to illustrate the effectiveness of the derived results.

  17. Correcting GOES-R Magnetometer Data for Stray Fields

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Freesland, Douglas C.; Tadikonda, Sivakumara K.; Kronenwetter, Jeffrey; Todirita, Monica; Dahya, Melissa; Chu, Donald

    2016-01-01

    Time-varying spacecraft magnetic fields or stray fields are a problem for magnetometer systems. While constant fields can be removed with zero offset calibration, stray fields are difficult to distinguish from ambient field variations. Putting two magnetometers on a long boom and solving for both the ambient and stray fields can be a good idea, but this gradiometer solution is even more susceptible to noise than a single magnetometer. Unless the stray fields are larger than the magnetometer noise, simply averaging the two measurements is a more accurate approach. If averaging is used, it may be worthwhile to explicitly estimate and remove stray fields. Models and estimation algorithms are provided for solar array, arcjet and reaction wheel fields.
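    The gradiometer idea mentioned above can be sketched as a 2x2 linear inversion: model each boom-mounted magnetometer reading as the ambient field plus a spacecraft stray term falling off like a dipole, 1/d^3, and solve for both unknowns. The falloff model, distances, and field values below are made-up illustrations, not GOES-R parameters.

```python
# Separate ambient and stray fields from two magnetometers at distances
# d_inner < d_outer from the spacecraft, assuming m_i = B_ambient + k / d_i**3.
def separate_fields(m_inner, m_outer, d_inner, d_outer):
    a = 1.0 / d_inner ** 3
    b = 1.0 / d_outer ** 3
    det = b - a                       # determinant of [[1, a], [1, b]]
    b_ambient = (m_inner * b - m_outer * a) / det
    k = (m_outer - m_inner) / det     # stray-field strength
    return b_ambient, k

truth_ambient, truth_k = 50.0, 2.0    # nT and nT*m^3, illustrative only
d1, d2 = 4.0, 8.0                     # boom positions in metres, illustrative
m1 = truth_ambient + truth_k / d1 ** 3
m2 = truth_ambient + truth_k / d2 ** 3
print(separate_fields(m1, m2, d1, d2))  # recovers (50.0, 2.0)
```

    Because `det` is the small difference of two falloff terms, measurement noise is amplified in this inversion, which is exactly the susceptibility the abstract notes; simple averaging `(m1 + m2) / 2` avoids the amplification at the cost of leaving the stray field in.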

  18. Multigrid schemes for viscous hypersonic flows

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Radespiel, R.

    1993-01-01

    Several multigrid schemes are considered for the numerical computation of viscous hypersonic flows. For each scheme, the basic solution algorithm employs upwind spatial discretization with explicit multistage time stepping. Two-level versions of the various multigrid algorithms are applied to the two-dimensional advection equation, and Fourier analysis is used to determine their damping properties. The capabilities of the multigrid methods are assessed by solving two different hypersonic flow problems. Some new multigrid schemes, based on semicoarsening strategies, are shown to be quite effective in relieving the stiffness caused by the high-aspect-ratio cells required to resolve high Reynolds number flows. These schemes exhibit good convergence rates for Reynolds numbers up to 200 × 10^6.
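    The Fourier damping analysis mentioned here can be illustrated with the standard textbook case of damped Jacobi on the 1-D Laplacian (an assumption for illustration; the paper analyzes multistage smoothers for advection): the amplification of Fourier mode theta is 1 - 2*omega*sin(theta/2)^2, and the smoothing factor is its maximum over the high-frequency range [pi/2, pi].

```python
import math

# Smoothing factor of damped Jacobi for the 1-D Laplacian:
# max over high-frequency modes of |1 - 2*omega*sin(theta/2)**2|.
def smoothing_factor(omega, samples=2001):
    worst = 0.0
    for i in range(samples):
        theta = math.pi / 2 + (math.pi / 2) * i / (samples - 1)
        g = abs(1.0 - 2.0 * omega * math.sin(theta / 2.0) ** 2)
        worst = max(worst, g)
    return worst

print(smoothing_factor(2.0 / 3.0))  # classical optimum: smoothing factor 1/3
```

    The same procedure, applied to the symbol of a multistage scheme for advection on stretched (high-aspect-ratio) cells, is what reveals the stiffness that semicoarsening relieves.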

  19. Angular velocity of gravitational radiation from precessing binaries and the corotating frame

    NASA Astrophysics Data System (ADS)

    Boyle, Michael

    2013-05-01

    This paper defines an angular velocity for time-dependent functions on the sphere and applies it to gravitational waveforms from compact binaries. Because it is geometrically meaningful and has a clear physical motivation, the angular velocity is uniquely useful in helping to solve an important—and largely ignored—problem in models of compact binaries: the inverse problem of deducing the physical parameters of a system from the gravitational waves alone. It is also used to define the corotating frame of the waveform. When decomposed in this frame, the waveform has no rotational dynamics and is therefore as slowly evolving as possible. The resulting simplifications lead to straightforward methods for accurately comparing waveforms and constructing hybrids. As formulated in this paper, the methods can be applied robustly to both precessing and nonprecessing waveforms, providing a clear, comprehensive, and consistent framework for waveform analysis. Explicit implementations of all these methods are provided in accompanying computer code.

  20. New Formulae for the High-Order Derivatives of Some Jacobi Polynomials: An Application to Some High-Order Boundary Value Problems

    PubMed Central

    Abd-Elhameed, W. M.

    2014-01-01

    This paper is concerned with deriving some new formulae expressing explicitly the high-order derivatives of Jacobi polynomials whose parameters difference is one or two of any degree and of any order in terms of their corresponding Jacobi polynomials. The derivatives formulae for Chebyshev polynomials of third and fourth kinds of any degree and of any order in terms of their corresponding Chebyshev polynomials are deduced as special cases. Some new reduction formulae for summing some terminating hypergeometric functions of unit argument are also deduced. As an application, and with the aid of the newly introduced derivatives formulae, an algorithm for solving special sixth-order boundary value problems is implemented by applying the Galerkin method. A numerical example is presented to ascertain the validity and applicability of the proposed algorithm. PMID:25386599
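    The paper's theme, expressing derivatives of one orthogonal family through a related family, has a well-known special case that is easy to check numerically: T_n'(x) = n * U_{n-1}(x) for Chebyshev polynomials of the first and second kinds. (This classical identity is only an illustration, not one of the paper's new Jacobi formulae.)

```python
# Chebyshev polynomials via the shared three-term recurrence
# p_{k+1} = 2*x*p_k - p_{k-1}; T starts from (1, x), U from (1, 2*x).
def chebyshev(p1, x, n):
    p_prev, p_cur = 1.0, p1
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        p_prev, p_cur = p_cur, 2.0 * x * p_cur - p_prev
    return p_cur

def T(n, x):
    return chebyshev(x, x, n)

def U(n, x):
    return chebyshev(2.0 * x, x, n)

x, n, h = 0.3, 5, 1e-6
numeric = (T(n, x + h) - T(n, x - h)) / (2.0 * h)  # central difference
print(numeric, n * U(n - 1, x))  # agree to within the finite-difference error
```

    Identities of this kind are what let a Galerkin method express derivatives of basis functions back in the same (or a closely related) basis, keeping the resulting linear systems structured.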

  1. Observer-based H∞ resilient control for a class of switched LPV systems and its application

    NASA Astrophysics Data System (ADS)

    Yang, Dong; Zhao, Jun

    2016-11-01

    This paper deals with the issue of observer-based H∞ resilient control for a class of switched linear parameter-varying (LPV) systems by utilising a multiple parameter-dependent Lyapunov functions method. First, attention is focused upon the design of a resilient observer, an observer-based resilient controller and a parameter and estimate state-dependent switching signal, which can stabilise and achieve the disturbance attenuation for the given systems. Then, a solvability condition of the H∞ resilient control problem is given in terms of matrix inequality for the switched LPV systems. This condition allows the H∞ resilient control problem for each individual subsystem to be unsolvable. The observer, controller, and switching signal are explicitly computed by solving linear matrix inequalities (LMIs). Finally, the effectiveness of the proposed control scheme is illustrated by its application to a turbofan engine, which can hardly be handled by the existing approaches.

  2. "On Second Thoughts…": Changes of Mind as an Indication of Competing Knowledge Structures

    NASA Astrophysics Data System (ADS)

    Wilson, Kate F.; Low, David J.

    2015-09-01

    A review of student answers to diagnostic questions concerned with Newton's Laws showed a tendency for some students to change their answer to a question when the following question caused them to think more about the situation. We investigate this behavior and interpret it in the framework of the resource model; in particular, a weak Newton's Third Law structure being dominated by an inconsistent Newton's Second Law (or "Net Force") structure, in the absence of a strong, consistent Newtonian structure. This observation highlights the hidden problem in instruction where the implicit use of Newton's Third Law is dominated by the explicit conceptual and mathematical application of Newton's Second Law, both within individual courses and across a degree program. To facilitate students' development of a consistent Newtonian knowledge structure, it is important that instructors highlight the interrelated nature of Newton's Laws in problem solving.

  3. Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.

    PubMed

    Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen

    2018-05-01

    In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward one step, the proposed method relaxes the requirement for a model of the system dynamics or an identifier. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. Explicit updating rules for these three neural networks are provided based on the data generated during the online learning along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.

  4. A Two-moment Radiation Hydrodynamics Module in ATHENA Using a Godunov Method

    NASA Astrophysics Data System (ADS)

    Skinner, M. A.; Ostriker, E. C.

    2013-04-01

    We describe a module for the Athena code that solves the grey equations of radiation hydrodynamics (RHD) using a local variable Eddington tensor (VET) based on the M1 closure of the two-moment hierarchy of the transfer equation. The variables are updated via a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. The streaming and diffusion limits are well-described by the M1 closure model, and our implementation shows excellent behavior for problems containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly-varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal.
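    The M1 closure used by the module admits a closed-form local Eddington factor; the standard expression is Levermore's chi(f), where f in [0, 1] is the reduced flux, chi → 1/3 recovers the diffusion limit, and chi → 1 the free-streaming limit. (This is the standard M1 formula; implementation details of the Athena module may differ.)

```python
import math

# Levermore M1 closure: Eddington factor as a function of reduced flux f.
def eddington_factor(f):
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))

print(eddington_factor(0.0), eddington_factor(1.0))  # 1/3 (diffusion), 1 (streaming)
```

    In a two-moment scheme the radiation pressure tensor is then assembled locally from chi(f) and the flux direction, which is what makes simultaneous streaming and diffusion regimes tractable in one update.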

  5. The Ablowitz–Ladik system on a finite set of integers

    NASA Astrophysics Data System (ADS)

    Xia, Baoqiang

    2018-07-01

    We show how to solve initial-boundary value problems for integrable nonlinear differential–difference equations on a finite set of integers. The method we employ is the discrete analogue of the unified transform (Fokas method). The implementation of this method to the Ablowitz–Ladik system yields the solution in terms of the unique solution of a matrix Riemann–Hilbert problem, which has a jump matrix with explicit (n, t)-dependence involving certain functions referred to as spectral functions. Some of these functions are defined in terms of the initial value, while the remaining spectral functions are defined in terms of two sets of boundary values. These spectral functions are not independent but satisfy an algebraic relation called the global relation. We analyze the global relation to characterize the unknown boundary values in terms of the given initial and boundary values. We also discuss the linearizable boundary conditions.

  6. Coordinated Dynamic Behaviors for Multirobot Systems With Collision Avoidance.

    PubMed

    Sabattini, Lorenzo; Secchi, Cristian; Fantuzzi, Cesare

    2017-12-01

    In this paper, we propose a novel methodology for achieving complex dynamic behaviors in multirobot systems. In particular, we consider a multirobot system partitioned into two subgroups: 1) dependent and 2) independent robots. Independent robots are utilized as a control input, and their motion is controlled in such a way that the dependent robots solve a tracking problem, that is, following arbitrarily defined setpoint trajectories, in a coordinated manner. The control strategy proposed in this paper explicitly addresses the collision avoidance problem, utilizing a null-space-based behavioral approach: this leads to combining, in a non-conflicting manner, the tracking control law with a collision avoidance strategy. The combination of these control actions allows the robots to execute their task in a safe way. Avoidance of collisions is formally proven in this paper, and the proposed methodology is validated by means of simulations and experiments on real robots.

  7. Quasi-periodic Solutions of the Kaup-Kupershmidt Hierarchy

    NASA Astrophysics Data System (ADS)

    Geng, Xianguo; Wu, Lihua; He, Guoliang

    2013-08-01

    Based on solving the Lenard recursion equations and the zero-curvature equation, we derive the Kaup-Kupershmidt hierarchy associated with a 3×3 matrix spectral problem. Resorting to the characteristic polynomial of the Lax matrix for the Kaup-Kupershmidt hierarchy, we introduce a trigonal curve K_{m-1} and present the corresponding Baker-Akhiezer function and meromorphic function on it. The Abel map is introduced to straighten out the Kaup-Kupershmidt flows. With the aid of the properties of the Baker-Akhiezer function and the meromorphic function and their asymptotic expansions, we arrive at their explicit Riemann theta function representations. The Riemann-Jacobi inversion problem is achieved by comparing the asymptotic expansion of the Baker-Akhiezer function and its Riemann theta function representation, from which quasi-periodic solutions of the entire Kaup-Kupershmidt hierarchy are obtained in terms of the Riemann theta functions.

  8. Effects of variable electrical conductivity and thermal conductivity on unsteady MHD free convection flow past an exponential accelerated inclined plate

    NASA Astrophysics Data System (ADS)

    Rana, B. M. Jewel; Ahmed, Rubel; Ahmmed, S. F.

    2017-06-01

    An analysis is carried out to investigate the effects of variable viscosity, thermal radiation, absorption of radiation and cross diffusion past an inclined exponentially accelerated plate under the influence of variable heat and mass transfer. A set of suitable transformations has been used to obtain the non-dimensional coupled governing equations. An explicit finite difference technique has been used to obtain numerical solutions of the present problem. Stability and convergence of the finite difference scheme have been carried out for this problem. Compaq Visual Fortran 6.6a has been used to calculate the numerical results. The effects of various physical parameters on the fluid velocity, temperature, concentration, coefficient of skin friction, rate of heat transfer, rate of mass transfer, streamlines and isotherms on the flow field have been presented graphically and discussed in detail.
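    Explicit finite-difference schemes of the kind used above advance each grid point in time from its neighbors' current values, subject to a stability limit on the time step. As a minimal analogue (not the paper's coupled MHD system), here is one explicit FTCS step for 1-D diffusion; the function name and parameters are illustrative.

    ```python
    import numpy as np

    def ftcs_step(T, dt, dx, alpha):
        """One explicit (forward-time, centered-space) step for 1-D diffusion
        T_t = alpha * T_xx with fixed boundary values.
        Stable when r = alpha*dt/dx**2 <= 0.5."""
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
        return Tn

    # Relax an initial spike toward the fixed (zero) boundary values.
    dx, dt, alpha = 0.1, 0.002, 1.0      # r = alpha*dt/dx^2 = 0.2 (stable)
    T = np.zeros(11); T[5] = 1.0
    for _ in range(100):
        T = ftcs_step(T, dt, dx, alpha)
    ```

    The stability/convergence analysis mentioned in the abstract amounts to checking such a step-size restriction for the full coupled system.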

  9. ExaMPM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, Stuart R

    ExaMPM is a mini-application for the Material Point Method (MPM) for studying the application of MPM to future exascale computing systems. MPM is a general method for computational mechanics and fluids and is used in a wide variety of science and engineering disciplines to study problems with large deformations, phase change, fracture, and other phenomena. ExaMPM provides a reference implementation of MPM as described in the 1994 work of Sulsky et al. (Sulsky, Deborah, Zhen Chen, and Howard L. Schreyer. "A particle method for history-dependent materials." Computer Methods in Applied Mechanics and Engineering 118.1-2 (1994): 179-196.). The software can solve basic MPM problems in solid mechanics using the original algorithm of Sulsky with explicit time integration, basic geometries, and free-slip and no-slip boundary conditions as described in the reference. ExaMPM is intended to be used as a starting point to design new parallel algorithms for the next generation of DOE supercomputers.
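    An explicit MPM update of the Sulsky kind begins by transferring particle mass and momentum to a background grid. A minimal 1-D particle-to-grid (P2G) transfer with linear hat shape functions can be sketched as below; this is an illustrative sketch, not ExaMPM's code, and the function name `p2g_linear` is an assumption.

    ```python
    import numpy as np

    def p2g_linear(xp, mp, vp, dx, n_nodes):
        """Minimal 1-D particle-to-grid (P2G) transfer with linear (hat)
        shape functions — the first step of an explicit MPM update.
        xp, mp, vp: particle positions, masses, velocities."""
        m_grid = np.zeros(n_nodes)
        mv_grid = np.zeros(n_nodes)
        for x, m, v in zip(xp, mp, vp):
            i = int(x // dx)              # left node of the particle's cell
            w_right = x / dx - i          # linear weight toward node i+1
            w_left = 1.0 - w_right
            m_grid[i] += w_left * m;      mv_grid[i] += w_left * m * v
            m_grid[i + 1] += w_right * m; mv_grid[i + 1] += w_right * m * v
        # Grid velocity = momentum / mass (zero where the grid is empty).
        v_grid = np.divide(mv_grid, m_grid, out=np.zeros(n_nodes),
                           where=m_grid > 0)
        return m_grid, v_grid

    # One particle of mass 2 at x = 0.25 on a unit grid.
    m_grid, v_grid = p2g_linear([0.25], [2.0], [3.0], dx=1.0, n_nodes=3)
    ```

    Momentum and mass are conserved by construction; the grid then takes an explicit time step before velocities are mapped back to the particles.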

  10. The challenges of incorporating cultural ecosystem services into environmental assessment.

    PubMed

    Satz, Debra; Gould, Rachelle K; Chan, Kai M A; Guerry, Anne; Norton, Bryan; Satterfield, Terre; Halpern, Benjamin S; Levine, Jordan; Woodside, Ulalia; Hannahs, Neil; Basurto, Xavier; Klain, Sarah

    2013-10-01

    The ecosystem services concept is used to make explicit the diverse benefits ecosystems provide to people, with the goal of improving assessment and, ultimately, decision-making. Alongside material benefits such as natural resources (e.g., clean water, timber), this concept includes-through the 'cultural' category of ecosystem services-diverse non-material benefits that people obtain through interactions with ecosystems (e.g., spiritual inspiration, cultural identity, recreation). Despite the longstanding focus of ecosystem services research on measurement, most cultural ecosystem services have defied measurement and inclusion alongside other more 'material' services. This gap in measurement of cultural ecosystem services is a product of several perceived problems, some of which are not real problems and some of which can be mitigated or even solved without undue difficulty. Because of the fractured nature of the literature, these problems continue to plague the discussion of cultural services. In this paper we discuss several such problems, which although they have been addressed singly, have not been brought together in a single discussion. There is a need for a single, accessible treatment of the importance and feasibility of integrating cultural ecosystem services alongside others.

  11. Mathematical Metaphors: Problem Reformulation and Analysis Strategies

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    2005-01-01

    This paper addresses the critical need for the development of intelligent or assisting software tools for the scientist who is working in the initial problem formulation and mathematical model representation stage of research. In particular, examples of that representation in fluid dynamics and instability theory are discussed. The creation of a mathematical model that is ready for application of certain solution strategies requires extensive symbolic manipulation of the original mathematical model. These manipulations can be as simple as term reordering or as complicated as discovery of various symmetry groups embodied in the equations, whereby Backlund-type transformations create new determining equations and integrability conditions or create differential Grobner bases that are then solved in place of the original nonlinear PDEs. Several examples are presented of the kinds of problem formulations and transforms that can be frequently encountered in model representation for fluids problems. The capability of intelligently automating these types of transforms, available prior to actual mathematical solution, is advocated. Physical meaning and assumption-understanding can then be propagated through the mathematical transformations, allowing for explicit strategy development.

  12. Toward interactive scheduling systems for managing medical resources.

    PubMed

    Oddi, A; Cesta, A

    2000-10-01

    Managers of medico-hospital facilities are facing two general problems when allocating resources to activities: (1) to find an agreement between several and contrasting requirements; (2) to manage dynamic and uncertain situations when constraints suddenly change over time due to medical needs. This paper describes the results of a research aimed at applying constraint-based scheduling techniques to the management of medical resources. A mixed-initiative problem solving approach is adopted in which a user and a decision support system interact to incrementally achieve a satisfactory solution to the problem. A running prototype is described called Interactive Scheduler which offers a set of functionalities for a mixed-initiative interaction to cope with the medical resource management. Interactive Scheduler is endowed with a representation schema used for describing the medical environment, a set of algorithms that address the specific problems of the domain, and an innovative interaction module that offers functionalities for the dialogue between the support system and its user. A particular contribution of this work is the explicit representation of constraint violations, and the definition of scheduling algorithms that aim at minimizing the amount of constraint violations in a solution.

  13. First and second order derivatives for optimizing parallel RF excitation waveforms.

    PubMed

    Majewski, Kurt; Ritter, Dieter

    2015-09-01

    For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations. Copyright © 2015 Elsevier Inc. All rights reserved.
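    The key observation above — that for piecewise constant fields the (relaxation-free) Bloch equations are solved by a concatenation of rotations, conveniently expressed in quaternions — can be sketched as follows. This is an illustrative sketch, not the authors' code; the sign convention and the default gyromagnetic ratio (proton, γ/2π ≈ 42.577 MHz/T) are assumptions.

    ```python
    import numpy as np

    def quat_from_field(b, dt, gamma=2 * np.pi * 42.577e6):
        """Unit quaternion for precession about a constant field b over dt.
        Rotation angle gamma*|b|*dt about axis b/|b| (one sign convention)."""
        angle = gamma * np.linalg.norm(b) * dt
        axis = b / np.linalg.norm(b)
        return np.concatenate(([np.cos(angle / 2)], -np.sin(angle / 2) * axis))

    def quat_mul(q, r):
        """Hamilton product q * r."""
        w1, x1, y1, z1 = q
        w2, x2, y2, z2 = r
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate(q, m):
        """Apply unit quaternion q to a 3-vector m via q m q*."""
        qm = np.concatenate(([0.0], m))
        qc = q * np.array([1, -1, -1, -1])
        return quat_mul(quat_mul(q, qm), qc)[1:]

    # A field along x chosen so gamma*|b|*dt = pi/2 tips z-magnetization
    # into the transverse plane (here gamma=1 for illustration).
    q = quat_from_field(np.array([1.0, 0.0, 0.0]), np.pi / 2, gamma=1.0)
    m = rotate(q, np.array([0.0, 0.0, 1.0]))
    ```

    Successive field intervals simply multiply their quaternions, which is what makes the derivatives of the objective tractable in the paper's formulation.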

  14. First and second order derivatives for optimizing parallel RF excitation waveforms

    NASA Astrophysics Data System (ADS)

    Majewski, Kurt; Ritter, Dieter

    2015-09-01

    For piecewise constant magnetic fields, the Bloch equations (without relaxation terms) can be solved explicitly. This way the magnetization created by an excitation pulse can be written as a concatenation of rotations applied to the initial magnetization. For fixed gradient trajectories, the problem of finding parallel RF waveforms, which minimize the difference between achieved and desired magnetization on a number of voxels, can thus be represented as a finite-dimensional minimization problem. We use quaternion calculus to formulate this optimization problem in the magnitude least squares variant and specify first and second order derivatives of the objective function. We obtain a small tip angle approximation as first order Taylor development from the first order derivatives and also develop algorithms for first and second order derivatives for this small tip angle approximation. All algorithms are accompanied by precise floating point operation counts to assess and compare the computational efforts. We have implemented these algorithms as callback functions of an interior-point solver. We have applied this numerical optimization method to example problems from the literature and report key observations.

  15. Problem-solving variability in older spouses: how is it linked to problem-, person-, and couple-characteristics?

    PubMed

    Hoppmann, Christiane A; Blanchard-Fields, Fredda

    2011-09-01

    Problem-solving does not take place in isolation and often involves social others such as spouses. Using repeated daily life assessments from 98 older spouses (M age = 72 years; M marriage length = 42 years), the present study examined theoretical notions from social-contextual models of coping regarding (a) the origins of problem-solving variability and (b) associations between problem-solving and specific problem-, person-, and couple- characteristics. Multilevel models indicate that the lion's share of variability in everyday problem-solving is located at the level of the problem situation. Importantly, participants reported more proactive emotion regulation and collaborative problem-solving for social than nonsocial problems. We also found person-specific consistencies in problem-solving. That is, older spouses high in Neuroticism reported more problems across the study period as well as less instrumental problem-solving and more passive emotion regulation than older spouses low in Neuroticism. Contrary to expectations, relationship satisfaction was unrelated to problem-solving in the present sample. Results are in line with the stress and coping literature in demonstrating that everyday problem-solving is a dynamic process that has to be viewed in the broader context in which it occurs. Our findings also complement previous laboratory-based work on everyday problem-solving by underscoring the benefits of examining everyday problem-solving as it unfolds in spouses' own environment.

  16. Resource Letter RPS-1: Research in problem solving

    NASA Astrophysics Data System (ADS)

    Hsu, Leonardo; Brewe, Eric; Foster, Thomas M.; Harper, Kathleen A.

    2004-09-01

    This Resource Letter provides a guide to the literature on research in problem solving, especially in physics. The references were compiled with two audiences in mind: physicists who are (or might become) engaged in research on problem solving, and physics instructors who are interested in using research results to improve their students' learning of problem solving. In addition to general references, journal articles and books are cited for the following topics: cognitive aspects of problem solving, expert-novice problem-solver characteristics, problem solving in mathematics, alternative problem types, curricular interventions, and the use of computers in problem solving.

  17. New core-reflector boundary conditions for transient nodal reactor calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, E.K.; Kim, C.H.; Joo, H.K.

    1995-09-01

    New core-reflector boundary conditions designed for the exclusion of the reflector region in transient nodal reactor calculations are formulated. Spatially flat frequency approximations for the temporal neutron behavior and two types of transverse leakage approximations in the reflector region are introduced to solve the transverse-integrated time-dependent one-dimensional diffusion equation and then to obtain relationships between net current and flux at the core-reflector interfaces. To examine the effectiveness of new core-reflector boundary conditions in transient nodal reactor computations, nodal expansion method (NEM) computations with and without explicit representation of the reflector are performed for Laboratorium fuer Reaktorregelung und Anlagen (LRA) boiling water reactor (BWR) and Nuclear Energy Agency Committee on Reactor Physics (NEACRP) pressurized water reactor (PWR) rod ejection kinetics benchmark problems. Good agreement between two NEM computations is demonstrated in all the important transient parameters of two benchmark problems. A significant amount of CPU time saving is also demonstrated with the boundary condition model with transverse leakage (BCMTL) approximations in the reflector region. In the three-dimensional LRA BWR, the BCMTL and the explicit reflector model computations differ by {approximately}4% in transient peak power density while the BCMTL results in >40% of CPU time saving by excluding both the axial and the radial reflector regions from explicit computational nodes. In the NEACRP PWR problem, which includes six different transient cases, the largest difference is 24.4% in the transient maximum power in the one-node-per-assembly B1 transient results. This difference in the transient maximum power of the B1 case is shown to reduce to 11.7% in the four-node-per-assembly computations. As for the computing time, BCMTL is shown to reduce the CPU time >20% in all six transient cases of the NEACRP PWR.

  18. Students’ difficulties in probabilistic problem-solving

    NASA Astrophysics Data System (ADS)

    Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.

    2018-03-01

    Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems, focusing on analyzing and describing students' errors during problem solving. This research used a qualitative method with a case study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise students' probabilistic problem-solving results and recorded interviews regarding their difficulties in solving the problems. These data were analyzed descriptively using the steps of Miles and Huberman. The results show that students' difficulties in solving probabilistic problems fall into three categories. The first relates to difficulties in understanding the probabilistic problem. The second concerns difficulties in choosing and using appropriate strategies for solving the problem. The third involves difficulties with the computational process. The results suggest that students are not yet able to apply their knowledge and abilities to probabilistic problems. Therefore, it is important for mathematics teachers to plan probabilistic learning that can optimize students' probabilistic thinking ability.

  19. Numerical study of hydrogen-air supersonic combustion by using elliptic and parabolized equations

    NASA Technical Reports Server (NTRS)

    Chitsomboon, T.; Tiwari, S. N.

    1986-01-01

    The two-dimensional Navier-Stokes and species continuity equations are used to investigate supersonic chemically reacting flow problems which are related to scramjet-engine configurations. A global two-step finite-rate chemistry model is employed to represent the hydrogen-air combustion in the flow. An algebraic turbulent model is adopted for turbulent flow calculations. The explicit unsplit MacCormack finite-difference algorithm is used to develop a computer program suitable for a vector processing computer. The computer program developed is then used to integrate the system of the governing equations in time until convergence is attained. The chemistry source terms in the species continuity equations are evaluated implicitly to alleviate stiffness associated with fast chemical reactions. The problems solved by the elliptic code are re-investigated by using a set of two-dimensional parabolized Navier-Stokes and species equations. A linearized fully-coupled fully-implicit finite difference algorithm is used to develop a second computer code which solves the governing equations by marching in space rather than time, resulting in a considerable saving in computer resources. Results obtained by using the parabolized formulation are compared with the results obtained by using the fully-elliptic equations. The comparisons indicate fairly good agreement of the results of the two formulations.
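    The explicit unsplit MacCormack algorithm named above is a two-stage predictor-corrector scheme. As a minimal sketch (for linear advection on a periodic grid, not the paper's reacting Navier-Stokes system; the function name is illustrative):

    ```python
    import numpy as np

    def maccormack_step(u, c, dt, dx):
        """One explicit MacCormack step for linear advection u_t + c u_x = 0
        on a periodic grid: forward-difference predictor, backward-difference
        corrector, averaged."""
        nu = c * dt / dx
        up = u - nu * (np.roll(u, -1) - u)                  # predictor
        return 0.5 * (u + up - nu * (up - np.roll(up, 1)))  # corrector

    # Advect a smooth bump once around a periodic domain.
    n = 100
    x = np.linspace(0, 1, n, endpoint=False)
    u0 = np.exp(-100 * (x - 0.5) ** 2)
    c, dx = 1.0, 1.0 / n
    dt = 0.5 * dx / c                 # CFL number 0.5
    u = u0.copy()
    for _ in range(2 * n):            # distance travelled = one full period
        u = maccormack_step(u, c, dt, dx)
    ```

    Alternating the one-sided differences between predictor and corrector is what gives the scheme second-order accuracy despite each stage being only first order.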

  20. Investigation of supersonic chemically reacting and radiating channel flow

    NASA Technical Reports Server (NTRS)

    Mani, Mortaza; Tiwari, Surendra N.

    1988-01-01

    The 2-D time-dependent Navier-Stokes equations are used to investigate supersonic flows undergoing finite rate chemical reaction and radiation interaction for a hydrogen-air system. The explicit multistage finite volume technique of Jameson is used to advance the governing equations in time until convergence is achieved. The chemistry source term in the species equation is treated implicitly to alleviate the stiffness associated with fast reactions. The multidimensional radiative transfer equations for a nongray model are provided for a general configuration and then reduced for a planar geometry. Both pseudo-gray and nongray models are used to represent the absorption-emission characteristics of the participating species. The supersonic inviscid and viscous, nonreacting flows are solved by employing the finite volume technique of Jameson and the unsplit finite difference scheme of MacCormack. The specified problem considered is of the flow in a channel with a 10 deg compression-expansion ramp. The calculated results are compared with those of an upwind scheme. The problem of chemically reacting and radiating flows are solved for the flow of premixed hydrogen-air through a channel with parallel boundaries, and a channel with a compression corner. Results obtained for specific conditions indicate that the radiative interaction can have a significant influence on the entire flow field.

  1. Development of a problem solving evaluation instrument; untangling of specific problem solving assets

    NASA Astrophysics Data System (ADS)

    Adams, Wendy Kristine

    The purpose of my research was to produce a problem solving evaluation tool for physics. To do this it was necessary to gain a thorough understanding of how students solve problems. Although physics educators highly value problem solving and have put extensive effort into understanding successful problem solving, there is currently no efficient way to evaluate problem solving skill. Attempts have been made in the past; however, knowledge of the principles required to solve the subject problem are so absolutely critical that they completely overshadow any other skills students may use when solving a problem. The work presented here is unique because the evaluation tool removes the requirement that the student already have a grasp of physics concepts. It is also unique because I picked a wide range of people and picked a wide range of tasks for evaluation. This is an important design feature that helps make things emerge more clearly. This dissertation includes an extensive literature review of problem solving in physics, math, education and cognitive science as well as descriptions of studies involving student use of interactive computer simulations, the design and validation of a beliefs about physics survey and finally the design of the problem solving evaluation tool. I have successfully developed and validated a problem solving evaluation tool that identifies 44 separate assets (skills) necessary for solving problems. Rigorous validation studies, including work with an independent interviewer, show these assets identified by this content-free evaluation tool are the same assets that students use to solve problems in mechanics and quantum mechanics. Understanding this set of component assets will help teachers and researchers address problem solving within the classroom.

  2. Age differences in everyday problem-solving effectiveness: older adults select more effective strategies for interpersonal problems.

    PubMed

    Blanchard-Fields, Fredda; Mienaltowski, Andrew; Seay, Renee Baldi

    2007-01-01

    Using the Everyday Problem Solving Inventory of Cornelius and Caspi, we examined differences in problem-solving strategy endorsement and effectiveness in two domains of everyday functioning (instrumental or interpersonal, and a mixture of the two domains) and for four strategies (avoidance-denial, passive dependence, planful problem solving, and cognitive analysis). Consistent with past research, our research showed that older adults were more problem focused than young adults in their approach to solving instrumental problems, whereas older adults selected more avoidant-denial strategies than young adults when solving interpersonal problems. Overall, older adults were also more effective than young adults when solving everyday problems, in particular for interpersonal problems.

  3. Spontaneous gestures influence strategy choices in problem solving.

    PubMed

    Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro

    2011-09-01

    Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.

  4. Too upset to think: the interplay of borderline personality features, negative emotions, and social problem solving in the laboratory.

    PubMed

    Dixon-Gordon, Katherine L; Chapman, Alexander L; Lovasz, Nathalie; Walters, Kris

    2011-10-01

    Borderline personality disorder (BPD) is associated with poor social problem solving and problems with emotion regulation. In this study, the social problem-solving performance of undergraduates with high (n = 26), mid (n = 32), or low (n = 29) levels of BPD features was assessed with the Social Problem-Solving Inventory-Revised and using the means-ends problem-solving procedure before and after a social rejection stressor. The high-BP group, but not the low-BP group, showed a significant reduction in relevant solutions to social problems and more inappropriate solutions following the negative emotion induction. Increases in self-reported negative emotions during the emotion induction mediated the relationship between BP features and reductions in social problem-solving performance. In addition, the high-BP group demonstrated trait deficits in social problem solving on the Social Problem-Solving Inventory-Revised. These findings suggest that future research must examine social problem solving under differing emotional conditions, and that clinical interventions to improve social problem solving among persons with BP features should focus on responses to emotional contexts.

  5. Accelerating moderately stiff chemical kinetics in reactive-flow simulations using GPUs

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Sung, Chih-Jen

    2014-01-01

    The chemical kinetics ODEs arising from operator-split reactive-flow simulations were solved on GPUs using explicit integration algorithms. Nonstiff chemical kinetics of a hydrogen oxidation mechanism (9 species and 38 irreversible reactions) were computed using the explicit fifth-order Runge-Kutta-Cash-Karp method, and the GPU-accelerated version performed faster than single- and six-core CPU versions by factors of 126 and 25, respectively, for 524,288 ODEs. Moderately stiff kinetics, represented with mechanisms for hydrogen/carbon-monoxide (13 species and 54 irreversible reactions) and methane (53 species and 634 irreversible reactions) oxidation, were computed using the stabilized explicit second-order Runge-Kutta-Chebyshev (RKC) algorithm. The GPU-based RKC implementation demonstrated an increase in performance of nearly 59 and 10 times, for problem sizes consisting of 262,144 ODEs and larger, than the single- and six-core CPU-based RKC algorithms using the hydrogen/carbon-monoxide mechanism. With the methane mechanism, RKC-GPU performed more than 65 and 11 times faster, for problem sizes consisting of 131,072 ODEs and larger, than the single- and six-core RKC-CPU versions, and up to 57 times faster than the six-core CPU-based implicit VODE algorithm on 65,536 ODEs. In the presence of more severe stiffness, such as ethylene oxidation (111 species and 1566 irreversible reactions), RKC-GPU performed more than 17 times faster than RKC-CPU on six cores for 32,768 ODEs and larger, and at best 4.5 times faster than VODE on six CPU cores for 65,536 ODEs. With a larger time step size, RKC-GPU performed at best 2.5 times slower than six-core VODE for 8192 ODEs and larger. Therefore, the need for developing new strategies for integrating stiff chemistry on GPUs was discussed.
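    The setting above — very many small, mutually independent ODE systems, one per grid cell, each integrated with an explicit Runge-Kutta method — is what maps so well onto GPU threads. As a CPU-side analogue (a vectorized batch instead of GPU threads, and classical RK4 instead of the Cash-Karp or RKC methods the paper actually uses), a sketch:

    ```python
    import numpy as np

    def rk4_batch(f, y, t, dt):
        """One classical RK4 step applied to a whole batch of independent ODE
        systems at once; y has shape (n_systems, n_species). A vectorized
        stand-in for the per-thread integration done on the GPU."""
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Batch of simple decay problems y' = -k y with a different rate per system.
    rates = np.linspace(0.5, 2.0, 4).reshape(-1, 1)
    f = lambda t, y: -rates * y
    y = np.ones((4, 1))
    for _ in range(100):
        y = rk4_batch(f, y, 0.0, 0.01)   # integrate to t = 1
    # y[i] is now close to exp(-rates[i])
    ```

    Because every system takes identical arithmetic steps, the work is embarrassingly parallel — which is also why stiffness is so costly on GPUs: implicit solvers break that uniformity, motivating the stabilized explicit RKC approach studied in the paper.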

  6. An Investigation of Secondary Teachers’ Understanding and Belief on Mathematical Problem Solving

    NASA Astrophysics Data System (ADS)

    Yuli Eko Siswono, Tatag; Wachidul Kohar, Ahmad; Kurniasari, Ika; Puji Astuti, Yuliani

    2016-02-01

    Weaknesses in problem solving among Indonesian students, as reported by recent international surveys, raise questions about how Indonesian teachers bring the idea of problem solving into mathematics lessons. An explorative study was undertaken to investigate how secondary teachers who teach mathematics at the junior high school level understand, and express beliefs about, mathematical problem solving. Participants were teachers from four cities in East Java province, comprising 45 state teachers and 25 private teachers. Data were obtained through questionnaires and a written test. The results point out that the teachers understand pedagogical problem-solving knowledge well, as indicated by high scores on responses showing an understanding of problem solving as instruction as well as of its implementation in teaching practice. However, they understand less about problem-solving content knowledge, such as problem-solving strategies and the meaning of a problem itself. Regarding difficulties, teachers admitted most frequently failing in (1) determining a precise mathematical model or strategy when carrying out problem-solving steps, which is supported by test data revealing transformation errors as the most frequently observed errors in teachers' work, and (2) choosing a suitable real situation when designing a context-based problem-solving task. Meanwhile, analysis of teachers' beliefs about problem solving shows that they tend to view both mathematics and how students should learn it from a static perspective, while tending to apply the idea of problem solving as a dynamic approach when teaching mathematics.

  7. The Impact of Teacher Training on Creative Writing and Problem-Solving Using Futuristic Scenarios for Creative Problem Solving and Creative Problem Solving Programs

    ERIC Educational Resources Information Center

    Hayel Al-Srour, Nadia; Al-Ali, Safa M.; Al-Oweidi, Alia

    2016-01-01

    The present study aims to detect the impact of teacher training on creative writing and problem-solving using both Futuristic scenarios program to solve problems creatively, and creative problem solving. To achieve the objectives of the study, the sample was divided into two groups, the first consist of 20 teachers, and 23 teachers to second…

  8. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.

  9. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts–in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839

  10. Sustainable knowledge development across cultural boundaries: Experiences from the EU-project SILMAS (Toolbox for conflict solving instruments in Alpine Lake Management)

    NASA Astrophysics Data System (ADS)

    Fegerl, Michael; Wieden, Wilfried

    2013-04-01

    Increasingly, people have to communicate knowledge across cultural and language boundaries. Even though recent technologies offer powerful communication facilities, people often feel confronted with barriers which clearly reduce their chances of making their interaction a success. Concrete evidence of such problems derives from a number of projects, where generated knowledge often results in dead-end products. In the Alpine Space project SILMAS (Sustainable Instruments for Lake Management in Alpine Space), in which both authors were involved, a special approach (syneris®) was taken to avoid this problem and to manage project knowledge in sustainable form. Under this approach, knowledge input and output are handled interactively: relevant knowledge can be developed continuously, and users can always access the latest state of expertise. Resort to the respective tools and procedures can also assist in closing knowledge gaps and in developing innovative responses to familiar or novel problems. This contribution describes ways and means which have been found to increase the chances of success of knowledge communication across cultural boundaries. The process by which experts from different cultures discuss their way to a standardized solution is highlighted, as is the problem of disseminating expert knowledge to diverse stakeholders. Finally, lessons learned are made accessible; a main task lay in the creation of a toolbox of conflict-solving instruments as a demonstrable result of the project and for the time thereafter. The interactive web-based toolbox enables lake managers to access best-practice instruments in standardized, explicit and cross-linguistic form.

  11. Problem-solving skills in high school biology: The effectiveness of the IMMEX problem-solving assessment software

    NASA Astrophysics Data System (ADS)

    Palacio-Cayetano, Joycelin

    "Problem-solving through reflective thinking should be both the method and valuable outcome of science instruction in America's schools" proclaimed John Dewey (Gabel, 1995). If the development of problem-solving is a primary goal of science education, more problem-solving opportunities must be an integral part of K-16 education. To examine the effective use of technology in developing and assessing problem-solving skills, a problem-solving authoring, learning, and assessment software package, the UCLA IMMEX Program (Interactive Multimedia Exercises), was investigated. This study was a twenty-week quasi-experimental study implemented as a control-group time series design among 120 tenth-grade students. Both the experimental group (n = 60) and the control group (n = 60) participated in a problem-based learning curriculum; however, the experimental group received regular intensive experiences with IMMEX problem-solving and the control group did not. A problem-solving pretest and posttest were administered to all students. The instruments used were a 35-item Processes of Biological Inquiry Test and an IMMEX problem-solving assessment test, True Roots. Students who participated in the IMMEX Program achieved significant (p <.05) gains in problem-solving skills on both problem-solving assessment instruments. This study provided evidence that IMMEX software is highly efficient in evaluating salient elements of problem-solving. Outputs of students' problem-solving strategies revealed that unsuccessful problem solvers primarily used the following four strategies: (1) no data search strategy, students simply guessed; (2) limited data search strategy, leading to insufficient data and premature closing; (3) irrelevant data search strategy, students focused on areas bearing no substantive data; and (4) extensive data search strategy with inadequate integration and analysis. In contrast, successful problem solvers used the following strategies: (1) focused search strategy coupled with the ability to fill in knowledge gaps by accessing the appropriate resources; (2) targeted search strategy coupled with a high level of analytical and integration skills; and (3) focused search strategy coupled with superior discrimination, analytical, and integration skills. The strategies of students who were successful and unsuccessful in solving IMMEX problems were consistent with those of expert and novice problem solvers identified in the literature on problem-solving.

  12. Block Preconditioning to Enable Physics-Compatible Implicit Multifluid Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Phillips, Edward; Shadid, John; Cyr, Eric; Miller, Sean

    2017-10-01

    Multifluid plasma simulations involve large systems of partial differential equations in which many time-scales ranging over many orders of magnitude arise. Since the fastest of these time-scales may set a restrictively small time-step limit for explicit methods, the use of implicit or implicit-explicit time integrators can be more tractable for obtaining dynamics at time-scales of interest. Furthermore, to enforce properties such as charge conservation and divergence-free magnetic field, mixed discretizations using volume, nodal, edge-based, and face-based degrees of freedom are often employed in some form. Together with the presence of stiff modes due to integrating over fast time-scales, the mixed discretization makes the required linear solves for implicit methods particularly difficult for black box and monolithic solvers. This work presents a block preconditioning strategy for multifluid plasma systems that segregates the linear system based on discretization type and approximates off-diagonal coupling in block diagonal Schur complement operators. By employing multilevel methods for the block diagonal subsolves, this strategy yields algorithmic and parallel scalability which we demonstrate on a range of problems.
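
    As a hedged illustration of the Schur-complement idea described above (scalar 1x1 "blocks" stand in for the segregated sub-blocks, and all numbers are invented, not from the paper): block LDU elimination with an exact Schur complement solves a 2x2 block system, and practical preconditioners replace the inner inverses with approximate multilevel sub-solves.

```python
# Sketch only: scalar "blocks" standing in for the segregated field/fluid
# sub-blocks of the system [[A, B], [C, D]].  In a real solver each
# division below would be an (approximate) multilevel sub-solve.
A, B, C, D = 4.0, 1.0, 2.0, 3.0
S = D - C * (1.0 / A) * B            # Schur complement of A

def block_solve(r1, r2):
    """Exact block-LDU solve of [[A, B], [C, D]] [x1, x2] = [r1, r2]."""
    y1 = r1 / A                      # lower-triangular (forward) sweep
    y2 = (r2 - C * y1) / S
    x2 = y2                          # upper-triangular (backward) sweep
    x1 = y1 - (1.0 / A) * B * x2
    return x1, x2

x1, x2 = block_solve(4.0, 7.0)
```

    With the Schur complement approximated rather than formed exactly, the same sweep becomes the block preconditioner applied at each Krylov iteration.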

  13. Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy

    NASA Astrophysics Data System (ADS)

    Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.

    2011-08-01

    The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.

  14. NASCRIN - NUMERICAL ANALYSIS OF SCRAMJET INLET

    NASA Technical Reports Server (NTRS)

    Kumar, A.

    1994-01-01

    The NASCRIN program was developed for analyzing two-dimensional flow fields in supersonic combustion ramjet (scramjet) inlets. NASCRIN solves the two-dimensional Euler or Navier-Stokes equations in conservative form by an unsplit, explicit, two-step finite-difference method. A more recent explicit-implicit, two-step scheme has also been incorporated in the code for viscous flow analysis. An algebraic, two-layer eddy-viscosity model is used for the turbulent flow calculations. NASCRIN can analyze both inviscid and viscous flows with no struts, one strut, or multiple struts embedded in the flow field. NASCRIN can be used in a quasi-three-dimensional sense for some scramjet inlets under certain simplifying assumptions. Although developed for supersonic internal flow, NASCRIN may be adapted to a variety of other flow problems. In particular, it should be readily adaptable to subsonic inflow with supersonic outflow, supersonic inflow with subsonic outflow, or fully subsonic flow. The NASCRIN program is available for batch execution on the CDC CYBER 203. The vectorized FORTRAN version was developed in 1983. NASCRIN has a central memory requirement of approximately 300K words for a grid size of about 3,000 points.
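
    The unsplit, explicit, two-step finite-difference method mentioned above is of the predictor-corrector (MacCormack) family. A minimal sketch of one such step, for 1D linear advection with periodic boundaries rather than the 2D Euler/Navier-Stokes equations NASCRIN solves (so purely illustrative, not NASCRIN code):

```python
# Toy two-step explicit (MacCormack-style) update for u_t + a u_x = 0 on a
# periodic grid; an illustrative stand-in for the scheme in the abstract.
def maccormack_step(u, a, dt, dx):
    n, c = len(u), a * dt / dx
    # predictor: forward difference
    up = [u[i] - c * (u[(i + 1) % n] - u[i]) for i in range(n)]
    # corrector: backward difference on predicted values, then average
    return [0.5 * (u[i] + up[i] - c * (up[i] - up[(i - 1) % n]))
            for i in range(n)]
```

    At a CFL number of one (a·dt/dx = 1) the step shifts the profile by exactly one cell, which makes it easy to sanity-check.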

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jianyuan; Liu, Jian; He, Yang

    Explicit high-order non-canonical symplectic particle-in-cell algorithms for classical particle-field systems governed by the Vlasov-Maxwell equations are developed. The algorithms conserve a discrete non-canonical symplectic structure derived from the Lagrangian of the particle-field system, which is naturally discrete in particles. The electromagnetic field is spatially discretized using the method of discrete exterior calculus with high-order interpolating differential forms for a cubic grid. The resulting time-domain Lagrangian assumes a non-canonical symplectic structure. It is also gauge invariant and conserves charge. The system is then solved using a structure-preserving splitting method discovered by He et al. [preprint http://arxiv.org/abs/arXiv:1505.06076 (2015)], which produces five exactly soluble sub-systems, and high-order structure-preserving algorithms follow by combinations. The explicit, high-order, and conservative nature of the algorithms is especially suitable for long-term simulations of particle-field systems with an extremely large number of degrees of freedom on massively parallel supercomputers. The algorithms have been tested and verified on two physics problems, i.e., nonlinear Landau damping and the electron Bernstein wave.

  16. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.
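
    For context (standard background on these polynomials, not a formula from the paper): the Bessel polynomials satisfy the three-term recurrence y_n(x) = (2n-1) x y_{n-1}(x) + y_{n-2}(x) with y_0 = 1 and y_1 = x + 1, which is the kind of relation the expansion coefficients are reduced to. A minimal sketch that builds their coefficient lists:

```python
# Coefficients of the Bessel polynomial y_n, lowest degree first, via the
# standard recurrence y_n = (2n-1) x y_{n-1} + y_{n-2}.
def bessel_poly(n):
    y_prev, y_curr = [1], [1, 1]          # y_0 = 1, y_1 = 1 + x
    if n == 0:
        return y_prev
    for k in range(2, n + 1):
        shifted = [0] + [(2 * k - 1) * c for c in y_curr]   # (2k-1) x y_{k-1}
        padded = y_prev + [0] * (len(shifted) - len(y_prev))
        y_prev, y_curr = y_curr, [a + b for a, b in zip(shifted, padded)]
    return y_curr
```

    For example, y_3(x) = 15x^3 + 15x^2 + 6x + 1.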

  17. Pre-Service Class Teachers' Ability in Solving Mathematical Problems and Skills in Solving Daily Problems

    ERIC Educational Resources Information Center

    Aljaberi, Nahil M.; Gheith, Eman

    2016-01-01

    This study aims to investigate the ability of pre-service class teachers at the University of Petra in solving mathematical problems using Polya's techniques, and their level of problem-solving skills in daily-life issues. The study also investigates the correlation between their ability to solve mathematical problems and their level of problem solving…

  18. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems.

    PubMed

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

    Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and then solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set of problems but did not solve these problems. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way.

  19. Implicit and Explicit Self-Esteem Discrepancies, Victimization and the Development of Late Childhood Internalizing Problems.

    PubMed

    Leeuwis, Franca H; Koot, Hans M; Creemers, Daan H M; van Lier, Pol A C

    2015-07-01

    Discrepancies between implicit and explicit self-esteem have been linked with internalizing problems among mainly adolescents and adults. Longitudinal research on this association in children is lacking. This study examined the longitudinal link between self-esteem discrepancies and the development of internalizing problems in children. It furthermore examined the possible mediating role of self-esteem discrepancies in the longitudinal link between experiences of peer victimization and internalizing problems development. Children (N = 330, M(age) = 11.2 years; 52.5 % female) were followed over grades five (age 11 years) and six (age 12 years). Self-report measures were used annually to test for victimization and internalizing problems. Implicit self-esteem was assessed using an implicit association test, while explicit self-esteem was assessed via self-reports. Self-esteem discrepancies represented the difference between implicit and explicit self-esteem. Results showed that victimization was associated with increases in damaged self-esteem (higher levels of implicit than explicit self-esteem). Additionally, damaged self-esteem at age 11 years predicted an increase in internalizing problems in children over ages 11 to 12 years. Furthermore, damaged self-esteem mediated the relationship between age 11 years victimization and the development of internalizing problems. No impact of fragile self-esteem (lower levels of implicit than explicit self-esteem) on internalizing problems was found. The results thus underscore that, as found in adolescent and adult samples, damaged self-esteem is a predictor of increases in childhood internalizing problems. Moreover, damaged self-esteem might explain why children who are victimized develop internalizing problems. Implications are discussed.

  20. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Crowley, Kay; Saltz, Joel; Mirchandaney, Ravi; Berryman, Harry

    1989-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.
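
    The preprocessing described above is the inspector/executor pattern. A hedged sketch (names and the two-processor block split are invented for illustration; a real implementation would exchange the fetched values by message passing):

```python
# Inspector/executor sketch: before running an irregular loop
# y[i] += x[col[i]], an inspector scans the indirection array once to
# precompute which off-processor entries are needed; the executor then
# reuses that schedule on every execution of the loop.
def partition(n, nprocs):
    """Block-partition indices 0..n-1 among processors."""
    return [list(range(p * n // nprocs, (p + 1) * n // nprocs))
            for p in range(nprocs)]

def inspector(my_indices, col, owned):
    """Schedule: the off-processor entries of x this loop will read."""
    return sorted({col[i] for i in my_indices if col[i] not in owned})

def executor(my_indices, col, x, y):
    """Run the irregular loop (here x is global for simplicity; a real
    run would gather the scheduled remote entries first)."""
    for i in my_indices:
        y[i] += x[col[i]]

n = 8
col = [3, 7, 0, 1, 6, 2, 5, 4]           # dependences encoded in the data
parts = partition(n, 2)
needed = inspector(parts[0], col, set(parts[0]))   # computed once
x, y = list(range(n)), [0] * n
executor(parts[0], col, x, y)
```

    The point of splitting the two phases is that the inspector's cost is amortized: the schedule is built once and the executor template is replayed on every sweep over the loop.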

  1. Run-time scheduling and execution of loops on message passing machines

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Crowley, Kathleen; Mirchandaney, Ravi; Berryman, Harry

    1990-01-01

    Sparse system solvers and general purpose codes for solving partial differential equations are examples of the many types of problems whose irregularity can result in poor performance on distributed memory machines. Often, the data structures used in these problems are very flexible. Crucial details concerning loop dependences are encoded in these structures rather than being explicitly represented in the program. Good methods for parallelizing and partitioning these types of problems require assignment of computations in rather arbitrary ways. Naive implementations of programs on distributed memory machines requiring general loop partitions can be extremely inefficient. Instead, the scheduling mechanism needs to capture the data reference patterns of the loops in order to partition the problem. First, the indices assigned to each processor must be locally numbered. Next, it is necessary to precompute what information is needed by each processor at various points in the computation. The precomputed information is then used to generate an execution template designed to carry out the computation, communication, and partitioning of data, in an optimized manner. The design is presented for a general preprocessor and schedule executer, the structures of which do not vary, even though the details of the computation and of the type of information are problem dependent.

  2. Lessons learnt? The importance of metacognition and its implications for Cognitive Remediation in schizophrenia

    PubMed Central

    Cella, Matteo; Reeder, Clare; Wykes, Til

    2015-01-01

    The cognitive problems experienced by people with schizophrenia not only impede recovery but also interfere with treatments designed to improve overall functioning. Hence there has been a proliferation of new therapies to treat cognitive problems with the hope that improvements will benefit future intervention and recovery outcomes. Cognitive remediation therapy (CR) that relies on intensive task practice can support basic cognitive functioning, but there is little evidence on how these therapies lead to transfer to real-life skills. However, there is increasing evidence that CR including elements of transfer training (e.g., strategy use and problem-solving schemas) produces higher functional outcomes. It is hypothesized that these therapies achieve higher transfer by improving metacognition. People with schizophrenia have metacognitive problems; these include poor self-awareness and difficulties in planning for complex tasks. This paper reviews this evidence as well as research on why metacognition needs to be explicitly taught as part of cognitive treatments. The evidence is based on research on learning spanning from neuroscience to the field of education. Learning programmes, and CR, may be able to achieve better outcomes if they explicitly teach metacognition, including metacognitive knowledge (i.e., awareness of the cognitive requirements and approaches to tasks) and metacognitive regulation (i.e., cognitive control over the different task-relevant cognitive requirements). These types of metacognition are essential for successful task performance, in particular, for controlling effort, accuracy and efficient strategy use. We consider metacognition vital for the transfer of therapeutic gains to everyday life tasks, making it a therapy target that may yield greater gains compared to cognition alone for recovery interventions. PMID:26388797

  3. Implicit and explicit motor sequence learning in children born very preterm.

    PubMed

    Jongbloed-Pereboom, Marjolein; Janssen, Anjo J W M; Steiner, K; Steenbergen, Bert; Nijhuis-van der Sanden, Maria W G

    2017-01-01

    Motor skills can be learned explicitly (dependent on working memory (WM)) or implicitly (relatively independent of WM). Children born very preterm (VPT) often have working memory deficits. Explicit learning may be compromised in these children. This study investigated implicit and explicit motor learning and the role of working memory in VPT children and controls. Three groups (6-9 years) participated: 20 VPT children with motor problems, 20 VPT children without motor problems, and 20 controls. A nine-button sequence was learned implicitly (pressing the lighted button as quickly as possible) and explicitly (discovering the sequence via trial-and-error). Children learned implicitly and explicitly, as evidenced by decreased movement duration of the sequence over time. In the explicit condition, children also reduced the number of errors over time. Controls made more errors than VPT children without motor problems. Visual WM had positive effects on both explicit and implicit performance. VPT birth and low motor proficiency did not negatively affect implicit or explicit learning. Visual WM was positively related to both implicit and explicit performance, but did not influence learning curves. These findings question the theoretical difference between implicit and explicit learning and the proposed role of visual WM therein. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. UCAV path planning in the presence of radar-guided surface-to-air missile threats

    NASA Astrophysics Data System (ADS)

    Zeitz, Frederick H., III

    This dissertation addresses the problem of path planning for unmanned combat aerial vehicles (UCAVs) in the presence of radar-guided surface-to-air missiles (SAMs). The radars, collocated with SAM launch sites, operate within the structure of an Integrated Air Defense System (IADS) that permits communication and cooperation between individual radars. The problem is formulated in the framework of the interaction between three sub-systems: the aircraft, the IADS, and the missile. The main features of this integrated model are: The aircraft radar cross section (RCS) depends explicitly on both the aspect and bank angles; hence, the RCS and aircraft dynamics are coupled. The probabilistic nature of IADS tracking is accounted for; namely, the probability that the aircraft has been continuously tracked by the IADS depends on the aircraft RCS and range from the perspective of each radar within the IADS. Finally, the requirement to maintain tracking prior to missile launch and during missile flyout are also modeled. Based on this model, the problem of UCAV path planning is formulated as a minimax optimal control problem, with the aircraft bank angle serving as control. Necessary conditions of optimality for this minimax problem are derived. Based on these necessary conditions, properties of the optimal paths are derived. These properties are used to discretize the dynamic optimization problem into a finite-dimensional, nonlinear programming problem that can be solved numerically. Properties of the optimal paths are also used to initialize the numerical procedure. A homotopy method is proposed to solve the finite-dimensional, nonlinear programming problem, and a heuristic method is proposed to improve the discretization during the homotopy process. Based upon the properties of numerical solutions, a method is proposed for parameterizing and storing information for later recall in flight to permit rapid replanning in response to changing threats. 
Illustrative examples are presented that confirm the standard flying tactics of "denying range, aspect, and aim," by yielding flight paths that "weave" to avoid long exposures of aspects with large RCS.

  5. Extraction of a group-pair relation: problem-solving relation from web-board documents.

    PubMed

    Pechsiri, Chaveevan; Piriyakul, Rapepun

    2016-01-01

    This paper aims to extract a group-pair relation as a Problem-Solving relation, for example a DiseaseSymptom-Treatment relation and a CarProblem-Repair relation, between two event-explanation groups, a problem-concept group as a symptom/CarProblem-concept group and a solving-concept group as a treatment-concept/repair-concept group, from hospital-web-board and car-repair-guru-web-board documents. The Problem-Solving relation (particularly the Symptom-Treatment relation), including the graphical representation, benefits non-professional persons by supporting knowledge of primarily solving problems. The research contains three problems: how to identify an EDU (an Elementary Discourse Unit, which is a simple sentence) with the event concept of either a problem or a solution; how to determine a problem-concept EDU boundary and a solving-concept EDU boundary as two event-explanation groups; and how to determine the Problem-Solving relation between these two event-explanation groups. Therefore, we apply word co-occurrence to identify a problem-concept EDU and a solving-concept EDU, and machine-learning techniques to solve a problem-concept EDU boundary and a solving-concept EDU boundary. We propose using k-means and Naïve Bayes to determine the Problem-Solving relation between the two event-explanation groups involved with clustering features. In contrast to previous works, the proposed approach enables group-pair relation extraction with high accuracy.
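
    As a toy illustration of the classification step: a multinomial Naive Bayes with Laplace smoothing can label an EDU as a problem-concept or solving-concept unit from its words. All training examples below are invented, and this sketch is only in the spirit of the classifiers named above, not the paper's implementation.

```python
# Toy Naive Bayes (with Laplace smoothing) for labelling an EDU as a
# problem-concept or solving-concept unit from its words.  All training
# examples are invented; the paper's actual features and data differ.
import math
from collections import Counter

train = [
    ("problem", "engine makes knocking noise"),
    ("problem", "fever and sore throat"),
    ("solving", "replace the spark plugs"),
    ("solving", "take paracetamol and rest"),
]

def fit(data):
    priors = Counter(label for label, _ in data)
    words = {label: Counter() for label in priors}
    for label, text in data:
        words[label].update(text.split())
    vocab = {w for counts in words.values() for w in counts}
    return priors, words, vocab

def predict(model, text):
    priors, words, vocab = model
    total = sum(priors.values())
    def log_posterior(label):
        denom = sum(words[label].values()) + len(vocab)
        score = math.log(priors[label] / total)
        for w in text.split():
            score += math.log((words[label][w] + 1) / denom)  # Laplace
        return score
    return max(priors, key=log_posterior)

model = fit(train)
```

    A previously unseen EDU such as "replace the plugs" then scores higher under the solving-concept class because its words co-occur with that class in training.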

  6. Instituting interaction: normative transformations in human communicative practices.

    PubMed

    Elias, John Z; Tylén, Kristian

    2014-01-01

    Recent experiments in semiotics and linguistics demonstrate that groups tend to converge on a common set of signs or terms in response to presented problems, experiments which potentially bear on the emergence and establishment of institutional interactions. Taken together, these studies indicate a spectrum, ranging from the spontaneous convergence of communicative practices to their eventual conventionalization, a process which might be described as an implicit institutionalization of those practices. However, the emergence of such convergence and conventionalization does not in itself constitute an institution, in the strict sense of a social organization partly created and governed by explicit rules. A further step toward institutions proper may occur when others are instructed about a task. That is, given task situations which select for successful practices, instructions about such situations make explicit what was tacit practice, instructions which can then be followed correctly or incorrectly. This transition gives rise to the normative distinction between conditions of success versus conditions of correctness, a distinction which will be explored and complicated in the course of this paper. Using these experiments as a basis, then, the emergence of institutions will be characterized in evolutionary and normative terms, beginning with our adaptive responses to the selective pressures of certain situational environments, and continuing with our capacity to then shape, constrain, and institute those environments to further refine and streamline our problem-solving activity.

  8. Neophilia Ranking of Scientific Journals

    PubMed Central

    Packalen, Mikko; Bhattacharya, Jay

    2017-01-01

    The ranking of scientific journals is important because of the signal it sends to scientists about what is considered most vital for scientific progress. Existing ranking systems focus on measuring the influence of a scientific paper (citations)—these rankings do not reward journals for publishing innovative work that builds on new ideas. We propose an alternative ranking based on the proclivity of journals to publish papers that build on new ideas, and we implement this ranking via a text-based analysis of all published biomedical papers dating back to 1946. In addition, we compare our neophilia ranking to citation-based (impact factor) rankings; this comparison shows that the two ranking approaches are distinct. Prior theoretical work suggests an active role for our neophilia index in science policy. Absent an explicit incentive to pursue novel science, scientists underinvest in innovative work because of a coordination problem: for work on a new idea to flourish, many scientists must decide to adopt it in their work. Rankings that are based purely on influence thus do not provide sufficient incentives for publishing innovative work. By contrast, adoption of the neophilia index as part of journal-ranking procedures by funding agencies and university administrators would provide an explicit incentive for journals to publish innovative work and thus help solve the coordination problem by increasing scientists' incentives to pursue innovative work. PMID:28713181

  9. Students’ Mathematical Problem-Solving Abilities Through The Application of Learning Models Problem Based Learning

    NASA Astrophysics Data System (ADS)

    Nasution, M. L.; Yerizon, Y.; Gusmiyanti, R.

    2018-04-01

    One purpose of mathematics learning is to develop problem-solving abilities, which are acquired through experience with non-routine questions. Improving students’ mathematical problem-solving abilities requires an appropriate strategy in learning activities; one such strategy is problem-based learning (PBL). Thus, the purpose of this research was to determine whether the mathematical problem-solving abilities of students who learn with PBL are better than those of students taught with conventional learning. This research was a quasi-experiment with a static group design, and the population was students of class XI MIA of SMAN 1 Lubuk Alung, with class XI MIA 5 as the experimental class and class XI MIA 6 as the control class. The final test of students’ mathematical problem solving used essay items, and the resulting data were analyzed with a t-test. The result is that the mathematical problem-solving abilities of students taught with PBL are better than those of students taught with conventional learning. This can be seen from the higher percentage achieved by the PBL group on each indicator of students’ mathematical problem solving.
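    The t-test comparison described above can be sketched as follows. This is a minimal illustration assuming a pooled-variance independent-samples t statistic; the scores are hypothetical, not the study's data.

```python
import statistics
from math import sqrt

def pooled_t_statistic(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    # Pooled variance weights each sample variance by its degrees of freedom
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / sqrt(pooled * (1.0 / na + 1.0 / nb))

# Hypothetical final-test scores for illustration only
pbl_scores = [85, 78, 90, 82, 88, 76, 84]           # experimental (PBL) class
conventional_scores = [72, 75, 70, 78, 74, 69, 73]  # control class
t = pooled_t_statistic(pbl_scores, conventional_scores)
```

    A large positive t (compared against the critical value for na + nb - 2 degrees of freedom) supports the conclusion that the PBL group outperformed the control group.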

  10. Using a general problem-solving strategy to promote transfer.

    PubMed

    Youssef-Shalala, Amina; Ayres, Paul; Schubert, Carina; Sweller, John

    2014-09-01

    Cognitive load theory was used to hypothesize that a general problem-solving strategy based on a make-as-many-moves-as-possible heuristic could facilitate problem solutions for transfer problems. In four experiments, school students were required to learn about a topic through practice with a general problem-solving strategy, through a conventional problem-solving strategy, or by studying worked examples. In Experiments 1 and 2, using junior high school students learning geometry, low-knowledge students in the general problem-solving group scored significantly higher on near or far transfer tests than the conventional problem-solving group. In Experiment 3, an advantage for a general problem-solving group over a group presented with worked examples was obtained on far transfer tests using the same curriculum materials, again presented to junior high school students. No differences between conditions were found in Experiments 1, 2, or 3 using test problems similar to the acquisition problems. Experiment 4 used senior high school students studying economics and found the general problem-solving group scored significantly higher than the conventional problem-solving group on both similar and transfer tests. It was concluded that the general problem-solving strategy was helpful for novices, but not for students who had access to domain-specific knowledge. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  11. Revising explanatory models to accommodate anomalous genetic phenomena: Problem solving in the context of discovery

    NASA Astrophysics Data System (ADS)

    Hafner, Robert; Stewart, Jim

    Past problem-solving research has provided a basis for helping students structure their knowledge and apply appropriate problem-solving strategies to solve problems for which their knowledge (or mental models) of scientific phenomena is adequate (model-using problem solving). This research examines how problem solving in the domain of Mendelian genetics proceeds in situations where solvers' mental models are insufficient to solve the problems at hand (model-revising problem solving). Such situations require solvers to use existing models to recognize anomalous data and to revise those models to accommodate the data. The study was conducted in the context of a 9-week high school genetics course and addressed: the heuristics characteristic of successful model-revising problem solving; the nature of the model revisions made by students, as well as the nature of model development across problem types; and the basis upon which solvers decide that a revised model is sufficient (that it has both predictive and explanatory power).

  12. Parent-Teacher Communication about Children with Autism Spectrum Disorder: An Examination of Collaborative Problem-Solving

    PubMed Central

    Azad, Gazi F.; Kim, Mina; Marcus, Steven C.; Mandell, David S.; Sheridan, Susan M.

    2016-01-01

    Effective parent-teacher communication involves problem-solving concerns about students. Few studies have examined problem solving interactions between parents and teachers of children with autism spectrum disorder (ASD), with a particular focus on identifying communication barriers and strategies for improving them. This study examined the problem-solving behaviors of parents and teachers of children with ASD. Participants included 18 teachers and 39 parents of children with ASD. Parent-teacher dyads were prompted to discuss and provide a solution for a problem that a student experienced at home and at school. Parents and teachers also reported on their problem-solving behaviors. Results showed that parents and teachers displayed limited use of the core elements of problem-solving. Teachers displayed more problem-solving behaviors than parents. Both groups reported engaging in more problem-solving behaviors than they were observed to display during their discussions. Our findings suggest that teacher and parent training programs should include collaborative approaches to problem-solving. PMID:28392604

  13. Parent-Teacher Communication about Children with Autism Spectrum Disorder: An Examination of Collaborative Problem-Solving.

    PubMed

    Azad, Gazi F; Kim, Mina; Marcus, Steven C; Mandell, David S; Sheridan, Susan M

    2016-12-01

    Effective parent-teacher communication involves problem-solving concerns about students. Few studies have examined problem solving interactions between parents and teachers of children with autism spectrum disorder (ASD), with a particular focus on identifying communication barriers and strategies for improving them. This study examined the problem-solving behaviors of parents and teachers of children with ASD. Participants included 18 teachers and 39 parents of children with ASD. Parent-teacher dyads were prompted to discuss and provide a solution for a problem that a student experienced at home and at school. Parents and teachers also reported on their problem-solving behaviors. Results showed that parents and teachers displayed limited use of the core elements of problem-solving. Teachers displayed more problem-solving behaviors than parents. Both groups reported engaging in more problem-solving behaviors than they were observed to display during their discussions. Our findings suggest that teacher and parent training programs should include collaborative approaches to problem-solving.

  14. Errors analysis of problem solving using the Newman stage after applying cooperative learning of TTW type

    NASA Astrophysics Data System (ADS)

    Rr Chusnul, C.; Mardiyana, S., Dewi Retno

    2017-12-01

    Problem solving is the basis of mathematics learning. It teaches us to clarify an issue coherently in order to avoid misunderstanding information. Mistakes in problem solving may arise from misunderstanding the issue, choosing a wrong concept, or misapplying a concept. The problem-solving test was carried out after students were given treatment in the form of cooperative learning of the TTW type. The purpose of this study was to elucidate students' problem-solving errors after learning with cooperative learning of the TTW type. The Newman stages were used to identify the problem-solving errors in this study. This research used a descriptive method to find out students' problem-solving errors. The subjects in this study were 10th-grade students of a Vocational Senior High School (SMK). Tests and interviews were conducted for data collection. The results of this study describe students' problem-solving errors at each Newman stage after learning with cooperative learning of the TTW type.

  15. Rejection Sensitivity and Depression: Indirect Effects Through Problem Solving.

    PubMed

    Kraines, Morganne A; Wells, Tony T

    2017-01-01

    Rejection sensitivity (RS) and deficits in social problem solving are risk factors for depression. Despite their relationship to depression and the potential connection between them, no studies have examined RS and social problem solving together in the context of depression. As such, we examined RS, five facets of social problem solving, and symptoms of depression in a young adult sample. A total of 180 participants completed measures of RS, social problem solving, and depressive symptoms. We used bootstrapping to examine the indirect effect of RS on depressive symptoms through problem solving. RS was positively associated with depressive symptoms. A negative problem orientation, impulsive/careless style, and avoidance style of social problem solving were positively associated with depressive symptoms, and a positive problem orientation was negatively associated with depressive symptoms. RS demonstrated an indirect effect on depressive symptoms through two social problem-solving facets: the tendency to view problems as threats to one's well-being and an avoidance problem-solving style characterized by procrastination, passivity, or overdependence on others. These results are consistent with prior research that found a positive association between RS and depression symptoms, but this is the first study to implicate specific problem-solving deficits in the relationship between RS and depression. Our results suggest that depressive symptoms in high RS individuals may result from viewing problems as threats and taking an avoidant, rather than proactive, approach to dealing with problems. These findings may have implications for problem-solving interventions for rejection sensitive individuals.
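    The bootstrapping approach to indirect effects used in the study above can be sketched as a percentile bootstrap of the a*b product in a simple mediation model. The data below are simulated for illustration (not the study's), and the two-predictor OLS slopes use the standard closed-form expressions.

```python
import random
import statistics

def cov(xs, ys):
    """Sample covariance."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    return sum((p - mx) * (q - my) for p, q in zip(xs, ys)) / (len(xs) - 1)

def indirect_effect(x, m, y):
    """Indirect effect a*b in a simple mediation model:
    a is the slope of M ~ X; b is the slope of M in Y ~ X + M
    (closed-form two-predictor OLS)."""
    a = cov(x, m) / cov(x, x)
    denom = cov(x, x) * cov(m, m) - cov(x, m) ** 2
    b = (cov(y, m) * cov(x, x) - cov(y, x) * cov(x, m)) / denom
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = random.Random(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        est.append(indirect_effect([x[i] for i in idx],
                                   [m[i] for i in idx],
                                   [y[i] for i in idx]))
    est.sort()
    return est[int(n_boot * alpha / 2)], est[int(n_boot * (1 - alpha / 2)) - 1]

# Simulated data: X -> M -> Y with true indirect effect 0.6 * 0.7
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(180)]
m = [0.6 * xi + rng.gauss(0, 1) for xi in x]
y = [0.7 * mi + rng.gauss(0, 1) for mi in m]
point = indirect_effect(x, m, y)
lo, hi = bootstrap_ci(x, m, y)
```

    An indirect effect is judged significant when the bootstrap confidence interval excludes zero, which is the logic behind the mediation results reported above.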

  16. The Cyclic Nature of Problem Solving: An Emergent Multidimensional Problem-Solving Framework

    ERIC Educational Resources Information Center

    Carlson, Marilyn P.; Bloom, Irene

    2005-01-01

    This paper describes the problem-solving behaviors of 12 mathematicians as they completed four mathematical tasks. The emergent problem-solving framework draws on the large body of research, as grounded by and modified in response to our close observations of these mathematicians. The resulting "Multidimensional Problem-Solving Framework" has four…

  17. Mathematical Problem Solving: A Review of the Literature.

    ERIC Educational Resources Information Center

    Funkhouser, Charles

    The major perspectives on problem solving of the twentieth century are reviewed--associationism, Gestalt psychology, and cognitive science. The results of the review on teaching problem solving and the uses of computers to teach problem solving are included. Four major issues related to the teaching of problem solving are discussed: (1)…

  18. Teaching Problem Solving Skills to Elementary Age Students with Autism

    ERIC Educational Resources Information Center

    Cote, Debra L.; Jones, Vita L.; Barnett, Crystal; Pavelek, Karin; Nguyen, Hoang; Sparks, Shannon L.

    2014-01-01

    Students with disabilities need problem-solving skills to promote their success in solving the problems of daily life. The research into problem-solving instruction has been limited for students with autism. Using a problem-solving intervention and the Self Determined Learning Model of Instruction, three elementary age students with autism were…

  19. The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems

    PubMed Central

    Baars, Martine; Wijnia, Lisette; Paas, Fred

    2017-01-01

    Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and then solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set of problems but did not solve it. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way. PMID:28848467

  20. Case-based medical informatics

    PubMed Central

    Pantazi, Stefan V; Arocha, José F; Moehr, Jochen R

    2004-01-01

    Background The "applied" nature distinguishes applied sciences from theoretical sciences. To emphasize this distinction, we begin with a general, meta-level overview of the scientific endeavor. We introduce the knowledge spectrum and four interconnected modalities of knowledge. In addition to the traditional differentiation between implicit and explicit knowledge we outline the concepts of general and individual knowledge. We connect general knowledge with the "frame problem," a fundamental issue of artificial intelligence, and individual knowledge with another important paradigm of artificial intelligence, case-based reasoning, a method of individual knowledge processing that aims at solving new problems based on the solutions to similar past problems. We outline the fundamental differences between Medical Informatics and theoretical sciences and propose that Medical Informatics research should advance individual knowledge processing (case-based reasoning) and that natural language processing research is an important step towards this goal that may have ethical implications for patient-centered health medicine. Discussion We focus on fundamental aspects of decision-making, which connect human expertise with individual knowledge processing. We continue with a knowledge spectrum perspective on biomedical knowledge and conclude that case-based reasoning is the paradigm that can advance towards personalized healthcare and that can enable the education of patients and providers. We center the discussion on formal methods of knowledge representation around the frame problem. We propose a context-dependent view on the notion of "meaning" and advocate the need for case-based reasoning research and natural language processing. 
In the context of memory-based knowledge processing, pattern recognition, comparison, and analogy-making, we conclude that while humans seem to naturally support the case-based reasoning paradigm (memory of past experiences of problem-solving and powerful case matching mechanisms), technical solutions are challenging. Finally, we discuss the major challenges for a technical solution: case record comprehensiveness, organization of information on similarity principles, development of pattern recognition, and resolution of ethical issues. Summary Medical Informatics is an applied science that should be committed to advancing patient-centered medicine through individual knowledge processing. Case-based reasoning is the technical solution that enables continuous individual knowledge processing and could be applied, provided that the challenges and ethical issues that arise are addressed appropriately. PMID:15533257

  1. An experience sampling study of learning, affect, and the demands control support model.

    PubMed

    Daniels, Kevin; Boocock, Grahame; Glover, Jane; Holland, Julie; Hartley, Ruth

    2009-07-01

    The demands control support model (R. A. Karasek & T. Theorell, 1990) indicates that job control and social support enable workers to engage in problem solving. In turn, problem solving is thought to influence learning and well-being (e.g., anxious affect, activated pleasant affect). Two samples (N = 78, N = 106) provided data up to 4 times per day for up to 5 working days. The extent to which job control was used for problem solving was assessed by measuring the extent to which participants changed aspects of their work activities to solve problems. The extent to which social support was used to solve problems was assessed by measuring the extent to which participants discussed problems to solve problems. Learning mediated the relationship between changing aspects of work activities to solve problems and activated pleasant affect. Learning also mediated the relationship between discussing problems to solve problems and activated pleasant affect. The findings indicated that how individuals use control and support to respond to problem-solving demands is associated with organizational and individual phenomena, such as learning and affective well-being.

  2. What Does (and Doesn't) Make Analogical Problem Solving Easy? A Complexity-Theoretic Perspective

    ERIC Educational Resources Information Center

    Wareham, Todd; Evans, Patricia; van Rooij, Iris

    2011-01-01

    Solving new problems can be made easier if one can build on experiences with other problems one has already successfully solved. The ability to exploit earlier problem-solving experiences in solving new problems seems to require several cognitive sub-abilities. Minimally, one needs to be able to retrieve relevant knowledge of earlier solved…

  3. Synthesizing Huber's Problem Solving and Kolb's Learning Cycle: A Balanced Approach to Technical Problem Solving

    ERIC Educational Resources Information Center

    Kamis, Arnold; Khan, Beverly K.

    2009-01-01

    How do we model and improve technical problem solving, such as network subnetting? This paper reports an experimental study that tested several hypotheses derived from Kolb's experiential learning cycle and Huber's problem solving model. As subjects solved a network subnetting problem, they mapped their mental processes according to Huber's…

  4. Generalization of Social Skills: Strategies and Results of a Training Program in Problem Solving Skills.

    ERIC Educational Resources Information Center

    Paraschiv, Irina; Olley, J. Gregory

    This paper describes the "Problem Solving for Life" training program which trains adolescents and adults with mental retardation in skills for solving social problems. The program requires group participants to solve social problems by practicing two prerequisite skills (relaxation and positive self-statements) and four problem solving steps: (1)…

  5. Young Children's Analogical Problem Solving: Gaining Insights from Video Displays

    ERIC Educational Resources Information Center

    Chen, Zhe; Siegler, Robert S.

    2013-01-01

    This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…

  6. Investigating Problem-Solving Perseverance Using Lesson Study

    ERIC Educational Resources Information Center

    Bieda, Kristen N.; Huhn, Craig

    2017-01-01

    Problem solving has long been a focus of research and curriculum reform (Kilpatrick 1985; Lester 1994; NCTM 1989, 2000; CCSSI 2010). The importance of problem solving is not new, but the Common Core introduced the idea of making sense of problems and persevering in solving them (CCSSI 2010, p. 6) as an aspect of problem solving. Perseverance is…

  7. GPGPU-based explicit finite element computations for applications in biomechanics: the performance of material models, element technologies, and hardware generations.

    PubMed

    Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N

    2017-12-01

    Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands, such simulations may become difficult or even infeasible, especially when considering nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (more reliable), we implement both and compare corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
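    As a minimal illustration of the isotropic baseline mentioned above, the incompressible neo-Hookean strain energy can be evaluated from a deformation gradient. This sketch is illustrative only (the parameter c1 is hypothetical) and is unrelated to the paper's GPGPU implementation.

```python
# Incompressible neo-Hookean strain energy: Psi = c1 * (I1 - 3),
# where I1 = trace(F^T F) is the first invariant of the right
# Cauchy-Green tensor C = F^T F.
def first_invariant(F):
    """I1 = trace(F^T F), i.e. the sum of squared entries of F."""
    return sum(F[i][j] ** 2 for i in range(3) for j in range(3))

def neo_hookean_energy(F, c1=1.0):
    return c1 * (first_invariant(F) - 3.0)

identity = [[1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0]]
# Isochoric uniaxial stretch (det F = 1): lambda, 1/sqrt(lambda), 1/sqrt(lambda)
lam = 1.2
stretch = [[lam, 0.0, 0.0],
           [0.0, lam ** -0.5, 0.0],
           [0.0, 0.0, lam ** -0.5]]
```

    The undeformed state stores no energy, while the volume-preserving stretch stores a small positive energy; anisotropic models such as Gasser-Ogden-Holzapfel add fiber-direction terms on top of an isotropic part like this one.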

  8. Problem-solving deficits in Iranian people with borderline personality disorder.

    PubMed

    Akbari Dehaghi, Ashraf; Kaviani, Hossein; Tamanaeefar, Shima

    2014-01-01

    Interventions for people suffering from borderline personality disorder (BPD), such as dialectical behavior therapy, often include a problem-solving component. However, there is an absence of published studies examining the problem-solving abilities of this client group in Iran. This study compared inpatients and outpatients with BPD and a control group on problem-solving capabilities in an Iranian sample. It was hypothesized that patients with BPD would have more deficiencies in this area. Fifteen patients with BPD were compared to 15 healthy participants. The means-ends problem-solving task (MEPS) was used to measure problem-solving skills in both groups. The BPD group reported less effective strategies in solving problems than the healthy group. These findings provide empirical support for the use of problem-solving interventions with people suffering from BPD, and support the idea that a problem-solving intervention can be efficiently applied either as a stand-alone therapy or in conjunction with other available psychotherapies to treat people with BPD.

  9. Impulsivity as a mediator in the relationship between problem solving and suicidal ideation.

    PubMed

    Gonzalez, Vivian M; Neander, Lucía L

    2018-03-15

    This study examined whether three facets of impulsivity previously shown to be associated with suicidal ideation and attempts (negative urgency, lack of premeditation, and lack of perseverance) help to account for the established association between problem solving deficits and suicidal ideation. Emerging adult college student drinkers with a history of at least passive suicidal ideation (N = 387) completed measures of problem solving, impulsivity, and suicidal ideation. A path analysis was conducted to examine the mediating role of impulsivity variables in the association between problem solving (rational problem solving, positive and negative problem orientation, and avoidance style) and suicidal ideation. Direct and indirect associations through impulsivity, particularly negative urgency, were found between problem solving and severity of suicidal ideation. Interventions aimed at teaching problem solving skills, as well as self-efficacy and optimism for solving life problems, may help to reduce impulsivity and suicidal ideation. © 2018 Wiley Periodicals, Inc.

  10. Improving mathematical problem solving skills through visual media

    NASA Astrophysics Data System (ADS)

    Widodo, S. A.; Darhim; Ikhwanudin, T.

    2018-01-01

    The purpose of this article was to examine the enhancement of students’ mathematical problem-solving skills through visual learning media. The ability to solve mathematical problems is the ability of students to solve the problems they encounter, for example following Polya’s problem-solving model. This preliminary study did not develop a model; it took a conceptual approach, comparing the literature on problem-solving skills and relating it to visual learning media. The results of the study indicated that learning media had not been used appropriately, so the ability to solve mathematical problems was not optimal. The inappropriate use of media was due to instructional media not being adapted to the characteristics of the learners. We suggest developing visual media to increase students’ ability to solve problems.

  11. The Relationship between Students' Problem Posing and Problem Solving Abilities and Beliefs: A Small-Scale Study with Chinese Elementary School Children

    ERIC Educational Resources Information Center

    Limin, Chen; Van Dooren, Wim; Verschaffel, Lieven

    2013-01-01

    The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire,…

  12. A simple smoothness indicator for the WENO scheme with adaptive order

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-01-01

    The fifth-order WENO scheme with adaptive order is competent for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth-order linear reconstruction and three third-order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth-order linear reconstruction is comparable with the sum of those for the three third-order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth-order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. In order to overcome these problems, a simple smoothness indicator for the fifth-order linear reconstruction is proposed in this paper.
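    For context, the third-order smoothness indicators referred to above can be illustrated with the classic Jiang-Shu formulas for the three candidate stencils of standard fifth-order WENO on a uniform mesh. This is a sketch of the conventional indicators, not the new simple indicator proposed in the paper.

```python
def smoothness_indicators(f, i):
    """Jiang-Shu smoothness indicators beta_0..beta_2 for the three
    third-order candidate stencils of fifth-order WENO (uniform mesh).
    Large beta_k means the k-th stencil contains a discontinuity."""
    b0 = (13.0 / 12.0) * (f[i - 2] - 2 * f[i - 1] + f[i]) ** 2 \
         + 0.25 * (f[i - 2] - 4 * f[i - 1] + 3 * f[i]) ** 2
    b1 = (13.0 / 12.0) * (f[i - 1] - 2 * f[i] + f[i + 1]) ** 2 \
         + 0.25 * (f[i - 1] - f[i + 1]) ** 2
    b2 = (13.0 / 12.0) * (f[i] - 2 * f[i + 1] + f[i + 2]) ** 2 \
         + 0.25 * (3 * f[i] - 4 * f[i + 1] + f[i + 2]) ** 2
    return b0, b1, b2

# Smooth (linear) data: all three stencils look equally smooth
smooth = smoothness_indicators([0.0, 1.0, 2.0, 3.0, 4.0], 2)
# Data with a jump between cells 2 and 3: stencils crossing the jump
# get large indicators and are down-weighted by the WENO weights
jump = smoothness_indicators([0.0, 0.0, 0.0, 1.0, 1.0], 2)
```

    The nonlinear WENO weights are built from these indicators (roughly, w_k proportional to 1/(eps + beta_k)^2), which is why their computational cost matters for the scheme as a whole.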

  13. Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws

    NASA Astrophysics Data System (ADS)

    Barré, J.; Bernardin, C.; Chetrite, R.

    2018-02-01

    We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e. an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a non trivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.

  14. Interacting Electrons in Graphene: Fermi Velocity Renormalization and Optical Response

    NASA Astrophysics Data System (ADS)

    Stauber, T.; Parida, P.; Trushin, M.; Ulybyshev, M. V.; Boyda, D. L.; Schliemann, J.

    2017-06-01

    We have developed a Hartree-Fock theory for electrons on a honeycomb lattice aiming to solve a long-standing problem of the Fermi velocity renormalization in graphene. Our model employs no fitting parameters (like an unknown band cutoff) but relies on a topological invariant (crystal structure function) that makes the Hartree-Fock sublattice spinor independent of the electron-electron interaction. Agreement with the experimental data is obtained assuming static self-screening including local field effects. As an application of the model, we derive an explicit expression for the optical conductivity and discuss the renormalization of the Drude weight. The optical conductivity is also obtained via precise quantum Monte Carlo calculations which compares well to our mean-field approach.

  15. Chiral topological phases from artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kaubruegger, Raphael; Pastori, Lorenzo; Budich, Jan Carl

    2018-05-01

    Motivated by recent progress in applying techniques from the field of artificial neural networks (ANNs) to quantum many-body physics, we investigate to what extent the flexibility of ANNs can be used to efficiently study systems that host chiral topological phases such as fractional quantum Hall (FQH) phases. With benchmark examples, we demonstrate that training ANNs of restricted Boltzmann machine type in the framework of variational Monte Carlo can numerically solve FQH problems to good approximation. Furthermore, we show by explicit construction how n -body correlations can be kept at an exact level with ANN wave functions exhibiting polynomial scaling with power n in system size. Using this construction, we analytically represent the paradigmatic Laughlin wave function as an ANN state.
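    The restricted Boltzmann machine ansatz referred to above assigns each spin configuration an unnormalized amplitude after the hidden units are traced out. A minimal sketch, with hypothetical parameter values, is:

```python
from math import cosh, exp

def rbm_amplitude(spins, a, b, W):
    """Unnormalized RBM wave-function amplitude for spins s_i in {+1, -1}:
    psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W[i][j] s_i),
    obtained by summing out binary hidden units h_j in {+1, -1}."""
    visible = exp(sum(ai * si for ai, si in zip(a, spins)))
    hidden = 1.0
    for j, bj in enumerate(b):
        theta = bj + sum(W[i][j] * spins[i] for i in range(len(spins)))
        hidden *= 2.0 * cosh(theta)
    return visible * hidden

# Two visible spins, three hidden units; parameter values are hypothetical
amp = rbm_amplitude([1, -1],
                    a=[0.1, -0.2],
                    b=[0.0, 0.3, -0.1],
                    W=[[0.05, 0.1, 0.0],
                       [0.2, -0.1, 0.3]])
```

    In variational Monte Carlo, the parameters a, b, and W are optimized to minimize the energy of the state whose amplitudes are given by this function; complex-valued parameters (needed for chiral phases) follow the same structure.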

  16. An Investigation of the Effects on Students' Attitudes, Beliefs, and Abilities in Problem Solving and Mathematics after One Year of a Systematic Approach to the Learning of Problem Solving.

    ERIC Educational Resources Information Center

    Higgins, Karen M.

    This study investigated the effects of Oregon's Lane County "Problem Solving in Mathematics" (PSM) materials on middle-school students' attitudes, beliefs, and abilities in problem solving and mathematics. The instructional approach advocated in PSM includes: the direct teaching of five problem-solving skills, weekly challenge problems,…

  17. Student’s scheme in solving mathematics problems

    NASA Astrophysics Data System (ADS)

    Setyaningsih, Nining; Juniati, Dwi; Suwarsono

    2018-03-01

    The purpose of this study was to investigate students’ schemes in solving mathematics problems. Schemes are data structures for representing the concepts stored in memory. In this study, we examined them in mathematics problem solving, especially on the topic of ratio and proportion. A scheme is related to problem solving in the sense that a system is developed in the human mind by acquiring a structure in which problem-solving procedures are integrated with some concepts. The data were collected through interviews and students’ written work. The results of this study reveal the following students’ schemes in solving ratio and proportion problems: (1) the content scheme, where students can describe the selected components of the problem according to their prior knowledge; (2) the formal scheme, where students can construct a mental model based on the components selected from the problem and can use existing schemes to build planning steps and create something that will be used to solve the problem; and (3) the language scheme, where students can identify terms or symbols among the components of the problem. Therefore, when different strategies are used to solve the problems, students’ schemes in solving ratio and proportion problems will also differ.

  18. Factors of Problem-Solving Competency in a Virtual Chemistry Environment: The Role of Metacognitive Knowledge about Strategies

    ERIC Educational Resources Information Center

    Scherer, Ronny; Tiemann, Rudiger

    2012-01-01

    The ability to solve complex scientific problems is regarded as one of the key competencies in science education. Until now, research on problem solving focused on the relationship between analytical and complex problem solving, but rarely took into account the structure of problem-solving processes and metacognitive aspects. This paper,…

  19. Same Old Problem, New Name? Alerting Students to the Nature of the Problem-Solving Process

    ERIC Educational Resources Information Center

    Yerushalmi, Edit; Magen, Esther

    2006-01-01

    Students frequently misconceive the process of problem-solving, expecting the linear process required for solving an exercise, rather than the convoluted search process required to solve a genuine problem. In this paper we present an activity designed to foster in students realization and appreciation of the nature of the problem-solving process,…

  20. The Problem-Solving Process in Physics as Observed When Engineering Students at University Level Work in Groups

    ERIC Educational Resources Information Center

    Gustafsson, Peter; Jonsson, Gunnar; Enghag, Margareta

    2015-01-01

    The problem-solving process is investigated for five groups of students when solving context-rich problems in an introductory physics course included in an engineering programme. Through transcripts of their conversation, the paths in the problem-solving process have been traced and related to a general problem-solving model. All groups exhibit…

  1. Learning algebra on screen and on paper: The effect of using a digital tool on students' understanding

    NASA Astrophysics Data System (ADS)

    Jupri, Al; Drijvers, Paul; van den Heuvel-Panhuizen, Marja

    2016-02-01

    The use of digital tools in algebra education is expected to contribute not only to mastering skills but also to acquiring conceptual understanding. The question is how digital tools affect students’ thinking and understanding. This paper presents an analysis of data from one group of three grade seven students (12-13 years old) on the use of a digital tool for algebra, in particular the Cover-up applet for solving equations. This case study was part of a larger teaching experiment on initial algebra enriched with digital technology, which aimed to improve students’ conceptual understanding and skills in solving equations in one variable. The qualitative analysis of a video observation and of digital and written work showed that the use of the applet affects student thinking in terms of the strategies students use while dealing with the equations. We conclude that the effects of the digital tool can be traced in students’ problem-solving strategies in the paper-and-pencil environment, which are similar to the strategies they use while working with the digital tool. For future research, we recommend using specific theoretical lenses, such as the theory of instrumental genesis and the onto-semiotic approach, to reveal more explicit relationships between students’ conceptual understanding and the use of a digital tool.
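    The cover-up strategy embodied by the applet can be sketched as a simple recursive procedure. The nested-tuple representation below is our own illustrative assumption, not the applet's implementation:

```python
def cover_up(expr, value):
    """Solve expr == value by repeatedly 'covering up' the subexpression
    that contains the unknown, as in the cover-up method for equations.

    expr is either the string 'x' or a tuple (op, subexpr, constant)
    with op in {'add', 'mul'}, meaning subexpr + constant or subexpr * constant.
    """
    if expr == 'x':
        return value
    op, sub, const = expr
    if op == 'add':                  # sub + const = value  =>  sub = value - const
        return cover_up(sub, value - const)
    if op == 'mul':                  # sub * const = value  =>  sub = value / const
        return cover_up(sub, value / const)
    raise ValueError('unknown operation: %r' % (op,))

# 3(x + 2) = 15: cover 3(...) to get x + 2 = 5, then x = 3
root = ('mul', ('add', 'x', 2), 3)
x = cover_up(root, 15)               # 3.0
```

    Each recursive step mirrors what a student does on paper: hide the box containing the unknown, read off its value, and descend one layer.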

  2. Social Problem Solving and Depressive Symptoms Over Time: A Randomized Clinical Trial of Cognitive Behavioral Analysis System of Psychotherapy, Brief Supportive Psychotherapy, and Pharmacotherapy

    PubMed Central

    Klein, Daniel N.; Leon, Andrew C.; Li, Chunshan; D’Zurilla, Thomas J.; Black, Sarah R.; Vivian, Dina; Dowling, Frank; Arnow, Bruce A.; Manber, Rachel; Markowitz, John C.; Kocsis, James H.

    2011-01-01

    Objective Depression is associated with poor social problem-solving, and psychotherapies that focus on problem-solving skills are efficacious in treating depression. We examined the associations between treatment, social problem solving, and depression in a randomized clinical trial testing the efficacy of psychotherapy augmentation for chronically depressed patients who failed to fully respond to an initial trial of pharmacotherapy (Kocsis et al., 2009). Method Participants with chronic depression (n = 491) received Cognitive Behavioral Analysis System of Psychotherapy (CBASP), which emphasizes interpersonal problem-solving, plus medication; Brief Supportive Psychotherapy (BSP) plus medication; or medication alone for 12 weeks. Results CBASP plus pharmacotherapy was associated with significantly greater improvement in social problem solving than BSP plus pharmacotherapy, and a trend for greater improvement in problem solving than pharmacotherapy alone. In addition, change in social problem solving predicted subsequent change in depressive symptoms over time. However, the magnitude of the associations between changes in social problem solving and subsequent depressive symptoms did not differ across treatment conditions. Conclusions It does not appear that improved social problem solving is a mechanism that uniquely distinguishes CBASP from other treatment approaches. PMID:21500885

  3. Implementing thinking aloud pair and Pólya problem solving strategies in fractions

    NASA Astrophysics Data System (ADS)

    Simpol, N. S. H.; Shahrill, M.; Li, H.-C.; Prahmana, R. C. I.

    2017-12-01

    This study implemented two pedagogical strategies, the Thinking Aloud Pair Problem Solving and Pólya’s Problem Solving, to support students’ learning of fractions. The participants were 51 students (ages 11-13) from two Year 7 classes in a government secondary school in Brunei Darussalam. A mixed-methods design was employed, with data collected from the pre- and post-tests, a problem-solving behaviour questionnaire, and interviews. The study aimed to explore whether there were differences in the students’ problem-solving behaviour before and after the implementation of the problem-solving strategies. Results from the Wilcoxon signed-rank test revealed a significant difference in the test results regarding student problem-solving behaviour, z = -3.68, p < .001, with a higher mean score for the post-test (M = 95.5, SD = 13.8) than for the pre-test (M = 88.9, SD = 15.2). This implies that there was improvement in the students’ problem-solving performance from the pre-test to the post-test. Results from the questionnaire showed that more than half of the students increased their scores in all four stages of Pólya’s problem-solving strategy, which provided further evidence of the students’ improvement in problem solving.
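    As a rough illustration of the Wilcoxon signed-rank statistic reported above, here is a minimal normal-approximation implementation; the score lists in the example are hypothetical, not the study's data, and the sketch omits the zero/tie variance corrections a statistics package would apply:

```python
import math

def wilcoxon_signed_rank_z(pre, post):
    """Normal-approximation z for the Wilcoxon signed-rank test."""
    # paired differences, discarding zeros as the classic procedure does
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # rank the absolute differences, averaging ranks within tie groups
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1              # average 1-based rank for the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean_w = n * (n + 1) / 4
    sd_w = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (w_plus - mean_w) / sd_w

z = wilcoxon_signed_rank_z([1, 2, 3], [2, 4, 6])   # ≈ 1.60 on this toy pair
```

    A large negative or positive z (such as the study's -3.68) indicates a systematic shift between the paired pre- and post-test scores.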

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems in which the governing temperature equations in different material domains are time-stepped in an implicit manner, but the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined by optimizing a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e., for a wide range of material properties, grid spacings, and time steps) the CHAMP algorithm is stable and second-order accurate with no sub-time-step iterations (i.e., a single implicit solve of the temperature equation in each domain). In extreme cases (e.g., very fine grids with very large time steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
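    To make the Robin interface idea concrete, here is a toy, steady-state analogue (emphatically not the CHAMP scheme itself): two 1-D subdomains of the Laplace equation, each solved implicitly with a tridiagonal solve, coupled only through explicitly exchanged Robin data. The grid size and the Robin weight p are hand-picked assumptions for this sketch:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main-, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_left(n, h, p, g):
    """T'' = 0 on [0, 0.5]; T(0) = 0; Robin data g at the interface node n:
       (T_n - T_{n-1})/h + p*T_n = g."""
    a, b, c, d = [[0.0] * (n + 1) for _ in range(4)]
    b[0] = 1.0                                 # Dirichlet T_0 = 0
    for i in range(1, n):
        a[i], b[i], c[i] = 1.0, -2.0, 1.0      # discrete Laplacian
    a[n], b[n], d[n] = -1.0 / h, 1.0 / h + p, g
    return thomas(a, b, c, d)

def solve_right(n, h, p, g):
    """T'' = 0 on [0.5, 1]; Robin data g at the interface node 0; T(1) = 1:
       -(T_1 - T_0)/h + p*T_0 = g."""
    a, b, c, d = [[0.0] * (n + 1) for _ in range(4)]
    b[0], c[0], d[0] = 1.0 / h + p, -1.0 / h, g
    for i in range(1, n):
        a[i], b[i], c[i] = 1.0, -2.0, 1.0
    b[n], d[n] = 1.0, 1.0                      # Dirichlet T(1) = 1
    return thomas(a, b, c, d)

# Partitioned iteration: each subdomain solve is implicit, but only the
# Robin data g1, g2 cross the interface, and they do so explicitly.
n, p = 8, 2.0
h = 0.5 / n
T = [0.0] * (n + 1)    # left-domain temperatures
S = [0.0] * (n + 1)    # right-domain temperatures
for _ in range(20):
    g1 = (S[1] - S[0]) / h + p * S[0]          # data taken from the right domain
    T = solve_left(n, h, p, g1)
    g2 = -(T[n] - T[n - 1]) / h + p * T[n]     # data taken from the left domain
    S = solve_right(n, h, p, g2)
# The iterates converge to the exact linear profile T(x) = x.
```

    The choice of the Robin weight governs how fast the exchanged data converge, which is the role played in CHAMP by the weights obtained from the local stability analysis.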

  5. Genetic influences on insight problem solving: the role of catechol-O-methyltransferase (COMT) gene polymorphisms

    PubMed Central

    Jiang, Weili; Shang, Siyuan; Su, Yanjie

    2015-01-01

    People may experience an “aha” moment, when suddenly realizing a solution of a puzzling problem. This experience is called insight problem solving. Several findings suggest that catecholamine-related genes may contribute to insight problem solving, among which the catechol-O-methyltransferase (COMT) gene is the most promising candidate. The current study examined 753 healthy individuals to determine the associations between 7 candidate single nucleotide polymorphisms on the COMT gene and insight problem-solving performance, while considering gender differences. The results showed that individuals carrying A allele of rs4680 or T allele of rs4633 scored significantly higher on insight problem-solving tasks, and the COMT gene rs5993883 combined with gender interacted with correct solutions of insight problems, specifically showing that this gene only influenced insight problem-solving performance in males. This study presents the first investigation of the genetic impact on insight problem solving and provides evidence that highlights the role that the COMT gene plays in insight problem solving. PMID:26528222

  7. Understanding Undergraduates’ Problem-Solving Processes †

    PubMed Central

    Nehm, Ross H.

    2010-01-01

    Fostering effective problem-solving skills is one of the most longstanding and widely agreed upon goals of biology education. Nevertheless, undergraduate biology educators have yet to leverage many major findings about problem-solving processes from the educational and cognitive science research literatures. This article highlights key facets of problem-solving processes and introduces methodologies that may be used to reveal how undergraduate students perceive and represent biological problems. Overall, successful problem-solving entails a keen sensitivity to problem contexts, disciplined internal representation or modeling of the problem, and the principled management and deployment of cognitive resources. Context recognition tasks, problem representation practice, and cognitive resource management receive remarkably little emphasis in the biology curriculum, despite their central roles in problem-solving success. PMID:23653710

  8. Thinking Process of Naive Problem Solvers to Solve Mathematical Problems

    ERIC Educational Resources Information Center

    Mairing, Jackson Pasini

    2017-01-01

    Solving problems is not only a goal of mathematical learning. Students acquire ways of thinking, habits of persistence and curiosity, and confidence in unfamiliar situations by learning to solve problems. In fact, there were students who had difficulty in solving problems. The students were naive problem solvers. This research aimed to describe…

  9. Teaching Problem Solving without Modeling through "Thinking Aloud Pair Problem Solving."

    ERIC Educational Resources Information Center

    Pestel, Beverly C.

    1993-01-01

    Reviews research relevant to the problem of unsatisfactory student problem-solving abilities and suggests a teaching strategy that addresses the issue. Author explains how she uses teaching aloud problem solving (TAPS) in college chemistry and presents evaluation data. Among the findings are that the TAPS class got fewer problems completely right,…

  10. Social Problem Solving, Conduct Problems, and Callous-Unemotional Traits in Children

    ERIC Educational Resources Information Center

    Waschbusch, Daniel A.; Walsh, Trudi M.; Andrade, Brendan F.; King, Sara; Carrey, Normand J.

    2007-01-01

    This study examined the association between social problem solving, conduct problems (CP), and callous-unemotional (CU) traits in elementary age children. Participants were 53 children (40 boys and 13 girls) aged 7-12 years. Social problem solving was evaluated using the Social Problem Solving Test-Revised, which requires children to produce…

  11. Personality, problem solving, and adolescent substance use.

    PubMed

    Jaffee, William B; D'Zurilla, Thomas J

    2009-03-01

    The major aim of this study was to examine the role of social problem solving in the relationship between personality and substance use in adolescents. Although a number of studies have identified a relationship between personality and substance use, the precise mechanism by which this occurs is not clear. We hypothesized that problem-solving skills could be one such mechanism. More specifically, we sought to determine whether problem solving mediates, moderates, or both mediates and moderates the relationship between different personality traits and substance use. Three hundred and seven adolescents were administered the Substance Use Profile Scale, the Social Problem-Solving Inventory-Revised, and the Personality Experiences Inventory to assess substance use, social problem-solving ability, and personality, respectively. Results showed that the dimension of rational problem solving (i.e., effective problem-solving skills) significantly mediated the relationship between hopelessness and lifetime alcohol and marijuana use. The theoretical and clinical implications of these results are discussed.

  12. Social problem-solving in Chinese baccalaureate nursing students.

    PubMed

    Fang, Jinbo; Luo, Ying; Li, Yanhua; Huang, Wenxia

    2016-11-01

    To describe social problem solving in Chinese baccalaureate nursing students. A descriptive cross-sectional study was conducted with a cluster sample of 681 Chinese baccalaureate nursing students. The Chinese version of the Social Problem-Solving scale was used. Descriptive analyses, the independent t-test, and one-way analysis of variance were applied to analyze the data. The final-year nursing students presented the highest scores for positive social problem-solving skills. Students with experience of self-directed and problem-based learning presented significantly higher scores on the Positive Problem Orientation subscale. The group with critical-thinking training experience, however, displayed higher negative problem-solving scores than the group without such experience. Social problem-solving abilities varied with teaching-learning strategies. Self-directed and problem-based learning may be recommended as an effective way to improve social problem-solving ability. © 2016 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  13. Problem Solving and Chemical Equilibrium: Successful versus Unsuccessful Performance.

    ERIC Educational Resources Information Center

    Camacho, Moises; Good, Ron

    1989-01-01

    Describes the problem-solving behaviors of experts and novices engaged in solving seven chemical equilibrium problems. Lists 27 behavioral tendencies of successful and unsuccessful problem solvers. Discusses several implications for a problem solving theory, think-aloud techniques, adequacy of the chemistry domain, and chemistry instruction.…

  14. Solutions and debugging for data consistency in multiprocessors with noncoherent caches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.

    1995-02-01

    We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in the local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware-controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we concentrate on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for the IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.
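    As a sketch of the kind of detection the abstract describes (and not the POWER/4 tooling itself), one can scan a trace of write records and flag cache blocks that two or more processors write at mutually disjoint byte offsets, i.e. blocks that are shared as blocks but not as data:

```python
from collections import defaultdict

def find_false_sharing(write_trace, block_size=64):
    """Given (cpu, byte_address) write records, return the cache-block
    numbers written by two or more CPUs at mutually disjoint offsets:
    those blocks are false-sharing suspects (shared block, unshared bytes)."""
    offsets = defaultdict(lambda: defaultdict(set))   # block -> cpu -> {offsets}
    for cpu, addr in write_trace:
        offsets[addr // block_size][cpu].add(addr % block_size)
    suspects = []
    for block, per_cpu in offsets.items():
        if len(per_cpu) < 2:
            continue                                  # one writer: no sharing at all
        sets = list(per_cpu.values())
        if all(s.isdisjoint(t) for i, s in enumerate(sets) for t in sets[i + 1:]):
            suspects.append(block)                    # disjoint bytes: false sharing
    return sorted(suspects)

# CPUs 0 and 1 update different bytes of block 0 -> flagged;
# both write byte 128 (block 2, offset 0) -> true sharing, not flagged.
trace = [(0, 0), (0, 4), (1, 8), (0, 128), (1, 128)]
print(find_false_sharing(trace))   # [0]
```

    A real tool would of course work from hardware or instrumentation traces and account for reads, synchronization, and timing; this sketch only captures the block-versus-byte distinction at the heart of the problem.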

  15. Worry and problem-solving skills and beliefs in primary school children.

    PubMed

    Parkinson, Monika; Creswell, Cathy

    2011-03-01

    To examine the association between worry and problem-solving skills and beliefs (confidence and perceived control) in primary school children. Children (8-11 years) were screened using the Penn State Worry Questionnaire for Children. High (N= 27) and low (N= 30) scorers completed measures of anxiety, problem-solving skills (generating alternative solutions to problems, planfulness, and effectiveness of solutions) and problem-solving beliefs (confidence and perceived control). High and low worry groups differed significantly on measures of anxiety and problem-solving beliefs (confidence and control) but not on problem-solving skills. Consistent with findings with adults, worry in children was associated with cognitive distortions, not skills deficits. Interventions for worried children may benefit from a focus on increasing positive problem-solving beliefs. ©2010 The British Psychological Society.

  16. The effectiveness of problem-based learning on students’ problem solving ability in vector analysis course

    NASA Astrophysics Data System (ADS)

    Mushlihuddin, R.; Nurafifah; Irvan

    2018-01-01

    Students’ low ability in mathematical problem solving points to a less effective learning process in the classroom. Effective learning is learning that improves students’ mathematical skills, one of which is problem-solving ability. Problem-solving ability consists of several stages: understanding the problem, planning the solution, solving the problem as planned, and re-examining the procedure and the outcome. The purpose of this research was to determine: (1) whether the PBL model influences the improvement of students’ mathematical problem-solving ability in a vector analysis course; and (2) whether the PBL model is effective in improving students’ mathematical problem-solving skills in vector analysis courses. This research was a quasi-experiment. The data analysis proceeded from descriptive statistics through a normality prerequisite test to hypothesis testing using the ANCOVA test and the gain test. The results showed that: (1) the PBL model influenced the improvement of students’ mathematical problem-solving abilities in vector analysis courses; and (2) the PBL model was effective in improving students’ problem-solving skills in vector analysis courses, with a medium gain category.
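    The abstract does not spell out its gain test; a common choice in education research is Hake's normalized gain, sketched below under that assumption (the scores in the example are hypothetical):

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: the fraction of the possible improvement
    (max_score - pre) that was actually realized (post - pre)."""
    return (post - pre) / (max_score - pre)

# A class averaging 40 before and 70 after realizes half the possible gain:
g = normalized_gain(40.0, 70.0)   # 0.5, a "medium" gain (0.3 <= g < 0.7)
```

    Under Hake's commonly used cut-offs, gains below 0.3 are "low" and gains above 0.7 are "high", which is presumably the sense of the "medium category" reported above.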

  17. Effects of Training in Problem Solving on the Problem-Solving Abilities of Gifted Fourth Graders: A Comparison of the Future Problem Solving and Instrumental Enrichment Programs.

    ERIC Educational Resources Information Center

    Dufner, Hillrey A.; Alexander, Patricia A.

    The differential effects of two different types of problem-solving training on the problem-solving abilities of gifted fourth graders were studied. Two successive classes of gifted fourth graders from Weslaco Independent School District (Texas) were pretested with the Coloured Progressive Matrices (CPM) and Thinking Creatively With Pictures…

  18. Social problem-solving among adolescents treated for depression.

    PubMed

    Becker-Weidman, Emily G; Jacobs, Rachel H; Reinecke, Mark A; Silva, Susan G; March, John S

    2010-01-01

    Studies suggest that deficits in social problem-solving may be associated with increased risk of depression and suicidality in children and adolescents. It is unclear, however, which specific dimensions of social problem-solving are related to depression and suicidality among youth. Moreover, rational problem-solving strategies and problem-solving motivation may moderate or predict change in depression and suicidality among children and adolescents receiving treatment. The effects of social problem-solving on acute treatment outcomes were explored in a randomized controlled trial of 439 clinically depressed adolescents enrolled in the Treatment for Adolescents with Depression Study (TADS). Measures included the Children's Depression Rating Scale-Revised (CDRS-R), the Suicidal Ideation Questionnaire--Grades 7-9 (SIQ-Jr), and the Social Problem-Solving Inventory-Revised (SPSI-R). A random coefficients regression model was conducted to examine main and interaction effects of treatment and SPSI-R subscale scores on outcomes during the 12-week acute treatment stage. Negative problem orientation, positive problem orientation, and avoidant problem-solving style were non-specific predictors of depression severity. In terms of suicidality, avoidant problem-solving style and impulsiveness/carelessness style were predictors, whereas negative problem orientation and positive problem orientation were moderators of treatment outcome. Implications of these findings, limitations, and directions for future research are discussed. Copyright 2009 Elsevier Ltd. All rights reserved.

  19. Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment

    PubMed Central

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y. B.; Zhu, X. W., E-mail: xiaowuzhu1026@znufe.edu.cn; Dai, H. H.

    Though widely used in modelling nano- and micro-structures, Eringen’s differential model shows some inconsistencies, and recent studies have demonstrated its differences from the integral model, which implies the necessity of using the latter. In this paper, an analytical study is undertaken to analyze the static bending of nonlocal Euler-Bernoulli beams using Eringen’s two-phase local/nonlocal model. Firstly, a reduction method is proved rigorously, with which the integral equation in consideration can be reduced to a differential equation with mixed boundary-value conditions. Then, the static bending problem is formulated, and four types of boundary conditions with various loadings are considered. By solving the corresponding differential equations, exact solutions are obtained explicitly in all of the cases, especially for the paradoxical cantilever beam problem. Finally, asymptotic analysis of the exact solutions reveals clearly that, unlike the differential model, the integral model adopted herein has a consistent softening effect. Comparisons are also made with existing analytical and numerical results, which further show the advantages of the analytical results obtained. Additionally, it seems that the once-controversial nonlocal bar problem in the literature is well resolved by the reduction method.
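    For orientation, a common statement of the two-phase local/nonlocal constitutive law for bending is sketched below; the normalization and symbols are our assumption from the general literature and should be checked against the paper itself:

```latex
% Eringen two-phase bending law: a local part plus a nonlocal part with an
% exponential kernel of internal length scale \kappa, mixed by volume
% fractions \xi_1 and \xi_2:
M(x) \;=\; EI\,\xi_1\, w''(x)
      \;+\; EI\,\frac{\xi_2}{2\kappa} \int_0^L e^{-|x-s|/\kappa}\, w''(s)\, \mathrm{d}s,
\qquad \xi_1 + \xi_2 = 1 .
```

    The reduction method referred to in the abstract exploits the fact that this exponential kernel is the Green's function of a second-order differential operator, so the integral relation can be traded for a differential equation plus extra (mixed) boundary conditions.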

  1. A unified motion planning approach for redundant and non-redundant manipulators with actuator constraints. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Chung, Ching-Luan

    1990-01-01

    The term trajectory planning refers to the process of determining the time history of each joint variable corresponding to a specified trajectory of the end effector. The trajectory planning problem has traditionally been solved as a purely kinematic problem. The drawback is that there is no guarantee that the actuators can deliver the effort necessary to track the planned trajectory. To overcome this limitation, a motion planning approach was developed that addresses the kinematics, dynamics, and feedback control of a manipulator in a unified framework. Actuator constraints are taken into account explicitly and a priori in the synthesis of the feedback control law. Therefore, the result of applying the motion planning approach described here is not only the determination of the entire set of joint trajectories but also a complete specification of the feedback control strategy that would yield these joint trajectories without violating actuator constraints. The effectiveness of the unified motion planning approach is demonstrated on two problems of practical interest in manipulator robotics.

  2. Estimation of positive semidefinite correlation matrices by using convex quadratic semidefinite programming.

    PubMed

    Fushiki, Tadayoshi

    2009-07-01

    The correlation matrix is a fundamental statistic that is used in many fields. For example, GroupLens, a collaborative filtering system, uses the correlation between users for predictive purposes. Since the correlation is a natural similarity measure between users, the correlation matrix may be used as the Gram matrix in kernel methods. However, the estimated correlation matrix sometimes has a serious defect: although the correlation matrix is originally positive semidefinite, the estimated one may not be positive semidefinite when not all ratings are observed. To obtain a positive semidefinite correlation matrix, the nearest correlation matrix problem has recently been studied in the fields of numerical analysis and optimization. However, statistical properties are not explicitly used in such studies. To obtain a positive semidefinite correlation matrix, we assume an approximate model. By using the model, an estimate is obtained as the optimal point of an optimization problem formulated with information on the variances of the estimated correlation coefficients. The problem is solved by a convex quadratic semidefinite program. A penalized likelihood approach is also examined. The MovieLens data set is used to test our approach.
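    A minimal sketch of the underlying projection idea (not the paper's quadratic SDP formulation): alternate between projecting onto the positive-semidefinite cone, here via a small pure-Python Jacobi eigensolver, and restoring the unit diagonal. Plain alternation yields a nearby valid correlation matrix; recovering the *nearest* one additionally requires Dykstra's correction (Higham's method) or, as in the paper, a convex quadratic semidefinite program.

```python
import math

def jacobi_eig(A, sweeps=100):
    """Eigenvalues/eigenvectors of a symmetric matrix via cyclic Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = sum(A[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
        if off < 1e-24:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                th = 0.5 * math.atan2(2 * A[p][q], A[p][p] - A[q][q])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):                 # A <- J^T A  (rows p, q)
                    Apk, Aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c * Apk + s * Aqk, -s * Apk + c * Aqk
                for k in range(n):                 # A <- A J    (columns p, q)
                    Akp, Akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c * Akp + s * Akq, -s * Akp + c * Akq
                for k in range(n):                 # accumulate eigenvectors
                    Vkp, Vkq = V[k][p], V[k][q]
                    V[k][p], V[k][q] = c * Vkp + s * Vkq, -s * Vkp + c * Vkq
    return [A[i][i] for i in range(n)], V

def near_correlation(A, iters=60):
    """Alternating projections: PSD cone <-> symmetric matrices with unit diagonal."""
    n = len(A)
    X = [row[:] for row in A]
    for _ in range(iters):
        lam, V = jacobi_eig(X)
        lam = [max(l, 0.0) for l in lam]           # project onto the PSD cone
        X = [[sum(V[i][k] * lam[k] * V[j][k] for k in range(n))
              for j in range(n)] for i in range(n)]
        for i in range(n):                         # restore the unit diagonal
            X[i][i] = 1.0
    return X

# An "impossible" estimate with correlation 1.2 is pulled back to a valid one:
X = near_correlation([[1.0, 1.2], [1.2, 1.0]])     # off-diagonal shrinks toward 1.0
```

    On this 2-by-2 input the indefinite direction is clipped on every pass, so the off-diagonal excess over 1.0 halves each iteration; for larger matrices the same two projections are applied unchanged.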

  3. Disciplinary Foundations for Solving Interdisciplinary Scientific Problems

    ERIC Educational Resources Information Center

    Zhang, Dongmei; Shen, Ji

    2015-01-01

    Problem-solving has been one of the major strands in science education research. But much of the problem-solving research has been conducted on discipline-based contexts; little research has been done on how students, especially individuals, solve interdisciplinary problems. To understand how individuals reason about interdisciplinary problems, we…

  4. Engineering students' experiences and perceptions of workplace problem solving

    NASA Astrophysics Data System (ADS)

    Pan, Rui

    In this study, I interviewed 22 engineering Co-Op students about their workplace problem solving experiences and reflections and explored: 1) Of Co-Op students who experienced workplace problem solving, what are the different ways in which students experience workplace problem solving? 2) How do students perceive a) the differences between workplace problem solving and classroom problem solving and b) in what areas are they prepared by their college education to solve workplace problems? To answer my first research question, I analyzed data through the lens of phenomenography and I conducted thematic analysis to answer my second research question. The results of this study have implications for engineering education and engineering practice. Specifically, the results reveal the different ways students experience workplace problem solving, which provide engineering educators and practicing engineers a better understanding of the nature of workplace engineering. In addition, the results indicate that there is still a gap between classroom engineering and workplace engineering. For engineering educators who aspire to prepare students to be future engineers, it is imperative to design problem solving experiences that can better prepare students with workplace competency.

  5. Problem-Solving Deficits in Iranian People with Borderline Personality Disorder

    PubMed Central

    Akbari Dehaghi, Ashraf; Kaviani, Hossein; Tamanaeefar, Shima

    2014-01-01

Objective: Interventions for people suffering from borderline personality disorder (BPD), such as dialectical behavior therapy, often include a problem-solving component. However, there is an absence of published studies examining the problem-solving abilities of this client group in Iran. The study compared inpatients and outpatients with BPD and a control group on problem-solving capabilities in an Iranian sample. It was hypothesized that patients with BPD would have more deficiencies in this area. Methods: Fifteen patients with BPD were compared to 15 healthy participants. The means-ends problem-solving task (MEPS) was used to measure problem-solving skills in both groups. Results: The BPD group reported less effective strategies in solving problems than the healthy group did. These results provide empirical support for the use of problem-solving interventions with people suffering from BPD. Conclusions: The findings supported the idea that a problem-solving intervention can be efficiently applied either as a stand-alone therapy or in conjunction with other available psychotherapies to treat people with BPD. PMID:25798169

  6. Enhancing memory and imagination improves problem solving among individuals with depression.

    PubMed

    McFarland, Craig P; Primosch, Mark; Maxson, Chelsey M; Stewart, Brandon T

    2017-08-01

    Recent work has revealed links between memory, imagination, and problem solving, and suggests that increasing access to detailed memories can lead to improved imagination and problem-solving performance. Depression is often associated with overgeneral memory and imagination, along with problem-solving deficits. In this study, we tested the hypothesis that an interview designed to elicit detailed recollections would enhance imagination and problem solving among both depressed and nondepressed participants. In a within-subjects design, participants completed a control interview or an episodic specificity induction prior to completing memory, imagination, and problem-solving tasks. Results revealed that compared to the control interview, the episodic specificity induction fostered increased detail generation in memory and imagination and more relevant steps on the problem-solving task among depressed and nondepressed participants. This study builds on previous work by demonstrating that a brief interview can enhance problem solving among individuals with depression and supports the notion that episodic memory plays a key role in problem solving. It should be noted, however, that the results of the interview are relatively short-lived.

  7. Measuring Family Problem Solving: The Family Problem Solving Diary.

    ERIC Educational Resources Information Center

    Kieren, Dianne K.

    The development and use of the family problem-solving diary are described. The diary is one of several indicators and measures of family problem-solving behavior. It provides a record of each person's perception of day-to-day family problems (what the problem concerns, what happened, who got involved, what those involved did, how the problem…

  8. Goal specificity and knowledge acquisition in statistics problem solving: evidence for attentional focus.

    PubMed

    Trumpower, David L; Goldsmith, Timothy E; Guynn, Melissa J

    2004-12-01

    Solving training problems with nonspecific goals (NG; i.e., solving for all possible unknown values) often results in better transfer than solving training problems with standard goals (SG; i.e., solving for one particular unknown value). In this study, we evaluated an attentional focus explanation of the goal specificity effect. According to the attentional focus view, solving NG problems causes attention to be directed to local relations among successive problem states, whereas solving SG problems causes attention to be directed to relations between the various problem states and the goal state. Attention to the former is thought to enhance structural knowledge about the problem domain and thus promote transfer. Results supported this view because structurally different transfer problems were solved faster following NG training than following SG training. Moreover, structural knowledge representations revealed more links depicting local relations following NG training and more links to the training goal following SG training. As predicted, these effects were obtained only by domain novices.

  9. DRACO development for 3D simulations

    NASA Astrophysics Data System (ADS)

    Fatenejad, Milad; Moses, Gregory

    2006-10-01

The DRACO (r-z) Lagrangian radiation-hydrodynamics laser fusion simulation code is being extended to model 3D hydrodynamics in (x-y-z) coordinates with hexahedral cells on a structured grid. The equation of motion is solved with a Lagrangian update with optional rezoning. The fluid equations are solved using an explicit scheme based on (Schulz, 1964), while the SALE-3D algorithm (Amsden, 1981) is used as a template for computing cell volumes and other quantities. A second-order rezoner has been added which uses linear interpolation of the underlying continuous functions to preserve accuracy (Van Leer, 1976). Artificial restoring force terms and smoothing algorithms are used to avoid grid distortion in high-aspect-ratio cells. These include alternate node couplers along with a rotational restoring force based on the Tensor Code (Maenchen, 1964). Electron and ion thermal conduction is modeled using an extension of Kershaw's method (Kershaw, 1981) to 3D geometry. Test problem simulations will be presented to demonstrate the applicability of this new version of DRACO to the study of fluid instabilities in three dimensions.
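    The kind of explicit Lagrangian update with artificial viscosity cited in this abstract can be sketched in one spatial dimension. The following is a generic toy (staggered grid, von Neumann-Richtmyer viscosity), not DRACO's actual 3D scheme; the function name and parameters are hypothetical.

    ```python
    import numpy as np

    def lagrangian_step(x, u, rho, e, dt, gamma=5.0 / 3.0, cq=2.0):
        """One explicit 1D Lagrangian hydro step on a staggered grid:
        velocities u live on nodes x, density rho and internal energy e in cells."""
        m = rho * np.diff(x)                       # cell masses (conserved)
        p = (gamma - 1.0) * rho * e                # ideal-gas pressure per cell
        # von Neumann-Richtmyer artificial viscosity, active only in compression
        du = np.diff(u)
        q = np.where(du < 0.0, cq * rho * du**2, 0.0)
        ptot = p + q
        # node masses: half of each adjacent cell (interior nodes only)
        mn = np.zeros_like(u)
        mn[1:-1] = 0.5 * (m[:-1] + m[1:])
        # explicit momentum update from the pressure gradient; boundary nodes fixed
        un = u.copy()
        un[1:-1] = u[1:-1] - dt * (ptot[1:] - ptot[:-1]) / mn[1:-1]
        # move nodes with the time-centered velocity: the Lagrangian update
        xn = x + dt * 0.5 * (u + un)
        dV = np.diff(xn) - np.diff(x)              # cell volume change
        rhon = m / np.diff(xn)                     # density from conserved mass
        en = e - ptot * dV / m                     # pdV internal-energy update
        return xn, un, rhon, en
    ```

    Mass is conserved by construction (cell masses `m` never change); a uniform gas at rest is an exact steady state of this step.
    
    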

  10. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
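    A semismooth Newton iteration for a box-constrained variational inequality of the kind described can be illustrated on a tiny dense problem. This is a minimal sketch of the standard projection reformulation, not the paper's parallel, preconditioned algorithm; the function name and the toy data are my own.

    ```python
    import numpy as np

    def semismooth_newton_box(F, J, x0, lo=0.0, hi=1.0, tol=1e-10, maxit=50):
        """Solve the box-constrained VI: find lo <= x <= hi with
        (y - x)^T F(x) >= 0 for all feasible y, via the equivalent
        nonsmooth equation  Phi(x) = x - clip(x - F(x), lo, hi) = 0."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(maxit):
            z = x - F(x)
            phi = x - np.clip(z, lo, hi)
            if np.linalg.norm(phi) < tol:
                break
            # Generalized Jacobian of Phi: rows of F'(x) where the clip is
            # inactive, identity rows where a bound is active.
            inactive = (z > lo) & (z < hi)
            G = np.eye(x.size)
            G[inactive] = J(x)[inactive]
            x -= np.linalg.solve(G, phi)
        return x

    # Toy saturation-like problem: F(x) = A x - b with bounds [0, 1].
    # The unconstrained solution (5/3, 1/3) violates the upper bound,
    # so the VI solution sits on the constraints at (1, 0).
    A = np.array([[2.0, -1.0], [-1.0, 2.0]])
    b = np.array([3.0, -1.0])
    x = semismooth_newton_box(lambda v: A @ v - b, lambda v: A, np.zeros(2))
    ```

    Enforcing the bounds inside the Newton solve, rather than clipping afterwards, is what keeps the computed saturations physically feasible at every step.
    
    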

  11. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes

    PubMed Central

    Wade, Shari L.; Cassedy, Amy E.; Fulks, Lauren E.; Taylor, H. Gerry; Stancin, Terry; Kirkwood, Michael W.; Yeates, Keith O.; Kurowski, Brad G.

    2017-01-01

Objective To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Design Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Setting Four children’s hospitals and 1 general hospital, with level 1 trauma units. Participants Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Main Outcome Measures Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. Results The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23–.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Conclusions Problem-solving after TBI differs from normative samples and is associated with functional impairments. 
The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes. PMID:28389109

  12. Problem-Solving After Traumatic Brain Injury in Adolescence: Associations With Functional Outcomes.

    PubMed

    Wade, Shari L; Cassedy, Amy E; Fulks, Lauren E; Taylor, H Gerry; Stancin, Terry; Kirkwood, Michael W; Yeates, Keith O; Kurowski, Brad G

    2017-08-01

To examine the association of problem-solving with functioning in youth with traumatic brain injury (TBI). Cross-sectional evaluation of pretreatment data from a randomized controlled trial. Four children's hospitals and 1 general hospital, with level 1 trauma units. Youth, ages 11 to 18 years, who sustained moderate or severe TBI in the last 18 months (N=153). Problem-solving skills were assessed using the Social Problem-Solving Inventory (SPSI) and the Dodge Social Information Processing Short Stories. Everyday functioning was assessed based on a structured clinical interview using the Child and Adolescent Functional Assessment Scale (CAFAS) and via adolescent ratings on the Youth Self Report (YSR). Correlations and multiple regression analyses were used to examine associations among measures. The TBI group endorsed lower levels of maladaptive problem-solving (negative problem orientation, careless/impulsive responding, and avoidant style) and lower levels of rational problem-solving, resulting in higher total problem-solving scores for the TBI group compared with a normative sample (P<.001). Dodge Social Information Processing Short Stories dimensions were correlated (r=.23-.37) with SPSI subscales in the anticipated direction. Although both maladaptive (P<.001) and adaptive (P=.006) problem-solving composites were associated with overall functioning on the CAFAS, only maladaptive problem-solving (P<.001) was related to the YSR total when outcomes were continuous. For both the CAFAS and YSR logistic models, maladaptive style was significantly associated with greater risk of impairment (P=.001). Problem-solving after TBI differs from normative samples and is associated with functional impairments. The relation of problem-solving deficits after TBI with global functioning merits further investigation, with consideration of the potential effects of problem-solving interventions on functional outcomes. 
Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  13. Finite Element Modeling of Coupled Flexible Multibody Dynamics and Liquid Sloshing

    DTIC Science & Technology

    2006-09-01

tanks is presented. The semi-discrete combined solid and fluid equations of motion are integrated using a time-accurate parallel explicit solver... Incompressible fluid flow in a moving/deforming container including accurate modeling of the free-surface, turbulence, and viscous effects... paper, a single computational code which uses a time-accurate explicit solution procedure is used to solve both the solid and fluid equations of

  14. New Ideas on the Design of the Web-Based Learning System Oriented to Problem Solving from the Perspective of Question Chain and Learning Community

    ERIC Educational Resources Information Center

    Zhang, Yin; Chu, Samuel K. W.

    2016-01-01

    In recent years, a number of models concerning problem solving systems have been put forward. However, many of them stress on technology and neglect the research of problem solving itself, especially the learning mechanism related to problem solving. In this paper, we analyze the learning mechanism of problem solving, and propose that when…

  15. Perceived problem solving, stress, and health among college students.

    PubMed

    Largo-Wight, Erin; Peterson, P Michael; Chen, W William

    2005-01-01

    To study the relationships among perceived problem solving, stress, and physical health. The Perceived Stress Questionnaire (PSQ), Personal Problem solving Inventory (PSI), and a stress-related physical health symptoms checklist were used to measure perceived stress, problem solving, and health among undergraduate college students (N = 232). Perceived problem-solving ability predicted self-reported physical health symptoms (R2 = .12; P < .001) and perceived stress (R2 = .19; P < .001). Perceived problem solving was a stronger predictor of physical health and perceived stress than were physical activity, alcohol consumption, or social support. Implications for college health promotion are discussed.

  16. Examining Tasks that Facilitate the Experience of Incubation While Problem-Solving

    ERIC Educational Resources Information Center

    Both, Lilly; Needham, Douglas; Wood, Eileen

    2004-01-01

    The three studies presented here contrasted the problem-solving outcomes of university students when a break was provided or not provided during a problem-solving session. In addition, two studies explored the effect of providing hints (priming) and the placement of hints during the problem-solving session. First, the ability to solve a previously…

  17. The role of optimization in the next generation of computer-based design tools

    NASA Technical Reports Server (NTRS)

    Rogan, J. Edward

    1989-01-01

    There is a close relationship between design optimization and the emerging new generation of computer-based tools for engineering design. With some notable exceptions, the development of these new tools has not taken full advantage of recent advances in numerical design optimization theory and practice. Recent work in the field of design process architecture has included an assessment of the impact of next-generation computer-based design tools on the design process. These results are summarized, and insights into the role of optimization in a design process based on these next-generation tools are presented. An example problem has been worked out to illustrate the application of this technique. The example problem - layout of an aircraft main landing gear - is one that is simple enough to be solved by many other techniques. Although the mathematical relationships describing the objective function and constraints for the landing gear layout problem can be written explicitly and are quite straightforward, an approximation technique has been used in the solution of this problem that can just as easily be applied to integrate supportability or producibility assessments using theory of measurement techniques into the design decision-making process.

  18. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1985-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.
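    As a simple worked instance of the transform technique this abstract describes (my own illustrative example, not one from the paper): for the linear convolution equation y'(t) + (1 * y)(t) = 1 with y(0) = 0, taking Laplace transforms turns the integro-differential equation into the algebraic relation s Y(s) + Y(s)/s = 1/s, so Y(s) = 1/(s^2 + 1), and inversion gives y(t) = sin t. A quick numerical check of that inverted solution:

    ```python
    import numpy as np

    t = np.linspace(0.0, 5.0, 2001)
    y = np.sin(t)                                  # candidate solution y(t) = sin t
    dydt = np.cos(t)
    # cumulative trapezoidal integral of y from 0 to each t: the convolution (1 * y)
    conv = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))
    residual = dydt + conv - 1.0                   # y' + (1 * y) - 1, should be ~0
    err = np.max(np.abs(residual))
    ```

    The residual vanishes to within quadrature error, confirming the inversion. It is exactly this step, transforming a convolution into an algebraic product, that fails once the unknown enters nonlinearly, which is the difficulty the abstract goes on to discuss.
    
    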

  19. Bounding solutions of geometrically nonlinear viscoelastic problems

    NASA Technical Reports Server (NTRS)

    Stubstad, J. M.; Simitses, G. J.

    1986-01-01

    Integral transform techniques, such as the Laplace transform, provide simple and direct methods for solving viscoelastic problems formulated within a context of linear material response and using linear measures for deformation. Application of the transform operator reduces the governing linear integro-differential equations to a set of algebraic relations between the transforms of the unknown functions, the viscoelastic operators, and the initial and boundary conditions. Inversion either directly or through the use of the appropriate convolution theorem, provides the time domain response once the unknown functions have been expressed in terms of sums, products or ratios of known transforms. When exact inversion is not possible approximate techniques may provide accurate results. The overall problem becomes substantially more complex when nonlinear effects must be included. Situations where a linear material constitutive law can still be productively employed but where the magnitude of the resulting time dependent deformations warrants the use of a nonlinear kinematic analysis are considered. The governing equations will be nonlinear integro-differential equations for this class of problems. Thus traditional as well as approximate techniques, such as cited above, cannot be employed since the transform of a nonlinear function is not explicitly expressible.

  20. THOR: an open-source exo-GCM

    NASA Astrophysics Data System (ADS)

    Grosheintz, Luc; Mendonça, João; Käppeli, Roger; Lukas Grimm, Simon; Mishra, Siddhartha; Heng, Kevin

    2015-12-01

In this talk, I will present THOR, the first fully conservative, GPU-accelerated exo-GCM (general circulation model) on a nearly uniform, global grid that treats shocks and is non-hydrostatic. THOR will be freely available to the community as a standard tool. Unlike most GCMs, THOR solves the full, non-hydrostatic Euler equations instead of the primitive equations. The equations are solved on a global three-dimensional icosahedral grid by a second-order Finite Volume Method (FVM). Icosahedral grids are nearly uniform refinements of an icosahedron. We've implemented three different versions of this grid. FVM conserves the prognostic variables (density, momentum and energy) exactly and doesn't require a diffusion term (artificial viscosity) in the Euler equations to stabilize our solver. Historically, FVM was designed to treat discontinuities correctly. Hence it excels at resolving shocks, including those present in hot exoplanetary atmospheres. Atmospheres are generally in near hydrostatic equilibrium. We therefore implement a well-balancing technique recently developed at ETH Zurich. This well-balancing ensures that our FVM maintains hydrostatic equilibrium to machine precision. Better yet, it is able to resolve pressure perturbations from this equilibrium as small as one part in 100,000. It is important to realize that these perturbations are significantly smaller than the truncation error of the same scheme without well-balancing. If during the course of the simulation (due to forcing) the atmosphere becomes non-hydrostatic, our solver continues to function correctly. THOR just passed an important milestone: we've implemented the explicit part of the solver. The explicit solver is useful for studying instabilities or local problems on relatively short time scales. I'll show some nice properties of the explicit THOR. An explicit solver is not appropriate for climate study because the time step is limited by the sound speed. Therefore, we are working on the first fully implicit GCM. By ESS3, I hope to present results for the advection equation. THOR is part of the Exoclimes Simulation Platform (ESP), a set of open-source community codes for simulating and understanding the atmospheres of exoplanets. The ESP also includes tools for radiative transfer and retrieval (HELIOS), an opacity calculator (HELIOS-K), and a chemical kinetics solver (VULCAN). We expect to publicly release an initial version of THOR in 2016 on www.exoclime.org.
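    The sound-speed limit on explicit solvers mentioned in this abstract is the CFL condition: acoustic waves must not cross more than a fraction of a cell per step, so dt <= C dx / (|u| + c_s). A generic sketch (illustrative only; THOR's actual time-step logic is not given in the abstract):

    ```python
    import numpy as np

    def cfl_timestep(dx, u, rho, p, gamma=1.4, cfl=0.5):
        """Largest stable explicit step over all cells: cfl * min(dx / (|u| + c_s))."""
        cs = np.sqrt(gamma * p / rho)          # adiabatic sound speed in each cell
        return cfl * np.min(dx / (np.abs(u) + cs))
    ```

    For a nearly hydrostatic atmosphere the flow speed |u| is far below c_s, so the acoustic term dominates and forces tiny steps; an implicit solver removes this constraint, which is why the abstract calls the explicit scheme unsuitable for climate-length integrations.
    
    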

  1. The profile of students’ problem-solving skill in physics across interest program in the secondary school

    NASA Astrophysics Data System (ADS)

    Jua, S. K.; Sarwanto; Sukarmin

    2018-05-01

Problem-solving skills are important skills in physics. However, according to some researchers, the problem-solving skill of Indonesian students in physics learning is still categorized as low. The purpose of this study was to identify the profile of problem-solving skills of students who follow the across-interest physics program. The subjects of the study were high school students of Social Sciences, grade X. The type of this research was descriptive research. The data used to analyze the problem-solving skills were obtained through student questionnaires and test results on impulse and collision material. From the descriptive analysis, the percentage of students’ problem-solving skill based on the test was 52.93%. These results indicate that students’ problem-solving skill is categorized as low.

  2. Factors influencing analysis of complex cognitive tasks: a framework and example from industrial process control.

    PubMed

    Prietula, M J; Feltovich, P J; Marchak, F

    2000-01-01

    We propose that considering four categories of task factors can facilitate knowledge elicitation efforts in the analysis of complex cognitive tasks: materials, strategies, knowledge characteristics, and goals. A study was conducted to examine the effects of altering aspects of two of these task categories on problem-solving behavior across skill levels: materials and goals. Two versions of an applied engineering problem were presented to expert, intermediate, and novice participants. Participants were to minimize the cost of running a steam generation facility by adjusting steam generation levels and flows. One version was cast in the form of a dynamic, computer-based simulation that provided immediate feedback on flows, costs, and constraint violations, thus incorporating key variable dynamics of the problem context. The other version was cast as a static computer-based model, with no dynamic components, cost feedback, or constraint checking. Experts performed better than the other groups across material conditions, and, when required, the presentation of the goal assisted the experts more than the other groups. The static group generated richer protocols than the dynamic group, but the dynamic group solved the problem in significantly less time. Little effect of feedback was found for intermediates, and none for novices. We conclude that demonstrating differences in performance in this task requires different materials than explicating underlying knowledge that leads to performance. We also conclude that substantial knowledge is required to exploit the information yielded by the dynamic form of the task or the explicit solution goal. This simple model can help to identify the contextual factors that influence elicitation and specification of knowledge, which is essential in the engineering of joint cognitive systems.

  3. A New Problem-Posing Approach Based on Problem-Solving Strategy: Analyzing Pre-Service Primary School Teachers' Performance

    ERIC Educational Resources Information Center

    Kiliç, Çigdem

    2017-01-01

    This study examined pre-service primary school teachers' performance in posing problems that require knowledge of problem-solving strategies. Quantitative and qualitative methods were combined. The 120 participants were asked to pose a problem that could be solved by using the find-a-pattern a particular problem-solving strategy. After that,…

  4. Case of Two Electrostatics Problems: Can Providing a Diagram Adversely Impact Introductory Physics Students' Problem Solving Performance?

    ERIC Educational Resources Information Center

    Maries, Alexandru; Singh, Chandralekha

    2018-01-01

    Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an…

  5. School Leaders' Problem Framing: A Sense-Making Approach to Problem-Solving Processes of Beginning School Leaders

    ERIC Educational Resources Information Center

    Sleegers, Peter; Wassink, Hartger; van Veen, Klaas; Imants, Jeroen

    2009-01-01

    In addition to cognitive research on school leaders' problem solving, this study focuses on the situated and personal nature of problem framing by combining insights from cognitive research on problem solving and sense-making theory. The study reports the results of a case study of two school leaders solving problems in their daily context by…

  6. The Place of Problem Solving in Contemporary Mathematics Curriculum Documents

    ERIC Educational Resources Information Center

    Stacey, Kaye

    2005-01-01

    This paper reviews the presentation of problem solving and process aspects of mathematics in curriculum documents from Australia, UK, USA and Singapore. The place of problem solving in the documents is reviewed and contrasted, and illustrative problems from teachers' support materials are used to demonstrate how problem solving is now more often…

  7. Translation among Symbolic Representations in Problem-Solving. Revised.

    ERIC Educational Resources Information Center

    Shavelson, Richard J.; And Others

    This study investigated the relationships among the symbolic representation of problems given to students to solve, the mental representations they use to solve the problems, and the accuracy of their solutions. Twenty eleventh-grade science students were asked to think aloud as they solved problems on the ideal gas laws. The problems were…

  8. Using Students' Representations Constructed during Problem Solving to Infer Conceptual Understanding

    ERIC Educational Resources Information Center

    Domin, Daniel; Bodner, George

    2012-01-01

    The differences in the types of representations constructed during successful and unsuccessful problem-solving episodes were investigated within the context of graduate students working on problems that involve concepts from 2D-NMR. Success at problem solving was established by having the participants solve five problems relating to material just…

  9. Errors and Understanding: The Effects of Error-Management Training on Creative Problem-Solving

    ERIC Educational Resources Information Center

    Robledo, Issac C.; Hester, Kimberly S.; Peterson, David R.; Barrett, Jamie D.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.

    2012-01-01

    People make errors in their creative problem-solving efforts. The intent of this article was to assess whether error-management training would improve performance on creative problem-solving tasks. Undergraduates were asked to solve an educational leadership problem known to call for creative thought where problem solutions were scored for…

  10. Encouraging Sixth-Grade Students' Problem-Solving Performance by Teaching through Problem Solving

    ERIC Educational Resources Information Center

    Bostic, Jonathan D.; Pape, Stephen J.; Jacobbe, Tim

    2016-01-01

    This teaching experiment provided students with continuous engagement in a problem-solving based instructional approach during one mathematics unit. Three sections of sixth-grade mathematics were sampled from a school in Florida, U.S.A. and one section was randomly assigned to experience teaching through problem solving. Students' problem-solving…

  11. King Oedipus and the Problem Solving Process.

    ERIC Educational Resources Information Center

    Borchardt, Donald A.

    An analysis of the problem solving process reveals at least three options: (1) finding the cause, (2) solving the problem, and (3) anticipating potential problems. These methods may be illustrated by examining "Oedipus Tyrannus," a play in which a king attempts to deal with a problem that appears to be beyond his ability to solve, and…

  12. Problem Solving with the Elementary Youngster.

    ERIC Educational Resources Information Center

    Swartz, Vicki

    This paper explores research on problem solving and suggests a problem-solving approach to elementary school social studies, using a culture study of the ancient Egyptians and King Tut as a sample unit. The premise is that problem solving is particularly effective in dealing with problems which do not have one simple and correct answer but rather…

  13. The Effect of Learning Environments Based on Problem Solving on Students' Achievements of Problem Solving

    ERIC Educational Resources Information Center

    Karatas, Ilhan; Baki, Adnan

    2013-01-01

    Problem solving is recognized as an important life skill involving a range of processes including analyzing, interpreting, reasoning, predicting, evaluating and reflecting. For that reason educating students as efficient problem solvers is an important role of mathematics education. Problem solving skill is the centre of mathematics curriculum.…

  14. The needs analysis of learning Inventive Problem Solving for technical and vocational students

    NASA Astrophysics Data System (ADS)

    Sai'en, Shanty; Tze Kiong, Tee; Yunos, Jailani Md; Foong, Lee Ming; Heong, Yee Mei; Mohaffyza Mohamad, Mimi

    2017-08-01

The Malaysian Ministry of Education highlighted in its National Higher Education Strategic Plan that higher education needs to focus on adopting 21st-century skills in order to increase graduates’ employability. Current research indicates that most graduates lack the problem-solving skills needed to secure a job. Recognizing the importance of this skill, an alternative approach is suggested to help higher-institution students solve their problems. This study was undertaken to measure the level of problem-solving skills, identify the needs of learning inventive problem-solving skills, and identify the needs of developing an inventive problem-solving module. Using a questionnaire, the study sampled 132 students from the Faculty of Technical and Vocational Education. Findings indicated that the majority of the students failed to define what an inventive problem is and the root cause of a problem. They were also unable to state objectives and goals and thus failed to solve the problem. As a result, the students agreed on the development of an Inventive Problem Solving module to assist them.

  15. Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology.

    PubMed

    Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion

    2013-08-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign.

  16. Automation and adaptation: Nurses’ problem-solving behavior following the implementation of bar coded medication administration technology

    PubMed Central

    Holden, Richard J.; Rivera-Rodriguez, A. Joy; Faye, Héléne; Scanlon, Matthew C.; Karsh, Ben-Tzion

    2012-01-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses’ operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA’s impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians’ work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  17. Do problem-solving skills affect success in nursing process applications? An application among Turkish nursing students.

    PubMed

    Bayindir Çevik, Ayfer; Olgun, Nermin

    2015-04-01

    This study aimed to determine the relationship between the problem-solving skills and nursing process application skills of nursing students. This is a longitudinal and correlational study. The sample included 71 students. An information form, the Problem-Solving Inventory, and the nursing processes the students presented at the end of clinical courses were used for data collection. Although there was no significant relationship between problem-solving skills and nursing process grades, improvements in problem-solving skills were accompanied by higher grades. Problem-solving skills and nursing process skills can be concomitantly increased. Students are advised to use critical thinking, practical approaches, and care plans, as well as to revise nursing processes, in order to improve their problem-solving skills and nursing process application skills. © 2014 NANDA International, Inc.

  18. Resolvent-Techniques for Multiple Exercise Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.
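    The reduction described in this abstract rests on a standard identity; a hedged sketch follows, with notation assumed rather than taken from the paper. For a strong Markov process $X$, discount rate $r > 0$, and a refraction period $\delta \sim \mathrm{Exp}(\lambda)$ independent of $X$, the resolvent turns expectations over the random waiting time into an operator applied to the continuation value:

    ```latex
    % Resolvent of X at rate q (assumed notation):
    (R_q g)(x) = \mathbb{E}_x\!\left[\int_0^\infty e^{-qt} g(X_t)\,dt\right].
    % For an independent refraction period \delta \sim \mathrm{Exp}(\lambda),
    % conditioning on \delta = t and integrating against its density gives
    \mathbb{E}_x\!\left[e^{-r\delta} g(X_\delta)\right]
      = \int_0^\infty \lambda e^{-\lambda t}\, e^{-rt}\,
        \mathbb{E}_x[g(X_t)]\,dt
      = \lambda\,(R_{r+\lambda}\, g)(x).
    ```

    Under these assumptions, the value collected after each refraction period becomes a resolvent applied to the next value function, which is one way a multiple stopping problem can be reduced to a sequence of ordinary stopping problems.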

  19. Interleaved numerical renormalization group as an efficient multiband impurity solver

    NASA Astrophysics Data System (ADS)

    Stadler, K. M.; Mitchell, A. K.; von Delft, J.; Weichselbaum, A.

    2016-06-01

    Quantum impurity problems can be solved using the numerical renormalization group (NRG), which involves discretizing the free conduction electron system and mapping to a "Wilson chain." It was shown recently that Wilson chains for different electronic species can be interleaved by use of a modified discretization, dramatically increasing the numerical efficiency of the RG scheme [Phys. Rev. B 89, 121105(R) (2014), 10.1103/PhysRevB.89.121105]. Here we systematically examine the accuracy and efficiency of the "interleaved" NRG (iNRG) method in the context of the single impurity Anderson model, the two-channel Kondo model, and a three-channel Anderson-Hund model. The performance of iNRG is explicitly compared with "standard" NRG (sNRG): when the average number of states kept per iteration is the same in both calculations, the accuracy of iNRG is equivalent to that of sNRG but the computational costs are significantly lower in iNRG when the same symmetries are exploited. Although iNRG weakly breaks SU(N ) channel symmetry (if present), both accuracy and numerical cost are entirely competitive with sNRG exploiting full symmetries. iNRG is therefore shown to be a viable and technically simple alternative to sNRG for high-symmetry models. Moreover, iNRG can be used to solve a range of lower-symmetry multiband problems that are inaccessible to sNRG.
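    The interleaving idea in this abstract can be illustrated with a toy calculation; this is a minimal sketch, not the actual iNRG mapping, and the decay law and z-shift values below are assumptions. Wilson-chain hopping amplitudes fall off roughly as $\Lambda^{-n/2}$; shifting each channel's discretization staggers the energy scales so that sites from different channels interleave when sorted by scale:

    ```python
    # Toy illustration (not the full iNRG construction): Wilson-chain
    # hoppings decay ~ Lambda^{-(n+z)/2}.  Giving each channel a different
    # z-shift staggers the energy scales, so sites of the two channels
    # alternate when ordered by decreasing scale.

    LAMBDA = 2.0  # discretization parameter (assumed value)

    def hopping(n, z=0.0):
        """Approximate large-n hopping amplitude t_n ~ Lambda^{-(n+z)/2}."""
        return LAMBDA ** (-(n + z) / 2.0)

    n_sites = 4
    channels = {"A": 0.0, "B": 0.5}  # channel -> z-shift (assumed shifts)

    # Collect (energy scale, channel, site index) and sort by scale.
    sites = [(hopping(n, z), ch, n)
             for ch, z in channels.items() for n in range(n_sites)]
    sites.sort(reverse=True)

    for t, ch, n in sites:
        print(f"channel {ch}, site {n}: t = {t:.4f}")
    ```

    With these numbers the sorted list alternates strictly between channels A and B, which is the ordering property that lets iNRG diagonalize one interleaved site at a time instead of one full multichannel shell at a time.
    
    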

  20. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. 2010 Elsevier Ltd. All rights reserved.
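    The "interval solution" idea in this abstract can be sketched in miniature; the numbers and model below are hypothetical and far simpler than SI-IFTMILP, serving only to show how solving once at each interval bound yields an objective interval. A first-stage capacity is chosen, and a second-stage recourse penalty is paid for waste exceeding it:

    ```python
    # Minimal sketch (hypothetical toy model, not SI-IFTMILP): two-stage
    # decision with interval uncertainty in the waste load.  Solving the
    # problem at each bound of the interval gives an interval-valued
    # optimal cost [f_lower, f_upper].

    COST_PER_UNIT = 2.0             # first-stage capacity cost (assumed)
    PENALTY_PER_UNIT = 5.0          # recourse penalty per unit shortfall (assumed)
    WASTE_INTERVAL = (80.0, 120.0)  # uncertain waste load as an interval

    def total_cost(capacity, waste):
        """First-stage cost plus second-stage penalty for unmet waste."""
        shortfall = max(0.0, waste - capacity)
        return COST_PER_UNIT * capacity + PENALTY_PER_UNIT * shortfall

    def best_capacity(waste):
        # Enumerate candidate capacities on a coarse grid (toy search).
        candidates = [c * 10.0 for c in range(16)]
        return min(candidates, key=lambda c: total_cost(c, waste))

    f_lower = total_cost(best_capacity(WASTE_INTERVAL[0]), WASTE_INTERVAL[0])
    f_upper = total_cost(best_capacity(WASTE_INTERVAL[1]), WASTE_INTERVAL[1])
    print(f"objective interval: [{f_lower:.1f}, {f_upper:.1f}]")
    ```

    The real model additionally handles fuzzy boundary intervals, random variables, and integer expansion decisions, but the output has the same character: an interval of optimal costs rather than a single number.
    
    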
