Cognitive Load in Algebra: Element Interactivity in Solving Equations
ERIC Educational Resources Information Center
Ngu, Bing Hiong; Chung, Siu Fung; Yeung, Alexander Seeshing
2015-01-01
Central to equation solving is the maintenance of equivalence on both sides of the equation. However, when the process involves an interaction of multiple elements, solving an equation can impose a high cognitive load. The balance method requires operations on both sides of the equation, whereas the inverse method involves operations on one side…
Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.
1957-10-01
The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly when using Von Seidel's method of approximation and performs the summations required for solving for the unknown terms by a method of successive approximations.
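For illustration only, a minimal Python sketch of the successive-approximation (Gauss-Seidel-type) iteration described above, not the original machine's implementation; the matrix, right-hand side, and tolerance are made-up examples.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b by successive approximation (Gauss-Seidel sweeps).

    Converges rapidly when A is strictly diagonally dominant."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Sum over already-updated and not-yet-updated unknowns
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Example system (illustrative only)
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, b))
```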
Developing a Blended Learning-Based Method for Problem-Solving in Capability Learning
ERIC Educational Resources Information Center
Dwiyogo, Wasis D.
2018-01-01
The main objectives of the study were to develop and investigate the implementation of a blended learning-based method for problem-solving. Three experts were involved in the study, and all three stated that the model was ready to be applied in the classroom. The implementation of the blended learning-based design for problem-solving was…
Shooting method for solution of boundary-layer flows with massive blowing
NASA Technical Reports Server (NTRS)
Liu, T.-M.; Nachtsheim, P. R.
1973-01-01
A modified, bidirectional shooting method is presented for solving boundary-layer equations under conditions of massive blowing. Unlike the conventional shooting method, which is unstable when the blowing rate increases, the proposed method avoids the unstable direction and is capable of solving complex boundary-layer problems involving mass and energy balance on the surface.
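As a hedged illustration of the conventional (single-direction) shooting idea the abstract contrasts with, the sketch below solves a simple model two-point boundary value problem, not the boundary-layer equations; the equation, interval, and bracketing slopes are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Model two-point BVP (illustrative, not the boundary-layer equations):
#   y'' = -y,  y(0) = 0,  y(pi/2) = 1   (exact solution y = sin(x))
def rhs(x, z):
    y, yp = z
    return [yp, -y]

def endpoint_error(slope):
    """Integrate the IVP with a guessed initial slope and return the
    mismatch at the right boundary."""
    sol = solve_ivp(rhs, [0.0, np.pi / 2], [0.0, slope], rtol=1e-9)
    return sol.y[0, -1] - 1.0

# Conventional shooting: find the initial slope that zeroes the mismatch
slope = brentq(endpoint_error, 0.1, 5.0)
print("recovered initial slope:", slope)   # exact value is 1.0
```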
ERIC Educational Resources Information Center
Barak, Moshe
2013-01-01
This paper presents the outcomes of teaching an inventive problem-solving course in junior high schools in an attempt to deal with the current relative neglect of fostering students' creativity and problem-solving capabilities in traditional schooling. The method involves carrying out systematic manipulation with attributes, functions and…
Bai, Shirong; Skodje, Rex T
2017-08-17
A new approach is presented for simulating the time-evolution of chemically reactive systems. This method provides an alternative to conventional modeling of mass-action kinetics that involves solving differential equations for the species concentrations. The method presented here avoids the need to solve the rate equations by switching to a representation based on chemical pathways. In the Sum Over Histories Representation (or SOHR) method, any time-dependent kinetic observable, such as concentration, is written as a linear combination of probabilities for chemical pathways leading to a desired outcome. In this work, an iterative method is introduced that allows the time-dependent pathway probabilities to be generated from a knowledge of the elementary rate coefficients, thus avoiding the pitfalls involved in solving the differential equations of kinetics. The method is successfully applied to the model Lotka-Volterra system and to a realistic H2 combustion model.
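For contrast with the pathway-based SOHR approach, a minimal sketch of the conventional mass-action rate-equation integration for a Lotka-Volterra model is shown below; the rate constants and initial concentrations are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Conventional mass-action kinetics for the Lotka-Volterra model,
#   d[X]/dt = k1*A*X - k2*X*Y,   d[Y]/dt = k2*X*Y - k3*Y
# (the approach SOHR is designed to avoid). All constants are illustrative.
k1, k2, k3, A = 1.0, 0.1, 1.5, 1.0

def rates(t, c):
    X, Y = c
    return [k1 * A * X - k2 * X * Y, k2 * X * Y - k3 * Y]

sol = solve_ivp(rates, [0.0, 30.0], [10.0, 5.0], dense_output=True)
print(sol.y[:, -1])  # species concentrations at t = 30
```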
The Model Method: Singapore Children's Tool for Representing and Solving Algebraic Word Problems
ERIC Educational Resources Information Center
Ng, Swee Fong; Lee, Kerry
2009-01-01
Solving arithmetic and algebraic word problems is a key component of the Singapore elementary mathematics curriculum. One heuristic taught, the model method, involves drawing a diagram to represent key information in the problem. We describe the model method and a three-phase theoretical framework supporting its use. We conducted 2 studies to…
Problem solving using soft systems methodology.
Land, L
This article outlines a method of problem solving which considers holistic solutions to complex problems. Soft systems methodology allows people involved in the problem situation to have control over the decision-making process.
Solving ay'' + by' + cy = 0 with a Simple Product Rule Approach
ERIC Educational Resources Information Center
Tolle, John
2011-01-01
When elementary ordinary differential equations (ODEs) of first and second order are included in the calculus curriculum, second-order linear constant coefficient ODEs are typically solved by a method more appropriate to differential equations courses. This method involves the characteristic equation and its roots, complex-valued solutions, and…
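For reference, the characteristic-equation route that the note contrasts with can be summarized as follows (standard textbook material, not the author's product-rule approach):

```latex
Substituting $y = e^{rx}$ into $a y'' + b y' + c y = 0$ gives the characteristic equation
\[
  a r^2 + b r + c = 0, \qquad r = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},
\]
with general solutions
\[
  y = C_1 e^{r_1 x} + C_2 e^{r_2 x}, \qquad
  y = (C_1 + C_2 x)\, e^{r x}, \qquad
  y = e^{p x}\,(C_1 \cos qx + C_2 \sin qx)
\]
for distinct real roots, a repeated root $r$, and complex roots $p \pm qi$, respectively.
```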
NASA Astrophysics Data System (ADS)
Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena
2017-09-01
The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrodinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.
ERIC Educational Resources Information Center
Verderber, Nadine L.
1992-01-01
Presents the use of spreadsheets as an alternative method for precalculus students to solve maximum or minimum problems involving surface area and volume. Concludes that students with less technical backgrounds can solve problems normally requiring calculus and suggests sources for additional problems. (MDH)
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2012-01-01
In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
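As a small illustration of the target form of such a decomposition (not a reproduction of the author's division-and-substitution steps), consider:

```latex
\[
  \frac{1}{x\,(x^2+1)} \;=\; \frac{1}{x} \;-\; \frac{x}{x^2+1},
\]
since combining the right-hand side over the common denominator gives
\[
  \frac{(x^2+1) - x^2}{x\,(x^2+1)} \;=\; \frac{1}{x\,(x^2+1)}.
\]
```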
ERIC Educational Resources Information Center
Eyisi, Daniel
2016-01-01
Research in science education seeks to discover truth, which involves the combination of reasoning and experience. In order to find appropriate teaching methods for teaching science students problem-solving skills, educational researchers use different research approaches based on the data collection and analysis…
Students’ difficulties in probabilistic problem-solving
NASA Astrophysics Data System (ADS)
Arum, D. P.; Kusmayadi, T. A.; Pramudya, I.
2018-03-01
Many errors can be identified when students solve mathematics problems, particularly probabilistic problems. The present study aims to investigate students' difficulties in solving probabilistic problems, focusing on analyzing and describing students' errors during the solution process. This research used a qualitative method with a case study strategy. The subjects were ten 9th-grade students selected by purposive sampling. The data comprise students' probabilistic problem-solving results and recorded interviews regarding their difficulties in solving the problems. These data were analyzed descriptively using Miles and Huberman's steps. The results show that students' difficulties in solving probabilistic problems fall into three categories: first, difficulties in understanding the problem; second, difficulties in choosing and using appropriate solution strategies; and third, difficulties with the computational process. These results suggest that students are not yet able to apply their knowledge and abilities to probabilistic problems. It is therefore important for mathematics teachers to plan probabilistic learning that optimizes students' probabilistic thinking ability.
Fowler, Nicole R.; Hansen, Alexandra S.; Barnato, Amber E.; Garand, Linda
2013-01-01
Objective: To measure perceived involvement in medical decision making and determine whether anticipatory grief is associated with problem solving among family caregivers of older adults with cognitive impairment. Method: Retrospective analysis of baseline data from a caregiver intervention (n=73), using multivariable regression models to test the association between caregivers' anticipatory grief, measured by the Anticipatory Grief Scale (AGS), and problem-solving abilities, measured by the Social Problem Solving Inventory-Revised: Short Form (SPSI-R:S). Results: 47/73 (64%) of caregivers reported involvement in medical decision making. Mean AGS was 70.1 (± 14.8) and mean SPSI-R:S was 107.2 (± 11.6). Higher AGS scores were associated with lower positive problem orientation (P=0.041) and higher negative problem orientation scores (P=0.001), but not with the other components of problem solving: rational problem solving, avoidance style, and impulsivity/carelessness style. Discussion: Higher anticipatory grief among family caregivers impaired problem solving, which could have negative consequences for their medical decision-making responsibilities. PMID:23428394
A diffuse-interface method for two-phase flows with soluble surfactants
Teigen, Knut Erik; Song, Peng; Lowengrub, John; Voigt, Axel
2010-01-01
A method is presented to solve two-phase problems involving soluble surfactants. The incompressible Navier–Stokes equations are solved along with equations for the bulk and interfacial surfactant concentrations. A non-linear equation of state is used to relate the surface tension to the interfacial surfactant concentration. The method is based on the use of a diffuse interface, which allows a simple implementation using standard finite difference or finite element techniques. Here, finite difference methods on a block-structured adaptive grid are used, and the resulting equations are solved using a non-linear multigrid method. Results are presented for a drop in shear flow in both 2D and 3D, and the effect of solubility is discussed. PMID:21218125
The effects of cumulative practice on mathematics problem solving.
Mayfield, Kristin H; Chase, Philip N
2002-01-01
This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving. PMID:12102132
Interference thinking in constructing students’ knowledge to solve mathematical problems
NASA Astrophysics Data System (ADS)
Jayanti, W. E.; Usodo, B.; Subanti, S.
2018-04-01
This research aims to describe interference thinking in constructing students' knowledge to solve mathematical problems. Interference thinking in problem solving occurs when students hold two concepts that interfere with each other. The construction of a problem solution can be traced using Piaget's assimilation and accommodation framework, which helps reveal the students' thinking structures while solving the problems. The method of this research was a qualitative method with a case study strategy. The data involve problem-solving results and transcripts of interviews about students' errors in solving the problem. The results focus on a student who experienced proactive interference, in which old information interferes with the ability to recall new information while solving a problem. Interference thinking in constructing knowledge occurs when the student's thinking structures in the assimilation and accommodation process are incomplete. However, after the student was prompted to reflect, the student's thinking process reached an equilibrium condition, even though the result obtained remained wrong.
The use of Galerkin finite-element methods to solve mass-transport equations
Grove, David B.
1977-01-01
The partial differential equation that describes the transport and reaction of chemical solutes in porous media was solved using the Galerkin finite-element technique. These finite elements were superimposed over finite-difference cells used to solve the flow equation. Both convection and flow due to hydraulic dispersion were considered. Linear and Hermite cubic approximations (basis functions) provided satisfactory results; however, the linear functions were computationally more efficient for two-dimensional problems. Successive over-relaxation (SOR) and iteration techniques using Tchebyschef polynomials were used to solve the sparse matrices generated using the linear and Hermite cubic functions, respectively. Comparisons of the finite-element methods to the finite-difference methods, and to analytical results, indicated that a high degree of accuracy may be obtained using the method outlined. The technique was applied to a field problem involving an aquifer contaminated with chloride, tritium, and strontium-90. (Woodard-USGS)
Prospective Teachers' Beliefs about Problem Solving in Multiple Ways
ERIC Educational Resources Information Center
Arikan, Elif Esra
2016-01-01
The purpose of this study is to analyze whether prospective teachers believe that solving a mathematics problem involves using different solution methods. Sixty prospective mathematics teachers taking the pedagogic training program at a state university participated in this study. Five open-ended questions were asked. The study was carried out…
Computer-Based Assessment of Complex Problem Solving: Concept, Implementation, and Application
ERIC Educational Resources Information Center
Greiff, Samuel; Wustenberg, Sascha; Holt, Daniel V.; Goldhammer, Frank; Funke, Joachim
2013-01-01
Complex Problem Solving (CPS) skills are essential to successfully deal with environments that change dynamically and involve a large number of interconnected and partially unknown causal influences. The increasing importance of such skills in the 21st century requires appropriate assessment and intervention methods, which in turn rely on adequate…
NASA Astrophysics Data System (ADS)
van Horssen, Wim T.; Wang, Yandong; Cao, Guohua
2018-06-01
In this paper, it is shown how characteristic coordinates, or equivalently how the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin type of boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first order space-derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The obtained analytical results by applying the proposed method, are in complete agreement with those obtained by using the numerical, finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those as for instance obtained by the method of separation of variables, or by the finite difference method.
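For reference, d'Alembert's formula for the free wave equation u_tt = c^2 u_xx on the line, with initial displacement f and initial velocity g, reads:

```latex
\[
  u(x,t) \;=\; \frac{f(x - ct) + f(x + ct)}{2}
  \;+\; \frac{1}{2c} \int_{x - ct}^{x + ct} g(s)\, ds .
\]
```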
NASA Astrophysics Data System (ADS)
Banerjee, Banmali
Methods and procedures for successfully solving math word problems have been, and continue to be, a mystery to many U.S. high school students. Previous studies suggest that the contextual and mathematical understanding of a word problem, along with the development of schemas and their related external representations, positively contribute to students' accomplishments when solving word problems. Some studies have examined the effects of diagramming on students' abilities to solve word problems that only involved basic arithmetic operations. Other studies have investigated how instructional models that used technology influenced students' problem solving achievements. Still other studies have used schema-based instruction involving students with learning disabilities. No study has evaluated regular high school students' achievements in solving standard math word problems using a diagramming technique without technological aid. This study evaluated students' achievement in solving math word problems using a diagramming technique. Using a quasi-experimental pretest-posttest research design, quantitative data were collected from 172 grade 11 Hispanic English language learners (ELLs) and African American learners whose first language is English (EFLLs) in 18 classes at an inner city high school in Northern New Jersey. There were 88 control and 84 experimental students. The pretest and posttest of each participating student and samples of the experimental students' class assignments provided the qualitative data for the study. The data from this study exhibited that the diagramming method of solving math word problems significantly improved student achievement in the experimental group (p<.01) compared to the control group. The study demonstrated that urban, high school ELLs benefited from instruction that placed emphasis on the mathematical vocabulary and symbols used in word problems and that both ELLs and EFLLs improved their problem solving success through careful attention to the creation and labeling of diagrams to represent the mathematics involved in standard word problems. Although Learnertype (ELL, EFLL), Classtype (Bilingual and Mixed), and Gender (Female, Male) were not significant indicators of student achievement, there was significant interaction between Treatment and Classtype at the level of the Bilingual students (p<.01) and between Treatment and Learnertype at the level of the ELLs (p<.01).
Facilitating problem solving in high school chemistry
NASA Astrophysics Data System (ADS)
Gabel, Dorothy L.; Sherwood, Robert D.
The major purpose for conducting this study was to determine whether certain instructional strategies were superior to others in teaching high school chemistry students problem solving. The effectiveness of four instructional strategies for teaching problem solving to students of various proportional reasoning ability, verbal and visual preference, and mathematics anxiety was compared in this aptitude-by-treatment interaction study. The strategies used were the factor-label method, analogies, diagrams, and proportionality. Six hundred and nine high school students in eight schools were randomly assigned to one of four teaching strategies within each classroom. Students used programmed booklets to study the mole concept, the gas laws, stoichiometry, and molarity. Problem-solving ability was measured by a series of immediate posttests, delayed posttests and the ACS-NSTA Examination in High School Chemistry. Results showed that mathematics anxiety is negatively correlated with science achievement and that problem solving is dependent on students' proportional reasoning ability. The factor-label method was found to be the most desirable method and proportionality the least desirable method for teaching the mole concept. However, the proportionality method was best for teaching the gas laws. Several second-order interactions were found to be significant when mathematics anxiety was one of the aptitudes involved.
Investigating the effect of mental set on insight problem solving.
Ollinger, Michael; Jones, Gary; Knoblich, Günther
2008-01-01
Mental set is the tendency to solve certain problems in a fixed way based on previous solutions to similar problems. The moment of insight occurs when a problem cannot be solved using solution methods suggested by prior experience and the problem solver suddenly realizes that the solution requires different solution methods. Mental set and insight have often been linked together, and yet no attempt thus far has systematically examined the interplay between the two. Three experiments are presented that examine the extent to which sets of noninsight and insight problems affect the subsequent solutions of insight test problems. The results indicate a subtle interplay between mental set and insight: when the set involves noninsight problems, no mental set effects are shown for the insight test problems, yet when the set involves insight problems, both facilitation and inhibition can be seen depending on the type of insight problem presented in the set. A two-process model is detailed to explain these findings that combines the representational change mechanism with that of proceduralization.
ERIC Educational Resources Information Center
Marran, James F.; Rogan, Donald V.
Synectics is a method of creative problem solving through the use of metaphor and apparent irrelevancy developed by William J. J. Gordon. The process involves rational knowledge of the problem to be solved, irrational improvisations that lead to fertile associations creating new approaches to the problem, and a euphoric state that is essential in…
A new implementation of the CMRH method for solving dense linear systems
NASA Astrophysics Data System (ADS)
Heyouni, M.; Sadok, H.
2008-04-01
The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors which are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with long-term recurrence which does not require storing the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
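CMRH is not part of standard numerical libraries; as a hedged point of comparison, the GMRES baseline discussed above can be run with SciPy as in the sketch below (the sparse test matrix and right-hand side are made up).

```python
import numpy as np
from scipy.sparse import random as sprandom, eye
from scipy.sparse.linalg import gmres

# Illustrative nonsymmetric, well-conditioned sparse system
rng = np.random.default_rng(0)
n = 500
A = sprandom(n, n, density=0.01, random_state=0, format="csr") + 10.0 * eye(n, format="csr")
b = rng.standard_normal(n)

x, info = gmres(A, b)               # info == 0 indicates convergence
print(info, np.linalg.norm(A @ x - b))
```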
Planning and Scheduling for Fleets of Earth Observing Satellites
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)
2001-01-01
We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.
Enhancing chemistry problem-solving achievement using problem categorization
NASA Astrophysics Data System (ADS)
Bunce, Diane M.; Gabel, Dorothy L.; Samuel, John V.
The enhancement of chemistry students' skill in problem solving through problem categorization is the focus of this study. Twenty-four students in a freshman chemistry course for health professionals are taught how to solve problems using the explicit method of problem solving (EMPS) (Bunce & Heikkinen, 1986). The EMPS is an organized approach to problem analysis which includes encoding the information given in a problem (Given, Asked For), relating this to what is already in long-term memory (Recall), and planning a solution (Overall Plan) before a mathematical solution is attempted. In addition to the EMPS training, treatment students receive three 40-minute sessions following achievement tests in which they are taught how to categorize problems. Control students use this time to review the EMPS solutions of test questions. Although problem categorization is involved in one section of the EMPS (Recall), treatment students who received specific training in problem categorization demonstrate significantly higher achievement on combination problems (those problems requiring the use of more than one chemical topic for their solution) (p = 0.01) than their counterparts. Significantly higher achievement for treatment students is also measured on an unannounced test (p = 0.02). Analysis of interview transcripts of both treatment and control students illustrates a Rolodex approach to problem solving employed by all students in this study. The Rolodex approach involves organizing equations used to solve problems on mental index cards and flipping through them, matching units given when a new problem is to be solved. A second phenomenon observed during student interviews is the absence of a link between the conceptual understanding of the chemical concepts involved in a problem and the problem-solving skills employed to correctly solve problems. This study shows that explicit training in categorization skills and the EMPS can lead to higher achievement in complex problem-solving situations (combination problems and unannounced test). However, such achievement may be limited by the lack of linkages between students' conceptual understanding and improved problem-solving skill.
An advanced probabilistic structural analysis method for implicit performance functions
NASA Technical Reports Server (NTRS)
Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.
1989-01-01
In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
NASA Astrophysics Data System (ADS)
Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin
2018-01-01
We present a numerical method to solve for phase dispersion curves in general anisotropic plates. This approach involves an exact solution to the problem in the form of Legendre polynomials of multiple integrals, which we substitute into the state-vector formalism. In order to improve the efficiency of the proposed method, we made a special effort to demonstrate the analytical methodology. Furthermore, we analyzed the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates. The basic feature of the proposed method is the expansion of field quantities by Legendre polynomials. The Legendre polynomial method avoids solving the transcendental dispersion equation, which can only be solved numerically. This state-vector formalism combined with Legendre polynomial expansion distinguishes adjacent dispersion modes clearly, even when the modes are very close. We then illustrate the theoretical solutions of the dispersion curves obtained by this method for isotropic and anisotropic plates. Finally, we compare the proposed method with the global matrix method (GMM), which shows excellent agreement.
Design and Diagnosis Problem Solving with Multifunctional Technical Knowledge Bases
1992-09-29
Design problem solving is a complex activity involving a number of subtasks, and a number of alternative methods potentially available…
PAN AIR summary document (version 1.0)
NASA Technical Reports Server (NTRS)
Derbyshire, T.; Sidwell, K. W.
1982-01-01
The capabilities and limitations of the panel aerodynamics (PAN AIR) computer program system are summarized. This program uses a higher order panel method to solve boundary value problems involving the Prandtl-Glauert equation for subsonic and supersonic potential flows. Both aerodynamic and hydrodynamic problems can be solved using this modular software which is written for the CDC 6600 and 7600, and the CYBER 170 series computers.
ERIC Educational Resources Information Center
Andersen, Erling B.
A computer program for solving the conditional likelihood equations arising in the Rasch model for questionnaires is described. The estimation method and the computational problems involved are described in a previous research report by Andersen, but a summary of those results is given in two sections of this paper. A working example is also…
Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem
NASA Astrophysics Data System (ADS)
Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.
2017-05-01
In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
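The sketch below is a generic Tikhonov-regularized nonlinear least-squares fit with SciPy, shown only to illustrate the kind of estimation step described above; the forward model, data, and regularization weight are hypothetical and not the authors' pseudospectral/Gauss-Newton code.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model: decaying exponential observed at times t
def model(p, t):
    return p[0] * np.exp(-p[1] * t)

t = np.linspace(0.0, 5.0, 50)
p_true = np.array([2.0, 0.7])
rng = np.random.default_rng(1)
d = model(p_true, t) + 0.01 * rng.standard_normal(t.size)   # synthetic data

lam = 1e-3  # regularization weight (would be chosen by a selection rule)

def residuals(p):
    # Data misfit augmented with Tikhonov penalty sqrt(lam)*p
    return np.concatenate([model(p, t) - d, np.sqrt(lam) * p])

fit = least_squares(residuals, x0=np.array([1.0, 1.0]))
print(fit.x)   # approximately recovers p_true
```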
Recursive heuristic classification
NASA Technical Reports Server (NTRS)
Wilkins, David C.
1994-01-01
The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.
Numerical Modeling of Saturated Boiling in a Heated Tube
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Hartwig, Jason
2017-01-01
This paper describes a mathematical formulation and numerical solution of boiling in a heated tube. The mathematical formulation involves a discretization of the tube into a flow network consisting of fluid nodes and branches and a thermal network consisting of solid nodes and conductors. In the fluid network, the mass, momentum and energy conservation equations are solved and in the thermal network, the energy conservation equation of solids is solved. A pressure-based, finite-volume formulation has been used to solve the equations in the fluid network. The system of equations is solved by a hybrid numerical scheme which solves the mass and momentum conservation equations by a simultaneous Newton-Raphson method and the energy conservation equation by a successive substitution method. The fluid network and thermal network are coupled through heat transfer between the solid and fluid nodes which is computed by Chen's correlation of saturated boiling heat transfer. The computer model is developed using the Generalized Fluid System Simulation Program and the numerical predictions are compared with test data.
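As a hedged illustration of the simultaneous Newton-Raphson step mentioned above (not the Generalized Fluid System Simulation Program implementation), a minimal Newton solver for a toy two-equation residual system might look like this:

```python
import numpy as np

def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 by the Newton-Raphson method with analytic Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # Newton step from the linearized system
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy two-unknown "network" residuals (illustrative, not the GFSSP equations)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(newton_raphson(F, J, [1.0, 1.0]))   # converges to (1, 2)
```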
Applications of an exponential finite difference technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Handschuh, R.F.; Keith, T.G. Jr.
1988-07-01
An exponential finite difference scheme first presented by Bhattacharya for one dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one dimensional cylindrical coordinates and was applied to two and three dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one and two dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available or to results obtained by other numerical methods.
An efficient strongly coupled immersed boundary method for deforming bodies
NASA Astrophysics Data System (ADS)
Goza, Andres; Colonius, Tim
2016-11-01
Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
ERIC Educational Resources Information Center
Hollister, James; Richie, Sam; Weeks, Arthur
2010-01-01
This study investigated the various methods involved in creating an intelligent tutor for the University of Central Florida Web Applets (UCF Web Applets), an online environment where students can perform and/or practice experiments. After conducting research into various methods, two major models emerged. These models include: 1) solving the…
HEMP 3D: A finite difference program for calculating elastic-plastic flow, appendix B
NASA Astrophysics Data System (ADS)
Wilkins, Mark L.
1993-05-01
The HEMP 3D program can be used to solve problems in solid mechanics involving dynamic plasticity and time dependent material behavior and problems in gas dynamics. The equations of motion, the conservation equations, and the constitutive relations listed below are solved by finite difference methods following the format of the HEMP computer simulation program formulated in two space dimensions and time.
Pilot interaction with automated airborne decision making systems
NASA Technical Reports Server (NTRS)
Rouse, W. B.; Hammer, J. M.; Morris, N. M.; Knaeuper, A. E.; Brown, E. N.; Lewis, C. M.; Yoon, W. C.
1984-01-01
Two project areas were pursued: the intelligent cockpit and human problem solving. The first area involves an investigation of the use of advanced software engineering methods to aid aircraft crews in procedure selection and execution. The second area is focused on human problem solving in dynamic environments, particularly in terms of identification of rule-based models and alternative approaches to training and aiding. Progress in each area is discussed.
Cognitive Process Modeling of Spatial Ability: The Assembling Objects Task
ERIC Educational Resources Information Center
Ivie, Jennifer L.; Embretson, Susan E.
2010-01-01
Spatial ability tasks appear on many intelligence and aptitude tests. Although the construct validity of spatial ability tests has often been studied through traditional correlational methods, such as factor analysis, less is known about the cognitive processes involved in solving test items. This study examines the cognitive processes involved in…
An Alternative Method to Gauss-Jordan Elimination: Minimizing Fraction Arithmetic
ERIC Educational Resources Information Center
Smith, Luke; Powell, Joan
2011-01-01
When solving systems of equations by using matrices, many teachers present a Gauss-Jordan elimination approach to row reducing matrices that can involve painfully tedious operations with fractions (which I will call the traditional method). In this essay, I present an alternative method to row reduce matrices that does not introduce additional…
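The essay's fraction-minimizing alternative is not spelled out in this abstract; for context, a minimal sketch of the traditional Gauss-Jordan reduction it contrasts with is shown below (no pivoting, illustrative system only).

```python
import numpy as np

def gauss_jordan(aug):
    """Row-reduce an augmented matrix [A | b] to reduced row-echelon form
    (the traditional method; pivoting is omitted for brevity)."""
    M = aug.astype(float)
    rows, _ = M.shape
    for i in range(rows):
        M[i] = M[i] / M[i, i]                  # scale pivot row (fractions appear here)
        for r in range(rows):
            if r != i:
                M[r] = M[r] - M[r, i] * M[i]   # eliminate column i in the other rows
    return M

aug = np.array([[ 2,  1, -1,   8],
                [-3, -1,  2, -11],
                [-2,  1,  2,  -3]])
print(gauss_jordan(aug))   # last column holds the solution (2, 3, -1)
```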
Conformal mapping for multiple terminals
Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao
2016-01-01
Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
Jona, Celine M.H.; Labuschagne, Izelle; Mercieca, Emily-Clare; Fisher, Fiona; Gluyas, Cathy; Stout, Julie C.; Andrews, Sophie C.
2017-01-01
Background: Family functioning in Huntington’s disease (HD) is known from previous studies to be adversely affected. However, which aspects of family functioning are disrupted is unknown, limiting the empirical basis around which to create supportive interventions. Objective: The aim of the current study was to assess family functioning in HD families. Methods: We assessed family functioning in 61 participants (38 HD gene-expanded participants and 23 family members) using the McMaster Family Assessment Device (FAD; Epstein, Baldwin and Bishop, 1983), which provides scores for seven domains of functioning: Problem Solving; Communication; Affective Involvement; Affective Responsiveness; Behavior Control; Roles; and General Family Functioning. Results: The most commonly reported disrupted domain for HD participants was Affective Involvement, which was reported by 39.5% of HD participants, followed closely by General Family Functioning (36.8%). For family members, the most commonly reported dysfunctional domains were Affective Involvement and Communication (both 52.2%). Furthermore, symptomatic HD participants reported more disruption to Problem Solving than pre-symptomatic HD participants. In terms of agreement between pre-symptomatic and symptomatic HD participants and their family members, all domains showed moderate to very good agreement. However, on average, family members rated Communication as more disrupted than their HD affected family member. Conclusion: These findings highlight the need to target areas of emotional engagement, communication skills and problem solving in family interventions in HD. PMID:28968240
One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1991-01-01
The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Distributed control as well as boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method in distributed control, pointwise control, and boundary control problems.
NASA Astrophysics Data System (ADS)
Ping, Owi Wei; Ahmad, Azhar; Adnan, Mazlini; Hua, Ang Kean
2017-05-01
Higher Order Thinking Skills (HOTS) is a new concept of education reform based on Bloom's Taxonomy. The concept concentrates on students' understanding in the learning process based on their own methods. HOTS questions can train students to think creatively, critically, and innovatively. The aim of this study was to identify students' proficiency in solving HOTS mathematics questions by using the i-Think map. This research took place in Sabak Bernam, Selangor. The method applied is a quantitative approach involving approximately all of the Standard Five students. A pre-test and post-test were conducted before and after the intervention of using the i-Think map in solving HOTS questions. The results indicate a significant improvement in the post-test, showing that applying the i-Think map enhances students' ability to solve HOTS questions. Survey analysis showed that 90% of the students agreed that the i-Think map helped them analyze the questions carefully and use keywords in the map to solve them. In conclusion, this process helps students minimize mistakes when solving the questions. Therefore, teachers should guide students in applying suitable i-Think maps and methods for analyzing questions through keywords.
Efficient dual approach to distance metric learning.
Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton
2014-02-01
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, the interior-point methods only practically solve problems exhibiting less than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a limit upon the size of problem that can practically be solved of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is treated. Leading eigenvalues (i.e., having maximal real part) of large matrices that arise from discretization are to be calculated. An efficient multigrid method for solving these problems is presented. The method begins by obtaining an initial approximation for the dominant subspace on a coarse level using a damped Jacobi relaxation. This proceeds until enough accuracy for the dominant subspace has been obtained. The resulting grid functions are then used as an initial approximation for appropriate eigenvalue problems. These problems are solved first on coarse levels, followed by refinement until a desired accuracy for the eigenvalues has been achieved. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a non-standard way in which the right hand side of the coarse grid equations involves unknown parameters to be solved for on the coarse grid. This in particular leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem demonstrate the effectiveness of the method proposed. Using an FMG algorithm, a solution to the level of discretization errors is obtained in just a few work units (less than 10), where a work unit is the work involved in one Jacobi relaxation on the finest level.
Problem Solving and Comprehension. Third Edition.
ERIC Educational Resources Information Center
Whimbey, Arthur; Lochhead, Jack
This book is directed toward increasing students' ability to analyze problems and comprehend what they read and hear. It outlines and illustrates the methods that good problem solvers use in attacking complex ideas, and provides practice in applying these methods to a variety of questions involving comprehension and reasoning. Chapter I includes a…
On Partial Fraction Decompositions by Repeated Polynomial Divisions
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2017-01-01
We present a method for finding partial fraction decompositions of rational functions with linear or quadratic factors in the denominators by means of repeated polynomial divisions. This method does not involve differentiation or solving linear equations for obtaining the unknown partial fraction coefficients, which is very suitable for either…
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method based on optimization techniques yields closer correlation with data than traditional method. Involves no assumptions regarding the gamma_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
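A minimal sketch of fitting a Prony series by least squares is given below; it uses synthetic data and fixed relaxation times and is not the original NASA routine.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal Prony-series fit by least squares (illustrative only):
#   G(t) ≈ g_inf + sum_i g_i * exp(-t / tau_i)
t = np.linspace(0.0, 10.0, 200)
data = 1.0 + 0.8 * np.exp(-t / 0.5) + 0.4 * np.exp(-t / 3.0)   # synthetic "measurements"

tau = np.array([0.5, 3.0])   # relaxation times fixed on a grid (a common choice)

def residuals(p):
    g_inf, g = p[0], p[1:]
    return g_inf + np.exp(-t[:, None] / tau) @ g - data

fit = least_squares(residuals, x0=np.ones(1 + tau.size), bounds=(0.0, np.inf))
print(fit.x)   # recovers approximately [1.0, 0.8, 0.4]
```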
NASA Astrophysics Data System (ADS)
Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven
2018-07-01
We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
Multilevel acceleration of scattering-source iterations with application to electron transport
Drumm, Clif; Fan, Wesley
2017-08-18
Acceleration/preconditioning strategies available in the SCEPTRE radiation transport code are described. A flexible transport synthetic acceleration (TSA) algorithm that uses a low-order discrete-ordinates (S_N) or spherical-harmonics (P_N) solve to accelerate convergence of a high-order S_N source-iteration (SI) solve is described. Convergence of the low-order solves can be further accelerated by applying off-the-shelf incomplete-factorization or algebraic-multigrid methods. Also available is an algorithm that uses a generalized minimum residual (GMRES) iterative method rather than SI for convergence, using a parallel sweep-based solver to build up a Krylov subspace. TSA has been applied as a preconditioner to accelerate the convergence of the GMRES iterations. The methods are applied to several problems involving electron transport and problems with artificial cross sections with large scattering ratios. These methods were compared and evaluated by considering material discontinuities and scattering anisotropy. Observed accelerations obtained are highly problem dependent, but speedup factors around 10 have been observed in typical applications.
Multigrid methods for bifurcation problems: The self adjoint case
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1987-01-01
This paper deals with multigrid methods for computational problems that arise in the theory of bifurcation and is restricted to the self adjoint case. The basic problem is to solve for arcs of solutions, a task that is done successfully with an arc length continuation method. Other important issues are, for example, detecting and locating singular points as part of the continuation process, switching branches at bifurcation points, etc. Multigrid methods have been applied to continuation problems. These methods work well at regular points and at limit points, while they may encounter difficulties in the vicinity of bifurcation points. A new continuation method that is very efficient also near bifurcation points is presented here. The other issues mentioned above are also treated very efficiently with appropriate multigrid algorithms. For example, it is shown that limit points and bifurcation points can be solved for directly by a multigrid algorithm. Moreover, the algorithms presented here solve the corresponding problems in just a few work units (about 10 or less), where a work unit is the work involved in one local relaxation on the finest grid.
Linear solver performance in elastoplastic problem solution on GPU cluster
NASA Astrophysics Data System (ADS)
Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.
2017-12-01
Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in the simulations of three-dimensional metal matrix composite microvolume deformation, tens or hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The method's convergence depends strongly on the operator spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used. Different methods may be preferable for different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
Kirchhoff and Ohm in action: solving electric currents in continuous extended media
NASA Astrophysics Data System (ADS)
Dolinko, A. E.
2018-03-01
In this paper we show a simple and versatile computational simulation method for determining electric currents and electric potential in 2D and 3D media with an arbitrary distribution of resistivity. One of the highlights of the proposed method is that the simulation space containing the distribution of resistivity and the points of externally applied voltage is introduced by means of digital images or bitmaps, which easily allows simulating any phenomena involving distributions of resistivity. The simulation is based on Kirchhoff’s laws of electric currents and it is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example of application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, as well as researchers in the area of continuous electric media, who may find in it a simple and powerful tool for investigation.
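A minimal sketch of the general idea follows, under stated assumptions (a made-up conductivity map, simple averaged link conductances, and top/bottom edges simply held at zero rather than treated as insulating boundaries); it is not the author's code, but it shows how node potentials can be relaxed iteratively until Kirchhoff's current law is approximately satisfied.

```python
# Minimal sketch (not the author's code): iterative relaxation of node
# potentials on a 2-D grid with an arbitrary conductivity map until
# Kirchhoff's current law is approximately satisfied at interior nodes.
import numpy as np

ny, nx = 40, 60
sigma = np.ones((ny, nx))          # conductivity map (could come from a bitmap)
sigma[15:25, 20:40] = 0.01         # a poorly conducting inclusion ("crack")

V = np.zeros((ny, nx))
V[:, 0], V[:, -1] = 1.0, 0.0       # electrodes on the left/right edges
# (top and bottom rows are simply held at zero here for brevity;
#  true insulating boundaries would need Neumann handling)

# link conductances between each interior node and its four neighbours
gE = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, 2:])
gW = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, :-2])
gN = 0.5 * (sigma[1:-1, 1:-1] + sigma[:-2, 1:-1])
gS = 0.5 * (sigma[1:-1, 1:-1] + sigma[2:, 1:-1])

for _ in range(5000):              # Jacobi-style sweeps
    V[1:-1, 1:-1] = (gE * V[1:-1, 2:] + gW * V[1:-1, :-2] +
                     gN * V[:-2, 1:-1] + gS * V[2:, 1:-1]) / (gE + gW + gN + gS)

print("potential at the grid centre:", V[ny // 2, nx // 2])
```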
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problems just a few times, independent of the number of design parameters. The method can be applied using single-grid iterations as well as with multigrid solvers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haack, Jeffrey; Shohet, Gil
2016-12-02
The software implements a heterogeneous multiscale method (HMM), which involves solving a classical molecular dynamics (MD) problem and then computing the entropy production in order to obtain the relaxation times toward equilibrium for use in a Bhatnagar-Gross-Krook (BGK) solver.
ERIC Educational Resources Information Center
Sandefur, James T.
1991-01-01
Discussed is the process of translating situations involving changing quantities into mathematical relationships. This process, called dynamical modeling, allows students to learn new mathematics while sharpening their algebraic skills. A description of dynamical systems, problem-solving methods, a graphical analysis, and available classroom…
Tracking problem solving by multivariate pattern analysis and Hidden Markov Model algorithms.
Anderson, John R
2012-03-01
Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application involves using fMRI activity to track what students are doing as they solve a sequence of algebra problems. The methodology achieves considerable accuracy at determining both what problem-solving step the students are taking and whether they are performing that step correctly. The second "model discovery" application involves using statistical model evaluation to determine how many substates are involved in performing a step of algebraic problem solving. This research indicates that different steps involve different numbers of substates and these substates are associated with different fluency in algebra problem solving. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yarmohammadi, M.; Javadi, S.; Babolian, E.
2018-04-01
In this study a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving Caputo derivative. This method is equipped with a pre-algorithm to find the singularity index of solution of the problem. This pre-algorithm gives us a real parameter as the index of the fractional interpolation basis, for which the SIM achieves the highest order of convergence. In comparison with some recent results about the error estimates for fractional approximations, a more accurate convergence rate has been attained. We have also proposed the order of convergence for fractional interpolation error under the L2-norm. Finally, general error analysis of SIM has been considered. The numerical results clearly demonstrate the capability of the proposed method.
An Effective Method of Introducing the Periodic Table as a Crossword Puzzle at the High School Level
ERIC Educational Resources Information Center
Joag, Sushama D.
2014-01-01
A simple method to introduce the modern periodic table of elements at the high school level as a game of solving a crossword puzzle is presented here. A survey to test the effectiveness of this new method relative to the conventional method, involving use of a wall-mounted chart of the periodic table, was conducted on a convenience sample. This…
Measuring Family Problem Solving: The Family Problem Solving Diary.
ERIC Educational Resources Information Center
Kieren, Dianne K.
The development and use of the family problem-solving diary are described. The diary is one of several indicators and measures of family problem-solving behavior. It provides a record of each person's perception of day-to-day family problems (what the problem concerns, what happened, who got involved, what those involved did, how the problem…
Pedagogy and/or technology: Making difference in improving students' problem solving skills
NASA Astrophysics Data System (ADS)
Hrepic, Zdeslav; Lodder, Katherine; Shaw, Kimberly A.
2013-01-01
Pen input computers combined with interactive software may have substantial potential for promoting active instructional methodologies and for facilitating students' problem solving ability. An excellent example is a study in which introductory physics students improved retention, conceptual understanding and problem solving abilities when one of three weekly lectures was replaced with group problem solving sessions facilitated with Tablet PCs and DyKnow software [1,2]. The research goal of the present study was to isolate the effect of the methodology itself (using additional time to teach problem solving) from that of the involved technology. In Fall 2011 we compared the performance of students taking the same introductory physics lecture course while enrolled in two separate problem-solving sections. One section used pen-based computing to facilitate group problem solving while the other section used low-tech methods for one third of the semester (covering Kinematics), and then traded technologies for the middle third of the term (covering Dynamics). Analysis of quiz, exam and standardized pre-post test results indicated no significant difference in scores of the two groups. Combining this result with those of previous studies implies primacy of pedagogy (collaborative problem solving itself) over technology for student learning in problem solving recitations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jian Hua; Gooding, R.J.
1994-06-01
We propose an algorithm to solve a system of partial differential equations of the type u_t(x,t) = F(x, t, u, u_x, u_xx, u_xxx, u_xxxx) in 1 + 1 dimensions using the method of lines with piecewise ninth-order Hermite polynomials, where u and F are N-dimensional vectors. Nonlinear boundary conditions are easily incorporated with this method. We demonstrate the accuracy of this method through comparisons of numerically determined solutions to the analytical ones. Then, we apply this algorithm to a complicated physical system involving nonlinear and nonlocal strain forces coupled to a thermal field. 4 refs., 5 figs., 1 tab.
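The following is a much simpler method-of-lines sketch than the paper's scheme: a plain second-order finite-difference stencil replaces the piecewise ninth-order Hermite polynomials, and the test equation u_t = u_xx is assumed purely for illustration.

```python
# Method-of-lines sketch with a plain second-order stencil; the paper uses
# piecewise ninth-order Hermite polynomials, so this shows the idea only.
import numpy as np
from scipy.integrate import solve_ivp

N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
u0 = np.sin(np.pi * x)                 # initial data; u(0,t) = u(1,t) = 0

def rhs(t, u):
    dudt = np.zeros_like(u)
    dudt[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2   # u_t = u_xx
    return dudt                        # boundary values are held fixed

sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", t_eval=[0.1])
exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
print("max error at t = 0.1:", np.max(np.abs(sol.y[:, -1] - exact)))
```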
Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination
NASA Technical Reports Server (NTRS)
Ryne, Mark S.; Wang, Tseng-Chan
1991-01-01
An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
Multigrid method for stability problems
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo
1988-01-01
The problem of calculating the stability of steady state solutions of differential equations is addressed. Leading eigenvalues of large matrices that arise from discretization are calculated, and an efficient multigrid method for solving these problems is presented. The resulting grid functions are used as initial approximations for appropriate eigenvalue problems. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a nonstandard way in which the right-hand side of the coarse grid equations involves unknown parameters to be solved on the coarse grid. This leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem are presented which demonstrate the effectiveness of the method.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
An investigation of aviator problem-solving skills as they relate to amount of total flight time
NASA Astrophysics Data System (ADS)
Guilkey, James Elwood, Jr.
As aircraft become increasingly reliable, safety issues have shifted towards the human component of flight, the pilot. Jensen (1995) indicated that 80% of all General Aviation (GA) accidents are the result, at least in part, of errors committed by the aviator. One major focus of current research involves aviator decision making (ADM). ADM combines a broad range of psychological factors including personality, attitude, and motivation. This approach fails to isolate certain key components such as aviator problem-solving (APS) which are paramount to safe operations. It should be noted that there is a clear delineation between problem-solving and decision making, and one should not assume that they are homogeneous. For years, researchers, industry, and the Federal Aviation Administration (FAA) have depended on total flight hours as the standard by which to judge aviator expertise. A pilot with less than a prescribed number of hours is considered a novice while those above that mark are considered experts. The reliance on time as a predictor of performance may be accurate when considering skills which are required on every flight (i.e., takeoff and landing), but one cannot assume that this holds true for all aspects of aviator expertise. Complex problem-solving, for example, is something that is rarely faced during the normal course of flying. In fact, there are a myriad of procedures and FAA-mandated regulations designed to assist pilots in avoiding problems. Thus, one should not assume that aviator problem-solving skills will increase over time. This study investigated the relationship between problem-solving skills of general aviation pilots and total number of flight hours. It was discovered that flight time is not a good predictor of problem-solving performance. Two distinct strategies were identified in the study. The first, progressive problem solving (PPS), was characterized by a stepwise method in which pilots gathered information, formulated hypotheses, and evaluated outcomes. Both high-time and low-time pilots demonstrated this approach. The second, termed knee-jerk decision making, was distinguished by a lack of problem-solving abilities and involved an almost immediate decision with little if any supporting information. Again, both high- and low-time pilots performed in this manner. The result of these findings is a recommendation that the FAA adopt new training methods which will allow pilots to develop the skills required to handle critical in-flight situations.
Asymptotic-induced numerical methods for conservation laws
NASA Technical Reports Server (NTRS)
Garbey, Marc; Scroggs, Jeffrey S.
1990-01-01
Asymptotic-induced methods are presented for the numerical solution of hyperbolic conservation laws with or without viscosity. The methods consist of multiple stages. The first stage is to obtain a first approximation by using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems identified by using techniques derived via asymptotics. Finally, a residual correction increases the accuracy of the scheme. The method is derived and justified with singular perturbation techniques.
NASA Astrophysics Data System (ADS)
Zirconia, A.; Supriyanti, F. M. T.; Supriatna, A.
2018-04-01
This study aims to determine the enhancement of students' generic science skills through implementation of the IDEAL problem-solving model in a genetic information course. The research used a mixed method with a pretest-posttest nonequivalent control group design. The subjects were chemistry students enrolled in a biochemistry course, consisting of 22 students in the experimental class and 19 students in the control class. The instrument was an essay test involving six indicators of generic science skills: indirect observation, causality thinking, logical frame, self-consistent thinking, symbolic language, and concept development. The results showed that the genetic information course using the IDEAL problem-solving model enhanced generic science skills in the low category with…
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
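The sketch below illustrates only the inner kernel mentioned at the end of the abstract, namely assembling a graph Laplacian and solving a sparse system with it; the toy graph, the small regularization, and the right-hand side are assumptions for the demo, not part of the paper's interior-point formulation.

```python
# Sketch of the inner kernel only: assembling a graph Laplacian for a toy
# graph and solving a sparse linear system with it.  The paper embeds such
# solves inside an interior-point method for the l1 reformulation of the cut.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 4.0), (3, 0, 1.0), (0, 2, 0.5)]
n = 4
rows, cols, vals = [], [], []
for i, j, w in edges:
    rows += [i, j, i, j]
    cols += [j, i, i, j]
    vals += [-w, -w, w, w]             # -w off the diagonal, +w accumulated on it
L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

A = L + 1e-6 * sp.identity(n)          # tiny shift: the Laplacian itself is singular
b = np.array([1.0, 0.0, 0.0, -1.0])    # a source/sink vector (sums to zero)
x = spla.spsolve(A.tocsc(), b)
print("node potentials:", x)
```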
Systems of Inhomogeneous Linear Equations
NASA Astrophysics Data System (ADS)
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
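The "specialized Gaussian elimination method" for tridiagonal systems mentioned above is commonly known as the Thomas algorithm; a minimal sketch follows, applied to an assumed second-difference test matrix of the kind that arises in cubic spline interpolation or numerical second derivatives.

```python
# Sketch of the specialized elimination for tridiagonal systems mentioned in
# the text (the Thomas algorithm), applied to an assumed second-difference matrix.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused), b = diagonal,
    c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
a = np.full(n, -1.0)
b = np.full(n, 2.0)
c = np.full(n, -1.0)
print(thomas(a, b, c, np.ones(n)))    # expected: [3, 5, 6, 6, 5, 3]
```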
Problem solving therapy - use and effectiveness in general practice.
Pierce, David
2012-09-01
Problem solving therapy (PST) is one of the focused psychological strategies supported by Medicare for use by appropriately trained general practitioners. This article reviews the evidence base for PST and its use in the general practice setting. Problem solving therapy involves patients learning or reactivating problem solving skills. These skills can then be applied to specific life problems associated with psychological and somatic symptoms. Problem solving therapy is suitable for use in general practice for patients experiencing common mental health conditions and has been shown to be as effective in the treatment of depression as antidepressants. Problem solving therapy involves a series of sequential stages. The clinician assists the patient to develop new empowering skills, and then supports them to work through the stages of therapy to determine and implement the solution selected by the patient. Many experienced GPs will identify their own existing problem solving skills. Learning about PST may involve refining and focusing these skills.
ERIC Educational Resources Information Center
Kostadinov, Boyan
2013-01-01
This article attempts to introduce the reader to computational thinking and solving problems involving randomness. The main technique being employed is the Monte Carlo method, using the freely available software "R for Statistical Computing." The author illustrates the computer simulation approach by focusing on several problems of…
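Since the abstract is truncated, the sketch below is only a generic illustration of the Monte Carlo style of reasoning the article describes, written in Python rather than the R used by the author; the birthday-coincidence problem is an assumed example, not one taken from the article.

```python
# Generic Monte Carlo illustration in Python (the article uses R): estimate
# the probability that at least two people in a group of 23 share a birthday.
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
hits = 0
for _ in range(trials):
    birthdays = rng.integers(0, 365, size=23)
    hits += len(np.unique(birthdays)) < 23
print("estimated probability:", hits / trials)   # the exact value is about 0.507
```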
Conversion of a Rhotrix to a "Coupled Matrix"
ERIC Educational Resources Information Center
Sani, B.
2008-01-01
In this note, a method of converting a rhotrix to a special form of matrix termed a "coupled matrix" is proposed. The special matrix can be used to solve various problems involving n x n and (n - 1) x (n - 1) matrices simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gokaltun, Seckin; McDaniel, Dwayne; Roelant, David
2012-07-01
Multiphase flows involving gas and liquid phases can be observed in engineering operations at various Department of Energy sites, such as mixing of slurries using pulsed-air mixers and hydrogen gas generation in liquid waste tanks. The dynamics of the gas phase in the liquid domain play an important role in the mixing effectiveness of the pulsed-air mixers or in the level of gas pressure build-up in waste tanks. To understand such effects, computational fluid dynamics (CFD) methods can be utilized by developing a three-dimensional computerized multiphase flow model that can predict accurately the behavior of gas motion inside liquid-filled tanks by solving the governing mathematical equations that represent the physics of the phenomena. In this paper, such a CFD method, the lattice Boltzmann method (LBM), is presented that can model multiphase flows accurately and efficiently. LBM is favored over traditional Navier-Stokes based computational models since interfacial forces are handled more effectively in LBM. The LBM is easier to program, more efficient to solve on parallel computers, and has the ability to capture the interface between different fluid phases intrinsically. The LBM used in this paper can solve for the incompressible and viscous flow field in three dimensions, while at the same time, solve the Cahn-Hilliard equation to track the position of the gas-liquid interface specifically when the density and viscosity ratios between the two fluids are high. This feature is of primary importance since the previous LBM models proposed for multiphase flows become unstable when the density ratio is larger than 10. The ability to provide stable and accurate simulations at large density ratios becomes important when the simulation case involves fluids such as air and water with a density ratio around 1000 that are common to many engineering problems. In order to demonstrate the capability of the 3D LBM method at high density ratios, a static bubble simulation is conducted to solve for the pressure difference between the inside and outside of a gas bubble in a liquid domain. Once the results show that the method is in agreement with the Laplace law, buoyant bubble simulations are conducted. The initial results obtained for bubble shape during the rising process were found to be in agreement with the theoretical expectations. (authors)
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.
Mental Models for Mechanical Comprehension. A Review of Literature.
1986-06-01
the mental models that people use to understand and solve problems involving mechanics and motion. Method The existing psychological literature on...have been used to investigate mental models. The constructionist school is concerned with how mental models are formed. The information-processing...school uses the experimental methods of modern cognitive psychology to investigate mental structures. The componential approach attempts to meld the
Development Optimization and Uncertainty Analysis Methods for Oil and Gas Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ettehadtavakkol, Amin, E-mail: amin.ettehadtavakkol@ttu.edu; Jablonowski, Christopher; Lake, Larry
Uncertainty complicates the development optimization of oil and gas exploration and production projects, but methods have been devised to analyze uncertainty and its impact on optimal decision-making. This paper compares two methods for development optimization and uncertainty analysis: Monte Carlo (MC) simulation and stochastic programming. Two example problems for a gas field development and an oilfield development are solved and discussed to elaborate the advantages and disadvantages of each method. Development optimization involves decisions regarding the configuration of initial capital investment and subsequent operational decisions. Uncertainty analysis involves the quantification of the impact of uncertain parameters on the optimum design concept. The gas field development problem is designed to highlight the differences in the implementation of the two methods and to show that both methods yield the exact same optimum design. The results show that both MC optimization and stochastic programming provide unique benefits, and that the choice of method depends on the goal of the analysis. While the MC method generates more useful information, along with the optimum design configuration, the stochastic programming method is more computationally efficient in determining the optimal solution. Reservoirs comprise multiple compartments and layers with multiphase flow of oil, water, and gas. We present a workflow for development optimization under uncertainty for these reservoirs, and solve an example on the design optimization of a multicompartment, multilayer oilfield development.
A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems
NASA Astrophysics Data System (ADS)
Chan, Tony; Szeto, Tedd
1994-03-01
We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which itself is a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined and causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine dependent parameters and is designed to skip near breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
Data Processing: Fifteen Suggestions for Computer Training in Your Business Education Classes.
ERIC Educational Resources Information Center
Barr, Lowell L.
1980-01-01
Presents 15 suggestions for training business education students in the use of computers. Suggestions involve computer language, method of presentation, laboratory time, programing assignments, instructions and handouts, problem solving, deadlines, reviews, programming concepts, programming logic, documentation, and defensive programming. (CT)
Mesoscale modeling: solving complex flows in biology and biotechnology.
Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander
2013-07-01
Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
An Action-Research Program for Increasing Employee Involvement in Problem Solving.
ERIC Educational Resources Information Center
Pasmore, William; Friedlander, Frank
1982-01-01
Describes the use of participative action research to solve problems of work-related employee injuries in a rural midwestern electronics plant by increasing employee involvement. The researchers established an employee problem-solving group that interviewed and surveyed workers, analyzed the results, and suggested new work arrangements. (Author/RW)
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-06-01
The present study examines 5- to 8-year-old children's relation reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's relational thinking. Developmental differences in problem-solving performance were largely due to deficiencies in engaging the processing strategies that are hypothesized to facilitate problem-solving performance. Feedback designed to highlight the relations between objects within the matrix improved 5- and 6-year-olds' problem-solving performance, as well as their use of appropriate processing strategies. Furthermore, children who engaged the processing strategies early on in the task were more likely to solve subsequent problems in later phases. These findings suggest that encoding relations, integrating rules, completing the model, and generalizing strategies across tasks are critical processing components that underlie relational thinking. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Analytical derivation: An epistemic game for solving mathematically based physics problems
NASA Astrophysics Data System (ADS)
Bajracharya, Rabindra R.; Thompson, John R.
2016-06-01
Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the analytical derivation game. This game involves deriving an equation through symbolic manipulations and routine mathematical operations, usually without any physical interpretation of the processes. This game often creates cognitive obstacles in students, preventing them from using alternative resources or better approaches during problem solving. We conducted hour-long, semi-structured, individual interviews with fourteen introductory physics students. Students were asked to solve four "pseudophysics" problems containing algebraic and graphical representations. The problems required the application of the fundamental theorem of calculus (FTC), which is one of the most frequently used mathematical concepts in physics problem solving. We show that the analytical derivation game is necessary, but not sufficient, to solve mathematically based physics problems, specifically those involving graphical representations.
Total Quality Management in Libraries. ERIC Digest.
ERIC Educational Resources Information Center
Masters, Denise G.
Total Quality Management (TQM) is "a system of continuous improvement employing participative management and centered on the needs of customers." Key components of TQM are employee involvement and training, problem-solving teams, statistical methods, long-term goals and thinking, and recognition that the system, not people, produces…
CASE STUDIES IN THE INTEGRATED USE OF SCALE ANALYSES TO SOLVE LEAD PROBLEMS
All methods of controlling lead corrosion involve immobilizing lead into relatively insoluble compounds that deposit on the interior wall of water pipes. Many different solid phases can form under the disparate conditions that exist in distribution systems, which range in how the...
Teaching Integer Operations Using Ring Theory
ERIC Educational Resources Information Center
Hirsch, Jenna
2012-01-01
A facility with signed numbers forms the basis for effective problem solving throughout developmental mathematics. Most developmental mathematics textbooks explain signed number operations using absolute value, a method that involves considering the problem in several cases (same sign, opposite sign), and in the case of subtraction, rewriting the…
Solving Differential Equations Using Modified Picard Iteration
ERIC Educational Resources Information Center
Robin, W. A.
2010-01-01
Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
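The authors' specific modification of Picard iteration is not reproduced here (the abstract is truncated); the sketch below shows only the basic, unmodified Picard fixed-point idea for an initial value problem, with an assumed test equation y' = y.

```python
# Plain Picard fixed-point sketch for y' = f(x, y), y(0) = 1 (the article's
# modified scheme is not reproduced): y_{k+1}(x) = 1 + integral_0^x f(t, y_k) dt,
# with the integral evaluated by the trapezoid rule on a grid.
import numpy as np

def f(x, y):
    return y                        # assumed test problem y' = y, solution e^x

x = np.linspace(0.0, 1.0, 201)
y = np.ones_like(x)                 # initial guess y_0(x) = y(0)
for _ in range(20):                 # successive substitutions
    g = f(x, y)
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))))
    y = 1.0 + integral
print("y(1) ~", y[-1], " exact:", np.e)
```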
NASA Technical Reports Server (NTRS)
Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
In this paper we review existing and develop new local discontinuous Galerkin methods for solving time dependent partial differential equations with higher order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection diffusion equations involving second derivatives and for KdV type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for the time dependent bi-harmonic type equations involving fourth derivatives, and partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L2 stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, on the local discontinuous Galerkin methods applied to equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher derivative cases, effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, with a locally uniform mesh.
NASA Astrophysics Data System (ADS)
Zendejas, Gerardo; Chiasson, Mike
This paper will propose and explore a method to enhance focal actors' abilities to enroll and control the many social and technical components interacting during the initiation, production, and diffusion of innovations. The reassembling and stabilizing of such components is the challenging goal of the focal actors involved in these processes. To address this possibility, a healthcare project involving the initiation, production, and diffusion of an IT-based innovation will be influenced by the researcher, using concepts from actor network theory (ANT), within an action research methodology (ARM). The experiences using this method, and the nature of enrolment and translation during its use, will highlight if and how ANT can provide a problem-solving method to help assemble the social and technical actants involved in the diffusion of an innovation. Finally, the paper will discuss the challenges and benefits of implementing such methods to attain widespread diffusion.
ERIC Educational Resources Information Center
Lin, Shih-Yin; Singh, Chandralekha
2015-01-01
It is well known that introductory physics students often have alternative conceptions that are inconsistent with established physical principles and concepts. Invoking alternative conceptions in the quantitative problem-solving process can derail the entire process. In order to help students solve quantitative problems involving strong…
Analytical Derivation: An Epistemic Game for Solving Mathematically Based Physics Problems
ERIC Educational Resources Information Center
Bajracharya, Rabindra R.; Thompson, John R.
2016-01-01
Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the "analytical derivation" game. This game involves deriving an…
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.
Abbasi, Mahdi
2014-01-01
The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method using the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
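The sketch below illustrates only the final algebraic step described above: once an equation has been reduced to a discrete (here, circular) convolution, it can be solved with the FFT instead of a dense matrix; the decaying kernel and the 1-D setting are assumptions for the demo, not the paper's 2-D D-bar formulation.

```python
# Sketch of the final algebraic step only: a discrete circular convolution
# equation k * x = b solved with the FFT in O(N log N), instead of forming a
# dense matrix.  The kernel and the 1-D setting are assumed for the demo.
import numpy as np

rng = np.random.default_rng(1)
N = 256
k = np.exp(-np.arange(N) / 8.0)          # stand-in convolution kernel
x_true = rng.standard_normal(N)
b = np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x_true)))   # b = k * x_true

K = np.fft.fft(k)                        # K has no (near-)zeros for this kernel
x = np.real(np.fft.ifft(np.fft.fft(b) / K))
print("max reconstruction error:", np.max(np.abs(x - x_true)))
```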
Methods for Prediction of High-Speed Reacting Flows in Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
2014-01-01
Research to develop high-speed airbreathing aerospace propulsion systems was underway in the late 1950s. A major part of the effort involved the supersonic combustion ramjet, or scramjet, engine. Work had also begun to develop computational techniques for solving the equations governing the flow through a scramjet engine. However, scramjet technology and the computational methods to assist in its evolution would remain apart for another decade. The principal barrier was that the computational methods needed for engine evolution lacked the computer technology required for solving the discrete equations resulting from the numerical methods. Even today, computer resources remain a major pacing item in overcoming this barrier. Significant advances have been made over the past 35 years, however, in modeling the supersonic chemically reacting flow in a scramjet combustor. To see how scramjet development and the required computational tools finally merged, we briefly trace the evolution of the technology in both areas.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently, a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving large linear systems for interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n^3) time and O(n^2) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix. Thus, Cholesky factorization could be used to solve their linear systems. They implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite. Thus, their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system. Then, we use SYMMLQ to solve the ordinary kriging system.
We show that solving large kriging systems becomes practical via tapering and iterative methods, and results in lower estimation errors compared to traditional local approaches, and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems. This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
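A small illustrative sketch of an ordinary kriging system solved with a symmetric-indefinite Krylov method follows; SciPy's MINRES stands in for SYMMLQ (which SciPy does not provide), the exponential covariance model and the query point are assumed, and tapering/sparsity is omitted for brevity.

```python
# Illustrative sketch: a small ordinary kriging system (covariance block plus
# the unbiasedness constraint row) solved with a symmetric-indefinite Krylov
# method.  SciPy's MINRES stands in for SYMMLQ; the covariance model is assumed.
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(2)
pts = rng.random((30, 2))                          # scattered data locations
z = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])  # observed values

def cov(h):
    return np.exp(-3.0 * h)                        # assumed covariance model

n = len(pts)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = cov(d)
A[:n, n] = A[n, :n] = 1.0                          # Lagrange-multiplier border

q = np.array([0.4, 0.6])                           # query point
rhs = np.append(cov(np.linalg.norm(pts - q, axis=1)), 1.0)

w, info = minres(A, rhs)                           # weights + multiplier
print("kriged estimate at q:", w[:n] @ z, "(info:", info, ")")
```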
ERIC Educational Resources Information Center
Burns, Barbara A.; Jordan, Thomas M.
2006-01-01
Business managers are faced with complex decisions involving a wide range of issues--technical, social, environmental, and financial--and their interaction. Our education system focuses heavily on presenting structured problems and teaching students to apply a set of tools or methods to solve these problems. Yet the most difficult thing to teach…
ERIC Educational Resources Information Center
Davis-McGibony, C. Michele
2010-01-01
The jigsaw technique has been used in a fourth-year biochemistry course to increase problem-solving abilities of the students. The jigsaw method is a cooperative-learning technique that involves a group structure. Students start with a "home" group. That group is responsible for learning an assigned portion of a task. Then the instructor separates…
The Dynamics of Life Skills Coaching.
ERIC Educational Resources Information Center
Saskatchewan NewStart, Inc., Prince Albert.
This book is used throughout the life skills coach training course. The content focuses on increasing understanding of the training material and on assisting in coaching life skills students. The course, based on adult training and counseling methods, involves the development of problem-solving behaviors in the management of personal affairs. The…
The Heat Is on: An Inquiry-Based Investigation for Specific Heat
ERIC Educational Resources Information Center
Herrington, Deborah G.
2011-01-01
A substantial number of upper-level science students and practicing physical science teachers demonstrate confusion about thermal equilibrium, heat transfer, heat capacity, and specific heat capacity. The traditional method of instruction, which involves learning the related definitions and equations, using equations to solve heat transfer…
Activities: Activities to Introduce Maxima-Minima Problems.
ERIC Educational Resources Information Center
Pleacher, David
1991-01-01
Presented are student activities that involve two standard problems from geometry and calculus--the volume of a box and the bank shot on a pool table. Problem solving is emphasized as a method of inquiry and application with descriptions of the results using graphical, numerical, and physical models. (JJK)
Environmental Education . . . The Way of the Hula Hoop?
ERIC Educational Resources Information Center
Applegate, Warren
1974-01-01
Given is information on a federally funded environmental education program based on field experience and community awareness. Problem-solving methods were used to involve students in local environmental issues. Field experiences included trips to the mountains and seashore and community projects in water quality, recycling, and nature trail…
Bringing Management Reality into the Classroom--The Development of Interactive Learning.
ERIC Educational Resources Information Center
Nicholson, Alastair
1997-01-01
Effective learning in management education can be enhanced by reproducing the real-world need to solve problems under pressure of time, inadequate information, and group interaction. An interactive classroom communication system involving problems in decision making and continuous improvement is one method for bridging theory and practice. (SK)
Sociodrama: Group Creative Problem Solving in Action.
ERIC Educational Resources Information Center
Riley, John F.
1990-01-01
Sociodrama is presented as a structured, yet flexible, method of encouraging the use of creative thinking to examine a difficult problem. An example illustrates the steps involved in putting sociodrama into action. Production techniques useful in sociodrama include the soliloquy, double, role reversal, magic shop, unity of opposites, and audience…
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
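A generic sketch of the incremental (correction, or "delta") form follows: an approximate operator is solved repeatedly for an update driven by the residual of the full equations; the random test matrix and diagonal approximate operator are assumptions for the demo, not the flow-solver factorization used in the study.

```python
# Generic sketch of the incremental ("delta"/correction) form: an approximate
# operator M is solved repeatedly for an update driven by the residual of the
# full operator A, instead of solving A x = b once in standard form.
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
M = np.diag(np.diag(A))               # a cheap approximation to A

x = np.zeros(n)
for k in range(50):
    r = b - A @ x                     # residual of the full equations
    if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
        break
    x += np.linalg.solve(M, r)        # solve only for the increment (delta)
print("iterations:", k, " final residual:", np.linalg.norm(b - A @ x))
```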
An optimization program based on the method of feasible directions: Theory and users guide
NASA Technical Reports Server (NTRS)
Belegundu, Ashok D.; Berke, Laszlo; Patnaik, Surya N.
1994-01-01
The theory and user instructions for an optimization code based on the method of feasible directions are presented. The code was written for wide distribution and ease of attachment to other simulation software. Although the theory of the method of feasible directions was developed in the 1960s, many considerations are involved in its actual implementation as a computer code. Included in the code are a number of features to improve robustness in optimization. The search direction is obtained by solving a quadratic program using an interior method based on Karmarkar's algorithm. The theory is discussed, focusing on the important and often overlooked role played by the various parameters guiding the iterations within the program. Also discussed is a robust approach for handling infeasible starting points. The code was validated by solving a variety of structural optimization test problems that have known solutions obtained by other optimization codes. It has been observed that this code is robust: it has solved a variety of problems from different starting points. However, the code is inefficient in that it takes considerable CPU time as compared with certain other available codes. Further work is required to improve its efficiency while retaining its robustness.
NASA Technical Reports Server (NTRS)
Chao, D. F. K.
1983-01-01
Transient numerical simulations of the de-icing of composite aircraft components by electrothermal heating were performed for a two-dimensional rectangular geometry. The implicit Crank-Nicolson formulation was used to ensure stability of the finite-difference heat conduction equations, and the phase change in the ice layer was simulated using the enthalpy method. The Gauss-Seidel point iterative method was used to solve the system of difference equations. Numerical solutions illustrating de-icer performance for various composite aircraft structures and environmental conditions are presented. Comparisons are made with previous studies. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
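For orientation, the sketch below shows Crank-Nicolson time stepping for one-dimensional transient conduction only; the phase-change (enthalpy) treatment and composite layers of the study are not reproduced, a direct solve replaces the Gauss-Seidel iteration, and the material and boundary values are assumed.

```python
# Crank-Nicolson sketch for 1-D transient conduction only; the enthalpy/phase-
# change treatment and composite layers of the study are not reproduced, and a
# direct solve replaces the Gauss-Seidel iteration.  All values are assumed.
import numpy as np

N, alpha, dx, dt = 51, 1.0e-5, 1.0e-3, 0.05    # nodes, diffusivity, spacing, step
r = alpha * dt / dx**2                         # r = 0.5 here; CN is stable anyway
T = np.full(N, 263.15)                         # initial temperature, K
T[0] = 300.0                                   # heated surface (Dirichlet)

A = (np.diag(np.full(N, 1.0 + r)) + np.diag(np.full(N - 1, -r / 2), 1)
     + np.diag(np.full(N - 1, -r / 2), -1))
B = (np.diag(np.full(N, 1.0 - r)) + np.diag(np.full(N - 1, r / 2), 1)
     + np.diag(np.full(N - 1, r / 2), -1))
for M in (A, B):                               # pin both boundary rows
    M[0, :], M[-1, :] = 0.0, 0.0
    M[0, 0], M[-1, -1] = 1.0, 1.0

for _ in range(200):                           # march 10 s forward in time
    T = np.linalg.solve(A, B @ T)
print("temperature ten nodes below the surface:", T[10])
```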
ERIC Educational Resources Information Center
Wareham, Todd
2017-01-01
In human problem solving, there is a wide variation between individuals in problem solution time and success rate, regardless of whether or not this problem solving involves insight. In this paper, we apply computational and parameterized analysis to a plausible formalization of extended representation change theory (eRCT), an integration of…
Applications of numerical methods to simulate the movement of contaminants in groundwater.
Sun, N Z
1989-01-01
This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on numerical methods for advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method, which can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When advective transport dominates dispersive transport, two kinds of numerical difficulties, overshoot and numerical dispersion, always arise when standard finite difference and finite element methods are used. To overcome these numerical difficulties, various numerical techniques have been developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given, and we also mention the problems of parameter identification, reliability analysis, and optimal-experiment design that are absolutely necessary for constructing a practical model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, G.C.; Clinard, E.H.; Sanders, W.M.
1975-01-01
The Enzyme-Labeled Antibody (ELA) test system has been adapted to microtiter trays for both cell bound and soluble antigens. Problems involving both readout instrumentation and reaction product stability have been solved. Progress involving application of the ELA system for detection of hog cholera, trichinosis, swine brucellosis, and swine and bovine tuberculosis is reported. Prototype instrumentation for automating ELA processing is being developed. (auth)
A collocation-shooting method for solving fractional boundary value problems
NASA Astrophysics Data System (ADS)
Al-Mdallal, Qasem M.; Syam, Muhammed I.; Anwar, M. N.
2010-12-01
In this paper, we discuss the numerical solution of special class of fractional boundary value problems of order 2. The method of solution is based on a conjugating collocation and spline analysis combined with shooting method. A theoretical analysis about the existence and uniqueness of exact solution for the present class is proven. Two examples involving Bagley-Torvik equation subject to boundary conditions are also presented; numerical results illustrate the accuracy of the present scheme.
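The paper's fractional collocation scheme is not reproduced here; the sketch below shows only the classical shooting idea it builds on: guess the unknown initial slope, integrate, and root-find on the boundary mismatch (the test problem y'' = -y and the bracketing interval are assumed).

```python
# Classical shooting sketch (the paper's fractional collocation scheme is not
# reproduced): solve y'' = -y with y(0) = 0, y(1) = 1 by root-finding on the
# unknown initial slope s = y'(0).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def mismatch(s):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0          # how far y(1) misses the boundary value

s_star = brentq(mismatch, 0.1, 5.0)    # bracket assumed to contain the root
print("recovered y'(0):", s_star, " exact:", 1.0 / np.sin(1.0))
```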
NASA Technical Reports Server (NTRS)
Lyusternik, L. A.
1980-01-01
The mathematics involved in numerically solving the plane boundary value problem for the Laplace equation by the grid method is developed. The approximate solution of a boundary value problem for the Laplace equation by the grid method consists of finding values of u at the grid nodes that satisfy the difference equation at the interior nodes (u = Du) and the prescribed boundary conditions at the boundary nodes.
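A minimal sketch of the grid method in this spirit: each interior node value is repeatedly replaced by the average of its four neighbours until the difference equation is approximately satisfied (the grid size, boundary data, and iteration count below are assumptions for the demo).

```python
# Minimal grid-method sketch for the Laplace equation: each interior value is
# repeatedly replaced by the average of its four neighbours (Jacobi relaxation).
import numpy as np

n = 50
u = np.zeros((n, n))
u[0, :] = 1.0                          # assumed boundary data on one side
for _ in range(2000):                  # enough sweeps for a rough answer
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2])
print("value at the centre:", u[n // 2, n // 2])
```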
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.
2014-10-01
We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by the linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The characteristics of the accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons will be made with some other existing boundary element based methods, e.g. the Quadratic Boundary Element Method (QBEM) and the Fast Multipole Accelerated QBEM (FMA-QBEM), and a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
Problem Solving Process Research of Everyone Involved in Innovation Based on CAI Technology
NASA Astrophysics Data System (ADS)
Chen, Tao; Shao, Yunfei; Tang, Xiaowo
It is very important that non-technical department personnel, especially bottom-line employees, serve as innovators under the requirement that everyone be involved in innovation. In the view of this paper, it is feasible and necessary to build an everyone-involved-in-innovation problem-solving process under Total Innovation Management (TIM) based on the Theory of Inventive Problem Solving (TRIZ). The tools of CAI technology, the How-To mode and the science effects database, can be very useful for innovation by all employees, especially those in non-technical departments and on the bottom line. The problem-solving process put forward in this paper focuses on non-technical department personnel, especially bottom-line employees, as innovators.
Students’ errors in solving combinatorics problems observed from the characteristics of RME modeling
NASA Astrophysics Data System (ADS)
Meika, I.; Suryadi, D.; Darhim
2018-01-01
This article was written based on the learning evaluation results of students' errors in solving combinatorics problems, observed from the characteristics of Realistic Mathematics Education (RME), that is, modeling. A descriptive method was employed, involving 55 students from two international-based pilot state senior high schools in Banten. The findings of the study suggested that the students still committed errors in simplifying the problem (46%); errors in making the mathematical model (horizontal mathematization, 60%); errors in finishing the mathematical model (vertical mathematization, 65%); and errors in interpretation as well as validation (66%).
Huang, Chih-Hsu; Lin, Chou-Ching K; Ju, Ming-Shaung
2015-02-01
Compared with the Monte Carlo method, the population density method is efficient for modeling collective dynamics of neuronal populations in human brain. In this method, a population density function describes the probabilistic distribution of states of all neurons in the population and it is governed by a hyperbolic partial differential equation. In the past, the problem was mainly solved by using the finite difference method. In a previous study, a continuous Galerkin finite element method was found better than the finite difference method for solving the hyperbolic partial differential equation; however, the population density function often has discontinuity and both methods suffer from a numerical stability problem. The goal of this study is to improve the numerical stability of the solution using discontinuous Galerkin finite element method. To test the performance of the new approach, interaction of a population of cortical pyramidal neurons and a population of thalamic neurons was simulated. The numerical results showed good agreement between results of discontinuous Galerkin finite element and Monte Carlo methods. The convergence and accuracy of the solutions are excellent. The numerical stability problem could be resolved using the discontinuous Galerkin finite element method which has total-variation-diminishing property. The efficient approach will be employed to simulate the electroencephalogram or dynamics of thalamocortical network which involves three populations, namely, thalamic reticular neurons, thalamocortical neurons and cortical pyramidal neurons. Copyright © 2014 Elsevier Ltd. All rights reserved.
An efficient photogrammetric stereo matching method for high-resolution images
NASA Astrophysics Data System (ADS)
Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao
2016-12-01
Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload, which involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems in different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To solve this, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm using single instruction multiple data instructions and structured parallelism in the central processing unit. The proposed method can significantly reduce the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement. Furthermore, precise airborne laser scanner data for one data set are used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method achieves remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
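To indicate the flavour of SGM-style cost aggregation (a generic sketch, not the authors' SIMD, hierarchical, or multi-view implementation), the function below aggregates a matching-cost volume along a single left-to-right path with the usual P1/P2 penalties; the penalty values and the random cost volume are assumptions.

```python
import numpy as np

# Semi-global matching style aggregation along one path (left -> right).
# cost[y, x, d] is the pixel-wise matching cost; P1 penalises a disparity
# change of one level between neighbouring pixels, P2 penalises larger jumps.
def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    H, W, D = cost.shape
    L = np.empty_like(cost)
    L[:, 0, :] = cost[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                        # (H, D)
        prev_min = prev.min(axis=1, keepdims=True)   # best cost at previous pixel
        up = np.roll(prev, 1, axis=1)                # disparity d-1 neighbour
        up[:, 0] = np.inf
        down = np.roll(prev, -1, axis=1)             # disparity d+1 neighbour
        down[:, -1] = np.inf
        best = np.minimum(np.minimum(prev, up + P1),
                          np.minimum(down + P1, prev_min + P2))
        L[:, x, :] = cost[:, x, :] + best - prev_min  # subtract to keep values bounded
    return L

# Tiny synthetic example: random cost volume, winner-take-all disparity afterwards.
rng = np.random.default_rng(0)
cost = rng.random((4, 6, 8)).astype(np.float32)
disparity = aggregate_left_to_right(cost).argmin(axis=2)
print(disparity)
```

In a full SGM implementation the same recursion is run along several path directions and the aggregated costs are summed before the winner-take-all step.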
Quad-Tree Visual-Calculus Analysis of Satellite Coverage
NASA Technical Reports Server (NTRS)
Lo, Martin W.; Hockney, George; Kwan, Bruce
2003-01-01
An improved method of analysis of coverage of areas of the Earth by a constellation of radio-communication or scientific-observation satellites has been developed. This method is intended to supplant an older method in which the global-coverage-analysis problem is solved from a ground-to-satellite perspective. The present method provides for rapid and efficient analysis. This method is derived from a satellite-to-ground perspective and involves a unique combination of two techniques for multiresolution representation of map features on the surface of a sphere.
NASA Astrophysics Data System (ADS)
Balac, Stéphane; Fernandez, Arnaud
2016-02-01
The computer program SPIP is aimed at solving the Generalized Non-Linear Schrödinger equation (GNLSE), involved in optics e.g. in the modelling of light-wave propagation in an optical fibre, by the Interaction Picture method, a new efficient alternative method to the Symmetric Split-Step method. In the SPIP program a dedicated costless adaptive step-size control based on the use of a 4th order embedded Runge-Kutta method is implemented in order to speed up the resolution.
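For background, a minimal symmetric split-step Fourier step for the basic scalar NLSE is sketched below; this is the reference method to which the Interaction Picture approach is an alternative, not the SPIP code itself, and the sign convention, parameters, and soliton test pulse are assumptions.

```python
import numpy as np

# One symmetric split-step Fourier step for the basic scalar NLSE
#   dA/dz = -i*(beta2/2) * d2A/dT2 + i*gamma*|A|^2 * A
# (fibre-optics convention; signs and parameter values are assumptions).
def ssfm_step(A, dz, dt, beta2=-1.0, gamma=1.0):
    omega = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)
    half_linear = np.exp(1j * (beta2 / 2) * omega**2 * dz / 2)
    A = np.fft.ifft(half_linear * np.fft.fft(A))    # half linear step
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # full nonlinear step
    A = np.fft.ifft(half_linear * np.fft.fft(A))    # half linear step
    return A

# Propagate a fundamental soliton sech pulse; its shape should be preserved.
T = np.linspace(-20, 20, 1024, endpoint=False)
A = 1.0 / np.cosh(T)
for _ in range(200):
    A = ssfm_step(A, dz=0.01, dt=T[1] - T[0])
print("peak amplitude after propagation:", np.abs(A).max())  # ~1 for a soliton
```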
Supplier Selection Using Weighted Utility Additive Method
NASA Astrophysics Data System (ADS)
Karande, Prasad; Chakraborty, Shankar
2015-10-01
Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria for choosing the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. The past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts preference disaggregation principle and addresses the decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction of a LP problem, which has its own objective function, minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that WUTA method, having a sound mathematical background, can provide accurate ranking to the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real time examples are illustrated to prove its applicability and appropriateness in solving supplier selection problems.
Phase retrieval in annulus sector domain by non-iterative methods
NASA Astrophysics Data System (ADS)
Wang, Xiao; Mao, Heng; Zhao, Da-zun
2008-03-01
Phase retrieval can be achieved by solving the intensity transport equation (ITE) under the paraxial approximation. For the case of uniform illumination, a Neumann boundary condition is involved, which makes the solving process more complicated. The primary mirror of a large-aperture telescope is usually designed to be segmented, and the shape of a segment is often like an annulus sector. Accordingly, it is necessary to analyze phase retrieval in the annulus sector domain. Two non-iterative methods are considered for recovering the phase. The matrix method is based on the decomposition of the solution into a series of orthogonalized polynomials, while the frequency filtering method depends on the inverse computation process of the ITE. Simulations show that both methods can eliminate the effect of the Neumann boundary condition, save a substantial amount of computation time, and recover the distorted phase well. The wavefront error (WFE) RMS can be less than 0.05 wavelength, even when some noise is added.
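As a generic illustration of the frequency-filtering idea (not the authors' annulus-sector treatment), under uniform illumination the ITE reduces to a Poisson equation for the phase, which can be inverted with FFTs; the wavelength, defocus distance, pixel size, and regularisation constant below are assumptions.

```python
import numpy as np

# Frequency-domain (FFT) inversion of the intensity transport equation under
# uniform illumination I0:  -k * dI/dz = I0 * laplacian(phi), so
# phi = IFFT[ FFT(-k/I0 * dI/dz) / -(kx^2 + ky^2) ], with the zero frequency
# regularised. Wavelength, defocus distance, and pixel size are assumptions.
def tie_phase(I_minus, I_plus, dz, wavelength, pixel, I0=1.0, eps=1e-6):
    k = 2 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2 * dz)          # central difference along z
    ny, nx = dIdz.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    phi_hat = np.fft.fft2(-k / I0 * dIdz) / -(k2 + eps)  # inverse Laplacian
    phi_hat[0, 0] = 0.0                            # mean phase is undetermined
    return np.real(np.fft.ifft2(phi_hat))

# Usage sketch with placeholder defocused intensity images (shapes only).
I_minus = np.ones((256, 256))
I_plus = np.ones((256, 256))
phi = tie_phase(I_minus, I_plus, dz=1e-6, wavelength=0.1e-9, pixel=50e-9)
```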
Testing Theoretical Models of Magnetic Damping Using an Air Track
ERIC Educational Resources Information Center
Vidaurre, Ana; Riera, Jaime; Monsoriu, Juan A.; Gimenez, Marcos H.
2008-01-01
Magnetic braking is a long-established application of Lenz's law. A rigorous analysis of the laws governing this problem involves solving Maxwell's equations in a time-dependent situation. Approximate models have been developed to describe different experimental results related to this phenomenon. In this paper we present a new method for the…
ERIC Educational Resources Information Center
Kazeni, Monde; Onwu, Gilbert
2013-01-01
The study aimed to determine the comparative effectiveness of context-based and traditional teaching approaches in enhancing student achievement in genetics, problem-solving, science inquiry and decision-making skills, and attitude towards the study of life sciences. A mixed method but essentially quantitative research approach involving a…
Solving magnetostatic field problems with NASTRAN
NASA Technical Reports Server (NTRS)
Hurwitz, M. M.; Schroeder, E. A.
1978-01-01
Determining the three-dimensional magnetostatic field in current-induced situations has usually involved vector potentials, which can lead to excessive computational times. How such magnetic fields may be determined using scalar potentials is reviewed. It is shown how the heat transfer capability of NASTRAN level 17 was modified to take advantage of the new method.
Using a Model to Describe Students' Inductive Reasoning in Problem Solving
ERIC Educational Resources Information Center
Canadas, Maria C.; Castro, Encarnacion; Castro, Enrique
2009-01-01
Introduction: We present some aspects of a wider investigation (Canadas, 2007), whose main objective is to describe and characterize inductive reasoning used by Spanish students in years 9 and 10 when they work on problems that involved linear and quadratic sequences. Method: We produced a test composed of six problems with different…
A Curriculum for Logical Thinking. NAAESC Occasional Papers, Volume 1, Number 4.
ERIC Educational Resources Information Center
Charuhas, Mary S.
The purpose of this paper is to demonstrate methods for developing cognitive processes in adult students. It discusses concept formation and concept attainment, problem solving (which involves concept formation and concept attainment), Bruner's three stages of learning (enactive, iconic, and symbolic modes), and visual thinking. A curriculum for…
Differential geometric methods in system theory.
NASA Technical Reports Server (NTRS)
Brockett, R. W.
1971-01-01
Discussion of certain problems in system theory which have been or might be solved using some basic concepts from differential geometry. The problems considered involve differential equations, controllability, optimal control, qualitative behavior, stochastic processes, and bilinear systems. The main goal is to extend the essentials of linear theory to some nonlinear classes of problems.
Effects of Minute Contextual Experience on Realistic Assessment of Proportional Reasoning
ERIC Educational Resources Information Center
Matney, Gabriel; Jackson, Jack L., II; Bostic, Jonathan
2013-01-01
This mixed methods study describes the effects of a "minute contextual experience" on students' ability to solve a realistic assessment problem involving scale drawings and proportional reasoning. Minute contextual experience (MCE) is defined to be a brief encounter with a context in which aspects of the context are explored openly. The…
An efficient method for solving the steady Euler equations
NASA Technical Reports Server (NTRS)
Liou, M. S.
1986-01-01
An efficient numerical procedure for solving a set of nonlinear partial differential equations is given, specifically for the steady Euler equations. Solutions of the equations were obtained by Newton's linearization procedure, commonly used to find the roots of nonlinear algebraic equations. In applying the same procedure to a set of differential equations, we give a theorem showing that a quadratic convergence rate can be achieved. While the domain of quadratic convergence depends on the problems studied and is unknown a priori, we show that first- and second-order derivatives of the flux vectors determine whether the condition for quadratic convergence is satisfied. The first derivatives enter as an implicit operator for yielding new iterates, and the second derivatives indicate the smoothness of the flows considered. Consequently, flows involving shocks are expected to require a larger number of iterations. First-order upwind discretization in conjunction with the Steger-Warming flux-vector splitting is employed on the implicit operator, and a diagonally dominant matrix results. However, the explicit operator is represented by first- and second-order upwind differencings, using both Steger-Warming's and van Leer's splittings. We discuss the treatment of boundary conditions and solution procedures for solving the resulting block matrix system. With a set of test problems for one- and two-dimensional flows, we present a detailed study of the efficiency, accuracy, and convergence of the present method.
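The quadratic convergence referred to here is the same behaviour familiar from Newton's method for nonlinear algebraic systems; a minimal sketch (illustrative only, unrelated to the Euler solver itself) is given below, with an assumed test system and initial guess.

```python
import numpy as np

# Newton's linearization for a nonlinear algebraic system F(x) = 0:
# solve J(x_k) * dx = -F(x_k) and update x_{k+1} = x_k + dx. Near the root
# the error roughly squares at every iteration (quadratic convergence).
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0,   # unit circle
                     x[0] - x[1]])              # line y = x

def J(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [1.0, -1.0]])

x = np.array([2.0, 0.5])                        # initial guess (assumed)
for it in range(8):
    dx = np.linalg.solve(J(x), -F(x))
    x = x + dx
    print(it, np.linalg.norm(F(x)))             # residual shrinks quadratically
```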
Abazarian, Elaheh; Baboli, M Teimourzadeh; Abazarian, Elham; Ghashghaei, F Esteki
2015-01-01
Background: Diabetes is one of the most prevalent diseases, affecting 177 million people worldwide; many of these patients suffer from depression and anxiety and need specific methods for controlling them. The aim of this research is to study the effect of problem-solving and decision-making skills on the tendency toward depression and anxiety. Materials and Methods: This research is a quasi-experimental (case-control) study. The study population comprised all diabetic patients of Qaemshahr who were under physicians' care in 2011-2012. Thirty files were selected randomly and divided into two groups of 15 patients each (subject and control). The measurement tools were the Beck Depression Inventory (21 items) and the Zung anxiety questionnaire, which were distributed to both groups. The subject group then participated in eight separate sessions teaching problem-solving and decision-making skills, while the control group did not receive any instruction. Results: Both groups completed the post-test, and the data obtained from the questionnaires were analyzed using analysis of variance. Conclusion: The results showed that teaching problem-solving and decision-making skills was effective in reducing diabetic patients' depression and anxiety. PMID:26261814
Modelling crystal growth: Convection in an asymmetrically heated ampoule
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Rosenberger, Franz; Pulicani, J. P.; Krukowski, S.; Ouazzani, Jalil
1990-01-01
The objective was to develop and implement a numerical method capable of solving the nonlinear partial differential equations governing heat, mass, and momentum transfer in a 3-D cylindrical geometry in order to examine the character of convection in an asymmetrically heated cylindrical ampoule. The details of the numerical method, including verification tests involving comparison with results obtained from other methods, are presented. The results of the study of 3-D convection in an asymmetrically heated cylinder are described.
Introducing soft systems methodology plus (SSM+): why we need it and what it can contribute.
Braithwaite, Jeffrey; Hindle, Don; Iedema, Rick; Westbrook, Johanna I
2002-01-01
There are many complicated and seemingly intractable problems in the health care sector. Past ways to address them have involved political responses, economic restructuring, biomedical and scientific studies, and managerialist or business-oriented tools. Few methods have enabled us to develop a systematic response to problems. Our version of soft systems methodology, SSM+, seems to improve problem solving processes by providing an iterative, staged framework that emphasises collaborative learning and systems redesign involving both technical and cultural fixes.
A new 3D immersed boundary method for non-Newtonian fluid-structure-interaction with application
NASA Astrophysics Data System (ADS)
Zhu, Luoding
2017-11-01
Motivated by fluid-structure-interaction (FSI) phenomena in the life sciences (e.g., motions of sperm and the cytoskeleton in complex fluids), we introduce a new immersed boundary method for FSI problems involving non-Newtonian fluids in three dimensions. The non-Newtonian fluids are modelled by the FENE-P model (which includes the Oldroyd-B model as a special case) and numerically solved by a lattice Boltzmann scheme (the D3Q7 model). The fluid flow is modelled by the lattice Boltzmann equations and numerically solved by the D3Q19 model. The deformable structure and the fluid-structure interaction are handled by the immersed boundary method. As an application, we study an FSI toy problem: the interaction of an elastic plate (flapped at its leading edge and restricted nowhere else) with a non-Newtonian fluid in a 3D flow. This work was supported by NSF-DMS under research Grant 1522554.
NASA Astrophysics Data System (ADS)
Katsaounis, T. D.
2005-02-01
The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples solving in parallel simple PDEs using Diffpack and MPI are also presented. Chapter 2 presents the overlapping domain decomposition method for solving PDEs. It is well known that these methods are suitable for parallel processing. The first part of the chapter covers the mathematical formulation of the method as well as algorithmic and implementational issues. The second part presents a serial and a parallel implementational framework within the programming environment of Diffpack. The chapter closes by showing how to solve two application examples with the overlapping domain decomposition method using Diffpack. Chapter 3 is a tutorial about how to incorporate the multigrid solver in Diffpack. The method is illustrated by examples such as a Poisson solver, a general elliptic problem with various types of boundary conditions and a nonlinear Poisson type problem. In chapter 4 the mixed finite element is introduced. Technical issues concerning the practical implementation of the method are also presented. The main difficulties of the efficient implementation of the method, especially in two and three space dimensions on unstructured grids, are presented and addressed in the framework of Diffpack. The implementational process is illustrated by two examples, namely the system formulation of the Poisson problem and the Stokes problem. Chapter 5 is closely related to chapter 4 and addresses the problem of how to solve efficiently the linear systems arising by the application of the mixed finite element method. The proposed method is block preconditioning. 
Efficient techniques for implementing the method within Diffpack are presented. Optimal block preconditioners are used to solve the system formulation of the Poisson problem, the Stokes problem and the bidomain model for the electrical activity in the heart. The subject of chapter 6 is systems of PDEs. Linear and nonlinear systems are discussed. Fully implicit and operator splitting methods are presented. Special attention is paid to how existing solvers for scalar equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipeflow and a two-phase porous media flow. Stochastic PDEs is the topic of chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details in order to glue the programming environment of Diffpack with visualization packages through Python and in general take advantage of the Python interfaces. Chapter 9 attempts to show how to use numerical experiments to measure the performance of various PDE solvers. The authors gathered a rather impressive list, a total of 14 PDE solvers. Solvers for problems like Poisson, Navier-Stokes, elasticity, two-phase flows and methods such as finite difference, finite element, multigrid, and gradient type methods are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction on the biology of the problem the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. Chapters 11 and 12 are closely related; actually they could have been combined in a single chapter. Chapter 11 introduces several mathematical models used in finance, based on the Black-Scholes equation. Chapter 12 considers several numerical methods like Monte Carlo, lattice methods, finite difference and finite element methods. Implementation of these methods within Diffpack is presented in the last part of the chapter. Chapter 13 presents how the finite element method is used for the modelling and analysis of elastic structures. The authors describe the structural elements of Diffpack which include popular elements such as beams and plates and examples are presented on how to use them to simulate elastic structures. Chapter 14 describes an application problem, namely the extrusion of aluminum. This is a rather complicated process which involves non-Newtonian flow, heat transfer and elasticity. The authors describe the systems of PDEs modelling the underlying process and use a finite element method to obtain a numerical solution. The implementation of the numerical method in Diffpack is presented along with some applications.
The last chapter, chapter 15, focuses on mathematical and numerical models of systems of PDEs governing geological processes in sedimentary basins. The underlying mathematical model is solved using the finite element method within a fully implicit scheme. The authors discuss the implementational issues involved within Diffpack and they present results from several examples. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack and the reader should have prior experience with the particular software in order to take full advantage of the book. Overall the book is well written, the subject of each chapter is well presented and can serve as a reference for graduate students, researchers and engineers who are interested in the numerical solution of partial differential equations modelling various applications.
Advancing Detached-Eddy Simulation
2007-01-01
fluxes leads to an improvement in the stability of the solution. This matrix is solved iteratively using a symmetric Gauss-Seidel procedure. Newton's sub...model (TLM) is a zonal approach, proposed by Balaras and Benocci (5) and Balaras et al. (4). The method involved the solution of filtered Navier...LES mesh. The method was subsequently used by Cabot (6) and Diurno et al. (7) to obtain the solution of the flow over a backward-facing step and by
A numerical method for the dynamics of non-spherical cavitation bubbles
NASA Technical Reports Server (NTRS)
Lucca, G.; Prosperetti, A.
1982-01-01
A boundary integral numerical method for the dynamics of nonspherical cavitation bubbles in inviscid incompressible liquids is described. Only surface values of the velocity potential and its first derivatives are involved. The problem of solving the Laplace equation in the entire domain occupied by the liquid is thus avoided. The collapse of a bubble in the vicinity of a solid wall and the collapse of three bubbles with collinear centers are considered.
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of the Riemann-Liouville derivative by the fractional variational iteration method (FVIM). The FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, more efficient, and more convenient for solving time-fractional advection-dispersion equations. PMID:24578662
Energy levels of one-dimensional systems satisfying the minimal length uncertainty relation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardo, Reginald Christian S., E-mail: rcbernardo@nip.upd.edu.ph; Esguerra, Jose Perico H., E-mail: jesguerra@nip.upd.edu.ph
2016-10-15
The standard approach to calculating the energy levels for quantum systems satisfying the minimal length uncertainty relation is to solve an eigenvalue problem involving a fourth- or higher-order differential equation in quasiposition space. It is shown that the problem can be reformulated so that the energy levels of these systems can be obtained by solving only a second-order quasiposition eigenvalue equation. Through this formulation the energy levels are calculated for the following potentials: particle in a box, harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well. For the particle in a box, the second-order quasiposition eigenvalue equation is a second-order differential equation with constant coefficients. For the harmonic oscillator, Pöschl–Teller well, Gaussian well, and double-Gaussian well, a method that involves using Wronskians has been used to solve the second-order quasiposition eigenvalue equation. It is observed for all of these quantum systems that the introduction of a nonzero minimal length uncertainty induces a positive shift in the energy levels. It is shown that the calculation of energy levels in systems satisfying the minimal length uncertainty relation is not limited to a small number of problems like particle in a box and the harmonic oscillator but can be extended to a wider class of problems involving potentials such as the Pöschl–Teller and Gaussian wells.
Numerical Simulation of Flow Through an Artificial Heart
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Kutler, Paul; Kwak, Dochan; Kiris, Cetin
1989-01-01
A solution procedure was developed that solves the unsteady, incompressible Navier-Stokes equations, and was used to numerically simulate viscous incompressible flow through a model of the Pennsylvania State artificial heart. The solution algorithm is based on the artificial compressibility method, and uses flux-difference splitting to upwind the convective terms; a line-relaxation scheme is used to solve the equations. The time-accuracy of the method is obtained by iteratively solving the equations at each physical time step. The artificial heart geometry involves a piston-type action with a moving solid wall. A single H-grid is fit inside the heart chamber. The grid is continuously compressed and expanded with a constant number of grid points to accommodate the moving piston. The computational domain ends at the valve openings where nonreflective boundary conditions based on the method of characteristics are applied. Although a number of simplifying assumptions were made regarding the geometry, the computational results agreed reasonably well with an experimental picture. The computer time requirements for this flow simulation, however, are quite extensive. Computational study of this type of geometry would benefit greatly from improvements in computer hardware speed and algorithm efficiency enhancements.
NASA Astrophysics Data System (ADS)
Chakroun, Mahmoud; Gogu, Grigore; Pacaud, Thomas; Thirion, François
2014-09-01
This study proposes an eco-innovative design process taking into consideration quality and environmental aspects in prioritizing and solving technical engineering problems. This approach provides a synergy between the Life Cycle Assessment (LCA), the nonquality matrix, the Theory of Inventive Problem Solving (TRIZ), morphological analysis and the Analytical Hierarchy Process (AHP). In the sequence of these tools, LCA assesses the environmental impacts generated by the system. Then, for a better consideration of environmental aspects, a new tool is developed, the non-quality matrix, which defines the problem to be solved first from an environmental point of view. The TRIZ method allows the generation of new concepts and contradiction resolution. Then, the morphological analysis offers the possibility of extending the search space of solutions in a design problem in a systematic way. Finally, the AHP identifies the promising solution(s) by providing a clear logic for the choice made. Their usefulness has been demonstrated through their application to a case study involving a centrifugal spreader with spinning discs.
A fast marching algorithm for the factored eikonal equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Treister, Eran, E-mail: erantreister@gmail.com; Haber, Eldad, E-mail: haber@math.ubc.ca; Department of Mathematics, The University of British Columbia, Vancouver, BC
The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency in which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.
NASA Astrophysics Data System (ADS)
Cao, Jia; Yan, Zheng; He, Guangyu
2016-06-01
This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). Firstly, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived, which involves an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGA-II) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to find optimal solutions. Above all, the MOHLO method can obtain a more complete Pareto front than the NSGA-II method. However, choosing the optimal solution from the Pareto front depends mainly on whether the decision makers take an economic point of view or an energy-saving and emission-reduction point of view.
General method of solving the Schroedinger equation of atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakatsuji, Hiroshi
2005-12-15
We propose a general method of solving the Schroedinger equation of atoms and molecules. We first construct the wave function having the exact structure, using the ICI (iterative configuration or complement interaction) method, and then optimize the variables involved by the variational principle. Based on the scaled Schroedinger equation and related principles, we can avoid the singularity problem of atoms and molecules and formulate a general method of calculating the exact wave functions in an analytical expansion form. We choose an initial function ψ0 and a scaling function g, and then the ICI method automatically generates the wave function that has the exact structure by using the Hamiltonian of the system. The Hamiltonian contains all the information of the system. The free ICI method provides a flexible and variationally favorable procedure for constructing the exact wave function. We explain the computational procedure of the analytical ICI method routinely performed in our laboratory. Simple examples are given using the hydrogen atom for the nuclear singularity case, the Hooke's atom for the electron singularity case, and the helium atom for both cases.
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism underlying a wide variety of brain functions, such as cognition, perception, or memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target the synchronous brain regions. In this paper, we propose a novel algorithm aimed at specifically localizing synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method with that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
Heideman, Paul D.; Flores, K. Adryan; Sevier, Lu M.; Trouton, Kelsey E.
2017-01-01
Drawing by learners can be an effective way to develop memory and generate visual models for higher-order skills in biology, but students are often reluctant to adopt drawing as a study method. We designed a nonclassroom intervention that instructed introductory biology college students in a drawing method, minute sketches in folded lists (MSFL), and allowed them to self-assess their recall and problem solving, first in a simple recall task involving non-European alphabets and later using unfamiliar biology content. In two preliminary ex situ experiments, students had greater recall on the simple learning task, non-European alphabets with associated phonetic sounds, using MSFL in comparison with a preferred method, visual review (VR). In the intervention, students studying using MSFL and VR had ∼50–80% greater recall of content studied with MSFL and, in a subset of trials, better performance on problem-solving tasks on biology content. Eight months after beginning the intervention, participants had shifted self-reported use of drawing from 2% to 20% of study time. For a small subset of participants, MSFL had become a preferred study method, and 70% of participants reported continued use of MSFL. This brief, low-cost intervention resulted in enduring changes in study behavior. PMID:28495932
Social problem solving among depressed adolescents is enhanced by structured psychotherapies
Dietz, Laura J.; Marshal, Michael P.; Burton, Chad M.; Bridge, Jeffrey A.; Birmaher, Boris; Kolko, David; Duffy, Jamira N.; Brent, David A.
2014-01-01
Objective Changes in adolescent interpersonal behavior before and after an acute course of psychotherapy were investigated as outcomes and mediators of remission status in a previously described treatment study of depressed adolescents. Maternal depressive symptoms were examined as moderators of the association between psychotherapy condition and changes in adolescents' interpersonal behavior. Method Adolescents (n = 63, mean age = 15.6 years, 77.8% female, 84.1% Caucasian) engaged in videotaped interactions with their mothers before randomization to cognitive behavior therapy (CBT), systemic behavior family therapy (SBFT), or nondirective supportive therapy (NST), and after 12–16 weeks of treatment. Adolescent involvement, problem solving and dyadic conflict were examined. Results Improvements in adolescent problem solving were significantly associated with CBT and SBFT. Maternal depressive symptoms moderated the effect of CBT, but not SBFT, on adolescents' problem solving; adolescents experienced increases in problem solving only when their mothers had low or moderate levels of depressive symptoms. Improvements in adolescents' problem solving were associated with higher rates of remission across treatment conditions, but there were no significant indirect effects of SBFT on remission status through problem solving. Exploratory analyses revealed a significant indirect effect of CBT on remission status through changes in adolescent problem solving, but only when maternal depressive symptoms at study entry were low. Conclusions Findings provide preliminary support for problem solving as an active treatment component of structured psychotherapies for depressed adolescents and suggest one pathway by which maternal depression may disrupt treatment efficacy for depressed adolescents treated with CBT. PMID:24491077
Solution of Grad-Shafranov equation by the method of fundamental solutions
NASA Astrophysics Data System (ADS)
Nath, D.; Kalra, M. S.; Kalra
2014-06-01
In this paper we have used the Method of Fundamental Solutions (MFS) to solve the Grad-Shafranov (GS) equation for the axisymmetric equilibria of tokamak plasmas with monomial sources. These monomials are the individual terms appearing on the right-hand side of the GS equation if one expands the nonlinear terms into polynomials. Unlike the Boundary Element Method (BEM), the MFS does not involve any singular integrals and is a meshless boundary-alone method. Its basic idea is to create a fictitious boundary around the actual physical boundary of the computational domain. This automatically removes the involvement of singular integrals. The results obtained by the MFS match well with the earlier results obtained using the BEM. The method is also applied to Solov'ev profiles and it is found that the results are in good agreement with analytical results.
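To illustrate the MFS idea on the simplest possible case (the 2D Laplace equation with Dirichlet data, not the Grad-Shafranov equation of the paper), the sketch below places source points on a fictitious circle outside the physical boundary and fits their strengths to the boundary data by least squares; the geometry, boundary function, and fictitious-boundary radius are assumptions.

```python
import numpy as np

# Method of Fundamental Solutions for the 2D Laplace equation on the unit disk.
# The solution is written as a sum of free-space fundamental solutions
# G(x, s) = -log|x - s| / (2*pi) with source points s on a fictitious circle
# outside the domain, so no singular boundary integrals appear.
n_col, n_src = 80, 80
theta = 2 * np.pi * np.arange(n_col) / n_col
collocation = np.c_[np.cos(theta), np.sin(theta)]            # physical boundary r = 1
src_theta = 2 * np.pi * np.arange(n_src) / n_src
sources = 1.8 * np.c_[np.cos(src_theta), np.sin(src_theta)]  # fictitious boundary r = 1.8

def G(x, s):
    return -np.log(np.linalg.norm(x - s, axis=-1)) / (2 * np.pi)

A = G(collocation[:, None, :], sources[None, :, :])          # collocation matrix
g = collocation[:, 0] * collocation[:, 1]                    # boundary data u = x*y (harmonic)
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate at an interior point and compare with the exact harmonic extension.
p = np.array([0.3, 0.4])
u = G(p[None, :], sources) @ coef
print("MFS value:", u, " exact:", p[0] * p[1])
```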
On inconsistency in frictional granular systems
NASA Astrophysics Data System (ADS)
Alart, Pierre; Renouf, Mathieu
2018-04-01
Numerical simulation of granular systems is often based on a discrete element method. The nonsmooth contact dynamics approach can be used to solve a broad range of granular problems, especially involving rigid bodies. However, difficulties could be encountered and hamper successful completion of some simulations. The slow convergence of the nonsmooth solver may sometimes be attributed to an ill-conditioned system, but the convergence may also fail. The prime aim of the present study was to identify situations that hamper the consistency of the mathematical problem to solve. Some simple granular systems were investigated in detail while reviewing and applying the related theoretical results. A practical alternative is briefly analyzed and tested.
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using CPLEX solver, small-size problems are solved for optimality. Two metaheuristics, the restarted simulated annealing algorithm and co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
NASA Astrophysics Data System (ADS)
Pozderac, Preston; Leary, Cody
We investigated the solutions to the Helmholtz equation in the case of a spherically symmetric refractive index using three different methods. The first method involves solving the Helmholtz equation for a step index profile and applying further constraints contained in Maxwell's equations. Utilizing these equations, we can simultaneously solve for the electric and magnetic fields as well as the allowed energies of photons propagating in this system. The second method applies a perturbative correction to these energies, which surfaces when deriving a Helmholtz type equation in a medium with an inhomogeneous refractive index. Applying first order perturbation theory, we examine how the correction term affects the energy of the photon. In the third method, we investigate the effects of the above perturbation upon solutions to the scalar Helmholtz equation, which are separable with respect to its polarization and spatial degrees of freedom. This work provides insights into the vector field structure of a photon guided by a glass microsphere.
Score Calculation in Informatics Contests Using Multiple Criteria Decision Methods
ERIC Educational Resources Information Center
Skupiene, Jurate
2011-01-01
The Lithuanian Informatics Olympiad is a problem solving contest for high school students. The work of each contestant is evaluated in terms of several criteria, where each criterion is measured according to its own scale (but the same scale for each contestant). Several jury members are involved in the evaluation. This paper analyses the problem…
ERIC Educational Resources Information Center
Chen, Zhe; Honomichl, Ryan; Kennedy, Diane; Tan, Enda
2016-01-01
The present study examines 5- to 8-year-old children's relation reasoning in solving matrix completion tasks. This study incorporates a componential analysis, an eye-tracking method, and a microgenetic approach, which together allow an investigation of the cognitive processing strategies involved in the development and learning of children's…
NASA Astrophysics Data System (ADS)
Purwoko, Saad, Noor Shah; Tajudin, Nor'ain Mohd
2017-05-01
This study aims to: i) develop problem-solving questions on the Linear Equation System of Two Variables (LESTV) based on the levels of the IPT Model; ii) explain the level of students' information-processing skill in solving LESTV problems; iii) explain students' skill in information processing in solving LESTV problems; and iv) explain students' cognitive processes in solving LESTV problems. This study involves three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey method for analyzing students' level of information-processing skill; and iii) a qualitative case study method for analyzing students' cognitive processes. The population of the study was 545 eighth-grade students, represented by a sample of 170 students from five junior high schools in the Hilir Barat Zone, Palembang (Indonesia), chosen using cluster sampling. Fifteen of these students were drawn as a sample for the interview session, which continued until the information obtained was saturated. The data were collected using the LESTV problem-solving test and the interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The findings indicated that students' cognitive processing reached only the step of identifying external sources and fluently executing algorithms in short-term memory. Only 15.29% of students could retrieve type A information and 5.88% could retrieve type B information from long-term memory. The implication is that the developed LESTV problems validated the IPT Model in modelling the assessment of students at different levels of the hierarchy.
Numerical Simulation of Two Phase Flows
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
2001-01-01
Two-phase flows occur in a broad range of situations in nature, biology, and industrial devices and can involve diverse and complex mechanisms. While the physical models may be specific to certain situations, the mathematical formulation and numerical treatment for solving the governing equations can be general. Hence, we require not only information concerning each individual phase, as needed in a single-phase flow, but also the interactions between phases. These interaction terms, however, pose additional numerical challenges because they lie beyond the basis on which modern numerical schemes are constructed, namely the hyperbolicity of the equations. Moreover, owing to disparate time scales, fluid compressibility and nonlinearity become acute, further complicating the numerical procedures. In this paper, we show the ideas and procedures by which the AUSM-family schemes are extended for solving two-phase flow problems. Specifically, both phases are assumed to be in thermodynamic equilibrium; that is, the time scales involved in phase interactions are extremely short in comparison with those of fluid speeds and pressure fluctuations. Details of the numerical formulation and the issues involved are discussed, and the effectiveness of the method is demonstrated for several industrial examples.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
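A minimal sketch of the incremental ('delta' or defect-correction) idea for a generic linear system is given below: with an approximate operator M that is cheap to invert, one repeatedly solves M dx = b - A x and updates x by dx. The tridiagonal test matrix and the choice of M as the lower triangle of A are assumptions, not the approximate-factorization operator of the paper.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Defect-correction ("delta"/incremental) iteration: with an approximate
# operator M that is cheap to invert, repeatedly solve  M * dx = b - A * x
# and update x <- x + dx.  The diagonally dominant tridiagonal test matrix
# and the lower-triangular choice of M (a Gauss-Seidel-like sweep) are
# illustrative assumptions.
n = 100
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M = np.tril(A)                        # cheap-to-invert approximation of A

x = np.zeros(n)
for it in range(200):
    residual = b - A @ x
    if np.linalg.norm(residual) < 1e-10:
        break
    dx = solve_triangular(M, residual, lower=True)
    x += dx
print("iterations:", it, " final residual norm:", np.linalg.norm(b - A @ x))
```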
Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.
2016-01-01
T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230
NASA Astrophysics Data System (ADS)
Ge, Yongbin; Cao, Fujun
2011-05-01
In this paper, a multigrid method based on the high-order compact (HOC) difference scheme on nonuniform grids proposed by Kalita et al. [J.C. Kalita, A.K. Dass, D.C. Dalal, A transformation-free HOC scheme for steady convection-diffusion on non-uniform grids, Int. J. Numer. Methods Fluids 44 (2004) 33-53] is developed to solve the two-dimensional (2D) convection-diffusion equation. The HOC scheme does not involve any grid transformation to map the nonuniform grids to uniform grids; consequently, the multigrid method is new in that it works directly on the discrete system arising from the difference equation on nonuniform grids. The corresponding multigrid projection and interpolation operators are constructed from the area ratio. Some boundary layer and local singularity problems are used to demonstrate the superiority of the present method. Numerical results show that the multigrid method with the HOC scheme on nonuniform grids achieves a convergence rate almost as efficient as on uniform grids, and the computed solution on nonuniform grids retains fourth-order accuracy, whereas uniform grids yield very poor solutions for problems with very steep boundary layers or strong local singularities. The present method is also applied to solve the 2D incompressible Navier-Stokes equations using the stream function-vorticity formulation; numerical solutions of the lid-driven cavity flow problem are obtained and compared with solutions available in the literature.
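As background on the multigrid idea itself (a uniform-grid 1D Poisson sketch, not the HOC nonuniform-grid scheme or the area-ratio transfer operators of the paper), a minimal V-cycle with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation is shown below; the right-hand side and cycle count are assumptions.

```python
import numpy as np

# Minimal 1D Poisson multigrid V-cycle (uniform grid, Dirichlet BCs): -u'' = f
# on n+2 grid points with n = 2**k - 1 interior nodes.
def smooth(u, f, h, sweeps=3):
    for _ in range(sweeps):       # weighted Jacobi relaxation
        u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)
    if u.size == 3:               # coarsest grid: one interior point, solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    r = residual(u, f, h)
    rc = np.zeros((r.size + 1) // 2)              # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)    # coarse-grid error correction
    e = np.zeros_like(u)
    e[::2] = ec                                   # prolongation: copy coarse values
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])            # ... and linearly interpolate
    u = u + e
    return smooth(u, f, h)

n = 2**7 - 1
x = np.linspace(0.0, 1.0, n + 2)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution u = sin(pi x)
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```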
Phase Tomography Reconstructed by 3D TIE in Hard X-ray Microscope
NASA Astrophysics Data System (ADS)
Yin, Gung-Chian; Chen, Fu-Rong; Pyun, Ahram; Je, Jung Ho; Hwu, Yeukuang; Liang, Keng S.
2007-01-01
X-ray phase tomography and phase imaging are promising ways of investigating low-Z materials. A polymer blend (PE/PS) sample was used to test the 3D phase retrieval method in a parallel-beam-illuminated microscope. Because the polymer sample is thick, the phase retardation is strongly mixed and the image cannot be resolved when the 2D transport of intensity equation (TIE) is applied. In this study, we provide a different approach for solving the phase in three dimensions for thick samples. Our method integrates the 3D TIE with the Fourier slice theorem to solve for the phase of a thick sample. In our experiment, eight defocal-series image data sets were recorded covering the angular range of 0 to 180 degrees. Only three sets of image cubes were used in the 3D TIE equation to solve the phase tomography. The phase contrast of the polymer blend in 3D is clearly enhanced, and the two different components of the polymer blend can be distinguished in the phase tomography.
Goldberg, Daniel N.; Narayanan, Sri Hari Krishna; Hascoet, Laurent; ...
2016-05-20
We apply an optimized method to the adjoint generation of a time-evolving land ice model through algorithmic differentiation (AD). The optimization involves a special treatment of the fixed-point iteration required to solve the nonlinear stress balance, which differs from a straightforward application of AD software, and leads to smaller memory requirements and in some cases shorter computation times of the adjoint. The optimization is done via implementation of the algorithm of Christianson (1994) for reverse accumulation of fixed-point problems, with the AD tool OpenAD. For test problems, the optimized adjoint is shown to have far lower memory requirements, potentially enabling larger problem sizes on memory-limited machines. In the case of the land ice model, implementation of the algorithm allows further optimization by having the adjoint model solve a sequence of linear systems with identical (as opposed to varying) matrices, greatly improving performance. Finally, the methods introduced here will be of value to other efforts applying AD tools to ice models, particularly ones which solve a hybrid shallow ice/shallow shelf approximation to the Stokes equations.
Layout optimization using the homogenization method
NASA Technical Reports Server (NTRS)
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.
Pythagorean fuzzy analytic hierarchy process to multi-criteria decision making
NASA Astrophysics Data System (ADS)
Mohd, Wan Rosanisah Wan; Abdullah, Lazim
2017-11-01
Numerous approaches have been proposed in the literature to determine criteria weights. The weights of criteria are very significant in the decision-making process. One outstanding approach used to determine criteria weights is the analytic hierarchy process (AHP). This method involves decision makers (DMs) evaluating the decision by forming pairwise comparisons between criteria and alternatives. In classical AHP, the linguistic variables of the pairwise comparison are presented as crisp values. However, this is not appropriate for representing real problem situations because linguistic judgments involve uncertainty. For this reason, AHP has been extended by incorporating Pythagorean fuzzy sets. In addition, no proposal has been found in the literature for determining criteria weights using AHP under Pythagorean fuzzy sets. In order to solve the MCDM problem, a Pythagorean fuzzy analytic hierarchy process is proposed to determine the weights of the evaluation criteria. Using linguistic variables, pairwise comparisons of the evaluation criteria are made and converted to criteria weights using Pythagorean fuzzy numbers (PFNs). The proposed method is implemented in an evaluation problem to demonstrate its applicability. This study shows that the proposed method provides a useful way and a new direction for solving MCDM problems in a Pythagorean fuzzy context.
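As a concrete illustration of the weighting step that the abstract builds on, the following Python sketch computes classical (crisp) AHP criteria weights as the normalized principal eigenvector of a pairwise comparison matrix, together with the consistency index. The 3x3 Saaty-scale matrix is assumed purely for illustration; the Pythagorean fuzzy extension described in the paper is not reproduced here.

import numpy as np

def ahp_weights(pairwise):
    # criteria weights = normalized principal eigenvector of the pairwise comparison matrix
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

A = np.array([[1.0, 3.0, 5.0],        # hypothetical Saaty-scale judgments for 3 criteria
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam_max = ahp_weights(A)
ci = (lam_max - len(A)) / (len(A) - 1)   # consistency index of the judgments
print("weights:", np.round(w, 3), "CI:", round(ci, 3))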
A methodology for analysing lateral coupled behavior of high speed railway vehicles and structures
NASA Astrophysics Data System (ADS)
Antolín, P.; Goicolea, J. M.; Astiz, M. A.; Alonso, A.
2010-06-01
The continuous increase in the speed of high-speed trains entails an increase in their kinetic energy. The main goal of this article is to study the coupled lateral behavior of vehicle-structure systems for high-speed trains. Nonlinear finite element methods are used for the structures, whereas multibody dynamics methods are employed for the vehicles. Special attention must be paid to the rolling contact constraints coupling bridge decks and train wheels: the dynamic models must include mixed variables (displacements and creepages), and the contact algorithms must be adequate for wheel-rail contact. The coupled vehicle-structure system is studied in an implicit dynamic framework. Because very different subsystems (trains and bridges) are present, different frequencies are involved in the problem, leading to stiff systems. Regarding contact methods, normal contact between train wheels and bridge decks is treated with a penalty method. For tangential contact, the FastSim algorithm is applied at each time step, solving a differential equation involving relative displacements and creepage variables. Integration of the total forces over the contact ellipse domain is performed for each train wheel and each solver iteration. Coupling between trains and bridges requires special treatment of the kinematic constraints imposed on the wheel-rail pair and the load transmission. A numerical example is presented.
NASA Astrophysics Data System (ADS)
Zhou, Weibiao
2005-01-01
Heat and mass transfer inside bread during baking can be treated as a multiphase flow problem involving heat, liquid water and water vapour. Among the various models developed, the one based on an evaporation-condensation mechanism explains several unique phenomena observed during baking well, and is the most promising. This paper presents the results of numerically solving the one-dimensional case of this simultaneous transfer model by applying finite difference methods (FDM) and finite element methods (FEM). In particular, various FDM and FEM schemes are applied and the sensitivity of the results to changes in the parameters is studied. Changes in bread temperature and moisture are characterised by some critical values such as the peak water level and the dry-out time. A comparison between the FDM and FEM results is made.
Efficient solution of ordinary differential equations modeling electrical activity in cardiac cells.
Sundnes, J; Lines, G T; Tveito, A
2001-08-01
The contraction of the heart is preceded and caused by a cellular electro-chemical reaction, causing an electrical field to be generated. Performing realistic computer simulations of this process involves solving a set of partial differential equations, as well as a large number of ordinary differential equations (ODEs) characterizing the reactive behavior of the cardiac tissue. Experiments have shown that the solution of the ODEs contributes significantly to the total work of a simulation, and there is thus a strong need to utilize efficient solution methods for this part of the problem. This paper presents how an efficient implicit Runge-Kutta method may be adapted to solve a complicated cardiac cell model consisting of 31 ODEs, and how this solver may be coupled to a set of PDE solvers to provide complete simulations of the electrical activity.
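To make the role of the implicit solver concrete, here is a minimal Python sketch using SciPy's Radau method (an implicit Runge-Kutta scheme) on a standard stiff test problem; the Van der Pol oscillator stands in for the 31-ODE cell model, which is not reproduced here.

import numpy as np
from scipy.integrate import solve_ivp

def stiff_rhs(t, y):
    # Van der Pol oscillator with large mu: a standard stiff test problem standing in
    # for the reactive cardiac cell ODEs
    mu = 1000.0
    return [y[1], mu * (1.0 - y[0]**2) * y[1] - y[0]]

sol = solve_ivp(stiff_rhs, (0.0, 3000.0), [2.0, 0.0], method="Radau", rtol=1e-6)
print("implicit Radau steps:", sol.t.size)
# An explicit solver (e.g. method="RK45") needs orders of magnitude more steps here,
# which is why implicit Runge-Kutta methods pay off for stiff cell models.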
Calculation of the Full Scattering Amplitude without Partial Wave Decomposition II
NASA Technical Reports Server (NTRS)
Shertzer, J.; Temkin, A.
2003-01-01
As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE) can be reduced to a 2d partial differential equation (pde), and was solved using the finite element method. We have now included exchange by solving the resultant SE, in the static exchange approximation. The resultant equation can be reduced to a pair of coupled pde's, to which the finite element method can still be applied. The resultant scattering amplitudes, both singlet and triplet, as a function of angle can be calculated for various energies. The results are in excellent agreement with converged partial wave results.
Solving Complex Problems: A Convergent Approach to Cognitive Load Measurement
ERIC Educational Resources Information Center
Zheng, Robert; Cook, Anne
2012-01-01
The study challenged the current practices in cognitive load measurement involving complex problem solving by manipulating the presence of pictures in multiple rule-based problem-solving situations and examining the cognitive load resulting from both off-line and online measures associated with complex problem solving. Forty-eight participants…
Enhancing Students' Problem-Solving Skills through Context-Based Learning
ERIC Educational Resources Information Center
Yu, Kuang-Chao; Fan, Szu-Chun; Lin, Kuen-Yi
2015-01-01
Problem solving is often challenging for students because they do not understand the problem-solving process (PSP). This study presents a three-stage, context-based, problem-solving, learning activity that involves watching detective films, constructing a context-simulation activity, and introducing a project design to enable students to construct…
Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.
Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji
2015-12-01
A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm can reduce the solving time by 3.5 times and reduce the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden so that the acceleration is nearly linear with the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards can run 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large-dimensional problems efficiently. A whole adult body discretized in 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
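The key numerical ingredient is the COCG iteration, which exploits complex symmetry (A equal to its transpose, not its conjugate transpose) by using unconjugated inner products. A minimal Python sketch follows; the random, diagonally dominant complex symmetric test matrix is assumed purely for illustration and is unrelated to the admittance networks in the paper.

import numpy as np

def cocg(A, b, tol=1e-10, maxit=500):
    # conjugate orthogonal conjugate gradient for complex symmetric A (A == A.T)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                          # unconjugated inner product -- the COCG hallmark
    for _ in range(maxit):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

n = 200
rng = np.random.default_rng(0)
S = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))
A = (S + S.T) + 1j * (T + T.T) + 4.0 * n * np.eye(n)   # complex symmetric, well conditioned
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = cocg(A, b)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))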
A Cognitive Model for Problem Solving in Computer Science
ERIC Educational Resources Information Center
Parham, Jennifer R.
2009-01-01
According to industry representatives, computer science education needs to emphasize the processes involved in solving computing problems rather than their solutions. Most of the current assessment tools used by universities and computer science departments analyze student answers to problems rather than investigating the processes involved in…
ERIC Educational Resources Information Center
Bisogno, Janet; JeanPierre, Bobby
2008-01-01
The West Point Bridge Design (WPBD) building project engages students in project-based learning by giving them a real-life problem to solve. By using technology, students are able to become involved in solving problems that they normally would not encounter. Involvement with interactive websites, such as WPBD, assists students in using…
Problem Solving through Paper Folding
ERIC Educational Resources Information Center
Wares, Arsalan
2014-01-01
The purpose of this article is to describe a couple of challenging mathematical problems that involve paper folding. These problem-solving tasks can be used to foster geometric and algebraic thinking among students. The context of paper folding makes some of the abstract mathematical ideas involved relatively concrete. When implemented…
Standardization of 237Np by the CIEMAT/NIST LSC tracer method
Gunther
2000-03-01
The standardization of 237Np presents some difficulties: several groups of alpha, beta and gamma radiation, chemical problems with the daughter nuclide 233Pa, an incomplete radioactive equilibrium after sample preparation, high conversion of some gamma transitions. To solve the chemical problems, a sample composition involving the Ultima Gold AB scintillator and a high concentration of HCl is used. Standardization by the CIEMAT/NIST method and by pulse shape discrimination is described. The results agree within 0.1% with those obtained by two other methods.
NASA Technical Reports Server (NTRS)
Adams, Gaynor J.; Dugan, Duane W.
1952-01-01
A method of analysis based on slender-wing theory is developed to investigate the characteristics in roll of slender cruciform wings and wing-body combinations. The method makes use of the conformal mapping processes of classical hydrodynamics which transform the region outside a circle into the region outside an arbitrary arrangement of line segments intersecting at the origin. The method of analysis may be utilized to solve other slender cruciform wing-body problems involving arbitrarily assigned boundary conditions. (author)
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of the parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves interpreting the triangular matrix as a directed graph and analyzing the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on the Cray YMP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph based backtracking algorithm called omega-CDBT, which shares merits and overcomes the weaknesses of both decomposition and search approaches.
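For readers unfamiliar with the setting, the following Python sketch shows plain chronological backtracking on a toy graph-colouring CSP; it illustrates the search framework that structure-exploiting methods such as the proposed omega-CDBT improve upon, not the omega-CDBT algorithm itself. The problem instance is assumed purely for illustration.

def backtrack(assignment, variables, domains, constraints):
    # plain chronological backtracking over partial assignments
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]
    return None

# toy graph-colouring CSP (assumed): colour a triangle plus one pendant node
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
constraints = [
    (lambda a, e=e: a.get(e[0]) is None or a.get(e[1]) is None or a[e[0]] != a[e[1]])
    for e in edges
]
print(backtrack({}, variables, domains, constraints))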
NASA Astrophysics Data System (ADS)
Sampoorna, M.; Trujillo Bueno, J.
2010-04-01
The linearly polarized solar limb spectrum that is produced by scattering processes contains a wealth of information on the physical conditions and magnetic fields of the solar outer atmosphere, but the modeling of many of its strongest spectral lines requires solving an involved non-local thermodynamic equilibrium radiative transfer problem accounting for partial redistribution (PRD) effects. Fast radiative transfer methods for the numerical solution of PRD problems are also needed for a proper treatment of hydrogen lines when aiming at realistic time-dependent magnetohydrodynamic simulations of the solar chromosphere. Here we show how the two-level atom PRD problem with and without polarization can be solved accurately and efficiently via the application of highly convergent iterative schemes based on the Gauss-Seidel and successive overrelaxation (SOR) radiative transfer methods that had been previously developed for the complete redistribution case. Of particular interest is the Symmetric SOR method, which allows us to reach the fully converged solution with an order of magnitude of improvement in the total computational time with respect to the Jacobi-based local accelerated lambda iteration method.
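As a minimal illustration of the SOR building block referred to above, the Python sketch below applies successive overrelaxation to a generic linear system (a 1D Laplacian test problem, assumed for illustration); the coupled PRD radiative transfer equations of the paper are not reproduced.

import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, maxit=10000):
    # successive overrelaxation sweeps: Gauss-Seidel update blended with the old iterate
    n = len(b)
    x = np.zeros(n)
    for it in range(maxit):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, it + 1
    return x, maxit

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian test matrix
b = np.ones(n)
x, iters = sor(A, b, omega=1.8)
print("iterations:", iters, "residual:", np.linalg.norm(A @ x - b))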
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection problems conventionally mean 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on the linear Mean Absolute Deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection methods with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology only involves the part of the tail of the distribution that contributes to high losses. This approach performs better when working with non-symmetric return distributions. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
An Intelligent Information System for forest management: NED/FVS integration
J. Wang; W.D. Potter; D. Nute; F. Maier; H. Michael Rauscher; M.J. Twery; S. Thomasma; P. Knopp
2002-01-01
An Intelligent Information System (IIS) is viewed as composed of a unified knowledge base, database, and model base. This allows an IIS to provide responses to user queries regardless of whether the query process involves a data retrieval, an inference, a computational method, a problem solving module, or some combination of these. NED-2 is a full-featured intelligent...
Teachers' Attitudes Toward WebQuests as a Method of Teaching
ERIC Educational Resources Information Center
Perkins, Robert; McKnight, Margaret L.
2005-01-01
One of the latest uses of technology gaining popular status in education is the WebQuest, a process that involves students using the World Wide Web to solve a problem. The goals of this project are to: (a) determine if teachers are using WebQuests in their classrooms; (b) ascertain whether teachers feel WebQuests are effective for teaching…
ERIC Educational Resources Information Center
Sangwin, Christopher J.; Jones, Ian
2017-01-01
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Method of Conjugate Radii for Solving Linear and Nonlinear Systems
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.
1999-01-01
This paper describes a method to solve a system of N linear equations in N steps. A quadratic form is developed involving the sum of the squares of the residuals of the equations. Equating the quadratic form to a constant yields a surface which is an ellipsoid. For different constants, a family of similar ellipsoids can be generated. Starting at an arbitrary point an orthogonal basis is constructed and the center of the family of similar ellipsoids is found in this basis by a sequence of projections. The coordinates of the center in this basis are the solution of linear system of equations. A quadratic form in N variables requires N projections. That is, the current method is an exact method. It is shown that the sequence of projections is equivalent to a special case of the Gram-Schmidt orthogonalization process. The current method enjoys an advantage not shared by the classic Method of Conjugate Gradients. The current method can be extended to nonlinear systems without modification. For nonlinear equations the Method of Conjugate Gradients has to be augmented with a line-search procedure. Results for linear and nonlinear problems are presented.
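The following Python sketch conveys the flavour of such an exact projection method: it builds mutually conjugate directions by Gram-Schmidt with respect to the Hessian of the quadratic form 0.5*||Ax - b||^2 and reaches the centre of the ellipsoid family in N projections. It is a generic conjugate-direction solve written for illustration under that reading of the abstract, not the paper's exact algorithm, and the small test system is assumed.

import numpy as np

def conjugate_direction_solve(A, b):
    # exact minimizer of 0.5*||Ax - b||^2 reached in n projections along
    # mutually conjugate directions built by Gram-Schmidt with respect to M = A^T A
    n = A.shape[1]
    M = A.T @ A
    g = A.T @ b
    dirs = []
    x = np.zeros(n)
    for k in range(n):
        d = np.zeros(n)
        d[k] = 1.0                                   # start from a coordinate direction
        for p in dirs:                               # M-orthogonalize against earlier directions
            d -= (p @ (M @ d)) / (p @ (M @ p)) * p
        dirs.append(d)
        t = (d @ (g - M @ x)) / (d @ (M @ d))        # projection step toward the ellipsoid centre
        x += t * d
    return x

A = np.array([[3.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 4.0], [1.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
x = conjugate_direction_solve(A, b)
print("normal-equation residual:", np.linalg.norm(A.T @ (A @ x - b)))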
NASA Technical Reports Server (NTRS)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
An immersed boundary method for the compressible Navier-Stokes equation and the additional infrastructure that is needed to solve moving boundary problems and fully coupled fluid-structure interaction is described. All the methods described in this paper were implemented in NASA's LAVA solver framework. The underlying immersed boundary method is based on the locally stabilized immersed boundary method that was previously introduced by the authors. In the present paper this method is extended to account for all aspects that are involved for fluid structure interaction simulations, such as fast geometry queries and stencil computations, the treatment of freshly cleared cells, and the coupling of the computational fluid dynamics solver with a linear structural finite element method. The current approach is validated for moving boundary problems with prescribed body motion and fully coupled fluid structure interaction problems in 2D and 3D. As part of the validation procedure, results from the second AIAA aeroelastic prediction workshop are also presented. The current paper is regarded as a proof of concept study, while more advanced methods for fluid structure interaction are currently being investigated, such as geometric and material nonlinearities, and advanced coupling approaches.
Dynamic Deployment Simulations of Inflatable Space Structures
NASA Technical Reports Server (NTRS)
Wang, John T.
2005-01-01
The feasibility of using the Control Volume (CV) method and the Arbitrary Lagrangian Eulerian (ALE) method in LSDYNA to simulate the dynamic deployment of inflatable space structures is investigated. The CV and ALE methods were used to predict the inflation deployments of three folded tube configurations. The CV method was found to be a simple and computationally efficient method that may be adequate for modeling slow inflation deployment, since the inertia of the inflation gas can be neglected. The ALE method was found to be very computationally intensive, since it involves solving three conservation equations of the fluid as well as dealing with complex fluid-structure interactions.
NASA Astrophysics Data System (ADS)
Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.
2017-10-01
This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves an iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results of calculations for examples with exact solutions, as well as for cases in which the additional condition is specified with random errors, are presented. The results show the high efficiency of the iterative conjugate gradient method for the numerical solution of this problem.
The Development, Implementation, and Evaluation of a Problem Solving Heuristic
ERIC Educational Resources Information Center
Lorenzo, Mercedes
2005-01-01
Problem-solving is one of the main goals in science teaching and is something many students find difficult. This research reports on the development, implementation and evaluation of a problem-solving heuristic. This heuristic intends to help students to understand the steps involved in problem solving (metacognitive tool), and to provide them…
Using Students' Representations Constructed during Problem Solving to Infer Conceptual Understanding
ERIC Educational Resources Information Center
Domin, Daniel; Bodner, George
2012-01-01
The differences in the types of representations constructed during successful and unsuccessful problem-solving episodes were investigated within the context of graduate students working on problems that involve concepts from 2D-NMR. Success at problem solving was established by having the participants solve five problems relating to material just…
ERIC Educational Resources Information Center
Karatas, Ilhan; Baki, Adnan
2013-01-01
Problem solving is recognized as an important life skill involving a range of processes including analyzing, interpreting, reasoning, predicting, evaluating and reflecting. For that reason educating students as efficient problem solvers is an important role of mathematics education. Problem solving skill is the centre of mathematics curriculum.…
[Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (2)].
Murase, Kenya
2015-01-01
In this issue, symbolic methods for solving differential equations are first introduced. Among the symbolic methods, the Laplace transform method is introduced together with some examples, in which this method is applied to solving the differential equations derived from a two-compartment kinetic model and an equivalent circuit model for membrane potential. Second, series expansion methods for solving differential equations are introduced together with some examples, in which these methods are used to solve Bessel's and Legendre's differential equations. In the next issue, simultaneous differential equations and various methods for solving them will be introduced together with some examples in medical physics.
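In the same spirit, the following SymPy sketch applies the Laplace transform method to a two-compartment kinetic model: the ODEs are transformed, the algebraic system is solved in the s-domain, and the result is inverted back to the time domain. The rate constants and unit initial dose are assumed purely for illustration and are not taken from the article.

import sympy as sp

t, s = sp.symbols("t s", positive=True)
C1h, C2h = sp.symbols("C1h C2h")                 # Laplace-domain concentrations
k10, k12, k21 = sp.Rational(1, 2), sp.Rational(3, 10), sp.Rational(1, 5)   # assumed rate constants

# transform of dC1/dt = -(k10 + k12)*C1 + k21*C2 with C1(0) = 1 (unit dose)
# and          dC2/dt =   k12*C1 - k21*C2      with C2(0) = 0
eqs = [sp.Eq(s*C1h - 1, -(k10 + k12)*C1h + k21*C2h),
       sp.Eq(s*C2h, k12*C1h - k21*C2h)]
sol = sp.solve(eqs, [C1h, C2h])

# invert back to the time domain (the result may carry a Heaviside(t) factor, equal to 1 for t > 0)
C1_t = sp.inverse_laplace_transform(sp.simplify(sol[C1h]), s, t)
print(sp.simplify(C1_t))                          # a bi-exponential decay in t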
Terahertz reflection imaging using Kirchhoff migration.
Dorney, T D; Johnson, J L; Van Rudd, J; Baraniuk, R G; Symes, W W; Mittleman, D M
2001-10-01
We describe a new imaging method that uses single-cycle pulses of terahertz (THz) radiation. This technique emulates data-collection and image-processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a simple migration procedure to solve the inverse problem; this permits us to reconstruct the location and shape of targets. These results demonstrate the feasibility of the THz system as a test-bed for the exploration of new seismic processing methods involving complex model systems.
Solution of transonic flows by an integro-differential equation method
NASA Technical Reports Server (NTRS)
Ogana, W.
1978-01-01
Solutions of steady transonic flow past a two-dimensional airfoil are obtained from a singular integro-differential equation which involves a tangential derivative of the perturbation velocity potential. Subcritical flows are solved by taking central differences everywhere. For supercritical flows with shocks, central differences are taken in subsonic flow regions and backward differences in supersonic flow regions. The method is applied to a nonlifting parabolic-arc airfoil and to a lifting NACA 0012 airfoil. Results compare favorably with those of finite-difference schemes.
The use of rational functions in numerical quadrature
NASA Astrophysics Data System (ADS)
Gautschi, Walter
2001-08-01
Quadrature problems involving functions that have poles outside the interval of integration can profitably be solved by methods that are exact not only for polynomials of appropriate degree, but also for rational functions having the same (or the most important) poles as the function to be integrated. Constructive and computational tools for accomplishing this are described and illustrated in a number of quadrature contexts. The superiority of such rational/polynomial methods is shown by an analysis of the remainder term and documented by numerical examples.
Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow
NASA Astrophysics Data System (ADS)
Aida-zade, K. R.; Ashrafova, E. R.
2017-12-01
An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.
Paper simulation techniques in user requirements analysis for interactive computer systems
NASA Technical Reports Server (NTRS)
Ramsey, H. R.; Atwood, M. E.; Willoughby, J. K.
1979-01-01
This paper describes the use of a technique called 'paper simulation' in the analysis of user requirements for interactive computer systems. In a paper simulation, the user solves problems with the aid of a 'computer', as in normal man-in-the-loop simulation. In this procedure, though, the computer does not exist but is simulated by the experimenters. This allows simulated problem solving early in the design effort, and allows the properties and degree of structure of the system and its dialogue to be varied. The technique, and a method of analyzing the results, are illustrated with examples from a recent paper simulation exercise involving a Space Shuttle flight design task.
Maji, Kaushik; Kouri, Donald J
2011-03-28
We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N^2 scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.
Dissipation-preserving spectral element method for damped seismic wave equations
NASA Astrophysics Data System (ADS)
Cai, Wenjun; Zhang, Huai; Wang, Yushun
2017-12-01
This article describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods to treat non-conservative time evolution problems, which has superior behaviors in long-time stability and dissipation preservation. To reveal the intrinsic dissipative properties of the model equations, we first reformulate the original systems in their equivalent conformal multi-symplectic structures and derive the corresponding conformal symplectic conservation laws. We thereafter separate each system into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on the splitting methodology, we solve the two subsystems respectively. The dissipative one is cheaply solved by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time and the spectral element method in space to cover the circumstances in realistic geological structures involving complex free-surface topography. The Strang composition method is thereby adopted to concatenate the corresponding two parts of solutions and generate the completed conformal symplectic method. A relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of our proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability of modeling Rayleigh waves in elastic wave propagation. More comprehensive numerical experiments are presented to investigate the long-time simulation, low dispersion and energy conservation properties of the conformal symplectic methods in both attenuating homogeneous and heterogeneous media.
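A toy Python sketch of the splitting idea follows: for a single damped oscillator q'' + 2*gamma*q' + omega^2*q = 0, the conservative part is advanced with a symplectic Stormer-Verlet step and the dissipative part with its exact exponential decay, concatenated by Strang composition. All parameters are assumed for illustration; the spectral element discretization and wave equations of the paper are not reproduced.

import numpy as np

omega, gamma, dt, nsteps = 2.0, 0.05, 0.01, 5000

def verlet(q, p, dt):
    # symplectic Stormer-Verlet step for the conservative part q' = p, p' = -omega^2 * q
    p -= 0.5 * dt * omega**2 * q
    q += dt * p
    p -= 0.5 * dt * omega**2 * q
    return q, p

def dissipate(p, dt):
    # exact flow of the dissipative part p' = -2*gamma*p
    return p * np.exp(-2.0 * gamma * dt)

q, p = 1.0, 0.0
for _ in range(nsteps):
    p = dissipate(p, 0.5 * dt)          # Strang composition: D(dt/2) o H(dt) o D(dt/2)
    q, p = verlet(q, p, dt)
    p = dissipate(p, 0.5 * dt)

energy = 0.5 * p**2 + 0.5 * omega**2 * q**2
# the numerical energy should track the exact dissipation envelope E(0)*exp(-2*gamma*t)
print("numerical energy:", energy, "envelope:", 0.5 * omega**2 * np.exp(-2 * gamma * dt * nsteps))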
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
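For context, the following Python sketch shows the plain DIIS extrapolation that LCIIS modifies: given a history of iterates and their error vectors, the coefficients minimizing the norm of the combined error (subject to summing to one) are obtained from a small bordered linear system. The toy linear fixed-point map stands in for the SCF Fock/density update and is assumed purely for illustration.

import numpy as np

def diis_coefficients(errors):
    # solve the bordered system enforcing sum(c) = 1 while minimizing ||sum_i c_i e_i||
    m = len(errors)
    B = np.empty((m + 1, m + 1))
    B[:m, :m] = [[e_i @ e_j for e_j in errors] for e_i in errors]
    B[m, :m] = B[:m, m] = -1.0
    B[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    return np.linalg.solve(B, rhs)[:m]

def g(x):
    # toy linear fixed-point map (assumed), with fixed point x* = g(x*)
    A = np.array([[0.6, 0.2], [0.1, 0.5]])
    return A @ x + np.array([1.0, 2.0])

x = np.zeros(2)
hist_x, hist_e = [], []
for it in range(20):
    e = g(x) - x
    hist_x.append(x.copy())
    hist_e.append(e)
    if np.linalg.norm(e) < 1e-12:
        break
    c = diis_coefficients(hist_e[-5:])        # keep a short history, as DIIS codes do
    x = sum(ci * (xi + ei) for ci, xi, ei in zip(c, hist_x[-5:], hist_e[-5:]))
print("converged in", it, "iterations to", x)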
NASA Astrophysics Data System (ADS)
Sukmawati, Zuhairoh, Faihatuz
2017-05-01
The purpose of this research was to develop an authentic assessment model based on a showcase portfolio for learning mathematical problem solving. This research used the research and development (R&D) method, which consists of four stages of development: Phase I, conducting a preliminary study; Phase II, determining the purpose of the development and preparing the initial model; Phase III, trial testing of the instrument for the initial draft model and the initial product. The respondents of this research were students of SMAN 8 and SMAN 20 Makassar. Data were collected through observation, interviews, documentation, a student questionnaire, and tests of mathematical problem-solving ability, and were analyzed with descriptive and inferential statistics. The results of this research are an authentic assessment model design based on a showcase portfolio which involves: 1) steps in implementing the showcase-based authentic assessment, an assessment rubric for cognitive aspects, an assessment rubric for affective aspects, and an assessment rubric for skill aspects; 2) the students' average problem-solving ability, scored using the authentic assessment based on the showcase portfolio, was in the high category, and the students' responses were in the good category.
Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.
Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng
2015-02-01
This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver that smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve a sparse-only or low-rank-only minimization problem with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones that depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
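To illustrate the IRLS principle on the simplest possible instance, the Python sketch below recovers a sparse vector from underdetermined linear measurements by iteratively reweighted least squares with a gradually decreasing smoothing parameter; the data are synthetic and the joint low-rank-plus-sparse formulation of the paper is not reproduced.

import numpy as np

def irls_l1(A, b, n_iter=100):
    # weighted least-norm iterations approximating min ||x||_1 subject to Ax = b
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # minimum-norm starting point
    eps = 1.0
    for _ in range(n_iter):
        W_inv = np.diag(np.abs(x) + eps)          # inverse of the current IRLS weight matrix
        x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, b)
        eps = max(eps * 0.7, 1e-8)                # gradually sharpen the reweighting
    return x

rng = np.random.default_rng(1)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = irls_l1(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))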
Focus group discussion in mathematical physics learning
NASA Astrophysics Data System (ADS)
Ellianawati; Rudiana, D.; Sabandar, J.; Subali, B.
2018-03-01
The Focus Group Discussion (FGD) activity in Mathematical Physics learning has helped students perform the stages of problem solving reflectively. The FGD was implemented to explore the problems and find the right strategy to improve the students' ability to solve problems accurately, which is one component of reflective thinking that has been difficult to improve. The research method used is descriptive qualitative, using a single-subject response of a Physics student. During the FGD process, one student was observed for the development of her reflective thinking in solving a physics problem. The strategy chosen in the discussion activity was the Cognitive Apprenticeship-Instruction (CA-I) syntax. The results show that, after going through a series of discussion stages, the student's reflective thinking skills increased significantly. The scaffolding stage in the CA-I model plays an important role in the process of solving physics problems accurately. Students are able to recognize and formulate problems by describing problem sketches, identifying the variables involved, applying mathematical equations in accordance with physics concepts, executing solutions accurately, and evaluating by explaining the solution in various contexts.
Improving insight and non-insight problem solving with brief interventions.
Wen, Ming-Ching; Butler, Laurie T; Koutstaal, Wilma
2013-02-01
Developing brief training interventions that benefit different forms of problem solving is challenging. In earlier research, Chrysikou (2006) showed that engaging in a task requiring generation of alternative uses of common objects improved subsequent insight problem solving. These benefits were attributed to a form of implicit transfer of processing involving enhanced construction of impromptu, on-the-spot or 'ad hoc' goal-directed categorizations of the problem elements. Following this, it is predicted that the alternative uses exercise should benefit abilities that govern goal-directed behaviour, such as fluid intelligence and executive functions. Similarly, an indirect intervention - self-affirmation (SA) - that has been shown to enhance cognitive and executive performance after self-regulation challenge and when under stereotype threat, may also increase adaptive goal-directed thinking and likewise should bolster problem-solving performance. In Experiment 1, brief single-session interventions, involving either alternative uses generation or SA, significantly enhanced both subsequent insight and visual-spatial fluid reasoning problem solving. In Experiment 2, we replicated the finding of benefits of both alternative uses generation and SA on subsequent insight problem-solving performance, and demonstrated that the underlying mechanism likely involves improved executive functioning. Even brief cognitive- and social-psychological interventions may substantially bolster different types of problem solving and may exert largely similar facilitatory effects on goal-directed behaviours. © 2012 The British Psychological Society.
Involving youth in program decision-making: how common and what might it do for youth?
Akiva, Thomas; Cortina, Kai S; Smith, Charles
2014-11-01
The strategy of sharing program decision-making with youth in youth programs, a specific form of youth-adult partnership, is widely recommended in practitioner literature; however, empirical study is relatively limited. We investigated the prevalence and correlates of youth program decision-making practices (e.g., asking youth to help decide what activities are offered), using single-level and multilevel methods with a cross-sectional dataset of 979 youth attending 63 multipurpose after-school programs (average age of youth = 11.4, 53 % female). The prevalence of such practices was relatively high, particularly for forms that involved low power sharing such as involving youth in selecting the activities a program offers. Hierarchical linear modeling revealed positive associations between youth program decision-making practices and youth motivation to attend programs. We also found positive correlations between decision-making practices and youth problem-solving efficacy, expression efficacy, and empathy. Significant interactions with age suggest that correlations with problem solving and empathy are more pronounced for older youth. Overall, the findings suggest that involving youth in program decision-making is a promising strategy for promoting youth motivation and skill building, and in some cases this is particularly the case for older (high school-age) youth.
Testing the effectiveness of problem-based learning with learning-disabled students in biology
NASA Astrophysics Data System (ADS)
Guerrera, Claudia Patrizia
The purpose of the present study was to investigate the effects of problem-based learning (PBL) with learning-disabled (LD) students. Twenty-four students (12 dyads) classified as LD and attending a school for the learning-disabled participated in the study. Students engaged in either a computer-based environment involving BioWorld, a hospital simulation designed to teach biology students problem-solving skills, or a paper-and-pencil version based on the computer program. A hybrid model of learning was adopted whereby students were provided with direct instruction on the digestive system prior to participating in a problem-solving activity. Students worked in dyads and solved three problems involving the digestive system in either a computerized or a paper-and-pencil condition. The experimenter acted as a coach to assist students throughout the problem-solving process. A follow-up study was conducted, one month later, to measure the long-term learning gains. Quantitative and qualitative methods were used to analyze three types of data: process data, outcome data, and follow-up data. Results from the process data showed that all students engaged in effective collaboration and became more systematic in their problem solving over time. Findings from the outcome and follow-up data showed that students in both treatment conditions, made both learning and motivational gains and that these benefits were still evident one month later. Overall, results demonstrated that the computer facilitated students' problem solving and scientific reasoning skills. Some differences were noted in students' collaboration and the amount of assistance required from the coach in both conditions. Thus, PBL is an effective learning approach with LD students in science, regardless of the type of learning environment. These results have implications for teaching science to LD students, as well as for future designs of educational software for this population.
Analog Processor To Solve Optimization Problems
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.
1993-01-01
Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.
Wavepacket propagation using time-sliced semiclassical initial value methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, Brett B.; Reimers, Jeffrey R.; School of Chemistry, University of Sydney, Sydney NSW 2006
2004-12-22
A new semiclassical initial value representation (SC-IVR) propagator and a SC-IVR propagator originally introduced by Kay [J. Chem. Phys. 100, 4432 (1994)] are investigated for use in the split-operator method for solving the time-dependent Schroedinger equation. It is shown that the SC-IVR propagators can be derived from a procedure involving modified Filinov filtering of the Van Vleck expression for the semiclassical propagator. The two SC-IVR propagators have been selected for investigation because they avoid the need to perform a coherent state basis set expansion that is necessary in other time-slicing propagation schemes. An efficient scheme for solving the propagators is introduced and can be considered to be a semiclassical form of the effective propagators of Makri [Chem. Phys. Lett. 159, 489 (1989)]. Results from applications to a one-dimensional, two-dimensional, and three-dimensional Hamiltonian for a double-well potential are presented.
Control system estimation and design for aerospace vehicles with time delay
NASA Technical Reports Server (NTRS)
Allgaier, G. R.; Williams, T. L.
1972-01-01
The problems of estimation and control of discrete, linear, time-varying systems are considered. Previous solutions to these problems involved either approximate techniques, open-loop control solutions, or results which required excessive computation. The estimation problem is solved by two different methods, both of which yield the identical algorithm for determining the optimal filter. The partitioned results achieve a substantial reduction in computation time and storage requirements over the expanded solution, however. The results reduce to the Kalman filter when no delays are present in the system. The control problem is also solved by two different methods, both of which yield identical algorithms for determining the optimal control gains. The stochastic control is shown to be identical to the deterministic control, thus extending the separation principle to time delay systems. The results obtained reduce to the familiar optimal control solution when no time delays are present in the system.
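Since the abstract notes that the estimator reduces to the Kalman filter when no delays are present, the following Python sketch of the standard discrete Kalman filter on an assumed constant-velocity tracking model may help fix ideas; it does not implement the delayed-system algorithm itself.

import numpy as np

rng = np.random.default_rng(2)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition (assumed model)
H = np.array([[1.0, 0.0]])               # position-only measurement
Q, R = 0.01 * np.eye(2), np.array([[1.0]])

x_true = np.array([0.0, 1.0])
x_est, P = np.zeros(2), np.eye(2)
for _ in range(50):
    x_true = F @ x_true + rng.multivariate_normal([0.0, 0.0], Q)   # simulated truth
    z = H @ x_true + rng.multivariate_normal([0.0], R)             # noisy measurement
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
print("final estimate:", x_est, "truth:", x_true)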
NASA Astrophysics Data System (ADS)
Saad, Shakila; Wan Jaafar, Wan Nurhadani; Jamil, Siti Jasmida
2013-04-01
The standard Traveling Salesman Problem (TSP) is the classical single-salesman problem, while the Multiple Traveling Salesman Problem (MTSP) is an extension of the TSP in which more than one salesman is involved. The objective of the MTSP is to find the least costly route the salesmen can take if each of a list of n cities is to be visited exactly once before returning to the home city. There are a few methods that can be used to solve the MTSP. The objective of this research is to implement an exact method called the Branch-and-Bound (B&B) algorithm. Briefly, the idea of the B&B algorithm is to start with the associated Assignment Problem (AP). A Breadth-First Search (BFS) branching strategy is applied to both the TSP and the MTSP. Problems with 11 city nodes are implemented for both cases, and the solutions are presented.
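A minimal Python sketch of branch and bound with breadth-first branching on a single-salesman TSP follows; the simple lower bound (cost so far plus each remaining city's cheapest admissible outgoing edge) is an illustrative choice and not the assignment-problem relaxation used in the paper, and the 5-city distance matrix is assumed.

from collections import deque

D = [[0, 29, 20, 21, 16],        # assumed symmetric distance matrix for 5 cities
     [29, 0, 15, 17, 28],
     [20, 15, 0, 28, 23],
     [21, 17, 28, 0, 25],
     [16, 28, 23, 25, 0]]
n = len(D)

def lower_bound(path, cost):
    # cost so far plus, for the current city and every unvisited city,
    # its cheapest edge to a still-admissible destination (a valid relaxation)
    remaining = [c for c in range(n) if c not in path]
    lb = cost
    for c in remaining + [path[-1]]:
        lb += min(D[c][j] for j in range(n) if j != c and (j in remaining or j == 0))
    return lb

best_cost, best_tour = float("inf"), None
queue = deque([([0], 0)])                        # breadth-first branching over partial tours
while queue:
    path, cost = queue.popleft()
    if len(path) == n:                           # complete tour: close it and compare
        total = cost + D[path[-1]][0]
        if total < best_cost:
            best_cost, best_tour = total, path + [0]
        continue
    for city in range(n):
        if city not in path:
            new_path, new_cost = path + [city], cost + D[path[-1]][city]
            if lower_bound(new_path, new_cost) < best_cost:   # prune dominated branches
                queue.append((new_path, new_cost))
print("optimal tour:", best_tour, "length:", best_cost)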
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform based on a Fourier series method is developed to meet the need of solving computationally intensive problems involving oscillatory water-level responses to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementation of MPI techniques with a distributed memory architecture speeds up the processing and improves efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience with MPI who wish to get a quick start in parallel computing.
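A serial Python sketch of the underlying Fourier series inversion (a Dubner-Abate/Durbin-type trapezoidal approximation of the Bromwich integral) is given below, checked against a transform with a known inverse; the parameters and test function are assumed, and the MPI parallelization of the package is not reproduced.

import numpy as np

def invert_laplace_fourier(F, t, T, a, N=2000):
    # trapezoidal Bromwich inversion:
    # f(t) ~ (e^{a t}/T) * [ Re(F(a))/2 + sum_k Re( F(a + i k pi/T) * exp(i k pi t/T) ) ]
    k = np.arange(1, N + 1)
    s = a + 1j * k * np.pi / T
    series = 0.5 * np.real(F(a + 0j)) + np.sum(np.real(F(s) * np.exp(1j * k * np.pi * t / T)))
    return np.exp(a * t) / T * series

F = lambda s: 1.0 / (s + 1.0) ** 2        # test transform with known inverse t*exp(-t)
T = 10.0                                   # period parameter, roughly 2x the largest time of interest
a = 0.9                                    # contour shift, to the right of all singularities of F
for t in (0.5, 1.0, 2.0, 4.0):
    print(t, invert_laplace_fourier(F, t, T, a), "exact:", t * np.exp(-t))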
A model for solving the prescribed burn planning problem.
Rachmawati, Ramya; Ozlen, Melih; Reinke, Karin J; Hearne, John W
2015-01-01
The increasing frequency of destructive wildfires, with a consequent loss of life and property, has led fire and land management agencies to initiate extensive fuel management programs. This involves long-term planning of fuel reduction activities such as prescribed burning or mechanical clearing. In this paper, we propose a mixed integer programming (MIP) model that determines when and where fuel reduction activities should take place. The model takes into account multiple vegetation types in the landscape and their tolerance to the frequency of fire events, and keeps track of the age of each vegetation class in each treatment unit. The objective is to minimise fuel load over the planning horizon. The complexity of scheduling fuel reduction activities has led to the introduction of sophisticated mathematical optimisation methods. While these approaches can provide optimal solutions, they can be computationally expensive, particularly for fuel management planning which extends across the landscape and spans long-term planning horizons. This raises the question of how much better exact modelling approaches are, compared to simpler heuristic approaches, in the solutions they produce. To answer this question, the proposed model is run using an exact MIP approach (with a commercial MIP solver) and two heuristic approaches that decompose the problem into multiple single-period subproblems. The first heuristic approach solves the single-period problems as Knapsack Problems (KP) using an exact MIP approach. The second heuristic approach solves the single-period subproblems using a greedy heuristic (see the sketch below). The three methods are compared in terms of model tractability, computational time and objective values. The model was tested using randomised data from 711 treatment units in the Barwon-Otway district of Victoria, Australia. Solutions for the exact MIP could be obtained only for planning horizons of up to 15 years using a standard implementation of CPLEX. Both heuristic approaches can solve significantly larger problems, involving 100-year or even longer planning horizons. Furthermore, there are no substantial differences in the solutions produced by the three approaches. It is concluded that for practical purposes a heuristic method is to be preferred to the exact MIP approach.
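The greedy single-period idea can be sketched in a few lines of Python: each period, treat the units carrying the most fuel until the area budget is exhausted, then age the landscape and reset treated units. The unit data, budget and saturating fuel-accumulation model below are assumed purely for illustration and are not the paper's data or objective.

import numpy as np

rng = np.random.default_rng(3)
n_units, horizon, budget = 30, 10, 200.0     # budget = hectares treatable per year (assumed)
area = rng.uniform(5, 20, n_units)           # treatment unit areas
age = rng.integers(0, 15, n_units).astype(float)   # years since last treatment
fuel_per_ha = lambda a: np.minimum(a, 12.0)  # toy fuel model: load grows with age, then saturates

total_fuel = 0.0
for year in range(horizon):
    load = fuel_per_ha(age) * area
    spent, treated = 0.0, []
    for u in np.argsort(-load):              # greedy: treat the heaviest-fuel units first
        if spent + area[u] <= budget:
            spent += area[u]
            treated.append(u)
    age += 1.0
    age[treated] = 0.0                       # prescribed burn resets vegetation age
    total_fuel += fuel_per_ha(age) @ area    # landscape fuel load carried into the next year
print("cumulative fuel load over the horizon:", round(total_fuel, 1))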
An immersed boundary method for fluid-structure interaction with compressible multiphase flows
NASA Astrophysics Data System (ADS)
Wang, Li; Currao, Gaetano M. D.; Han, Feng; Neely, Andrew J.; Young, John; Tian, Fang-Bao
2017-10-01
This paper presents a two-dimensional immersed boundary method for fluid-structure interaction with compressible multiphase flows involving large structure deformations. This method involves three important parts: flow solver, structure solver and fluid-structure interaction coupling. In the flow solver, the compressible multiphase Navier-Stokes equations for ideal gases are solved by a finite difference method based on a staggered Cartesian mesh, where a fifth-order accuracy Weighted Essentially Non-Oscillation (WENO) scheme is used to handle spatial discretization of the convective term, a fourth-order central difference scheme is employed to discretize the viscous term, the third-order TVD Runge-Kutta scheme is used to discretize the temporal term, and the level-set method is adopted to capture the multi-material interface. In this work, the structure considered is a geometrically non-linear beam which is solved by using a finite element method based on the absolute nodal coordinate formulation (ANCF). The fluid dynamics and the structure motion are coupled in a partitioned iterative manner with a feedback penalty immersed boundary method where the flow dynamics is defined on a fixed Lagrangian grid and the structure dynamics is described on a global coordinate. We perform several validation cases (including fluid over a cylinder, structure dynamics, flow induced vibration of a flexible plate, deformation of a flexible panel induced by shock waves in a shock tube, an inclined flexible plate in a hypersonic flow, and shock-induced collapse of a cylindrical helium cavity in the air), and compare the results with experimental and other numerical data. The present results agree well with the published data and the current experiment. Finally, we further demonstrate the versatility of the present method by applying it to a flexible plate interacting with multiphase flows.
Abazarian, Elaheh; Baboli, M Teimourzadeh; Abazarian, Elham; Ghashghaei, F Esteki
2015-01-01
Diabetes is a highly prevalent disease that has affected 177 million people all over the world; because of this, these patients suffer from depression and anxiety and need special methods for controlling them. The aim of this research is to study the effect of problem-solving and decision-making skills on the tendency toward depression and anxiety. This research is a quasi-experimental (case-control) study. The population of the present study was all diabetic patients of Qaemshahr who were under the care of physicians in 2011-2012. Thirty files were selected randomly from this population and divided into two groups of 15 patients each (control and subject groups). The measurement tools were the Beck Depression Inventory (21 items) and the Zank anxiety questionnaire, which were distributed to the two groups. The subject group then participated in eight sessions teaching problem-solving and decision-making skills separately, while the second group (control group) did not receive any instruction. Finally, both groups completed a post-test, and the data obtained from the questionnaires were analyzed using analysis of variance. The results showed that teaching problem-solving and decision-making skills was very effective in reducing the diabetic patients' depression and anxiety.
NASA Technical Reports Server (NTRS)
Bless, Robert R.
1991-01-01
A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.
Benhammouda, Brahim
2016-01-01
Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by transformations such as index reduction before the ADM is applied. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantages of this technique are that, first, it avoids complex transformations like index reduction and leads to a simple general algorithm; and second, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration, where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
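The following sketch illustrates the basic Adomian decomposition machinery the abstract builds on, applied to a simple scalar ODE rather than a Hessenberg DAE; the nonlinearity, component list, and recursion are generic textbook ADM, not the authors' algorithm.

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

def adomian_polynomials(N, comps):
    """Adomian polynomials A_n for nonlinearity N(y), given solution components comps."""
    y_lam = sum(lam**k * yk for k, yk in enumerate(comps))
    return [sp.simplify(sp.diff(N(y_lam), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(len(comps))]

# Solve y' = y**2, y(0) = 1 (exact solution 1/(1 - t)) by ADM:
# y_0 = y(0), y_{n+1} = integral of A_n from 0 to t (constant of integration is zero here).
N = lambda y: y**2
comps = [sp.Integer(1)]
for n in range(4):
    A_n = adomian_polynomials(N, comps)[n]
    comps.append(sp.integrate(A_n, t))
print(sum(comps))   # 1 + t + t**2 + t**3 + t**4, the truncated series of 1/(1 - t)
```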
The Association of DRD2 with Insight Problem Solving.
Zhang, Shun; Zhang, Jinghuan
2016-01-01
Although the insight phenomenon has attracted great attention from psychologists, it is still largely unknown whether its variation in well-functioning human adults has a genetic basis. Several lines of evidence suggest that genes involved in dopamine (DA) transmission might be potential candidates. The present study explored for the first time the association of the dopamine D2 receptor gene (DRD2) with insight problem solving. Fifteen single-nucleotide polymorphisms (SNPs) covering DRD2 were genotyped in 425 unrelated healthy Chinese undergraduates, and were further tested for association with insight problem solving. Both single SNP and haplotype analysis revealed several associations of DRD2 SNPs and haplotypes with insight problem solving. In conclusion, the present study provides the first evidence for the involvement of DRD2 in insight problem solving; future studies are necessary to validate these findings.
Hybrid method for moving interface problems with application to the Hele-Shaw flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, T.Y.; Li, Zhilin; Osher, S.
In this paper, a hybrid approach which combines the immersed interface method with the level set approach is presented. The fast version of the immersed interface method is used to solve the differential equations whose solutions and their derivatives may be discontinuous across the interfaces due to the discontinuity of the coefficients and/or singular sources along the interfaces. The moving interfaces are then updated using the newly developed fast level set formulation, which involves computation only inside some small tubes containing the interfaces. This method combines the advantages of the two approaches and gives a second-order Eulerian discretization for interface problems. Several key steps in the implementation are addressed in detail. This new approach is then applied to Hele-Shaw flow, an unstable flow involving two fluids with very different viscosity. 40 refs., 10 figs., 3 tabs.
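The sketch below illustrates only the narrow-band ("fast") level-set update mentioned in the abstract, using first-order upwinding on a periodic grid; it is not the authors' immersed interface solver, and the uniform velocity field is an arbitrary stand-in.

```python
import numpy as np

def advect_level_set_narrow_band(phi, u, v, dx, dt, band_width):
    """One first-order upwind advection step for a level set function phi,
    updated only inside a narrow band around the zero level set, as in
    fast (local) level set formulations."""
    band = np.abs(phi) < band_width          # nodes near the interface
    # One-sided differences (simple upwinding); np.roll wraps at the edges,
    # so a periodic domain is assumed in this sketch.
    dxm = (phi - np.roll(phi, 1, axis=0)) / dx
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx
    dym = (phi - np.roll(phi, 1, axis=1)) / dx
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx
    phix = np.where(u > 0, dxm, dxp)
    phiy = np.where(v > 0, dym, dyp)
    phi_new = phi.copy()
    phi_new[band] = phi[band] - dt * (u[band] * phix[band] + v[band] * phiy[band])
    return phi_new

# Demo: a circle of radius 0.3 advected to the right by a uniform velocity field.
n = 128
x = np.linspace(-1, 1, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
phi = np.sqrt(X**2 + Y**2) - 0.3
u, v = np.full_like(phi, 0.5), np.zeros_like(phi)
dx = x[1] - x[0]
for _ in range(50):
    phi = advect_level_set_narrow_band(phi, u, v, dx, dt=0.5 * dx, band_width=6 * dx)
```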
Adding Resistances and Capacitances in Introductory Electricity
NASA Astrophysics Data System (ADS)
Efthimiou, C. J.; Llewellyn, R. A.
2005-09-01
All introductory physics textbooks, with or without calculus, cover the addition of both resistances and capacitances in series and in parallel as discrete summations. However, none includes problems that involve continuous versions of resistors in parallel or capacitors in series. This paper introduces a method for solving the continuous problems that is logical, straightforward, and within the mathematical preparation of students at the introductory level.
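A minimal worked example in the spirit of the paper, not taken from it: strips of a non-uniform conducting slab act as resistors in parallel (conductances add), and thin dielectric slices act as capacitors in series (inverse capacitances add), so the discrete sums become integrals. The symbols (slab length L, width W, thickness t, conductivity sigma(y), plate area A, gap d, permittivity epsilon(x)) are illustrative assumptions.

```latex
\begin{align*}
  dG &= \frac{\sigma(y)\,t\,dy}{L},
  & G &= \frac{t}{L}\int_{0}^{W}\sigma(y)\,dy, \\
  d\!\left(\frac{1}{C}\right) &= \frac{dx}{\varepsilon(x)\,A},
  & \frac{1}{C} &= \int_{0}^{d}\frac{dx}{\varepsilon(x)\,A}.
\end{align*}
```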
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and thereby obtain a convex combination of much smaller-scale matrix trace norm minimization problems. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
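For context, the basic operator behind trace-norm (nuclear-norm) minimization is singular value thresholding, which requires a full SVD at every call; the sketch below shows that operator, i.e. the per-iteration cost that TNCP is designed to avoid. It is illustrative only and not part of the TNCP algorithm.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the matrix trace
    (nuclear) norm, the basic building block of trace-norm minimization."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Each call needs a full SVD of the unfolded matrix, which is what methods
# like TNCP avoid for large tensors.
X = np.random.randn(200, 150)
print(np.linalg.norm(svt(X, 1.0), 'nuc') <= np.linalg.norm(X, 'nuc'))
```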
Optimization of an auto-thermal ammonia synthesis reactor using cyclic coordinate method
NASA Astrophysics Data System (ADS)
A-N Nguyen, T.; Nguyen, T.-A.; Vu, T.-D.; Nguyen, K.-T.; K-T Dao, T.; P-H Huynh, K.
2017-06-01
The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics, and refrigeration. In the literature, many works on the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also considered in the optimization problem, such as the temperature of the feed gas entering the catalyst zone and the initial nitrogen proportion. The optimization problem requires the maximization of an objective function that is a multivariable function subject to a number of equality constraints involving the solution of coupled differential equations, as well as an inequality constraint. The cyclic coordinate search was applied to solve the multivariable optimization problem. In each coordinate, the golden section method was applied to find the maximum value. The inequality constraints were treated using a penalty method. The coupled differential equation system was solved using the fourth-order Runge-Kutta method. The results obtained from this study are also compared to results from the literature.
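A minimal sketch of the optimization machinery described above: golden-section search on each coordinate inside a cyclic coordinate ascent loop. The objective used here is an arbitrary smooth stand-in for the reactor model, and no penalty handling of constraints or ODE integration is included.

```python
def golden_section_maximize(f, a, b, tol=1e-6):
    """Golden-section search for the maximum of a unimodal function on [a, b]."""
    invphi = (5 ** 0.5 - 1) / 2                       # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):                               # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                         # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

def cyclic_coordinate_ascent(f, x, bounds, sweeps=20):
    """Maximize f by optimizing one coordinate at a time with golden-section search."""
    x = list(x)
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(bounds):
            g = lambda xi: f([*x[:i], xi, *x[i + 1:]])
            x[i] = golden_section_maximize(g, lo, hi)
    return x

# Hypothetical smooth objective standing in for the reactor model.
f = lambda p: -(p[0] - 1.0) ** 2 - (p[1] - 2.0) ** 2 + 3.0
print(cyclic_coordinate_ascent(f, [0.0, 0.0], [(0, 5), (0, 5)]))   # about [1, 2]
```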
Determination of optimal self-drive tourism route using the orienteering problem method
NASA Astrophysics Data System (ADS)
Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah
2013-04-01
This study was conducted to determine the optimal travel routes for self-drive tourism based on the allocation of time and expense, by maximizing the total attraction score assigned to the cities involved. Self-drive tourism represents a type of tourism in which tourists hire or travel with their own vehicle; it involves only tourist destinations that can be linked by a road network. Normally, the traveling salesman problem (TSP) and the multiple traveling salesman problem (MTSP) are used for minimization problems, such as determining the shortest travel time or distance. This paper takes an alternative, maximization-based approach, which maximizes the attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem was used to determine the optimal travel route. This approach is extended to the team orienteering problem and the two methods were compared. These two models were solved using LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution compared to the orienteering problem model.
Engineering neural systems for high-level problem solving.
Sylvester, Jared; Reggia, James
2016-07-01
There is a long-standing, sometimes contentious debate in AI concerning the relative merits of a symbolic, top-down approach vs. a neural, bottom-up approach to engineering intelligent machine behaviors. While neurocomputational methods excel at lower-level cognitive tasks (incremental learning for pattern classification, low-level sensorimotor control, fault tolerance and processing of noisy data, etc.), they are largely non-competitive with top-down symbolic methods for tasks involving high-level cognitive problem solving (goal-directed reasoning, metacognition, planning, etc.). Here we take a step towards addressing this limitation by developing a purely neural framework named galis. Our goal in this work is to integrate top-down (non-symbolic) control of a neural network system with more traditional bottom-up neural computations. galis is based on attractor networks that can be "programmed" with temporal sequences of hand-crafted instructions that control problem solving by gating the activity retention of, communication between, and learning done by other neural networks. We demonstrate the effectiveness of this approach by showing that it can be applied successfully to solve sequential card matching problems, using both human performance and a top-down symbolic algorithm as experimental controls. Solving this kind of problem makes use of top-down attention control and the binding together of visual features in ways that are easy for symbolic AI systems but not for neural networks to achieve. Our model can not only be instructed on how to solve card matching problems successfully, but its performance also qualitatively (and sometimes quantitatively) matches the performance of both human subjects that we had perform the same task and the top-down symbolic algorithm that we used as an experimental control. We conclude that the core principles underlying the galis framework provide a promising approach to engineering purely neurocomputational systems for problem-solving tasks that in people require higher-level cognitive functions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Problem Solving, Scaffolding and Learning
ERIC Educational Resources Information Center
Lin, Shih-Yin
2012-01-01
Helping students to construct robust understanding of physics concepts and develop good problem-solving skills is a central goal in many physics classrooms. This thesis examines students' problem solving abilities from different perspectives and explores strategies to scaffold students' learning. In studies involving analogical problem solving…
NASA Astrophysics Data System (ADS)
Deshamukhya, Tuhin; Bhanja, Dipankar; Nath, Sujit; Maji, Ambarish; Choubey, Gautam
2017-07-01
The following study is concerned with the determination of the temperature distribution in porous fins under convective and insulated tip conditions. The authors have made an effort to study the effect of various important parameters involved in the transfer of heat through porous fins, as well as the temperature distribution along the fin length, for both convective and insulated tips. The resulting non-linear equation has been solved by the Adomian decomposition method and validated against a finite difference scheme using central differences and the Gauss-Seidel iterative method.
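The sketch below illustrates only the finite-difference/Gauss-Seidel side of such a validation, applied to the classical (non-porous) fin equation with an insulated tip so that it can be checked against the analytical solution; it is not the authors' porous-fin model.

```python
import numpy as np

def fin_temperature_gauss_seidel(m, L, n=50, tol=1e-10, max_iter=20000):
    """Solve the classical fin equation  theta'' - m^2 theta = 0,
    theta(0) = 1, theta'(L) = 0 (insulated tip), by central differences
    and Gauss-Seidel iteration."""
    h = L / n
    theta = np.zeros(n + 1)
    theta[0] = 1.0                              # dimensionless base temperature
    denom = 2.0 + (m * h) ** 2
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, n):
            new = (theta[i - 1] + theta[i + 1]) / denom
            diff = max(diff, abs(new - theta[i]))
            theta[i] = new
        new_tip = 2.0 * theta[n - 1] / denom    # ghost node: theta[n+1] = theta[n-1]
        diff = max(diff, abs(new_tip - theta[n]))
        theta[n] = new_tip
        if diff < tol:
            break
    return theta

# Compare the tip value with the analytical solution cosh(m(L - x))/cosh(mL) at x = L.
theta = fin_temperature_gauss_seidel(m=2.0, L=1.0)
print(theta[-1], 1.0 / np.cosh(2.0))
```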
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real time, then system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is greater computational complexity involved in solving these large linear systems within a reasonable time. This project expands on current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power-system-specific methods that can solve these systems in nearly linear run time. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
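As a small illustration of preconditioned iterative solution of a symmetric diagonally dominant system, the sketch below runs SciPy's conjugate gradient with a simple Jacobi preconditioner; the chain method described above instead builds preconditioners from a low-stretch spanning tree, which is not reproduced here, and the test matrix is a generic stand-in.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Symmetric diagonally dominant test system (a 1-D Laplacian-like matrix),
# standing in for a network Laplacian or power-flow system.
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.random.rand(n)

# Jacobi (diagonal) preconditioner, used purely as a stand-in for the
# spanning-tree preconditioners discussed in the abstract.
M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```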
Some recent developments of the immersed interface method for flow simulation
NASA Astrophysics Data System (ADS)
Xu, Sheng
2017-11-01
The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.
Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong
2017-04-01
This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-03-01
As noted in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects described through diffuse-interface models [27]. Numerical simulations of this practical model can be used to evaluate it. The present paper investigates the solution of the tumor-growth model with meshless techniques. Meshless methods based on the collocation technique are applied, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices stem from the natural behavior of meshless approaches. Moreover, a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using arbitrary distributions of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor-growth model. To handle the time variable, two procedures are used. One of them is a semi-implicit finite difference method based on the Crank-Nicolson scheme and the other is based on explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step. The second is efficient but only conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques for solving the two- and three-dimensional tumor-growth equations.
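The sketch below shows only the multiquadric RBF approximation step that underlies such collocation methods, fitted to scattered 2-D data from a smooth test function; the tumor-growth PDE system itself is not solved here, and the shape parameter and test function are arbitrary assumptions.

```python
import numpy as np

def mq_rbf_fit(centers, values, c):
    """Fit multiquadric RBF weights: s(x) = sum_j w_j * sqrt(||x - x_j||^2 + c^2)."""
    r2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    A = np.sqrt(r2 + c ** 2)
    return np.linalg.solve(A, values)

def mq_rbf_evaluate(x, centers, w, c):
    r2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.sqrt(r2 + c ** 2) @ w

# Scattered 2-D data from a smooth test function, standing in for a field
# of the tumor-growth system.
rng = np.random.default_rng(0)
centers = rng.random((200, 2))
f = lambda p: np.sin(np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])
w = mq_rbf_fit(centers, f(centers), c=0.1)

x_test = rng.random((50, 2))
print(np.max(np.abs(mq_rbf_evaluate(x_test, centers, w, c=0.1) - f(x_test))))
```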
Cost effective campaigning in social networks
NASA Astrophysics Data System (ADS)
Kotnis, Bhushan; Kuri, Joy
2016-05-01
Campaigners are increasingly using online social networking platforms for promoting products, ideas and information. A popular method of promoting a product or even an idea is incentivizing individuals to evangelize the idea vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives for the entire population, and hence incentives need to be allocated judiciously to appropriate individuals for ensuring the highest possible outreach size. We aim to do the same by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals that are provided incentives for minimizing the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized for maximizing the outreach size for a given cost budget. The optimization problem turns out to be nontrivial; it involves quantities that need to be computed by numerically solving a fixed point equation. Our primary contribution is that, for a fairly general cost structure, we show that the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
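A minimal sketch of the idea, under assumptions not taken from the report: a logistic map carries the whole real line onto a bounded interval, so a bound-constrained problem can be handed to an unconstrained optimizer (BFGS is used here in place of the Davidon-Fletcher-Powell algorithm mentioned above, and the objective is hypothetical).

```python
import numpy as np
from scipy.optimize import minimize

# Map the whole real line onto the open interval (lo, hi) so that a bound-
# constrained problem becomes an unconstrained one in the new variable u.
def to_bounded(u, lo, hi):
    return lo + (hi - lo) / (1.0 + np.exp(-u))       # logistic map R -> (lo, hi)

# Hypothetical objective: minimize (x - 3)^2 subject to 0 <= x <= 2.
# The constrained minimum sits at the boundary x = 2.
def objective_in_u(u):
    x = to_bounded(u, 0.0, 2.0)
    return (x - 3.0) ** 2

res = minimize(objective_in_u, x0=0.0, method='BFGS')   # unconstrained solver
print(to_bounded(res.x[0], 0.0, 2.0))                    # approaches 2.0
```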
How To Solve Problems. For Success in Freshman Physics, Engineering, and Beyond. Third Edition.
ERIC Educational Resources Information Center
Scarl, Donald
To expertly solve engineering and science problems one needs to know science and engineering as well as have a tool kit of problem-solving methods. This book is about problem-solving methods: it presents the methods professional problem solvers use, explains why these methods have evolved, and shows how a student can make these methods his/her…
Characterising the Cognitive Processes in Mathematical Investigation
ERIC Educational Resources Information Center
Yeo, Joseph B. W.; Yeap, Ban Har
2010-01-01
Many educators believe that mathematical investigation involves both problem posing and problem solving, but some teachers have taught their students to investigate during problem solving. The confusion about the relationship between investigation and problem solving may affect how teachers teach their students and how researchers conduct their…
Individual differences in solving arithmetic word problems
2013-01-01
Background With the present functional magnetic resonance imaging (fMRI) study at 3 T, we investigated the neural correlates of visualization and verbalization during arithmetic word problem solving. In the domain of arithmetic, visualization might mean to visualize numbers and (intermediate) results while calculating, and verbalization might mean that numbers and (intermediate) results are verbally repeated during calculation. If the brain areas involved in number processing are domain-specific as assumed, that is, that the left angular gyrus (AG) shows an affinity to the verbal domain, and that the left and right intraparietal sulcus (IPS) shows an affinity to the visual domain, the activation of these areas should show a dependency on an individual's cognitive style. Methods 36 healthy young adults participated in the fMRI study. The participants' habitual use of visualization and verbalization during solving arithmetic word problems was assessed with a short self-report assessment. During the fMRI measurement, arithmetic word problems that had to be solved by the participants were presented in an event-related design. Results We found that visualizers showed greater brain activation in brain areas involved in visual processing, and that verbalizers showed greater brain activation within the left angular gyrus. Conclusions Our results indicate that cognitive styles or preferences play an important role in understanding brain activation. Our results confirm that strong visualizers use mental imagery more strongly than weak visualizers during calculation. Moreover, our results suggest that the left AG shows a specific affinity to the verbal domain and subserves number processing in a modality-specific way. PMID:23883107
Model-Based Prognostics of Hybrid Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal
2015-01-01
Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.
Routine human-competitive machine intelligence by means of genetic programming
NASA Astrophysics Data System (ADS)
Koza, John R.; Streeter, Matthew J.; Keane, Martin
2004-01-01
Genetic programming is a systematic method for getting computers to automatically solve a problem. Genetic programming starts from a high-level statement of what needs to be done and automatically creates a computer program to solve the problem. The paper demonstrates that genetic programming (1) now routinely delivers high-return human-competitive machine intelligence; (2) is an automated invention machine; (3) can automatically create a general solution to a problem in the form of a parameterized topology; and (4) has delivered a progression of qualitatively more substantial results in synchrony with five approximately order-of-magnitude increases in the expenditure of computer time. Recent results involving the automatic synthesis of the topology and sizing of analog electrical circuits and controllers demonstrate these points.
Current Approaches in Implementing Citizen Science in the Classroom
Shah, Harsh R.; Martinez, Luis R.
2016-01-01
Citizen science involves a partnership between inexperienced volunteers and trained scientists engaging in research. In addition to its obvious benefit of accelerating data collection, citizen science has an unexplored role in the classroom, from K–12 schools to higher education. With recent studies showing a weakening in scientific competency of American students, incorporating citizen science initiatives in the curriculum provides a means to address deficiencies in a fragmented educational system. The integration of traditional and innovative pedagogical methods to reform our educational system is therefore imperative in order to provide practical experiences in scientific inquiry, critical thinking, and problem solving for school-age individuals. Citizen science can be used to emphasize the recognition and use of systematic approaches to solve problems affecting the community. PMID:27047583
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, A.
2003-01-01
As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE), which can be reduced to a 2d partial differential equation (pde), was solved using the finite element method. We have now included exchange by solving the resultant SE, in the static exchange approximation, which is reducible to a pair of coupled pde's. The resultant scattering amplitudes, both singlet and triplet, calculated as a function of energy are in excellent agreement with converged partial wave results.
NASA Astrophysics Data System (ADS)
Le Hardy, D.; Favennec, Y.; Rousseau, B.
2016-08-01
The 2D radiative transfer equation coupled with specular reflection boundary conditions is solved using finite element schemes. Both Discontinuous Galerkin and Streamline-Upwind Petrov-Galerkin variational formulations are fully developed. These two schemes are validated step-by-step for all involved operators (transport, scattering, reflection) using analytical formulations. Numerical comparisons of the two schemes, in terms of convergence rate, reveal that the quadratic SUPG scheme proves efficient for solving such problems. This comparison constitutes the main contribution of the paper. Moreover, the solution process is accelerated using block SOR-type iterative methods, for which the optimal parameter is determined in a very cheap way.
NASA Astrophysics Data System (ADS)
Winicour, Jeffrey
2017-08-01
An algebraic-hyperbolic method for solving the Hamiltonian and momentum constraints has recently been shown to be well posed for general nonlinear perturbations of the initial data for a Schwarzschild black hole. This is a new approach to solving the constraints of Einstein’s equations which does not involve elliptic equations and has potential importance for the construction of binary black hole data. In order to shed light on the underpinnings of this approach, we consider its application to obtain solutions of the constraints for linearized perturbations of Minkowski space. In that case, we find the surprising result that there are no suitable Cauchy hypersurfaces in Minkowski space for which the linearized algebraic-hyperbolic constraint problem is well posed.
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
ERIC Educational Resources Information Center
Holmes, Stephen D.; He, Qingping; Meadows, Michelle
2017-01-01
The relationship between the characteristics of 33 mathematical problem-solving questions answered by 16-year-old students in England and the quality of problem-solving elicited was investigated in two studies. The first study used comparative judgement (CJ) to estimate the quality of the problem-solving elicited by each question, involving 33…
Rank-k modification methods for recursive least squares problems
NASA Astrophysics Data System (ADS)
Olszanskyj, Serge; Lebak, James; Bojanczyk, Adam
1994-09-01
In least squares problems, it is often desired to solve the same problem repeatedly but with several rows of the data either added, deleted, or both. Methods for quickly solving a problem after adding or deleting one row of data at a time are known. In this paper we introduce fundamental rank-k updating and downdating methods and show how extensions of rank-1 downdating methods based on LINPACK, Corrected Semi-Normal Equations (CSNE), and Gram-Schmidt factorizations, as well as new rank-k downdating methods, can all be derived from these fundamental results. We then analyze the cost of each new algorithm and make comparisons to k applications of the corresponding rank-1 algorithms. We provide experimental results comparing the numerical accuracy of the various algorithms, paying particular attention to the downdating methods, due to their potential numerical difficulties for ill-conditioned problems. We then discuss the computation involved for each downdating method, measured in terms of operation counts and BLAS calls. Finally, we provide serial execution timing results for these algorithms, noting preferable points for improvement and optimization. From our experiments we conclude that the Gram-Schmidt methods perform best in terms of numerical accuracy, but may be too costly for serial execution for large problems.
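For illustration, the sketch below performs the simpler of the two operations discussed above, a rank-1 update of the triangular QR factor after a row is appended, using Givens rotations; downdating, the numerically delicate case, is not shown, and the check simply recomputes the factorization from scratch.

```python
import numpy as np

def qr_add_row(R, new_row):
    """Update the triangular factor R of A = QR after appending a row to A,
    by applying Givens rotations to zero out the new row against R."""
    n = R.shape[1]
    M = np.vstack([R, new_row.astype(float)])
    for j in range(n):
        a, b = M[j, j], M[n, j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        rows = M[[j, n], j:]                     # copies of the two affected rows
        M[j, j:] = c * rows[0] + s * rows[1]
        M[n, j:] = -s * rows[0] + c * rows[1]    # zeroes M[n, j]
    return M[:n, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
row = rng.standard_normal(4)
R_updated = qr_add_row(np.linalg.qr(A, mode='r'), row)
R_direct = np.linalg.qr(np.vstack([A, row]), mode='r')
print(np.allclose(np.abs(R_updated), np.abs(R_direct)))   # equal up to row signs
```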
Mathematical Problem Solving. Issues in Research.
ERIC Educational Resources Information Center
Lester, Frank K., Jr., Ed.; Garofalo, Joe, Ed.
This set of papers was originally developed for a conference on Issues and Directions in Mathematics Problem Solving Research held at Indiana University in May 1981. The purpose is to contribute to the clear formulation of the key issues in mathematical problem-solving research by presenting the ideas of actively involved researchers. An…
Solving Problems with Charts & Tables. Pipefitter.
ERIC Educational Resources Information Center
Greater Baton Rouge Chamber of Commerce, LA.
Developed as part of the ABCs of Construction National Workplace Literacy Project, this instructional module is designed to help individuals employed as pipefitters learn to solve problems with charts and tables. Outlined in the first section is a five-step procedure for solving problems involving tables and/or charts: identifying the question to…
The Role of Problem Solving in Complex Intraverbal Repertoires
ERIC Educational Resources Information Center
Sautter, Rachael A.; LeBlanc, Linda A.; Jay, Allison A.; Goldsmith, Tina R.; Carr, James E.
2011-01-01
We examined whether typically developing preschoolers could learn to use a problem-solving strategy that involved self-prompting with intraverbal chains to provide multiple responses to intraverbal categorization questions. Teaching the children to use the problem-solving strategy did not produce significant increases in target responses until…
Creativity and Insight in Problem Solving
ERIC Educational Resources Information Center
Golnabi, Laura
2016-01-01
This paper analyzes the thought process involved in problem solving and its categorization as creative thinking as defined by psychologist R. Weisberg (2006). Additionally, the notion of insight, sometimes present in unconscious creative thinking and often leading to creative ideas, is discussed in the context of geometry problem solving. In…
Planning meals: Problem-solving on a real data-base
ERIC Educational Resources Information Center
Byrne, Richard
1977-01-01
Planning the menu for a dinner party, which involves problem-solving with a large body of knowledge, is used to study the daily operation of human memory. Verbal protocol analysis, a technique devised to investigate formal problem-solving, is examined theoretically and adapted for analysis of this task. (Author/MV)
A multilevel correction adaptive finite element method for Kohn-Sham equation
NASA Astrophysics Data System (ADS)
Hu, Guanghui; Xie, Hehu; Xu, Fei
2018-02-01
In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system directly is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, a significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.
Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto
2011-07-15
Time-series measurements of metabolite concentration have become increasingly common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding a global optimal solution to the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which a subset of model parameters that are associated with measured metabolites are estimated using the minimization of slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in getting accurate parameter estimates, even when some information is missing.
Two-dimensional computer simulation of EMVJ and grating solar cells under AMO illumination
NASA Technical Reports Server (NTRS)
Gray, J. L.; Schwartz, R. J.
1984-01-01
A computer program, SCAP2D (Solar Cell Analysis Program in 2-Dimensions), is used to evaluate the Etched Multiple Vertical Junction (EMVJ) and grating solar cells. The aim is to demonstrate how SCAP2D can be used to evaluate cell designs. The cell designs studied are by no means optimal designs. The SCAP2D program solves the three coupled, nonlinear partial differential equations, Poisson's equation and the hole and electron continuity equations, simultaneously in two dimensions, using finite differences to discretize the equations and Newton's method to linearize them. The variables solved for are the electrostatic potential and the hole and electron concentrations. Each linear system of equations is solved directly by Gaussian elimination. Convergence of the Newton iteration is assumed when the largest correction to the electrostatic potential or hole or electron quasi-potential is less than some predetermined error. A typical problem involves 2000 nodes with a Jacobian matrix of order 6000 and a bandwidth of 243.
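A minimal sketch of the Newton iteration with the convergence test described above (largest correction below a preset error), applied to a small stand-in nonlinear system rather than the discretized device equations; the dense solve here plays the role of the banded Gaussian elimination.

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-10, max_iter=50):
    """Undamped Newton iteration: linearize F(x) = 0 with the Jacobian J,
    solve the linear system by Gaussian elimination (np.linalg.solve), and
    declare convergence when the largest correction falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.max(np.abs(dx)) < tol:
            break
    return x

# Small coupled nonlinear system standing in for the discretized device equations.
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(newton_solve(F, J, [2.0, 0.5]))   # converges to (1, 1)
```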
Demi, L; van Dongen, K W A; Verweij, M D
2011-03-01
Experimental data reveals that attenuation is an important phenomenon in medical ultrasound. Attenuation is particularly important for medical applications based on nonlinear acoustics, since higher harmonics experience higher attenuation than the fundamental. Here, a method is presented to accurately solve the wave equation for nonlinear acoustic media with spatially inhomogeneous attenuation. Losses are modeled by a spatially dependent compliance relaxation function, which is included in the Westervelt equation. Introduction of absorption in the form of a causal relaxation function automatically results in the appearance of dispersion. The appearance of inhomogeneities implies the presence of a spatially inhomogeneous contrast source in the presented full-wave method leading to inclusion of forward and backward scattering. The contrast source problem is solved iteratively using a Neumann scheme, similar to the iterative nonlinear contrast source (INCS) method. The presented method is directionally independent and capable of dealing with weakly to moderately nonlinear, large scale, three-dimensional wave fields occurring in diagnostic ultrasound. Convergence of the method has been investigated and results for homogeneous, lossy, linear media show full agreement with the exact results. Moreover, the performance of the method is demonstrated through simulations involving steered and unsteered beams in nonlinear media with spatially homogeneous and inhomogeneous attenuation. © 2011 Acoustical Society of America
A Crisis in Space--A Futuristic Simulation Using Creative Problem Solving.
ERIC Educational Resources Information Center
Clode, Linda
1992-01-01
An enrichment program developed for sixth-grade gifted students combined creative problem solving with future studies in a way that would simulate real life crisis problem solving. The program involved forecasting problems of the future requiring evacuation of Earth, assuming roles on a spaceship, and simulating crises as the spaceship traveled to…
Autobiographical Memory and Social Problem-Solving in Asperger Syndrome
ERIC Educational Resources Information Center
Goddard, Lorna; Howlin, Patricia; Dritschel, Barbara; Patel, Trishna
2007-01-01
Difficulties in social interaction are a central feature of Asperger syndrome. Effective social interaction involves the ability to solve interpersonal problems as and when they occur. Here we examined social problem-solving in a group of adults with Asperger syndrome and control group matched for age, gender and IQ. We also assessed…
Using Everyday Materials To Promote Problem Solving in Toddlers.
ERIC Educational Resources Information Center
Segatti, Laura; Brown-DuPaul, Judy; Keyes, Tracy L.
2003-01-01
Outlines benefits of and skills involved in problem solving. Details how an environment rich in materials that foster cause-and-effect or trial-and-error explorations promotes cognitive development among toddlers. Offers examples of problem-solving experiences and lists materials for use in curriculum planning. Describes the teacher's role as one of…
USDA-ARS?s Scientific Manuscript database
In this research editorial we make four points relative to solving water resource issues: (1) they are complex problems and difficult to solve, (2) some progress has been made on solving these issues, (3) external non-stationary drivers such as land use changes, climate change and variability, and s...
The Effects of Labels on Learning Subgoals for Solving Problems.
ERIC Educational Resources Information Center
Catrambone, Richard
This study, involving 65 undergraduates at the Georgia Institute of Technology (Atlanta), explores a scheme for representing problem-solving knowledge and predicting transfer as a function of problem-solving subgoals acquired from examples. A subgoal is an unknown entity (numerical or conceptual) that needs to be found in order to achieve a higher…
Is Word-Problem Solving a Form of Text Comprehension?
ERIC Educational Resources Information Center
Fuchs, Lynn S.; Fuchs, Douglas; Compton, Donald L.; Hamlett, Carol L.; Wang, Amber Y.
2015-01-01
This study's hypotheses were that (a) word-problem (WP) solving is a form of text comprehension that involves language comprehension processes, working memory, and reasoning, but (b) WP solving differs from other forms of text comprehension by requiring WP-specific language comprehension as well as general language comprehension. At the start of…
ERIC Educational Resources Information Center
Azad, Gazi F.; Kim, Mina; Marcus, Steven C.; Sheridan, Susan M.; Mandell, David S.
2016-01-01
Effective parent-teacher communication involves problem-solving concerns about students. Few studies have examined problem-solving interactions between parents and teachers of children with autism spectrum disorder (ASD), with a particular focus on identifying communication barriers and strategies for improving them. This study examined the…
Emergent Leadership in Children's Cooperative Problem Solving Groups
ERIC Educational Resources Information Center
Sun, Jingjng; Anderson, Richard C.; Perry, Michelle; Lin, Tzu-Jung
2017-01-01
Social skills involved in leadership were examined in a problem-solving activity in which 252 Chinese 5th-graders worked in small groups on a spatial-reasoning puzzle. Results showed that students who engaged in peer-managed small-group discussions of stories prior to problem solving produced significantly better solutions and initiated…
Discrete-continuous variable structural synthesis using dual methods
NASA Technical Reports Server (NTRS)
Schmit, L. A.; Fleury, C.
1980-01-01
Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.
Possibilities of the particle finite element method for fluid-soil-structure interaction problems
NASA Astrophysics Data System (ADS)
Oñate, Eugenio; Celigueta, Miguel Angel; Idelsohn, Sergio R.; Salazar, Fernando; Suárez, Benjamín
2011-09-01
We present some developments in the particle finite element method (PFEM) for the analysis of complex coupled problems in mechanics involving fluid-soil-structure interaction (FSSI). The PFEM uses an updated Lagrangian description to model the motion of nodes (particles) in both the fluid and the solid domains (the latter including soil/rock and structures). A mesh connects the particles (nodes), defining the discretized domain where the governing equations for each of the constituent materials are solved as in the standard FEM. The stabilization for dealing with an incompressible continuum is introduced via the finite calculus method. An incremental iterative scheme for the solution of the non-linear transient coupled FSSI problem is described. The procedure to model frictional contact conditions and material erosion at fluid-solid and solid-solid interfaces is described. We present several examples of the application of the PFEM to solve FSSI problems such as the motion of rocks by water streams, the erosion of a river bed adjacent to a bridge foundation, the stability of breakwaters and constructions under sea waves, and the study of landslides.
NASA Astrophysics Data System (ADS)
Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.
2017-11-01
We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
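The sketch below illustrates the general strategy on a toy Fredholm problem: build a database by running the stable forward problem on random smooth inputs, fit a regression from outputs back to inputs, and apply it to unseen data. Plain ridge regression stands in for the paper's regression model, the Gaussian kernel and noise level are arbitrary assumptions, and the constraint-projection step is omitted.

```python
import numpy as np

# Forward problem: discretized Fredholm integral of the first kind, g = K f,
# with a smoothing (ill-conditioned) Gaussian kernel.
n = 100
x = np.linspace(0.0, 1.0, n)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2)) * (x[1] - x[0])

# Database of (output, input) pairs from the easy forward problem.
rng = np.random.default_rng(0)
freqs = np.arange(1, 6)
coeffs = rng.standard_normal((5000, freqs.size))
F = coeffs @ np.sin(np.pi * freqs[:, None] * x[None, :])      # smooth inputs f
G = F @ K.T + 0.01 * rng.standard_normal(F.shape)             # noisy outputs g

lam = 1e-3                                                    # ridge regularization
W = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ F)       # regression g -> f

# Inversion of a previously unseen output.
f_true = np.sin(np.pi * 2 * x) + 0.5 * np.sin(np.pi * 4 * x)
g_new = K @ f_true + 0.01 * rng.standard_normal(n)
f_est = g_new @ W
print(np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))
```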
Direct-Solve Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
2009-01-01
A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a time interval of a fraction of a second. A picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that some measure of the wavefront is decomposed into modes of the optical system under test, but it has not been reported whether this decomposition is a postprocessing step or part of the solution process.
Analysis of Student Errors on Division of Fractions
NASA Astrophysics Data System (ADS)
Maelasari, E.; Jupri, A.
2017-02-01
This study aims to describe the types of student errors that typically occur in carrying out division operations on fractions, and to describe the causes of these mistakes. This research used a descriptive qualitative method and involved 22 fifth grade students at one particular elementary school in Kuningan, Indonesia. The results of this study showed that students' erroneous answers were caused by students carrying procedures over from multiplication to division of fractions, by confusion when changing mixed fractions into improper fractions, and by carelessness in calculation. From the students' written work on the fraction problems, we found that the teaching method influenced student responses, and that some student responses were beyond the researchers' predictions. We conclude that the teaching method is not the only important thing that must be prepared; the teacher should also prepare predictions of students' answers to the problems that will be given in the learning process. This could be a reflection for teachers to improve and to achieve the expected learning goals.
CFD-ACE+: a CAD system for simulation and modeling of MEMS
NASA Astrophysics Data System (ADS)
Stout, Phillip J.; Yang, H. Q.; Dionne, Paul; Leonard, Andy; Tan, Zhiqiang; Przekwas, Andrzej J.; Krishnan, Anantha
1999-03-01
Computer aided design (CAD) systems are a key to designing and manufacturing MEMS with higher performance/reliability, reduced costs, shorter prototyping cycles and improved time-to-market. One such system is CFD-ACE+MEMS, a modeling and simulation environment for MEMS which includes grid generation, data visualization, graphical problem setup, and coupled fluidic, thermal, mechanical, electrostatic, and magnetic physical models. The fluid model is a 3D multi-block, structured/unstructured/hybrid, pressure-based, implicit Navier-Stokes code with capabilities for multi-component diffusion, multi-species transport, multi-step gas phase chemical reactions, surface reactions, and multi-media conjugate heat transfer. The thermal model solves the total enthalpy form of the energy equation. The energy equation includes unsteady, convective, conductive, species energy, viscous dissipation, work, and radiation terms. The electrostatic model solves Poisson's equation. Both the finite volume method and the boundary element method (BEM) are available for solving Poisson's equation. The BEM method is useful for unbounded problems. The magnetic model solves for the vector magnetic potential from Maxwell's equations, including eddy currents but neglecting displacement currents. The mechanical model is a finite element stress/deformation solver which has been coupled to the flow, heat, electrostatic, and magnetic calculations to study flow-, thermally, electrostatically, and magnetically induced deformations of structures. The mechanical or structural model can accommodate elastic and plastic materials, can handle large non-linear displacements, and can model isotropic and anisotropic materials. The thermal-mechanical coupling involves the solution of the steady state Navier equation with thermoelastic deformation. The electrostatic-mechanical coupling is a calculation of the pressure force due to surface charge on the mechanical structure. Results of CFD-ACE+MEMS modeling of MEMS such as cantilever beams, accelerometers, and comb drives are discussed.
NASA Technical Reports Server (NTRS)
Pepin, T. J.
1977-01-01
The inversion methods that have been used to determine the vertical profile of the extinction coefficient due to stratospheric aerosols from data measured during the ASTP/SAM solar occultation experiment are reported. The inversion methods include the onion skin peel technique and methods of solving the Fredholm equation for the problem subject to smoothing constraints. The latter of these approaches involves a double inversion scheme. Comparisons are made between the inverted results from the SAM experiment and near-simultaneous measurements made by lidar and a balloon-borne dustsonde. The results are used to demonstrate the assumptions required to perform the inversions for aerosols.
NASA Technical Reports Server (NTRS)
Misiakos, K.; Lindholm, F. A.
1986-01-01
Several parameters of certain three-dimensional semiconductor devices including diodes, transistors, and solar cells can be determined without solving the actual boundary-value problem. The recombination current, transit time, and open-circuit voltage of planar diodes are emphasized here. The resulting analytical expressions enable determination of the surface recombination velocity of shallow planar diodes. The method involves introducing corresponding one-dimensional models having the same values of these parameters.
Pseudopotential Method for Higher Partial Wave Scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idziaszek, Zbigniew; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, 02-668 Warsaw; Calarco, Tommaso
2006-01-13
We present a zero-range pseudopotential applicable for all partial wave interactions between neutral atoms. For p and d waves, we derive effective pseudopotentials, which are useful for problems involving anisotropic external potentials. Finally, we consider two nontrivial applications of the p-wave pseudopotential: we solve analytically the problem of two interacting spin-polarized fermions confined in a harmonic trap, and we analyze the scattering of p-wave interacting particles in a quasi-two-dimensional system.
NASA Astrophysics Data System (ADS)
Patel, Japan
Short mean free paths are characteristic of charged particles. High energy charged particles often have highly forward peaked scattering cross sections. Transport problems involving such charged particles are also highly optically thick. When problems simultaneously have forward peaked scattering and high optical thickness, their solution, using standard iterative methods, becomes very inefficient. In this dissertation, we explore Fokker-Planck-based acceleration for solving such problems.
... teaches family members about psychosis, coping, communication, and problem-solving skills. Family members who are informed and involved ... to ensure success. Case Management helps clients with problem solving. The case manager may offer solutions to address ...
Diverse knowledges and competing interests: an essay on socio-technical problem-solving.
di Norcia, Vincent
2002-01-01
Solving complex socio-technical problems, this paper claims, involves diverse knowledges (cognitive diversity), competing interests (social diversity), and pragmatism. To explain this view, this paper first explores two different cases: Canadian pulp and paper mill pollution and siting nuclear reactors in seismically sensitive areas of California. Solving such socio-technically complex problems involves cognitive diversity as well as social diversity and pragmatism. Cognitive diversity requires one to not only recognize relevant knowledges but also to assess their validity. Finally, it is suggested, integrating the resultant set of diverse relevant and valid knowledges determines the parameters of the solution space for the problem.
An implicit boundary integral method for computing electric potential of macromolecules in solvent
NASA Astrophysics Data System (ADS)
Zhong, Yimin; Ren, Kui; Tsai, Richard
2018-04-01
A numerical method using implicit surface representations is proposed to solve the linearized Poisson-Boltzmann equation that arises in mathematical models for the electrostatics of molecules in solvent. The proposed method uses an implicit boundary integral formulation to derive a linear system defined on Cartesian nodes in a narrowband surrounding the closed surface that separates the molecule and the solvent. The needed implicit surface is constructed from the given atomic description of the molecules, by a sequence of standard level set algorithms. A fast multipole method is applied to accelerate the solution of the linear system. A few numerical studies involving some standard test cases are presented and compared to other existing results.
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
NASA Technical Reports Server (NTRS)
Wright, William B.
1988-01-01
Transient, numerical simulations of the deicing of composite aircraft components by electrothermal heating have been performed in a 2-D rectangular geometry. Seven numerical schemes and four solution methods were used to find the most efficient numerical procedure for this problem. The phase change in the ice was simulated using the Enthalpy method along with the Method for Assumed States. Numerical solutions illustrating deicer performance for various conditions are presented. Comparisons are made with previous numerical models and with experimental data. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
Structure-preserving spectral element method in attenuating seismic wave modeling
NASA Astrophysics Data System (ADS)
Cai, Wenjun; Zhang, Huai
2016-04-01
This work describes the extension of the conformal symplectic method to solve the damped acoustic wave equation and the elastic wave equations in the framework of the spectral element method. The conformal symplectic method is a variation of conventional symplectic methods for treating non-conservative time evolution problems, with superior behavior in long-time stability and dissipation preservation. To construct the conformal symplectic method, we first reformulate the damped acoustic wave equation and the elastic wave equations in their equivalent conformal multi-symplectic structures, which naturally reveal the intrinsic properties of the original systems, especially the dissipation laws. We thereafter separate each structure into a conservative Hamiltonian system and a purely dissipative ordinary differential equation system. Based on this splitting, we solve the two subsystems separately: the dissipative one is solved cheaply by its analytic solution, while for the conservative system we combine a fourth-order symplectic Nyström method in time with the spectral element method in space to cover realistic geological structures involving complex free-surface topography. The Strang composition method is then adopted to concatenate the two parts of the solution and generate the complete numerical scheme, which is conformal symplectic and can therefore guarantee numerical stability and dissipation preservation over long-time modeling. Additionally, a relatively larger Courant number than that of the traditional Newmark scheme is found in the numerical experiments, in conjunction with a spatial sampling of approximately 5 points per wavelength. A benchmark test for the damped acoustic wave equation validates the effectiveness of the proposed method in precisely capturing the dissipation rate. The classical Lamb problem is used to demonstrate the ability to model Rayleigh-wave propagation. More comprehensive numerical experiments are presented to investigate the long-time simulation, low-dispersion, and energy conservation properties of the conformal symplectic method in both attenuating homogeneous and heterogeneous media.
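The splitting idea can be illustrated on a single damped oscillator u'' + 2γu' + ω²u = 0: the dissipative part has an exact solution, the conservative part is advanced with a symplectic step, and the two are concatenated with Strang composition. This is a minimal sketch of the splitting structure only, not the authors' spectral element scheme.

    import numpy as np

    def strang_damped_oscillator(q0, p0, omega, gamma, dt, nsteps):
        """u'' + 2*gamma*u' + omega**2*u = 0 split as
        D: p' = -2*gamma*p (exact decay) and H: q' = p, p' = -omega**2*q (Verlet),
        composed as D(dt/2) H(dt) D(dt/2)."""
        q, p = q0, p0
        decay = np.exp(-gamma * dt)            # exact half-step of p' = -2*gamma*p
        history = [(0.0, q)]
        for n in range(nsteps):
            p *= decay                         # D(dt/2)
            p -= 0.5 * dt * omega**2 * q       # H(dt): velocity-Verlet kick
            q += dt * p                        #        drift
            p -= 0.5 * dt * omega**2 * q       #        kick
            p *= decay                         # D(dt/2)
            history.append(((n + 1) * dt, q))
        return history

    # underdamped case: the numerical envelope should follow exp(-gamma*t)
    traj = strang_damped_oscillator(1.0, 0.0, omega=2 * np.pi, gamma=0.1, dt=1e-3, nsteps=5000)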
NASA Astrophysics Data System (ADS)
Levkovich-Maslyuk, Fedor
2016-08-01
We give a pedagogical introduction to the Bethe ansatz techniques in integrable QFTs and spin chains. We first discuss and motivate the general framework of asymptotic Bethe ansatz for the spectrum of integrable QFTs in large volume, based on the exact S-matrix. Then we illustrate this method in several concrete theories. The first case we study is the SU(2) chiral Gross-Neveu model. We derive the Bethe equations via algebraic Bethe ansatz, solving in the process the Heisenberg XXX spin chain. We discuss this famous spin chain model in some detail, covering in particular the coordinate Bethe ansatz, some properties of Bethe states, and the classical scaling limit leading to finite-gap equations. Then we proceed to the more involved SU(3) chiral Gross-Neveu model and derive the Bethe equations using nested algebraic Bethe ansatz to solve the arising SU(3) spin chain. Finally we show how a method similar to the Bethe ansatz works in a completely different setting, namely for the 1D oscillator in quantum mechanics.
Case study of a problem-based learning course of physics in a telecommunications engineering degree
NASA Astrophysics Data System (ADS)
Macho-Stadler, Erica; Jesús Elejalde-García, Maria
2013-08-01
Active learning methods can be appropriate in engineering, as their methodology promotes meta-cognition, independent learning and problem-solving skills. Problem-based learning is the educational process by which problem-solving activities and instructor's guidance facilitate learning. Its key characteristic involves posing a 'concrete problem' to initiate the learning process, generally implemented by small groups of students. Many universities have developed and used active methodologies successfully in the teaching-learning process. During the past few years, the University of the Basque Country has promoted the use of active methodologies through several teacher training programmes. In this paper, we describe and analyse the results of the educational experience using the problem-based learning (PBL) method in a physics course for undergraduates enrolled in the technical telecommunications engineering degree programme. From an instructors' perspective, PBL strengths include better student attitude in class and increased instructor-student and student-student interactions. The students emphasised developing teamwork and communication skills in a good learning atmosphere as positive aspects.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1995-01-01
A nontraditional numerical method for solving conservation laws is being developed. The new method is designed from a physicist's perspective, i.e., its development is based more on physics than numerics. Even though it uses only the simplest approximation techniques, a 2D time-marching Euler solver developed recently using the new method is capable of generating nearly perfect solutions for a 2D shock reflection problem used by Helen Yee and others. Moreover, a recent application of this solver to computational aeroacoustics (CAA) problems reveals that: (1) accuracy of its results is comparable to that of a 6th order compact difference scheme even though nominally the current solver is only of 2nd-order accuracy; (2) generally, the non-reflecting boundary condition can be implemented in a simple way without involving characteristic variables; and (3) most importantly, the current solver is capable of handling both continuous and discontinuous flows very well and thus provides a unique numerical tool for solving those flow problems where the interactions between sound waves and shocks are important, such as the noise field around a supersonic over- or under-expanded jet.
Numerical simulation of an electrothermal deicer pad. M.S. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Marano, J. J.
1983-01-01
A numerical simulation is developed to investigate the removal of ice from composite aircraft blades by means of electrothermal deicing. The model considers one-dimensional, unsteady-state heat transfer in the composite blade-ice body. The heat conduction equations are approximated by using the Crank-Nicolson finite difference scheme, and the phase change in the ice layer is handled using the Enthalpy method. To solve the system of equations which result, Gauss-Seidel iteration is used. The simulation computes the temperature profile in the composite blade-ice body, as well as the movement of the ice-water interface, as a function of time. This information can be used to evaluate deicer performance. The simulation can also be used to solve a variety of other heat conduction problems involving composite bodies.
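A minimal sketch of the enthalpy idea follows, using an explicit update for brevity (the thesis uses Crank-Nicolson with Gauss-Seidel): conduction is advanced in terms of volumetric enthalpy, and temperature is recovered through a relation that is flat across the latent-heat plateau, so the melting front needs no explicit tracking. Material values are placeholders, not the blade/ice properties of the report.

    import numpy as np

    # placeholder properties (not the report's values)
    rho, cp, k_cond = 1000.0, 2000.0, 2.0      # density, heat capacity, conductivity
    L_f, T_melt = 3.34e5, 0.0                  # latent heat, melting temperature

    def T_from_H(H):
        """Temperature from volumetric enthalpy (H = 0 at T_melt on the solid side)."""
        T = np.where(H < 0.0, T_melt + H / (rho * cp), T_melt)                 # solid
        T = np.where(H > rho * L_f, T_melt + (H - rho * L_f) / (rho * cp), T)  # liquid
        return T                                                               # mushy zone: T = T_melt

    def step(H, dx, dt, T_left, T_right):
        """One explicit conduction step written for enthalpy: dH/dt = k * d2T/dx2."""
        T = T_from_H(H)
        Tb = np.concatenate(([T_left], T, [T_right]))          # Dirichlet boundary values
        lap = (Tb[2:] - 2.0 * Tb[1:-1] + Tb[:-2]) / dx**2
        return H + dt * k_cond * lap

    nx, dx, dt = 100, 1e-3, 1e-3
    H = np.full(nx, -rho * cp * 10.0)          # start as solid at -10 C
    for n in range(20000):
        H = step(H, dx, dt, T_left=20.0, T_right=-10.0)        # heated from the left face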
NASA Technical Reports Server (NTRS)
Raibstein, A. I.; Kalev, I.; Pipano, A.
1976-01-01
A procedure for the local stiffness modifications of large structures is described. It enables structural modifications without an a priori definition of the changes in the original structure and without loss of efficiency due to multiple loading conditions. The solution procedure, implemented in NASTRAN, involves the decomposed stiffness matrix and the displacement vectors of the original structure. It solves the modified structure exactly, irrespective of the magnitude of the stiffness changes. In order to investigate the efficiency of the present procedure and to test its applicability within a design environment, several real and large structures were solved. The results of the efficiency studies indicate that the break-even point of the procedure varies between 8% and 60% stiffness modifications, depending upon the structure's characteristics and the options employed.
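The abstract does not spell out the algorithm, but one standard way to solve a locally modified structure exactly while reusing the original decomposition is the Sherman-Morrison-Woodbury identity; the sketch below is an illustration of that idea for a modification K + V S Vᵀ confined to a few degrees of freedom, not the NASTRAN implementation.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    def solve_modified(K, V, S, b):
        """Solve (K + V @ S @ V.T) x = b reusing a factorization of K only.

        K : original symmetric positive definite stiffness matrix
        V : n-by-m selection-like matrix, m small (localized modification)
        S : m-by-m stiffness change on the affected degrees of freedom (nonsingular)
        """
        cK = cho_factor(K)                      # plays the role of the stored decomposition
        y = cho_solve(cK, b)                    # original-structure displacements
        Z = cho_solve(cK, V)                    # K^{-1} V, only m extra back-substitutions
        small = np.linalg.inv(S) + V.T @ Z      # m-by-m "capacitance" matrix
        return y - Z @ np.linalg.solve(small, V.T @ y)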
Two-Stage Path Planning Approach for Designing Multiple Spacecraft Reconfiguration Maneuvers
NASA Technical Reports Server (NTRS)
Aoude, Georges S.; How, Jonathan P.; Garcia, Ian M.
2007-01-01
The paper presents a two-stage approach for designing optimal reconfiguration maneuvers for multiple spacecraft. These maneuvers involve well-coordinated and highly-coupled motions of the entire fleet of spacecraft while satisfying an arbitrary number of constraints. This problem is particularly difficult because of the nonlinearity of the attitude dynamics, the non-convexity of some of the constraints, and the coupling between the positions and attitudes of all spacecraft. As a result, the trajectory design must be solved as a single 6N DOF problem instead of N separate 6 DOF problems. The first stage of the solution approach quickly provides a feasible initial solution by solving a simplified version without differential constraints using a bi-directional Rapidly-exploring Random Tree (RRT) planner. A transition algorithm then augments this guess with feasible dynamics that are propagated from the beginning to the end of the trajectory. The resulting output is a feasible initial guess to the complete optimal control problem that is discretized in the second stage using a Gauss pseudospectral method (GPM) and solved using an off-the-shelf nonlinear solver. This paper also places emphasis on the importance of the initialization step in pseudospectral methods in order to decrease their computation times and enable the solution of a more complex class of problems. Several examples are presented and discussed.
Zhang, Fan; Yeh, Gour-Tsyh; Parker, Jack C; Brooks, Scott C; Pace, Molly N; Kim, Young-Jin; Jardine, Philip M; Watson, David B
2007-06-16
This paper presents a reaction-based water quality transport model in subsurface flow systems. Transport of chemical species with a variety of chemical and physical processes is mathematically described by M partial differential equations (PDEs). Decomposition via Gauss-Jordan column reduction of the reaction network transforms M species reactive transport equations into two sets of equations: a set of thermodynamic equilibrium equations representing N(E) equilibrium reactions and a set of reactive transport equations of M-N(E) kinetic-variables involving no equilibrium reactions (a kinetic-variable is a linear combination of species). The elimination of equilibrium reactions from reactive transport equations allows robust and efficient numerical integration. The model solves the PDEs of kinetic-variables rather than individual chemical species, which reduces the number of reactive transport equations and simplifies the reaction terms in the equations. A variety of numerical methods are investigated for solving the coupled transport and reaction equations. Simulation comparisons with exact solutions were performed to verify numerical accuracy and assess the effectiveness of various numerical strategies to deal with different application circumstances. Two validation examples involving simulations of uranium transport in soil columns are presented to evaluate the ability of the model to simulate reactive transport with complex reaction networks involving both kinetic and equilibrium reactions.
ERIC Educational Resources Information Center
Sternberg, Robert J.
1979-01-01
An information-processing framework is presented for understanding intelligence. Two levels of processing are discussed: the steps involved in solving a complex intellectual task, and higher-order processes used to decide how to solve the problem. (MH)
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Pope, Stephen B.
2013-04-01
The Rate-Controlled Constrained-Equilibrium (RCCE) method is a thermodynamics-based dimension reduction method which enables representation of chemistry involving n_s species in terms of fewer n_r constraints. Here we focus on the application of the RCCE method to Lagrangian particle probability density function based computations. In these computations, at every reaction fractional step, given the initial particle composition (represented using RCCE), we need to compute the reaction mapping, i.e. the particle composition at the end of the time step. In this work we study three different implementations of RCCE for computing this reaction mapping, and compare their relative accuracy and efficiency. These implementations include: (1) RCCE/TIFS (Trajectory In Full Space): this involves solving a system of n_s rate-equations for all the species in the full composition space to obtain the reaction mapping. The other two implementations obtain the reaction mapping by solving a reduced system of n_r rate-equations obtained by projecting the n_s rate-equations for species evaluated in the full space onto the constrained subspace. These implementations include (2) RCCE: this is the classical implementation of RCCE which uses a direct projection of the rate-equations for species onto the constrained subspace; and (3) RCCE/RAMP (Reaction-mixing Attracting Manifold Projector): this is a new implementation introduced here which uses an alternative projector obtained using the RAMP approach. We test these three implementations of RCCE for methane/air premixed combustion in the partially-stirred reactor with chemistry represented using the n_s = 31 species GRI-Mech 1.2 mechanism with n_r = 13 to 19 constraints. We show that: (a) the classical RCCE implementation involves an inaccurate projector which yields large errors (over 50%) in the reaction mapping; (b) both RCCE/RAMP and RCCE/TIFS approaches yield significantly lower errors (less than 2%); and (c) overall the RCCE/TIFS approach is the most accurate, efficient (by orders of magnitude) and robust implementation.
Bryant, Lucinda L.; Leary, Janie M.; Vu, Maihan B.; Hill-Briggs, Felicia; Samuel-Hodge, Carmen D.; McMilin, Colleen R.; Keyserling, Thomas C.
2014-01-01
Introduction In low-income and underserved populations, financial hardship and multiple competing roles and responsibilities lead to difficulties in lifestyle change for cardiovascular disease (CVD) prevention. To improve CVD prevention behaviors, we adapted, pilot-tested, and evaluated a problem-solving intervention designed to address barriers to lifestyle change. Methods The sample consisted of 81 participants from 3 underserved populations, including 28 Hispanic or non-Hispanic white women in a western community (site 1), 31 African-American women in a semirural southern community (site 2), and 22 adults in an Appalachian community (site 3). Incorporating focus group findings, we assessed a standardized intervention involving 6-to-8 week group sessions devoted to problem-solving in the fall of 2009. Results Most sessions were attended by 76.5% of participants, demonstrating participant adoption and engagement. The intervention resulted in significant improvement in problem-solving skills (P < .001) and perceived stress (P < .05). Diet, physical activity, and weight remained stable, although 72% of individuals reported maintenance or increase in daily fruit and vegetable intake, and 67% reported maintenance or increase in daily physical activity. Conclusion Study results suggest the intervention was acceptable to rural, underserved populations and effective in training them in problem-solving skills and stress management for CVD risk reduction. PMID:24602586
A Description of the Strategic Knowledge of Experts Solving Transmission Genetics Problems.
ERIC Educational Resources Information Center
Collins, Angelo
Descriptions of the problem-solving strategies of experts solving realistic, computer-generated transmission genetics problems are presented in this paper and implications for instruction are discussed. Seven experts were involved in the study. All of the experts had a doctoral degree and experience in both teaching and doing research in genetics.…
ERIC Educational Resources Information Center
Epstein, Baila
2016-01-01
Background: Clinical problem-solving is fundamental to the role of the speech-language pathologist in both the diagnostic and treatment processes. The problem-solving often involves collaboration with clients and their families, supervisors, and other professionals. Considering the importance of cooperative problem-solving in the profession,…
The Case for Problem Solving in Second Language Learning. CLCS Occasional Paper No. 33.
ERIC Educational Resources Information Center
Bourke, James Mannes
A study undertaken in Ireland investigated the effectiveness of a second language teaching strategy that focused on grammatical problem-solving. In this approach, the problems are located within the target language system, and the problem-solving involves induction of grammatical rules and use of those rules. Learners are confronted with instances…
2012-01-01
Background While participatory social network analysis can help health service partnerships to solve problems, little is known about its acceptability in cross-cultural settings. We conducted two case studies of chronic illness service partnerships in 2007 and 2008 to determine whether participatory research incorporating social network analysis is acceptable for problem-solving in Australian Aboriginal health service delivery. Methods Local research groups comprising 13–19 partnership staff, policy officers and community members were established at each of two sites to guide the research and to reflect and act on the findings. Network and work practice surveys were conducted with 42 staff, and the results were fed back to the research groups. At the end of the project, 19 informants at the two sites were interviewed, and the researchers conducted critical reflection. The effectiveness and acceptability of the participatory social network method were determined quantitatively and qualitatively. Results Participants in both local research groups considered that the network survey had accurately described the links between workers related to the exchange of clinical and cultural information, team care relationships, involvement in service management and planning and involvement in policy development. This revealed the function of the teams and the roles of workers in each partnership. Aboriginal workers had a high number of direct links in the exchange of cultural information, illustrating their role as the cultural resource, whereas they had fewer direct links with other network members on clinical information exchange and team care. The problem of their current and future roles was discussed inside and outside the local research groups. According to the interview informants the participatory network analysis had opened the way for problem-solving by “putting issues on the table”. While there were confronting and ethically challenging aspects, these informants considered that with flexibility of data collection to account for the preferences of Aboriginal members, then the method was appropriate in cross-cultural contexts for the difficult discussions that are needed to improve partnerships. Conclusion Critical reflection showed that the preconditions for difficult discussions are, first, that partners have the capacity to engage in such discussions, second, that partners assess whether the effort required for these discussions is balanced by the benefits they gain from the partnership, and, third, that “boundary spanning” staff can facilitate commitment to partnership goals. PMID:22682504
Implicit time-integration method for simultaneous solution of a coupled non-linear system
NASA Astrophysics Data System (ADS)
Watson, Justin Kyle
Historically, large physical problems have been divided into smaller problems based on the physics involved. This is no different in reactor safety analysis. The problem of analyzing a nuclear reactor for design basis accidents is performed by a handful of computer codes, each solving a portion of the problem. The reactor thermal hydraulic response to an event is determined using a system code like TRAC RELAP Advanced Computational Engine (TRACE). The core power response to the same accident scenario is determined using a core physics code like Purdue Advanced Core Simulator (PARCS). Containment response to the reactor depressurization in a Loss Of Coolant Accident (LOCA) type event is calculated by a separate code. Sub-channel analysis is performed with yet another computer code. This is just a sample of the computer codes used to solve the overall problems of nuclear reactor design basis accidents. Traditionally each of these codes operates independently from the others, using only the global results from one calculation as boundary conditions to another. Industry's drive to uprate reactor power has motivated analysts to move from a conservative approach to design basis accidents towards a best estimate method. To achieve a best estimate calculation, efforts have been aimed at coupling the individual physics models to improve the accuracy of the analysis and reduce margins. The current coupling techniques are sequential in nature. During a calculation time-step, data are passed between the two codes. The individual codes solve their portion of the calculation and converge to a solution before the calculation is allowed to proceed to the next time-step. This thesis presents a fully implicit method for simultaneously solving the neutron balance equations, heat conduction equations, and the constitutive fluid dynamics equations. It discusses the problems involved in coupling different physics phenomena within multi-physics codes and presents a solution to these problems. The thesis also outlines the basic concepts behind the nodal balance equations, heat transfer equations, and the thermal hydraulic equations, which are coupled to form a fully implicit nonlinear system of equations. The coupling of separate physics models to solve a larger problem and improve the accuracy and efficiency of a calculation is not a new idea; however, implementing it in an implicit manner and solving the system simultaneously is. The application to reactor safety codes is also new and has not been done with thermal hydraulics and neutronics codes on realistic applications in the past. The coupling technique described in this thesis is applicable to other similar coupled thermal hydraulic and core physics reactor safety codes. The technique is demonstrated using coupled input decks to show that the system is solved correctly and then verified using two derivative test problems based on international benchmarks: the OECD/NRC Three Mile Island (TMI) Main Steam Line Break (MSLB) problem (representative of pressurized water reactor analysis) and the OECD/NRC Peach Bottom (PB) Turbine Trip (TT) benchmark (representative of boiling water reactor analysis).
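The contrast between sequential coupling and a fully implicit simultaneous solve can be seen on a toy pair of backward-Euler equations with feedback between a "power" and a "temperature" field; the sketch below forms one combined residual and applies Newton's method to the whole system. Coefficients and equations are illustrative only, not the TRACE/PARCS models.

    import numpy as np

    def residual(u_new, u_old, dt):
        """Backward-Euler residual of a toy 'neutronics + heat' pair with feedback.
        u = [P, T]; coefficients are illustrative placeholders."""
        P, T = u_new
        P0, T0 = u_old
        a, c, d, T_ref = 0.05, 1.0, 0.5, 1.0
        return np.array([
            P - P0 - dt * a * (T_ref - T) * P,   # power with temperature feedback
            T - T0 - dt * (c * P - d * T),       # lumped heat balance
        ])

    def newton_step_all(u_old, dt, iters=10):
        """Advance one time-step by solving both equations simultaneously (fully implicit coupling)."""
        u = u_old.copy()
        for _ in range(iters):
            r = residual(u, u_old, dt)
            J = np.empty((2, 2))
            eps = 1e-7
            for j in range(2):                    # simple finite-difference Jacobian of the coupled system
                du = u.copy(); du[j] += eps
                J[:, j] = (residual(du, u_old, dt) - r) / eps
            u -= np.linalg.solve(J, r)
        return u

    u = np.array([1.0, 1.0])
    for n in range(100):
        u = newton_step_all(u, dt=0.1)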
Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations
NASA Astrophysics Data System (ADS)
Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.
2018-03-01
Problems arising in engineering, economics, modelling, industry, computing, and science are mostly nonlinear in nature, and numerical solution of such systems is widely applied in those areas. Over the years there has been significant theoretical work on methods for solving such systems; despite these efforts, the methods developed still have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ R^n, a derivative-free method based on the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. Performance is reported in terms of number of iterations and CPU time, with different initial starting points used to solve 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that significantly reduces the CPU time and number of iterations compared with its counterparts. A comparison between the proposed method and the classical DFP update shows that the proposed method is the top performer, outperforming the existing method in almost all cases. In terms of number of iterations, out of the 40 problems the proposed method solved 38 successfully (95%), while classical DFP solved 2 (5%). In terms of CPU time, the proposed method solved 29 of the 40 problems (72.5%) successfully, whereas classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of number of iterations, and accurate in terms of CPU time, and thus achieves the objective.
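The abstract compresses the details of the update; as a loose sketch of the idea of replacing the DFP inverse Hessian with a scalar multiple of the identity, the iteration below takes steps x_{k+1} = x_k - α θ_k F(x_k), with θ_k from the latest secant pair and α from a derivative-free backtracking search on the squared-norm merit function. This illustrates the general family only and is not the authors' exact scheme.

    import numpy as np

    def df_spectral_solve(F, x0, tol=1e-8, max_iter=500):
        """Derivative-free iteration for F(x) = 0 using a scalar 'inverse Hessian' theta_k."""
        x = np.asarray(x0, dtype=float)
        Fx = F(x)
        theta = 1.0
        for _ in range(max_iter):
            if np.linalg.norm(Fx) < tol:
                break
            d = -theta * Fx                               # step direction, Q^{-1} ~ theta * I
            alpha, merit = 1.0, float(Fx @ Fx)            # squared-norm merit function
            while True:                                   # derivative-free backtracking line search
                F_trial = F(x + alpha * d)
                if float(F_trial @ F_trial) <= (1.0 - 1e-4 * alpha) * merit or alpha < 1e-10:
                    break
                alpha *= 0.5
            s = alpha * d
            y = F_trial - Fx
            sy = float(s @ y)
            theta = abs(float(s @ s) / sy) if sy != 0.0 else 1.0   # secant-based scalar update
            x, Fx = x + s, F_trial
        return x

    # small symmetric nonlinear system; real roots at (0, 0) and (+/-1, +/-1)
    root = df_spectral_solve(lambda z: np.array([z[0]**3 - z[1], z[1]**3 - z[0]]), [0.5, 0.4])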
Crossword expertise as recognitional decision making: an artificial intelligence approach
Thanasuan, Kejkaew; Mueller, Shane T.
2014-01-01
The skills required to solve crossword puzzles involve two important aspects of lexical memory: semantic information in the form of clues that indicate the meaning of the answer, and orthographic patterns that constrain the possibilities but may also provide hints to possible answers. Mueller and Thanasuan (2013) proposed a model accounting for the simple memory access processes involved in solving individual crossword clues, but expert solvers also bring additional skills and strategies to bear on solving complete puzzles. In this paper, we developed a computational model of crossword solving that incorporates strategic and other factors, and is capable of solving crossword puzzles in a human-like fashion, in order to understand the complete set of skills needed to solve a crossword puzzle. We compare our models to human expert and novice solvers to investigate how different strategic and structural factors in crossword play impact overall performance. Results reveal that expert crossword solving relies heavily on fluent semantic memory search and retrieval, which appear to allow experts to take better advantage of orthographic-route solutions, and experts employ strategies that enable them to use orthographic information. Furthermore, other processes central to traditional AI models (error correction and backtracking) appear to be of less importance for human players. PMID:25309483
NASA Astrophysics Data System (ADS)
Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick
2017-12-01
In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
NASA Astrophysics Data System (ADS)
Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro
2017-04-01
The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. These data are essential to build Earth's evolution models and to reproduce many geophysical observables (e.g. elevation, gravity anomalies, travel time data, heat flow, etc.) together with understanding the relationship between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, the solution of many direct models is required. The main objective of this work is to contribute to the existing inversion techniques in terms of improving the estimation of the elevation (topography) by including a dynamic component arising from sub-lithospheric mantle flow. In order to do so, we implement an efficient Reduced Order Method (ROM) built upon classic Finite Elements. ROM significantly reduces the computational cost of solving a family of problems, for example all the direct models that are required in the solution of the inverse problem. The strategy of the method consists of creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. In order to check the Reduced Basis approach, we implemented the method in a 3D domain reproducing a portion of Earth that covers up to 400 km depth. Within the domain the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The Reduced Basis method is shown to be an extremely efficient solver for the Stokes equation in this context.
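The reduced-basis strategy can be sketched on a generic parameterized linear problem A(μ)u = b: snapshot solutions for a few training parameters are compressed into a basis, and each new parameter value is handled by Galerkin projection onto that basis. The snippet is a generic illustration with a toy diffusion operator, not the Stokes solver of the study.

    import numpy as np

    def build_basis(assemble, rhs, train_params, tol=1e-8):
        """Collect snapshot solutions and compress them with an SVD (POD)."""
        snaps = np.column_stack([np.linalg.solve(assemble(mu), rhs) for mu in train_params])
        U, s, _ = np.linalg.svd(snaps, full_matrices=False)
        return U[:, s / s[0] > tol]               # reduced basis V

    def reduced_solve(assemble, rhs, V, mu):
        """Galerkin projection: solve the small system V^T A(mu) V a = V^T b."""
        A = assemble(mu)
        a = np.linalg.solve(V.T @ A @ V, V.T @ rhs)
        return V @ a                              # lift back to the full space

    # toy parameterized operator: 1D diffusion stiffness scaled by mu, plus a mass term
    n = 200
    K = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    assemble = lambda mu: mu * K + np.eye(n)
    b = np.ones(n)

    V = build_basis(assemble, b, train_params=np.linspace(0.1, 10.0, 8))
    u_rb = reduced_solve(assemble, b, V, mu=3.7)
    u_full = np.linalg.solve(assemble(3.7), b)
    rel_err = np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full)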
Direct numerical simulations of fluid flow, heat transfer and phase changes
NASA Technical Reports Server (NTRS)
Juric, D.; Tryggvason, G.; Han, J.
1997-01-01
Direct numerical simulations of fluid flow, heat transfer, and phase changes are presented. The simulations are made possible by a recently developed finite difference/front tracking method based on the one-field formulation of the governing equations where a single set of conservation equations is written for all the phases involved. The conservation equations are solved on a fixed rectangular grid, but the phase boundaries are kept sharp by tracking them explicitly by a moving grid of lower dimension. The method is discussed and applications to boiling heat transfer and the solidification of drops colliding with a wall are shown.
Accurate evaluation of exchange fields in finite element micromagnetic solvers
NASA Astrophysics Data System (ADS)
Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.
2012-04-01
Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The results obtained with QBFs are significantly more accurate than those from conventionally used approaches based on linear basis functions. Importantly, QBFs allow the error in computing the exchange field to be reduced by increasing the mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.
[Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (1)].
Murase, Kenya
2014-01-01
Utilization of differential equations and methods for solving them in medical physics are presented. First, the basic concept and the kinds of differential equations were overviewed. Second, separable differential equations and well-known first-order and second-order differential equations were introduced, and the methods for solving them were described together with several examples. In the next issue, the symbolic and series expansion methods for solving differential equations will be mainly introduced.
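For instance, the decay equation familiar from medical physics is separable and solvable in a few lines (a generic textbook example, not taken from the article):

    \frac{dN}{dt} = -\lambda N
    \quad\Longrightarrow\quad
    \int \frac{dN}{N} = -\lambda \int dt
    \quad\Longrightarrow\quad
    \ln N = -\lambda t + C
    \quad\Longrightarrow\quad
    N(t) = N_0\, e^{-\lambda t}, \qquad N_0 = N(0).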
Finite element modeling of electromagnetic fields and waves using NASTRAN
NASA Technical Reports Server (NTRS)
Moyer, E. Thomas, Jr.; Schroeder, Erwin
1989-01-01
The various formulations of Maxwell's equations are reviewed with emphasis on those formulations which most readily form analogies with Navier's equations. Analogies involving scalar and vector potentials and electric and magnetic field components are presented. Formulations allowing for media with dielectric and conducting properties are emphasized. It is demonstrated that many problems in electromagnetism can be solved using the NASTRAN finite element code. Several fundamental problems involving time harmonic solutions of Maxwell's equations with known analytic solutions are solved using NASTRAN to demonstrate convergence and mesh requirements. Mesh requirements are studied as a function of frequency, conductivity, and dielectric properties. Applications in both low frequency and high frequency are highlighted. The low frequency problems demonstrate the ability to solve problems involving media inhomogeneity and unbounded domains. The high frequency applications demonstrate the ability to handle problems with large boundary to wavelength ratios.
Mathematical Problem-Solving Styles in the Education of Deaf and Hard-of-Hearing Individuals
ERIC Educational Resources Information Center
Erickson, Elizabeth E. A.
2012-01-01
This study explored the mathematical problem-solving styles of middle school and high school deaf and hard-of-hearing students and the mathematical problem-solving styles of the mathematics teachers of middle school and high school deaf and hard-of-hearing students. The research involved 45 deaf and hard-of-hearing students and 19 teachers from a…
Solving the Sailors and the Coconuts Problem via Diagrammatic Approach
ERIC Educational Resources Information Center
Man, Yiu-Kwong
2010-01-01
In this article, we discuss how to use a diagrammatic approach to solve the classic sailors and the coconuts problem. It provides us an insight on how to tackle this type of problem in a novel and intuitive way. This problem-solving approach will be found useful to mathematics teachers or lecturers involved in teaching elementary number theory,…
ERIC Educational Resources Information Center
Owoh, Jeremy Strickland
2015-01-01
In today's technology enriched schools and workforces, creative problem-solving is involved in many aspects of a person's life. The educational systems of developed nations are designed to raise students who are creative and skillful in solving complex problems. Technology and the age of information require nations to develop generations of…
ERIC Educational Resources Information Center
Clariana, Roy B.; Engelmann, Tanja; Yu, Wu
2013-01-01
Problem solving likely involves at least two broad stages, problem space representation and then problem solution (Newell and Simon, Human problem solving, 1972). The metric centrality that Freeman ("Social Networks" 1:215-239, 1978) implemented in social network analysis is offered here as a potential measure of both. This development research…
NASA Technical Reports Server (NTRS)
Puri, Ishwar K.
2004-01-01
Our goal has been to investigate the influence of both dilution and radiation on the extinction process of nonpremixed flames at low strain rates. Simulations have been performed by using a counterflow code and three radiation models have been included in it, namely, the optically thin, the narrowband, and discrete ordinate models. The counterflow flame code OPPDIFF was modified to account for heat transfer losses by radiation from the hot gases. The discrete ordinate method (DOM) approximation was first suggested by Chandrasekhar for solving problems in interstellar atmospheres. Carlson and Lathrop developed the method for solving multi-dimensional problem in neutron transport. Only recently has the method received attention in the field of heat transfer. Due to the applicability of the discrete ordinate method for thermal radiation problems involving flames, the narrowband code RADCAL was modified to calculate the radiative properties of the gases. A non-premixed counterflow flame was simulated with the discrete ordinate method for radiative emissions. In comparison with two other models, it was found that the heat losses were comparable with the optically thin and simple narrowband model. The optically thin model had the highest heat losses followed by the DOM model and the narrow-band model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haut, T. S.; Babb, T.; Martinsson, P. G.
2015-06-16
Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulations over long times are required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th order Runge–Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, and with speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
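The gist of the time-stepping can be illustrated on a small skew-Hermitian semi-discretization: once exp(τL) is available (computed densely here with scipy, instead of the rational approximations of the paper), advancing by a large step τ is a single matrix-vector product. A generic sketch, not the authors' accelerated algorithm.

    import numpy as np
    from scipy.linalg import expm

    # periodic 1D advection u_t = -c u_x with central differences:
    # the resulting operator L is skew-symmetric (real skew-Hermitian)
    n, c = 128, 1.0
    dx = 1.0 / n
    L = np.zeros((n, n))
    for i in range(n):
        L[i, (i + 1) % n] = -c / (2 * dx)
        L[i, (i - 1) % n] = +c / (2 * dx)

    tau = 0.5                                # a "large" step spanning many CFL numbers
    E = expm(tau * L)                        # time-evolution operator exp(tau*L)

    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.exp(-200.0 * (x - 0.5) ** 2)      # initial pulse
    for step in range(4):
        u = E @ u                            # each large step is one matrix-vector product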
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
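The incremental ('delta' or 'correction') form can be sketched for a generic linear system Ax = b: instead of iterating on x directly, one repeatedly solves an approximate operator for a correction driven by the residual of the exact equations. A generic illustration, not the sensitivity-equation code of the paper.

    import numpy as np

    def delta_form_solve(A, b, M, x0=None, tol=1e-10, max_iter=200):
        """Defect-correction iteration: solve M * dx = b - A x, then x += dx.

        A : exact (possibly ill-conditioned) coefficient matrix
        M : cheaper or better-conditioned approximation of A used for the correction
        """
        x = np.zeros_like(b) if x0 is None else x0.copy()
        for k in range(max_iter):
            r = b - A @ x                     # residual of the *exact* equations
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            x += np.linalg.solve(M, r)        # incremental ("delta") correction
        return x

    # toy example: M keeps only the tridiagonal part of a slightly perturbed matrix
    n = 50
    A = 4.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    A += 0.1 * np.random.default_rng(0).standard_normal((n, n)) / n
    M = np.diag(np.diag(A)) + np.diag(np.diag(A, 1), 1) + np.diag(np.diag(A, -1), -1)
    x = delta_form_solve(A, np.ones(n), M)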
Meshless method for solving fixed boundary problem of plasma equilibrium
NASA Astrophysics Data System (ADS)
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi
2015-07-01
This study solves the Grad-Shafranov equation with a fixed plasma boundary by utilizing a meshless method for the first time. Previous studies have utilized a finite element method (FEM) to solve an equilibrium inside the fixed separatrix. In order to avoid difficulties of FEM (such as mesh problem, difficulty of coding, expensive calculation cost), this study focuses on the meshless methods, especially RBF-MFS and KANSA's method to solve the fixed boundary problem. The results showed that CPU time of the meshless methods was ten to one hundred times shorter than that of FEM to obtain the same accuracy.
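Although the Grad-Shafranov problem is more involved, the flavor of Kansa's collocation method can be shown on a one-dimensional Poisson problem: multiquadric radial basis functions are centered at the nodes, the differential equation is enforced at interior nodes and the boundary condition at the end nodes, and the resulting dense unsymmetric system is solved directly. A minimal sketch under these assumptions.

    import numpy as np

    # solve u''(x) = f(x) on [0, 1], u(0) = u(1) = 0, with f chosen so that
    # the exact solution is u(x) = sin(pi x)
    f = lambda x: -np.pi**2 * np.sin(np.pi * x)

    n, c = 25, 0.1                                   # nodes and multiquadric shape parameter
    x = np.linspace(0.0, 1.0, n)
    r = x[:, None] - x[None, :]

    phi = np.sqrt(r**2 + c**2)                       # multiquadric RBF centered at each node
    phi_xx = c**2 / (r**2 + c**2) ** 1.5             # its second derivative in x

    # Kansa collocation: PDE rows at interior nodes, Dirichlet rows at the two ends
    A = phi_xx.copy()
    rhs = f(x)
    for b in (0, n - 1):
        A[b, :] = phi[b, :]
        rhs[b] = 0.0

    coeffs = np.linalg.solve(A, rhs)                 # dense, unsymmetric collocation system
    u = phi @ coeffs                                 # approximate solution at the nodes
    err = np.max(np.abs(u - np.sin(np.pi * x)))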
Contact Stress Analysis of Spiral Bevel Gears Using Finite Element Analysis
NASA Technical Reports Server (NTRS)
Bibel, G. D.; Kumar, A; Reddy, S.; Handschuh, R.
1995-01-01
A procedure is presented for performing three-dimensional stress analysis of spiral bevel gears in mesh using the finite element method. The procedure involves generating a finite element model by solving equations that identify tooth surface coordinates. Coordinate transformations are used to orientate the gear and pinion for gear meshing. Contact boundary conditions are simulated with gap elements. A solution technique for correct orientation of the gap elements is given. Example models and results are presented.
An Assessment of Operational Energy Capability Improvement Fund (OECIF) Programs 17-S-2544
2017-09-19
persistently attack key operational energy problems. OECIF themes are summarized in Table 1, and Appendix A includes more detail on the programs within... problems; FY 2014: Analytical methods and tools; FY 2015: Improving fuel economy for the current tactical ground fleet; FY 2016: Increasing the operational... involve a variety of organizations to solve operational energy problems. In FY 2015, the OECIF program received a one-time $14.1M Congressional plus-up
Gridless particle technique for the Vlasov-Poisson system in problems with high degree of symmetry
NASA Astrophysics Data System (ADS)
Boella, E.; Coppa, G.; D'Angola, A.; Peiretti Paradisi, B.
2018-03-01
In the paper, gridless particle techniques are presented in order to solve problems involving electrostatic, collisionless plasmas. The method makes use of computational particles having the shape of spherical shells or of rings, and can be used to study cases in which the plasma has spherical or axial symmetry, respectively. As a computational grid is absent, the technique is particularly suitable when the plasma occupies a rapidly changing space region.
Indicators of Arctic Sea Ice Bistability in Climate Model Simulations and Observations
2014-09-30
ultimately developed a novel mathematical method to solve the system of equations involving the addition of a numerical “ghost” layer, as described in the... balance models (EBMs) and (ii) seasonally-varying single-column models (SCMs). As described in Approach item #1, we developed an idealized model that... includes both latitudinal and seasonal variations (Fig. 1). The model reduces to a standard EBM or SCM as limiting cases in the parameter space, thus
Robot Control Based On Spatial-Operator Algebra
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo; Kreutz, Kenneth K.; Jain, Abhinandan
1992-01-01
Method for mathematical modeling and control of robotic manipulators based on spatial-operator algebra providing concise representation and simple, high-level theoretical frame-work for solution of kinematical and dynamical problems involving complicated temporal and spatial relationships. Recursive algorithms derived immediately from abstract spatial-operator expressions by inspection. Transition from abstract formulation through abstract solution to detailed implementation of specific algorithms to compute solution greatly simplified. Complicated dynamical problems like two cooperating robot arms solved more easily.
Application of machine learning methods for traffic signs recognition
NASA Astrophysics Data System (ADS)
Filatov, D. V.; Ignatev, K. V.; Deviatkin, A. V.; Serykh, E. V.
2018-02-01
This paper focuses on solving a relevant and pressing safety issue on intercity roads. Two approaches were considered for solving the problem of traffic signs recognition; the approaches involved neural networks to analyze images obtained from a camera in the real-time mode. The first approach is based on a sequential image processing. At the initial stage, with the help of color filters and morphological operations (dilatation and erosion), the area containing the traffic sign is located on the image, then the selected and scaled fragment of the image is analyzed using a feedforward neural network to determine the meaning of the found traffic sign. Learning of the neural network in this approach is carried out using a backpropagation method. The second approach involves convolution neural networks at both stages, i.e. when searching and selecting the area of the image containing the traffic sign, and when determining its meaning. Learning of the neural network in the second approach is carried out using the intersection over union function and a loss function. For neural networks to learn and the proposed algorithms to be tested, a series of videos from a dash cam were used that were shot under various weather and illumination conditions. As a result, the proposed approaches for traffic signs recognition were analyzed and compared by key indicators such as recognition rate percentage and the complexity of neural networks’ learning process.
High order solution of Poisson problems with piecewise constant coefficients and interface jumps
NASA Astrophysics Data System (ADS)
Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben
2017-04-01
We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
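The rectangular-domain building block mentioned above, a fast Poisson solver, can be sketched in one dimension with a discrete sine transform: the standard second-difference operator is diagonal in the sine basis, so the solve costs O(N log N). A generic sketch with homogeneous Dirichlet data, separate from the CFM/BIM machinery of the paper.

    import numpy as np
    from scipy.fft import dst, idst

    def fast_poisson_1d(f, h):
        """Solve u'' = f on a uniform grid with u = 0 at both ends (interior values only)."""
        n = f.size
        f_hat = dst(f, type=1)                                    # sine transform of the source
        k = np.arange(1, n + 1)
        lam = -(2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2   # eigenvalues of the discrete Laplacian
        return idst(f_hat / lam, type=1)

    # verification against u(x) = sin(pi x), for which u'' = -pi^2 sin(pi x)
    n = 255
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]                        # interior nodes
    h = x[1] - x[0]
    u = fast_poisson_1d(-np.pi**2 * np.sin(np.pi * x), h)
    err = np.max(np.abs(u - np.sin(np.pi * x)))                   # second-order small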
Finite difference and Runge-Kutta methods for solving vibration problems
NASA Astrophysics Data System (ADS)
Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi
2017-11-01
The vibration of a storey building can be modelled as a system of second-order ordinary differential equations. If the number of floors of a building is large, then the result is a large-scale system of second-order ordinary differential equations. The large-scale system is difficult to solve, and if it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large-scale systems of second-order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
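As a small concrete instance, a single-storey mass-spring-damper m x'' + c x' + k x = 0 can be advanced with each of the schemes named above; the snippet compares explicit Euler, Heun, and the central-difference update (coefficients are illustrative only).

    import numpy as np

    m, c, k = 1.0, 0.1, 40.0                          # illustrative storey mass, damping, stiffness
    acc = lambda x, v: -(c * v + k * x) / m
    dt, nsteps, x0, v0 = 0.01, 1000, 1.0, 0.0

    # explicit (forward) Euler
    x, v = x0, v0
    for n in range(nsteps):
        x, v = x + dt * v, v + dt * acc(x, v)

    # Heun (improved Euler): predictor-corrector average of slopes
    x, v = x0, v0
    for n in range(nsteps):
        xp, vp = x + dt * v, v + dt * acc(x, v)
        x, v = x + 0.5 * dt * (v + vp), v + 0.5 * dt * (acc(x, v) + acc(xp, vp))

    # central difference: x_{n+1} = 2 x_n - x_{n-1} + dt^2 * a_n
    # (damping handled with a backward-difference velocity estimate for simplicity)
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * acc(x0, v0)   # starting procedure
    x = x0
    for n in range(nsteps):
        v_est = (x - x_prev) / dt
        x_new = 2.0 * x - x_prev + dt**2 * acc(x, v_est)
        x_prev, x = x, x_new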
The Multigrid-Mask Numerical Method for Solution of Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Ku, Hwar-Ching; Popel, Aleksander S.
1996-01-01
A multigrid-mask method for solution of incompressible Navier-Stokes equations in primitive variable form has been developed. The main objective is to apply this method in conjunction with the pseudospectral element method solving flow past multiple objects. There are two key steps involved in calculating flow past multiple objects. The first step utilizes only Cartesian grid points. This homogeneous or mask method step permits flow into the interior rectangular elements contained in objects, but with the restriction that the velocity for those Cartesian elements within and on the surface of an object should be small or zero. This step easily produces an approximate flow field on Cartesian grid points covering the entire flow field. The second or heterogeneous step corrects the approximate flow field to account for the actual shape of the objects by solving the flow field based on the local coordinates surrounding each object and adapted to it. The noise occurring in data communication between the global (low frequency) coordinates and the local (high frequency) coordinates is eliminated by the multigrid method when the Schwarz Alternating Procedure (SAP) is implemented. Two-dimensional flow past circular and elliptic cylinders is presented to demonstrate the versatility of the proposed method. An interesting phenomenon is observed: when the second elliptic cylinder is placed in the wake of the first, a traction force results in a negative drag coefficient.
Kuperminc, Gabriel P.; Allen, Joseph P.
2006-01-01
A model of problematic adolescent behavior that expands current theories of social skill deficits in delinquent behavior to consider both social skills and orientation toward the use of adaptive skills was examined in an ethnically and socioeconomically diverse sample of 113 male and female adolescents. Adolescents were selected on the basis of moderate to serious risk for difficulties in social adaptation in order to focus on the population of youth most likely to be targeted by prevention efforts. Structural equation modeling was used to examine cross-sectional data using multiple informants (adolescents, peers, and parents) and multiple methods (performance test and self-report). Adolescent social orientation, as reflected in perceived problem solving effectiveness, identification with adult prosocial values, and self-efficacy expectations, exhibited a direct association to delinquent behavior and an indirect association to drug involvement mediated by demonstrated success in using problem solving skills. Results suggest that the utility of social skill theories of adolescent problem behaviors for informing preventive and remedial interventions can be enhanced by expanding them to consider adolescents’ orientation toward using the skills they may already possess. PMID:16929380
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp
Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, the high numerical cost of such calculations remains a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation, from the perspective of the computational cost involved: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN). For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and propagation of linear waves, and a radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions on the time step size necessary for adopting each scheme. The LIN scheme is superior to the other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, its convergence tends to be worse, especially in problems that involve optically thin regions. In all cases, SOS is not practical.
NASA Astrophysics Data System (ADS)
Stoykov, S.; Atanassov, E.; Margenov, S.
2016-10-01
Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves simultaneous operations with sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one developed in Intel MKL, and it is shown that by considering the properties of the sparse matrix, better algorithms can be developed.
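The specialized Xeon Phi kernel is not reproduced in the abstract; purely as a baseline for comparison, a plain CSR sparse-by-dense product might look like the sketch below (matrix sizes and density are illustrative assumptions):

```python
# Minimal sketch of a CSR sparse-matrix by dense-matrix product, the operation
# the abstract identifies as the expensive kernel.  The vectorized/coprocessor
# version described in the paper is not reproduced here.
import numpy as np
from scipy.sparse import random as sprandom

def csr_dense_matmul(data, indices, indptr, B):
    """Compute A @ B where A is given in CSR form (data, indices, indptr)."""
    n_rows = len(indptr) - 1
    C = np.zeros((n_rows, B.shape[1]))
    for i in range(n_rows):
        for idx in range(indptr[i], indptr[i + 1]):
            C[i, :] += data[idx] * B[indices[idx], :]
    return C

# Example: a small random sparse matrix and a dense block of right-hand sides.
A = sprandom(6, 6, density=0.3, format="csr", random_state=0)
B = np.random.default_rng(0).standard_normal((6, 4))
assert np.allclose(csr_dense_matmul(A.data, A.indices, A.indptr, B), A @ B)
print("CSR-by-dense product matches the reference result.")
```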
Shape optimization using a NURBS-based interface-enriched generalized FEM
Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...
2016-11-26
This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of shape functions and their spatial derivatives. Verification and illustrative problems are solved to demonstrate the precision and capability of the method.
Jona, Celine M H; Labuschagne, Izelle; Mercieca, Emily-Clare; Fisher, Fiona; Gluyas, Cathy; Stout, Julie C; Andrews, Sophie C
2017-01-01
Family functioning in Huntington's disease (HD) is known from previous studies to be adversely affected. However, which aspects of family functioning are disrupted is unknown, limiting the empirical basis around which to create supportive interventions. The aim of the current study was to assess family functioning in HD families. We assessed family functioning in 61 participants (38 HD gene-expanded participants and 23 family members) using the McMaster Family Assessment Device (FAD; Epstein, Baldwin and Bishop, 1983), which provides scores for seven domains of functioning: Problem Solving; Communication; Affective Involvement; Affective Responsiveness; Behavior Control; Roles; and General Family Functioning. The most commonly reported disrupted domain for HD participants was Affective Involvement, which was reported by 39.5% of HD participants, followed closely by General Family Functioning (36.8%). For family members, the most commonly reported dysfunctional domains were Affective Involvement and Communication (both 52.2%). Furthermore, symptomatic HD participants reported more disruption to Problem Solving than pre-symptomatic HD participants. In terms of agreement between pre-symptomatic and symptomatic HD participants and their family members, all domains showed moderate to very good agreement. However, on average, family members rated Communication as more disrupted than their HD affected family member. These findings highlight the need to target areas of emotional engagement, communication skills and problem solving in family interventions in HD.
NASA Astrophysics Data System (ADS)
Yannopapas, Vassilios; Paspalakis, Emmanuel
2018-07-01
We present a new theoretical tool for simulating optical trapping of nanoparticles in the presence of an arbitrary metamaterial design. The method is based on rigorously solving Maxwell's equations for the metamaterial via a hybrid discrete-dipole approximation/multiple-scattering technique and direct calculation of the optical force exerted on the nanoparticle by means of the Maxwell stress tensor. We apply the method to the case of a spherical polystyrene probe trapped within the optical landscape created by illuminating a plasmonic metamaterial consisting of periodically arranged tapered metallic nanopyramids. The developed technique is ideally suited for general optomechanical calculations involving metamaterial designs and can compete with purely numerical methods such as finite-difference or finite-element schemes.
Bound-preserving Legendre-WENO finite volume schemes using nonlinear mapping
NASA Astrophysics Data System (ADS)
Smith, Timothy; Pantano, Carlos
2017-11-01
We present a new method to enforce field bounds in high-order Legendre-WENO finite volume schemes. The strategy consists of reconstructing each field through an intermediate mapping, which by design satisfies realizability constraints. Determination of the coefficients of the polynomial reconstruction involves nonlinear equations that are solved using Newton's method. The selection between the original or mapped reconstruction is implemented dynamically to minimize computational cost. The method has also been generalized to fields that exhibit interdependencies, requiring multi-dimensional mappings. Further, the method does not depend on the existence of a numerical flux function. We will discuss details of the proposed scheme and show results for systems in conservation and non-conservation form. This work was funded by the NSF under Grant DMS 1318161.
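The abstract does not give the mapping or the resulting nonlinear system; purely as a generic illustration of the Newton step it mentions, a small multidimensional Newton iteration might look like the sketch below (the residual F and Jacobian J are placeholders, not the paper's reconstruction equations):

```python
# Generic Newton iteration for a small nonlinear system F(c) = 0, of the kind
# used to determine mapped reconstruction coefficients.  The residual below is
# a placeholder, not the nonlinear mapping from the paper.
import numpy as np

def newton(F, J, c0, tol=1e-12, max_iter=50):
    c = np.asarray(c0, dtype=float)
    for _ in range(max_iter):
        r = F(c)
        if np.linalg.norm(r) < tol:
            break
        c = c - np.linalg.solve(J(c), r)   # Newton update: solve J dc = -r
    return c

# Placeholder system: find c with c0*c1 = 2 and c0 + c1**2 = 3.
F = lambda c: np.array([c[0] * c[1] - 2.0, c[0] + c[1] ** 2 - 3.0])
J = lambda c: np.array([[c[1], c[0]], [1.0, 2.0 * c[1]]])
print(newton(F, J, [1.0, 1.0]))            # converges to (2, 1)
```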
NASA Astrophysics Data System (ADS)
Chui, Siu Lit; Lu, Ya Yan
2004-03-01
Wide-angle full-vector beam propagation methods (BPMs) for three-dimensional wave-guiding structures can be derived on the basis of rational approximants of a square root operator or its exponential (i.e., the one-way propagator). While the less accurate BPM based on the slowly varying envelope approximation can be efficiently solved by the alternating direction implicit (ADI) method, the wide-angle variants involve linear systems that are more difficult to handle. We present an efficient solver for these linear systems that is based on a Krylov subspace method with an ADI preconditioner. The resulting wide-angle full-vector BPM is used to simulate the propagation of wave fields in a Y branch and a taper.
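The ADI-preconditioned Krylov solver itself is not spelled out in the abstract; the sketch below only illustrates the general structure described above, a Krylov method (GMRES) accelerated by a preconditioner, with an incomplete-LU factorization standing in for the ADI preconditioner and a stand-in complex tridiagonal matrix in place of the BPM operator:

```python
# Sketch: solving a linear system with a Krylov method (GMRES) plus a
# preconditioner.  The ILU preconditioner and the tridiagonal test matrix
# below are stand-ins, not the ADI preconditioner or BPM operator of the paper.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
main = 2.0 + 0.1j * np.ones(n)                 # stand-in complex diagonal
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n, dtype=complex)

ilu = spla.spilu(A)                            # preconditioner factorization
M = spla.LinearOperator(A.shape, ilu.solve, dtype=complex)
x, info = spla.gmres(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```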
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained for non-linear engineering problems in the past by using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
ERIC Educational Resources Information Center
Valentine, Andrew; Belski, Iouri; Hamilton, Margaret
2017-01-01
Problem-solving is a key engineering skill, yet is an area in which engineering graduates underperform. This paper investigates the potential of using web-based tools to teach students problem-solving techniques without the need to make use of class time. An idea generation experiment involving 90 students was designed. Students were surveyed…
The Investigation of Problem Solving Skill of the Mountaineers in Terms of Demographic Variables
ERIC Educational Resources Information Center
Gürer, Burak
2015-01-01
The aim of this research is to investigate the problem solving skills of individuals involved in mountaineering. 315 volunteers participated in the study. The research data were collected with the problem solving scale developed by Heppner and Peterson, the Turkish adaptation of which was carried out by Sahin et al. There are 35 items in total and only 3…
Students' understandings of electrochemistry
NASA Astrophysics Data System (ADS)
O'Grady-Morris, Kathryn
Electrochemistry is considered by students to be a difficult topic in chemistry. This research was a mixed methods study guided by the research question: At the end of a unit of study, what are students' understandings of electrochemistry? The framework of analysis used for the qualitative and quantitative data collected in this study was comprised of three categories: types of knowledge used in problem solving, levels of representation of knowledge in chemistry (macroscopic, symbolic, and particulate), and alternative conceptions. Although individually each of the three categories has been reported in previous studies, the contribution of this study is the inter-relationships among them. Semi-structured, task-based interviews were conducted while students were setting up and operating electrochemical cells in the laboratory, and a two-tiered, multiple-choice diagnostic instrument was designed to identify alternative conceptions that students held at the end of the unit. For familiar problems, those involving routine voltaic cells, students used a working-forwards problem-solving strategy, two or three levels of representation of knowledge during explanations, scored higher on both procedural and conceptual knowledge questions in the diagnostic instrument, and held fewer alternative conceptions related to the operation of these cells. For less familiar problems, those involving non-routine voltaic cells and electrolytic cells, students approached problem-solving with procedural knowledge, used only one level of representation of knowledge when explaining the operation of these cells, scored higher on procedural knowledge than conceptual knowledge questions in the diagnostic instrument, and held a greater number of alternative conceptions. Decision routines that involved memorized formulas and procedures were used to solve both quantitative and qualitative problems and the main source of alternative conceptions in this study was the overgeneralization of theory related to the particulate level of representation of knowledge. The findings from this study may contribute further to our understanding of students' conceptions in electrochemistry. Furthermore, understanding the influence of the three categories in the framework of analysis and their inter-relationships on how students make sense of this field may result in a better understanding of classroom practice that could promote the acquisition of conceptual knowledge --- knowledge that is "rich in relationships".
Implicit Runge-Kutta Methods with Explicit Internal Stages
NASA Astrophysics Data System (ADS)
Skvortsov, L. M.
2018-03-01
The main computational costs of implicit Runge-Kutta methods are caused by solving a system of algebraic equations at every step. By introducing explicit stages, it is possible to increase the stage (or pseudo-stage) order of the method, which makes it possible to increase the accuracy and avoid reducing the order in solving stiff problems, without additional costs of solving algebraic equations. The paper presents implicit methods with an explicit first stage and one or two explicit internal stages. The results of solving test problems are compared with similar methods having no explicit internal stages.
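The specific methods of the paper cannot be reconstructed from the abstract alone; the sketch below only illustrates the cost structure it refers to, namely that an implicit stage requires an algebraic (Newton) solve at every step while an explicit stage does not, using a single-stage backward Euler method on a generic stiff test equation:

```python
# Illustration of the cost structure mentioned above: a backward Euler step
# needs an algebraic solve each step, while a forward Euler step does not.
# The stiff test problem is generic, not from the paper.
import numpy as np

lam = -1000.0
f = lambda t, y: lam * (y - np.cos(t)) - np.sin(t)   # exact solution y = cos(t)

def explicit_euler(y, t, h):
    return y + h * f(t, y)

def implicit_euler(y, t, h, newton_iters=5):
    # Solve g(z) = z - y - h*f(t+h, z) = 0 with Newton's method.
    z = y
    for _ in range(newton_iters):
        g = z - y - h * f(t + h, z)
        dg = 1.0 - h * lam
        z -= g / dg
    return z

h, T = 0.01, 1.0
t, ye, yi = 0.0, 1.0, 1.0
while t < T - 1e-12:
    ye = explicit_euler(ye, t, h)
    yi = implicit_euler(yi, t, h)
    t += h
print("explicit Euler:", ye, " implicit Euler:", yi, " exact:", np.cos(T))
```

With this step size the explicit result diverges while the implicit result stays close to the exact solution, which is the stability advantage paid for by the per-step algebraic solve.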
Sosnowski, Tytus; Rynkiewicz, Andrzej; Wordecha, Małgorzata; Kępkowicz, Anna; Majewska, Adrianna; Pstrągowska, Aleksandra; Oleksy, Tomasz; Wypych, Marek; Marchewka, Artur
2017-07-01
It is known that solving mental tasks leads to tonic increase in cardiovascular activity. Our previous research showed that tasks involving rule application (RA) caused greater tonic increase in cardiovascular activity than tasks requiring rule discovery (RD). However, it is not clear what brain mechanisms are responsible for this difference. The aim of two experimental studies was to compare the patterns of brain and cardiovascular activity while both RD and the RA numeric tasks were being solved. The fMRI study revealed greater brain activation while solving RD tasks than while solving RA tasks. In particular, RD tasks evoked greater activation of the left inferior frontal gyrus and selected areas in the parietal, and temporal cortices, including the precuneus, supramarginal gyrus, angular gyrus, inferior parietal lobule, and the superior temporal gyrus, and the cingulate cortex. In addition, RA tasks caused larger increases in HR than RD tasks. The second study, carried out in a cardiovascular laboratory, showed greater increases in heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) while solving RA tasks than while solving RD tasks. The results support the hypothesis that RD and RA tasks involve different modes of information processing, but the neuronal mechanism responsible for the observed greater cardiovascular response to RA tasks than to RD tasks is not completely clear. Copyright © 2017. Published by Elsevier B.V.
Reliable Multi Method Assessment of Metacognition Use in Chemistry Problem Solving
ERIC Educational Resources Information Center
Cooper, Melanie M.; Sandi-Urena, Santiago; Stevens, Ron
2008-01-01
Metacognition is fundamental in achieving understanding of chemistry and developing problem solving skills. This paper describes an across-method-and-time instrument designed to assess the use of metacognition in chemistry problem solving. This multi method instrument combines a self report, namely the Metacognitive Activities Inventory…
Discovering Steiner Triple Systems through Problem Solving
ERIC Educational Resources Information Center
Sriraman, Bharath
2004-01-01
An attempt to implement problem solving as a teacher of ninth grade algebra is described. The problems selected were not general ones; they involved combinations, represented various situations, and were more complex, which led to the discovery of Steiner triple systems.
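For readers unfamiliar with the object mentioned above, the sketch below lists the smallest non-trivial Steiner triple system, the order-7 system (the Fano plane), and verifies its defining property; the construction shown (a difference set mod 7) is a standard one and is not taken from the article:

```python
# The Steiner triple system of order 7 (the Fano plane): 7 points, 7 triples,
# and every pair of points appears in exactly one triple.
from itertools import combinations

triples = [tuple(sorted(((i) % 7, (i + 1) % 7, (i + 3) % 7))) for i in range(7)]

counts = {pair: 0 for pair in combinations(range(7), 2)}
for t in triples:
    for pair in combinations(t, 2):
        counts[pair] += 1

assert all(c == 1 for c in counts.values())
print(triples)
print("Every pair of the 7 points lies in exactly one of the 7 triples.")
```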
An Expansion Formula with Higher-Order Derivatives for Fractional Operators of Variable Order
Almeida, Ricardo
2013-01-01
We obtain approximation formulas for fractional integrals and derivatives of Riemann-Liouville and Marchaud types with a variable fractional order. The approximations involve integer-order derivatives only. An estimation for the error is given. The efficiency of the approximation method is illustrated with examples. As applications, we show how the obtained results are useful to solve differential equations, and problems of the calculus of variations that depend on fractional derivatives of Marchaud type. PMID:24319382
Contact stress analysis of spiral bevel gears using nonlinear finite element static analysis
NASA Technical Reports Server (NTRS)
Bibel, G. D.; Kumar, A.; Reddy, S.; Handschuh, R.
1993-01-01
A procedure is presented for performing three-dimensional stress analysis of spiral bevel gears in mesh using the finite element method. The procedure involves generating a finite element model by solving equations that identify tooth surface coordinates. Coordinate transformations are used to orientate the gear and pinion for gear meshing. Contact boundary conditions are simulated with gap elements. A solution technique for correct orientation of the gap elements is given. Example models and results are presented.
Time fractional capital-induced labor migration model
NASA Astrophysics Data System (ADS)
Ali Balcı, Mehmet
2017-07-01
In this study we present a new model of neoclassical economic growth by considering that workers move from regions with a lower density of capital to regions with a higher density of capital. Since labor migration and capital flow involve self-similarities over long time ranges, we use fractional-order derivatives for the time variable. To solve this model we propose the Variational Iteration Method, and we numerically study labor migration flow data from Turkey, along with other countries, throughout the period 1966-2014.
Time-Domain Computation Of Electromagnetic Fields In MMICs
NASA Technical Reports Server (NTRS)
Lansing, Faiza S.; Rascoe, Daniel L.
1995-01-01
Maxwell's equations solved on three-dimensional, conformed orthogonal grids by finite-difference techniques. Method of computing frequency-dependent electrical parameters of monolithic microwave integrated circuit (MMIC) involves time-domain computation of propagation of electromagnetic field in response to excitation by single pulse at input terminal, followed by computation of Fourier transforms to obtain frequency-domain response from time-domain response. Parameters computed include electric and magnetic fields, voltages, currents, impedances, scattering parameters, and effective dielectric constants. Powerful and efficient means for analyzing performance of even complicated MMIC.
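The MMIC analysis itself is three-dimensional and computes many derived parameters; purely as a schematic of the workflow described above (single-pulse excitation in the time domain, then a Fourier transform to obtain the frequency-domain response), a one-dimensional finite-difference time-domain sketch is given below. Grid sizes, the source profile, and the probe location are arbitrary assumptions.

```python
# 1-D FDTD sketch: propagate a Gaussian pulse and Fourier-transform the
# recorded field to obtain a frequency-domain response.  This is only a
# schematic of the time-domain-then-FFT workflow, not the MMIC solver.
import numpy as np

nz, nt = 400, 2000
c0, dz = 3e8, 1e-3
dt = dz / (2 * c0)                              # Courant-stable time step
mu0, eps0 = 4e-7 * np.pi, 8.854e-12
Ex, Hy = np.zeros(nz), np.zeros(nz)
probe = np.zeros(nt)

for n in range(nt):
    Hy[:-1] += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])      # update H from E
    Ex[1:]  += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])     # update E from H
    Ex[50]  += np.exp(-((n - 60) / 20.0) ** 2)           # single-pulse excitation
    probe[n] = Ex[300]                                   # record the response

spectrum = np.fft.rfft(probe)                   # frequency-domain response
freqs = np.fft.rfftfreq(nt, dt)
print("peak response near", freqs[np.argmax(np.abs(spectrum))], "Hz")
```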
Heat transfer in a micropolar fluid over a stretching sheet with Newtonian heating.
Qasim, Muhammad; Khan, Ilyas; Shafie, Sharidan
2013-01-01
This article looks at the steady flow of a micropolar fluid over a stretching surface with heat transfer in the presence of Newtonian heating. The relevant partial differential equations have been reduced to ordinary differential equations. The reduced ordinary differential equation system has been solved numerically by the Runge-Kutta-Fehlberg fourth-fifth order method. The influence of the different parameters involved on the dimensionless velocity, microrotation, and temperature is examined. An excellent agreement is found between the present and previous limiting results.
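The reduced micropolar equations are not given in the abstract; the sketch below only shows how a classical boundary-layer problem of the same type can be integrated with an adaptive Runge-Kutta pair inside a simple shooting loop (the Blasius equation stands in for the paper's system, and scipy's RK45 is used in place of the Runge-Kutta-Fehlberg scheme):

```python
# Shooting solution of a classical boundary-layer equation
# (Blasius: f''' + 0.5*f*f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1)
# with an adaptive Runge-Kutta integrator.  Stand-in for the reduced
# micropolar system of the paper, which is not given in the abstract.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):                    # y = [f, f', f'']
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def far_field_residual(s, eta_max=10.0):
    sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s], method="RK45", rtol=1e-8)
    return sol.y[1, -1] - 1.0       # mismatch in f'(inf) = 1

s_star = brentq(far_field_residual, 0.1, 1.0)
print("shooting parameter f''(0) ≈", s_star)   # classical value ≈ 0.332
```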
Quadrature imposition of compatibility conditions in Chebyshev methods
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Streett, C. L.
1990-01-01
Often, in solving an elliptic equation with Neumann boundary conditions, a compatibility condition has to be imposed for well-posedness. This condition involves integrals of the forcing function. When pseudospectral Chebyshev methods are used to discretize the partial differential equation, these integrals have to be approximated by an appropriate quadrature formula. The Gauss-Chebyshev (or any variant of it, like the Gauss-Lobatto) formula cannot be used here since the integrals under consideration do not include the weight function. A natural candidate for approximating the integrals is the Clenshaw-Curtis formula; however, it is shown that this is the wrong choice and may lead to divergence if time-dependent methods are used to march the solution to steady state. The correct quadrature formula is developed for these problems. This formula takes into account the degree of the polynomials involved. It is shown that this formula leads to a well-conditioned Chebyshev approximation to the differential equations and that the compatibility condition is automatically satisfied.
NASA Astrophysics Data System (ADS)
Genco, Filippo
Damage to plasma-facing components (PFC) due to various plasma instabilities is still a major concern for the successful development of fusion energy and represents a significant research obstacle for the community. It is of great importance to fully understand the behavior and lifetime expectancy of PFC under both low-energy cycles during normal events and highly energetic events such as disruptions, Edge-Localized Modes (ELM), Vertical Displacement Events (VDE), and runaway electrons (RE). The consequences of these highly energetic dumps, with energy fluxes ranging from 10 MJ/m2 up to 200 MJ/m2 applied over very short periods (0.1 to 5 ms), can be catastrophic for both safety and economic reasons. These phenomena can cause a) a large temperature increase in the target material; b) consequent melting, evaporation, and erosion losses due to the extremely high heat fluxes; c) possible structural damage and permanent degradation of the entire bulk material, with probable burnout of the coolant tubes; and d) plasma contamination, with transport of target material into the chamber far from where it was originally removed. The modeling of off-normal events such as disruptions and ELMs requires the simultaneous solution over time of three main problems: a) the heat transfer in the plasma-facing component; b) the interaction of the vapor produced at the surface with the incoming plasma particles; and c) the transport of the radiation produced in the vapor-plasma cloud. In addition, the moving-boundaries problem has to be considered and solved at the material surface. Considering a carbon divertor as the target, there are two moving boundaries, since carbon does not melt under the given conditions: the plasma front and the receding surface of the eroded material. Current solution methods for this problem use finite differences and a moving coordinate system based on the Crank-Nicolson method and the Alternating Directions Implicit (ADI) method. Particle-In-Cell (PIC) methods are currently widely used for solving complex dynamics problems involving distorted plasma hydrodynamics and plasma physics. The PIC method solves the hydrodynamic equations and all field equations while tracking "sample particles" or pseudo-particles (representative of the much more numerous real ones) as they move under the influence of diffusion or magnetic forces. The superior behavior of PIC techniques over the more classical Lagrangian finite difference methods lies in the fact that detailed information about the particles is available at all times, and mass and momentum transport values are constantly provided. This allows the behavior of the plasma to be described well with a relatively small number of particles, even in the presence of highly distorted flows, without losing accuracy. The radiation transport equation is solved at each time step by calculating the opacity and emissivity coefficients for each cell. Photon radiation continuum and line fluxes are also calculated over the entire domain and provide useful information for the overall energy calculation of the system, which in the end yields the total erosion and the lifetime of the target material. In this thesis, a new code named HEIGHTS-PIC has been created and modified using a new approach to the PIC technique to solve the three physics problems involved, integrating each of them as a continuum and providing insight into the plasma behavior, its evolution over time, and a physical understanding of the very complex phenomena taking place.
The results produced with the models are compared with the well-known and benchmarked HEIGHTS package and also with existing experimental results especially produced in Russia at the TRINITI facility. Comparisons with LASER experiments are also discussed.
MO-F-204-00: Preparing for the ABR Diagnostic and Nuclear Medical Physics Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
Brosnan, Sarah F; Beran, Michael J; Parrish, Audrey E; Price, Sara A; Wilson, Bart J
2013-07-18
How do primates, humans included, deal with novel problems that arise in interactions with other group members? Despite much research regarding how animals and humans solve social problems, few studies have utilized comparable procedures, outcomes, or measures across different species. Thus, it is difficult to piece together the evolution of decision making, including the roots from which human economic decision making emerged. Recently, a comparative body of decision making research has emerged, relying largely on the methodology of experimental economics in order to address these questions in a cross-species fashion. Experimental economics is an ideal method of inquiry for this approach. It is a well-developed method for distilling complex decision making involving multiple conspecifics whose decisions are contingent upon one another into a series of simple decision choices. This allows these decisions to be compared across species and contexts. In particular, our group has used this approach to investigate coordination in New World monkeys, Old World monkeys, and great apes (including humans), using identical methods. We find that in some cases there are remarkable continuities of outcome, as when some pairs in all species solved a coordination game, the Assurance game. On the other hand, we also find that these similarities in outcomes are likely driven by differences in underlying cognitive mechanisms. New World monkeys required exogenous information about their partners' choices in order to solve the task, indicating that they were using a matching strategy. Old World monkeys, on the other hand, solved the task without exogenous cues, leading to investigations into what mechanisms may be underpinning their responses (e.g., reward maximization, strategy formation, etc.). Great apes showed a strong experience effect, with cognitively enriched apes following what appears to be a strategy. Finally, humans were able to solve the task with or without exogenous cues. However, when given the chance to do so, they incorporated an additional mechanism unavailable to the other primates - language - to coordinate outcomes with their partner. We discuss how these results inform not only comparative psychology, but also evolutionary psychology, as they provide an understanding of the evolution of human economic behavior, and the evolution of decision making more broadly.
MO-F-204-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szczykutowicz, T.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambelli, J.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenney, S.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
MO-F-204-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those unique aspects of the nuclear exam, and how preparing for a second specialty differs from the first. Medical physicists who recently completed each ABR exam portion will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-04: Preparing for Parts 2 & 3 of the ABR Nuclear Medicine Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacDougall, R.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-00: Preparing for the ABR Diagnostic and Nuclear Medicine Physics Exams
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-01: Preparing for Part 1 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simiele, S.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-03: Preparing for Part 3 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
WE-D-213-02: Preparing for Part 2 of the ABR Diagnostic Physics Exam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zambelli, J.
Adequate, efficient preparation for the ABR Diagnostic and Nuclear Medical Physics exams is key to successfully obtain ABR professional certification. Each part of the ABR exam presents its own challenges: Part I: Determine the scope of basic medical physics study material, efficiently review this material, and solve related written questions/problems. Part II: Understand imaging principles, modalities, and systems, including image acquisition, processing, and display. Understand the relationship between imaging techniques, image quality, patient dose and safety, and solve related written questions/problems. Part III: Gain crucial, practical, clinical medical physics experience. Effectively communicate and explain the practice, performance, and significance of all aspects of clinical medical physics. All three parts of the ABR exam require specific skill sets and preparation: mastery of basic physics and imaging principles; written problem solving often involving rapid calculation; responding clearly and succinctly to oral questions about the practice, methods, and significance of clinical medical physics. This symposium focuses on the preparation and skill sets necessary for each part of the ABR exam. Although there is some overlap, the nuclear exam covers a different body of knowledge than the diagnostic exam. A separate speaker will address those aspects that are unique to the nuclear exam. Medical physicists who have recently completed each part of the ABR exam will share their experiences, insights, and preparation methods to help attendees best prepare for the challenges of each part of the ABR exam. In accordance with ABR exam security policy, no recalls or exam questions will be discussed. Learning Objectives: How to prepare for Part 1 of the ABR exam by determining the scope of basic medical physics study material and related problem solving/calculations. How to prepare for Part 2 of the ABR exam by understanding diagnostic and/or nuclear imaging physics, systems, dosimetry, safety and related problem solving/calculations. How to prepare for Part 3 of the ABR exam by effectively communicating the practice, methods, and significance of clinical diagnostic and/or nuclear medical physics.
Rapid prototyping strategy for a surgical data warehouse.
Tang, S-T; Huang, Y-F; Hsiao, M-L; Yang, S-H; Young, S-T
2003-01-01
Healthcare processes typically generate an enormous volume of patient information. This information largely represents unexploited knowledge, since current hospital operational systems (e.g., HIS, RIS) are not suitable for knowledge exploitation. Data warehousing provides an attractive method for solving these problems, but the process is very complicated. This study presents a novel strategy for effectively implementing a healthcare data warehouse. This study adopted the rapid prototyping (RP) method, which involves intensive interactions. System developers and users were closely linked throughout the life cycle of the system development. The presence of iterative RP loops meant that the system requirements were increasingly integrated and problems were gradually solved, such that the prototype system evolved into the final operational system. The results were analyzed by monitoring the series of iterative RP loops. First a definite workflow for ensuring data completeness was established, taking a patient-oriented viewpoint when collecting the data. Subsequently the system architecture was determined for data retrieval, storage, and manipulation. This architecture also clarifies the relationships among the novel system and legacy systems. Finally, a graphic user interface for data presentation was implemented. Our results clearly demonstrate the potential for adopting an RP strategy in the successful establishment of a healthcare data warehouse. The strategy can be modified and expanded to provide new services or support new application domains. The design patterns and modular architecture used in the framework will be useful in solving problems in different healthcare domains.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
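As a toy illustration of the inversion pattern described above, and not of the PDE-based probability-density method itself, the sketch below tunes the two rates of a closed/open Markov channel so that model dwell-time statistics match statistics computed from pseudo-experimental data; all rate values are made up:

```python
# Toy sketch of rate inversion by cost-function minimization for a two-state
# (closed/open) Markov channel.  Only the "tune rates to match data" idea is
# shown; the paper's PDE-based approach is not reproduced.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
k_co_true, k_oc_true = 20.0, 80.0            # true opening / closing rates (1/s)

# Pseudo-experimental data: exponentially distributed dwell times per state.
open_dwells = rng.exponential(1.0 / k_oc_true, 5000)
closed_dwells = rng.exponential(1.0 / k_co_true, 5000)
data_stats = np.array([open_dwells.mean(), closed_dwells.mean()])

def cost(log_rates):
    k_co, k_oc = np.exp(log_rates)           # work in log space to keep rates > 0
    model_stats = np.array([1.0 / k_oc, 1.0 / k_co])   # model mean dwell times
    return np.sum((model_stats - data_stats) ** 2)

res = minimize(cost, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("recovered rates (k_co, k_oc):", np.exp(res.x))   # close to (20, 80)
```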
ERIC Educational Resources Information Center
Mahan, Luther A.
1970-01-01
Compares the effects of two problem-solving teaching approaches. Lower ability students in an activity group demonstrated superior growth in basic science understanding, problem-solving skills, science interests, personal adjustment, and school attitudes. Neither method favored cognitive learning by higher ability students. (PR)
Probabilistic Fatigue Damage Program (FATIG)
NASA Technical Reports Server (NTRS)
Michalopoulos, Constantine
2012-01-01
FATIG computes fatigue damage/fatigue life using the stress rms (root mean square) value, the total number of cycles, and S-N curve parameters. The damage is computed by the following methods: (a) traditional method using Miner s rule with stress cycles determined from a Rayleigh distribution up to 3*sigma; and (b) classical fatigue damage formula involving the Gamma function, which is derived from the integral version of Miner's rule. The integration is carried out over all stress amplitudes. This software solves the problem of probabilistic fatigue damage using the integral form of the Palmgren-Miner rule. The software computes fatigue life using an approach involving all stress amplitudes, up to N*sigma, as specified by the user. It can be used in the design of structural components subjected to random dynamic loading, or by any stress analyst with minimal training for fatigue life estimates of structural components.
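As a hedged numerical illustration of the two estimates described above (the S-N constants, rms stress, and cycle count below are made up, and an S-N curve of the form N(S) = A·S^(-m) is assumed), the sketch compares a Rayleigh-amplitude Miner sum truncated at 3*sigma with the closed-form expression involving the Gamma function:

```python
# Sketch of the two damage estimates: (a) Miner's rule summed over
# Rayleigh-distributed stress amplitudes up to 3*sigma, and (b) the closed-form
# result from the integral version of Miner's rule.  Parameter values are
# illustrative only; S-N curve assumed as N(S) = A * S**(-m).
import math
import numpy as np

sigma, n_total = 50.0, 1e6          # stress rms [MPa] and total cycle count
A, m = 1e12, 3.0                    # hypothetical S-N curve constants

# (a) Miner's rule with a Rayleigh amplitude distribution truncated at 3*sigma.
S = np.linspace(1e-3, 3 * sigma, 2000)
pdf = (S / sigma**2) * np.exp(-S**2 / (2 * sigma**2))   # Rayleigh pdf
damage_sum = n_total / A * np.sum(pdf * S**m) * (S[1] - S[0])

# (b) Closed form over all amplitudes:
#     D = (n_total / A) * (sqrt(2)*sigma)**m * Gamma(1 + m/2)
damage_int = (n_total / A) * (math.sqrt(2) * sigma) ** m * math.gamma(1 + m / 2)

print(f"damage (Rayleigh sum to 3*sigma): {damage_sum:.4e}")
print(f"damage (closed form, all amps):   {damage_int:.4e}")
```

The truncated sum is slightly smaller than the closed-form value because it omits the rare cycles above 3*sigma, mirroring the difference between the two methods described above.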
Market-Based Approaches to Managing Science Return from Planetary Missions
NASA Technical Reports Server (NTRS)
Wessen, Randii R.; Porter, David; Hanson, Robin
1996-01-01
A research plan is described for the design and testing of a method for the planning and negotiation of science observations. The research plan is presented in relation to the fact that the current method, which involves a hierarchical process of science working groups, is unsuitable for the planning of the Cassini mission. The research plan involves a market-based approach in which participants are allocated budgets of scheduling points. The points are used to express an intensity of preference for the observations being scheduled. In this way, the schedulers do not have to limit themselves to solving major conflicts, but instead try to maximize the number of scheduling points that result in a conflict-free timeline. The fixed budget gives participants an incentive to weigh their tradeoff decisions carefully. A degree of feedback is provided in the process so that the schedulers may rebid based on the current timeline.
Design and testing of focusing magnets for a compact electron linac
NASA Astrophysics Data System (ADS)
Chen, Qushan; Qin, Bin; Liu, Kaifeng; Liu, Xu; Fu, Qiang; Tan, Ping; Hu, Tongning; Pei, Yuanji
2015-10-01
Solenoid field errors have a great influence on electron beam quality. In this paper, the design and testing of high-precision solenoids for a compact electron linac are presented. We propose an efficient and practical method, based on the reduced envelope equation, for determining the peak solenoid field required for relativistic electron beams. Beam dynamics simulations including the space-charge force were performed to predict the focusing effects. Detailed optimization methods, supported by the POISSON and OPERA packages, were introduced to achieve an ultra-compact configuration as well as high accuracy. Efforts were made to restrain systematic errors in the off-line testing, which showed that the short lens and the main solenoid produce peak fields of 0.13 T and 0.21 T, respectively. Data analysis on and off the axis was carried out and demonstrated that the test results fit well with the design.
A genuinely discontinuous approach for multiphase EHD problems
NASA Astrophysics Data System (ADS)
Natarajan, Mahesh; Desjardins, Olivier
2017-11-01
Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric field potential. For multiphase flows, although the electric potential is a continuous quantity, the discontinuity in the electric permittivity between the phases means that additional jump conditions on the normal and tangential components of the electric field must be satisfied at the interface. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach for the Poisson equation for multiphase flows using a finite-volume unsplit volume-of-fluid method. The governing equation and the jump conditions are used without assumptions to develop the method, and its efficiency is demonstrated by comparing the numerical results with canonical test problems having exact solutions.
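This is not the authors' unsplit volume-of-fluid scheme; the sketch below only illustrates, in one dimension, the textbook finite-volume treatment of a permittivity jump, where harmonic averaging of the permittivity at cell faces keeps the normal displacement eps*dphi/dx continuous across the interface. The geometry and permittivity values are invented.

```python
# 1D illustration: -d/dx( eps(x) dphi/dx ) = 0 on [0,1], phi(0)=0, phi(1)=1,
# with eps jumping at x = 0.5. Harmonic face averaging keeps eps*dphi/dx continuous.
import numpy as np

n = 100
x = (np.arange(n) + 0.5) / n                  # cell centres
h = 1.0 / n
eps = np.where(x < 0.5, 1.0, 5.0)             # discontinuous permittivity (invented)

# Harmonic mean of eps at interior faces
eps_face = 2.0 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:])

A = np.zeros((n, n))
rhs = np.zeros(n)
for i in range(n):
    wl = eps_face[i - 1] if i > 0 else 2.0 * eps[0]    # half-cell to the left wall
    wr = eps_face[i] if i < n - 1 else 2.0 * eps[-1]   # half-cell to the right wall
    A[i, i] = wl + wr
    if i > 0:
        A[i, i - 1] = -wl
    if i < n - 1:
        A[i, i + 1] = -wr
rhs[-1] = 2.0 * eps[-1] * 1.0                          # Dirichlet value phi(1) = 1

phi = np.linalg.solve(A, rhs)
# The flux eps*dphi/dx should match on both sides of the interface
print("flux left/right of jump:",
      eps[0] * (phi[1] - phi[0]) / h, eps[-1] * (phi[-1] - phi[-2]) / h)
```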
Solving the water jugs problem by an integer sequence approach
NASA Astrophysics Data System (ADS)
Man, Yiu-Kwong
2012-01-01
In this article, we present an integer sequence approach to solve the classic water jugs problem. The solution steps can be obtained easily by additions and subtractions only, which is suitable for manual calculation or programming by computer. This approach can be introduced to secondary and undergraduate students, and also to teachers and lecturers involved in teaching mathematical problem solving, recreational mathematics, or elementary number theory.
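As a concrete illustration of the additions-and-subtractions idea, the sketch below measures a target volume with two jugs using only fill, pour, and empty operations; it follows the generic repetitive-pouring procedure rather than the article's specific integer-sequence bookkeeping.

```python
# Sketch: measure `target` litres with jugs of capacity a and b using only
# additions and subtractions (fill A, pour A into B, empty B). Generic
# procedure, not necessarily the bookkeeping used in the article.
from math import gcd

def water_jugs(a, b, target):
    if target % gcd(a, b) != 0 or target > max(a, b):
        return None                          # unreachable target
    steps, jug_a, jug_b = [], 0, 0
    while jug_a != target and jug_b != target:
        if jug_a == 0:
            jug_a = a
            steps.append(f"fill A    -> ({jug_a},{jug_b})")
        elif jug_b == b:
            jug_b = 0
            steps.append(f"empty B   -> ({jug_a},{jug_b})")
        else:
            move = min(jug_a, b - jug_b)     # pour A into B
            jug_a -= move
            jug_b += move
            steps.append(f"pour A->B -> ({jug_a},{jug_b})")
    return steps

for step in water_jugs(3, 5, 4):             # classic 3- and 5-litre jugs, target 4
    print(step)
```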
NASA Astrophysics Data System (ADS)
Santa Vélez, Camilo; Enea Romano, Antonio
2018-05-01
Static coordinates can be convenient to solve the vacuum Einstein's equations in presence of spherical symmetry, but for cosmological applications comoving coordinates are more suitable to describe an expanding Universe, especially in the framework of cosmological perturbation theory (CPT). Using CPT we develop a method to transform static spherically symmetric (SSS) modifications of the de Sitter solution from static coordinates to the Newton gauge. We test the method with the Schwarzschild de Sitter (SDS) metric and then derive general expressions for the Bardeen's potentials for a class of SSS metrics obtained by adding to the de Sitter metric a term linear in the mass and proportional to a general function of the radius. Using the gauge invariance of the Bardeen's potentials we then obtain a gauge invariant definition of the turn around radius. We apply the method to an SSS solution of the Brans-Dicke theory, confirming the results obtained independently by solving the perturbation equations in the Newton gauge. The Bardeen's potentials are then derived for new SSS metrics involving logarithmic, power law and exponential modifications of the de Sitter metric. We also apply the method to SSS metrics which give flat rotation curves, computing the radial energy density profile in comoving coordinates in presence of a cosmological constant.
A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection
NASA Technical Reports Server (NTRS)
Samtaney, Ravi
1999-01-01
An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
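The core iteration can be sketched generically: an overdetermined nonlinear system is solved in the least-squares sense by repeatedly applying the pseudo-inverse of the Jacobian, computed by singular value decomposition. The residual function below is an invented toy; none of the camera-model specifics are reproduced.

```python
# Generic Newton-Raphson / least-squares iteration x_{k+1} = x_k - pinv(J(x_k)) @ r(x_k),
# with the pseudo-inverse obtained by SVD (np.linalg.pinv). Toy residuals only.
import numpy as np

def residual(x):
    # Overdetermined toy system: 3 equations, 2 unknowns (exact solution x = (1, 2))
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0,
                     x[0] * x[1] - 2.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]],
                     [x[1], x[0]]])

x = np.array([1.0, 1.0])                     # initial guess (plays the role of the DLT guess)
for _ in range(20):
    step = np.linalg.pinv(jacobian(x)) @ residual(x)   # least-squares Newton step via SVD
    x = x - step
    if np.linalg.norm(step) < 1e-12:
        break
print("solution:", x, "residual norm:", np.linalg.norm(residual(x)))
```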
Multistage Spectral Relaxation Method for Solving the Hyperchaotic Complex Systems
Saberi Nik, Hassan; Rebelo, Paulo
2014-01-01
We present a pseudospectral method application for solving hyperchaotic complex systems. The proposed method, called the multistage spectral relaxation method (MSRM), is based on a technique of extending Gauss-Seidel type relaxation ideas to systems of nonlinear differential equations and using Chebyshev pseudospectral methods to solve the resulting system on a sequence of multiple intervals. In this new application, the MSRM is used to solve well-known hyperchaotic complex systems such as the hyperchaotic complex Lorenz system and the complex permanent magnet synchronous motor. We compare this approach to the Runge-Kutta based ode45 solver to show that the MSRM gives accurate results. PMID:25386624
Taking a Common-Sense Approach to Moral Education.
ERIC Educational Resources Information Center
Myers, R. E.
2001-01-01
Outlines how one veteran high school teacher wrote up an everyday moral dilemma (obliquely involving drug trafficking) for his students to discuss and solve. Notes problem-solving steps and questions, and how the students worked their way to a solution through discussion. (SR)
7 CFR 4285.70 - Evaluation criteria.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Adequacy, soundness, and appropriateness of the proposed approach to solve the identified problem. (30%) (3) Feasibility and probability of success of project solving the problem. (10%) (4) Qualifications, experience in... proposal demonstrates the following: (1) Focus on a practical solution to a significant problem involving...
Crime Solving Techniques: Training Bulletin.
ERIC Educational Resources Information Center
Sands, Jack M.
The document is a training bulletin for criminal investigators, explaining the use of probability, logic, lateral thinking, group problem solving, and psychological profiles as methods of solving crimes. One chapter of several pages is devoted to each of the five methods. The use of each method is explained; problems are presented for the user to…
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.
Heideman, Paul D; Flores, K Adryan; Sevier, Lu M; Trouton, Kelsey E
2017-01-01
Drawing by learners can be an effective way to develop memory and generate visual models for higher-order skills in biology, but students are often reluctant to adopt drawing as a study method. We designed a nonclassroom intervention that instructed introductory biology college students in a drawing method, minute sketches in folded lists (MSFL), and allowed them to self-assess their recall and problem solving, first in a simple recall task involving non-European alphabets and later using unfamiliar biology content. In two preliminary ex situ experiments, students had greater recall on the simple learning task, non-European alphabets with associated phonetic sounds, using MSFL in comparison with a preferred method, visual review (VR). In the intervention, students studying using MSFL and VR had ∼50-80% greater recall of content studied with MSFL and, in a subset of trials, better performance on problem-solving tasks on biology content. Eight months after beginning the intervention, participants had shifted self-reported use of drawing from 2% to 20% of study time. For a small subset of participants, MSFL had become a preferred study method, and 70% of participants reported continued use of MSFL. This brief, low-cost intervention resulted in enduring changes in study behavior. © 2017 P. D. Heideman et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of the traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with the existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from the conventional energy-group treatments. Due to the speed of this code, several iterations could be performed (in affordable computing efforts) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
Hpm of Estrogen Model on the Dynamics of Breast Cancer
NASA Astrophysics Data System (ADS)
Govindarajan, A.; Balamuralitharan, S.; Sundaresan, T.
2018-04-01
We develop a deterministic mathematical model describing the dynamics of breast cancer with an immune response. This is a population model comprising a normal cell class, tumor cells, immune cells, and estrogen. The effects of estrogen are incorporated in the model, and they show that the presence of excess estrogen increases the risk of developing breast cancer. Furthermore, an approximate solution of the nonlinear differential equations is obtained by the Homotopy Perturbation Method (HPM). He's HPM is an effective and accurate technique for solving nonlinear differential equations directly. The approximate solution obtained with this method agrees well with the actual behavior of the model.
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered, involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also efficient. The proposed method is applied to some impact problems for which solutions are available, and they are found to be in good agreement. The effect of the magnitude of time increment on the results is also discussed.
Speech-Message Extraction from Interference Introduced by External Distributed Sources
NASA Astrophysics Data System (ADS)
Kanakov, V. A.; Mironov, N. A.
2017-08-01
The problem of this study involves the extraction of a speech signal originating from a certain spatial point and calculation of the intelligibility of the extracted voice message. It is solved by the method of decreasing the influence of interference from the speech-message sources on the extracted signal. This method is based on introducing the time delays, which depend on the spatial coordinates, to the recording channels. Audio records of the voices of eight different people were used as test objects during the studies. It is proved that an increase in the number of microphones improves intelligibility of the speech message which is extracted from interference.
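A minimal delay-and-sum sketch of the underlying idea: each recording channel is advanced by the propagation delay from the chosen spatial point to that microphone before averaging, so the target talker adds coherently while interfering sources do not. The microphone geometry, sample rate, and the `multichannel_recording` array in the usage comment are invented placeholders.

```python
# Hedged delay-and-sum sketch: advance each channel by the propagation delay
# from the chosen point to that microphone, then average. Geometry, sample
# rate, and channel count are invented for illustration.
import numpy as np

fs, c = 16000.0, 343.0                               # sample rate (Hz), speed of sound (m/s)
mics = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0], [0.6, 0.0]])   # mic positions (m)
point = np.array([1.0, 2.0])                         # spatial point to "listen to"

def extract(recordings, mics, point):
    """recordings: array of shape (n_mics, n_samples), one row per channel."""
    dists = np.linalg.norm(mics - point, axis=1)
    delays = np.round((dists - dists.min()) / c * fs).astype(int)    # in samples
    n = recordings.shape[1] - delays.max()
    aligned = np.stack([ch[d:d + n] for ch, d in zip(recordings, delays)])
    return aligned.mean(axis=0)      # target adds coherently, interference does not

# Usage (with a hypothetical multichannel recording):
#   enhanced = extract(multichannel_recording, mics, point)
```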
Processes in construction of failure management expert systems from device design information
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Lance, Nick
1987-01-01
This paper analyzes the tasks and problem solving methods used by an engineer in constructing a failure management expert system from design information about the device to be diagnosed. An expert test engineer developed a trouble-shooting expert system based on device design information and experience with similar devices, rather than on specific expert knowledge gained from operating the device or troubleshooting its failures. The construction of the expert system was intensively observed and analyzed. This paper characterizes the knowledge, tasks, methods, and design decisions involved in constructing this type of expert system, and makes recommendations concerning tools for aiding and automating construction of such systems.
NASA Astrophysics Data System (ADS)
Fikri, Fariz Fahmi; Nuraini, Nuning
2018-03-01
Differential equations form a branch of mathematics that is closely related to problems in human life. Some problems that occur in our lives can be modeled as differential equations or systems of differential equations, such as the Lotka-Volterra model and the SIR model. Therefore, solving differential equation problems is very important. Some differential equations are difficult to solve analytically, so numerical methods are needed. Numerical methods that have been widely used for solving differential equations include the Euler method, the Heun method, Runge-Kutta methods, and others. However, some of these methods still have restrictions that prevent them from being applied to more complex problems, such as an evaluation interval that cannot be changed freely. New methods are needed to overcome these limitations. One method that can be used is the artificial bee colony algorithm. This algorithm is a metaheuristic that can escape local regions of the search space and explore the solution space, and so may obtain better solutions than other methods.
NASA Astrophysics Data System (ADS)
Khataybeh, S. N.; Hashim, I.
2018-04-01
In this paper, we propose for the first time a method based on Bernstein polynomials for solving directly a class of third-order ordinary differential equations (ODEs). This method gives a numerical solution by converting the equation into a system of algebraic equations which is solved directly. Some numerical examples are given to show the applicability of the method.
ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations
NASA Astrophysics Data System (ADS)
Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil
2018-04-01
In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this type of equation. The results are compared with the existing exact solutions. Thus, the Adomian decomposition method can be a good alternative method for solving linear second-order Fredholm integro-differential equations. It converges to the exact solution quickly and at the same time reduces the computational work needed to solve the equation. The results obtained by the ADM show its ability and efficiency for solving these equations.
A simple level set method for solving Stefan problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S.; Merriman, B.; Osher, S.
1997-07-15
Discussed in this paper is an implicit finite difference scheme for solving a heat equation and a simple level set method for capturing the interface between solid and liquid phases which are used to solve Stefan problems.
NASA Astrophysics Data System (ADS)
Hayat, A. Z.; Wahyu, W.; Kurnia
2018-05-01
This study aims to find out the improvement in the cognitive ability of students following the implementation of a cooperative peer-tutoring learning model using a problem-solving approach. The research method used is a mixed-method sequential explanatory strategy with a pretest-posttest non-equivalent control group design. The participants involved in this study were 68 grade 10 students of a vocational high school in Bandung, consisting of 34 students in the experimental class and 34 in the control class. The instruments used include a written test and questionnaires. The improvement in the cognitive ability of students was calculated using the N-gain formula. Differences between the two average scores were tested using a t-test at a significance level of α = 0.05. The results show that the improvement of cognitive ability in the experimental class was significantly different from the improvement in the control class at a significance level of α = 0.05. The improvement of cognitive ability in the experimental class is higher than in the control class.
Imaging model for the scintillator and its application to digital radiography image enhancement.
Wang, Qian; Zhu, Yining; Li, Hongwei
2015-12-28
Digital radiography (DR) images obtained by an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e. higher contrast and spatial resolution are obtained. By analyzing the radiative transfer of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. Moreover, the blurring effect involved is also considered and described by a point spread function (PSF). Based on these physical processes, the scintillator imaging model is established. When solving the inverse problem, pre-correction of the x-ray intensity, a dark-channel-prior-based haze removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach improves the contrast of DR images dramatically and effectively removes the blurring. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
Mesh refinement strategy for optimal control problems
NASA Astrophysics Data System (ADS)
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement. In this case, the mesh nodes have non-equidistant spacing, which allows a non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve a car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using a time mesh having equidistant spacing.
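A schematic of such a refinement loop, not the authors' exact algorithm: solve on the current mesh, estimate a local error per subinterval, bisect the offending subintervals, and repeat until the largest local error drops below a user threshold. The stand-in "solve" here is a plain ODE integration with a step-doubling error estimate instead of the optimal-control machinery.

```python
# Schematic local-error-driven time-mesh refinement. The ODE, error estimator
# and tolerance are placeholders for the optimal-control problem of the paper.
import numpy as np

def f(t, y):                                   # stand-in dynamics
    return -50.0 * (y - np.cos(t))

def rk4_step(t, y, h):
    k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

def local_errors(mesh, y0):
    """Per-interval error estimate: one full step versus two half steps."""
    errs, y = [], y0
    for t0, t1 in zip(mesh[:-1], mesh[1:]):
        h = t1 - t0
        y_full = rk4_step(t0, y, h)
        y_half = rk4_step(t0 + h/2, rk4_step(t0, y, h/2), h/2)
        errs.append(abs(y_full - y_half))
        y = y_half
    return np.array(errs)

mesh, y0, tol = np.linspace(0.0, 1.0, 11), 1.0, 1e-8
for it in range(20):
    errs = local_errors(mesh, y0)
    if errs.max() < tol:
        break
    # bisect every subinterval whose local error exceeds the threshold
    new_nodes = [0.5*(a + b) for a, b, e in zip(mesh[:-1], mesh[1:], errs) if e >= tol]
    mesh = np.sort(np.concatenate([mesh, new_nodes]))
print(f"{it} refinement passes, {mesh.size} nodes, max local error {errs.max():.2e}")
```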
NASA Astrophysics Data System (ADS)
Tisdell, Christopher C.
2018-07-01
This paper is based on the presumption that teaching multiple ways to solve the same problem has academic and social value. In particular, we argue that such a multifaceted approach to pedagogy moves towards an environment of more inclusive and personalized learning. From a mathematics education perspective, our discussion is framed around pedagogical approaches to triple integrals seen in a standard multivariable calculus curriculum. We present some critical perspectives regarding the dominant and long-standing approach to the teaching of triple integrals currently seen in hegemonic calculus textbooks; and we illustrate the need for more diverse pedagogical methods. Finally, we take a constructive position by introducing a new and alternate pedagogical approach to solve some of the classical problems involving triple integrals from the literature through a simple application of integration by parts. This pedagogical alternative for triple integrals is designed to question the dominant one-size-fits-all approach of rearranging the order of integration and the privileging of graphical methods; and to enable a shift towards a more inclusive, enhanced and personalized learning experience.
Benchmarking algorithms for the solution of Collisional Radiative Model (CRM) equations.
NASA Astrophysics Data System (ADS)
Klapisch, Marcel; Busquet, Michel
2007-11-01
Elements used in ICF target designs can have many charge states in the same plasma conditions, each charge state having numerous energy levels. When LTE conditions are not met, one has to solve CRM equations for the populations of energy levels, which are necessary for opacities/emissivities, Z* etc. In case of sparse spectra, or when configuration interaction is important (open d or f shells), statistical methods[1] are insufficient. For these cases one must resort to a detailed level CRM rate generator[2]. The equations to be solved may involve tens of thousands of levels. The system is by nature ill conditioned. We show that some classical methods do not converge. Improvements of the latter will be compared with new algorithms[3] with respect to performance, robustness, and accuracy. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Q. S. R. T.,65, 43 (2000). [2] M Klapisch, M Busquet and A. Bar-Shalom, Proceedings of APIP'07, AIP series (to be published). [3] M Klapisch and M Busquet, High Ener. Density Phys. 3,143 (2007)
3D brain tumor localization and parameter estimation using thermographic approach on GPU.
Bousselham, Abdelmajid; Bouattane, Omar; Youssfi, Mohamed; Raihani, Abdelhadi
2018-01-01
The aim of this paper is to present a GPU parallel algorithm for brain tumor detection that estimates tumor size and location from the surface temperature distribution obtained by thermography. The normal brain tissue is modeled as a rectangular cube containing a spherical tumor. The temperature distribution is calculated using the forward three-dimensional Pennes bioheat transfer equation, which is solved using a massively parallel Finite Difference Method (FDM) implemented on a Graphics Processing Unit (GPU). A Genetic Algorithm (GA) was used to solve the inverse problem and estimate the tumor size and location by minimizing an objective function comparing temperatures measured on the surface to those obtained by numerical simulation. The parallel implementation of the Finite Difference Method significantly reduces the time of the bioheat transfer computation and greatly accelerates the inverse identification of the brain tumor's thermophysical and geometrical properties. Experimental results show significant gains in computational speed on the GPU, achieving a speedup of around 41 compared to the CPU. A performance analysis of the estimation as a function of tumor size inside the brain tissue is also presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
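As a much-reduced illustration of the forward model only (one dimension instead of three, CPU instead of GPU, and no genetic-algorithm search), the sketch below time-steps the Pennes bioheat equation with an explicit finite-difference scheme; all tissue properties and the extra heat source standing in for a tumor are illustrative values, not those of the paper.

```python
# 1D explicit finite-difference sketch of the Pennes bioheat equation:
#   rho*c * dT/dt = k * d2T/dx2 + w_b*rho_b*c_b * (T_a - T) + Q
# Property values are typical-looking placeholders, not the paper's parameters.
import numpy as np

L, nx = 0.05, 101                          # 5 cm of tissue, grid points
dx = L / (nx - 1)
k, rho, c = 0.5, 1050.0, 3600.0            # W/m/K, kg/m^3, J/kg/K
w_b, rho_b, c_b = 0.0005, 1050.0, 3600.0   # blood perfusion rate and blood properties
T_a = 37.0                                 # arterial temperature, deg C

Q = np.full(nx, 400.0)                     # metabolic heat, W/m^3
Q[45:55] += 20000.0                        # extra source standing in for a "tumor"

T = np.full(nx, 37.0)
dt = 0.2 * rho * c * dx**2 / k             # well below the explicit stability limit
for _ in range(20000):
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt / (rho * c) * (k * lap
                                 + w_b * rho_b * c_b * (T_a - T[1:-1]) + Q[1:-1])
    T[0], T[-1] = 37.0, 33.0               # core and skin-surface boundary values (invented)

print("peak temperature: %.2f C at x = %.1f mm" % (T.max(), T.argmax() * dx * 1000))
```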
Explicit formulation of second and third order optical nonlinearity in the FDTD framework
NASA Astrophysics Data System (ADS)
Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas
2018-01-01
The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software requires solving an implicit form of Maxwell's equations iteratively over the entire numerical space at each time step. Reaching numerical convergence demands significant computational resources, and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is equally provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.
Announcing a Hydrogeology Journal theme issue on "The future of hydrogeology"
Voss, Clifford I.
2003-01-01
What is the future of hydrogeology? Are most of the fundamental scientific problems in hydrogeology already solved? Is there really any need for more fundamental research, field measurements, or method development? Have recent scientific advances really added capabilities and tools for our practical needs? Are there any unsolved hydrogeologic questions still remaining that are vital to our optimal use and management of subsurface resources or does the remaining work only fill in some details to a story essentially already told? Will the science of hydrogeology soon become primarily an applied field, where the main task is to use known methods to solve practical problems of water supply and water quality? For other questions involving subsurface fluids, for example, waste isolation, understanding of geological processes and climate changes, are current hydrogeologic capabilities sufficient and is there any possibility for improvement? These are the types of questions that will be dealt with by an upcoming theme issue of Hydrogeology Journal (HJ) to appear in early 2005 [HJ 13(1)]. This issue will contain 10–20 peer-reviewed invited articles on both general topics and specific subject areas of hydrogeology.
Bell, Kathleen R; Brockway, Jo Ann; Fann, Jesse R; Cole, Wesley R; St De Lore, Jef; Bush, Nigel; Lang, Ariel J; Hart, Tessa; Warren, Michael; Dikmen, Sureyya; Temkin, Nancy; Jain, Sonia; Raman, Rema; Stein, Murray B
2015-01-01
Military service members (SMs) and veterans who sustain mild traumatic brain injuries (mTBI) during combat deployments often have co-morbid conditions but are reluctant to seek out therapy in medical or mental health settings. Efficacious methods of intervention that are patient-centered and adaptable to a mobile and often difficult-to-reach population would be useful in improving quality of life. This article describes a new protocol developed as part of a randomized clinical trial of a telephone-mediated program for SMs with mTBI. The 12-session program combines problem solving training (PST) with embedded modules targeting depression, anxiety, insomnia, and headache. The rationale and development of this behavioral intervention for implementation with persons with multiple co-morbidities is described along with the proposed analysis of results. In particular, we provide details regarding the creation of a treatment that is manualized yet flexible enough to address a wide variety of problems and symptoms within a standard framework. The methods involved in enrolling and retaining an often hard-to-study population are also highlighted. Copyright © 2014 Elsevier Inc. All rights reserved.
Using Programmable Calculators to Solve Electrostatics Problems.
ERIC Educational Resources Information Center
Yerian, Stephen C.; Denker, Dennis A.
1985-01-01
Provides a simple routine which allows first-year physics students to use programmable calculators to solve otherwise complex electrostatic problems. These problems involve finding electrostatic potential and electric field on the axis of a uniformly charged ring. Modest programing skills are required of students. (DH)
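For reference, the on-axis expressions such a routine evaluates are V(z) = kQ/sqrt(z^2 + R^2) and E_z(z) = kQz/(z^2 + R^2)^(3/2); a few-line modern equivalent might look like the following, with arbitrary example values for the ring charge and radius.

```python
# On-axis potential and field of a uniformly charged ring (example values).
import numpy as np

k = 8.9875517923e9        # Coulomb constant, N*m^2/C^2
Q, R = 1e-9, 0.05         # ring charge (C) and radius (m), arbitrary examples

z = np.linspace(0.0, 0.3, 7)
V = k * Q / np.sqrt(z**2 + R**2)
E = k * Q * z / (z**2 + R**2) ** 1.5
for zi, Vi, Ei in zip(z, V, E):
    print(f"z = {zi:.2f} m   V = {Vi:7.2f} V   E = {Ei:7.2f} V/m")
```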
Medical Problem-Solving: A Critique of the Literature.
ERIC Educational Resources Information Center
McGuire, Christine H.
1985-01-01
Prescriptive, decision-analysis of medical problem-solving has been based on decision theory that involves calculation and manipulation of complex probability and utility values to arrive at optimal decisions that will maximize patient benefits. The studies offer a methodology for improving clinical judgment. (Author/MLW)
Improving the learning of clinical reasoning through computer-based cognitive representation.
Wu, Bian; Wang, Minhong; Johnson, Janice M; Grotzer, Tina A
2014-01-01
Objective Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Methods Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. Results A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' report of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores with the 4-week period. The cognitive representation approach was found to provide more formative assessment. Conclusions The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.
Processes involved in solving mathematical problems
NASA Astrophysics Data System (ADS)
Shahrill, Masitah; Putri, Ratu Ilma Indra; Zulkardi, Prahmana, Rully Charitas Indra
2018-04-01
This study examines one of the instructional practices features utilized within the Year 8 mathematics lessons in Brunei Darussalam. The codes from the TIMSS 1999 Video Study were applied and strictly followed, and from the 183 mathematics problems recorded, there were 95 problems with a solution presented during the public segments of the video-recorded lesson sequences of the four sampled teachers. The analyses involved firstly, identifying the processes related to mathematical problem statements, and secondly, examining the different processes used in solving the mathematical problems for each problem publicly completed during the lessons. The findings revealed that for three of the teachers, their problem statements coded as `using procedures' ranged from 64% to 83%, while the remaining teacher had 40% of his problem statements coded as `making connections.' The processes used when solving the problems were mainly `using procedures', and none of the problems were coded as `giving results only'. Furthermore, all four teachers made use of making the relevant connections in solving the problems given to their respective students.
Azad, Gazi F.; Kim, Mina; Marcus, Steven C.; Mandell, David S.; Sheridan, Susan M.
2016-01-01
Effective parent-teacher communication involves problem-solving concerns about students. Few studies have examined problem solving interactions between parents and teachers of children with autism spectrum disorder (ASD), with a particular focus on identifying communication barriers and strategies for improving them. This study examined the problem-solving behaviors of parents and teachers of children with ASD. Participants included 18 teachers and 39 parents of children with ASD. Parent-teacher dyads were prompted to discuss and provide a solution for a problem that a student experienced at home and at school. Parents and teachers also reported on their problem-solving behaviors. Results showed that parents and teachers displayed limited use of the core elements of problem-solving. Teachers displayed more problem-solving behaviors than parents. Both groups reported engaging in more problem-solving behaviors than they were observed to display during their discussions. Our findings suggest that teacher and parent training programs should include collaborative approaches to problem-solving. PMID:28392604
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration and linear and nonlinear transient dynamic problems involving two and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of the complexity of the formulation it has never been implemented in exact form. In the present work, linear and nonlinear time domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all the existing formulation of BEM in dynamics use the constant variation of the variables in space and time which is very unrealistic for engineering problems and, in some cases, it leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, such that not only problems of the layered media and the soil-structure interaction can be analyzed but also a large problem can be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeneity of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well-suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.
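To make the "decouple and sequentially solve" option concrete, here is a dense toy version of the Schur complement method for a saddle-point system [[A, B^T], [B, 0]]: eliminate the velocity block, solve the pressure Schur system, then back-substitute. Production Stokes solvers never form the Schur complement explicitly and instead apply it through inner solves; the random matrices below are purely illustrative.

```python
# Toy dense Schur-complement solve for a saddle-point system
#   [A  B^T] [u]   [f]
#   [B   0 ] [p] = [g]
# Everything is formed explicitly here on a small random SPD example.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # SPD "velocity" block
B = rng.standard_normal((m, n))          # discrete divergence operator
f, g = rng.standard_normal(n), rng.standard_normal(m)

S = B @ np.linalg.solve(A, B.T)          # pressure Schur complement (up to sign)
p = np.linalg.solve(S, B @ np.linalg.solve(A, f) - g)   # pressure solve
u = np.linalg.solve(A, f - B.T @ p)                     # velocity back-substitution

# Verify against the monolithic solve
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
x = np.linalg.solve(K, np.concatenate([f, g]))
print("max error vs monolithic:", np.max(np.abs(np.concatenate([u, p]) - x)))
```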
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
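The contrast can be seen on a toy two-dimensional integral: a plain Monte Carlo estimate with randomly distributed points versus a systematic point set that fills the unit square evenly. The rank-1 Fibonacci lattice below is a generic systematic construction used only to illustrate the idea, not Conroy's actual point distribution.

```python
# Toy comparison: random (Monte Carlo) versus systematically distributed sample
# points for a 2D integral over the unit square.
import numpy as np

def f(x, y):
    return np.sin(np.pi * x) * np.sin(np.pi * y)

exact = (2.0 / np.pi) ** 2                # exact value of the integral over [0,1]^2

N = 987                                   # number of sample points (a Fibonacci number)
rng = np.random.default_rng(0)

# Monte Carlo: randomly distributed points
xr, yr = rng.random(N), rng.random(N)
mc = f(xr, yr).mean()

# Systematic: rank-1 Fibonacci lattice with generator 610 (the previous Fibonacci number)
k = np.arange(N)
xl, yl = k / N, (610 * k / N) % 1.0
lattice = f(xl, yl).mean()

print(f"exact    {exact:.6f}")
print(f"MC       {mc:.6f}  (error {abs(mc - exact):.2e})")
print(f"lattice  {lattice:.6f}  (error {abs(lattice - exact):.2e})")
```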
Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.
Pillai, S; Singhvi, I
2008-09-01
Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves the formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on a two-wavelength calculation; the wavelengths selected for estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reverse phase chromatographic method using a Phenomenex C(18) column and acetonitrile:phosphate buffer (35:65 v/v) pH 7.0 as the mobile phase. All developed methods obey Beer's law in the concentration range employed for the respective methods. Results of analysis were validated statistically and by recovery studies.
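The "simultaneous equations" step amounts to solving a 2x2 linear system assembled from the absorptivities of the two drugs at the two wavelengths (Beer's law with additive absorbances). The numerical values below are invented placeholders, not the validated absorptivities of itopride or rabeprazole.

```python
# Simultaneous-equation (Vierordt) method for a two-component mixture:
#   A(265.2 nm) = a11*C1 + a12*C2
#   A(290.8 nm) = a21*C1 + a22*C2
# Absorptivity and absorbance values are invented placeholders.
import numpy as np

a = np.array([[0.045, 0.012],       # absorptivities of drug 1 and drug 2 at 265.2 nm
              [0.008, 0.051]])      # absorptivities of drug 1 and drug 2 at 290.8 nm
A_mix = np.array([0.62, 0.48])      # measured absorbances of the mixture

C1, C2 = np.linalg.solve(a, A_mix)  # concentrations, in the units of the calibration
print(f"drug 1: {C1:.2f}, drug 2: {C2:.2f} (concentration units of the calibration)")
```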
NASA Technical Reports Server (NTRS)
Twomey, S.; Herman, B.; Rabinoff, R.
1977-01-01
An extension of the Chahine relaxation method (1970) for inverting the radiative transfer equation is presented. This method is superior to the original method in that it takes into account in a realistic manner the shape of the kernel function, and its extension to nonlinear systems is much more straightforward. A comparison of the new method with a matrix method due to Twomey (1965), in a problem involving inference of vertical distribution of ozone from spectroscopic measurements in the near ultraviolet, indicates that in this situation this method is stable with errors in the input data up to 4%, whereas the matrix method breaks down at these levels. The problem of non-uniqueness of the solution, which is a property of the system of equations rather than of any particular algorithm for solving them, remains, although it takes on slightly different forms for the two algorithms.
Determination of criteria weights in solving multi-criteria problems
NASA Astrophysics Data System (ADS)
Kasim, Maznah Mat
2014-12-01
A multi-criteria (MC) problem comprises units to be analyzed under a set of evaluation criteria. Solving an MC problem is basically the process of finding the overall performance or overall quality of the units of analysis by using a certain aggregation method. Based on these overall measures of each unit, a decision can be made whether to sort the units, to select the best, or to group them according to certain ranges. Prior to solving MC problems, the weights of the related criteria have to be determined, with the assumption that the weights represent the degree of importance or the degree of contribution towards the overall performance of the units. This paper presents two main approaches, called the subjective and objective approaches, where the first involves evaluator(s) while the latter depends on the intrinsic information contained in each criterion. Subjective and objective weights are defined if the criteria are assumed to be independent of each other; but if they are dependent, there is another type of weight, called the monotone measure weight or compound weight, which represents the degree of interaction among the criteria. Individual weights or compound weights must be addressed in solving multi-criteria problems so that the solutions are more reliable, since in the real world evaluation criteria always come with different degrees of importance or are dependent on each other. As real MC problems have their own uniqueness, it is up to the decision maker(s) to decide which type of weights and which method are the most applicable for the problem under study.
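One widely used objective weighting scheme of the kind referred to above is Shannon-entropy weighting, which derives criterion weights purely from the spread of values in the decision matrix. The sketch below applies it to an invented matrix; the paper surveys such approaches but does not prescribe this particular formula.

```python
# Shannon-entropy objective weighting for a decision matrix (rows = units of
# analysis, columns = criteria). Matrix values are invented for illustration.
import numpy as np

X = np.array([[7.0, 120.0, 0.80],
              [5.0, 150.0, 0.60],
              [9.0, 110.0, 0.95],
              [6.0, 140.0, 0.70]])

m = X.shape[0]
P = X / X.sum(axis=0)                          # column-normalised proportions
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
divergence = 1.0 - entropy                     # criteria with more spread get more weight
weights = divergence / divergence.sum()
print("objective (entropy) weights:", np.round(weights, 3))
```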
NASA Astrophysics Data System (ADS)
Gulland, E.-K.; Veenendaal, B.; Schut, A. G. T.
2012-07-01
Problem-solving knowledge and skills are an important attribute of spatial sciences graduates. The challenge of higher education is to build a teaching and learning environment that enables students to acquire these skills in relevant and authentic applications. This study investigates the effectiveness of traditional face-to-face teaching and online learning technologies in supporting the student learning of problem-solving and computer programming skills, techniques and solutions. The student cohort considered for this study involves students in the surveying as well as geographic information science (GISc) disciplines. Also, students studying across a range of learning modes including on-campus, distance and blended, are considered in this study. Student feedback and past studies reveal a lack of student interest and engagement in problem solving and computer programming. Many students do not see such skills as directly relevant and applicable to their perceptions of what future spatial careers hold. A range of teaching and learning methods for both face-to-face teaching and distance learning were introduced to address some of the perceived weaknesses of the learning environment. These included initiating greater student interaction in lectures, modifying assessments to provide greater feedback and student accountability, and the provision of more interactive and engaging online learning resources. The paper presents and evaluates the teaching methods used to support the student learning environment. Responses of students in relation to their learning experiences were collected via two anonymous, online surveys and these results were analysed with respect to student pass and retention rates. The study found a clear distinction between expectations and engagement of surveying students in comparison to GISc students. A further outcome revealed that students who were already engaged in their learning benefited the most from the interactive learning resources and opportunities provided.
Development of a Prototype Lattice Boltzmann Code for CFD of Fusion Systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pattison, Martin J; Premnath, Kannan N; Banerjee, Sanjoy
2007-02-26
Designs of proposed fusion reactors, such as the ITER project, typically involve the use of liquid metals as coolants in components such as heat exchangers, which are generally subjected to strong magnetic fields. These fields induce electric currents in the fluids, resulting in magnetohydrodynamic (MHD) forces which have important effects on the flow. The objective of this SBIR project was to develop computational techniques based on recently developed lattice Boltzmann techniques for the simulation of these MHD flows and implement them in a computational fluid dynamics (CFD) code for the study of fluid flow systems encountered in fusion engineering. The code developed during this project solves the lattice Boltzmann equation, which is a kinetic equation whose behaviour represents fluid motion. This is in contrast to most CFD codes, which are based on finite difference/finite volume solvers. The lattice Boltzmann method (LBM) is a relatively new approach which has a number of advantages compared with more conventional methods such as the SIMPLE or projection method algorithms that involve direct solution of the Navier-Stokes equations. These are that the LBM is very well suited to parallel processing, with almost linear scaling even for very large numbers of processors. Unlike other methods, the LBM does not require solution of a Poisson pressure equation, leading to a relatively fast execution time. A particularly attractive property of the LBM is that it can handle flows in complex geometries very easily. It can use simple rectangular grids throughout the computational domain -- generation of a body-fitted grid is not required. A recent advance in the LBM is the introduction of the multiple relaxation time (MRT) model; the implementation of this model greatly enhanced the numerical stability when used in lieu of the single relaxation time model, with only a small increase in computer time. Parallel processing was implemented using MPI and demonstrated the ability of the LBM to scale almost linearly. The equation for magnetic induction was also solved using a lattice Boltzmann method. This approach has the advantage that it fits in well to the framework used for the hydrodynamic equations, but more importantly that it preserves the ability of the code to run efficiently on parallel architectures. Since the LBM is a relatively recent model, a number of new developments were needed to solve the magnetic induction equation for practical problems. Existing methods were only suitable for cases where the fluid viscosity and the magnetic resistivity are of the same order, and a preconditioning method was used to allow the simulation of liquid metals, where these properties differ by several orders of magnitude. An extension of this method to the hydrodynamic equations allowed faster convergence to steady state. A new method of imposing boundary conditions using an extrapolation technique was derived, enabling the magnetic field at a boundary to be specified. Also, a technique by which the grid can be stretched was formulated to resolve thin layers at high imposed magnetic fields, allowing flows with Hartmann numbers of several thousand to be quickly and efficiently simulated. In addition, a module has been developed to calculate the temperature field and heat transfer. This uses a total variation diminishing scheme to solve the equations and is again very amenable to parallelisation.
Although the module was developed with thermal modelling in mind, it can also be applied to passive scalar transport. The code is fully three dimensional and has been applied to a wide variety of cases, including both laminar and turbulent flows. Validations against a series of canonical problems involving both MHD effects and turbulence have clearly demonstrated the ability of the LBM to properly model these types of flow. As well as applications to fusion engineering, the resulting code is flexible enough to be applied to a wide range of other flows, in particular those requiring parallel computations with many processors. For example, at present it is being used for studies in aerodynamics and acoustics involving flows at high Reynolds numbers. It is anticipated that it will be used for multiphase flow applications in the near future.
Adomian decomposition method used to solve the one-dimensional acoustic equations
NASA Astrophysics Data System (ADS)
Dispini, Meta; Mungkasi, Sudi
2017-05-01
In this paper we propose the use of Adomian decomposition method to solve one-dimensional acoustic equations. This recursive method can be calculated easily and the result is an approximation of the exact solution. We use the Maple software to compute the series in the Adomian decomposition. We obtain that the Adomian decomposition method is able to solve the acoustic equations with the physically correct behavior.
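The paper applies the Adomian decomposition to the acoustic equations in Maple; as a hedged toy sketch of the recursion itself (a much simpler problem than the acoustic system, chosen so the exact solution is known), the following Python/SymPy snippet builds the Adomian series for u' = -u, u(0) = 1 and checks that the partial sum reproduces the Taylor series of exp(-t).

```python
import sympy as sp

t = sp.symbols('t')

# Toy problem: u'(t) = -u(t), u(0) = 1 (exact solution exp(-t)).
# Adomian recursion for this linear case: u0 = u(0), u_{n+1} = -Integral(u_n, 0..t).
terms = [sp.Integer(1)]
for n in range(8):
    terms.append(-sp.integrate(terms[-1], (t, 0, t)))

approx = sp.expand(sum(terms))
print(approx)  # truncated exponential series 1 - t + t**2/2 - ...
print(sp.simplify(approx - sp.series(sp.exp(-t), t, 0, 9).removeO()))  # -> 0
```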
Some fuzzy techniques for staff selection process: A survey
NASA Astrophysics Data System (ADS)
Md Saad, R.; Ahmad, M. Z.; Abu, M. S.; Jusoh, M. S.
2013-04-01
With a high level of business competition, it is vital to have flexible staff who are able to adapt themselves to work circumstances. However, the staff selection process is not an easy task to solve, even when it is tackled in a simplified version containing only a single criterion and a homogeneous skill. When multiple criteria and various skills are involved, the problem becomes much more complicated. In addition, some information cannot be measured precisely. This is patently obvious when dealing with opinions, thoughts, feelings, beliefs, etc. One possible tool for handling this issue is fuzzy set theory. Therefore, the objective of this paper is to review the existing fuzzy techniques for solving the staff selection problem. It classifies several existing research methods and identifies areas where there is a gap and further research is needed. Finally, this paper concludes by suggesting new ideas for future research based on the gaps identified.
Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles
NASA Astrophysics Data System (ADS)
Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi
2012-09-01
In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index, to be minimized, was defined as the cost function. The resulting problem was a two-point boundary value problem (TPBVP). Genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying the Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving some energy sources, was discussed. Some near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.
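None of the GA/PSO/ACO implementations are given in the abstract; as a hedged sketch of just one of them, here is a bare-bones particle swarm optimizer minimizing a stand-in cost function (a simple sphere function, not the paper's energy performance index or TPBVP). The swarm size, inertia and acceleration coefficients are illustrative values only.

```python
import numpy as np

def sphere(x):
    """Toy cost function standing in for the energy performance index."""
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(0)
n_particles, dim, iters = 30, 6, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = sphere(pbest)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # velocity update: inertia + attraction to personal and global bests
    vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best cost found:", pbest_val.min())
```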
NASA Technical Reports Server (NTRS)
Iyer, Venkit
1990-01-01
A solution method, fourth-order accurate in the body-normal direction and second-order accurate in the stream surface directions, to solve the compressible 3-D boundary layer equations is presented. The transformation used, the discretization details, and the solution procedure are described. Ten validation cases of varying complexity are presented and results of calculation given. The results range from subsonic flow to supersonic flow and involve 2-D or 3-D geometries. Applications to laminar flow past wing and fuselage-type bodies are discussed. An interface procedure is used to solve the surface Euler equations with the inviscid flow pressure field as the input to assure accurate boundary conditions at the boundary layer edge. Complete details of the computer program used and information necessary to run each of the test cases are given in the Appendix.
Exploiting Quantum Resonance to Solve Combinatorial Problems
NASA Technical Reports Server (NTRS)
Zak, Michail; Fijany, Amir
2006-01-01
Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
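As a small finite-dimensional illustration (not the report's Hilbert-space theory), the sketch below runs SciPy's trust-region Newton-CG method on the Rosenbrock function while deliberately corrupting the gradient with small random noise, mimicking inexact gradient evaluations. The noise level is an arbitrary example value; the solver may report reduced precision yet still approach the minimizer.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

rng = np.random.default_rng(1)

def noisy_grad(x, rel_err=1e-3):
    """Exact Rosenbrock gradient corrupted by small relative noise,
    mimicking an inexactly evaluated gradient."""
    g = rosen_der(x)
    return g + rel_err*np.linalg.norm(g)*rng.standard_normal(g.shape)

x0 = np.full(6, -1.2)
res = minimize(rosen, x0, jac=noisy_grad, hess=rosen_hess, method='trust-ncg')
print(res.x)    # close to the exact minimizer (all ones) despite the gradient noise
print(res.fun)
```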
Progress in protein crystallography.
Dauter, Zbigniew; Wlodawer, Alexander
2016-01-01
Macromolecular crystallography has evolved enormously from the pioneering days, when structures were solved by "wizards" performing all complicated procedures almost by hand. In the current situation, crystal structures of large systems can often be solved very effectively by various powerful automatic programs in days or hours, or even minutes. Such progress is to a large extent coupled to the advances in many other fields, such as genetic engineering, computer technology, availability of synchrotron beam lines and many other techniques, creating the highly interdisciplinary science of macromolecular crystallography. Due to this unprecedented success, crystallography is often treated as one of the analytical methods and practiced by researchers who are interested in the structures of macromolecules but not highly competent in the procedures involved in the process of structure determination. One should therefore take into account that the contemporary, highly automatic systems can produce results almost without human intervention, but the resulting structures must be carefully checked and validated before their release into the public domain.
NASA Technical Reports Server (NTRS)
Dulikravich, D. S.
1980-01-01
A computer program is presented which numerically solves an exact, full potential equation (FPE) for three dimensional, steady, inviscid flow through an isolated wind turbine rotor. The program automatically generates a three dimensional, boundary conforming grid and iteratively solves the FPE while fully accounting for both the rotating cascade and Coriolis effects. The numerical techniques incorporated involve rotated, type dependent finite differencing, a finite volume method, artificial viscosity in conservative form, and a successive line overrelaxation combined with the sequential grid refinement procedure to accelerate the iterative convergence rate. Consequently, the WIND program is capable of accurately analyzing incompressible and compressible flows, including those that are locally transonic and terminated by weak shocks. The program can also be used to analyze the flow around isolated aircraft propellers and helicopter rotors in hover as long as the total relative Mach number of the oncoming flow is subsonic.
Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (3).
Murase, Kenya
2016-01-01
In this issue, simultaneous differential equations were introduced. These differential equations are often used in the field of medical physics. The methods for solving them were also introduced, which include Laplace transform and matrix methods. Some examples were also introduced, in which Laplace transform and matrix methods were applied to solving simultaneous differential equations derived from a three-compartment kinetic model for analyzing the glucose metabolism in tissues and Bloch equations for describing the behavior of the macroscopic magnetization in magnetic resonance imaging. In the next (final) issue, partial differential equations and various methods for solving them will be introduced together with some examples in medical physics.
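As a hedged illustration of the matrix method mentioned above, the sketch below solves a hypothetical two-compartment linear kinetic system dx/dt = Ax with the matrix exponential; the rate constants are invented for the example and are not taken from the article.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-compartment model: dx/dt = A x, x = (plasma, tissue)
k12, k21, k_el = 0.4, 0.2, 0.1        # made-up rate constants (1/min)
A = np.array([[-(k12 + k_el), k21],
              [k12,           -k21]])
x0 = np.array([1.0, 0.0])             # unit dose placed in the plasma compartment

for t in (0.0, 5.0, 20.0, 60.0):      # minutes
    x = expm(A*t) @ x0                # x(t) = exp(At) x0, the "matrix method"
    print(f"t = {t:5.1f} min  plasma = {x[0]:.4f}  tissue = {x[1]:.4f}")
```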
After Being Challenged by a Video Game Problem, Sleep Increases the Chance to Solve It
Beijamini, Felipe; Pereira, Sofia Isabel Ribeiro; Cini, Felipe Augusto; Louzada, Fernando Mazzilli
2014-01-01
In the past years many studies have demonstrated the role of sleep on memory consolidation. It is known that sleeping after learning a declarative or non-declarative task, is better than remaining awake. Furthermore, there are reports of a possible role for dreams in consolidation of declarative memories. Other studies have reported the effect of naps on memory consolidation. With similar protocols, another set of studies indicated that sleep has a role in creativity and problem-solving. Here we hypothesised that sleep can increase the likelihood of solving problems. After struggling to solve a video game problem, subjects who took a nap (n = 14) were almost twice as likely to solve it when compared to the wake control group (n = 15). It is interesting to note that, in the nap group 9 out 14 subjects engaged in slow-wave sleep (SWS) and all solved the problem. Surprisingly, we did not find a significant involvement of Rapid Eye Movement (REM) sleep in this task. Slow-wave sleep is believed to be crucial for the transfer of memory-related information to the neocortex and implement intentions. Sleep can benefit problem-solving through the generalisation of newly encoded information and abstraction of the gist. In conclusion, our results indicate that sleep, even a nap, can potentiate the solution of problems that involve logical reasoning. Thus, sleep's function seems to go beyond memory consolidation to include managing of everyday-life events. PMID:24416219
NASA Astrophysics Data System (ADS)
Kuncoro, K. S.; Junaedi, I.; Dwijanto
2018-03-01
This study aimed to reveal the effectiveness of a computer-aided Project Based Learning program with a Resource Based Learning approach and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was a mixed method with a sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects on each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with the computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya's steps, while the W-BPS (Weak Bottom Problem Solving) subject was unable to meet almost all of the problem-solving indicators. The W-BPS subject could not precisely construct the initial completion table, so the completion phase with Polya's steps was constrained.
The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs
NASA Astrophysics Data System (ADS)
Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.
2017-12-01
The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.
Action research methodology in clinical pharmacy: how to involve and change.
Nørgaard, Lotte Stig; Sørensen, Ellen Westh
2016-06-01
Introduction The focus in clinical pharmacy practice has, for the last 30-35 years, been on changing the role of pharmacy staff towards service orientation and patient counselling. One way of doing this is by involving staff in the change process and, as a researcher, taking part in that process by establishing partnerships with staff. Drawing on the authors' extensive action research (AR) experience, recommendations and comments on how to conduct an AR study are described, and one of their AR-based studies illustrates the methodology and the research methods used. Methodology AR is defined as an approach to research which is based on a problem-solving relationship between researchers and clients, and which aims both at solving a problem and at collaboratively generating new knowledge. Research questions relevant in AR studies are: what was the working process in this change-oriented study? What learning and/or changes took place? What challenges/pitfalls had to be overcome? What were the influences/consequences for the involved parties? When to use If you want to implement new services and want to involve staff and others in the process, an AR methodology is very suitable. The basic advantages of AR-based studies are grounded in their participatory and democratic basis and their starting point in problems experienced in practice. Limitations One limitation of AR studies is that no single participant in a project steering group can make decisions alone. Furthermore, the collective process makes the decision-making procedures relatively complex.
NASA Technical Reports Server (NTRS)
Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.
Effects of Variation and Prior Knowledge on Abstract Concept Learning
ERIC Educational Resources Information Center
Braithwaite, David W.; Goldstone, Robert L.
2015-01-01
Learning abstract concepts through concrete examples may promote learning at the cost of inhibiting transfer. The present study investigated one approach to solving this problem: systematically varying superficial features of the examples. Participants learned to solve problems involving a mathematical concept by studying either superficially…
Student Involvement in Problem Solving and Decision Making--A Look at the Facts of Life.
ERIC Educational Resources Information Center
Sweeney, Jim
1979-01-01
The author contends that, despite the belief of principals and teachers that students participate in school decision making and problem solving, in reality they do not. He suggests ways in which this condition can be rectified. (KC)
The Emotional Dimensions of the Problem-Solving Process.
ERIC Educational Resources Information Center
Hill, Barbara; And Others
1979-01-01
Predictable affective responses are evoked during each phase of a group or organizational problem-solving process. With the needs assessment phase come hope and energy; with goal-setting, confusion and dissatisfaction; with action planning, involvement and accomplishment; with implementation, "stage fright" and joy; with evaluation, pride or…
NASA Astrophysics Data System (ADS)
Patel, Jitendra Kumar; Natarajan, Ganesh
2018-05-01
We present an interpolation-free diffuse interface immersed boundary method for multiphase flows with moving bodies. A single fluid formalism using the volume-of-fluid approach is adopted to handle multiple immiscible fluids which are distinguished using the volume fractions, while the rigid bodies are tracked using an analogous volume-of-solid approach that solves for the solid fractions. The solution of the fluid flow equations is carried out using a finite volume-immersed boundary method, with the latter based on a diffuse interface philosophy. In the present work, we assume that the solids are filled with a "virtual" fluid with density and viscosity equal to the largest among all fluids in the domain. The solids are assumed to be rigid and their motion is solved using Newton's second law of motion. The immersed boundary methodology constructs a modified momentum equation that reduces to the Navier-Stokes equations in the fully fluid region and recovers the no-slip boundary condition inside the solids. An implicit incremental fractional-step methodology in conjunction with a novel hybrid staggered/non-staggered approach is employed, wherein a single equation for normal momentum at the cell faces is solved everywhere in the domain, independent of the number of spatial dimensions. The scalars are all solved for at the cell centres, with the transport equations for solid and fluid volume fractions solved using a high-resolution scheme. The pressure is determined everywhere in the domain (including inside the solids) using a variable coefficient Poisson equation. The solution to momentum, pressure, solid and fluid volume fraction equations everywhere in the domain circumvents the issue of pressure and velocity interpolation, which is a source of spurious oscillations in sharp interface immersed boundary methods. A well-balanced algorithm with consistent mass/momentum transport ensures robust simulations of high density ratio flows with strong body forces. The proposed diffuse interface immersed boundary method is shown to be discretely mass-preserving while being temporally second-order accurate and exhibits nominal second-order accuracy in space. We examine the efficacy of the proposed approach through extensive numerical experiments involving one or more fluids and solids, which include two-particle sedimentation in homogeneous and stratified environments. The results from the numerical simulations show that the proposed methodology results in reduced spurious force oscillations in the case of moving bodies while accurately resolving complex flow phenomena in multiphase flows with moving solids. These studies demonstrate that the proposed diffuse interface immersed boundary method, which could be related to a class of penalisation approaches, is a robust and promising alternative to computationally expensive conformal moving mesh algorithms as well as the class of sharp interface immersed boundary methods for multibody problems in multi-phase flows.
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
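The paper's Sequent Balance scheduling experiments cannot be reproduced here, but the preconditioned-iteration structure can be sketched. SciPy does not ship an incomplete Cholesky factorization, so the snippet below substitutes an incomplete LU (spilu) as the preconditioner for conjugate gradients on a 2-D Laplacian and simply counts iterations with and without preconditioning; the drop tolerance and fill factor are illustrative values, and the triangular solves here are serial rather than scheduled in parallel.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Symmetric positive definite test matrix: 2-D Laplacian on an n x n grid
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization used as a preconditioner
# (standing in for incomplete Cholesky, which SciPy does not provide).
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

iters = {"plain": 0, "prec": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x1, _ = spla.cg(A, b, callback=counter("plain"))
x2, _ = spla.cg(A, b, M=M, callback=counter("prec"))
print("CG iterations without / with preconditioner:", iters["plain"], iters["prec"])
```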
NASA Astrophysics Data System (ADS)
Hernandez-Walls, R.; Martín-Atienza, B.; Salinas-Matus, M.; Castillo, J.
2017-11-01
When solving the linear inviscid shallow water equations with variable depth in one dimension using finite differences, a tridiagonal system of equations must be solved. Here we present an approach, which is more efficient than the commonly used numerical method, to solve this tridiagonal system of equations using a recursion formula. We illustrate this approach with an example in which we solve for a rectangular channel to find the resonance modes. Our numerical solution agrees very well with the analytical solution. This new method is easy for undergraduate students to use and understand, so it can be implemented in undergraduate courses such as Numerical Methods, Linear Algebra or Differential Equations.
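The abstract does not spell out its recursion formula, so the sketch below shows the standard Thomas algorithm (forward elimination plus back substitution) for a generic tridiagonal system, checked against a dense solve; the diagonal entries loosely mimic a variable-coefficient discretization but are otherwise made up.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (standard Thomas algorithm)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0]/b[0]
    dp[0] = d[0]/b[0]
    for i in range(1, n):
        m = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i]*dp[i-1])/m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

# quick check against a dense solver
n = 200
a = np.r_[0.0, -np.ones(n-1)]          # a[0] unused
b = 2.0 + 0.01*np.arange(n)            # variable "depth"-like diagonal
c = np.r_[-np.ones(n-1), 0.0]          # c[-1] unused
d = np.random.default_rng(0).random(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))   # True
```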
Collaborative problem solving with a total quality model.
Volden, C M; Monnig, R
1993-01-01
A collaborative problem-solving system committed to the interests of those involved complies with the teachings of the total quality management movement in health care. Deming espoused that any quality system must become an integral part of routine activities. A process that is used consistently in dealing with problems, issues, or conflicts provides a mechanism for accomplishing total quality improvement. The collaborative problem-solving process described here results in quality decision-making. This model incorporates Ishikawa's cause-and-effect (fishbone) diagram, Moore's key causes of conflict, and the steps of the University of North Dakota Conflict Resolution Center's collaborative problem solving model.
NASA Astrophysics Data System (ADS)
Vasant, Pandian; Barsoum, Nader
2008-10-01
Many engineering, science, information technology and management optimization problems can be considered as nonlinear programming real-world problems where all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this research paper is to solve a nonlinear fuzzy optimization problem, where the technological coefficients in the constraints are fuzzy numbers represented by logistic membership functions, by using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production planning for the decision variables and the profit of the company.
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
The semantic system is involved in mathematical problem solving.
Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng
2018-02-01
Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on the artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
A family of conjugate gradient methods for large-scale nonlinear equations.
Feng, Dexiang; Sun, Min; Wang, Xueyong
2017-01-01
In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, it needs low storage and the subproblem can be easily solved. Compared with the existing solution methods for solving the problem, its global convergence is established without the restriction of the Lipschitz continuity on the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Arteaga, Santiago Egido
1998-12-01
The steady-state Navier-Stokes equations are of considerable interest because they are used to model numerous common physical phenomena. The applications encountered in practice often involve small viscosities and complicated domain geometries, and they result in challenging problems in spite of the vast attention that has been dedicated to them. In this thesis we examine methods for computing the numerical solution of the primitive variable formulation of the incompressible equations on distributed memory parallel computers. We use the Galerkin method to discretize the differential equations, although most results are stated so that they apply also to stabilized methods. We also reformulate some classical results in a single framework and discuss some issues frequently dismissed in the literature, such as the implementation of pressure space bases and non-homogeneous boundary values. We consider three nonlinear methods: Newton's method, Oseen's (or Picard) iteration, and sequences of Stokes problems. All these iterative nonlinear methods require solving a linear system at every step. Newton's method has quadratic convergence while that of the others is only linear; however, we obtain theoretical bounds showing that Oseen's iteration is more robust, and we confirm it experimentally. In addition, although Oseen's iteration usually requires more iterations than Newton's method, the linear systems it generates tend to be simpler and its overall costs (in CPU time) are lower. The Stokes problems result in linear systems which are easier to solve, but their convergence is much slower, so that it is competitive only for large viscosities. Inexact versions of these methods are studied, and we explain why the best timings are obtained using relatively modest error tolerances in solving the corresponding linear systems. We also present a new damping optimization strategy based on the quadratic nature of the Navier-Stokes equations, which improves the robustness of all the linearization strategies considered and whose computational cost is negligible. The algebraic properties of these systems depend on both the discretization and nonlinear method used. We study in detail the positive definiteness and skew-symmetry of the advection submatrices (essentially, convection-diffusion problems). We propose a discretization based on a new trilinear form for Newton's method. We solve the linear systems using three Krylov subspace methods, GMRES, QMR and TFQMR, and compare the advantages of each. Our emphasis is on parallel algorithms, and so we consider preconditioners suitable for parallel computers such as line variants of the Jacobi and Gauss-Seidel methods, alternating direction implicit methods, and Chebyshev and least squares polynomial preconditioners. These work well for moderate viscosities (moderate Reynolds number). For small viscosities we show that effective parallel solution of the advection subproblem is a critical factor to improve performance. Implementation details on a CM-5 are presented.
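The Newton-versus-Oseen trade-off described above (quadratic but less robust convergence versus linear but more robust convergence) can be illustrated on a toy scalar problem; the sketch below compares a Picard (fixed-point) iteration with Newton's method for x = cos(x). This is only an analogy, not the thesis's Navier-Stokes linearizations.

```python
import numpy as np

# Toy problem: solve f(x) = x - cos(x) = 0.
# Picard (fixed-point) iteration converges linearly; Newton converges quadratically.
root = 0.7390851332151607

x = 1.0
print("Picard:")
for k in range(6):
    x = np.cos(x)                                   # x_{k+1} = cos(x_k)
    print(f"  iter {k+1}: error = {abs(x - root):.2e}")

x = 1.0
print("Newton:")
for k in range(6):
    x = x - (x - np.cos(x))/(1.0 + np.sin(x))       # f'(x) = 1 + sin(x)
    print(f"  iter {k+1}: error = {abs(x - root):.2e}")
```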
NASA Astrophysics Data System (ADS)
Buddala, Raviteja; Mahapatra, Siba Sankar
2017-11-01
The flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having `g' operations is performed on `g' operation centres (stages) with each stage having only one machine. If any stage contains more than one machine for providing an alternate processing facility, then the problem becomes a flexible flow shop problem (FFSP). The FFSP, which contains all the complexities involved in simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving the FFSP is rather tricky and time consuming. To address this limitation, the teaching-learning-based optimization (TLBO) and JAYA algorithms are chosen for the study because they are not only recent meta-heuristics but also do not require tuning of algorithm-specific parameters. Although these algorithms seem to be elegant, they lose solution diversity after a few iterations and get trapped at local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by the genetic algorithm) is incorporated into the basic algorithm to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. From the results, it is found that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
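As a hedged sketch of the parameter-free update rule that makes JAYA attractive here, the snippet below runs JAYA on a continuous sphere function rather than on the FFSP makespan objective (encoding a schedule is beyond a short example); the population size and iteration count are arbitrary.

```python
import numpy as np

def cost(X):
    """Toy continuous objective (sphere); the paper's objective is FFSP makespan."""
    return np.sum(X**2, axis=1)

rng = np.random.default_rng(0)
pop, dim, iters = 20, 5, 300
X = rng.uniform(-10, 10, (pop, dim))
f = cost(X)

for _ in range(iters):
    best, worst = X[np.argmin(f)], X[np.argmax(f)]
    r1, r2 = rng.random((2, pop, dim))
    # JAYA update: move toward the best solution and away from the worst,
    # with no algorithm-specific tuning parameters.
    Xnew = X + r1*(best - np.abs(X)) - r2*(worst - np.abs(X))
    fnew = cost(Xnew)
    improved = fnew < f                       # greedy acceptance
    X[improved], f[improved] = Xnew[improved], fnew[improved]

print("best objective value:", f.min())
```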
NASA Astrophysics Data System (ADS)
Nguyen, Dong-Hai
This research project investigates the difficulties students encounter when solving physics problems involving the integral and the area under the curve concepts and the strategies to facilitate students learning to solve those types of problems. The research contexts of this project are calculus-based physics courses covering mechanics and electromagnetism. In phase I of the project, individual teaching/learning interviews were conducted with 20 students in mechanics and 15 students from the same cohort in electromagnetism. The students were asked to solve problems on several topics of mechanics and electromagnetism. These problems involved calculating physical quantities (e.g. velocity, acceleration, work, electric field, electric resistance, electric current) by integrating or finding the area under the curve of functions of related quantities (e.g. position, velocity, force, charge density, resistivity, current density). Verbal hints were provided when students made an error or were unable to proceed. A total number of 140 one-hour interviews were conducted in this phase, which provided insights into students' difficulties when solving the problems involving the integral and the area under the curve concepts and the hints to help students overcome those difficulties. In phase II of the project, tutorials were created to facilitate students' learning to solve physics problems involving the integral and the area under the curve concepts. Each tutorial consisted of a set of exercises and a protocol that incorporated the helpful hints to target the difficulties that students expressed in phase I of the project. Focus group learning interviews were conducted to test the effectiveness of the tutorials in comparison with standard learning materials (i.e. textbook problems and solutions). Overall results indicated that students learning with our tutorials outperformed students learning with standard materials in applying the integral and the area under the curve concepts to physics problems. The results of this project provide broader and deeper insights into students' problem solving with the integral and the area under the curve concepts and suggest strategies to facilitate students' learning to apply these concepts to physics problems. This study also has significant implications for further research, curriculum development and instruction.
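As a small numerical illustration of the "area under the curve" idea that the tutorials target (with made-up data, not an item from the interviews), displacement can be recovered as the trapezoidal-rule area under a velocity-time curve.

```python
import numpy as np

# Velocity samples v(t) = 3t^2 (m/s) on 0 <= t <= 2 s.
t = np.linspace(0.0, 2.0, 201)
v = 3.0*t**2

# Displacement = integral of v dt = area under the v-t curve (trapezoidal rule).
displacement = np.sum(0.5*(v[1:] + v[:-1])*np.diff(t))
print(displacement)   # ~8.0 m, matching the exact antiderivative t^3 evaluated at t = 2
```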
NASA Astrophysics Data System (ADS)
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique basically depends on the non-standard finite difference (NSFD) method and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method together with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
Problem Finding in Professional Learning Communities: A Learning Study Approach
ERIC Educational Resources Information Center
Tan, Yuen Sze Michelle; Caleon, Imelda Santos
2016-01-01
This study marries collaborative problem solving and learning study in understanding the onset of a cycle of teacher professional development process within school-based professional learning communities (PLCs). It aimed to explore how a PLC carried out collaborative problem finding--a key process involved in collaborative problem solving--that…
Design and Implementation of the Game-Design and Learning Program
ERIC Educational Resources Information Center
Akcaoglu, Mete
2016-01-01
Design involves solving complex, ill-structured problems. Design tasks are consequently, appropriate contexts for children to exercise higher-order thinking and problem-solving skills. Although creating engaging and authentic design contexts for young children is difficult within the confines of traditional schooling, recently, game-design has…
Engaging Students with Pre-Recorded "Live" Reflections on Problem-Solving with "Livescribe" Pens
ERIC Educational Resources Information Center
Hickman, Mike
2013-01-01
This pilot study, involving PGCE primary student teachers, applies "Livescribe" pen technology to facilitate individual and group reflection on collaborative mathematical problem solving (Hickman 2011). The research question was: How does thinking aloud, supported by digital audio recording, support student teachers' understanding of…
ERIC Educational Resources Information Center
Nitschke, Kai; Ruh, Nina; Kappler, Sonja; Stahl, Christoph; Kaller, Christoph P.
2012-01-01
Understanding the functional neuroanatomy of planning and problem solving may substantially benefit from better insight into the chronology of the cognitive processes involved. Based on the assumption that regularities in cognitive processing are reflected in overtly observable eye-movement patterns, here we recorded eye movements while…
Teaching Students with Moderate Intellectual Disability to Solve Word Problems
ERIC Educational Resources Information Center
Browder, Diane M.; Spooner, Fred; Lo, Ya-yu; Saunders, Alicia F.; Root, Jenny R.; Ley Davis, Luann; Brosh, Chelsi R.
2018-01-01
This study evaluated an intervention developed through an Institute of Education Sciences-funded Goal 2 research project to teach students with moderate intellectual disability (moderate ID) to solve addition and subtraction word problems. The intervention involved modified schema-based instruction that embedded effective practices (e.g.,…
Infusing Action Mazes into Language Assessment Class Using Quandary
ERIC Educational Resources Information Center
Kiliçkaya, Ferit
2017-01-01
It is widely acknowledged that problem solving is one of today's prominent skills and is an ongoing activity where learners are actively involved in seeking information, generating new knowledge based on this information, and making decisions accordingly. In this respect, through infusing problem-solving into the curriculum of language teaching, it…
Designing WebQuests to Support Creative Problem Solving
ERIC Educational Resources Information Center
Rubin, Jim
2013-01-01
WebQuests have been a popular alternative for collaborative group work that utilizes internet resources, but studies have questioned how effective they are in challenging students to use higher order thinking processes that involve creative problem solving. This article explains how different levels of inquiry relate to categories of learning…
The Effects of Motivation and Emotion upon Problem Solving.
ERIC Educational Resources Information Center
Sanders, Michele; Matsumoto, David
Recent research has refuted the behaviorist approach by establishing a relationship between emotion and behavior. The data collection procedure, however, has often involved an inferred emotional state from a hypothetical situation. As partial fulfillment of a class requirement, 60 college students were asked to perform two problem solving tasks…
ERIC Educational Resources Information Center
Engelmann, Tanja; Tergan, Sigmar-Olaf; Hesse, Friedrich W.
2010-01-01
Computer-supported collaboration by spatially distributed group members still involves interaction problems within the group. This article presents an empirical study investigating the question of whether computer-supported collaborative problem solving by spatially distributed group members can be fostered by evoking knowledge and information…
Aspects of the Cognitive Model of Physics Problem Solving.
ERIC Educational Resources Information Center
Brekke, Stewart E.
Various aspects of the cognitive model of physics problem solving are discussed in detail including relevant cues, encoding, memory, and input stimuli. The learning process involved in the recognition of familiar and non-familiar sensory stimuli is highlighted. Its four components include selection, acquisition, construction, and integration. The…
REACTT: an algorithm for solving spatial equilibrium problems.
D.J. Brooks; J. Kincaid
1987-01-01
The problem of determining equilibrium prices and quantities in spatially separated markets is reviewed. Algorithms that compute spatial equilibria are discussed. A computer program using the reactive programming algorithm for solving spatial equilibrium problems that involve multiple commodities is presented, along with detailed documentation. A sample data set,...
Teaching Math. Extending Problem Solving.
ERIC Educational Resources Information Center
May, Lola
1996-01-01
Describes four teaching activities to help children extend math problem-solving skills by using their own questions. Activities involve using a chart and symbols to develop equations adding up to 12, going on an imaginary shopping trip, using shapes to represent dollar amounts, using the date on a penny to engage in various mathematical…
Conceptual Transformation and Cognitive Processes in Origami Paper Folding
ERIC Educational Resources Information Center
Tenbrink, Thora; Taylor, Holly A.
2015-01-01
Research on problem solving typically does not address tasks that involve following detailed and/or illustrated step-by-step instructions. Such tasks are not seen as cognitively challenging problems to be solved. In this paper, we challenge this assumption by analyzing verbal protocols collected during an Origami folding task. Participants…
Spatial Visualization in Physics Problem Solving
ERIC Educational Resources Information Center
Kozhevnikov, Maria; Motes, Michael A.; Hegarty, Mary
2007-01-01
Three studies were conducted to examine the relation of spatial visualization to solving kinematics problems that involved either predicting the two-dimensional motion of an object, translating from one frame of reference to another, or interpreting kinematics graphs. In Study 1, 60 physics-naive students were administered kinematics problems and…
Implementing the Japanese Problem-Solving Lesson Structure
ERIC Educational Resources Information Center
Groves, Susie
2013-01-01
While there has been worldwide interest in Japanese Lesson Study as a model for teacher professional learning, there has been less research into authentic implementation of the problem-solving lesson structure that underpins mathematics research lessons in Japan. Findings from a Lesson Study project involving teachers from three Victorian primary…
Cognitive Principles of Problem Solving and Instruction. Final Report.
ERIC Educational Resources Information Center
Greeno, James G.; And Others
Research in this project studied cognitive processes involved in understanding and solving problems used in instruction in the domain of mathematics, and explored implications of these cognitive analyses for the design of instruction. Three general issues were addressed: knowledge required for understanding problems, knowledge of the conditions…
Asad, Munazza; Iqbal, Khadija; Sabir, Mohammad
2015-01-01
Problem based learning (PBL) is an instructional approach that utilizes problems or cases as a context for students to acquire problem solving skills. It promotes communication skills, active learning, and critical thinking skills, and it encourages peer teaching and active participation in a group. This was a cross-sectional study conducted at Al Nafees Medical College, Isra University, Islamabad, over a duration of one month. The study was conducted on 193 students of both 1st and 2nd year MBBS. Each PBL consisted of three sessions, spaced by 2-3 days. In the first session, students were provided a PBL case developed by both basic and clinical science faculty. In session 2 (group discussion), they shared and integrated their knowledge with the group, and the wrap-up (third session) concluded the case at the end. A questionnaire-based survey was conducted to find out the overall effectiveness of the PBL sessions. Teaching through PBLs greatly improved problem solving and critical reasoning skills, with 60% of first-year and 71% of second-year students agreeing that the acquisition of knowledge and its application in solving multiple choice questions (MCQs) was greatly improved by these sessions. They observed that their self-directed learning, intrinsic motivation, and ability to relate basic concepts to clinical reasoning, which involves higher order thinking, had been greatly enhanced. Students found PBLs to be an effective strategy to promote teamwork and critical thinking skills. PBL is an effective method to improve critical thinking and problem solving skills among medical students.
[Problem-solving approach in the training of healthcare professionals].
Batista, Nildo; Batista, Sylvia Helena; Goldenberg, Paulete; Seiffert, Otília; Sonzogno, Maria Cecília
2005-04-01
To discuss the problem-solving approach in the training of healthcare professionals who would be able to act both in academic life and in educational practices in services and communities. This is an analytical description of an experience of problem-based learning in specialization-level training that was developed within a university-level healthcare education institution. The analysis focuses on three perspectives: course design, student-centered learning and the teacher's role. The problem-solving approach provided impetus to the learning experience for these postgraduate students. There was increased motivation, leadership development and teamworking. This was translated through their written work, seminars and portfolio preparation. The evaluation process for these experiences presupposes well-founded practices that express the views of the subjects involved: self-assessment and observer assessment. The impact of this methodology on teaching practices is that there is a need for greater knowledge of the educational theories behind the principles of significant learning, teachers as intermediaries and research as an educational axiom. The problem-solving approach is an innovative response to the challenges of training healthcare professionals. Its potential is recognized, while it is noted that educational innovations are characterized by causing ruptures in consolidated methods and by establishing different ways of responding to demands presented at specific moments. The critical problems were identified, while highlighting the risk of considering this approach to be a technical tool that is unconnected with the design of the teaching policy. Experiences and analyses based on the problem-solving assumptions need to be shared, thus enabling the production of knowledge that strengthens the transformation of educational practices within healthcare.
Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors
NASA Technical Reports Server (NTRS)
VanOverbeke, Thomas J.
1998-01-01
The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work a more physically correct way of handling the effect of turbulence on combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics and by using more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The SIMPLE method is used to solve for velocity, pressure, turbulent kinetic energy and dissipation. The pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method in agreement with the data, except for the premixed flame. The baseline and stochastic predictions bounded the experimental data for the premixed flame. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation. Grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Sad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace span{b, Ab, ..., A^(m-1)b}. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Liapunov matrix equations, and nonlinear systems of equations is discussed.
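As a minimal, hedged illustration of a Krylov subspace solver in practice (using SciPy's GMRES rather than any implementation from the review), the snippet below solves a small nonsymmetric convection-diffusion-like system and records the residual history; the matrix entries and restart length are example values.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric sparse system resembling a 1-D convection-diffusion discretization
n = 500
A = sp.diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

residuals = []
x, info = spla.gmres(A, b, restart=30,
                     callback=lambda rnorm: residuals.append(rnorm),
                     callback_type='pr_norm')
print("converged flag (0 = success):", info)
print("GMRES iterations:", len(residuals), " final relative residual:", residuals[-1])
```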
Transshipment site selection using the AHP and TOPSIS approaches under fuzzy environment.
Onüt, Semih; Soner, Selin
2008-01-01
Site selection is an important issue in waste management. Selection of the appropriate solid waste site requires consideration of multiple alternative solutions and evaluation criteria because of system complexity. Evaluation procedures involve several objectives, and it is often necessary to compromise among possibly conflicting tangible and intangible factors. For these reasons, multiple criteria decision-making (MCDM) has been found to be a useful approach to solve this kind of problem. Different MCDM models have been applied to solve this problem. But most of them are basically mathematical and ignore qualitative and often subjective considerations. It is easier for a decision-maker to describe a value for an alternative by using linguistic terms. In the fuzzy-based method, the rating of each alternative is described using linguistic terms, which can also be expressed as triangular fuzzy numbers. Furthermore, there have not been any studies focused on the site selection in waste management using both fuzzy TOPSIS (technique for order preference by similarity to ideal solution) and AHP (analytical hierarchy process) techniques. In this paper, a fuzzy TOPSIS based methodology is applied to solve the solid waste transshipment site selection problem in Istanbul, Turkey. The criteria weights are calculated by using the AHP.
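The paper uses fuzzy TOPSIS with triangular fuzzy ratings and AHP-derived weights; the sketch below shows only the crisp core of the TOPSIS ranking step, with an entirely made-up decision matrix and weights standing in for AHP output, to make the closeness-coefficient computation concrete.

```python
import numpy as np

# Decision matrix: rows = candidate sites, columns = criteria (made-up scores).
X = np.array([[7.0, 9.0, 9.0, 8.0],
              [8.0, 7.0, 8.0, 7.0],
              [9.0, 6.0, 8.0, 9.0],
              [6.0, 7.0, 8.0, 6.0]])
weights = np.array([0.35, 0.30, 0.20, 0.15])   # e.g. obtained from an AHP pairwise comparison
benefit = np.array([True, True, False, True])  # False marks a cost-type criterion

# 1. vector-normalize and weight the decision matrix
V = weights * X / np.linalg.norm(X, axis=0)

# 2. ideal and anti-ideal solutions per criterion
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. closeness coefficient: distance to anti-ideal over total distance
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

for rank, i in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: site {i+1}  closeness = {closeness[i]:.3f}")
```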
A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems
Ying, Wenjun; Henriquez, Craig S.
2013-01-01
This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
Solving Differential Equations in R: Package deSolve
In this paper we present the R package deSolve to solve initial value problems (IVP) written as ordinary differential equations (ODE), differential algebraic equations (DAE) of index 0 or 1 and partial differential equations (PDE), the latter solved using the method of lines appr...
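deSolve itself is an R package; as a language-neutral illustration of the method of lines it mentions (discretize space first, then hand the resulting ODE system to an initial-value solver), here is a small SciPy analogue. The grid size, time span, and solver choice are assumptions for the example and do not reflect the package's own interface.

```python
# Method-of-lines sketch: semi-discretize the 1D heat equation u_t = u_xx in space,
# then integrate the resulting ODE system with a stiff IVP solver. This mirrors the
# workflow described for deSolve, but in Python/SciPy rather than R.
import numpy as np
from scipy.integrate import solve_ivp

n = 100
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def rhs(t, u):
    dudt = np.zeros_like(u)
    dudt[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2  # interior second difference
    return dudt                                            # u[0], u[-1] held at 0 (Dirichlet)

u0 = np.sin(np.pi * x)                                     # decays as exp(-pi^2 t)
sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-6, atol=1e-8)
mid = n // 2
print(sol.y[mid, -1], np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x[mid]))  # numeric vs. exact
```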
Discussion summary: Fictitious domain methods
NASA Technical Reports Server (NTRS)
Glowinski, Roland; Rodrigue, Garry
1991-01-01
Fictitious Domain methods are constructed in the following manner: Suppose a partial differential equation is to be solved on an open bounded set, Omega, in 2-D or 3-D. Let R be a rectangular domain containing the closure of Omega. The partial differential equation is first solved on R. Using the solution on R, the solution of the equation on Omega is then recovered by some procedure. The advantage of the fictitious domain method is that in many cases the solution of a partial differential equation on a rectangular region is easier to compute than on a nonrectangular region. Fictitious domain methods for solving elliptic PDEs on general regions are also very efficient when used on a parallel computer. The reason is that one can use the many domain decomposition methods that are available for solving the PDE on the fictitious rectangular region. The discussion on fictitious domain methods began with a talk by R. Glowinski in which he gave some examples of a variational approach to fictitious domain methods for solving the Helmholtz and Navier-Stokes equations.
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough so that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antenna's current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
An Efficient Statistical Method to Compute Molecular Collisional Rate Coefficients
NASA Astrophysics Data System (ADS)
Loreau, Jérôme; Lique, François; Faure, Alexandre
2018-01-01
Our knowledge about the “cold” universe often relies on molecular spectra. A general property of such spectra is that the energy level populations are rarely at local thermodynamic equilibrium. Solving the radiative transfer thus requires the availability of collisional rate coefficients with the main colliding partners over the temperature range ∼10–1000 K. These rate coefficients are notoriously difficult to measure and expensive to compute. In particular, very few reliable collisional data exist for inelastic collisions involving reactive radicals or ions. In this Letter, we explore the use of a fast quantum statistical method to determine molecular collisional excitation rate coefficients. The method is benchmarked against accurate (but costly) rigid-rotor close-coupling calculations. For collisions proceeding through the formation of a strongly bound complex, the method is found to be highly satisfactory up to room temperature. Its accuracy decreases with decreasing potential well depth and with increasing temperature, as expected. This new method opens the way to the determination of accurate inelastic collisional data involving key reactive species such as {{{H}}}3+, H2O+, and H3O+ for which exact quantum calculations are currently not feasible.
Grassmann phase space methods for fermions. I. Mode theory
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Jeffers, J.; Barnett, S. M.
2016-07-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation, creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. The theory of Grassmann phase space methods for fermions based on separate modes is developed, showing how the distribution function is defined and used to determine quantum correlation functions, Fock state populations and coherences via Grassmann phase space integrals, how the Fokker-Planck equations are obtained and then converted into equivalent Ito equations for stochastic Grassmann variables. The fermion distribution function is an even Grassmann function, and is unique. The number of c-number Wiener increments involved is 2n², if there are n modes. The situation is somewhat different to the bosonic c-number case where only 2n Wiener increments are involved, the sign of the drift term in the Ito equation is reversed and the diffusion matrix in the Fokker-Planck equation is anti-symmetric rather than symmetric. The un-normalised B distribution is of particular importance for determining Fock state populations and coherences, and as pointed out by Plimak, Collett and Olsen, the drift vector in its Fokker-Planck equation only depends linearly on the Grassmann variables. Using this key feature we show how the Ito stochastic equations can be solved numerically for finite times in terms of c-number stochastic quantities. Averages of products of Grassmann stochastic variables at the initial time are also involved, but these are determined from the initial conditions for the quantum state. The detailed approach to the numerics is outlined, showing that (apart from standard issues in such numerics) numerical calculations for Grassmann phase space theories of fermion systems could be carried out without needing to represent Grassmann phase space variables on the computer, and only involving processes using c-numbers. We compare our approach to that of Plimak, Collett and Olsen and show that the two approaches differ. As a simple test case we apply the B distribution theory and solve the Ito stochastic equations to demonstrate coupling between degenerate Cooper pairs in a four-mode fermionic system involving spin-conserving interactions between the spin-1/2 fermions, where modes with momenta −k, +k, each associated with spin up, spin down states, are involved.
Cornoldi, Cesare; Carretti, Barbara; Drusi, Silvia; Tencati, Chiara
2015-09-01
Despite doubts voiced on their efficacy, a series of studies has been carried out on the capacity of training programmes to improve academic and reasoning skills by focusing on underlying cognitive abilities and working memory in particular. No systematic efforts have been made, however, to test training programmes that involve both general and specific underlying abilities. If effective, these programmes could help to increase students' motivation and competence. This study examined the feasibility of improving problem-solving skills in school children by means of a training programme that addresses general and specific abilities involved in problem solving, focusing on metacognition and working memory. The project involved a sample of 135 primary school children attending eight classes in the third, fourth, and fifth grades (age range 8-10 years). The classes were assigned to two groups, one attending the training programme in the first 3 months of the study (Training Group 1) and the other serving as a waiting-list control group (Training Group 2). In the second phase of the study, the role of the two groups was reversed, with Training Group 2 attending the training instead of Training Group 1. The training programme led to improvements in both metacognitive and working memory tasks, with positive-related effects on the ability to solve problems. The gains seen in Training Group 1 were also maintained at the second post-test (after 3 months). Specific activities focusing on metacognition and working memory may contribute to modifying arithmetical problem-solving performance in primary school children. © 2015 The British Psychological Society.
A Study on Intelligence of High School Students
ERIC Educational Resources Information Center
Rani, M. Usha; Prakash, Srinivasan
2015-01-01
Intelligence involves the ability to think, solve problems, analyze situations, and understand social values, customs, and norms. Intelligence is a general mental capability that involves the ability to reason, plan, think abstractly, comprehend ideas and language, and learn. Intellectual ability involves comprehension, understanding, and learning…
NASA Astrophysics Data System (ADS)
Maries, Alexandru; Singh, Chandralekha
2018-06-01
Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an investigation in which two different interventions were implemented during recitation quizzes in a large enrollment algebra-based introductory physics course. Students were either (i) asked to solve problems in which the diagrams were drawn for them or (ii) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed rubrics to score the problem solving performance of students in different intervention groups and investigated ten problems. We found that students who were provided diagrams never performed better and actually performed worse than the other students on three problems, one involving standing sound waves in a tube (discussed elsewhere) and two problems in electricity which we focus on here. These two problems were the only problems in electricity that involved considerations of initial and final conditions, which may partly account for why students provided with diagrams performed significantly worse than students who were not provided with diagrams. In order to explore potential reasons for this finding, we conducted interviews with students and found that some students provided with diagrams may have spent less time on the conceptual analysis and planning stage of the problem solving process. In particular, those provided with the diagram were more likely to jump into the implementation stage of problem solving early without fully analyzing and understanding the problem, which can increase the likelihood of mistakes in solutions.
NASA Astrophysics Data System (ADS)
Novikov, A. E.
1993-10-01
Several methods exist for solving the flow-distribution problem in hydraulic networks, but none of them provides the mathematical tools needed to form the joint systems of equations for this problem. This paper suggests a method of constructing joint systems of equations to calculate hydraulic circuits of arbitrary form. The graph concept, according to Kirchhoff, is introduced.
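To illustrate what a "joint system of equations" for a network built on Kirchhoff's laws looks like, the sketch below assembles and solves a linearized flow-distribution problem on a tiny made-up network (flow in each pipe taken proportional to the head difference). The topology, conductances, and inflows are invented for the example; the paper's construction for arbitrary circuits is more general.

```python
# Sketch of a joint system of equations for a small hydraulic network using a
# graph (Kirchhoff) formulation, linearized so that pipe flow = conductance * head
# difference. All numbers are illustrative, not from the paper.
import numpy as np

# Edges as (from_node, to_node, conductance); node 0 is the reference with fixed head.
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 1.5), (2, 3, 2.5)]
n_nodes = 4
inflow = np.array([0.0, 1.0, 0.0, -1.0])   # external in/outflows at nodes (sum to zero)

# Assemble the weighted graph Laplacian: Kirchhoff's first law (mass balance) at each node.
L = np.zeros((n_nodes, n_nodes))
for a, b, g in edges:
    L[a, a] += g; L[b, b] += g
    L[a, b] -= g; L[b, a] -= g

# Fix the head at node 0 and solve the reduced joint system for the remaining heads.
heads = np.zeros(n_nodes)
heads[1:] = np.linalg.solve(L[1:, 1:], inflow[1:])

# Recover pipe flows from head differences.
flows = [(a, b, g * (heads[a] - heads[b])) for a, b, g in edges]
print(heads, flows)
```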
New Finite Difference Methods Based on IIM for Inextensible Interfaces in Incompressible Flows
Li, Zhilin; Lai, Ming-Chih
2012-01-01
In this paper, new finite difference methods based on the augmented immersed interface method (IIM) are proposed for simulating an inextensible moving interface in an incompressible two-dimensional flow. The mathematical models arise from studying the deformation of red blood cells in mathematical biology. The governing equations are incompressible Stokes or Navier-Stokes equations with an unknown surface tension, which should be determined in such a way that the surface divergence of the velocity is zero along the interface. Thus, the area enclosed by the interface and the total length of the interface should be conserved during the evolution process. Because of the nonlinear and coupling nature of the problem, direct discretization by applying the immersed boundary or immersed interface method yields complex nonlinear systems to be solved. In our new methods, we treat the unknown surface tension as an augmented variable so that the augmented IIM can be applied. Since finding the unknown surface tension is essentially an inverse problem that is sensitive to perturbations, our regularization strategy is to introduce a controlled tangential force along the interface, which leads to a least squares problem. For Stokes equations, the forward solver at one time level involves solving three Poisson equations with an interface. For Navier-Stokes equations, we propose a modified projection method that can enforce the pressure jump condition corresponding directly to the unknown surface tension. Several numerical experiments show good agreement with other results in the literature and reveal some interesting phenomena. PMID:23795308
Analytical sizing methods for behind-the-meter battery storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Kintner-Meyer, Michael; Yang, Tao
In behind-the-meter applications, a battery storage system (BSS) is utilized to reduce a commercial or industrial customer's payment for electricity use, including energy charge and demand charge. The potential value of BSS in payment reduction and the most economic size can be determined by formulating and solving standard mathematical programming problems. In this method, users input system information such as load profiles, energy/demand charge rates, and battery characteristics to construct a standard programming problem that typically involves a large number of constraints and decision variables. Such a large-scale programming problem is then solved by optimization solvers to obtain numerical solutions. Such a method cannot directly link the obtained optimal battery sizes to input parameters and requires case-by-case analysis. In this paper, we present an objective quantitative analysis of costs and benefits of customer-side energy storage, and thereby identify key factors that affect battery sizing. Based on the analysis, we then develop simple but effective guidelines that can be used to determine the most cost-effective battery size or guide utility rate design for stimulating energy storage development. The proposed analytical sizing methods are innovative, and offer engineering insights on how the optimal battery size varies with system characteristics. We illustrate the proposed methods using practical building load profile and utility rate. The obtained results are compared with the ones using mathematical programming based methods for validation.
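As a toy example of the closed-form, analysis-style reasoning the abstract contrasts with large programming formulations, the sketch below converts a load profile and a demand-charge target into required battery power and energy ratings by shaving every exceedance. The profile, target, and sizing rule are invented for illustration and are not the paper's actual guidelines.

```python
# Toy analytical sizing sketch: given a metered load profile and a target peak
# demand, compute the battery power and energy needed to shave every exceedance.
# The numbers and the rule itself are illustrative assumptions, not the paper's method.
import numpy as np

load = np.array([40, 42, 55, 80, 95, 90, 70, 50], dtype=float)  # kW, hourly samples
dt = 1.0                                                         # hours per sample
target_peak = 75.0                                               # kW demand target

excess = np.maximum(load - target_peak, 0.0)
power_kw = excess.max()                     # discharge power rating needed

# Energy rating: the largest energy delivered over one contiguous exceedance window.
energy_kwh, run = 0.0, 0.0
for e in excess:
    run = run + e * dt if e > 0 else 0.0
    energy_kwh = max(energy_kwh, run)

print(f"power >= {power_kw:.1f} kW, energy >= {energy_kwh:.1f} kWh")
```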
Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
2017-03-01
H∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and value are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.
An adaptive grid algorithm for one-dimensional nonlinear equations
NASA Technical Reports Server (NTRS)
Gutierrez, William E.; Hills, Richard G.
1990-01-01
Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
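For readers unfamiliar with the tridiagonal baseline the abstract refers to, the sketch below shows the Thomas algorithm applied to one backward-Euler step of a 1D diffusion-type equation. The grid size, coefficients, and initial profile are generic placeholders, not the study's Richards-equation setup.

```python
# Sketch of the tridiagonal (Thomas) solve that a standard implicit step for a 1D
# diffusion-type equation reduces to -- the baseline the abstract compares the
# adaptive and multigrid schemes against. All parameters are generic.
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# One backward-Euler step of u_t = D u_xx with zero Dirichlet boundaries.
n, D, dt = 49, 1.0, 1e-3
dx = 1.0 / (n + 1)
r = D * dt / dx**2
u = np.sin(np.pi * np.linspace(dx, 1.0 - dx, n))        # interior values
a = np.full(n, -r); b = np.full(n, 1.0 + 2.0 * r); c = np.full(n, -r)
a[0] = 0.0; c[-1] = 0.0                                  # no neighbors outside the domain
u_new = thomas(a, b, c, u)
print(u_new[:5])
```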
Hong, Jun; Chen, Dongchu; Peng, Zhiqiang; Li, Zulin; Liu, Haibo; Guo, Jian
2018-05-01
A new method for measuring the alternating current (AC) half-wave voltage of a Mach-Zehnder modulator is proposed and verified by experiment in this paper. Based on opto-electronic self-oscillation technology, the physical relationship between the saturation output power of the oscillating signal and the AC half-wave voltage is revealed, and the value of the AC half-wave voltage is obtained by measuring the saturation output power of the oscillating signal. The experimental results show that the data measured with this new method agree with those of a traditional method, and neither an external microwave signal source nor calibration at different measurement frequencies is needed in the new method. The measuring process is simplified with this new method while the accuracy of measurement is maintained, and the method has good practical value.
Review on solving the forward problem in EEG source analysis
Hallez, Hans; Vanrumste, Bart; Grech, Roberta; Muscat, Joseph; De Clercq, Wim; Vergult, Anneleen; D'Asseler, Yves; Camilleri, Kenneth P; Fabri, Simon G; Van Huffel, Sabine; Lemahieu, Ignace
2007-01-01
Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem which is defined as finding brain sources which are responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers in this research field. Results It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation, and Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g. skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropic conducting compartments can conveniently be introduced. Then the focus will be set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations which are here performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system. Iterative methods are required to solve these sparse linear systems. The following iterative methods are discussed: successive over-relaxation, conjugate gradients method and algebraic multigrid method. Conclusion Solving the forward problem has been well documented in the past decades. In the past, simplified spherical head models were used, whereas nowadays a combination of imaging modalities is used to accurately describe the geometry of the head model. Efforts have been made to realistically describe the shape of the head model, as well as the heterogeneity of the tissue types, and to realistically determine the conductivity. However, the determination and validation of the in vivo conductivity values is still an important topic in this field. In addition, more studies have to be done on the influence of all the parameters of the head model and of the numerical techniques on the solution of the forward problem. PMID:18053144
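The review's final step, a large sparse linear system attacked with an iterative solver, is illustrated below with conjugate gradients on a generic 2D finite-difference Poisson matrix. The grid stands in for a head model only in the loosest sense; no EEG-specific geometry or conductivity is modeled, and the source term is random.

```python
# Sketch of the "large sparse system + iterative solver" step the review describes:
# a 2D finite-difference Poisson matrix solved with conjugate gradients. A generic
# grid stands in for the head model; there is no EEG-specific content here.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 60                                            # interior grid points per direction
h = 1.0 / (n + 1)
one_d = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1]) / h**2
I = sp.identity(n)
A = (sp.kron(I, one_d) + sp.kron(one_d, I)).tocsr()   # 5-point Laplacian, Dirichlet BCs

rng = np.random.default_rng(0)
b = rng.standard_normal(n * n)                    # stand-in source term
x, info = cg(A, b)                                # conjugate gradient iteration
print(info, np.linalg.norm(A @ x - b))            # info == 0 means the solver converged
```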
A General Architecture for Intelligent Tutoring of Diagnostic Classification Problem Solving
Crowley, Rebecca S.; Medvedeva, Olga
2003-01-01
We report on a general architecture for creating knowledge-based medical training systems to teach diagnostic classification problem solving. The approach is informed by our previous work describing the development of expertise in classification problem solving in Pathology. The architecture envelops the traditional Intelligent Tutoring System design within the Unified Problem-solving Method description Language (UPML) architecture, supporting component modularity and reuse. Based on the domain ontology, domain task ontology and case data, the abstract problem-solving methods of the expert model create a dynamic solution graph. Student interaction with the solution graph is filtered through an instructional layer, which is created by a second set of abstract problem-solving methods and pedagogic ontologies, in response to the current state of the student model. We outline the advantages and limitations of this general approach, and describe its implementation in SlideTutor, a developing Intelligent Tutoring System in Dermatopathology. PMID:14728159
Investigations of Sayre's Equation.
NASA Astrophysics Data System (ADS)
Shiono, Masaaki
Available from UMI in association with The British Library. Since the discovery of X-ray diffraction, various methods of using it to solve crystal structures have been developed. The major methods used can be divided into two categories: (1) Patterson function based methods; (2) direct phase-determination methods. In the early days of structure determination from X-ray diffraction, Patterson methods played the leading role. Direct phase-determining methods ('direct methods' for short) were introduced by D. Harker and J. S. Kasper in the form of inequality relationships in 1948. A significant development of direct methods was produced by Sayre (1952). The equation he introduced, generally called Sayre's equation, gives exact relationships between structure factors for equal atoms. Later Cochran (1955) derived the so-called triple phase relationship, the main means by which it has become possible to find the structure factor phases automatically by computer. Although the background theory of direct methods is very mathematical, the user of direct-methods computer programs needs no detailed knowledge of these automatic processes in order to solve structures. Recently introduced direct methods are based on Sayre's equation, so it is important to investigate its properties thoroughly. One such new method involves the Sayre equation tangent formula (SETF), which attempts to minimise the least-squares residual of Sayre's equations (Debaerdemaeker, Tate and Woolfson, 1985). In chapters I-III the principles and developments of direct methods are described, and in chapters IV-VI the properties of Sayre's equation and its modification are discussed. Finally, chapter VII describes an investigation of the possible use of an equation, similar in type to Sayre's equation, derived from the characteristics of the Patterson function.
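For reference, Sayre's equation for an equal-atom structure is usually quoted in the form below, where V is the unit-cell volume and θ_h relates the scattering factor of an atom to that of the "squared" atom. This is the standard textbook form of the relation, not a formula copied from the thesis, and the notation is an assumption of this note.

```latex
% Sayre's equation (standard equal-atom form; notation assumed, not taken from the thesis).
% F_h is the structure factor of reflection h, V the unit-cell volume, and
% theta_h = f_h / g_h the ratio of the atomic scattering factor to that of the "squared" atom.
F_{\mathbf{h}} = \frac{\theta_{\mathbf{h}}}{V} \sum_{\mathbf{k}} F_{\mathbf{k}} \, F_{\mathbf{h}-\mathbf{k}}
```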
A Relaxation Method for Nonlocal and Non-Hermitian Operators
NASA Astrophysics Data System (ADS)
Lagaris, I. E.; Papageorgiou, D. G.; Braun, M.; Sofianos, S. A.
1996-06-01
We present a grid method to solve the time-dependent Schrödinger equation (TDSE). It uses the Crank-Nicolson scheme to propagate the wavefunction forward in time and finite differences to approximate the derivative operators. The resulting sparse linear system is solved by the symmetric successive overrelaxation iterative technique. The method handles local and nonlocal interactions and Hamiltonians that correspond either to Hermitian or to non-Hermitian matrices with real eigenvalues. We test the method by solving the TDSE in the imaginary time domain, thus converting the time propagation to asymptotic relaxation. Benchmark problems solved are both in one and two dimensions, with local, nonlocal, Hermitian and non-Hermitian Hamiltonians.
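The imaginary-time relaxation idea is easy to demonstrate on a textbook case. The sketch below propagates a 1D harmonic-oscillator wavefunction with Crank-Nicolson in imaginary time until it relaxes to the ground state; the paper solves the linear system with SSOR, whereas a direct banded solve stands in here, so this illustrates the propagation scheme only, with grid and step sizes chosen arbitrarily.

```python
# Sketch of imaginary-time Crank-Nicolson relaxation for the 1D Schroedinger equation
# (harmonic oscillator, atomic units). A direct banded solve replaces the paper's SSOR
# iteration; only the relaxation scheme itself is illustrated.
import numpy as np
from scipy.linalg import solve_banded

n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
dtau = 0.01

# Tridiagonal Hamiltonian H = -0.5 d^2/dx^2 + 0.5 x^2 (finite differences).
diag = 1.0 / dx**2 + 0.5 * x**2
off = np.full(n - 1, -0.5 / dx**2)

def apply_H(psi):
    out = diag * psi
    out[:-1] += off * psi[1:]
    out[1:] += off * psi[:-1]
    return out

# Banded form of A = I + (dtau/2) H for scipy.linalg.solve_banded.
ab = np.zeros((3, n))
ab[0, 1:] = 0.5 * dtau * off          # superdiagonal
ab[1, :] = 1.0 + 0.5 * dtau * diag    # main diagonal
ab[2, :-1] = 0.5 * dtau * off         # subdiagonal

psi = np.exp(-((x - 1.0) ** 2))       # arbitrary start; relaxes to the ground state
for _ in range(2000):
    rhs = psi - 0.5 * dtau * apply_H(psi)     # (I - dtau/2 H) psi
    psi = solve_banded((1, 1), ab, rhs)       # solve (I + dtau/2 H) psi_new = rhs
    psi /= np.sqrt(np.sum(psi**2) * dx)       # renormalize after each relaxation step

energy = np.sum(psi * apply_H(psi)) * dx
print(energy)   # approaches 0.5 hartree, the exact ground-state energy
```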
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given, with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace span{v, Av, ..., A^(m-1)v} and seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now just becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
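The core projection step described above is sketched below: an Arnoldi iteration builds an orthonormal Krylov basis and a small projected matrix, from which an approximate solution of a linear system is extracted. The test matrix, its size, and the subspace dimension are arbitrary choices for illustration, not a control problem from the overview.

```python
# Minimal Arnoldi sketch: build an orthonormal basis V of the Krylov subspace
# span{b, Ab, ..., A^(m-1) b} plus the small projected matrix H, then extract a
# projected (FOM-style) approximate solution of A x = b. Test data are arbitrary.
import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                   # lucky breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
n, m = 500, 30
A = np.diag(np.linspace(1.0, 10.0, n)) + 1e-2 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
V, H = arnoldi(A, b, m)
y = np.linalg.solve(H[:m, :m], np.linalg.norm(b) * np.eye(m)[:, 0])
x_m = V[:, :m] @ y                                 # approximate solution from the subspace
print(np.linalg.norm(A @ x_m - b) / np.linalg.norm(b))   # relative residual
```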
Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.
2013-10-01
In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
NASA Technical Reports Server (NTRS)
Miller, R. H.
1982-01-01
Results obtained during the development of a consistent aerodynamic theory for rotors in hovering flight are discussed. Methods of aerodynamic analysis were developed which are adequate for general design purposes until such time as more elaborate solutions become available, in particular solutions which include real-fluid effects. Several problems were encountered in the course of this development, and many remain to be solved; however, it is felt that a better understanding of the aerodynamic phenomena involved was obtained. Remaining uncertainties are discussed.
Intervention into a turbulent urban situation: A case study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Caldwell, G. M., Jr.
1973-01-01
The application of NASA management philosophy and techniques within New Castle County, Delaware, to meet actual problems of community violence is reported. It resulted in restructuring the county's approach to problems of this nature and in the development of a comprehensive system for planning, based on the NASA planning process. The method involved federal, state, and local resources, together with community representatives, in solving the problems. The concept of a turbulent environment is presented, with parallels drawn between NASA management experience and problems of management within an urban arena.
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
Lin, S H; Sahai, R; Eyring, H
1971-04-01
A theoretical model for the accumulation of pesticides in soil has been proposed and discussed from the viewpoint of heterogeneous reaction kinetics, with the basic aim of understanding the complex nature of soil processes relating to environmental pollution. In the bulk of the soil, the pesticide disappears by diffusion and a chemical reaction; the rate processes considered on the surface of the soil are diffusion, chemical reaction, vaporization, and regular pesticide application. The differential equations involved have been solved analytically by the Laplace-transform method.
On the computation of steady Hopper flows. II: von Mises materials in various geometries
NASA Astrophysics Data System (ADS)
Gremaud, Pierre A.; Matthews, John V.; O'Malley, Meghan
2004-11-01
Similarity solutions are constructed for the flow of granular materials through hoppers. Unlike previous work, the present approach applies to nonaxisymmetric containers. The model involves ten unknowns (stresses, velocity, and plasticity function) determined by nine nonlinear first order partial differential equations together with a quadratic algebraic constraint (yield condition). A pseudospectral discretization is applied; the resulting problem is solved with a trust region method. The important role of the hopper geometry on the flow is illustrated by several numerical experiments of industrial relevance.
Object Transportation by Two Mobile Robots with Hand Carts.
Sakuyama, Takuya; Figueroa Heredia, Jorge David; Ogata, Taiki; Hara, Tatsunori; Ota, Jun
2014-01-01
This paper proposes a methodology by which two small mobile robots can grasp, lift, and transport large objects using hand carts. The specific problems involve generating robot actions and determining the hand cart positions to achieve the stable loading of objects onto the carts. These problems are solved using nonlinear optimization, and we propose an algorithm for generating robot actions. The proposed method was verified through simulations and experiments using actual devices in a real environment. The proposed method could reduce the number of robots required to transport large objects by 50-60%. In addition, we demonstrated the efficacy of this task in real environments where errors occur in robot sensing and movement.
Bajaj, Chandrajit; Chen, Shun-Chuan; Rand, Alexander
2011-01-01
In order to compute polarization energy of biomolecules, we describe a boundary element approach to solving the linearized Poisson-Boltzmann equation. Our approach combines several important features including the derivative boundary formulation of the problem and a smooth approximation of the molecular surface based on the algebraic spline molecular surface. State of the art software for numerical linear algebra and the kernel independent fast multipole method is used for both simplicity and efficiency of our implementation. We perform a variety of computational experiments, testing our method on a number of actual proteins involved in molecular docking and demonstrating the effectiveness of our solver for computing molecular polarization energy. PMID:21660123
Multiscale model reduction for shale gas transport in poroelastic fractured media
NASA Astrophysics Data System (ADS)
Akkutlu, I. Yucel; Efendiev, Yalchin; Vasilyeva, Maria; Wang, Yuhe
2018-01-01
Inherently coupled flow and geomechanics processes in fractured shale media have implications for shale gas production. The system involves highly complex geo-textures comprised of a heterogeneous anisotropic fracture network spatially embedded in an ultra-tight matrix. In addition, nonlinearities due to viscous flow, diffusion, and desorption in the matrix and high velocity gas flow in the fractures complicates the transport. In this paper, we develop a multiscale model reduction approach to couple gas flow and geomechanics in fractured shale media. A Discrete Fracture Model (DFM) is used to treat the complex network of fractures on a fine grid. The coupled flow and geomechanics equations are solved using a fixed stress-splitting scheme by solving the pressure equation using a continuous Galerkin method and the displacement equation using an interior penalty discontinuous Galerkin method. We develop a coarse grid approximation and coupling using the Generalized Multiscale Finite Element Method (GMsFEM). GMsFEM constructs the multiscale basis functions in a systematic way to capture the fracture networks and their interactions with the shale matrix. Numerical results and an error analysis is provided showing that the proposed approach accurately captures the coupled process using a few multiscale basis functions, i.e. a small fraction of the degrees of freedom of the fine-scale problem.
Parallel Preconditioning for CFD Problems on the CM-5
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)
1994-01-01
Up to now, preconditioning methods on massively parallel systems have faced a major difficulty. The preconditioning methods most successful at accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
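The "n independent least-squares problems" idea can be illustrated with a generic sparse-approximate-inverse construction: each column of the approximate inverse is fitted by a small least-squares problem over a fixed sparsity pattern, and the result is handed to GMRES as the preconditioner. The pattern choice, test matrix, and solver settings below are assumptions for the sketch and are unrelated to the CM-5 implementation in the abstract.

```python
# Illustrative SPAI-style sketch: build a sparse approximate inverse M of A column by
# column (sparsity pattern taken from A itself), then use M as the preconditioner in
# GMRES. Generic construction, not the paper's parallel implementation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

def approximate_inverse(A):
    """For each column j, minimize ||A m_j - e_j|| with m_j restricted to the pattern of A[:, j]."""
    A_csc = A.tocsc()
    A_csr = A.tocsr()
    n = A_csc.shape[0]
    cols = []
    for j in range(n):
        pattern = A_csc[:, j].indices                               # allowed nonzero rows of m_j
        rows = np.unique(np.concatenate([A_csc[:, k].indices for k in pattern]))
        Asub = A_csr[rows][:, pattern].toarray()                    # small dense least-squares block
        e = (rows == j).astype(float)
        m, *_ = np.linalg.lstsq(Asub, e, rcond=None)
        col = np.zeros(n)
        col[pattern] = m
        cols.append(col)
    return sp.csr_matrix(np.column_stack(cols))

# Small nonsymmetric convection-diffusion-like test matrix.
n = 200
A = sp.diags([-1.5 * np.ones(n - 1), 4.0 * np.ones(n), -0.5 * np.ones(n - 1)],
             [-1, 0, 1], format="csc")
b = np.ones(n)
M = approximate_inverse(A)
x, info = gmres(A, b, M=M)                      # M plays the role of an approximate A^{-1}
print(info, np.linalg.norm(A @ x - b))          # info == 0 means convergence
```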
Yakubova, Gulnoza; Hughes, Elizabeth M; Hornberger, Erin
2015-09-01
The purpose of this study was to determine the effectiveness of a point-of-view video modeling intervention to teach mathematics problem-solving when working on word problems involving subtracting mixed fractions with uncommon denominators. Using a multiple-probe across students design of single-case methodology, three high school students with ASD completed the study. All three students demonstrated greater accuracy in solving fraction word problems and maintained accuracy levels at a 1-week follow-up.
Journey into Problem Solving: A Gift from Polya
ERIC Educational Resources Information Center
Lederman, Eric
2009-01-01
In "How to Solve It", accomplished mathematician and skilled communicator George Polya describes a four-step universal solving technique designed to help students develop mathematical problem-solving skills. By providing a glimpse at the grace with which experts solve problems, Polya provides definable methods that are not exclusive to…
A new Newton-like method for solving nonlinear equations.
Saheya, B; Chen, Guo-Qing; Sui, Yun-Kang; Wu, Cai-Ying
2016-01-01
This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator, which generalizes the local linear model. We then employ the new approximation for nonlinear equations and propose an improved Newton's method to solve them. The new method revises the Jacobian matrix by a rank-one matrix at each iteration and obtains the quadratic convergence property. The numerical performance and comparison show that the proposed method is efficient.
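The abstract's key ingredient, revising the Jacobian by a rank-one matrix at each iteration, is most familiar from Broyden's update. The sketch below shows that classical rank-one update on a small test system; note it is Broyden's method, not the paper's rational-approximation scheme, and the test equations and starting point are invented for the example.

```python
# Broyden's rank-one Jacobian update on a toy 2x2 nonlinear system -- a classical
# illustration of the "rank-one revision" idea, not the paper's own method.
import numpy as np

def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,   # unit circle
                     x[0] - x[1] ** 3])              # cubic curve

def fd_jacobian(F, x, eps=1e-7):
    """Finite-difference Jacobian used only to initialize the approximation."""
    f0 = F(x)
    J = np.zeros((len(x), len(x)))
    for i in range(len(x)):
        xp = x.copy(); xp[i] += eps
        J[:, i] = (F(xp) - f0) / eps
    return J

def broyden(F, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)                            # initial Jacobian approximation
    fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -fx)                  # quasi-Newton step
        x_new = x + s
        f_new = F(x_new)
        y = f_new - fx
        B += np.outer(y - B @ s, s) / (s @ s)        # rank-one (Broyden) update
        x, fx = x_new, f_new
        if np.linalg.norm(fx) < tol:
            break
    return x

print(broyden(F, [0.5, 0.8]))   # converges to the root near (0.56, 0.83)
```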
Ghanbari, Behzad
2014-01-01
We aim to study the convergence of the homotopy analysis method (HAM for short) for solving special nonlinear Volterra-Fredholm integrodifferential equations. The sufficient condition for the convergence of the method is briefly addressed. Some illustrative examples are also presented to demonstrate the validity and applicability of the technique. Comparison of the results obtained by HAM with the exact solution shows that the method is reliable and capable of providing analytic treatment for solving such equations.
Some observations on a new numerical method for solving Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Kumar, A.
1981-01-01
An explicit-implicit technique for solving the Navier-Stokes equations is described which is much less complex than other implicit methods. It is used to solve a complex, two-dimensional, steady-state, supersonic-flow problem. The computational efficiency of the method and the quality of the solution obtained from it at high Courant-Friedrichs-Lewy (CFL) numbers are discussed. Modifications are discussed and certain observations are made about the method which may be helpful in using it successfully.
NASA Astrophysics Data System (ADS)
Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle
2016-08-01
Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.