Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment
Prevost, Luanna B.; Lemons, Paula P.
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021
Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment.
Prevost, Luanna B; Lemons, Paula P
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. © 2016 L. B. Prevost and P. P. Lemons. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Fuchs, Lynn S.; Fuchs, Douglas; Hamlett, Carol L.; Lambert, Warren; Stuebing, Karla; Fletcher, Jack M.
2009-01-01
The purpose of this study was to explore patterns of difficulty in 2 domains of mathematical cognition: computation and problem solving. Third graders (n = 924; 47.3% male) were representatively sampled from 89 classrooms; assessed on computation and problem solving; classified as having difficulty with computation, problem solving, both domains, or neither domain; and measured on 9 cognitive dimensions. Difficulty occurred across domains with the same prevalence as difficulty with a single domain; specific difficulty was distributed similarly across domains. Multivariate profile analysis on cognitive dimensions and chi-square tests on demographics showed that specific computational difficulty was associated with strength in language and weaknesses in attentive behavior and processing speed; problem-solving difficulty was associated with deficient language as well as race and poverty. Implications for understanding mathematics competence and for the identification and treatment of mathematics difficulties are discussed. PMID:20057912
ERIC Educational Resources Information Center
She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua
2012-01-01
This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of Web-searching strategies students used and its correlation with task performance and domain knowledge also…
Blanchard-Fields, Fredda; Mienaltowski, Andrew; Seay, Renee Baldi
2007-01-01
Using the Everyday Problem Solving Inventory of Cornelius and Caspi, we examined differences in problem-solving strategy endorsement and effectiveness in two domains of everyday functioning (instrumental or interpersonal, and a mixture of the two domains) and for four strategies (avoidance-denial, passive dependence, planful problem solving, and cognitive analysis). Consistent with past research, our research showed that older adults were more problem focused than young adults in their approach to solving instrumental problems, whereas older adults selected more avoidant-denial strategies than young adults when solving interpersonal problems. Overall, older adults were also more effective than young adults when solving everyday problems, in particular for interpersonal problems.
ERIC Educational Resources Information Center
Kostousov, Sergei; Kudryavtsev, Dmitry
2017-01-01
Problem solving is a critical competency for modern world and also an effective way of learning. Education should not only transfer domain-specific knowledge to students, but also prepare them to solve real-life problems--to apply knowledge from one or several domains within specific situation. Problem solving as teaching tool is known for a long…
Hoppmann, Christiane A; Coats, Abby Heckman; Blanchard-Fields, Fredda
2008-07-01
Qualitative interviews on family and financial problems from 332 adolescents, young, middle-aged, and older adults, demonstrated that developmentally relevant goals predicted problem-solving strategy use over and above problem domain. Four focal goals concerned autonomy, generativity, maintaining good relationships with others, and changing another person. We examined both self- and other-focused problem-solving strategies. Autonomy goals were associated with self-focused instrumental problem solving and generative goals were related to other-focused instrumental problem solving in family and financial problems. Goals of changing another person were related to other-focused instrumental problem solving in the family domain only. The match between goals and strategies, an indicator of problem-solving adaptiveness, showed that young individuals displayed the greatest match between autonomy goals and self-focused problem solving, whereas older adults showed a greater match between generative goals and other-focused problem solving. Findings speak to the importance of considering goals in investigations of age-related differences in everyday problem solving.
ERIC Educational Resources Information Center
Roesch, Frank; Nerb, Josef; Riess, Werner
2015-01-01
Our study investigated whether problem-oriented designed ecology lessons with phases of direct instruction and of open experimentation foster the development of cross-domain and domain-specific components of "experimental problem-solving ability" better than conventional lessons in science. We used a paper-and-pencil test to assess…
Tu, S W; Eriksson, H; Gennari, J H; Shahar, Y; Musen, M A
1995-06-01
PROTEGE-II is a suite of tools and a methodology for building knowledge-based systems and domain-specific knowledge-acquisition tools. In this paper, we show how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients. To apply PROTEGE-II, (1) we construct a decomposable problem-solving method called episodic skeletal-plan refinement, (2) we build an application ontology that consists of the terms and relations in the domain, and of method-specific distinctions not already captured in the domain terms, and (3) we specify mapping relations that link terms from the application ontology to the domain-independent terms used in the problem-solving method. From the application ontology, we automatically generate a domain-specific knowledge-acquisition tool that is custom-tailored for the application. The knowledge-acquisition tool is used for the creation and maintenance of domain knowledge used by the problem-solving method. The general goal of the PROTEGE-II approach is to produce systems and components that are reusable and easily maintained. This is the rationale for constructing ontologies and problem-solving methods that can be composed from a set of smaller-grained methods and mechanisms. This is also why we tightly couple the knowledge-acquisition tools to the application ontology that specifies the domain terms used in the problem-solving systems. Although our evaluation is still preliminary, for the application task of providing protocol-based decision support, we show that these goals of reusability and easy maintenance can be achieved. We discuss design decisions and the tradeoffs that have to be made in the development of the system.
Problem Solving and Chemical Equilibrium: Successful versus Unsuccessful Performance.
ERIC Educational Resources Information Center
Camacho, Moises; Good, Ron
1989-01-01
Describes the problem-solving behaviors of experts and novices engaged in solving seven chemical equilibrium problems. Lists 27 behavioral tendencies of successful and unsuccessful problem solvers. Discusses several implications for a problem solving theory, think-aloud techniques, adequacy of the chemistry domain, and chemistry instruction.…
Trumpower, David L; Goldsmith, Timothy E; Guynn, Melissa J
2004-12-01
Solving training problems with nonspecific goals (NG; i.e., solving for all possible unknown values) often results in better transfer than solving training problems with standard goals (SG; i.e., solving for one particular unknown value). In this study, we evaluated an attentional focus explanation of the goal specificity effect. According to the attentional focus view, solving NG problems causes attention to be directed to local relations among successive problem states, whereas solving SG problems causes attention to be directed to relations between the various problem states and the goal state. Attention to the former is thought to enhance structural knowledge about the problem domain and thus promote transfer. Results supported this view because structurally different transfer problems were solved faster following NG training than following SG training. Moreover, structural knowledge representations revealed more links depicting local relations following NG training and more links to the training goal following SG training. As predicted, these effects were obtained only by domain novices.
Skill Acquisition: Compilation of Weak-Method Problem Solutions.
ERIC Educational Resources Information Center
Anderson, John R.
According to the ACT theory of skill acquisition, cognitive skills are encoded by a set of productions, which are organized according to a hierarchical goal structure. People solve problems in new domains by applying weak problem-solving procedures to declarative knowledge they have about this domain. From these initial problem solutions,…
Problem Solving. Research Brief
ERIC Educational Resources Information Center
Muir, Mike
2004-01-01
No longer solely the domain of Mathematics, problem solving permeates every area of today's curricula. Ideally, students apply heuristic strategies in varied contexts and novel situations in every subject taught. The ability to solve problems is a basic life skill and is essential to understanding technical subjects. Problem-solving is a…
Step by Step: Biology Undergraduates' Problem-Solving Procedures during Multiple-Choice Assessment
ERIC Educational Resources Information Center
Prevost, Luanna B.; Lemons, Paula P.
2016-01-01
This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this…
A General Architecture for Intelligent Tutoring of Diagnostic Classification Problem Solving
Crowley, Rebecca S.; Medvedeva, Olga
2003-01-01
We report on a general architecture for creating knowledge-based medical training systems to teach diagnostic classification problem solving. The approach is informed by our previous work describing the development of expertise in classification problem solving in Pathology. The architecture envelops the traditional Intelligent Tutoring System design within the Unified Problem-solving Method description Language (UPML) architecture, supporting component modularity and reuse. Based on the domain ontology, domain task ontology and case data, the abstract problem-solving methods of the expert model create a dynamic solution graph. Student interaction with the solution graph is filtered through an instructional layer, which is created by a second set of abstract problem-solving methods and pedagogic ontologies, in response to the current state of the student model. We outline the advantages and limitations of this general approach, and describe its implementation in SlideTutor, a developing Intelligent Tutoring System in Dermatopathology. PMID:14728159
Working memory dysfunctions predict social problem solving skills in schizophrenia.
Huang, Jia; Tan, Shu-ping; Walsh, Sarah C; Spriggens, Lauren K; Neumann, David L; Shum, David H K; Chan, Raymond C K
2014-12-15
The current study aimed to examine the contribution of neurocognition and social cognition to components of social problem solving. Sixty-seven inpatients with schizophrenia and 31 healthy controls were administered batteries of neurocognitive tests, emotion perception tests, and the Chinese Assessment of Interpersonal Problem Solving Skills (CAIPSS). MANOVAs were conducted to investigate the domains in which patients with schizophrenia showed impairments. Correlations were used to determine which impaired domains were associated with social problem solving, and multiple regression analyses were conducted to compare the relative contribution of neurocognitive and social cognitive functioning to components of social problem solving. Compared with healthy controls, patients with schizophrenia performed significantly worse in sustained attention, working memory, negative emotion identification, intention identification and all components of the CAIPSS. Specifically, sustained attention, working memory and negative emotion identification were found to correlate with social problem solving, and 1-back accuracy significantly predicted the poor performance in social problem solving. Among the dysfunctions in schizophrenia, working memory contributed most to deficits in social problem solving in patients with schizophrenia. This finding provides support for targeting working memory in the development of future social problem solving rehabilitation interventions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Flexibility in Problem Solving: The Case of Equation Solving
ERIC Educational Resources Information Center
Star, Jon R.; Rittle-Johnson, Bethany
2008-01-01
A key learning outcome in problem-solving domains is the development of flexible knowledge, where learners know multiple strategies and adaptively choose efficient strategies. Two interventions hypothesized to improve flexibility in problem solving were experimentally evaluated: prompts to discover multiple strategies and direct instruction on…
A Cognitive Simulator for Learning the Nature of Human Problem Solving
NASA Astrophysics Data System (ADS)
Miwa, Kazuhisa
Problem solving is understood as a process through which the states of a problem are transformed from the initial state to the goal state by applying adequate operators. Within this framework, knowledge and strategies are given as operators for the search. A central interest of researchers in the domain of problem solving is to explain problem-solving behavior in terms of the knowledge and strategies that the problem solver has. We call the interplay between problem solvers' knowledge/strategies and their behavior the causal relation between mental operations and behavior. It is crucially important, we believe, for novice learners in this domain to understand the causal relation between mental operations and behavior. Based on this insight, we have constructed a learning system in which learners can control the mental operations of a computational agent that solves a task, such as its knowledge, heuristics, and cognitive capacity, and can observe its behavior. We also introduced this system in a university class, and we discuss the findings that the participants discovered.
Jiang, Feng; Han, Ji-zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be recast as a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods. PMID:29623088
Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong
2018-01-01
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be recast as a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
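The LWLR step named in the abstract above is standard textbook machinery; the minimal sketch below (Python/NumPy, not the authors' FCLWLR code, with invented data) shows how a locally weighted linear regression prediction solves one weighted least-squares problem per query point, with weights decaying with distance from the query.

```python
import numpy as np

def lwlr_predict(X, y, x_query, tau=1.0):
    """Predict y at x_query by solving a locally weighted least-squares fit."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])         # add intercept column
    xq = np.concatenate([[1.0], np.atleast_1d(x_query)])
    # Gaussian kernel weights: nearby training points dominate the local fit
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)  # weighted normal equations
    return xq @ theta

# hypothetical data standing in for one constructed auxiliary-domain feature
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(lwlr_predict(X, y, x_query=np.array([2.5]), tau=0.5))  # close to sin(2.5)
```

Because every prediction re-fits its own local model, no global parametric form is assumed, which is the property the abstract credits for avoiding underfitting and overfitting.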
Dispositional Predictors of Problem Solving in the Field of Office Work
ERIC Educational Resources Information Center
Rausch, Andreas
2017-01-01
It was investigated how domain-specific knowledge, fluid intelligence, vocational interest and work-related self-efficacy predicted domain-specific problem-solving performance in the field of office work. The participants included 100 German VET (vocational education and training) students nearing the end of a 3-year apprenticeship program as an…
Calculating Probabilistic Distance to Solution in a Complex Problem Solving Domain
ERIC Educational Resources Information Center
Sudol, Leigh Ann; Rivers, Kelly; Harris, Thomas K.
2012-01-01
In complex problem solving domains, correct solutions are often comprised of a combination of individual components. Students usually go through several attempts, each attempt reflecting an individual solution state that can be observed during practice. Classic metrics to measure student performance over time rely on counting the number of…
Problem Solving and the Development of Expertise in Management.
ERIC Educational Resources Information Center
Lash, Fredrick B.
This study investigated novice and expert problem solving behavior in management to examine the role of domain specific knowledge on problem solving processes. Forty-one middle level marketing managers in a large petrochemical organization provided think aloud protocols in response to two hypothetical management scenarios. Protocol analysis…
The Process of Solving Complex Problems
ERIC Educational Resources Information Center
Fischer, Andreas; Greiff, Samuel; Funke, Joachim
2012-01-01
This article is about Complex Problem Solving (CPS), its history in a variety of research domains (e.g., human problem solving, expertise, decision making, and intelligence), a formal definition and a process theory of CPS applicable to the interdisciplinary field. CPS is portrayed as (a) knowledge acquisition and (b) knowledge application…
ERIC Educational Resources Information Center
Dermitzaki, Irini; Leondari, Angeliki; Goudas, Marios
2009-01-01
This study aimed at investigating the relations between students' strategic behaviour during problem solving, task performance and domain-specific self-concept. A total of 167 first- and second-graders were individually examined in tasks involving cubes assembly and in academic self-concept in mathematics. Students' cognitive, metacognitive, and…
ERIC Educational Resources Information Center
Choi, Jung-Min
2010-01-01
The primary concern in current interaction design is focused on how to help users solve problems and achieve goals more easily and efficiently. While users' sufficient knowledge acquisition of operating a product or system is considered important, their acquisition of problem-solving knowledge in the task domain has largely been disregarded. As a…
An Ada Based Expert System for the Ada Version of SAtool II. Volume 1 and 2
1991-06-06
Integrated Computer-Aided Manufacturing (ICAM) (20). In fact, IDEF0 stands for ICAM Definition Method Zero. IDEF0 defines a subset of SA that omits ... reasoning that has been programmed). An expert's knowledge is specific to one problem domain as opposed to knowledge about general problem-solving ... techniques. General problem domains are medicine, finance, science or engineering and so forth, in which an expert can solve specific problems very well
ERIC Educational Resources Information Center
Reddy, M. Vijaya Bhaskara; Panacharoensawad, Buncha
2017-01-01
In the twenty-first century, abundant innovative tools have been identified by researchers to evaluate conceptual understanding, problem solving, and beliefs and attitudes about physics. Nevertheless, there is a lack of a wide variety of evaluation instruments with respect to problem solving in physics. It indicates that the complexity of the domain fields…
Fictitious domain method for fully resolved reacting gas-solid flow simulation
NASA Astrophysics Data System (ADS)
Zhang, Longhui; Liu, Kai; You, Changfu
2015-10-01
Fully resolved simulation (FRS) for gas-solid multiphase flow treats solid objects as finite-sized regions in the flow field whose behaviour is predicted by directly solving equations in both the fluid and solid regions. Fixed-mesh numerical methods, such as the fictitious domain method, are preferred for solving FRS problems and have been widely researched. However, for reacting gas-solid flows no suitable fictitious domain numerical method has been developed. This work presents a new fictitious domain finite element method for FRS of reacting particulate flows. Low-Mach-number reacting flow governing equations are solved sequentially on a regular background mesh. Particles are immersed in the mesh and driven by their surface forces and torques integrated over the immersed interfaces. Additional treatments for energy and surface reactions are developed. Several numerical test cases validated the method, and a simulation of a falling array of burning carbon particles demonstrated the capability to solve moving reacting particle cluster problems.
Cognitive Science: Problem Solving And Learning For Physics Education
NASA Astrophysics Data System (ADS)
Ross, Brian H.
2007-11-01
Cognitive Science has focused on general principles of problem solving and learning that might be relevant for physics education research. This paper examines three selected issues that have relevance for the difficulty of transfer in problem solving domains: specialized systems of memory and reasoning, the importance of content in thinking, and a characterization of memory retrieval in problem solving. In addition, references to these issues are provided to allow the interested researcher entries to the literatures.
ERIC Educational Resources Information Center
Tawfik, Andrew; Jonassen, David
2013-01-01
Solving complex, ill-structured problems may be effectively supported by case-based reasoning through case libraries that provide just-in-time domain-specific principles in the form of stories. The cases not only articulate previous experiences of practitioners, but also serve as problem-solving narratives from which learners can acquire meaning.…
Working memory, worry, and algebraic ability.
Trezise, Kelly; Reeve, Robert A
2014-05-01
Math anxiety (MA)-working memory (WM) relationships have typically been examined in the context of arithmetic problem solving, and little research has examined the relationship in other math domains (e.g., algebra). Moreover, researchers have tended to examine MA/worry separately from math problem solving activities and have used general WM tasks rather than domain-relevant WM measures. Furthermore, it seems to have been assumed that MA affects all areas of math. It is possible, however, that MA is restricted to particular math domains. To examine these issues, the current research assessed claims about how differences in WM and algebraic worry affect algebraic problem solving. A sample of 80 14-year-old female students completed algebraic worry, algebraic WM, algebraic problem solving, nonverbal IQ, and general math ability tasks. Latent profile analysis of the worry and WM measures identified four performance profiles (subgroups) that differed in worry level and WM capacity. Consistent with expectations, subgroup membership was associated with algebraic problem solving performance: high WM/low worry > moderate WM/low worry = moderate WM/high worry > low WM/high worry. Findings are discussed in terms of the conceptual relationship between emotion and cognition in mathematics and implications for the MA-WM-performance relationship. Copyright © 2013 Elsevier Inc. All rights reserved.
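For readers unfamiliar with the profiling step described above, the rough sketch below uses scikit-learn's Gaussian mixture model as a stand-in for latent profile analysis on simulated, standardized worry and WM scores; the data and the anti-correlation are invented for illustration and do not reproduce the study's analysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n = 80                                              # sample size as in the study
worry = rng.standard_normal(n)                      # standardized algebraic worry
wm = -0.5 * worry + 0.8 * rng.standard_normal(n)    # WM loosely anti-correlated with worry
scores = np.column_stack([worry, wm])

gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
profile = gmm.predict(scores)                       # subgroup label per student

for k in range(4):                                  # profile sizes and mean (worry, WM)
    members = scores[profile == k]
    if len(members):
        print(k, len(members), members.mean(axis=0).round(2))
```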
Cognitive and psychomotor effects of risperidone in schizophrenia and schizoaffective disorder.
Houthoofd, Sofie A M K; Morrens, Manuel; Sabbe, Bernard G C
2008-09-01
The aim of this review was to discuss data from double-blind, randomized controlled trials (RCTs) that have investigated the effects of oral and long-acting injectable risperidone on cognitive and psychomotor functioning in patients with schizophrenia or schizoaffective disorder. PubMed/MEDLINE and the Institute of Scientific Information Web of Science database were searched for relevant English-language double-blind RCTs published between March 2000 and July 2008, using the terms schizophrenia, schizoaffective disorder, cognition, risperidone, psychomotor, processing speed, attention, vigilance, working memory, verbal learning, visual learning, reasoning, problem solving, social cognition, MATRICS, and long-acting. Relevant studies included patients with schizophrenia or schizoaffective disorder. Cognitive domains were delineated at the Consensus Conferences of the National Institute of Mental Health-Measurement And Treatment Research to Improve Cognition in Schizophrenia (NIMH-MATRICS). The tests employed to assess each domain and psychomotor functioning, and the within-group and between-group comparisons of risperidone with haloperidol and other atypical antipsychotics, are presented. The results of individual tests were included when they were individually presented and interpretable for either drug; outcomes that were presented as cluster scores or factor structures were excluded. A total of 12 articles were included in this review. Results suggested that the use of oral risperidone appeared to be associated with within-group improvements on the cognitive domains of processing speed, attention/vigilance, verbal and visual learning and memory, and reasoning and problem solving in patients with schizophrenia or schizoaffective disorder. Risperidone and haloperidol seemed to generate similar beneficial effects (on the domains of processing speed, attention/vigilance, [verbal and nonverbal] working memory, and visual learning and memory, as well as psychomotor functioning), although the results for verbal fluency, verbal learning and memory, and reasoning and problem solving were not unanimous, and no comparative data on social cognition were available. Similar cognitive effects were found with risperidone, olanzapine, and quetiapine on the domains of verbal working memory and reasoning and problem solving, as well as verbal fluency. More research is needed on the domains in which study results were contradictory. For olanzapine versus risperidone, these were verbal and visual learning and memory and psychomotor functioning. No comparative data for olanzapine and risperidone were available for the social cognition domain. For quetiapine versus risperidone, the domains in which no unanimity was found were processing speed, attention/vigilance, nonverbal working memory, and verbal learning and memory. The limited available reports on risperidone versus clozapine suggest that: risperidone was associated with improved, and clozapine with worsened, performance on the nonverbal working memory domain; risperidone improved and clozapine did not improve reasoning and problem-solving performance; clozapine improved, and risperidone did not improve, social cognition performance. Use of long-acting injectable risperidone seemed to be associated with improved performance in the domains of attention/vigilance, verbal learning and memory, and reasoning and problem solving, as well as psychomotor functioning. 
The results for the nonverbal working memory domain were indeterminate, and no clear improvement was seen in the social cognition domain. The domains of processing speed, verbal working memory, and visual learning and memory, as well as verbal fluency, were not assessed. The results of this review of within-group comparisons of oral risperidone suggest that the agent appeared to be associated with improved functioning in the cognitive domains of processing speed, attention/vigilance, verbal and visual learning and memory, and reasoning and problem solving in patients with schizophrenia or schizoaffective disorder. Long-acting injectable risperidone seemed to be associated with improved functioning in the domains of attention/vigilance, verbal learning and memory, and reasoning and problem solving, as well as psychomotor functioning, in patients with schizophrenia or schizoaffective disorder.
The Transitory Phase to the Attainment of Self-Regulatory Skill in Mathematical Problem Solving
ERIC Educational Resources Information Center
Lazakidou, G.; Paraskeva, F.; Retalis, S.
2007-01-01
Three phases of development of self-regulatory skill in the domain of mathematical problem solving were designed to examine students' behaviour and the effects on their problem solving ability. Forty-eight Grade 4 students (10 year olds) participated in this pilot study. The students were randomly assigned to one of three groups, each representing…
ERIC Educational Resources Information Center
Brooks, Christopher Darren
2009-01-01
The purpose of this study was to investigate the effectiveness of process-oriented and product-oriented worked example strategies and the mediating effect of prior knowledge (high versus low) on problem solving and learner attitude in the domain of microeconomics. In addition, the effect of these variables on learning efficiency as well as the…
Decision-Making and Problem-Solving Approaches in Pharmacy Education
Martin, Lindsay C.; Holdford, David A.
2016-01-01
Domain 3 of the Center for the Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes recommends that pharmacy school curricula prepare students to be better problem solvers, but it is silent on the type of problems they should be prepared to solve. We identified five basic approaches to problem solving in the curriculum at a pharmacy school: clinical, ethical, managerial, economic, and legal. These approaches were compared to determine a generic process that could be applied to all pharmacy decisions. Although there were similarities in the approaches, generic problem solving processes may not work for all problems. Successful problem solving requires identification of the problems faced and application of the right approach to the situation. We also advocate that the CAPE Outcomes make explicit the importance of different approaches to problem solving. Future pharmacists will need multiple approaches to problem solving to adapt to the complexity of health care. PMID:27170823
Decision-Making and Problem-Solving Approaches in Pharmacy Education.
Martin, Lindsay C; Donohoe, Krista L; Holdford, David A
2016-04-25
Domain 3 of the Center for the Advancement of Pharmacy Education (CAPE) 2013 Educational Outcomes recommends that pharmacy school curricula prepare students to be better problem solvers, but it is silent on the type of problems they should be prepared to solve. We identified five basic approaches to problem solving in the curriculum at a pharmacy school: clinical, ethical, managerial, economic, and legal. These approaches were compared to determine a generic process that could be applied to all pharmacy decisions. Although there were similarities in the approaches, generic problem solving processes may not work for all problems. Successful problem solving requires identification of the problems faced and application of the right approach to the situation. We also advocate that the CAPE Outcomes make explicit the importance of different approaches to problem solving. Future pharmacists will need multiple approaches to problem solving to adapt to the complexity of health care.
Problem-Solving Examples as Interactive Learning Objects for Educational Digital Libraries
ERIC Educational Resources Information Center
Brusilovsky, Peter; Yudelson, Michael; Hsiao, I-Han
2009-01-01
The paper analyzes three major problems encountered by our team as we endeavored to turn problem solving examples in the domain of programming into highly reusable educational activities, which could be included as first class objects in various educational digital libraries. It also suggests three specific approaches to resolving these problems,…
NASA Astrophysics Data System (ADS)
Hafner, Robert; Stewart, Jim
Past problem-solving research has provided a basis for helping students structure their knowledge and apply appropriate problem-solving strategies to solve problems for which their knowledge (or mental models) of scientific phenomena is adequate (model-using problem solving). This research examines how problem solving in the domain of Mendelian genetics proceeds in situations where solvers' mental models are insufficient to solve the problems at hand (model-revising problem solving). Such situations require solvers to use existing models to recognize anomalous data and to revise those models to accommodate the data. The study was conducted in the context of a 9-week high school genetics course and addressed: the heuristics characteristic of successful model-revising problem solving; the nature of the model revisions made by students, as well as the nature of model development across problem types; and the basis upon which solvers decide that a revised model is sufficient (that it has both predictive and explanatory power).
Problem Solving in Electricity.
ERIC Educational Resources Information Center
Caillot, Michel; Chalouhi, Elias
Two studies were conducted to describe how students perform direct current (D-C) circuit problems. It was hypothesized that problem solving in the electricity domain depends largely on good visual processing of the circuit diagram and that this processing depends on the ability to recognize when two or more electrical components are in series or…
Cognitive Principles of Problem Solving and Instruction. Final Report.
ERIC Educational Resources Information Center
Greeno, James G.; And Others
Research in this project studied cognitive processes involved in understanding and solving problems used in instruction in the domain of mathematics, and explored implications of these cognitive analyses for the design of instruction. Three general issues were addressed: knowledge required for understanding problems, knowledge of the conditions…
NASA Astrophysics Data System (ADS)
Roesch, Frank; Nerb, Josef; Riess, Werner
2015-03-01
Our study investigated whether problem-oriented designed ecology lessons with phases of direct instruction and of open experimentation foster the development of cross-domain and domain-specific components of experimental problem-solving ability better than conventional lessons in science. We used a paper-and-pencil test to assess students' abilities in a quasi-experimental intervention study utilizing a pretest/posttest control-group design (N = 340; average-performing sixth-grade students). The treatment group received lessons on forest ecosystems consistent with the principle of education for sustainable development. This learning environment was expected to help students enhance their ecological knowledge and their theoretical and methodological experimental competencies. Two control groups received either the teachers' usual lessons on forest ecosystems or non-specific lessons on other science topics. We found that the treatment promoted specific components of experimental problem-solving ability (generating epistemic questions, planning two-factorial experiments, and identifying correct experimental controls). However, the observed effects were small, and awareness of aspects of higher ecological experimental validity was not promoted by the treatment.
A new principle technic for the transformation from frequency domain to time domain
NASA Astrophysics Data System (ADS)
Gao, Ben-Qing
2017-03-01
A principle technic for the transformation from frequency domain to time domain is presented. Firstly, a special type of frequency domain transcendental equation is obtained for an expected frequency domain parameter that is a rational or irrational fraction expression. Secondly, the inverse Laplace transformation is performed. When the two time-domain factors corresponding to the two frequency domain factors on the two sides of the frequency domain transcendental equation are known quantities, a time domain transcendental equation is reached. At last, the expected time domain parameter corresponding to the expected frequency domain parameter can be obtained by the inverse convolution process. Proceeding from the rational or irrational fraction expression, the complete solution process is provided. In addition, the properties of the time domain sequence are analyzed and the strategy for choosing the parameter values is described. Numerical examples are presented to verify the proposed theory and technic. Beyond rational or irrational fraction expressions, examples involving the complex relative permittivity of water and plasma are used for verification. The principle method proposed in the paper can easily solve problems that are difficult to solve by Laplace transformation.
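As a concrete illustration of the final "inverse convolution" step mentioned above, the sketch below recovers an unknown discrete sequence by forward substitution. It is a generic discrete-time sketch under the assumption that the sampled relation has the form h = f * g with f[0] ≠ 0; it is not the paper's continuous-time derivation.

```python
import numpy as np

def deconvolve(f, h):
    """Solve h[n] = sum_k f[k] * g[n-k] for g by forward substitution."""
    g = np.zeros(len(h))
    for n in range(len(h)):
        acc = sum(f[k] * g[n - k] for k in range(1, min(n, len(f) - 1) + 1))
        g[n] = (h[n] - acc) / f[0]            # requires f[0] != 0
    return g

f = np.array([1.0, 0.5, 0.25])                # known time-domain factor
g_true = np.array([2.0, -1.0, 0.5, 0.0])      # sequence to recover
h = np.convolve(f, g_true)[:len(g_true)]      # "measured" product sequence
print(deconvolve(f, h))                       # recovers [2.0, -1.0, 0.5, 0.0]
```

The triangular structure of the convolution sum is what lets the time domain equation be solved step by step once the two known time-domain factors are available.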
Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.
Musen, M. A.
1998-01-01
When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods (standard algorithms for automating stereotypical tasks) and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community. PMID:9929181
Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.
Musen, M A
1998-01-01
When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods (standard algorithms for automating stereotypical tasks) and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
Jona, Celine M H; Labuschagne, Izelle; Mercieca, Emily-Clare; Fisher, Fiona; Gluyas, Cathy; Stout, Julie C; Andrews, Sophie C
2017-01-01
Family functioning in Huntington's disease (HD) is known from previous studies to be adversely affected. However, which aspects of family functioning are disrupted is unknown, limiting the empirical basis around which to create supportive interventions. The aim of the current study was to assess family functioning in HD families. We assessed family functioning in 61 participants (38 HD gene-expanded participants and 23 family members) using the McMaster Family Assessment Device (FAD; Epstein, Baldwin and Bishop, 1983), which provides scores for seven domains of functioning: Problem Solving; Communication; Affective Involvement; Affective Responsiveness; Behavior Control; Roles; and General Family Functioning. The most commonly reported disrupted domain for HD participants was Affective Involvement, which was reported by 39.5% of HD participants, followed closely by General Family Functioning (36.8%). For family members, the most commonly reported dysfunctional domains were Affective Involvement and Communication (both 52.2%). Furthermore, symptomatic HD participants reported more disruption to Problem Solving than pre-symptomatic HD participants. In terms of agreement between pre-symptomatic and symptomatic HD participants and their family members, all domains showed moderate to very good agreement. However, on average, family members rated Communication as more disrupted than their HD affected family member. These findings highlight the need to target areas of emotional engagement, communication skills and problem solving in family interventions in HD.
Jona, Celine M.H.; Labuschagne, Izelle; Mercieca, Emily-Clare; Fisher, Fiona; Gluyas, Cathy; Stout, Julie C.; Andrews, Sophie C.
2017-01-01
Background: Family functioning in Huntington’s disease (HD) is known from previous studies to be adversely affected. However, which aspects of family functioning are disrupted is unknown, limiting the empirical basis around which to create supportive interventions. Objective: The aim of the current study was to assess family functioning in HD families. Methods: We assessed family functioning in 61 participants (38 HD gene-expanded participants and 23 family members) using the McMaster Family Assessment Device (FAD; Epstein, Baldwin and Bishop, 1983), which provides scores for seven domains of functioning: Problem Solving; Communication; Affective Involvement; Affective Responsiveness; Behavior Control; Roles; and General Family Functioning. Results: The most commonly reported disrupted domain for HD participants was Affective Involvement, which was reported by 39.5% of HD participants, followed closely by General Family Functioning (36.8%). For family members, the most commonly reported dysfunctional domains were Affective Involvement and Communication (both 52.2%). Furthermore, symptomatic HD participants reported more disruption to Problem Solving than pre-symptomatic HD participants. In terms of agreement between pre-symptomatic and symptomatic HD participants and their family members, all domains showed moderate to very good agreement. However, on average, family members rated Communication as more disrupted than their HD affected family member. Conclusion: These findings highlight the need to target areas of emotional engagement, communication skills and problem solving in family interventions in HD. PMID:28968240
The effects of monitoring environment on problem-solving performance.
Laird, Brian K; Bailey, Charles D; Hester, Kim
2018-01-01
While effective and efficient solving of everyday problems is important in business domains, little is known about the effects of workplace monitoring on problem-solving performance. In a laboratory experiment, we explored the monitoring environment's effects on an individual's propensity to (1) establish pattern solutions to problems, (2) recognize when pattern solutions are no longer efficient, and (3) solve complex problems. Under three work monitoring regimes (no monitoring, human monitoring, and electronic monitoring), 114 participants solved puzzles for monetary rewards. Based on research related to worker autonomy and theory of social facilitation, we hypothesized that monitored (versus non-monitored) participants would (1) have more difficulty finding a pattern solution, (2) more often fail to recognize when the pattern solution is no longer efficient, and (3) solve fewer complex problems. Our results support the first two hypotheses, but in complex problem solving, an interaction was found between self-assessed ability and the monitoring environment.
Non-Mathematical Problem Solving in Organic Chemistry
ERIC Educational Resources Information Center
Cartrette, David P.; Bodner, George M.
2010-01-01
Differences in problem-solving ability among organic chemistry graduate students and faculty were studied within the domain of problems that involved the determination of the structure of a molecule from the molecular formula of the compound and a combination of IR and ¹H NMR spectra. The participants' performance on these tasks…
Developing Procedural Flexibility: Are Novices Prepared to Learn from Comparing Procedures?
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Star, Jon R.; Durkin, Kelley
2012-01-01
Background: A key learning outcome in problem-solving domains is the development of procedural flexibility, where learners know multiple procedures and use them appropriately to solve a range of problems (e.g., Verschaffel, Luwel, Torbeyns, & Van Dooren, 2009). However, students often fail to become flexible problem solvers in mathematics. To…
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
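The subdomain/time-step idea can be illustrated with a deliberately simple toy: a 1D spring-mass chain whose left half is advanced with a large step while the right half is subcycled, with the interface displacements linearly interpolated in time. This is only a generic multi-time-step sketch with invented parameters, not the paper's peridynamic formulation or coupling scheme.

```python
import numpy as np

N, k, mass = 20, 100.0, 1.0            # nodes, spring stiffness, nodal mass
split = N // 2                         # nodes [0, split) coarse, [split, N) fine
DT, m = 1e-3, 10                       # coarse time step and subcycling ratio
dt = DT / m

u, v = np.zeros(N), np.zeros(N)
u[0] = 0.1                             # initial disturbance in the coarse region

def accel(disp):
    """Internal force per unit mass for a fixed-fixed linear spring chain."""
    padded = np.concatenate(([0.0], disp, [0.0]))
    return k * (padded[:-2] - 2.0 * padded[1:-1] + padded[2:]) / mass

for _ in range(2000):                  # 2 s of simulated time
    # 1) advance the coarse subdomain one large step (symplectic Euler)
    a = accel(u)
    u_coarse_old = u[:split].copy()
    v[:split] += a[:split] * DT
    u[:split] += v[:split] * DT
    # 2) subcycle the fine subdomain, interpolating coarse displacements in time
    for s in range(1, m + 1):
        u_interp = u.copy()
        u_interp[:split] = (1 - s / m) * u_coarse_old + (s / m) * u[:split]
        a = accel(u_interp)
        v[split:] += a[split:] * dt
        u[split:] += v[split:] * dt

print("max |u| after run:", float(np.abs(u).max()))
```

The fine region pays for its small time step only locally, which is the efficiency argument the abstract makes; the actual method supplies the stability and accuracy analysis needed to make such coupling trustworthy.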
The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems.
Baars, Martine; Wijnia, Lisette; Paas, Fred
2017-01-01
Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and then solved hereditary problems that they chose themselves from a set of problems with five complexity levels. In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set of problems but did not solve it. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way.
NASA Astrophysics Data System (ADS)
Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.
2015-06-01
The majority of practical acoustics problems require solving boundary problems in non-canonical domains. Therefore, constructing analytical solutions of mathematical physics boundary problems for non-canonical domains is both attractive from the academic viewpoint and very useful for developing efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution strategies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems whose domains can be constructed as the union of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.
Individual differences in solving arithmetic word problems
2013-01-01
Background: With the present functional magnetic resonance imaging (fMRI) study at 3 T, we investigated the neural correlates of visualization and verbalization during arithmetic word problem solving. In the domain of arithmetic, visualization might mean to visualize numbers and (intermediate) results while calculating, and verbalization might mean that numbers and (intermediate) results are verbally repeated during calculation. If the brain areas involved in number processing are domain-specific as assumed, that is, that the left angular gyrus (AG) shows an affinity to the verbal domain, and that the left and right intraparietal sulcus (IPS) shows an affinity to the visual domain, the activation of these areas should show a dependency on an individual's cognitive style. Methods: 36 healthy young adults participated in the fMRI study. The participants' habitual use of visualization and verbalization during solving arithmetic word problems was assessed with a short self-report assessment. During the fMRI measurement, arithmetic word problems that had to be solved by the participants were presented in an event-related design. Results: We found that visualizers showed greater brain activation in brain areas involved in visual processing, and that verbalizers showed greater brain activation within the left angular gyrus. Conclusions: Our results indicate that cognitive styles or preferences play an important role in understanding brain activation. Our results confirm that strong visualizers use mental imagery more strongly than weak visualizers during calculation. Moreover, our results suggest that the left AG shows a specific affinity to the verbal domain and subserves number processing in a modality-specific way. PMID:23883107
Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft
NASA Astrophysics Data System (ADS)
Tsuchiya, Takeshi
This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.
Teaching Thinking and Problem Solving.
ERIC Educational Resources Information Center
Bransford, John; And Others
1986-01-01
This article focuses on two approaches to teaching reasoning and problem solving. One emphasizes the role of domain-specific knowledge; the other emphasizes general strategic and metacognitive knowledge. Many instructional programs are based on the latter approach. The article concludes that these programs can be strengthened by focusing on domain…
MARS-MD: rejection based image domain material decomposition
NASA Astrophysics Data System (ADS)
Bateman, C. J.; Knight, D.; Brandwacht, B.; McMahon, J.; Healy, J.; Panta, R.; Aamir, R.; Rajendran, K.; Moghiseh, M.; Ramyar, M.; Rundle, D.; Bennett, J.; de Ruiter, N.; Smithies, D.; Bell, S. T.; Doesburg, R.; Chernoglazov, A.; Mandalika, V. B. H.; Walsh, M.; Shamshad, M.; Anjomrouz, M.; Atharifard, A.; Vanden Broeke, L.; Bheesette, S.; Kirkbride, T.; Anderson, N. G.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Butler, A. P. H.; Butler, P. H.
2018-05-01
This paper outlines image domain material decomposition algorithms that have been routinely used in MARS spectral CT systems. These algorithms (known collectively as MARS-MD) are based on a pragmatic heuristic for solving the under-determined problem where there are more materials than energy bins. This heuristic contains three parts: (1) splitting the problem into a number of possible sub-problems, each containing fewer materials; (2) solving each sub-problem; and (3) applying rejection criteria to eliminate all but one sub-problem's solution. An advantage of this process is that different constraints can be applied to each sub-problem if necessary. In addition, the result of this process is that solutions will be sparse in the material domain, which reduces crossover of signal between material images. Two algorithms based on this process are presented: the Segmentation variant, which uses segmented material classes to define each sub-problem; and the Angular Rejection variant, which defines the rejection criteria using the angle between reconstructed attenuation vectors.
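The three-part heuristic (split, solve, reject) can be sketched for a single voxel as follows. This is a toy illustration only: the attenuation matrix, the material amounts, and the residual-based rejection rule are assumptions made for demonstration, not MARS calibration data or the exact rejection criteria of either MARS-MD variant.

```python
import itertools
import numpy as np
from scipy.optimize import nnls

# Toy per-voxel decomposition: 3 energy bins, 4 candidate materials (under-determined).
# Columns of A are assumed per-bin attenuation signatures (made up for illustration).
A = np.array([[1.0, 0.8, 0.3, 0.5],
              [0.7, 0.9, 0.6, 0.2],
              [0.4, 0.5, 0.9, 0.8]])
y = A[:, [0, 2]] @ np.array([0.6, 0.4])      # voxel truly contains materials 0 and 2

best = None
# (1) split: enumerate sub-problems with no more materials than energy bins
for subset in itertools.combinations(range(A.shape[1]), 2):
    A_sub = A[:, list(subset)]
    # (2) solve each sub-problem with a non-negativity constraint
    x_sub, residual = nnls(A_sub, y)
    # (3) reject: keep only the sub-problem whose solution explains the data best
    if best is None or residual < best[2]:
        best = (subset, x_sub, residual)

print("selected materials:", best[0], "amounts:", np.round(best[1], 3))
```

In the Segmentation and Angular Rejection variants the candidate sub-problems and the rejection test are defined differently, but the overall split/solve/reject structure is the same.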
Using a general problem-solving strategy to promote transfer.
Youssef-Shalala, Amina; Ayres, Paul; Schubert, Carina; Sweller, John
2014-09-01
Cognitive load theory was used to hypothesize that a general problem-solving strategy based on a make-as-many-moves-as-possible heuristic could facilitate problem solutions for transfer problems. In four experiments, school students were required to learn about a topic through practice with a general problem-solving strategy, through a conventional problem-solving strategy, or by studying worked examples. In Experiments 1 and 2, using junior high school students learning geometry, low-knowledge students in the general problem-solving group scored significantly higher on near or far transfer tests than the conventional problem-solving group. In Experiment 3, an advantage for a general problem-solving group over a group presented with worked examples was obtained on far transfer tests using the same curriculum materials, again presented to junior high school students. No differences between conditions were found in Experiments 1, 2, or 3 using test problems similar to the acquisition problems. Experiment 4 used senior high school students studying economics and found the general problem-solving group scored significantly higher than the conventional problem-solving group on both similar and transfer tests. It was concluded that the general problem-solving strategy was helpful for novices, but not for students who had access to domain-specific knowledge. PsycINFO Database Record (c) 2014 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Bless, Robert R.
1991-01-01
A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.
The Association between Motivation, Affect, and Self-regulated Learning When Solving Problems
Baars, Martine; Wijnia, Lisette; Paas, Fred
2017-01-01
Self-regulated learning (SRL) skills are essential for learning during school years, particularly in complex problem-solving domains, such as biology and math. Although many studies have focused on the cognitive resources that are needed for learning to solve problems in a self-regulated way, affective and motivational resources have received much less research attention. The current study investigated the relation between affect (i.e., Positive Affect and Negative Affect Scale), motivation (i.e., autonomous and controlled motivation), mental effort, SRL skills, and problem-solving performance when learning to solve biology problems in a self-regulated online learning environment. In the learning phase, secondary education students studied video-modeling examples of how to solve hereditary problems and then solved hereditary problems that they chose themselves from a set of problems with different complexity levels (i.e., five levels). In the posttest, students solved hereditary problems, self-assessed their performance, and chose a next problem from the set but did not solve it. The results from this study showed that negative affect, inaccurate self-assessments during the posttest, and higher perceptions of mental effort during the posttest were negatively associated with problem-solving performance after learning in a self-regulated way. PMID:28848467
Controlling Uncertainty: A Review of Human Behavior in Complex Dynamic Environments
ERIC Educational Resources Information Center
Osman, Magda
2010-01-01
Complex dynamic control (CDC) tasks are a type of problem-solving environment used for examining many cognitive activities (e.g., attention, control, decision making, hypothesis testing, implicit learning, memory, monitoring, planning, and problem solving). Because of their popularity, there have been many findings from diverse domains of research…
The Design of Computerized Practice Fields for Problem Solving and Contextualized Transfer
ERIC Educational Resources Information Center
Riedel, Jens; Fitzgerald, Gail; Leven, Franz; Toenshoff, Burkhard
2003-01-01
Current theories of learning emphasize the importance of learner-centered, active, authentic, environments for meaningful knowledge construction. From this perspective, computerized case-based learning systems afford practice fields for learners to build domain knowledge and problem-solving skills and to support contextualized transfer of…
Reversible Reasoning and the Working Backwards Problem Solving Strategy
ERIC Educational Resources Information Center
Ramful, Ajay
2015-01-01
Making sense of mathematical concepts and solving mathematical problems may demand different forms of reasoning. Some of these are domain-based, such as algebraic, geometric, or statistical reasoning, while others are more general, such as inductive/deductive reasoning. This article aims at giving visibility to a particular form of reasoning…
NASA Technical Reports Server (NTRS)
Gomez, Fernando
1989-01-01
It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.
Beyond rules: The next generation of expert systems
NASA Technical Reports Server (NTRS)
Ferguson, Jay C.; Wagner, Robert E.
1987-01-01
The PARAGON Representation, Management, and Manipulation system is introduced. The concepts of knowledge representation, knowledge management, and knowledge manipulation are combined in a comprehensive system for solving real-world problems requiring high levels of expertise in a real-time environment. In most applications the complexity of the problem and the representation used to describe the domain knowledge tend to obscure the information from which solutions are derived. This inhibits the acquisition and verification/validation of domain knowledge, places severe constraints on the ability to extend and maintain a knowledge base, and makes generic problem-solving strategies difficult to develop. A unique hybrid system was developed to overcome these traditional limitations.
Determining the Exchangeability of Concept Map and Problem-Solving Essay Scores
ERIC Educational Resources Information Center
Hollenbeck, Keith; Twyman, Todd; Tindal, Gerald
2006-01-01
This study investigated the score exchangeability of concept maps with problem-solving essays. Of interest was whether sixth-grade students' concept maps predicted their scores on essay responses that used concept map content. Concept maps were hypothesized to be alternatives to performance assessments for content-area domain knowledge in science.…
ERIC Educational Resources Information Center
O'Connell, Ann Aileen
The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…
SCAMPER and Creative Problem Solving in Political Science: Insights from Classroom Observation
ERIC Educational Resources Information Center
Radziszewski, Elizabeth
2017-01-01
This article describes the author's experience using SCAMPER, a creativity-building technique, in a creative problem-solving session that was conducted in an environmental conflict course to generate ideas for managing postconflict stability. SCAMPER relies on cues to help students connect ideas from different domains of knowledge, explore random…
The Effect of Simulation Games on the Learning of Computational Problem Solving
ERIC Educational Resources Information Center
Liu, Chen-Chung; Cheng, Yuan-Bang; Huang, Chia-Wen
2011-01-01
Simulation games are now increasingly applied to many subject domains as they allow students to engage in discovery processes, and may facilitate a flow learning experience. However, the relationship between learning experiences and problem solving strategies in simulation games still remains unclear in the literature. This study, thus, analyzed…
ERIC Educational Resources Information Center
Mashood, K. K.; Singh, Vijay A.
2013-01-01
Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in…
Validation of Predictive Relationship of Creative Problem-Solving Attributes with Math Creativity
ERIC Educational Resources Information Center
Pham, Linh Hung
2014-01-01
This study was designed to investigate the predictive relationships of creative problem-solving attributes, which comprise divergent thinking, convergent thinking, motivation, general and domain knowledge and skills, and environment, with mathematical creativity of sixth grade students in Thai Nguyen City, Viet Nam. The study also aims to revise…
NASA Astrophysics Data System (ADS)
Mashood, K. K.; Singh, Vijay A.
2013-09-01
Research suggests that problem-solving skills are transferable across domains. This claim, however, needs further empirical substantiation. We suggest correlation studies as a methodology for making preliminary inferences about transfer. The correlation of the physics performance of students with their performance in chemistry and mathematics in highly competitive problem-solving examinations was studied using a massive database. The sample sizes ranged from hundreds to a few hundred thousand. Encouraged by the presence of significant correlations, we interviewed 20 students to explore the pedagogic potential of physics in imparting transferable problem-solving skills. We report strategies and practices relevant to physics employed by these students which foster transfer.
Undergraduate Performance in Solving Ill-Defined Biochemistry Problems
Sensibaugh, Cheryl A.; Madrid, Nathaniel J.; Choi, Hye-Jeong; Anderson, William L.; Osgood, Marcy P.
2017-01-01
With growing interest in promoting skills related to the scientific process, we studied performance in solving ill-defined problems demonstrated by graduating biochemistry majors at a public, minority-serving university. As adoption of techniques for facilitating the attainment of higher-order learning objectives broadens, so too does the need to appropriately measure and understand student performance. We extended previous validation of the Individual Problem Solving Assessment (IPSA) and administered multiple versions of the IPSA across two semesters of biochemistry courses. A final version was taken by majors just before program exit, and student responses on that version were analyzed both quantitatively and qualitatively. This mixed-methods study quantifies student performance in scientific problem solving, while probing the qualitative nature of unsatisfactory solutions. Of the five domains measured by the IPSA, we found that average graduates were only successful in two areas: evaluating given experimental data to state results and reflecting on performance after the solution to the problem was provided. The primary difficulties in each domain were quite different. The most widespread challenge for students was to design an investigation that rationally aligned with a given hypothesis. We also extend the findings into pedagogical recommendations. PMID:29180350
Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Metcalfe, Arron W.S.; Swigart, Anna G.; Menon, Vinod
2014-01-01
The study of developmental disorders can provide a unique window into the role of domain-general cognitive abilities and neural systems in typical and atypical development. Mathematical disabilities (MD) are characterized by marked difficulty in mathematical cognition in the presence of preserved intelligence and verbal ability. Although studies of MD have most often focused on the role of core deficits in numerical processing, domain-general cognitive abilities, in particular working memory (WM), have also been implicated. Here we identify specific WM components that are impaired in children with MD and then examine their role in arithmetic problem solving. Compared to typically developing (TD) children, the MD group demonstrated lower arithmetic performance and lower visuo-spatial working memory (VSWM) scores with preserved abilities on the phonological and central executive components of WM. Whole brain analysis revealed that, during arithmetic problem solving, left posterior parietal cortex, bilateral dorsolateral and ventrolateral prefrontal cortex, cingulate gyrus and precuneus, and fusiform gyrus responses were positively correlated with VSWM ability in TD children, but not in the MD group. Additional analyses using a priori posterior parietal cortex regions previously implicated in WM tasks, demonstrated a convergent pattern of results during arithmetic problem solving. These results suggest that MD is characterized by a common locus of arithmetic and VSWM deficits at both the cognitive and functional neuroanatomical levels. Unlike TD children, children with MD do not use VSWM resources appropriately during arithmetic problem solving. This work advances our understanding of VSWM as an important domain-general cognitive process in both typical and atypical mathematical skill development. PMID:23896444
Cross, Fiona R; Jackson, Robert R
2015-03-01
Intricate predatory strategies are widespread in the salticid subfamily Spartaeinae. The hypothesis we consider here is that the spartaeine species that are proficient at solving prey-capture problems are also proficient at solving novel problems. We used nine species from this subfamily in our experiments. Eight of these species (two Brettus, one Cocalus, three Cyrba, two Portia) are known for specialized invasion of other spiders' webs and for actively choosing other spiders as preferred prey ('araneophagy'). Except for Cocalus, these species also use trial and error to derive web-based signals with which they gain dynamic fine control of the resident spider's behaviour ('aggressive mimicry'). The ninth species, Paracyrba wanlessi, is not araneophagic and instead specializes at preying on mosquitoes. We presented these nine species with a novel confinement problem that could be solved by trial and error. The test spider began each trial on an island in a tray of water, with an atoll surrounding the island. From the island, the spider could choose between two potential escape tactics (leap or swim), but we decided at random before the trial which tactic would fail and which tactic would achieve partial success. Our findings show that the seven aggressive-mimic species are proficient at solving the confinement problem by repeating 'correct' choices and by switching to the alternative tactic after making an 'incorrect' choice. However, as predicted, there was no evidence of C. gibbosus or P. wanlessi, the two non-aggressive-mimic species, solving the confinement problem. We discuss these findings in the context of an often-made distinction between domain-specific and domain-general cognition.
Three-dimensional electrical impedance tomography: a topology optimization approach.
Mello, Luís Augusto Motta; de Lima, Cícero Ribeiro; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez; Silva, Emílio Carlos Nelli
2008-02-01
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. A sequence of linear programming problems, allowing for constraints, is solved utilizing this method. In each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed (thus, increasing the accuracy of the finite element results). The algorithm is tested using numerically simulated data and also experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique to be applied to monitor lung aeration, including the possibility of imaging a pneumothorax.
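The "sequence of linear programming problems" idea can be shown with a deliberately small sketch. A toy quadratic misfit stands in for the EIT data mismatch, and finite-difference gradients plus box move limits define each LP subproblem; none of this reproduces the paper's finite element forward model, electrode model, or constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Toy misfit standing in for the EIT data mismatch; design variables are
# box-constrained "resistivity" values in [0, 1].
def misfit(x):
    return float(np.sum((x - 0.3) ** 2))

def gradient(x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (misfit(x + e) - misfit(x - e)) / (2 * h)
    return g

x = np.full(4, 0.9)
move = 0.2                                   # move limit for each LP subproblem
for _ in range(50):
    g = gradient(x)
    lo = np.maximum(0.0, x - move)
    hi = np.minimum(1.0, x + move)
    lp = linprog(c=g, bounds=list(zip(lo, hi)), method="highs")
    if misfit(lp.x) < misfit(x):             # accept the LP step
        x = lp.x
    else:                                    # otherwise tighten the move limit
        move *= 0.5
    if move < 1e-4:
        break

print("recovered values:", np.round(x, 3))   # should approach 0.3
```

Each iteration linearizes the misfit about the current design and lets an LP move the variables within the move limit; shrinking the limit when a step does not improve the misfit keeps the linearization trustworthy.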
Numerical time-domain electromagnetics based on finite-difference and convolution
NASA Astrophysics Data System (ADS)
Lin, Yuanqu
Time-domain methods possess a number of advantages over their frequency-domain counterparts for the solution of wideband, nonlinear, and time-varying electromagnetic scattering and radiation phenomena. Time domain integral equation (TDIE)-based methods, which incorporate the beneficial properties of the integral equation method, are thus well suited for solving broadband scattering problems for homogeneous scatterers. Widespread adoption of TDIE solvers has been retarded relative to other techniques by their inefficiency, inaccuracy and instability. Moreover, two-dimensional (2D) problems are especially problematic, because 2D Green's functions have infinite temporal support, exacerbating these difficulties. This thesis proposes a finite difference delay modeling (FDDM) scheme for the solution of the integral equations of 2D transient electromagnetic scattering problems. The method discretizes the integral equations temporally using first- and second-order finite differences to map Laplace-domain equations into the Z domain before transforming to the discrete time domain. The resulting procedure is unconditionally stable because of the nature of the Laplace- to Z-domain mapping. The first FDDM method developed in this thesis uses second-order Lagrange basis functions with Galerkin's method for spatial discretization. The second application of the FDDM method discretizes the space using a locally-corrected Nystrom method, which accelerates the precomputation phase and achieves high order accuracy. The Fast Fourier Transform (FFT) is applied to accelerate the marching-on-time process in both methods. While FDDM methods demonstrate impressive accuracy and stability in solving wideband scattering problems for homogeneous scatterers, they still have limitations in analyzing interactions between several inhomogeneous scatterers. Therefore, this thesis devises a multi-region finite-difference time-domain (MR-FDTD) scheme based on domain-optimal Green's functions for solving sparsely-populated problems. The scheme uses a discrete Green's function (DGF) on the FDTD lattice to truncate the local subregions, and thus reduces reflection error on the local boundary. A continuous Green's function (CGF) is implemented to pass the influence of external fields into each FDTD region, which mitigates the numerical dispersion and anisotropy of standard FDTD. Numerical results will illustrate the accuracy and stability of the proposed techniques.
NASA Astrophysics Data System (ADS)
Perepelkin, Eugene; Tarelkin, Aleksandr
2018-02-01
A magnetostatics problem arises when searching for the distribution of the magnetic field generated by magnet systems of many physics research facilities, e.g., accelerators. The domain in which the boundary-value problem is solved often has a piecewise smooth boundary. In this case, numerical calculations of the problem require consideration of the solution behavior in the corner domain. In this work we obtain an upper estimate of the magnetic field growth using an integral formulation of the magnetostatic problem and, based on this estimate, propose a method for condensing the difference mesh near the corner domain of the vacuum region in three-dimensional space.
Cognitive Task Analysis: Implications for the Theory and Practice of Instructional Design.
ERIC Educational Resources Information Center
Dehoney, Joanne
Cognitive task analysis grew out of efforts by cognitive psychologists to understand problem-solving in a lab setting. It has proved a useful tool for describing expert performance in complex problem solving domains. This review considers two general models of cognitive task analysis and examines the procedures and results of analyses in three…
The Effects of Group Monitoring on Fatigue-Related Einstellung during Mathematical Problem Solving
ERIC Educational Resources Information Center
Frings, Daniel
2011-01-01
Fatigue resulting from sleep deficit can lead to decreased performance in a variety of cognitive domains and can result in potentially serious accidents. The present study aimed to test whether fatigue leads to increased Einstellung (low levels of cognitive flexibility) in a series of mathematical problem-solving tasks. Many situations involving…
Does Problem Solving = Prior Knowledge + Reasoning Skills in Earth Science? An Exploratory Study
ERIC Educational Resources Information Center
Chang, Chun-Yen
2010-01-01
This study examined the interrelationship between tenth-grade students' problem solving ability (PSA) and their domain-specific knowledge (DSK) as well as reasoning skills (RS) in a secondary school of Taiwan. The PSA test was designed to emphasize students' divergent-thinking ability (DTA) and convergent-thinking ability (CTA) subscales in the…
ERIC Educational Resources Information Center
Lubin, Ian A.; Ge, Xun
2012-01-01
This paper discusses a qualitative study which examined students' problem-solving, metacognition, and motivation in a learning environment designed for teaching educational technology to pre-service teachers. The researchers converted a linear and didactic learning environment into a new open learning environment by contextualizing domain-related…
Creativity in Unique Problem-Solving in Mathematics and Its Influence on Motivation for Learning
ERIC Educational Resources Information Center
Bishara, Saied
2016-01-01
This research study investigates the ability of students to tackle the solving of unique mathematical problems in the domain of numerical series, verbal and formal, and its influence on the motivation of junior high students with learning disabilities in the Arab sector. Two instruments were used to collect the data: mathematical series were…
NASA Astrophysics Data System (ADS)
Hu, Y.; Ji, Y.; Egbert, G. D.
2015-12-01
The fictitious time domain method (FTD), based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems, which can reduce the computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite difference time-stepping scheme, with CPML (Convolutional PML) boundary conditions for the whole computational domain, including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, to compromise between efficiency (longer fictitious time step) and accuracy; we have generally found that a host/air resistivity contrast of 10^-3 is sufficient. (3) A "Modified" Fourier Transform (MFT) allows us to recover the system's impulse response, mapping it from the fictitious time domain to the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to get the final full waveform and full time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.
NASA Astrophysics Data System (ADS)
Mönkölä, Sanna
2013-06-01
This study considers developing numerical solution techniques for the computer simulations of time-harmonic fluid-structure interaction between acoustic and elastic waves. The focus is on the efficiency of an iterative solution method based on a controllability approach and spectral elements. We concentrate on the model, in which the acoustic waves in the fluid domain are modeled by using the velocity potential and the elastic waves in the structure domain are modeled by using displacement. Traditionally, the complex-valued time-harmonic equations are used for solving the time-harmonic problems. Instead of that, we focus on finding periodic solutions without solving the time-harmonic problems directly. The time-dependent equations can be simulated with respect to time until a time-harmonic solution is reached, but the approach suffers from poor convergence. To overcome this challenge, we follow the approach first suggested and developed for the acoustic wave equations by Bristeau, Glowinski, and Périaux. Thus, we accelerate the convergence rate by employing a controllability method. The problem is formulated as a least-squares optimization problem, which is solved with the conjugate gradient (CG) algorithm. Computation of the gradient of the functional is done directly for the discretized problem. A graph-based multigrid method is used for preconditioning the CG algorithm.
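The inner kernel of the controllability approach, a least-squares problem solved with the conjugate gradient algorithm, can be illustrated independently of the spectral element and multigrid machinery. The sketch below is a plain, unpreconditioned CG for a symmetric positive definite system built from a 1D Laplacian-like matrix chosen only for the example.

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=200):
    """Unpreconditioned CG for A x = b with A symmetric positive definite.
    apply_A is a callable returning A @ x, so the matrix never has to be formed."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small SPD test system: a 1D Laplacian-like tridiagonal matrix.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(lambda v: A @ v, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```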
Production system chunking in SOAR: Case studies in automated learning
NASA Technical Reports Server (NTRS)
Allen, Robert
1989-01-01
A preliminary study of SOAR, a general intelligent architecture for automated problem solving and learning, is presented. The underlying principles of universal subgoaling and chunking were applied to a simple, yet representative, problem in artificial intelligence. A number of problem space representations were examined and compared. It is concluded that learning is an inherent and beneficial aspect of problem solving. Additional studies are suggested in domains relevant to mission planning and to SOAR itself.
Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice
2007-12-01
bounds. For the openstacks, TPP, and pipesworld domains, our results were qualitatively different: most instances in these domains were either easy...between our results in these two sets of domains. For most instances in the openstacks domain we found no k values that elicited a “yes” answer in
Triangular node for Transmission-Line Modeling (TLM) applied to bio-heat transfer.
Milan, Hugo F M; Gebremedhin, Kifle G
2016-12-01
Transmission-Line Modeling (TLM) is a numerical method used to solve complex and time-domain bio-heat transfer problems. In TLM, rectangles are used to discretize two-dimensional problems. The drawback in using rectangular shapes is that instead of refining only the domain of interest, a large additional domain will also be refined in the x and y axes, which results in increased computational time and memory space. In this paper, we developed a triangular node for TLM applied to bio-heat transfer that does not have the drawback associated with the rectangular nodes. The model includes heat source, blood perfusion (advection), boundary conditions and initial conditions. The boundary conditions could be adiabatic, temperature, heat flux, or convection. A matrix equation for TLM, which simplifies the solution of time-domain problems or solves steady-state problems, was also developed. The predicted results were compared against results obtained from the solution of a simplified two-dimensional problem, and they agreed within 1% for a mesh length of triangular faces of 59µm±9µm (mean±standard deviation) and a time step of 1ms. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Diamantopoulos, Theodore; Rowe, Kristopher; Diamessis, Peter
2017-11-01
The Collocation Penalty Method (CPM) solves a PDE on the interior of a domain, while weakly enforcing boundary conditions at domain edges via penalty terms, and naturally lends itself to high-order and multi-domain discretization. Such spectral multi-domain penalty methods (SMPM) have been used to solve the Navier-Stokes equations. Bounds for penalty coefficients are typically derived using the energy method to guarantee stability for time-dependent problems. The choice of collocation points and penalty parameter can greatly affect the conditioning and accuracy of a solution. Effort has been made in recent years to relate various high-order methods on multiple elements or domains under the umbrella of the Correction Procedure via Reconstruction (CPR). Most applications of CPR have focused on solving the compressible Navier-Stokes equations using explicit time-stepping procedures. A particularly important aspect which is still missing in the context of the SMPM is a study of the Helmholtz equation arising in many popular time-splitting schemes for the incompressible Navier-Stokes equations. Stability and convergence results for the SMPM for the Helmholtz equation will be presented. Emphasis will be placed on the efficiency and accuracy of high-order methods.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
2012-01-01
Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331
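The alternating scheme described above (ranking scores for a fixed graph combination, then re-weighting the graphs) can be sketched as follows. The quadratic objective, the softmax-style weight update, and the random similarity graphs are assumptions made for illustration; they are not the exact MultiG-Rank objective or its published update rules.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_graphs = 30, 3

# Assumed toy inputs: several similarity graphs over the same items and a
# query indicator vector y (1 for the query item, 0 elsewhere).
laplacians = []
for _ in range(n_graphs):
    W = rng.random((n, n))
    W = (W + W.T) / 2                     # symmetric similarity graph
    L = np.diag(W.sum(axis=1)) - W        # unnormalized graph Laplacian
    laplacians.append(L)
y = np.zeros(n)
y[0] = 1.0

w = np.full(n_graphs, 1.0 / n_graphs)      # graph weights, start uniform
lam, gamma = 1.0, 1.0
for _ in range(20):
    # (a) ranking scores for the current weighted Laplacian:
    #     minimize ||f - y||^2 + lam * f^T (sum_k w_k L_k) f
    L_mix = sum(wk * Lk for wk, Lk in zip(w, laplacians))
    f = np.linalg.solve(np.eye(n) + lam * L_mix, y)
    # (b) re-weight graphs: a smaller smoothness penalty f^T L_k f earns a
    #     larger weight (softmax-style update assumed here for illustration)
    smooth = np.array([f @ Lk @ f for Lk in laplacians])
    w = np.exp(-gamma * smooth)
    w /= w.sum()

print("learned graph weights:", np.round(w, 3))
print("top-ranked items:", np.argsort(-f)[:5])
```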
NASA Astrophysics Data System (ADS)
Britt, S.; Tsynkov, S.; Turkel, E.
2018-02-01
We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.
Development of Condensing Mesh Method for Corner Domain at Numerical Simulation Magnetic System
NASA Astrophysics Data System (ADS)
Perepelkin, E.; Tarelkin, A.; Polyakova, R.; Kovalenko, A.
2018-05-01
A magnetostatic problem arises in searching for the distribution of the magnetic field generated by magnet systems of many physics research facilities, e.g., accelerators. The domain in which the boundary-value problem is solved often has a piecewise smooth boundary. In this case, numerical calculations of the problem require the consideration of the solution behavior in the corner domain. In this work we obtain an upper estimate of the magnetic field growth and, based on this estimate, propose a method of condensing the difference grid near the corner domain of the vacuum region in three-dimensional space. An example of calculating a real model problem for SDP NICA in a domain containing a corner point is given.
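A one-dimensional sketch of the mesh-condensation idea is given below: node spacing shrinks geometrically toward the corner point where the field may grow rapidly. The grading ratio and node count are arbitrary illustrative choices, not the estimate-driven grading law derived in the paper.

```python
import numpy as np

def graded_nodes(length=1.0, n=16, ratio=0.8):
    """Nodes on [0, length] whose spacing shrinks geometrically toward x = 0,
    mimicking mesh condensation near a corner where the field may grow rapidly."""
    steps = ratio ** np.arange(n)[::-1]        # smallest step adjacent to the corner
    x = np.concatenate(([0.0], np.cumsum(steps)))
    return x * (length / x[-1])                # rescale so the last node sits at `length`

x = graded_nodes()
print("first few spacings near the corner:", np.round(np.diff(x)[:4], 4))
print("largest spacing away from the corner:", round(float(np.diff(x)[-1]), 4))
```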
Learning from external environments using Soar
NASA Technical Reports Server (NTRS)
Laird, John E.
1989-01-01
Soar, like the previous PRODIGY and Theo, is a problem-solving architecture that attempts to learn from experience; unlike them, it takes a more uniform approach, using a single forward-chaining architecture for planning and execution. Its single learning mechanism, designated 'chunking', is domain-independent. Two developmental approaches have been employed with Soar: the first of these allows the architecture to attempt a problem on its own, while the second involves a degree of external guidance. This learning through guidance is integrated with general problem-solving and autonomous learning, leading to an avoidance of human interaction for simple problems that Soar can solve on its own.
Cognitive functioning and social problem-solving skills in schizophrenia.
Hatashita-Wong, Michi; Smith, Thomas E; Silverstein, Steven M; Hull, James W; Willson, Deborah F
2002-05-01
This study examined the relationships between symptoms, cognitive functioning, and social skill deficits in schizophrenia. Few studies have incorporated measures of cognitive functioning and symptoms in predictive models for social problem solving. For our study, 44 participants were recruited from consecutive outpatient admissions. Neuropsychological tests were given to assess cognitive function, and social problem solving was assessed using structured vignettes designed to evoke the participant's ability to generate, evaluate, and apply solutions to social problems. A sequential model-fitting method of analysis was used to incorporate social problem solving, symptom presentation, and cognitive impairment into linear regression models. Predictor variables were drawn from demographic, cognitive, and symptom domains. Because this method of analysis was exploratory and not intended as hierarchical modelling, no a priori hypotheses were proposed. Participants with higher scores on tests of cognitive flexibility were better able to generate accurate, appropriate, and relevant responses to the social problem-solving vignettes. The results suggest that cognitive flexibility is a potentially important mediating factor in social problem-solving competence. While other factors are related to social problem-solving skill, this study supports the importance of cognition and understanding how it relates to the complex and multifaceted nature of social functioning.
ERIC Educational Resources Information Center
Obersteiner, Andreas; Bernhard, Matthias; Reiss, Kristina
2015-01-01
Understanding contingency table analysis is a facet of mathematical competence in the domain of data and probability. Previous studies have shown that even young children are able to solve specific contingency table problems, but apply a variety of strategies that are actually invalid. The purpose of this paper is to describe primary school…
ERIC Educational Resources Information Center
Kozbelt, Aaron; Dexter, Scott; Dolese, Melissa; Meredith, Daniel; Ostrofsky, Justin
2015-01-01
We applied computer-based text analyses of regressive imagery to verbal protocols of individuals engaged in creative problem-solving in two domains: visual art (23 experts, 23 novices) and computer programming (14 experts, 14 novices). Percentages of words involving primary process and secondary process thought, plus emotion-related words, were…
Techniques for determining physical zones of influence
Hamann, Hendrik F; Lopez-Marrero, Vanessa
2013-11-26
Techniques for analyzing flow of a quantity in a given domain are provided. In one aspect, a method for modeling regions in a domain affected by a flow of a quantity is provided which includes the following steps. A physical representation of the domain is provided. A grid that contains a plurality of grid points in the domain is created. Sources are identified in the domain. Given a vector field that defines a direction of flow of the quantity within the domain, a boundary value problem is defined for each of one or more of the sources identified in the domain. Each of the boundary value problems is solved numerically to obtain a solution at each of the grid points. The boundary value problem solutions are post-processed to model the regions affected by the flow of the quantity on the physical representation of the domain.
NASA Astrophysics Data System (ADS)
Nikooeinejad, Z.; Delavarkhalafi, A.; Heydari, M.
2018-03-01
The difficulty of solving the min-max optimal control problems (M-MOCPs) with uncertainty using generalised Euler-Lagrange equations is caused by the combination of split boundary conditions, nonlinear differential equations and the manner in which the final time is treated. In this investigation, the shifted Jacobi pseudospectral method (SJPM) as a numerical technique for solving two-point boundary value problems (TPBVPs) in M-MOCPs for several boundary states is proposed. At first, a novel framework of approximate solutions which satisfy the split boundary conditions automatically for various boundary states is presented. Then, by applying the generalised Euler-Lagrange equations and expanding the required approximate solutions as elements of shifted Jacobi polynomials, finding a solution of TPBVPs in nonlinear M-MOCPs with uncertainty is reduced to the solution of a system of algebraic equations. Moreover, the Jacobi polynomials are particularly useful for boundary value problems on unbounded domains, which allows us to solve infinite- as well as finite- and free-final-time problems by a domain truncation method. Some numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. A comparative study between the proposed method and other existing methods shows that the SJPM is simple and accurate.
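The building block of the SJPM, expanding an unknown function in shifted Jacobi polynomials on a truncated interval, can be illustrated with a simple least-squares fit. The weight parameters, degree, and test function below are arbitrary; the sketch does not reproduce the paper's treatment of the Euler-Lagrange system or the min-max structure.

```python
import numpy as np
from scipy.special import jacobi

# Shifted Jacobi polynomials on [0, 1]: P_n^(alpha,beta)(2x - 1).
alpha, beta, degree = 1.0, 1.0, 8
x = np.linspace(0.0, 1.0, 200)
basis = np.column_stack([jacobi(n, alpha, beta)(2 * x - 1) for n in range(degree + 1)])

# Least-squares expansion of a smooth test function in the shifted Jacobi basis.
f = np.exp(x) * np.sin(3 * x)
coeffs, *_ = np.linalg.lstsq(basis, f, rcond=None)
print("max approximation error:", np.max(np.abs(basis @ coeffs - f)))
```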
Vectorial finite elements for solving the radiative transfer equation
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.
2018-06-01
The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy for our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to a large number of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of the Kelvin-cell radiation.
Boundary Approximation Methods for Solving Elliptic Problems on Unbounded Domains
NASA Astrophysics Data System (ADS)
Li, Zi-Cai; Mathon, Rudolf
1990-08-01
Boundary approximation methods with partial solutions are presented for solving a complicated problem on an unbounded domain, with both a crack singularity and a corner singularity. Also an analysis of partial solutions near the singular points is provided. These methods are easy to apply, have good stability properties, and lead to highly accurate solutions. Hence, boundary approximation methods with partial solutions are recommended for the treatment of elliptic problems on unbounded domains provided that piecewise solution expansions, in particular, asymptotic solutions near the singularities and infinity, can be found.
ERIC Educational Resources Information Center
Chi, Michelene T. H.; And Others
Based on the premise that the quality of domain-specific knowledge is the main determinant of expertise in that domain, an examination was made of the shift from considering general, domain-independent skills and procedures, in both cognitive psychology and artificial intelligence, to the study of the knowledge base. Empirical findings and…
Changes in problem-solving appraisal after cognitive therapy for the prevention of suicide.
Ghahramanlou-Holloway, M; Bhar, S S; Brown, G K; Olsen, C; Beck, A T
2012-06-01
Cognitive therapy has been found to be effective in decreasing the recurrence of suicide attempts. A theoretical aim of cognitive therapy is to improve problem-solving skills so that suicide no longer remains the only available option. This study examined the differential rate of change in problem-solving appraisal following suicide attempts among individuals who participated in a randomized controlled trial for the prevention of suicide. Changes in problem-solving appraisal from pre- to 6-months post-treatment in individuals with a recent suicide attempt, randomized to either cognitive therapy (n = 60) or a control condition (n = 60), were assessed by using the Social Problem-Solving Inventory-Revised, Short Form. Improvements in problem-solving appraisal were similarly observed for both groups within the 6-month follow-up. However, during this period, individuals assigned to the cognitive therapy condition demonstrated a significantly faster rate of improvement in negative problem orientation and impulsivity/carelessness. More specifically, individuals receiving cognitive therapy were significantly less likely to report a negative view toward life problems and impulsive/carelessness problem-solving style. Cognitive therapy for the prevention of suicide provides rapid changes within 6 months on negative problem orientation and impulsivity/carelessness problem-solving style. Given that individuals are at the greatest risk for suicide within 6 months of their last suicide attempt, the current study demonstrates that a brief cognitive intervention produces a rapid rate of improvement in two important domains of problem-solving appraisal during this sensitive period.
Shock/vortex interaction and vortex-breakdown modes
NASA Technical Reports Server (NTRS)
Kandil, Osama A.; Kandil, H. A.; Liu, C. H.
1992-01-01
Computational simulation and study of shock/vortex interaction and vortex-breakdown modes are considered for bound (internal) and unbound (external) flow domains. The problem is formulated using the unsteady, compressible, full Navier-Stokes (NS) equations which are solved using an implicit, flux-difference splitting, finite-volume scheme. For the bound flow domain, a supersonic swirling flow is considered in a configured circular duct and the problem is solved for quasi-axisymmetric and three-dimensional flows. For the unbound domain, a supersonic swirling flow issued from a nozzle into a uniform supersonic flow of lower Mach number is considered for quasi-axisymmetric and three-dimensional flows. The results show several modes of breakdown; e.g., no-breakdown, transient single-bubble breakdown, transient multi-bubble breakdown, periodic multi-bubble multi-frequency breakdown and helical breakdown.
Problem-Solving Test: Southwestern Blotting
ERIC Educational Resources Information Center
Szeberényi, József
2014-01-01
Terms to be familiar with before you start to solve the test: Southern blotting, Western blotting, restriction endonucleases, agarose gel electrophoresis, nitrocellulose filter, molecular hybridization, polyacrylamide gel electrophoresis, proto-oncogene, c-abl, Src-homology domains, tyrosine protein kinase, nuclear localization signal, cDNA,…
Scalable software architectures for decision support.
Musen, M A
1999-12-01
Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
An event-based architecture for solving constraint satisfaction problems
Mostafa, Hesham; Müller, Lorenz K.; Indiveri, Giacomo
2015-01-01
Constraint satisfaction problems are ubiquitous in many domains. They are typically solved using conventional digital computing architectures that do not reflect the distributed nature of many of these problems, and are thus ill-suited for solving them. Here we present a parallel analogue/digital hardware architecture specifically designed to solve such problems. We cast constraint satisfaction problems as networks of stereotyped nodes that communicate using digital pulses, or events. Each node contains an oscillator implemented using analogue circuits. The non-repeating phase relations among the oscillators drive the exploration of the solution space. We show that this hardware architecture can yield state-of-the-art performance on random SAT problems under reasonable assumptions on the implementation. We present measurements from a prototype electronic chip to demonstrate that a physical implementation of the proposed architecture is robust to practical non-idealities and to validate the theory proposed. PMID:26642827
NASA Astrophysics Data System (ADS)
Vera, N. C.; GMMC
2013-05-01
In this paper we present results for macrohybrid mixed Darcian flow in porous media in a general three-dimensional domain. The global problem is solved as a set of local subproblems posed using a domain decomposition method. The unknown fields of the local problems, velocity and pressure, are approximated using mixed finite elements. For this application, a general three-dimensional domain is considered and discretized using tetrahedra. The discrete domain is decomposed into subdomains, and the original problem is reformulated as a set of subproblems communicating through their interfaces. To solve this set of subproblems, we use mixed finite elements and parallel computing. The parallelization of a problem using this methodology can, in principle, fully exploit the computing equipment and also provides results in less time, two very important elements in modeling.
Fung, Wenson; Swanson, H Lee
2017-07-01
The purpose of this study was to assess whether the differential effects of working memory (WM) components (the central executive, phonological loop, and visual-spatial sketchpad) on math word problem-solving accuracy in children (N = 413, ages 6-10) are completely mediated by reading, calculation, and fluid intelligence. The results indicated that all three WM components predicted word problem solving in the nonmediated model, but only the storage component of WM yielded a significant direct path to word problem-solving accuracy in the fully mediated model. Fluid intelligence was found to moderate the relationship between WM and word problem solving, whereas reading, calculation, and related skills (naming speed, domain-specific knowledge) completely mediated the influence of the executive system on problem-solving accuracy. Our results are consistent with findings suggesting that storage eliminates the predictive contribution of executive WM to various measures (Colom, Rebollo, Abad, & Shih, Memory & Cognition, 34: 158-171, 2006). The findings suggest that the storage component of WM, rather than the executive component, has a direct path to higher-order processing in children.
A New Domain Decomposition Approach for the Gust Response Problem
NASA Technical Reports Server (NTRS)
Scott, James R.; Atassi, Hafiz M.; Susan-Resiga, Romeo F.
2002-01-01
A domain decomposition method is developed for solving the aerodynamic/aeroacoustic problem of an airfoil in a vortical gust. The computational domain is divided into inner and outer regions wherein the governing equations are cast in different forms suitable for accurate computations in each region. Boundary conditions which ensure continuity of pressure and velocity are imposed along the interface separating the two regions. A numerical study is presented for reduced frequencies ranging from 0.1 to 3.0. It is seen that the domain decomposition approach is effective in providing robust and grid-independent solutions.
Real-time trajectory optimization on parallel processors
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
1993-01-01
A parallel algorithm has been developed for rapidly solving trajectory optimization problems. The goal of the work has been to develop an algorithm that is suitable for real-time, on-line optimal guidance through repeated solution of a trajectory optimization problem. The algorithm has been developed on an INTEL iPSC/860 message-passing parallel processor. It uses a zero-order-hold discretization of a continuous-time problem and solves the resulting nonlinear programming problem using a custom-designed augmented Lagrangian nonlinear programming algorithm. The algorithm achieves parallelism of function, derivative, and search direction calculations through the principle of domain decomposition applied along the time axis. It has been encoded and tested on 3 example problems: the Goddard problem, the acceleration-limited, planar minimum-time-to-the-origin problem, and a National Aerospace Plane minimum-fuel ascent guidance problem. Execution times as fast as 118 sec of wall clock time have been achieved for a 128-stage Goddard problem solved on 32 processors. A 32-stage minimum-time problem has been solved in 151 sec on 32 processors. A 32-stage National Aerospace Plane problem required 2 hours when solved on 32 processors. A speed-up factor of 7.2 has been achieved by using 32 nodes instead of 1 node to solve a 64-stage Goddard problem.
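To make the time-axis decomposition concrete, here is a minimal sketch (not the paper's iPSC/860 implementation) of a zero-order-hold discretization: the control is held constant over each stage, and the resulting per-stage defect constraints are exactly the quantities an augmented Lagrangian solver can evaluate stage by stage in parallel. The double-integrator dynamics and all names are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: zero-order-hold discretization of double-integrator dynamics.
# Each stage k holds u[k] constant over [t_k, t_k + dt], so the continuous problem becomes
# a nonlinear program whose defect constraints couple only neighbouring stages -- the
# structure that makes decomposition along the time axis natural.

def zoh_step(x, u, dt):
    """Advance state x = [position, velocity] one stage with the control held constant."""
    pos, vel = x
    return np.array([pos + vel * dt + 0.5 * u * dt**2, vel + u * dt])

def defects(x_stages, u_stages, dt):
    """Per-stage defect constraints c_k = x_{k+1} - f(x_k, u_k); feasible iff all zero."""
    return np.array([x_stages[k + 1] - zoh_step(x_stages[k], u_stages[k], dt)
                     for k in range(len(u_stages))])

# Toy usage: 8 stages; guess a trajectory and inspect the defects the NLP must drive to zero.
n_stages, dt = 8, 0.5
u_guess = np.zeros(n_stages)
x_guess = np.zeros((n_stages + 1, 2))
print(defects(x_guess, u_guess, dt).shape)   # (8, 2): one defect vector per stage
```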
Combining Computational and Social Effort for Collaborative Problem Solving
Wagy, Mark D.; Bongard, Josh C.
2015-01-01
Rather than replacing human labor, there is growing evidence that networked computers create opportunities for collaborations of people and algorithms to solve problems beyond either of them. In this study, we demonstrate the conditions under which such synergy can arise. We show that, for a design task, three elements are sufficient: humans apply intuitions to the problem, algorithms automatically determine and report back on the quality of designs, and humans observe and innovate on others’ designs to focus creative and computational effort on good designs. This study suggests how such collaborations should be composed for other domains, as well as how social and computational dynamics mutually influence one another during collaborative problem solving. PMID:26544199
Learning Domains and the Process of Creativity
ERIC Educational Resources Information Center
Reid, Anna; Petocz, Peter
2004-01-01
Creativity is viewed in different ways in different disciplines: in education it is called "innovation", in business it is "entrepreneurship", in mathematics it is often equated with "problem solving", and in music it is "performance" or "composition". A creative product in different domains is measured against the norms of that domain, with its…
A Framework and a Methodology for Developing Authentic Constructivist e-Learning Environments
ERIC Educational Resources Information Center
Zualkernan, Imran A.
2006-01-01
Semantically rich domains require operative knowledge to solve complex problems in real-world settings. These domains provide an ideal environment for developing authentic constructivist e-learning environments. In this paper we present a framework and a methodology for developing authentic learning environments for such domains. The framework is…
NASA Technical Reports Server (NTRS)
Hudlicka, Eva; Corker, Kevin
1988-01-01
In this paper, a problem-solving system which uses a multilevel causal model of its domain is described. The system functions in the role of a pilot's assistant in the domain of commercial air transport emergencies. The model represents causal relationships among the aircraft subsystems, the effectors (engines, control surfaces), the forces that act on an aircraft in flight (thrust, lift), and the aircraft's flight profile (speed, altitude, etc.). The causal relationships are represented at three levels of abstraction: Boolean, qualitative, and quantitative, and reasoning about causes and effects can take place at each of these levels. Since processing at each level has different characteristics with respect to speed, the type of data required, and the specificity of the results, the problem-solving system can adapt to a wide variety of situations. The system is currently being implemented in the KEE(TM) development environment on a Symbolics Lisp machine.
An investigation of successful and unsuccessful students' problem solving in stoichiometry
NASA Astrophysics Data System (ADS)
Gulacar, Ozcan
In this study, I investigated how successful and unsuccessful students solve stoichiometry problems. I focused on three research questions: (1) To what extent do the difficulties in solving stoichiometry problems stem from poor understanding of pieces (domain-specific knowledge) versus students' inability to link those pieces together (conceptual knowledge)? (2) What are the differences between successful and unsuccessful students in knowledge, ability, and practice? (3) Is there a connection between students' (a) cognitive development levels, (b) formal (proportional) reasoning abilities, (c) working memory capacities, (d) conceptual understanding of particle nature of matter, (e) understanding of the mole concept, and their problem-solving achievement in stoichiometry? In this study, nine successful students and eight unsuccessful students participated. Both successful and unsuccessful students were selected from among the students taking a general chemistry course at a mid-western university. The students taking this class were all science, non-chemistry majors. Characteristics of successful and unsuccessful students were determined through tests, audio- and videotape analyses, and subjects' written work. The Berlin Particle Concept Inventory, the Mole Concept Achievement Test, the Test of Logical Thinking, the Digits Backward Test, and the Longeot Test were used to measure students' conceptual understanding of particle nature of matter and mole concept, formal (proportional) reasoning ability, working memory capacity, and cognitive development, respectively. Think-aloud problem-solving protocols were also used to better explore the differences between successful and unsuccessful students' knowledge structures and behaviors during problem solving. Although successful students did not show significantly better performance on doing pieces (domain-specific knowledge) and solving exercises than their unsuccessful counterparts did, they appeared to be more successful in linking the pieces (conceptual knowledge) and solving complex problems than the unsuccessful students did. Successful students also appeared to differ from unsuccessful students in how they approached problems and what strategies they used, and they made fewer algorithmic mistakes. Successful students, however, did not seem to be statistically significantly different from the unsuccessful students in terms of the quantitatively tested cognitive abilities, except for formal (proportional) reasoning ability and understanding of the mole concept.
A framework for simultaneous aerodynamic design optimization in the presence of chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi
Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.
Frontiers of biomedical text mining: current progress
Zweigenbaum, Pierre; Demner-Fushman, Dina; Yu, Hong; Cohen, Kevin B.
2008-01-01
It is now almost 15 years since the publication of the first paper on text mining in the genomics domain, and decades since the first paper on text mining in the medical domain. Enormous progress has been made in the areas of information retrieval, evaluation methodologies and resource construction. Some problems, such as abbreviation-handling, can essentially be considered solved problems, and others, such as identification of gene mentions in text, seem likely to be solved soon. However, a number of problems at the frontiers of biomedical text mining continue to present interesting challenges and opportunities for great improvements and interesting research. In this article we review the current state of the art in biomedical text mining or ‘BioNLP’ in general, focusing primarily on papers published within the past year. PMID:17977867
Using hybrid expert system approaches for engineering applications
NASA Technical Reports Server (NTRS)
Allen, R. H.; Boarnet, M. G.; Culbert, C. J.; Savely, R. T.
1987-01-01
In this paper, the use of hybrid expert system shells and hybrid (i.e., algorithmic and heuristic) approaches for solving engineering problems is reported. Aspects of various engineering problem domains are reviewed for a number of examples, with specific applications made to recently developed prototype expert systems. Based on this prototyping experience, critical evaluations of, and comparisons between, commercially available tools and some research tools in the United States and Australia, along with their underlying problem-solving paradigms, are made. Characteristics of the implementation tool and the engineering domain are compared, and practical software engineering issues are discussed with respect to hybrid tools and approaches. Finally, guidelines are offered with the hope that expert system development will be less time consuming, more effective, and more cost-effective than it has been in the past.
High-frequency CAD-based scattering model: SERMAT
NASA Astrophysics Data System (ADS)
Goupil, D.; Boutillier, M.
1991-09-01
Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed. Only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to the wavelength; and (2) the implementation of these objects in a software package (SERMAT) allows RCS calculations that are fast and sufficiently precise to meet industry requirements in the domain of stealth.
Heidari, Mohammad; Shahbazi, Sara
2016-01-01
Background: The aim of this study was to determine the effect of problem-solving training on decision-making skill and critical thinking in emergency medical personnel. Materials and Methods: This experimental study was performed with 95 emergency medical personnel divided into control (n = 48) and experimental (n = 47) groups. A short problem-solving course consisting of 8 two-hour sessions during the term was then delivered to the experimental group. Data were gathered using a demographic form, a researcher-made decision-making questionnaire, and the California Critical Thinking Skills Test, and were analyzed using SPSS software. Results: The findings revealed that decision-making and critical thinking scores in emergency medical personnel were low, and that the problem-solving course positively affected the personnel's decision-making skill and critical thinking after the educational program (P < 0.05). Conclusions: Therefore, this kind of education on problem solving in various emergency medicine domains, such as education, research, and management, is recommended. PMID:28149823
Problem solving with genetic algorithms and Splicer
NASA Technical Reports Server (NTRS)
Bayer, Steven E.; Wang, Lui
1991-01-01
Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
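As a concrete illustration of the basic concepts mentioned above (selection, crossover, mutation), the following minimal sketch evolves bit strings toward an all-ones target. It is a generic toy genetic algorithm, not the Splicer tool; the fitness function and parameters are arbitrary choices.

```python
import random

# A minimal genetic algorithm sketch (generic illustration, not Splicer): evolve bit
# strings to maximize the number of ones ("one-max") using tournament selection,
# single-point crossover, and bit-flip mutation.

def fitness(bits):
    return sum(bits)                          # toy objective: count of 1s

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.01):
    return [1 - b if random.random() < rate else b for b in bits]

def evolve(n_bits=40, pop_size=60, generations=100):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(best))         # typically converges to (near) all ones
```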
Domain decomposition for a mixed finite element method in three dimensions
Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.
2003-01-01
We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.
NASA Astrophysics Data System (ADS)
Pskhu, A. V.
2017-12-01
We solve the first boundary-value problem in a non-cylindrical domain for a diffusion-wave equation with the Dzhrbashyan-Nersesyan operator of fractional differentiation with respect to the time variable. We prove an existence and uniqueness theorem for this problem, and construct a representation of the solution. We show that a sufficient condition for unique solubility is the condition of Hölder smoothness for the lateral boundary of the domain. The corresponding results for equations with Riemann-Liouville and Caputo derivatives are particular cases of results obtained here.
NASA Astrophysics Data System (ADS)
Alhusaini, Abdulnasser Alashaal F.
The Real Engagement in Active Problem Solving (REAPS) model was developed in 2004 by C. June Maker and colleagues as an intervention for gifted students to develop creative problem solving ability through the use of real-world problems. The primary purpose of this study was to examine the effects of the REAPS model on developing students' general creativity and creative problem solving in science with two durations as independent variables. The long duration of the REAPS model implementation lasted five academic quarters or approximately 10 months; the short duration lasted two quarters or approximately four months. The dependent variables were students' general creativity and creative problem solving in science. The second purpose of the study was to explore which aspects of creative problem solving (i.e., generating ideas, generating different types of ideas, generating original ideas, adding details to ideas, generating ideas with social impact, finding problems, generating and elaborating on solutions, and classifying elements) were most affected by the long duration of the intervention. The REAPS model in conjunction with Amabile's (1983; 1996) model of creative performance provided the theoretical framework for this study. The study was conducted using data from the Project of Differentiation for Diverse Learners in Regular Classrooms (i.e., the Australian Project) in which one public elementary school in the eastern region of Australia cooperated with the DISCOVER research team at the University of Arizona. All students in the school from first to sixth grade participated in the study. The total sample was 360 students, of which 115 were exposed to a long duration and 245 to a short duration of the REAPS model. The principal investigators used a quasi-experimental research design in which all students in the school received the treatment for different durations. Students in both groups completed pre- and posttests using the Test of Creative Thinking-Drawing Production (TCT-DP) and the Test of Creative Problem Solving in Science (TCPS-S). A one-way analysis of covariance (ANCOVA) was conducted to control for differences between the two groups on pretest results. Statistically significant differences were not found between posttest scores on the TCT-DP for the two durations of REAPS model implementation. However, statistically significant differences were found between posttest scores on the TCPS-S. These findings are consistent with Amabile's (1983; 1996) model of creative performance, particularly her explanation that domain-specific creativity requires knowledge such as specific content and technical skills that must be learned prior to being applied creatively. The findings are also consistent with literature in which researchers have found that longer interventions typically result in expected positive growth in domain-specific creativity, while both longer and shorter interventions have been found effective in improving domain-general creativity. Change scores were also calculated between pre- and posttest scores on the 8 aspects of creativity (Maker, Jo, Alfaiz, & Alhusaini, 2015a), and a binary logistic regression was conducted to assess which were the most affected by the long duration of the intervention. The regression model was statistically significant, with aspects of generating ideas, adding details to ideas, and finding problems being the most affected by the long duration of the intervention. 
Based on these findings, the researcher believes that the REAPS model is a useful intervention to develop students' creativity. Future researchers should implement the model for longer durations if they are interested in developing students' domain-specific creative problem solving ability.
A Simple Label Switching Algorithm for Semisupervised Structural SVMs.
Balamurugan, P; Shevade, Shirish; Sundararajan, S
2015-10-01
In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and avoiding poor local minima, which are not very useful. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.
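The following sketch conveys the flavour of the alternating scheme in a deliberately simplified setting: flat binary labels instead of structured outputs, a single class-proportion domain constraint, and no deterministic annealing. The constraint-matching step here is a greedy assignment rather than the paper's label switching method; all names and parameters are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Simplified analogue of the alternating scheme: alternate between (1) a supervised solve
# on labeled plus currently-imputed data and (2) a constraint-matching step that re-labels
# the unlabeled examples while fixing the fraction of positives (the "domain constraint").

def constrained_labels(scores, pos_fraction):
    """Assign +1 to the highest-scoring examples so the positive-fraction constraint holds."""
    y = -np.ones_like(scores)
    n_pos = int(round(pos_fraction * len(scores)))
    y[np.argsort(scores)[::-1][:n_pos]] = 1
    return y

def semisupervised_svm(X_lab, y_lab, X_unlab, pos_fraction=0.5, iters=10):
    clf = LinearSVC().fit(X_lab, y_lab)
    for _ in range(iters):
        y_unlab = constrained_labels(clf.decision_function(X_unlab), pos_fraction)
        clf = LinearSVC().fit(np.vstack([X_lab, X_unlab]),
                              np.concatenate([y_lab, y_unlab]))
    return clf

# Toy usage with random data, just to show the call pattern.
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 5))
y_lab = np.where(X_lab[:, 0] > 0, 1.0, -1.0)
X_unlab = rng.normal(size=(200, 5))
model = semisupervised_svm(X_lab, y_lab, X_unlab)
```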
Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.
1999-11-01
A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is being paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes with the ability to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
How to help intelligent systems with different uncertainty representations cooperate with each other
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.; Kumar, Sundeep
1991-01-01
In order to solve a complicated problem one must use the knowledge from different domains. Therefore, if one wants to automatize the solution of these problems, one has to help the knowledge-based systems that correspond to these domains cooperate, that is, communicate facts and conclusions to each other in the process of decision making. One of the main obstacles to such cooperation is the fact that different intelligent systems use different methods of knowledge acquisition and different methods and formalisms for uncertainty representation. So an interface f is needed, 'translating' the values x, y, which represent uncertainty of the experts' knowledge in one system, into the values f(x), f(y) appropriate for another one. The problem of designing such an interface as a mathematical problem is formulated and solved. It is shown that the interface must be fractionally linear: f(x) = (ax + b)/(cx + d).
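A toy illustration of the result: translating a certainty value through a fractionally linear interface. The coefficients below are hypothetical, chosen only so that the map sends [0, 1] onto itself monotonically.

```python
# Fractionally linear interface f(x) = (a*x + b) / (c*x + d), as in the abstract above.
# The coefficients are hypothetical; with a*d - b*c > 0 the map is monotone.

def fractional_linear(x, a=2.0, b=0.0, c=1.0, d=1.0):
    """Translate a degree of certainty x from one system's scale to another's."""
    return (a * x + b) / (c * x + d)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, "->", round(fractional_linear(x), 3))   # 0 -> 0.0, 1 -> 1.0, monotone in between
```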
Solution and reasoning reuse in space planning and scheduling applications
NASA Technical Reports Server (NTRS)
Verfaillie, Gerard; Schiex, Thomas
1994-01-01
In the space domain, as in other domains, CSP (constraint satisfaction problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs that are composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods which allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, which has been developed on the basis of the dynamic backtracking algorithm. This method allows previous solutions and reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.
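The sketch below illustrates solution reuse in a deliberately simplified form: plain chronological backtracking whose value ordering prefers the previous solution, so a re-solve after a constraint change stays close to the old assignment. It is not the dynamic backtracking algorithm the paper builds on, and the toy variables and constraints are invented for illustration.

```python
# Minimal solution-reuse sketch for a dynamic CSP (illustration only -- chronological
# backtracking with value ordering, not dynamic backtracking): when constraints change,
# values from the previous solution are tried first.

def backtrack(variables, domains, constraints, assignment=None, previous=None):
    assignment = assignment or {}
    previous = previous or {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    # Try the value from the previous solution first, then the rest of the domain.
    ordered = sorted(domains[var], key=lambda val: val != previous.get(var))
    for val in ordered:
        assignment[var] = val
        if all(check(assignment) for check in constraints):
            result = backtrack(variables, domains, constraints, assignment, previous)
            if result:
                return result
        del assignment[var]
    return None

# Toy usage: three tasks on three resources, no two tasks sharing a resource.
tasks = ["t1", "t2", "t3"]
doms = {t: ["r1", "r2", "r3"] for t in tasks}
all_diff = [lambda a: len(set(a.values())) == len(a)]
first = backtrack(tasks, doms, all_diff)
# A new constraint arrives; re-solve while staying close to `first`.
second = backtrack(tasks, doms, all_diff + [lambda a: a.get("t1") != "r1"], previous=first)
print(first, second)
```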
A Coupling Strategy of FEM and BEM for the Solution of a 3D Industrial Crack Problem
NASA Astrophysics Data System (ADS)
Kouitat Njiwa, Richard; Taha Niane, Ngadia; Frey, Jeremy; Schwartz, Martin; Bristiel, Philippe
2015-03-01
Analyzing crack stability in an industrial context is challenging due to the geometry of the structure. The finite element method is effective for defect-free problems. The boundary element method is effective for problems in simple geometries with singularities. We present a strategy that takes advantage of both approaches. Within the iterative solution procedure, the FEM solves a defect-free problem over the structure while the BEM solves the crack problem over a fictitious domain with simple geometry. The effectiveness of the approach is demonstrated on some simple examples which allow comparison with literature results and on an industrial problem.
Asymptotic analysis of the narrow escape problem in dendritic spine shaped domain: three dimensions
NASA Astrophysics Data System (ADS)
Li, Xiaofei; Lee, Hyundae; Wang, Yuliang
2017-08-01
This paper deals with the three-dimensional narrow escape problem in a dendritic spine shaped domain, which is composed of a relatively big head and a thin neck. The narrow escape problem is to compute the mean first passage time of Brownian particles traveling from inside the head to the end of the neck. The original model is to solve a mixed Dirichlet-Neumann boundary value problem for the Poisson equation in the composite domain, and is computationally challenging. In this paper we seek to transfer the original problem to a mixed Robin-Neumann boundary value problem by dropping the thin neck part, and rigorously derive the asymptotic expansion of the mean first passage time with high order terms. This study is a nontrivial three-dimensional generalization of the work in Li (2014 J. Phys. A: Math. Theor. 47 505202), where a two-dimensional analogue domain is considered.
A knowledge engineering taxonomy for intelligent tutoring system development
NASA Technical Reports Server (NTRS)
Fink, Pamela K.; Herren, L. Tandy
1993-01-01
This paper describes a study addressing the issue of developing an appropriate mapping of knowledge acquisition methods to problem types for intelligent tutoring system development. Recent research has recognized that knowledge acquisition methodologies are not general across problem domains; the effectiveness of a method for obtaining knowledge depends on the characteristics of the domain and problem solving task. Southwest Research Institute developed a taxonomy of problem types by evaluating the characteristics that discriminate between problems and grouping problems that share critical characteristics. Along with the problem taxonomy, heuristics that guide the knowledge acquisition process based on the characteristics of the class are provided.
Characteristic of cognitive decline in Parkinson's disease: a 1-year follow-up.
McKinlay, Audrey; Grace, Randolph C
2011-10-01
The aim of this study was to track the evolution of cognitive decline in Parkinson's disease (PD) patients 1 year after baseline testing. Thirty-three PD patients, divided according to three previously determined subgroups based on their initial cognitive performance, and a healthy comparison group were reassessed after a 1-year interval. Participants were assessed in the following five domains: Executive Function, Problem Solving, Working Memory/Attention, Memory, and Visuospatial Ability. The PD groups differed on the domains of Executive Function, Problem Solving, and Working Memory, with the most severe deficits being evident for the group that had previously shown the greatest level of impairment. Increased cognitive problems were also associated with decreased functioning in activities of daily living. The most severely impaired group had evidence of global cognitive decline, possibly reflecting a stage of preclinical dementia.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Watson, Willie R. (Technical Monitor)
2005-01-01
The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design and implement computer software for solving large-scale acoustic problems arising from the unified frameworks of finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should take full advantage of the multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective, the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper preconditioning strategies, unrolling strategies, and effective processor communication schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving a series of structural and acoustic (symmetric and unsymmetric) problems on different computing platforms. Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.
High performance techniques for space mission scheduling
NASA Technical Reports Server (NTRS)
Smith, Stephen F.
1994-01-01
In this paper, we summarize current research at Carnegie Mellon University aimed at development of high performance techniques and tools for space mission scheduling. Similar to prior research in opportunistic scheduling, our approach assumes the use of dynamic analysis of problem constraints as a basis for heuristic focusing of problem solving search. This methodology, however, is grounded in representational assumptions more akin to those adopted in recent temporal planning research, and in a problem solving framework which similarly emphasizes constraint posting in an explicitly maintained solution constraint network. These more general representational assumptions are necessitated by the predominance of state-dependent constraints in space mission planning domains, and the consequent need to integrate resource allocation and plan synthesis processes. First, we review the space mission problems we have considered to date and indicate the results obtained in these application domains. Next, we summarize recent work in constraint posting scheduling procedures, which offer the promise of better future solutions to this class of problems.
Visual modeling in an analysis of multidimensional data
NASA Astrophysics Data System (ADS)
Zakharova, A. A.; Vekhter, E. V.; Shklyar, A. V.; Pak, A. J.
2018-01-01
The article proposes an approach to solving visualization problems and the subsequent analysis of multidimensional data. Requirements for the properties of visual models created to solve analysis problems are described. The active use of factors of subjective perception and of dynamic visualization is suggested as a promising direction for developing visual analysis tools for multidimensional and voluminous data. Practical results of solving the problem of multidimensional data analysis are shown using the example of a visual model of empirical data on the current state of research into processes for obtaining silicon carbide by an electric arc method. Solving this problem yields several results: first, an idea of the possibilities for determining a strategy for the development of the domain; second, an assessment of the reliability of the published data on this subject; and third, a picture of how the areas of attention of researchers have changed over time.
A partitioning strategy for nonuniform problems on multiprocessors
NASA Technical Reports Server (NTRS)
Berger, M. J.; Bokhari, S.
1985-01-01
The partitioning of a problem on a domain with unequal work estimates in different subdomains is considered in a way that balances the work load across multiple processors. Such a problem arises, for example, in solving partial differential equations using an adaptive method that places extra grid points in certain subregions of the domain. A binary decomposition of the domain is used to partition it into rectangles requiring equal computational effort. The communication costs of mapping this partitioning onto different multiprocessor architectures (a mesh-connected array, a tree machine, and a hypercube) are then studied. The communication cost expressions can be used to determine the optimal depth of the above partitioning.
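A minimal sketch of the binary decomposition idea, under the assumptions that the per-cell work estimate is given as a 2-D array, that cuts fall on grid lines, and that the cut direction alternates at each level; the workload profile in the usage example is invented.

```python
import numpy as np

# Binary decomposition sketch: each recursion step splits the current rectangle so the
# two halves carry roughly equal total work, alternating the cut direction by level.

def binary_partition(work, depth, row0=0, col0=0, vertical=True):
    """Return a list of (row0, col0, n_rows, n_cols) rectangles of roughly equal work."""
    if depth == 0 or min(work.shape) < 2:
        return [(row0, col0, work.shape[0], work.shape[1])]
    axis = 0 if vertical else 1              # sum over rows -> cut between columns, etc.
    cum = np.cumsum(work.sum(axis=axis))
    cut = int(np.argmin(np.abs(cum - cum[-1] / 2.0))) + 1
    cut = min(max(cut, 1), len(cum) - 1)     # keep both halves non-empty
    if vertical:
        parts = [(work[:, :cut], row0, col0), (work[:, cut:], row0, col0 + cut)]
    else:
        parts = [(work[:cut, :], row0, col0), (work[cut:, :], row0 + cut, col0)]
    boxes = []
    for sub, r, c in parts:
        boxes += binary_partition(sub, depth - 1, r, c, not vertical)
    return boxes

# Toy usage: a domain whose right half is 4x more expensive (e.g. adaptively refined there).
work = np.ones((8, 8)); work[:, 4:] = 4.0
for r, c, nr, nc in binary_partition(work, depth=2):
    print((r, c, nr, nc), "work =", work[r:r + nr, c:c + nc].sum())
```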
Lesion mapping of social problem solving
Colom, Roberto; Paul, Erick J.; Chau, Aileen; Solomon, Jeffrey; Grafman, Jordan H.
2014-01-01
Accumulating neuroscience evidence indicates that human intelligence is supported by a distributed network of frontal and parietal regions that enable complex, goal-directed behaviour. However, the contributions of this network to social aspects of intellectual function remain to be well characterized. Here, we report a human lesion study (n = 144) that investigates the neural bases of social problem solving (measured by the Everyday Problem Solving Inventory) and examine the degree to which individual differences in performance are predicted by a broad spectrum of psychological variables, including psychometric intelligence (measured by the Wechsler Adult Intelligence Scale), emotional intelligence (measured by the Mayer, Salovey, Caruso Emotional Intelligence Test), and personality traits (measured by the Neuroticism-Extraversion-Openness Personality Inventory). Scores for each variable were obtained, followed by voxel-based lesion–symptom mapping. Stepwise regression analyses revealed that working memory, processing speed, and emotional intelligence predict individual differences in everyday problem solving. A targeted analysis of specific everyday problem solving domains (involving friends, home management, consumerism, work, information management, and family) revealed psychological variables that selectively contribute to each. Lesion mapping results indicated that social problem solving, psychometric intelligence, and emotional intelligence are supported by a shared network of frontal, temporal, and parietal regions, including white matter association tracts that bind these areas into a coordinated system. The results support an integrative framework for understanding social intelligence and make specific recommendations for the application of the Everyday Problem Solving Inventory to the study of social problem solving in health and disease. PMID:25070511
Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.
Li, Shuai; Li, Yangming
2013-10-28
The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, the computational burden increases intensively as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recently proposed recurrent neural network by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks cast light on extending the existing real-valued ZNN for solving the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
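For orientation, the sketch below implements the simpler gradient-based recurrent dynamics mentioned in the abstract, for the real-valued, time-invariant case and integrated with forward Euler. It is not the complex-valued ZNN with the sign-bi-power activation, and the diagonal test matrices are arbitrary.

```python
import numpy as np

# Gradient-flow sketch for the time-invariant real Sylvester equation A X + X B = C:
# the state X(t) follows the negative gradient of 0.5 * ||A X + X B - C||_F^2.

def sylvester_gradient_flow(A, B, C, gamma=10.0, dt=1e-3, steps=20000):
    X = np.zeros_like(C)
    for _ in range(steps):
        E = A @ X + X @ B - C                      # residual of the Sylvester equation
        grad = A.T @ E + E @ B.T                   # gradient of 0.5 * ||E||_F^2
        X -= dt * gamma * grad
    return X

rng = np.random.default_rng(1)
n = 4
A = np.diag(rng.uniform(1.0, 2.0, n))              # well-conditioned toy data
B = np.diag(rng.uniform(1.0, 2.0, n))
C = rng.normal(size=(n, n))
X = sylvester_gradient_flow(A, B, C)
print(np.max(np.abs(A @ X + X @ B - C)))           # residual should be near zero
```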
Solution of internal ballistic problem for SRM with grain of complex shape during main firing phase
NASA Astrophysics Data System (ADS)
Kiryushkin, A. E.; Minkov, L. L.
2017-10-01
Solid rocket motor (SRM) internal ballistics problems belong to the class of problems with moving boundaries. An algorithm able to solve such problems in an axisymmetric formulation on a Cartesian mesh with an arbitrary order of accuracy is considered in this paper. The basis of this algorithm is ghost-point extrapolation using the inverse Lax-Wendroff procedure. The level set method is used as an implicit representation of the domain boundary. As an example, the internal ballistics problem for an SRM with an umbrella-type grain was solved for the main firing phase. In addition, the flow parameter distribution in the combustion chamber was obtained for different time moments.
NASA Astrophysics Data System (ADS)
Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.
2015-10-01
In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
Driving into the future: how imaging technology is shaping the future of cars
NASA Astrophysics Data System (ADS)
Zhang, Buyue
2015-03-01
Fueled by the development of advanced driver assistance system (ADAS), autonomous vehicles, and the proliferation of cameras and sensors, automotive is becoming a rich new domain for innovations in imaging technology. This paper presents an overview of ADAS, the important imaging and computer vision problems to solve for automotive, and examples of how some of these problems are solved, through which we highlight the challenges and opportunities in the automotive imaging space.
Modelling human problem solving with data from an online game.
Rach, Tim; Kirsch, Alexandra
2016-11-01
Since the beginning of cognitive science, researchers have tried to understand human strategies in order to develop efficient and adequate computational methods. In the domain of problem solving, the travelling salesperson problem has been used for the investigation and modelling of human solutions. We propose to extend this effort with an online game, in which instances of the travelling salesperson problem have to be solved in the context of a game experience. We report on our effort to design and run such a game, present the data contained in the resulting openly available data set and provide an outlook on the use of games in general for cognitive science research. In addition, we present three geometrical models mapping the starting point preferences in the problems presented in the game as the result of an evaluation of the data set.
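As a small companion example of starting-point dependence (not one of the three geometrical models reported in the paper), the nearest-neighbour heuristic below produces tours whose length varies with the chosen start city; the random city coordinates are invented.

```python
import math, random

# Nearest-neighbour TSP heuristic: a simple, starting-point-dependent strategy whose
# tour length changes with the city the solver begins at.

def tour_length(points, order):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points, start=0):
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(12)]
for start in range(3):
    order = nearest_neighbour(cities, start)
    print("start", start, "tour length", round(tour_length(cities, order), 3))
```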
Parallel Computation of Flow in Heterogeneous Media Modelled by Mixed Finite Elements
NASA Astrophysics Data System (ADS)
Cliffe, K. A.; Graham, I. G.; Scheichl, R.; Stals, L.
2000-11-01
In this paper we describe a fast parallel method for solving highly ill-conditioned saddle-point systems arising from mixed finite element simulations of stochastic partial differential equations (PDEs) modelling flow in heterogeneous media. Each realisation of these stochastic PDEs requires the solution of the linear first-order velocity-pressure system comprising Darcy's law coupled with an incompressibility constraint. The chief difficulty is that the permeability may be highly variable, especially when the statistical model has a large variance and a small correlation length. For reasonable accuracy, the discretisation has to be extremely fine. We solve these problems by first reducing the saddle-point formulation to a symmetric positive definite (SPD) problem using a suitable basis for the space of divergence-free velocities. The reduced problem is solved using parallel conjugate gradients preconditioned with an algebraically determined additive Schwarz domain decomposition preconditioner. The result is a solver which exhibits a good degree of robustness with respect to the mesh size as well as to the variance and to physically relevant values of the correlation length of the underlying permeability field. Numerical experiments exhibit almost optimal levels of parallel efficiency. The domain decomposition solver (DOUG, http://www.maths.bath.ac.uk/~parsoft) used here not only is applicable to this problem but can be used to solve general unstructured finite element systems on a wide range of parallel architectures.
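A compact sketch of the solver pattern described above: conjugate gradients preconditioned with the simplest one-level additive Schwarz variant (non-overlapping blocks, i.e. block Jacobi) applied to a toy rough-coefficient system. This is not the DOUG package's algebraically determined preconditioner, and the 1-D test problem is only a stand-in for the reduced divergence-free velocity system.

```python
import numpy as np

# Preconditioned conjugate gradients with a block-Jacobi (non-overlapping additive
# Schwarz) preconditioner, applied to a toy SPD system with a rough coefficient.

def block_jacobi_preconditioner(A, block_size):
    n = A.shape[0]
    inv_blocks = [(s, np.linalg.inv(A[s:s + block_size, s:s + block_size]))
                  for s in range(0, n, block_size)]
    def apply(r):
        z = np.zeros_like(r)
        for s, inv in inv_blocks:
            z[s:s + inv.shape[0]] = inv @ r[s:s + inv.shape[0]]
        return z
    return apply

def pcg(A, b, precond, tol=1e-10, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system: a 1-D diffusion matrix with a lognormal "permeability".
n = 64
k = np.exp(3 * np.random.default_rng(0).standard_normal(n + 1))
A = np.diag(k[:-1] + k[1:]) - np.diag(k[1:-1], 1) - np.diag(k[1:-1], -1)
b = np.ones(n)
x = pcg(A, b, block_jacobi_preconditioner(A, 8))
print(np.linalg.norm(A @ x - b))                   # residual should be tiny
```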
Topological defects in alternative theories to cosmic inflation and string cosmology
NASA Astrophysics Data System (ADS)
Alexander, Stephon H. S.
The physics of the Early Universe is described in terms of the inflationary paradigm, which is based on a marriage between Einstein's general theory of relativity and quantum field theory, minimally coupled. Inflation was proposed to solve some of the outstanding problems of the Standard Big Bang Cosmology (SBB) such as the horizon, formation of structure and monopole problems. Despite its observational and theoretical successes, inflation is plagued with fine tuning and initial singularity problems. On the other hand, superstring/M theory, a theory of quantum gravity, possesses symmetries which naturally avoid space-time singularities. This thesis investigates alternative theories to cosmic inflation for solving the initial singularity, horizon and monopole problems, making use of topological defects. It was proposed by Dvali, Liu and Vaschaspati that the monopole problem can be solved without inflation if domain walls "sweep" up the monopoles in the early universe, thus reducing their number density significantly. Necessary for this mechanism to work is the presence of an attractive force between the monopole and the domain wall as well as a channel for the monopole's unwinding. We show numerically and analytically in two field theory models that for global defects the attraction is a universal result but the unwinding is model specific. The second part of this thesis investigates a string/M theory inspired model for solving the horizon problem. It was proposed by Moffat, Albrecht and Magueijo that the horizon problem is solved with a "phase transition" associated with a varying speed of light before the surface of last scattering. We provide a string/M theory mechanism based on assuming that our space-time is a D-3 brane probing a supergravity black hole bulk background. This mechanism provides the necessary time variation of the velocity of light to solve the horizon problem. We suggest a mechanism which stabilizes the speed of light on the D-3 brane. We finally address the cosmological initial singularity problem using the target space duality inherent in string/M theory. It was suggested by Brandenberger and Vafa that superstring theory can solve the singularity problem and in addition explain why only three spatial dimensions can become large. We show that under specific conditions this mechanism still persists when including the effects of D-branes.
Examining the Generality of Self-Explanation
ERIC Educational Resources Information Center
Wylie, Ruth
2011-01-01
Prompting students to self-explain during problem solving has proven to be an effective instructional strategy across many domains. However, despite being called "domain general", very little work has been done in areas outside of math and science. In this dissertation, I investigate whether the self-explanation effect holds when applied…
An Ontology for Learning Services on the Shop Floor
ERIC Educational Resources Information Center
Ullrich, Carsten
2016-01-01
An ontology expresses a common understanding of a domain that serves as a basis of communication between people or systems, and enables knowledge sharing, reuse of domain knowledge, reasoning and thus problem solving. In Technology-Enhanced Learning, especially in Intelligent Tutoring Systems and Adaptive Learning Environments, ontologies serve as…
Mapping University Students' Epistemic Framing of Computational Physics Using Network Analysis
ERIC Educational Resources Information Center
Bodin, Madelen
2012-01-01
Solving physics problem in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and beliefs components are…
Working wonders? investigating insight with magic tricks.
Danek, Amory H; Fraps, Thomas; von Müller, Albrecht; Grothe, Benedikt; Ollinger, Michael
2014-02-01
We propose a new approach to differentiate between insight and noninsight problem solving, by introducing magic tricks as problem solving domain. We argue that magic tricks are ideally suited to investigate representational change, the key mechanism that yields sudden insight into the solution of a problem, because in order to gain insight into the magicians' secret method, observers must overcome implicit constraints and thus change their problem representation. In Experiment 1, 50 participants were exposed to 34 different magic tricks, asking them to find out how the trick was accomplished. Upon solving a trick, participants indicated if they had reached the solution either with or without insight. Insight was reported in 41.1% of solutions. The new task domain revealed differences in solution accuracy, time course and solution confidence with insight solutions being more likely to be true, reached earlier, and obtaining higher confidence ratings. In Experiment 2, we explored which role self-imposed constraints actually play in magic tricks. 62 participants were presented with 12 magic tricks. One group received verbal cues, providing solution relevant information without giving the solution away. The control group received no informative cue. Experiment 2 showed that participants' constraints were suggestible to verbal cues, resulting in higher solution rates. Thus, magic tricks provide more detailed information about the differences between insightful and noninsightful problem solving, and the underlying mechanisms that are necessary to have an insight. Copyright © 2013 Elsevier B.V. All rights reserved.
Do job demands and job control affect problem-solving?
Bergman, Peter N; Ahlberg, Gunnel; Johansson, Gun; Stoetzer, Ulrich; Aborg, Carl; Hallsten, Lennart; Lundberg, Ingvar
2012-01-01
The Job Demand Control model presents combinations of working conditions that may facilitate learning, the active learning hypothesis, or have detrimental effects on health, the strain hypothesis. To test the active learning hypothesis, this study analysed the effects of job demands and job control on general problem-solving strategies. A population-based sample of 4,636 individuals (55% women, 45% men) with the same job characteristics measured at two times with a three year time lag was used. Main effects of demands, skill discretion, task authority and control, and the combined effects of demands and control were analysed in logistic regressions, on four outcomes representing general problem-solving strategies. Those reporting high on skill discretion, task authority and control, as well as those reporting high demand/high control and low demand/high control job characteristics were more likely to state using problem solving strategies. Results suggest that working conditions including high levels of control may affect how individuals cope with problems and that workplace characteristics may affect behaviour in the non-work domain.
Brains, brawn and sociality: a hyaena’s tale
Holekamp, Kay E.; Dantzer, Ben; Stricker, Gregory; Shaw Yoshida, Kathryn C.; Benson-Amram, Sarah
2015-01-01
Theoretically intelligence should evolve to help animals solve specific types of problems posed by the environment, but it remains unclear how environmental complexity or novelty facilitates the evolutionary enhancement of cognitive abilities, or whether domain-general intelligence can evolve in response to domain-specific selection pressures. The social complexity hypothesis, which posits that intelligence evolved to cope with the labile behaviour of conspecific group-mates, has been strongly supported by work on the sociocognitive abilities of primates and other animals. Here we review the remarkable convergence in social complexity between cercopithecine primates and spotted hyaenas, and describe our tests of predictions of the social complexity hypothesis in regard to both cognition and brain size in hyaenas. Behavioural data indicate that there has been remarkable convergence between primates and hyaenas with respect to their abilities in the domain of social cognition. Furthermore, within the family Hyaenidae, our data suggest that social complexity might have contributed to enlargement of the frontal cortex. However, social complexity failed to predict either brain volume or frontal cortex volume in a larger array of mammalian carnivores. To address the question of whether or not social complexity might be able to explain the evolution of domain-general intelligence as well as social cognition in particular, we presented simple puzzle boxes, baited with food and scaled to accommodate body size, to members of 39 carnivore species housed in zoos and found that species with larger brains relative to their body mass were more innovative and more successful at opening the boxes. However, social complexity failed to predict success in solving this problem. Overall our work suggests that, although social complexity enhances social cognition, there are no unambiguous causal links between social complexity and either brain size or performance in problem-solving tasks outside the social domain in mammalian carnivores. PMID:26160980
NASA Astrophysics Data System (ADS)
Voznyuk, I.; Litman, A.; Tortel, H.
2015-08-01
A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem, and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database.
NASA Astrophysics Data System (ADS)
Jia, Shouqing; La, Dongsheng; Ma, Xuelian
2018-04-01
The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
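To show the "equivalent medium" idea in its simplest form, here is a standard flat-space 1-D FDTD update in which the only non-trivial ingredient is a spatially varying permittivity profile; the Gaussian bump used below is an arbitrary placeholder, not the actual Schwarzschild-to-medium mapping used in the paper.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) sketch: curvature effects would enter only through the material
# profiles eps(x), mu(x); the update itself is the usual flat-space leapfrog scheme.

nx, nt = 400, 900
dx = 1.0
dt = 0.5 * dx                                   # Courant-stable for c = 1 (vacuum units)
x = np.arange(nx)
eps = 1.0 + 2.0 * np.exp(-((x - 250.0) / 30.0) ** 2)   # hypothetical "medium" profile
mu = np.ones(nx)

Ez = np.zeros(nx)
Hy = np.zeros(nx)
for n in range(nt):
    Hy[:-1] += dt / (mu[:-1] * dx) * (Ez[1:] - Ez[:-1])       # update H from curl E
    Ez[1:-1] += dt / (eps[1:-1] * dx) * (Hy[1:-1] - Hy[:-2])  # update E from curl H
    Ez[50] += np.exp(-((n - 60.0) / 20.0) ** 2)               # soft Gaussian source

print("peak field after propagation through the medium:", Ez.max())
```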
Problem-Based Learning in Formal and Informal Learning Environments
ERIC Educational Resources Information Center
Shimic, Goran; Jevremovic, Aleksandar
2012-01-01
Problem-based learning (PBL) is a student-centered instructional strategy in which students solve problems and reflect on their experiences. Different domains need different approaches in the design of PBL systems. Therefore, we present one case study in this article: A Java Programming PBL. The application is developed as an additional module for…
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problem via a domain decomposition. The method is derived and justified via singular perturbation techniques.
Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M
2015-02-01
It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems which are those that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age and education matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002) that scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions when solving ill-defined problems compared to the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems. The ability to solve these ill-defined problems was related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
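For readers unfamiliar with mixed interface conditions, the generic form of a Robin coupling between two heat-conducting domains is sketched below (illustrative only; the CHAMP scheme's optimized weights and compatibility conditions are not reproduced). Here T_1, T_2 are the temperatures, k_1, k_2 the conductivities, n the interface normal, and alpha_1, alpha_2 positive weights; any such pair of conditions is equivalent to continuity of temperature and of heat flux.

```latex
\begin{aligned}
  \alpha_1 T_1 + k_1 \frac{\partial T_1}{\partial n}
    &= \alpha_1 T_2 + k_2 \frac{\partial T_2}{\partial n}
      && \text{(Robin condition imposed when solving in domain 1)}\\
  \alpha_2 T_2 - k_2 \frac{\partial T_2}{\partial n}
    &= \alpha_2 T_1 - k_1 \frac{\partial T_1}{\partial n}
      && \text{(Robin condition imposed when solving in domain 2)}
\end{aligned}
```

Subtracting the two relations shows that, for any alpha_1 + alpha_2 > 0, they are jointly equivalent to the jump conditions T_1 = T_2 and k_1 \partial T_1/\partial n = k_2 \partial T_2/\partial n.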
The testing effect and analogical problem-solving.
Peterson, Daniel J; Wissman, Kathryn T
2018-06-25
Researchers generally agree that retrieval practice of previously learned material facilitates subsequent recall of the same material, a phenomenon known as the testing effect. There is debate, however, about when such benefits transfer to related (though not identical) material. The current study examines the phenomenon of transfer in the domain of analogical problem-solving. In Experiments 1 and 2, learners were presented with a source text to read, describing a problem and its solution, which was subsequently either restudied or recalled. Following a short (Experiment 1) or long (Experiment 2) delay, learners were given a new target text and asked to solve a problem. The two texts shared a common structure such that the provided solution for the source text could be applied to solve the problem in the target text. In a combined analysis of both experiments, learners in the retrieval practice condition were more successful at solving the problem than those in the restudy condition. Experiment 3 explored the degree to which retrieval practice promotes cued versus spontaneous transfer by manipulating whether participants were provided with an explicit hint that the source and target texts were related. Results revealed no effect of retrieval practice.
NASA Astrophysics Data System (ADS)
Tayebi, A.; Shekari, Y.; Heydari, M. H.
2017-07-01
Several physical phenomena such as transformation of pollutants, energy, particles and many others can be described by the well-known convection-diffusion equation which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and the finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, the finite difference technique with a θ-weighted scheme and in the space domain, the MLS approximation are employed to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely based on a set of scattered nodes. The proposed method is validated in solving three different examples including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all such cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property of the proposed method, a so-called positive scheme, is observed in solving concentration transport phenomena.
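The θ-weighted time discretization mentioned above can be sketched on a simpler, integer-order model. The snippet below applies it to the 1D advection-diffusion equation using central finite differences in space rather than the paper's MLS approximation, so it only illustrates the time-stepping component; all parameter values are assumptions.

```python
import numpy as np

# Sketch of a theta-weighted time scheme for u_t + a*u_x = D*u_xx (illustrative
# finite-difference version; the paper combines this with an MLS spatial basis).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
a, D, dt, theta, nsteps = 1.0, 0.01, 0.002, 0.5, 250   # theta = 0.5 gives Crank-Nicolson

Lh = np.zeros((n, n))                    # spatial operator: -a*u_x + D*u_xx
for i in range(1, n - 1):
    Lh[i, i - 1] = a / (2 * h) + D / h**2
    Lh[i, i] = -2 * D / h**2
    Lh[i, i + 1] = -a / (2 * h) + D / h**2

I = np.eye(n)
A = I - theta * dt * Lh                  # implicit part
B = I + (1 - theta) * dt * Lh            # explicit part
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0                # homogeneous Dirichlet boundaries

u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial pollutant pulse
for _ in range(nsteps):
    rhs = B @ u
    rhs[0] = rhs[-1] = 0.0
    u = np.linalg.solve(A, rhs)
print("peak concentration after transport:", u.max())
```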
Application of the perturbation iteration method to boundary layer type problems.
Pakdemirli, Mehmet
2016-01-01
The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
Domain decomposition methods in aerodynamics
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Saltz, Joel
1990-01-01
Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves the interpretation of the triangular matrix as a directed graph and the analysis of the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performances of two ways of partitioning the domain, strips and slabs, are compared. Results on the Cray Y-MP are reported for an inviscid transonic test case. The performances of linear algebra kernels are also reported.
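A toy illustration of the wavefront (level-set) ordering described above: the lower-triangular matrix is read as a directed dependency graph, unknowns are grouped into levels, and all unknowns in the same level can be computed concurrently. The small matrix below is an arbitrary assumption used only to show the bookkeeping.

```python
import numpy as np

# Sketch of wavefront ordering for a sparse lower-triangular solve L x = b.
def wavefront_levels(L):
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in range(i) if L[i, j] != 0.0]
        level[i] = 1 + max((level[j] for j in deps), default=-1)
    return level

def solve_by_wavefronts(L, b):
    n = L.shape[0]
    x = np.zeros(n)
    level = wavefront_levels(L)
    for lev in range(level.max() + 1):
        rows = np.where(level == lev)[0]
        # All rows in this wavefront are mutually independent -> could run in parallel.
        for i in rows:
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 4.0, 0.0],
              [0.0, 2.0, 1.0, 5.0]])
b = np.array([2.0, 5.0, 8.0, 11.0])
print(solve_by_wavefronts(L, b))
print(np.linalg.solve(L, b))       # reference answer for comparison
```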
Belief-Desire Reasoning as a Process of Selection
ERIC Educational Resources Information Center
Leslie, Alan M.; German, Tim P.; Polizzi, Pamela
2005-01-01
Human learning may depend upon domain specialized mechanisms. A plausible example is rapid, early learning about the thoughts and feelings of other people. A major achievement in this domain, at about age four in the typically developing child, is the ability to solve problems in which the child attributes false beliefs to other people and…
ERIC Educational Resources Information Center
Reif, Frederick
2008-01-01
Many students find it difficult to learn the kinds of knowledge and thinking required by college or high school courses in mathematics, science, or other complex domains. Thus they often emerge with significant misconceptions, fragmented knowledge, and inadequate problem-solving skills. Most instructors or textbook authors approach their teaching…
The Effect of Contrasting Analogies on Understanding of and Reasoning about Natural Selection
ERIC Educational Resources Information Center
Sota, Melinda
2012-01-01
Analogies play significant roles in communication as well as in problem solving and model building in science domains. Analogies have also been incorporated into several different instructional strategies--most notably in science domains where the concepts and principles to be learned are abstract or complex. Although several instructional models…
Neuropsychological profile in adult schizophrenia measured with the CMINDS.
van Erp, Theo G M; Preda, Adrian; Turner, Jessica A; Callahan, Shawn; Calhoun, Vince D; Bustillo, Juan R; Lim, Kelvin O; Mueller, Bryon; Brown, Gregory G; Vaidya, Jatin G; McEwen, Sarah; Belger, Aysenil; Voyvodic, James; Mathalon, Daniel H; Nguyen, Dana; Ford, Judith M; Potkin, Steven G
2015-12-30
Schizophrenia neurocognitive domain profiles are predominantly based on paper-and-pencil batteries. This study presents the first schizophrenia domain profile based on the Computerized Multiphasic Interactive Neurocognitive System (CMINDS(®)). Neurocognitive domain z-scores were computed from computerized neuropsychological tests, similar to those in the Measurement and Treatment Research to Improve Cognition in Schizophrenia Consensus Cognitive Battery (MCCB), administered to 175 patients with schizophrenia and 169 demographically similar healthy volunteers. The schizophrenia domain profile order by effect size was Speed of Processing (d=-1.14), Attention/Vigilance (d=-1.04), Working Memory (d=-1.03), Verbal Learning (d=-1.02), Visual Learning (d=-0.91), and Reasoning/Problem Solving (d=-0.67). There were no significant group by sex interactions, but overall women, compared to men, showed advantages on Attention/Vigilance, Verbal Learning, and Visual Learning compared to Reasoning/Problem Solving on which men showed an advantage over women. The CMINDS can readily be employed in the assessment of cognitive deficits in neuropsychiatric disorders; particularly in large-scale studies that may benefit most from electronic data capture. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
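The domain profile reported above is expressed as Cohen's d effect sizes between patients and controls. A small sketch of that computation is shown below on randomly generated data; these numbers are placeholders, not CMINDS scores.

```python
import numpy as np

# Sketch of per-domain effect-size computation (Cohen's d) on synthetic z-scores.
rng = np.random.default_rng(0)

def cohens_d(patients, controls):
    n1, n2 = len(patients), len(controls)
    pooled_sd = np.sqrt(((n1 - 1) * patients.std(ddof=1) ** 2 +
                         (n2 - 1) * controls.std(ddof=1) ** 2) / (n1 + n2 - 2))
    return (patients.mean() - controls.mean()) / pooled_sd

domains = ["Speed of Processing", "Attention/Vigilance", "Working Memory",
           "Verbal Learning", "Visual Learning", "Reasoning/Problem Solving"]
for name in domains:
    controls = rng.normal(0.0, 1.0, 169)      # z-scored healthy volunteers
    patients = rng.normal(-1.0, 1.0, 175)     # illustrative deficit of about 1 SD
    print(f"{name:28s} d = {cohens_d(patients, controls):+.2f}")
```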
Foresight beyond the very next event: four-year-olds can link past and deferred future episodes
Redshaw, Jonathan; Suddendorf, Thomas
2013-01-01
Previous experiments have demonstrated that by 4 years of age children can use information from a past episode to solve a problem for the very next future episode. However, it remained unclear whether 4-year-olds can similarly use such information to solve a problem for a more removed future episode that is not of immediate concern. In the current study we introduced 4-year-olds to problems in one room before taking them to another room and distracting them for 15 min. The children were then offered a choice of items to place into a bucket that was to be taken back to the first room when a 5-min sand-timer had completed a cycle. Across two conceptually distinct domains, the children placed the item that could solve the deferred future problem above chance level. This result demonstrates that by 48 months many children can recall a problem from the past and act in the present to solve that problem for a deferred future episode. We discuss implications for theories about the nature of episodic foresight. PMID:23847575
Domain wall and isocurvature perturbation problems in axion models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawasaki, Masahiro; Yoshino, Kazuyoshi; Yanagida, Tsutomu T., E-mail: kawasaki@icrr.u-tokyo.ac.jp, E-mail: tsutomu.tyanagida@ipmu.jp, E-mail: yoshino@icrr.u-tokyo.ac.jp
2013-11-01
Axion models have two serious cosmological problems, the domain wall and isocurvature perturbation problems. In order to solve these problems we investigate Linde's model, in which the field value of the Peccei-Quinn (PQ) scalar is large during inflation. In this model the fluctuations of the PQ field grow after inflation through the parametric resonance and stable axionic strings may be produced, which results in the domain wall problem. We study formation of axionic strings using lattice simulations. It is found that in chaotic inflation the axion model is free from both the domain wall and the isocurvature perturbation problems if the initial misalignment angle θ_a is smaller than O(10^−2). Furthermore, axions can also account for the dark matter for the breaking scale v ≅ 10^(12−16) GeV and the Hubble parameter during inflation H_inf ≲ 10^(11−12) GeV in general inflation models.
Finite element modeling of electromagnetic fields and waves using NASTRAN
NASA Technical Reports Server (NTRS)
Moyer, E. Thomas, Jr.; Schroeder, Erwin
1989-01-01
The various formulations of Maxwell's equations are reviewed with emphasis on those formulations which most readily form analogies with Navier's equations. Analogies involving scalar and vector potentials and electric and magnetic field components are presented. Formulations allowing for media with dielectric and conducting properties are emphasized. It is demonstrated that many problems in electromagnetism can be solved using the NASTRAN finite element code. Several fundamental problems involving time harmonic solutions of Maxwell's equations with known analytic solutions are solved using NASTRAN to demonstrate convergence and mesh requirements. Mesh requirements are studied as a function of frequency, conductivity, and dielectric properties. Applications in both low frequency and high frequency are highlighted. The low frequency problems demonstrate the ability to solve problems involving media inhomogeneity and unbounded domains. The high frequency applications demonstrate the ability to handle problems with large boundary to wavelength ratios.
Discontinuous finite element method for vector radiative transfer
NASA Astrophysics Data System (ADS)
Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping
2017-03-01
The discontinuous finite element method (DFEM) is applied to solve the vector radiative transfer in participating media. The derivation in a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapped discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several various problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strong forward scattering medium and a two-dimensional square medium. The fact that DFEM results agree very well with the benchmark solutions in published references shows that the developed DFEM in this paper is accurate and effective for solving vector radiative transfer problems.
Incompressibility without tears - How to avoid restrictions of mixed formulation
NASA Technical Reports Server (NTRS)
Zienkiewicz, O. C.; Wu, J.
1991-01-01
Several time-stepping schemes for incompressibility problems are presented which can be solved directly for steady state or iteratively through the time domain. The difficulty of mixed interpolation is avoided by using these schemes. The schemes are applicable to problems of fluid and solid mechanics.
ERIC Educational Resources Information Center
Lee, Ming; Wimmers, Paul F.
2016-01-01
Although problem-based learning (PBL) has been widely used in medical schools, few studies have attended to the assessment of PBL processes using validated instruments. This study examined reliability and validity for an instrument assessing PBL performance in four domains: Problem Solving, Use of Information, Group Process, and Professionalism.…
Adjoint-based optimization of PDEs in moving domains
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Liao, Wenyuan
2008-02-01
In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.
Machining Chatter Analysis for High Speed Milling Operations
NASA Astrophysics Data System (ADS)
Sekar, M.; Kantharaj, I.; Amit Siddhappa, Savale
2017-10-01
Chatter in high speed milling is characterized by time-delay differential equations (DDEs). Since closed-form solutions exist only for simple cases, the governing non-linear DDEs of chatter problems are solved by various numerical methods. Custom codes to solve DDEs are tedious to build and implement, and are not error-free or robust. On the other hand, software packages provide solutions to DDEs; however, they are not straightforward to implement. In this paper an easy way to solve the DDE of chatter in milling is proposed and implemented with MATLAB. A time-domain solution permits the study and modelling of non-linear effects of chatter vibration with ease. Time-domain results are presented for various stable and unstable conditions of cut and compared with stability lobe diagrams.
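A compact illustration of time-domain integration of a chatter-type DDE is given below. It uses a generic single-degree-of-freedom regenerative model with an assumed delay and assumed modal parameters, stepped with a fixed-step semi-implicit Euler scheme and a stored history for the delayed state; it is not the authors' MATLAB implementation.

```python
import numpy as np

# Sketch: m*x'' + c*x' + k*x = -Kc*(x(t) - x(t - tau)), a regenerative-chatter
# style delay differential equation, integrated in the time domain.
m, c, k = 1.0, 20.0, 1.0e6          # modal mass, damping, stiffness (assumed)
Kc, tau = 2.0e5, 2.0e-3             # cutting-force coefficient, tooth-passing delay
dt, T_end = 1.0e-5, 0.05
nsteps = int(T_end / dt)
ndelay = int(round(tau / dt))

x = np.zeros(nsteps + 1)
v = np.zeros(nsteps + 1)
x[0] = 1.0e-6                        # small initial perturbation

for i in range(nsteps):
    x_delayed = x[i - ndelay] if i >= ndelay else 0.0   # zero history before t = 0
    acc = (-c * v[i] - k * x[i] - Kc * (x[i] - x_delayed)) / m
    v[i + 1] = v[i] + dt * acc       # semi-implicit (symplectic) Euler update
    x[i + 1] = x[i] + dt * v[i + 1]

print("max |x| over the run:", np.abs(x).max())   # growth signals an unstable cut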
The Davey-Stewartson Equation on the Half-Plane
NASA Astrophysics Data System (ADS)
Fokas, A. S.
2009-08-01
The Davey-Stewartson (DS) equation is a nonlinear integrable evolution equation in two spatial dimensions. It provides a multidimensional generalisation of the celebrated nonlinear Schrödinger (NLS) equation and it appears in several physical situations. The implementation of the Inverse Scattering Transform (IST) to the solution of the initial-value problem of the NLS was presented in 1972, whereas the analogous problem for the DS equation was solved in 1983. These results are based on the formulation and solution of certain classical problems in complex analysis, namely of a Riemann Hilbert problem (RH) and of either a d-bar or a non-local RH problem respectively. A method for solving the mathematically more complicated but physically more relevant case of boundary-value problems for evolution equations in one spatial dimension, like the NLS, was finally presented in 1997, after interjecting several novel ideas to the panoply of the IST methodology. Here, this method is further extended so that it can be applied to evolution equations in two spatial dimensions, like the DS equation. This novel extension involves several new steps, including the formulation of a d-bar problem for a sectionally non-analytic function, i.e. for a function which has different non-analytic representations in different domains of the complex plane. This, in addition to the computation of a d-bar derivative, also requires the computation of the relevant jumps across the different domains. This latter step has certain similarities (but is more complicated) with the corresponding step for those initial-value problems in two dimensions which can be solved via a non-local RH problem, like KPI.
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
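The essence of a Fourier-domain preconditioned conjugate gradient solver can be shown on a much smaller stand-in problem: a symmetric positive-definite Toeplitz system preconditioned by a circulant approximation whose inverse is applied with the FFT. The shifted 1D Laplacian below is an illustrative assumption, not the MCAO tomography operator.

```python
import numpy as np

# Sketch of PCG with an FFT-applied circulant preconditioner.
n, sigma = 256, 0.1
main, off = 2.0 + sigma, -1.0
A = (np.diag(np.full(n, main)) + np.diag(np.full(n - 1, off), 1)
     + np.diag(np.full(n - 1, off), -1))

c = np.zeros(n); c[0], c[1], c[-1] = main, off, off   # wrapped (circulant) stencil
lam = np.fft.fft(c)                                   # circulant eigenvalues

def apply_Minv(r):
    return np.real(np.fft.ifft(np.fft.fft(r) / lam))

def pcg(A, b, tol=1e-10, maxit=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        step = rz / (p @ Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

b = np.random.default_rng(1).standard_normal(n)
x, iters = pcg(A, b)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```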
In Search of Structures: How Does the Mind Explore Infinity?
ERIC Educational Resources Information Center
Singer, Florence Mihaela; Voica, Cristian
2010-01-01
When reasoning about infinite sets, children seem to activate four categories of conceptual structures: geometric (g-structures), arithmetic (a-structures), fractal-type (f-structures), and density-type (d-structures). Students select different problem-solving strategies depending on the structure they recognize within the problem domain. They…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ochiai, Yoshihiro
Heat-conduction analysis under steady state without heat generation can easily be treated by the boundary element method. However, the case of heat conduction with heat generation can be solved approximately, without a domain integral, by an improved multiple-reciprocity boundary element method. The conventional multiple-reciprocity boundary element method is not suitable for complicated heat generation. In the improved multiple-reciprocity boundary element method, on the other hand, the domain integral in each step is divided into point, line, and area integrals. In order to solve the problem, the contour lines of heat generation, which approximate the actual heat generation, are used.
A first packet processing subdomain cluster model based on SDN
NASA Astrophysics Data System (ADS)
Chen, Mingyong; Wu, Weimin
2017-08-01
To address the packet-processing performance bottlenecks and controller downtime problems of current controller clusters, an SDN (Software Defined Network) controller is proposed that allocates a priority to each device in the SDN network. Each domain contains several network devices and a controller; the controller is responsible for managing the network equipment within its domain, and the switches perform data delivery based on the load of the controller when processing network-equipment data. The experimental results show that the model can effectively mitigate the risk of single-point failure of the controller, and can relieve the performance bottleneck of first-packet processing.
An Integrated Architecture for Engineering Problem Solving
1998-12-01
mentioned in the problem statement. In the next section, we describe the definition of qualitative state and our extension to Gizmo (Forbus & de Kleer) ... possible qualitative transitions (Figure 28: mapping from problem specification to qualitative states). We use Gizmo (Forbus, 1984b), developed by Ken Forbus, to perform the necessary qualitative analysis. The initial partial problem specification becomes the scenario for Gizmo. The domain knowledge
NASA Technical Reports Server (NTRS)
Chandrasekaran, B.; Josephson, J.; Herman, D.
1987-01-01
The current generation of languages for the construction of knowledge-based systems is criticized as being at too low a level of abstraction, and the need for higher-level languages for building problem-solving systems is advanced. A notion of generic information-processing tasks in knowledge-based problem solving is introduced. A toolset is described which can be used to build expert systems in a way that enhances intelligibility and productivity in knowledge acquisition and system construction. The power of these ideas is illustrated by paying special attention to a high-level language called DSPL. A description is given of how it was used in the construction of a system called MPA, which assists with planning in the domain of offensive counter air missions.
Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni
2016-01-01
How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a “specialized” domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the “community structure” of the ToH and their difficulties in executing so-called “counterintuitive” movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand—a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit to find effective solutions that, when unavailable, lead to problem solving deficits. Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits. PMID:27074140
Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni
2016-04-01
How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a "specialized" domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the "community structure" of the ToH and their difficulties in executing so-called "counterintuitive" movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand-a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit to find effective solutions that, when unavailable, lead to problem solving deficits. Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits.
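The role of subgoals as waypoints can be illustrated with a deliberately simple Tower of Hanoi planner: instead of searching directly from start to goal, the search is chained through an intermediate waypoint state. This toy breadth-first version only illustrates the waypoint idea; it is not the probabilistic inference model proposed in the study, and the waypoint chosen below is an assumption.

```python
from collections import deque

# Tower of Hanoi states: a tuple giving the peg (0, 1 or 2) of each disk,
# listed from largest (index 0) to smallest (last index).
def moves(state):
    tops = {}
    for disk, peg in enumerate(state):       # later (smaller) disks overwrite: top of peg
        tops[peg] = disk
    for src, disk in tops.items():
        for dst in range(3):
            if dst != src and tops.get(dst, -1) < disk:   # only onto larger disks / empty pegs
                new = list(state)
                new[disk] = dst
                yield tuple(new)

def bfs(start, goal):
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s); s = parent[s]
            return path[::-1]
        for t in moves(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)

def plan_through_subgoals(start, waypoints):
    plan, current = [start], start
    for w in waypoints:
        leg = bfs(current, w)
        plan.extend(leg[1:])
        current = w
    return plan

n = 4
start, goal = (0,) * n, (2,) * n
subgoal = (0,) + (1,) * (n - 1)          # classic waypoint: free the largest disk
print("direct plan length:     ", len(bfs(start, goal)) - 1)
print("via-subgoal plan length:", len(plan_through_subgoals(start, [subgoal, goal])) - 1)
```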
What's in a Domain: Understanding How Students Approach Questioning in History and Science
ERIC Educational Resources Information Center
Portnoy, Lindsay Blau
2013-01-01
During their education, students are presented with information across a variety of academic domains. How students ask questions as they learn has implications for understanding, retention, and problem solving. The current research investigates the influence of age and prior knowledge on the ways students approach questioning across history and…
What's in a Domain: Understanding How Students Approach Questioning in History and Science
ERIC Educational Resources Information Center
Portnoy, Lindsay Blau; Rabinowitz, Mitchell
2014-01-01
How students ask questions as they learn has implications for understanding, retention, and problem solving. The current research investigates the influence of domain, age, and previous experience with content on the ways students approach questioning across history and science texts. In 3 experiments, 3rd-, 8th-, and 10th-grade students in large…
Lesion mapping of social problem solving.
Barbey, Aron K; Colom, Roberto; Paul, Erick J; Chau, Aileen; Solomon, Jeffrey; Grafman, Jordan H
2014-10-01
Accumulating neuroscience evidence indicates that human intelligence is supported by a distributed network of frontal and parietal regions that enable complex, goal-directed behaviour. However, the contributions of this network to social aspects of intellectual function remain to be well characterized. Here, we report a human lesion study (n = 144) that investigates the neural bases of social problem solving (measured by the Everyday Problem Solving Inventory) and examine the degree to which individual differences in performance are predicted by a broad spectrum of psychological variables, including psychometric intelligence (measured by the Wechsler Adult Intelligence Scale), emotional intelligence (measured by the Mayer, Salovey, Caruso Emotional Intelligence Test), and personality traits (measured by the Neuroticism-Extraversion-Openness Personality Inventory). Scores for each variable were obtained, followed by voxel-based lesion-symptom mapping. Stepwise regression analyses revealed that working memory, processing speed, and emotional intelligence predict individual differences in everyday problem solving. A targeted analysis of specific everyday problem solving domains (involving friends, home management, consumerism, work, information management, and family) revealed psychological variables that selectively contribute to each. Lesion mapping results indicated that social problem solving, psychometric intelligence, and emotional intelligence are supported by a shared network of frontal, temporal, and parietal regions, including white matter association tracts that bind these areas into a coordinated system. The results support an integrative framework for understanding social intelligence and make specific recommendations for the application of the Everyday Problem Solving Inventory to the study of social problem solving in health and disease. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Coupled variational formulations of linear elasticity and the DPG methodology
NASA Astrophysics Data System (ADS)
Fuentes, Federico; Keith, Brendan; Demkowicz, Leszek; Le Tallec, Patrick
2017-11-01
This article presents a general approach akin to domain-decomposition methods to solve a single linear PDE, but where each subdomain of a partitioned domain is associated to a distinct variational formulation coming from a mutually well-posed family of broken variational formulations of the original PDE. It can be exploited to solve challenging problems in a variety of physical scenarios where stability or a particular mode of convergence is desired in a part of the domain. The linear elasticity equations are solved in this work, but the approach can be applied to other equations as well. The broken variational formulations, which are essentially extensions of more standard formulations, are characterized by the presence of mesh-dependent broken test spaces and interface trial variables at the boundaries of the elements of the mesh. This allows necessary information to be naturally transmitted between adjacent subdomains, resulting in coupled variational formulations which are then proved to be globally well-posed. They are solved numerically using the DPG methodology, which is especially crafted to produce stable discretizations of broken formulations. Finally, expected convergence rates are verified in two different and illustrative examples.
Naturally selecting solutions: the use of genetic algorithms in bioinformatics.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
2013-01-01
For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. Among the most common biologically inspired techniques are genetic algorithms (GAs), which take the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics-based problems.
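A minimal genetic-algorithm sketch follows, showing the selection, crossover and mutation loop the survey describes. The "one-max" bit-string fitness function and all parameters are illustrative assumptions, not tied to any particular bioinformatics task.

```python
import random

# Toy GA: evolve bit strings toward all ones ("one-max").
random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 40, 60, 100, 0.01

def fitness(genome):
    return sum(genome)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", GENOME_LEN)
```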
Digital program for solving the linear stochastic optimal control and estimation problem
NASA Technical Reports Server (NTRS)
Geyser, L. C.; Lehtinen, B.
1975-01-01
A computer program is described which solves the linear stochastic optimal control and estimation (LSOCE) problem by using a time-domain formulation. The LSOCE problem is defined as that of designing controls for a linear time-invariant system which is disturbed by white noise in such a way as to minimize a performance index which is quadratic in state and control variables. The LSOCE problem and solution are outlined; brief descriptions are given of the solution algorithms, and complete descriptions of each subroutine, including usage information and digital listings, are provided. A test case is included, as well as information on the IBM 7090-7094 DCS time and storage requirements.
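The record above describes a historical time-domain program; the two Riccati solves at the heart of any such linear stochastic optimal control and estimation (LQG) design can be sketched today in a few lines. The double-integrator plant, weights and noise intensities below are assumptions chosen only to make the example concrete.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Sketch of an LQG design: regulator gain K and Kalman filter gain L for a
# linear time-invariant plant disturbed by white noise.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.diag([10.0, 1.0])      # state weighting in the quadratic cost
R = np.array([[1.0]])         # control weighting
W = np.diag([0.0, 1.0])       # process-noise intensity
V = np.array([[0.1]])         # measurement-noise intensity

P = solve_continuous_are(A, B, Q, R)      # regulator Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x_hat

S = solve_continuous_are(A.T, C.T, W, V)  # dual (estimation) Riccati equation
Lg = S @ C.T @ np.linalg.inv(V)           # Kalman filter gain

print("regulator gain K:", K)
print("filter gain L:   ", Lg.ravel())
```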
An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-02-13
The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Kai; Fu, Shubin; Chung, Eric T.
The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns
NASA Technical Reports Server (NTRS)
Shaeffer, John
2008-01-01
Matrix methods for solving integral equations via direct-solve LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes to one million unknowns with thousands of right-hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by utilizing the numerical low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation for compressing the rank-deficient blocks of the system Z matrix, the L and U factors, the right-hand-side forcing function and the final current solution. This compressed matrix solution is applied to a frequency-domain EM solution of Maxwell's equations using a standard Method of Moments approach. Compressed matrix storage and operations count lead to orders-of-magnitude reductions in memory and run time.
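The Adaptive Cross Approximation (ACA) used for the block compression described above can be sketched in a few lines: a numerically low-rank block is built up from a handful of its rows and columns, never forming the whole block. The smooth kernel and cluster geometry below are illustrative assumptions, not the report's integral-equation blocks.

```python
import numpy as np

# Sketch of ACA with partial pivoting: compress a low-rank block into U @ V.
def aca(get_row, get_col, tol=1e-6, max_rank=50):
    U, V, used = [], [], set()
    row_idx = 0
    for _ in range(max_rank):
        used.add(row_idx)
        # Residual row = original row minus the current approximation's contribution.
        r = get_row(row_idx) - sum(u[row_idx] * v for u, v in zip(U, V))
        col_idx = int(np.argmax(np.abs(r)))
        if abs(r[col_idx]) < tol:
            break
        c = get_col(col_idx) - sum(v[col_idx] * u for u, v in zip(U, V))
        U.append(c / r[col_idx])
        V.append(r)
        c_abs = np.abs(c)
        c_abs[list(used)] = 0.0              # pick the next pivot row among unused rows
        row_idx = int(np.argmax(c_abs))
    return np.array(U).T, np.array(V)

# Well-separated point clusters give a numerically low-rank interaction block.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)
A = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

U, V = aca(lambda i: A[i, :].copy(), lambda j: A[:, j].copy())
err = np.linalg.norm(A - U @ V) / np.linalg.norm(A)
print("rank used:", U.shape[1], " relative error:", err)
```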
Generalized vector calculus on convex domain
NASA Astrophysics Data System (ADS)
Agrawal, Om P.; Xu, Yufeng
2015-06-01
In this paper, we apply recently proposed generalized integral and differential operators to develop generalized vector calculus and generalized variational calculus for problems defined over a convex domain. In particular, we present some generalization of Green's and Gauss divergence theorems involving some new operators, and apply these theorems to generalized variational calculus. For fractional power kernels, the formulation leads to fractional vector calculus and fractional variational calculus for problems defined over a convex domain. In special cases, when certain parameters take integer values, we obtain formulations for integer order problems. Two examples are presented to demonstrate applications of the generalized variational calculus which utilize the generalized vector calculus developed in the paper. The first example leads to a generalized partial differential equation and the second example leads to a generalized eigenvalue problem, both in two dimensional convex domains. We solve the generalized partial differential equation by using polynomial approximation. A special case of the second example is a generalized isoperimetric problem. We find an approximate solution to this problem. Many physical problems containing integer order integrals and derivatives are defined over arbitrary domains. We speculate that future problems containing fractional and generalized integrals and derivatives in fractional mechanics will be defined over arbitrary domains, and therefore, a general variational calculus incorporating a general vector calculus will be needed for these problems. This research is our first attempt in that direction.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
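The core of the frequency-domain trick described above is that the main linear system becomes diagonal (per frequency) after an FFT. The sketch below shows the single-filter case of that x-update under circular-convolution assumptions; the signal, filter and penalty parameter are illustrative, and the full multi-filter ADMM solver is not reproduced here.

```python
import numpy as np

# Sketch: solve (D^H D + rho*I) x = D^H s + rho*(z - u) in the Fourier domain,
# where D is convolution with a single filter d (so it is diagonal after an FFT).
rng = np.random.default_rng(0)
N, rho = 256, 1.0
d = np.zeros(N); d[:8] = rng.standard_normal(8)                     # short filter
x_true = np.zeros(N)
x_true[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)    # sparse code
s = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x_true)))        # s = d (*) x_true

z = np.zeros(N); u = np.zeros(N)            # ADMM auxiliary and dual variables
d_hat, s_hat = np.fft.fft(d), np.fft.fft(s)
rhs_hat = np.conj(d_hat) * s_hat + rho * np.fft.fft(z - u)
x_hat = rhs_hat / (np.abs(d_hat) ** 2 + rho)          # elementwise "matrix inversion"
x = np.real(np.fft.ifft(x_hat))

# Check the normal equations in the spatial domain (residual should be ~1e-12).
Dx = np.real(np.fft.ifft(d_hat * np.fft.fft(x)))
lhs = np.real(np.fft.ifft(np.conj(d_hat) * np.fft.fft(Dx))) + rho * x
rhs = np.real(np.fft.ifft(np.conj(d_hat) * s_hat)) + rho * (z - u)
print("normal-equation residual:", np.linalg.norm(lhs - rhs))
```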
Solving Partial Differential Equations on Overlapping Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henshaw, W D
2008-09-22
We discuss the solution of partial differential equations (PDEs) on overlapping grids. This is a powerful technique for efficiently solving problems in complex, possibly moving, geometry. An overlapping grid consists of a set of structured grids that overlap and cover the computational domain. By allowing the grids to overlap, grids for complex geometries can be more easily constructed. The overlapping grid approach can also be used to remove coordinate singularities by, for example, covering a sphere with two or more patches. We describe the application of the overlapping grid approach to a variety of different problems. These include the solution of incompressible fluid flows with moving and deforming geometry, the solution of high-speed compressible reactive flow with rigid bodies using adaptive mesh refinement (AMR), and the solution of the time-domain Maxwell's equations of electromagnetism.
NASA Astrophysics Data System (ADS)
Chen, Xudong
2010-07-01
This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging.
EUROPA2: Plan Database Services for Planning and Scheduling Applications
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Frank, Jeremy; Jonsson, Ari; McGann, Conor
2004-01-01
NASA missions require solving a wide variety of planning and scheduling problems with temporal constraints; simple resources such as robotic arms, communications antennae and cameras; complex replenishable resources such as memory, power and fuel; and complex constraints on geometry, heat and lighting angles. Planners and schedulers that solve these problems are used in ground tools as well as onboard systems. The diversity of planning problems and applications of planners and schedulers precludes a one-size fits all solution. However, many of the underlying technologies are common across planning domains and applications. We describe CAPR, a formalism for planning that is general enough to cover a wide variety of planning and scheduling domains of interest to NASA. We then describe EUROPA(sub 2), a software framework implementing CAPR. EUROPA(sub 2) provides efficient, customizable Plan Database Services that enable the integration of CAPR into a wide variety of applications. We describe the design of EUROPA(sub 2) from the perspective of both modeling, customization and application integration to different classes of NASA missions.
NASA Astrophysics Data System (ADS)
Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro
2017-04-01
The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. All these data are essential to build Earth evolution models and to reproduce many geophysical observables (e.g. elevation, gravity anomalies, travel-time data, heat flow, etc.), as well as to understand the relationships between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, the solution of many direct models is required. The main objective of this work is to contribute to the existing inversion techniques by improving the estimation of the elevation (topography) through the inclusion of a dynamic component arising from sub-lithospheric mantle flow. In order to do so, we implement an efficient Reduced Order Method (ROM) built upon classic Finite Elements. ROM allows the computational cost of solving a family of problems to be reduced significantly, for example all the direct models that are required in the solution of the inverse problem. The strategy of the method consists in creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. In order to check the Reduced Basis approach, we implemented the method in a 3D domain reproducing a portion of the Earth down to 400 km depth. Within the domain the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The Reduced Basis method is shown to be an extremely efficient solver for the Stokes equation in this context.
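The reduced-basis strategy described above can be sketched on a toy parameterized system: full solutions at a few training parameters are collected as snapshots, compressed with an SVD into a small basis, and new parameter values are then handled by Galerkin projection onto that basis. The affine operator A(mu) = A0 + mu*A1 below is an assumed stand-in for the parameterized Stokes problem (viscosity and density variations), not the actual geodynamic model.

```python
import numpy as np

# Sketch of the offline/online reduced-basis workflow.
rng = np.random.default_rng(0)
n = 400
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)               # SPD "background" operator
A1 = np.diag(np.linspace(1.0, 2.0, n))     # parameter-dependent part
b = rng.standard_normal(n)

# Offline stage: full solves at training parameters, then an SVD (POD) basis.
train_mus = np.linspace(0.5, 5.0, 8)
snapshots = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in train_mus])
Q, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = Q[:, :5]                               # keep 5 basis vectors

# Online stage: for a new parameter, solve only a 5x5 projected system.
# (In practice V.T @ A0 @ V and V.T @ A1 @ V would be precomputed offline.)
mu_new = 2.3
A_new = A0 + mu_new * A1
u_rb = V @ np.linalg.solve(V.T @ A_new @ V, V.T @ b)
u_full = np.linalg.solve(A_new, b)
print("relative reduced-basis error:",
      np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))
```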
Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm
NASA Technical Reports Server (NTRS)
LeTallec, Patrick; Tidriri, Moulay D.
1996-01-01
In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.
Possibilities of the particle finite element method for fluid-soil-structure interaction problems
NASA Astrophysics Data System (ADS)
Oñate, Eugenio; Celigueta, Miguel Angel; Idelsohn, Sergio R.; Salazar, Fernando; Suárez, Benjamín
2011-09-01
We present some developments in the particle finite element method (PFEM) for analysis of complex coupled problems in mechanics involving fluid-soil-structure interaction (FSSI). The PFEM uses an updated Lagrangian description to model the motion of nodes (particles) in both the fluid and the solid domains (the latter including soil/rock and structures). A mesh connects the particles (nodes) defining the discretized domain where the governing equations for each of the constituent materials are solved as in the standard FEM. The stabilization needed for dealing with an incompressible continuum is introduced via the finite calculus method. An incremental iterative scheme for the solution of the non-linear transient coupled FSSI problem is described. The procedure to model frictional contact conditions and material erosion at fluid-solid and solid-solid interfaces is described. We present several examples of application of the PFEM to solve FSSI problems such as the motion of rocks by water streams, the erosion of a river bed adjacent to a bridge foundation, the stability of breakwaters and constructions under sea waves, and the study of landslides.
A stable and accurate partitioned algorithm for conjugate heat transfer
NASA Astrophysics Data System (ADS)
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
A stable and accurate partitioned algorithm for conjugate heat transfer
Meng, F.; Banks, J. W.; Henshaw, W. D.; ...
2017-04-25
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
Multilevel semantic analysis and problem-solving in the flight-domain
NASA Technical Reports Server (NTRS)
Chien, R. T.
1982-01-01
The use of a knowledge-base architecture and planning control mechanisms to perform an intelligent monitoring task in the flight domain is addressed. The route level, the trajectory level, and parts of the aerodynamics level are demonstrated. Hierarchical planning and monitoring conceptual levels, function-directed mechanism rationalization, and the use of deep-level mechanism models for diagnosis of dependent failures are discussed.
Liu, Haorui; Yi, Fengyan; Yang, Heli
2016-01-01
The shuffled frog leaping algorithm (SFLA) easily falls into local optimum when it solves multioptimum function optimization problem, which impacts the accuracy and convergence speed. Therefore this paper presents grouped SFLA for solving continuous optimization problems combined with the excellent characteristics of cloud model transformation between qualitative and quantitative research. The algorithm divides the definition domain into several groups and gives each group a set of frogs. Frogs of each region search in their memeplex, and in the search process the algorithm uses the “elite strategy” to update the location information of existing elite frogs through cloud model algorithm. This method narrows the searching space and it can effectively improve the situation of a local optimum; thus convergence speed and accuracy can be significantly improved. The results of computer simulation confirm this conclusion. PMID:26819584
Proceedings of the Conference on Joint Problem Solving and Microcomputers.
1983-08-01
socio-cultural norm, not as "truth" about the domain, nor on the basis of formal properties attributed to the expert's mental model. According to Grif... the basis of the task studied or on the grounds of abstract hypotheses about the domain. Socio-cultural domain representations cannot rely on these... orthogonal to concrete vs. abstract. The concrete specific acts (some, genetically primary examples) occupy the top left cell. The set of socio-cultural
Compressible-Incompressible Two-Phase Flows with Phase Transition: Model Problem
NASA Astrophysics Data System (ADS)
Watanabe, Keiichi
2017-12-01
We study compressible and incompressible two-phase flows separated by a sharp interface with a phase transition and a surface tension. In particular, we consider the problem in R^N, where the Navier-Stokes-Korteweg equations are used in the upper domain and the Navier-Stokes equations are used in the lower domain. We prove the existence of R-bounded solution operator families for a resolvent problem arising from its model problem. According to Göts and Shibata (Asymptot Anal 90(3-4):207-236, 2014), the regularity of ρ_+ is W^1_q in space, but to solve the kinetic equation: u_Γ \cdot n_t = [[ρ u
NASA Technical Reports Server (NTRS)
Voigt, Kerstin
1992-01-01
We present MENDER, a knowledge-based system that implements software design techniques specialized to automatically compile generate-and-patch problem solvers that satisfy global resource-assignment problems. We provide empirical evidence of the superior performance of generate-and-patch over generate-and-test, even with constrained generation, for a global constraint in the domain of '2D-floorplanning'. For a second constraint in '2D-floorplanning' we show that, even when it is possible to incorporate the constraint into a constrained generator, a generate-and-patch problem solver may satisfy the constraint more rapidly. We also briefly summarize how an extended version of our system applies to a constraint in the domain of 'multiprocessor scheduling'.
Solving time-dependent two-dimensional eddy current problems
NASA Technical Reports Server (NTRS)
Lee, Min Eig; Hariharan, S. I.; Ida, Nathan
1990-01-01
Transient eddy-current calculations are presented for an EM wave-scattering and field-penetration case in which a two-dimensional transverse magnetic field is incident on a good (i.e., not perfect) and infinitely long conductor. The problem thus posed is of the initial boundary-value interface type, where the boundary of the conductor constitutes the interface. A potential function is used for time-domain modeling of the situation, and finite-difference time-domain techniques are used to march the potential function explicitly in time. Attention is given to the case of LF radiation conditions.
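Inside a good conductor the field penetration is governed by a diffusion-type equation for the potential, so the explicit time marching mentioned above reduces to a forward-in-time, centered-in-space update. The sketch below marches such a potential on a 2D grid with an imposed value on one face; the geometry, material values and forcing are illustrative assumptions only.

```python
import numpy as np

# Sketch of explicit FDTD-style marching of a diffusion-type potential:
# dA/dt = (1/(mu*sigma)) * laplacian(A) inside a conducting region.
nx = ny = 61
h = 1.0e-2                       # grid spacing (m)
mu_sigma = 4e-7 * np.pi * 5.8e7  # mu0 * sigma for a copper-like conductor
alpha = 1.0 / mu_sigma           # magnetic diffusivity
dt = 0.2 * h**2 / (4 * alpha)    # safely below the explicit stability limit

A = np.zeros((nx, ny))
for step in range(2000):
    A[0, :] = 1.0                # incident-field value imposed on one face
    lap = (A[2:, 1:-1] + A[:-2, 1:-1] + A[1:-1, 2:] + A[1:-1, :-2]
           - 4.0 * A[1:-1, 1:-1]) / h**2
    A[1:-1, 1:-1] += dt * alpha * lap

print("penetration profile along the mid-line:", np.round(A[:8, ny // 2], 3))
```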
Scaffolding for solving problem in static fluid: A case study
NASA Astrophysics Data System (ADS)
Koes-H, Supriyono; Muhardjito, Wijaya, Charisma P.
2018-01-01
Problem solving is one of the basic abilities that should be developed through learning physics. However, students still face difficulties in non-routine problem solving, and efforts must be made to identify those difficulties and find ways to resolve them. Diagnosing students' performance in problem solving can reveal their difficulties, and various instructional scaffolding supports can then be used to eliminate them. This case study aimed to describe students' difficulties in solving static fluid problems and the effort to overcome such difficulties through different scaffolding supports. The research subjects consisted of four 10th-grade students of SMAN 4 Malang (a public senior high school) selected by purposive sampling. Data on students' difficulties were collected via a think-aloud protocol applied to the students' performance in solving non-routine static fluid problems. Subsequently, combined scaffolding supports were given to the students based on their particular difficulties. The findings showed several conceptual difficulties among the students when solving static fluid problems, i.e., the use of the buoyant force formula, the determination of all forces acting on a plane in a fluid, the resultant force on a plane in a fluid, and the determination of the depth of a plane in a fluid. One way to overcome such conceptual difficulties is to provide a combination of appropriate scaffolding supports, namely question prompts with specific domains, simulation, and parallel modeling. The combination can remedy students' lack of knowledge and improve their conceptual understanding, as well as help them find solutions by linking the problems with their prior knowledge. Based on the findings, teachers are advised to diagnose their students' difficulties so that they can provide an appropriate combination of scaffolding to support the students in finding solutions.
Innovation design of medical equipment based on TRIZ.
Gao, Changqing; Guo, Leiming; Gao, Fenglan; Yang, Bo
2015-01-01
Medical equipment is closely related to personal health and safety, which makes it a concern for equipment users. Furthermore, competition among medical equipment manufacturers is intense, so innovative design is the key to success for these enterprises. The design of medical equipment usually spans vastly different domains of knowledge, and the application of modern design methodology to medical equipment and technology invention is an urgent requirement. TRIZ (a Russian abbreviation that can be translated as 'theory of inventive problem solving') originated in Russia and contains problem-solving methods derived from worldwide patent analysis, including the Conflict Matrix, Substance-Field Analysis, Standard Solutions, and Effects. As an engineering example, an infusion system is analyzed and redesigned with TRIZ; the resulting innovative idea frees the caretaker from having to watch the infusion bag. The research in this paper shows the process of applying TRIZ to medical device invention and demonstrates that TRIZ is an inventive problem-solving methodology that can be used widely in medical device development.
Improving the learning of clinical reasoning through computer-based cognitive representation.
Wu, Bian; Wang, Minhong; Johnson, Janice M; Grotzer, Tina A
2014-01-01
Objective: Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Methods: Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. Results: A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' reports of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. Conclusions: The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pautz, Shawn D.; Bailey, Teresa S.
Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
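As a hedged illustration of what "domain overloading" means in this context (not the authors' scheduling algorithm), the snippet below assigns more than one subdomain to each processor in round-robin fashion, so a sweep always has a locally ready subdomain to work on while another waits for upstream data. The function and its parameters are hypothetical.

```python
def overload_partition(n_subdomains, n_procs, overload=2):
    """Assign `overload` subdomains to each processor (round-robin).

    With more than one subdomain per rank, a transport sweep can switch to
    another local subdomain while one is blocked waiting on its upstream
    neighbours, improving parallel efficiency.
    """
    assert n_subdomains == n_procs * overload, "toy example assumes an exact fit"
    return {p: [p + k * n_procs for k in range(overload)] for p in range(n_procs)}

# 8 subdomains spread over 4 ranks, 2 per rank
print(overload_partition(8, 4))   # {0: [0, 4], 1: [1, 5], 2: [2, 6], 3: [3, 7]}
```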
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
Pautz, Shawn D.; Bailey, Teresa S.
2016-11-29
Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration, and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, their complexity has meant that they have never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing BEM formulations in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, so that not only can problems of layered media and soil-structure interaction be analyzed, but large problems can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
Dix, Annika; van der Meer, Elke
2015-04-01
This study investigates cognitive resource allocation dependent on fluid and numerical intelligence in arithmetic/algebraic tasks varying in difficulty. Sixty-six 11th grade students participated in a mathematical verification paradigm, while pupil dilation as a measure of resource allocation was collected. Students with high fluid intelligence solved the tasks faster and more accurately than those with average fluid intelligence, as did students with high compared to average numerical intelligence. However, fluid intelligence sped up response times only in students with average but not high numerical intelligence. Further, high fluid but not numerical intelligence led to greater task-related pupil dilation. We assume that fluid intelligence serves as a domain-general resource that helps to tackle problems for which domain-specific knowledge (numerical intelligence) is missing. The allocation of this resource can be measured by pupil dilation. Copyright © 2014 Society for Psychophysiological Research.
Mapping university students' epistemic framing of computational physics using network analysis
NASA Astrophysics Data System (ADS)
Bodin, Madelen
2012-06-01
Solving physics problems in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and belief components are referred to here as epistemic elements, which together represent the students' epistemic framing of the situation. The purpose of this study was to investigate university physics students' epistemic framing when solving and visualizing a physics problem using a particle-spring model system. Students' epistemic framings are analyzed before and after the task using a network analysis approach on interview transcripts, producing visual representations as epistemic networks. The results show that students change their epistemic framing from a modeling task, with expectancies about learning programming, to a physics task, in which they are challenged to use physics principles and conservation laws in order to troubleshoot and understand their simulations. This implies that the task, even though it does not introduce any new physics, helps the students to develop a more coherent view of the importance of using physics principles in problem solving. The network analysis method used in this study is shown to give intelligible representations of the students' epistemic framing and is proposed as a useful method for analyzing textual data.
Social cognition and social problem solving abilities in individuals with alcohol use disorder.
Schmidt, Tobias; Roser, Patrik; Juckel, Georg; Brüne, Martin; Suchan, Boris; Thoma, Patrizia
2016-11-01
Up to now, little is known about higher order cognitive abilities like social cognition and social problem solving abilities in alcohol-dependent patients. However, impairments in these domains lead to an increased probability for relapse and are thus highly relevant in treatment contexts. This cross-sectional study assessed distinct aspects of social cognition and social problem solving in 31 hospitalized patients with alcohol use disorder (AUD) and 30 matched healthy controls (HC). Three ecologically valid scenario-based tests were used to gauge the ability to infer the mental state of story characters in complicated interpersonal situations, the capacity to select the best problem solving strategy among other less optimal alternatives, and the ability to freely generate appropriate strategies to handle difficult interpersonal conflicts. Standardized tests were used to assess executive function, attention, trait empathy, and memory, and correlations were computed between measures of executive function, attention, trait empathy, and tests of social problem solving. AUD patients generated significantly fewer socially sensitive and practically effective solutions for problematic interpersonal situations than the HC group. Furthermore, patients performed significantly worse when asked to select the best alternative among a list of presented alternatives for scenarios containing sarcastic remarks and had significantly more problems to interpret sarcastic remarks in difficult interpersonal situations. These specific patterns of impairments should be considered in treatment programs addressing impaired social skills in individuals with AUD.
NASA Technical Reports Server (NTRS)
Abolhassani, J. S.; Tiwari, S. N.
1983-01-01
The feasibility of the method of lines for solutions of physical problems requiring nonuniform grid distributions is investigated. To attain this, it is also necessary to investigate the stiffness characteristics of the pertinent equations. For specific applications, the governing equations considered are those for viscous, incompressible, two dimensional and axisymmetric flows. These equations are transformed from the physical domain having a variable mesh to a computational domain with a uniform mesh. The two governing partial differential equations are the vorticity and stream function equations. The method of lines is used to solve the vorticity equation and the successive over relaxation technique is used to solve the stream function equation. The method is applied to three laminar flow problems: the flow in ducts, curved-wall diffusers, and a driven cavity. Results obtained for different flow conditions are in good agreement with available analytical and numerical solutions. The viability and validity of the method of lines are demonstrated by its application to Navier-Stokes equations in the physical domain having a variable mesh.
Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0
NASA Technical Reports Server (NTRS)
Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine
2004-01-01
We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.
Fields, Chris
2013-08-01
The theory of computation and category theory both employ arrow-based notations that suggest that the basic metaphor "state changes are like motions" plays a fundamental role in all mathematical reasoning involving formal manipulations. If this is correct, structure-mapping inferences implemented by the pre-motor action planning system can be expected to be involved in solving any mathematics problems not solvable by table lookups and number line manipulations alone. Available functional imaging studies of multi-digit arithmetic, algebra, geometry and calculus problem solving are consistent with this expectation.
Visser, Marieke M; Heijenbrok-Kal, Majanka H; Spijker, Adriaan Van't; Oostra, Kristine M; Busschbach, Jan J; Ribbers, Gerard M
2015-08-01
To investigate whether patients with high and low depression scores after stroke use different coping strategies and problem-solving skills and whether these variables are related to psychosocial health-related quality of life (HRQOL) independent of depression. Cross-sectional study. Two rehabilitation centers. Patients participating in outpatient stroke rehabilitation (N=166; mean age, 53.06±10.19y; 53% men; median time poststroke, 7.29mo). Not applicable. Coping strategy was measured using the Coping Inventory for Stressful Situations; problem-solving skills were measured using the Social Problem Solving Inventory-Revised: Short Form; depression was assessed using the Center for Epidemiologic Studies Depression Scale; and HRQOL was measured using the five-level EuroQol five-dimensional questionnaire and the Stroke-Specific Quality of Life Scale. Independent samples t tests and multivariable regression analyses, adjusted for patient characteristics, were performed. Compared with patients with low depression scores, patients with high depression scores used less positive problem orientation (P=.002) and emotion-oriented coping (P<.001) and more negative problem orientation (P<.001) and avoidance style (P<.001). Depression score was related to all domains of both general HRQOL (visual analog scale: β=-.679; P<.001; utility: β=-.009; P<.001) and stroke-specific HRQOL (physical HRQOL: β=-.020; P=.001; psychosocial HRQOL: β=-.054, P<.001; total HRQOL: β=-.037; P<.001). Positive problem orientation was independently related to psychosocial HRQOL (β=.086; P=.018) and total HRQOL (β=.058; P=.031). Patients with high depression scores use different coping strategies and problem-solving skills than do patients with low depression scores. Independent of depression, positive problem-solving skills appear to be most significantly related to better HRQOL. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
LORENE: Spectral methods differential equations solver
NASA Astrophysics Data System (ADS)
Gourgoulhon, Eric; Grandclément, Philippe; Marck, Jean-Alain; Novak, Jérôme; Taniguchi, Keisuke
2016-08-01
LORENE (Langage Objet pour la RElativité NumériquE) solves various problems arising in numerical relativity, and more generally in computational astrophysics. It is a set of C++ classes and provides tools to solve partial differential equations by means of multi-domain spectral methods. LORENE classes implement basic structures such as arrays and matrices, but also abstract mathematical objects, such as tensors, and astrophysical objects, such as stars and black holes.
Prediction of protein-protein interaction network using a multi-objective optimization approach.
Chowdhury, Archana; Rakshit, Pratyusha; Konar, Amit
2016-06-01
Protein-Protein Interactions (PPIs) are very important as they coordinate almost all cellular processes. This paper attempts to formulate PPI prediction problem in a multi-objective optimization framework. The scoring functions for the trial solution deal with simultaneous maximization of functional similarity, strength of the domain interaction profiles, and the number of common neighbors of the proteins predicted to be interacting. The above optimization problem is solved using the proposed Firefly Algorithm with Nondominated Sorting. Experiments undertaken reveal that the proposed PPI prediction technique outperforms existing methods, including gene ontology-based Relative Specific Similarity, multi-domain-based Domain Cohesion Coupling method, domain-based Random Decision Forest method, Bagging with REP Tree, and evolutionary/swarm algorithm-based approaches, with respect to sensitivity, specificity, and F1 score.
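The multi-objective selection step can be illustrated with a plain Pareto (nondominated) sorting routine like the sketch below; it is not the paper's Firefly Algorithm with Nondominated Sorting, and the example objective tuples are made up.

```python
def nondominated_sort(scores):
    """Rank candidate solutions by Pareto dominance (maximization).

    `scores` is a list of objective tuples, e.g. (functional similarity,
    domain-profile strength, common-neighbour count).  Returns a list of
    fronts, each a list of indices; front 0 is the nondominated set.
    """
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    remaining = set(range(len(scores)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(scores[j], scores[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Three hypothetical protein pairs scored on three objectives
print(nondominated_sort([(0.9, 0.2, 3), (0.5, 0.8, 5), (0.4, 0.1, 1)]))
```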
Fuchs, Lynn S; Geary, David C; Compton, Donald L; Fuchs, Douglas; Hamlett, Carol L; Seethaler, Pamela M; Bryant, Joan D; Schatschneider, Christopher
2010-11-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (N = 280; mean age = 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations, and word problems in fall and then reassessed on procedural calculations and word problems in spring. Development was indexed by latent change scores, and the interplay between numerical and domain-general abilities was analyzed by multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of procedural calculations and word problems development. Yet, for procedural calculations development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for word problems development, the set of domain-general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive.
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.; Lamara, S.
2016-02-01
We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, unflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since kernels are stored on disk, it can be repeated many times for different regularization parameters without need to solve the forward problem, making the approach accessible to Occam's method. Changes of choice of misfit functional, weighting of data and selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion) which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern FORTRAN language using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.
Investigating and developing engineering students' mathematical modelling and problem-solving skills
NASA Astrophysics Data System (ADS)
Wedelin, Dag; Adawi, Tom; Jahan, Tabassum; Andersson, Sven
2015-09-01
How do engineering students approach mathematical modelling problems and how can they learn to deal with such problems? In the context of a course in mathematical modelling and problem solving, and using a qualitative case study approach, we found that the students had little prior experience of mathematical modelling. They were also inexperienced problem solvers, unaware of the importance of understanding the problem and exploring alternatives, and impeded by inappropriate beliefs, attitudes and expectations. Important impacts of the course belong to the metacognitive domain. The nature of the problems, the supervision and the follow-up lectures were emphasised as contributing to the impacts of the course, where students show major development. We discuss these empirical results in relation to a framework for mathematical thinking and the notion of cognitive apprenticeship. Based on the results, we argue that this kind of teaching should be considered in the education of all engineers.
ERIC Educational Resources Information Center
Slabon, Wayne A.; Richards, Randy L.; Dennen, Vanessa P.
2014-01-01
In this paper, we introduce restorying, a pedagogical approach based on social constructivism that employs successive iterations of rewriting and discussing personal, student-generated, domain-relevant stories to promote conceptual application, critical thinking, and ill-structured problem solving skills. Using a naturalistic, qualitative case…
FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)
NASA Astrophysics Data System (ADS)
Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.
2011-04-01
A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. Discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs with GPU-to-CPU speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and of a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements, on an inexpensive desktop computer.
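For readers unfamiliar with the governing equation, here is a minimal, hedged single-spin integrator for the Landau-Lifshitz form of the dynamics that FastMag solves; it uses a naive explicit Euler step with renormalization and has nothing to do with FastMag's finite-element or GPU machinery. The parameter values are arbitrary.

```python
import numpy as np

def llg_step(m, h_eff, dt, gamma=1.0, alpha=0.1):
    """One explicit step for a single unit magnetization vector using the
    Landau-Lifshitz form:
        dm/dt = -gamma m x H - alpha * gamma m x (m x H)
    """
    precession = -gamma * np.cross(m, h_eff)
    damping = -alpha * gamma * np.cross(m, np.cross(m, h_eff))
    m_new = m + dt * (precession + damping)
    return m_new / np.linalg.norm(m_new)   # keep |m| = 1

m = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    m = llg_step(m, np.array([0.0, 0.0, 1.0]), dt=0.01)
print(m)   # the moment precesses and relaxes toward the +z field direction
```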
Exploration versus exploitation in space, mind, and society
Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.
2015-01-01
Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope. PMID:25487706
Caracciolo, Sergio; Sicuro, Gabriele
2014-10-01
We discuss the equivalence relation between the Euclidean bipartite matching problem on the line and on the circumference and the Brownian bridge process on the same domains. The equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. The properties of the average cost and correlation functions are discussed.
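A concrete entry point to the combinatorial problem discussed here: for points on a segment with a convex cost |x − y|^p (p ≥ 1), an optimal bipartite matching pairs the k-th smallest point of one color with the k-th smallest of the other, so the assignment problem collapses to a sort. The sketch below computes that ordered-matching cost; it does not reproduce the paper's Brownian-bridge analysis.

```python
import numpy as np

def matching_cost_on_line(red, blue, p=1):
    """Optimal Euclidean bipartite matching cost for points on a line.

    For convex costs |x - y|^p with p >= 1, the ordered (monotone) matching
    obtained by sorting both colour classes is optimal.
    """
    red, blue = np.sort(np.asarray(red)), np.sort(np.asarray(blue))
    return float(np.sum(np.abs(red - blue) ** p))

rng = np.random.default_rng(0)
print(matching_cost_on_line(rng.random(1000), rng.random(1000)))
```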
Optimal Planning and Problem-Solving
NASA Technical Reports Server (NTRS)
Clemet, Bradley; Schaffer, Steven; Rabideau, Gregg
2008-01-01
CTAEMS MDP Optimal Planner is problem-solving software designed to command a single spacecraft/rover, or a team of spacecraft/rovers, to perform the best action possible at all times according to an abstract model of the spacecraft/rover and its environment. It may also be useful in solving logistical problems encountered in commercial applications such as shipping and manufacturing. The planner reasons around uncertainty according to specified probabilities of outcomes, using a plan hierarchy to avoid exploring certain kinds of suboptimal actions. Also, planned actions are calculated as the state-action space is expanded, rather than afterward, to reduce by an order of magnitude the processing time and memory used. The software solves planning problems with actions that can execute concurrently, that have uncertain duration and quality, and that have functional dependencies on others that affect quality. These problems are modeled in a hierarchical planning language called C_TAEMS, a derivative of the TAEMS language for specifying domains for the DARPA Coordinators program. In realistic environments, actions often have uncertain outcomes and can have complex relationships with other tasks. The planner approaches problems by considering all possible actions that may be taken from any state reachable from a given initial state, and from within the constraints of a given task hierarchy that specifies what tasks may be performed by which team member.
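As a hedged, generic illustration of the kind of MDP reasoning such a planner performs (not the C_TAEMS hierarchy or the planner's state-space expansion strategy), here is a plain value-iteration routine with a two-state toy task; all names are invented for the example.

```python
def value_iteration(states, actions, transition, reward, gamma=0.95, tol=1e-6):
    """Compute optimal state values for a small discrete MDP.

    transition(s, a) -> list of (probability, next_state) pairs
    reward(s, a)     -> immediate expected reward
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy task: 'work' finishes a pending activity with probability 0.8
states = ["pending", "done"]
actions = lambda s: ["work", "wait"] if s == "pending" else ["wait"]
transition = lambda s, a: ([(0.8, "done"), (0.2, "pending")]
                           if (s, a) == ("pending", "work") else [(1.0, s)])
reward = lambda s, a: 1.0 if s == "done" else 0.0
print(value_iteration(states, actions, transition, reward))
```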
NASA Astrophysics Data System (ADS)
Navas, Pedro; Sanavia, Lorenzo; López-Querol, Susana; Yu, Rena C.
2017-12-01
Solving dynamic problems for fluid-saturated porous media in the large deformation regime is an interesting but complex issue. An implicit time integration scheme is herein developed within the framework of the u-w (solid displacement-relative fluid displacement) formulation of Biot's equations. In particular, liquid-water-saturated porous media are considered, and the linearization of the linear momentum equations taking into account all the inertia terms for both the solid and fluid phases is presented for the first time. The spatial discretization is carried out through a meshfree method, in which the shape functions are based on the principle of local maximum entropy (LME). The methodology is first validated with the dynamic consolidation of a soil column and the plastic shear band formation in a square domain loaded by a rigid footing. The feasibility of this new numerical approach for solving large deformation dynamic problems is finally demonstrated through its application to an embankment problem subjected to an earthquake.
The Role of Motion Concepts in Understanding Non-Motion Concepts
Khatin-Zadeh, Omid; Banaruee, Hassan; Khoshsima, Hooshang; Marmolejo-Ramos, Fernando
2017-01-01
This article discusses a specific type of metaphor in which an abstract non-motion domain is described in terms of a motion event. Abstract non-motion domains are inherently different from concrete motion domains. However, motion domains are used to describe abstract non-motion domains in many metaphors. Three main reasons are suggested for the suitability of motion events in such metaphorical descriptions. Firstly, motion events usually have high degrees of concreteness. Secondly, motion events are highly imageable. Thirdly, components of any motion event can be imagined almost simultaneously within a three-dimensional space. These three characteristics make motion events suitable domains for describing abstract non-motion domains, and facilitate the process of online comprehension throughout language processing. Extending the main point into the field of mathematics, this article discusses the process of transforming abstract mathematical problems into imageable geometric representations within the three-dimensional space. This strategy is widely used by mathematicians to solve highly abstract and complex problems. PMID:29240715
NASA Astrophysics Data System (ADS)
Sanan, P.; Schnepp, S. M.; May, D.; Schenk, O.
2014-12-01
Geophysical applications require efficient forward models for non-linear Stokes flow on high resolution spatio-temporal domains. The bottleneck in applying the forward model is solving the linearized, discretized Stokes problem which takes the form of a large, indefinite (saddle point) linear system. Due to the heterogeniety of the effective viscosity in the elliptic operator, devising effective preconditioners for saddle point problems has proven challenging and highly problem-dependent. Nevertheless, at least three approaches show promise for preconditioning these difficult systems in an algorithmically scalable way using multigrid and/or domain decomposition techniques. The first is to work with a hierarchy of coarser or smaller saddle point problems. The second is to use the Schur complement method to decouple and sequentially solve for the pressure and velocity. The third is to use the Schur decomposition to devise preconditioners for the full operator. These involve sub-solves resembling inexact versions of the sequential solve. The choice of approach and sub-methods depends crucially on the motivating physics, the discretization, and available computational resources. Here we examine the performance trade-offs for preconditioning strategies applied to idealized models of mantle convection and lithospheric dynamics, characterized by large viscosity gradients. Due to the arbitrary topological structure of the viscosity field in geodynamical simulations, we utilize low order, inf-sup stable mixed finite element spatial discretizations which are suitable when sharp viscosity variations occur in element interiors. Particular attention is paid to possibilities within the decoupled and approximate Schur complement factorization-based monolithic approaches to leverage recently-developed flexible, communication-avoiding, and communication-hiding Krylov subspace methods in combination with `heavy' smoothers, which require solutions of large per-node sub-problems, well-suited to solution on hybrid computational clusters. To manage the combinatorial explosion of solver options (which include hybridizations of all the approaches mentioned above), we leverage the modularity of the PETSc library.
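The "decouple and sequentially solve" idea can be shown in a few lines with a dense toy saddle-point system; real geodynamic solvers replace every direct solve below with preconditioned, inexact Krylov iterations, so this is only a structural sketch with made-up matrices.

```python
import numpy as np

def solve_stokes_schur(A, B, f, g):
    """Solve the saddle-point system [[A, B^T], [B, 0]] [u; p] = [f; g]
    by the (exact, dense) Schur-complement method: eliminate the velocity,
    solve S p = B A^{-1} f - g with S = B A^{-1} B^T, then back-substitute.
    """
    Ainv_f = np.linalg.solve(A, f)
    Ainv_Bt = np.linalg.solve(A, B.T)
    S = B @ Ainv_Bt                            # Schur complement
    p = np.linalg.solve(S, B @ Ainv_f - g)     # pressure solve
    u = np.linalg.solve(A, f - B.T @ p)        # velocity back-substitution
    return u, p

# Small symmetric A and full-rank B as a stand-in for a discretized problem
rng = np.random.default_rng(1)
A = np.eye(6) * 4 + rng.random((6, 6)) * 0.1
A = (A + A.T) / 2
B = rng.random((2, 6))
u, p = solve_stokes_schur(A, B, rng.random(6), rng.random(2))
print(u, p)
```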
Mental additions and verbal-domain interference in children with developmental dyscalculia.
Mammarella, Irene C; Caviola, Sara; Cornoldi, Cesare; Lucangeli, Daniela
2013-09-01
This study examined the involvement of verbal and visuo-spatial domains in solving addition problems with carrying in a sample of children diagnosed with developmental dyscalculia (DD) divided into two groups: (i) those with DD alone and (ii) those with DD and dyslexia. Age and stage matched typically developing (TD) children were also studied. The addition problems were presented horizontally or vertically and associated with verbal or visuo-spatial information. Study results showed that DD children's performance on mental calculation tasks was more impaired when they tackled horizontally presented addition problems compared to vertically presented ones that are associated to verbal domain involvement. The performance pattern in the two DD groups was found to be similar. The theoretical, clinical and educational implications of these findings are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Unified Lambert Tool for Massively Parallel Applications in Space Situational Awareness
NASA Astrophysics Data System (ADS)
Woollands, Robyn M.; Read, Julie; Hernandez, Kevin; Probe, Austin; Junkins, John L.
2018-03-01
This paper introduces a parallel-compiled tool that combines several of our recently developed methods for solving the perturbed Lambert problem using modified Chebyshev-Picard iteration. This tool (unified Lambert tool) consists of four individual algorithms, each of which is unique and better suited for solving a particular type of orbit transfer. The first is a Keplerian Lambert solver, which is used to provide a good initial guess (warm start) for solving the perturbed problem. It is also used to determine the appropriate algorithm to call for solving the perturbed problem. The arc length or true anomaly angle spanned by the transfer trajectory is the parameter that governs the automated selection of the appropriate perturbed algorithm, and is based on the respective algorithm convergence characteristics. The second algorithm solves the perturbed Lambert problem using the modified Chebyshev-Picard iteration two-point boundary value solver. This algorithm does not require a Newton-like shooting method and is the most efficient of the perturbed solvers presented herein, however the domain of convergence is limited to about a third of an orbit and is dependent on eccentricity. The third algorithm extends the domain of convergence of the modified Chebyshev-Picard iteration two-point boundary value solver to about 90% of an orbit, through regularization with the Kustaanheimo-Stiefel transformation. This is the second most efficient of the perturbed set of algorithms. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver for solving multiple revolution perturbed transfers. This method does require "shooting" but differs from Newton-like shooting methods in that it does not require propagation of a state transition matrix. The unified Lambert tool makes use of the General Mission Analysis Tool and we use it to compute thousands of perturbed Lambert trajectories in parallel on the Space Situational Awareness computer cluster at the LASR Lab, Texas A&M University. We demonstrate the power of our tool by solving a highly parallel example problem, that is the generation of extremal field maps for optimal spacecraft rendezvous (and eventual orbit debris removal). In addition we demonstrate the need for including perturbative effects in simulations for satellite tracking or data association. The unified Lambert tool is ideal for but not limited to space situational awareness applications.
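The flavor of Chebyshev-Picard iteration can be conveyed with a scalar initial value problem: represent the trajectory by its values at Chebyshev points and repeatedly replace it with the integral of the right-hand side evaluated along the current guess. This hedged sketch is far simpler than the perturbed two-point boundary value solvers described above and uses NumPy's Chebyshev utilities rather than the authors' formulation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev_ivp(f, x0, t0, t1, order=32, iters=30):
    """Picard iteration on Chebyshev nodes for the scalar IVP
    x' = f(t, x), x(t0) = x0 (illustration only, not MCPI proper)."""
    # Chebyshev points mapped from [-1, 1] to [t0, t1]
    tau = np.cos(np.pi * np.arange(order + 1) / order)
    t = 0.5 * (t1 - t0) * (tau + 1.0) + t0
    x = np.full_like(t, x0, dtype=float)
    for _ in range(iters):
        # fit f(t, x(t)) with a Chebyshev series and integrate it exactly
        series = C.Chebyshev.fit(t, f(t, x), deg=order)
        integral = series.integ()
        x = x0 + integral(t) - integral(t0)
    return t, x

# Test on x' = -x, x(0) = 1; compare against the exact solution exp(-t)
t, x = picard_chebyshev_ivp(lambda t, x: -x, 1.0, 0.0, 2.0)
print(np.max(np.abs(x - np.exp(-t))))   # should be tiny
```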
An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.
Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur
2017-01-01
Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.
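The nested structure that makes bilevel problems expensive is easy to see in code: every upper-level (leader) evaluation requires an inner optimization of the lower-level (follower) problem. The toy sketch below uses plain random search at both levels on an invented analytic problem; it is not the BLMA algorithm.

```python
import numpy as np

def lower_level_best(x, rng, samples=200):
    """Follower: for a fixed leader decision x, minimize (y - x)^2 over y
    by simple random search (stand-in for the lower-level local search)."""
    ys = rng.uniform(-5, 5, samples)
    return ys[np.argmin((ys - x) ** 2)]

def bilevel_random_search(seed=0, samples=200):
    """Nested sketch of bilevel optimization: the leader objective
    F(x, y*) is only evaluated at the follower's optimum y*(x).
    Toy problem: F(x, y) = (x - 2)^2 + y^2 with y*(x) ~= x, so the
    true bilevel optimum is near x = 1."""
    rng = np.random.default_rng(seed)
    best_x, best_F = None, np.inf
    for x in rng.uniform(-5, 5, samples):
        y = lower_level_best(x, rng)          # enforce follower optimality
        F = (x - 2) ** 2 + y ** 2             # leader objective
        if F < best_F:
            best_x, best_F = x, F
    return best_x, best_F

print(bilevel_random_search())
```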
NASA Astrophysics Data System (ADS)
Belyaev, V. A.; Shapeev, V. P.
2017-10-01
New versions of the collocations and least squares method of high-order accuracy are proposed and implemented for the numerical solution of the boundary value problems for the biharmonic equation in non-canonical domains. The solution of the biharmonic equation is used for simulating the stress-strain state of an isotropic plate under the action of transverse load. The differential problem is projected into a space of fourth-degree polynomials by the CLS method. The boundary conditions for the approximate solution are put down exactly on the boundary of the computational domain. The versions of the CLS method are implemented on the grids which are constructed in two different ways. It is shown that the approximate solution of problems converges with high order. Thus it matches with high accuracy with the analytical solution of the test problems in the case of known solution in the numerical experiments on the convergence of the solution of various problems on a sequence of grids.
A system-approach to the elastohydrodynamic lubrication point-contact problem
NASA Technical Reports Server (NTRS)
Lim, Sang Gyu; Brewe, David E.
1991-01-01
The classical EHL (elastohydrodynamic lubrication) point contact problem is solved using a new system-approach, similar to that introduced by Houpert and Hamrock for the line-contact problem. Introducing a body-fitted coordinate system, the troublesome free-boundary is transformed to a fixed domain. The Newton-Raphson method can then be used to determine the pressure distribution and the cavitation boundary subject to the Reynolds boundary condition. This method provides an efficient and rigorous way of solving the EHL point contact problem with the aid of a supercomputer and a promising method to deal with the transient EHL point contact problem. A typical pressure distribution and film thickness profile are presented and the minimum film thicknesses are compared with the solution of Hamrock and Dowson. The details of the cavitation boundaries for various operating parameters are discussed.
High-order Two-way Artificial Boundary Conditions for Nonlinear Wave Propagation with Backscattering
NASA Technical Reports Server (NTRS)
Fibich, Gadi; Tsynkov, Semyon
2000-01-01
When solving linear scattering problems, one typically first solves for the impinging wave in the absence of obstacles. Then, by linear superposition, the original problem is reduced to one that involves only the scattered waves driven by the values of the impinging field at the surface of the obstacles. In addition, when the original domain is unbounded, special artificial boundary conditions (ABCs) that would guarantee the reflectionless propagation of waves have to be set at the outer boundary of the finite computational domain. The situation becomes conceptually different when the propagation equation is nonlinear. In this case the impinging and scattered waves can no longer be separated, and the problem has to be solved in its entirety. In particular, the boundary on which the incoming field values are prescribed, should transmit the given incoming waves in one direction and simultaneously be transparent to all the outgoing waves that travel in the opposite direction. We call this type of boundary conditions two-way ABCs. In the paper, we construct the two-way ABCs for the nonlinear Helmholtz equation that models the laser beam propagation in a medium with nonlinear index of refraction. In this case, the forward propagation is accompanied by backscattering, i.e., generation of waves in the direction opposite to that of the incoming signal. Our two-way ABCs generate no reflection of the backscattered waves and at the same time impose the correct values of the incoming wave. The ABCs are obtained for a fourth-order accurate discretization to the Helmholtz operator; the fourth-order grid convergence is corroborated experimentally by solving linear model problems. We also present solutions in the nonlinear case using the two-way ABC which, unlike the traditional Dirichlet boundary condition, allows for direct calculation of the magnitude of backscattering.
Overset meshing coupled with hybridizable discontinuous Galerkin finite elements
Kauffman, Justin A.; Sheldon, Jason P.; Miller, Scott T.
2017-03-01
We introduce the use of hybridizable discontinuous Galerkin (HDG) finite element methods on overlapping (overset) meshes. Overset mesh methods are advantageous for solving problems on complex geometrical domains. We also combine geometric flexibility of overset methods with the advantages of HDG methods: arbitrarily high-order accuracy, reduced size of the global discrete problem, and the ability to solve elliptic, parabolic, and/or hyperbolic problems with a unified form of discretization. This approach to developing the ‘overset HDG’ method is to couple the global solution from one mesh to the local solution on the overset mesh. We present numerical examples for steady convection–diffusion and static elasticity problems. The examples demonstrate optimal order convergence in all primal fields for an arbitrary amount of overlap of the underlying meshes.
Computer analysis of multicircuit shells of revolution by the field method
NASA Technical Reports Server (NTRS)
Cohen, G. A.
1975-01-01
The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.
QCD axion dark matter from long-lived domain walls during matter domination
NASA Astrophysics Data System (ADS)
Harigaya, Keisuke; Kawasaki, Masahiro
2018-07-01
The domain wall problem of the Peccei-Quinn mechanism can be solved if the Peccei-Quinn symmetry is explicitly broken by a small amount. Domain walls decay into axions, which may account for dark matter of the universe. This scheme is however strongly constrained by overproduction of axions unless the phase of the explicit breaking term is tuned. We investigate the case where the universe is matter-dominated around the temperature of the MeV scale and domain walls decay during this matter dominated epoch. We show how the viable parameter space is expanded.
Problem-solving test: Southwestern blotting.
Szeberényi, József
2014-01-01
Terms to be familiar with before you start to solve the test: Southern blotting, Western blotting, restriction endonucleases, agarose gel electrophoresis, nitrocellulose filter, molecular hybridization, polyacrylamide gel electrophoresis, proto-oncogene, c-abl, Src-homology domains, tyrosine protein kinase, nuclear localization signal, cDNA, deletion mutants, expression plasmid, transfection, RNA polymerase II, promoter, Shine-Dalgarno sequence, polyadenylation element, affinity chromatography, Northern blotting, immunoprecipitation, sodium dodecylsulfate, autoradiography, tandem repeats. Copyright © 2014 The International Union of Biochemistry and Molecular Biology.
Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems
NASA Astrophysics Data System (ADS)
Arrarás, A.; Portero, L.; Yotov, I.
2014-01-01
We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.
ERIC Educational Resources Information Center
Kim, Hye Jeong; Pedersen, Susan
2010-01-01
Recently, the importance of ill-structured problem-solving in real-world contexts has become a focus of educational research. Particularly, the hypothesis-development process has been examined as one of the keys to developing a high-quality solution in a problem context. The authors of this study examined predictive relations between young…
ben-Avraham, D; Fokas, A S
2001-07-01
A new transform method for solving boundary value problems for linear and integrable nonlinear partial differential equations, recently introduced in the literature, is used here to obtain the solution of the modified Helmholtz equation q_{xx}(x,y) + q_{yy}(x,y) − 4β²q(x,y) = 0 in the triangular domain 0 ≤ x ≤ L − y ≤ L, with mixed boundary conditions. This solution is applied to the problem of diffusion-limited coalescence, A + A ⇌ A, in the segment (−L/2, L/2), with traps at the edges.
Shaw, William S; Feuerstein, Michael; Miller, Virginia I; Wood, Patricia M
2003-08-01
Improving health and work outcomes for individuals with work related upper extremity disorders (WRUEDs) may require a broad assessment of potential return to work barriers by engaging workers in collaborative problem solving. In this study, half of all nurse case managers from a large workers' compensation system were randomly selected and invited to participate in a randomized, controlled trial of an integrated case management (ICM) approach for WRUEDs. The focus of ICM was problem solving skills training and workplace accommodation. Volunteer nurses attended a 2 day ICM training workshop including instruction in a 6 step process to engage clients in problem solving to overcome barriers to recovery. A chart review of WRUED case management reports (n = 70) during the following 2 years was conducted to extract case managers' reports of barriers to recovery and return to work. Case managers documented from 0 to 21 barriers per case (M = 6.24, SD = 4.02) within 5 domains: signs and symptoms (36%), work environment (27%), medical care (13%), functional limitations (12%), and coping (12%). Compared with case managers who did not receive the training (n = 67), workshop participants identified more barriers related to signs and symptoms, work environment, functional limitations, and coping (p < .05), but not to medical care. Problem solving skills training may help focus case management services on the most salient recovery factors affecting return to work.
Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas
2017-01-01
As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimization for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each may be modularly formulated by differing departments and be solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.
NASA Astrophysics Data System (ADS)
Izquierdo, Joaquín; Montalvo, Idel; Campbell, Enrique; Pérez-García, Rafael
2016-08-01
Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.
Reconstruction of local perturbations in periodic surfaces
NASA Astrophysics Data System (ADS)
Lechleiter, Armin; Zhang, Ruming
2018-03-01
This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike purely periodic problems, the scattered field is no longer periodic, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer applicable. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which makes it possible to design an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, that is, locating the support of the perturbation by a simple method. This step reduces the inverse problem from an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem. The perturbation is then approximated by a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.
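The second step above hands the reconstruction to a Newton-CG optimizer. As a generic illustration only (the scattering objective, spline parametrization, and Floquet-Bloch machinery of the paper are not reproduced here), the fragment below shows a minimal Newton-CG call in Python on a standard test objective; all names are placeholders.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Newton-CG needs the gradient; Hessian-vector products are approximated
# internally by finite differences of the gradient when no Hessian is given.
x0 = np.array([-1.2, 1.0, 0.8, 1.5])
res = minimize(rosen, x0, jac=rosen_der, method="Newton-CG",
               options={"xtol": 1e-8})
print(res.x)  # converges to the minimizer (1, 1, 1, 1)
```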
Time domain localization technique with sparsity constraint for imaging acoustic sources
NASA Astrophysics Data System (ADS)
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
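Of the two sparsity solvers mentioned above, orthogonal matching pursuit is the easier one to sketch. The following Python fragment is a generic OMP for a linear model y ≈ Aq with a sparse amplitude vector q; the dictionary A, the data y, and the sparsity level k are hypothetical placeholders, not the beamforming quantities used in the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A that best
    explain y, re-fitting their amplitudes by least squares at each step."""
    residual = y.copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    q = np.zeros(A.shape[1])
    q[support] = coeffs
    return q

# Toy example: 2 active "sources" among 50 candidate positions
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))              # hypothetical propagation dictionary
q_true = np.zeros(50); q_true[[7, 23]] = [1.0, 0.5]
y = A @ q_true + 0.01 * rng.standard_normal(30)
print(np.nonzero(omp(A, y, k=2))[0])           # expected to recover indices 7 and 23
```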
NASA Astrophysics Data System (ADS)
Lezina, Natalya; Agoshkov, Valery
2017-04-01
Domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains. This method is particularly applicable to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations is solved. The difficulty in obtaining a solution in the whole domain is that the solutions in the subdomains must be combined. For this purpose, an iterative algorithm is constructed and numerical experiments are conducted to investigate the effectiveness of the developed DDM algorithm. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but for hydrodynamics problems they are not suitable. In this case, the adjoint equation method [2] and inverse problem theory are used instead. In addition, DDM makes it possible to create algorithms for parallel computation on multiprocessor systems. DDM for a model of the Baltic Sea dynamics is studied numerically. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decomposition Methods in the Mathematical Physics Problem // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in the Mathematical Physics Problem, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
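The subdomain coupling described above is specific to the thermo-hydrodynamic model and its adjoint-equation formulation, but the general flavour of an iterative domain decomposition can be seen in the minimal alternating Schwarz sketch below for a 1-D Poisson problem (entirely illustrative; none of the ocean-model operators are reproduced).

```python
import numpy as np

def solve_poisson(f, a, b, ua, ub, n):
    """Direct solve of -u'' = f on [a, b] with Dirichlet values ua, ub,
    using n interior points (dense solve, for clarity only)."""
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)[1:-1]
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f(x)
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return x, np.linalg.solve(A, rhs)

f = lambda x: np.ones_like(x)               # -u'' = 1, u(0) = u(1) = 0
g_left, g_right = 0.0, 0.0                  # interface values, initial guess
for _ in range(20):                         # alternating Schwarz sweeps
    x1, u1 = solve_poisson(f, 0.0, 0.6, 0.0, g_right, 59)
    g_left = np.interp(0.4, x1, u1)         # trace of u1 at x = 0.4
    x2, u2 = solve_poisson(f, 0.4, 1.0, g_left, 0.0, 59)
    g_right = np.interp(0.6, x2, u2)        # trace of u2 at x = 0.6
exact = lambda x: 0.5 * x * (1 - x)
print(abs(np.interp(0.5, x2, u2) - exact(0.5)))   # small after convergence
```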
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited-angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of object size, engine/armor inspection requirements, and limited scan flexibility. Limited-angle reconstruction necessitates optimization-based methods that utilize additional sparse priors. However, most conventional methods exploit sparsity priors only in the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited-angle CT problem. The proposed method simultaneously uses a spatial- and Radon-domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for the given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial- and Radon-domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm with the learning strategy performs better than dual-domain algorithms without a learned regularization model.
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems with sequential Bayesian inference in dynamic models, simultaneously estimating sequentially varying state variables and fixed model parameters. These methods approximate the sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements, and parameters. In this paper the main focus is the combined parameter and state estimation in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs, and small intestine, and a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between the estimated and exact values is obtained.
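A minimal bootstrap particle filter with an augmented state (one unknown fixed parameter plus one state variable) is sketched below for a toy scalar model; the bioheat model, tissue geometry, and measurement setup of the paper are not represented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: x_k = a * x_{k-1} + w_k,  y_k = x_k + v_k, with unknown fixed a.
a_true, n_steps = 0.8, 60
x, ys = 1.0, []
for _ in range(n_steps):
    x = a_true * x + 0.05 * rng.standard_normal()
    ys.append(x + 0.1 * rng.standard_normal())

# Bootstrap filter: the state vector is augmented with the parameter a.
n_p = 2000
particles_x = rng.standard_normal(n_p)
particles_a = rng.uniform(0.0, 1.0, n_p)             # prior on the parameter
for y in ys:
    particles_x = particles_a * particles_x + 0.05 * rng.standard_normal(n_p)
    particles_a += 0.005 * rng.standard_normal(n_p)  # small jitter on the parameter
    w = np.exp(-0.5 * ((y - particles_x) / 0.1) ** 2)  # measurement likelihood
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)             # multinomial resampling
    particles_x, particles_a = particles_x[idx], particles_a[idx]

print("estimated a ~", particles_a.mean(), "(true value 0.8)")
```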
NASA Astrophysics Data System (ADS)
Min, Huang; Na, Cai
2017-06-01
In recent years, the ant colony algorithm has been widely used for discrete-space optimization, while research on continuous-space optimization has been relatively scarce. Building on the original algorithm for continuous spaces, this article proposes an improved ant colony algorithm for continuous-space optimization, so as to overcome the ant colony algorithm's disadvantage of long search times in continuous spaces. The article improves the way the total amount of pheromone information in each interval and the corresponding number of ants are computed. It also introduces a function that changes with the number of iterations in order to increase the convergence rate of the improved algorithm. The simulation results show that, compared with the result in literature [5], the proposed improved ant colony algorithm based on the information distribution function has better convergence performance. Thus, the article provides a new, feasible and effective way for the ant colony algorithm to solve this kind of problem.
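The abstract does not give the exact update rules, so the sketch below is only a generic interval-pheromone search for a one-dimensional continuous problem: pheromone is stored per interval, ants choose an interval in proportion to pheromone and then sample a point inside it. The reward and evaporation rules are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def interval_aco(f, lo, hi, n_intervals=20, n_ants=30, n_iter=100, rho=0.1):
    """Generic interval-based ant colony minimisation of f on [lo, hi]."""
    edges = np.linspace(lo, hi, n_intervals + 1)
    tau = np.ones(n_intervals)                     # pheromone per interval
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        probs = tau / tau.sum()
        for _ in range(n_ants):
            j = rng.choice(n_intervals, p=probs)   # pick an interval
            x = rng.uniform(edges[j], edges[j + 1])
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
            tau[j] += 1.0 / (1.0 + fx - best_f)    # reward good intervals
        tau *= (1.0 - rho)                         # evaporation
    return best_x, best_f

# Minimise a simple multimodal test function on [-5, 5]
f = lambda x: x**2 + 10 * np.sin(3 * x)
print(interval_aco(f, -5.0, 5.0))
```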
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems from various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially relevant to it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating the problem solving. The advantages of the proposed approach are demonstrated on the example of the parametric synthesis of a static linear regulator for complex dynamic systems. The benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in a parallel mode with various degrees of detail.
Applying research to practice: generalist and specialist (visual ergonomics) consultancy.
Long, Jennifer; Long, Airdrie
2012-01-01
Ergonomics is a holistic discipline encompassing a wide range of special interest groups. The role of an ergonomics consultant is to provide integrated solutions to improve comfort, safety and productivity. In Australia, there are two types of consultants--generalists and specialists. Both have training in ergonomics but specialist knowledge may be the result of previous education or work experience. This paper presents three projects illustrating generalist and specialist (visual ergonomics) consultancy: development of a vision screening protocol, solving visual discomfort in an office environment and solving postural discomfort in heavy industry. These case studies demonstrate how multiple ergonomics consultants may work together to solve ergonomics problems. It also describes some of the challenges for consultants, for those engaging their services and for the ergonomics profession, e.g. recognizing the boundaries of expertise, sharing information with business competitors, the costs-benefits of engaging multiple consultants and the risk of fragmentation of ergonomics knowledge and solutions. Since ergonomics problems are often multifaceted, ergonomics consultants should have a solid grounding in all domains of ergonomics, even if they ultimately only practice in one specialty or domain. This will benefit the profession and ensure that ergonomics remains a holistic discipline.
A Radiation Transfer Solver for Athena Using Short Characteristics
NASA Astrophysics Data System (ADS)
Davis, Shane W.; Stone, James M.; Jiang, Yan-Fei
2012-03-01
We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well-known and well-tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.
Category 3: Sound Generation by Interacting with a Gust
NASA Technical Reports Server (NTRS)
Scott, James R.
2004-01-01
The cascade-gust interaction problem is solved employing a time-domain approach. The purpose of this problem is to test the ability of a CFD/CAA code to accurately predict the unsteady aerodynamic and aeroacoustic response of a single airfoil to a two-dimensional, periodic vortical gust. Nonlinear time-dependent Euler equations are solved using higher-order spatial differencing and time-marching techniques. The solutions indicate the generation and propagation of expected mode orders for the given configuration and flow conditions. The blade passing frequency (BPF) is cut off for this cascade while higher harmonic, 2BPF and 3BPF, modes are cut on.
ERIC Educational Resources Information Center
Glaser, Robert
This paper briefly reviews research on tasks in knowledge-rich domains including developmental studies, work in artificial intelligence, studies of expert/novice problem solving, and information processing analysis of aptitude test tasks that have provided increased understanding of the nature of expertise. Particularly evident is the finding that…
Recommending Research Profiles for Multidisciplinary Academic Collaboration
ERIC Educational Resources Information Center
Gunawardena, Sidath Deepal
2013-01-01
This research investigates how data on multidisciplinary collaborative experiences can be used to solve a novel problem: recommending research profiles of potential collaborators to academic researchers seeking to engage in multidisciplinary research collaboration. As the current domain theories of multidisciplinary collaboration are insufficient…
A New Strategic Approach to Technology Transfer
USDA-ARS?s Scientific Manuscript database
The principal goal of Federal research and development (R&D) is to solve problems for public benefit. Technology transfer, innovation, entrepreneurship: words and concepts that once belonged exclusively in the domain of private research enterprises, have quickly become part of everyday lexicon in Fe...
Ceberio, Josu; Calvo, Borja; Mendiburu, Alexander; Lozano, Jose A
2018-02-15
In the last decade, many works in combinatorial optimisation have shown that, due to the advances in multi-objective optimisation, the algorithms from this field could be used for solving single-objective problems as well. In this sense, a number of papers have proposed multi-objectivising single-objective problems in order to use multi-objective algorithms in their optimisation. In this article, we follow up this idea by presenting a methodology for multi-objectivising combinatorial optimisation problems based on elementary landscape decompositions of their objective function. Under this framework, each of the elementary landscapes obtained from the decomposition is considered as an independent objective function to optimise. In order to illustrate this general methodology, we consider four problems from different domains: the quadratic assignment problem and the linear ordering problem (permutation domain), the 0-1 unconstrained quadratic optimisation problem (binary domain), and the frequency assignment problem (integer domain). We implemented two widely known multi-objective algorithms, NSGA-II and SPEA2, and compared their performance with that of a single-objective GA. The experiments conducted on a large benchmark of instances of the four problems show that the multi-objective algorithms clearly outperform the single-objective approaches. Furthermore, a discussion on the results suggests that the multi-objective space generated by this decomposition enhances the exploration ability, thus permitting NSGA-II and SPEA2 to obtain better results in the majority of the tested instances.
Emitter signal separation method based on multi-level digital channelization
NASA Astrophysics Data System (ADS)
Han, Xun; Ping, Yifan; Wang, Sujun; Feng, Ying; Kuang, Yin; Yang, Xinquan
2018-02-01
To solve the problem of emitter separation in a complex electromagnetic environment, a signal separation method based on multi-level digital channelization is proposed in this paper. A two-level structure that divides the signal into different channels is designed first; after that, the peaks of the different channels are tracked using a tracking filter, and signals that coincide in the time domain are separated in the time-frequency domain. Finally, the time-domain waveforms of the different signals are recovered by the inverse transformation. The validity of the proposed method is demonstrated by experiment.
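A crude single-level channelizer conveys the basic separation idea: split the spectrum into uniform bands and recover each band's time-domain waveform by an inverse transform. The Python toy below is only that; the two-level polyphase structure, peak tracking filter, and time-frequency separation of the paper are not reproduced.

```python
import numpy as np

fs, n, M = 1000.0, 4096, 8
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 310 * t)  # two "emitters"

X = np.fft.rfft(x)
edges = np.linspace(0, len(X), M + 1, dtype=int)     # uniform channel boundaries
channels = []
for m in range(M):
    Xm = np.zeros_like(X)
    Xm[edges[m]:edges[m + 1]] = X[edges[m]:edges[m + 1]]
    channels.append(np.fft.irfft(Xm, n=n))           # per-channel time waveform

powers = [float(np.mean(c**2)) for c in channels]
print(np.argsort(powers)[-2:])                       # channels holding the two tones
```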
1991-08-01
performed entirely in the time domain, solves the KZK (Khokhlov-Zabolotskaya-Kuznetsov) nonlinear parabolic wave equation for pulsed, axisymmetric...finite amplitude sound beams. The KZK equation accounts for the combined effects of nonlinearity, diffraction and thermoviscous absorption on the...those used by Naze Tjotta, Tjotta, and Vefring to produce Fig. 7 of Ref. 4 with a frequency domain numerical solution of the KZK equation. However
Empirical results on scheduling and dynamic backtracking
NASA Technical Reports Server (NTRS)
Boddy, Mark S.; Goldman, Robert P.
1994-01-01
At the Honeywell Technology Center (HTC), we have been working on a scheduling problem related to commercial avionics. This application is large, complex, and hard to solve. To be a little more concrete: 'large' means almost 20,000 activities, 'complex' means several activity types, periodic behavior, and assorted types of temporal constraints, and 'hard to solve' means that we have been unable to eliminate backtracking through the use of search heuristics. At this point, we can generate solutions, where solutions exist, or report failure and sometimes why the system failed. To the best of our knowledge, this is among the largest and most complex scheduling problems to have been solved as a constraint satisfaction problem, at least that has appeared in the published literature. This abstract is a preliminary report on what we have done and how. In the next section, we present our approach to treating scheduling as a constraint satisfaction problem. The following sections present the application in more detail and describe how we solve scheduling problems in the application domain. The implemented system makes use of Ginsberg's Dynamic Backtracking algorithm, with some minor extensions to improve its utility for scheduling. We describe those extensions and the performance of the resulting system. The paper concludes with some general remarks, open questions and plans for future work.
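The fragment below shows the constraint-satisfaction framing on a toy scheduling problem using plain chronological backtracking; it is illustrative only and does not reproduce the dynamic backtracking algorithm, heuristics, or constraint types of the avionics application.

```python
def backtrack_schedule(activities, slots, conflicts, assignment=None):
    """Assign each activity a slot so that conflicting activities get
    different slots; returns None (failure) if no assignment exists."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(activities):
        return assignment
    act = next(a for a in activities if a not in assignment)
    for slot in slots:
        if all(assignment.get(other) != slot for other in conflicts.get(act, ())):
            assignment[act] = slot
            result = backtrack_schedule(activities, slots, conflicts, assignment)
            if result is not None:
                return result
            del assignment[act]            # undo and try the next slot
    return None

activities = ["A", "B", "C", "D"]
conflicts = {"A": ["B", "C", "D"], "B": ["A", "C"], "C": ["A", "B"], "D": ["A"]}
print(backtrack_schedule(activities, slots=[1, 2, 3], conflicts=conflicts))
```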
NASA Astrophysics Data System (ADS)
Gao, X.-L.; Ma, H. M.
2010-05-01
A solution for Eshelby's inclusion problem of a finite homogeneous isotropic elastic body containing an inclusion prescribed with a uniform eigenstrain and a uniform eigenstrain gradient is derived in a general form using a simplified strain gradient elasticity theory (SSGET). An extended Betti's reciprocal theorem and an extended Somigliana's identity based on the SSGET are proposed and utilized to solve the finite-domain inclusion problem. The solution for the disturbed displacement field is expressed in terms of the Green's function for an infinite three-dimensional elastic body in the SSGET. It contains a volume integral term and a surface integral term. The former is the same as that for the infinite-domain inclusion problem based on the SSGET, while the latter represents the boundary effect. The solution reduces to that of the infinite-domain inclusion problem when the boundary effect is not considered. The problem of a spherical inclusion embedded concentrically in a finite spherical elastic body is analytically solved by applying the general solution, with the Eshelby tensor and its volume average obtained in closed forms. This Eshelby tensor depends on the position, inclusion size, matrix size, and material length scale parameter, and, as a result, can capture the inclusion size and boundary effects, unlike existing Eshelby tensors. It reduces to the classical Eshelby tensor for the spherical inclusion in an infinite matrix if both the strain gradient and boundary effects are suppressed. Numerical results quantitatively show that the inclusion size effect can be quite large when the inclusion is very small and that the boundary effect can dominate when the inclusion volume fraction is very high. However, the inclusion size effect is diminishing as the inclusion becomes large enough, and the boundary effect is vanishing as the inclusion volume fraction gets sufficiently low.
Cognitive and behavioral knowledge about insulin-dependent diabetes among children and parents.
Johnson, S B; Pollak, R T; Silverstein, J H; Rosenbloom, A L; Spillar, R; McCallum, M; Harkavy, J
1982-06-01
Youngsters' knowledge about insulin-dependent diabetes was assessed across three domains: (1) general information; (2) problem solving; and (3) skill at urine testing and self-injection. These youngsters' parents completed the general information and problem-solving components of the assessment battery. All test instruments showed good reliability. The test of problem solving was more difficult than the test of general information for both parents and patients. Mothers were more knowledgeable than fathers and children. Girls performed more accurately than boys, and older children obtained better scores than did younger children. Nevertheless, more than 80% of the youngsters made significant errors on urine testing and almost 40% made serious errors in self-injection. A number of other knowledge deficits were also noted. Duration of diabetes was not related to any of the knowledge measures. Intercorrelations between scores on the assessment instruments indicated that skill at urine testing or self-injection was not highly related to other types of knowledge about diabetes. Furthermore, knowledge in one content area was not usually predictive of knowledge in another content area. The results of this study emphasize the importance of measuring knowledge from several different domains. Patient variables such as sex and age need to be given further consideration in the development and use of patient educational programs. Regular assessment of patients' and parents' knowledge of all critical aspects of diabetes home management seems essential.
Rapid prototyping strategy for a surgical data warehouse.
Tang, S-T; Huang, Y-F; Hsiao, M-L; Yang, S-H; Young, S-T
2003-01-01
Healthcare processes typically generate an enormous volume of patient information. This information largely represents unexploited knowledge, since current hospital operational systems (e.g., HIS, RIS) are not suitable for knowledge exploitation. Data warehousing provides an attractive method for solving these problems, but the process is very complicated. This study presents a novel strategy for effectively implementing a healthcare data warehouse. This study adopted the rapid prototyping (RP) method, which involves intensive interactions. System developers and users were closely linked throughout the life cycle of the system development. The presence of iterative RP loops meant that the system requirements were increasingly integrated and problems were gradually solved, such that the prototype system evolved into the final operational system. The results were analyzed by monitoring the series of iterative RP loops. First a definite workflow for ensuring data completeness was established, taking a patient-oriented viewpoint when collecting the data. Subsequently the system architecture was determined for data retrieval, storage, and manipulation. This architecture also clarifies the relationships among the novel system and legacy systems. Finally, a graphic user interface for data presentation was implemented. Our results clearly demonstrate the potential for adopting an RP strategy in the successful establishment of a healthcare data warehouse. The strategy can be modified and expanded to provide new services or support new application domains. The design patterns and modular architecture used in the framework will be useful in solving problems in different healthcare domains.
Lenarda, P; Paggi, M
A comprehensive computational framework based on the finite element method for the simulation of coupled hygro-thermo-mechanical problems in photovoltaic laminates is herein proposed. While the thermo-mechanical problem takes place in the three-dimensional space of the laminate, moisture diffusion occurs in a two-dimensional domain represented by the polymeric layers and by the vertical channel cracks in the solar cells. Therefore, a geometrical multi-scale solution strategy is pursued by solving the partial differential equations governing heat transfer and thermo-elasticity in the three-dimensional space, and the partial differential equation for moisture diffusion in the two dimensional domains. By exploiting a staggered scheme, the thermo-mechanical problem is solved first via a fully implicit solution scheme in space and time, with a specific treatment of the polymeric layers as zero-thickness interfaces whose constitutive response is governed by a novel thermo-visco-elastic cohesive zone model based on fractional calculus. Temperature and relative displacements along the domains where moisture diffusion takes place are then projected to the finite element model of diffusion, coupled with the thermo-mechanical problem by the temperature and crack opening dependent diffusion coefficient. The application of the proposed method to photovoltaic modules pinpoints two important physical aspects: (i) moisture diffusion in humidity freeze tests with a temperature dependent diffusivity is a much slower process than in the case of a constant diffusion coefficient; (ii) channel cracks through Silicon solar cells significantly enhance moisture diffusion and electric degradation, as confirmed by experimental tests.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2016-12-01
Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion to solve simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations, the ability to operate in areas of high levels of source signal spatial complexity, and non-stationarity. This goal would not be obtainable if one were to adopt the pure time domain solution for the inverse problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps should be finer than that required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome to solve for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computations of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10^5 cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.
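The cascade decimation idea (representing progressively lower frequency content with progressively coarser sampling) can be illustrated in a few lines. The toy below simply applies a standard decimator in cascaded factor-2 stages to a synthetic series; it says nothing about how the inversion code builds its sensitivity matrices.

```python
import numpy as np
from scipy.signal import decimate

rng = np.random.default_rng(5)
n = 2**14
x = np.cumsum(rng.standard_normal(n))        # red-noise-like synthetic series

levels = [x]
for _ in range(5):
    levels.append(decimate(levels[-1], 2))   # anti-alias filter + downsample by 2

print([len(v) for v in levels])              # 16384, 8192, 4096, 2048, 1024, 512
```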
Hybrid mesh finite volume CFD code for studying heat transfer in a forward-facing step
NASA Astrophysics Data System (ADS)
Jayakumar, J. S.; Kumar, Inder; Eswaran, V.
2010-12-01
Computational fluid dynamics (CFD) methods employ two types of grid: structured and unstructured. Developing the solver and data structures of a finite-volume code is easier for structured grids than for unstructured grids, but real-life problems are too complicated to be fitted flexibly by structured grids. Therefore, unstructured grids are widely used for solving real-life problems. However, using only one type of unstructured element consumes a lot of computational time because the number of elements cannot be controlled. Hence, a hybrid grid that contains mixed elements, such as hexahedral elements along with tetrahedral and pyramidal elements, gives the user control over the number of elements in the domain, so that only the regions that require a finer grid are meshed finer and not the entire domain. This work aims to develop such a finite-volume hybrid grid solver capable of handling turbulent flows and conjugate heat transfer. It has been extended to solving flows involving separation and subsequent reattachment occurring due to sudden expansion or contraction. A significant effect of mixing high- and low-enthalpy fluid occurs in the reattached regions of these devices. This makes the study of the backward-facing and forward-facing step with heat transfer an important field of research. The problem of the forward-facing step with conjugate heat transfer was taken up and solved for turbulent flow using the two-equation k-ω model. The variation in the flow profile and heat transfer behavior has been studied with the variation in Re and solid-to-fluid thermal conductivity ratios. The results for the variation in local Nusselt number, interface temperature and skin friction factor are presented.
The Domain-Specific Software Architecture Program
1992-06-01
Kang, K.C.; Cohen, S.C.; Jess, J.A.; Novak, W.E.; Peterson, A.S. Feature-Oriented Domain Analysis (FODA) Feasibility Study. (CMU/SEI-90-TR-21, ADA235785) ... perspective of a controls engineer solving a problem using an iterative process of simulation and analysis. (CMU/SEI-92-SR-9) ... for schedulability analysis and Markov processes for the determination of reliability. Software architectures are derived from these formal models. ORA
Moe, Aubrey M; Breitborde, Nicholas J K; Bourassa, Kyle J; Gallagher, Colin J; Shakeel, Mohammed K; Docherty, Nancy M
2018-06-01
Schizophrenia researchers have focused on phenomenological aspects of the disorder to better understand its underlying nature. In particular, development of personal narratives-that is, the complexity with which people form, organize, and articulate their "life stories"-has recently been investigated in individuals with schizophrenia. However, less is known about how aspects of narrative relate to indicators of neurocognitive and social functioning. The objective of the present study was to investigate the association of linguistic complexity of life-story narratives to measures of cognitive and social problem-solving abilities among people with schizophrenia. Thirty-two individuals with a diagnosis of schizophrenia completed a research battery consisting of clinical interviews, a life-story narrative, neurocognitive testing, and a measure assessing multiple aspects of social problem solving. Narrative interviews were assessed for linguistic complexity using computerized technology. The results indicate differential relationships of linguistic complexity and neurocognition to domains of social problem-solving skills. More specifically, although neurocognition predicted how well one could both describe and enact a solution to a social problem, linguistic complexity alone was associated with accurately recognizing that a social problem had occurred. In addition, linguistic complexity appears to be a cognitive factor that is discernible from other broader measures of neurocognition. Linguistic complexity may be more relevant in understanding earlier steps of the social problem-solving process than more traditional, broad measures of cognition, and thus is relevant in conceptualizing treatment targets. These findings also support the relevance of developing narrative-focused psychotherapies. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
ERIC Educational Resources Information Center
Andreucci, Colette; Chatoney, Marjolaine; Ginestie, Jacques
2012-01-01
The purpose of this study is to verify whether pupils (15-16 years old) who have received technology education on a systemic approach of industrial systems, are better than other pupils (of the same age but from other academic domains such as literary ones or ones that are economics-based) at solving physical science problems which involve…
Numerical Solution of Time-Dependent Problems with a Fractional-Power Elliptic Operator
NASA Astrophysics Data System (ADS)
Vabishchevich, P. N.
2018-03-01
A time-dependent problem in a bounded domain for a fractional diffusion equation is considered. The first-order evolution equation involves a fractional-power second-order elliptic operator with Robin boundary conditions. A finite-element spatial approximation with an additive approximation of the operator of the problem is used. The time approximation is based on a vector scheme. The transition to a new time level is ensured by solving a sequence of standard elliptic boundary value problems. Numerical results obtained for a two-dimensional model problem are presented.
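A generic statement of the problem class described above may help fix the structure (the symbols are assumptions for illustration, not notation taken from the paper):

```latex
% Fractional-power parabolic model problem (notation assumed for illustration):
% find u(x,t) on a bounded domain \Omega such that
\[
  \frac{\partial u}{\partial t} + \mathcal{A}^{\alpha} u = f(x,t),
  \qquad 0 < \alpha < 1, \quad x \in \Omega, \; t > 0,
\]
% where \mathcal{A} is a second-order elliptic operator equipped with Robin
% boundary conditions and u(x,0) = u_0(x); each transition to a new time level
% is reduced to a sequence of standard (integer-power) elliptic solves.
```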
A Relaxation Method for Nonlocal and Non-Hermitian Operators
NASA Astrophysics Data System (ADS)
Lagaris, I. E.; Papageorgiou, D. G.; Braun, M.; Sofianos, S. A.
1996-06-01
We present a grid method to solve the time-dependent Schrödinger equation (TDSE). It uses the Crank-Nicolson scheme to propagate the wavefunction forward in time and finite differences to approximate the derivative operators. The resulting sparse linear system is solved by the symmetric successive overrelaxation iterative technique. The method handles local and nonlocal interactions and Hamiltonians that correspond either to Hermitian or to non-Hermitian matrices with real eigenvalues. We test the method by solving the TDSE in the imaginary time domain, thus converting the time propagation to asymptotic relaxation. Benchmark problems are solved in both one and two dimensions, with local, nonlocal, Hermitian and non-Hermitian Hamiltonians.
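A minimal one-dimensional sketch of imaginary-time Crank-Nicolson relaxation follows; for brevity it uses a direct sparse solve instead of the symmetric SOR iteration discussed above, and the harmonic-oscillator Hamiltonian is only a stand-in test case.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# Imaginary-time relaxation to the ground state of a 1-D harmonic oscillator
# (hbar = m = 1): (I + dt/2 H) psi_new = (I - dt/2 H) psi_old, renormalising
# after each step.
n, L, dtau = 400, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
lap = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
H = -0.5 * lap + diags(0.5 * x**2)
A = (identity(n) + 0.5 * dtau * H).tocsc()
B = identity(n) - 0.5 * dtau * H
psi = np.exp(-(x - 1.0) ** 2)                 # arbitrary starting wavefunction
for _ in range(2000):
    psi = spsolve(A, B @ psi)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
energy = np.real(psi @ (H @ psi)) * dx
print("ground-state energy ~", energy)        # expected to approach 0.5
```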
Jaarsveld, Saskia; Lachmann, Thomas
2017-01-01
This paper discusses the importance of three features of psychometric tests for cognition research: construct definition, problem space, and knowledge domain. Definition of constructs, e.g., intelligence or creativity, forms the theoretical basis for test construction. Problem space, being well or ill-defined, is determined by the cognitive abilities considered to belong to the constructs, e.g., convergent thinking to intelligence, divergent thinking to creativity. Knowledge domain and the possibilities it offers cognition are reflected in test results. We argue that (a) comparing results of tests with different problem spaces is more informative when cognition operates in both tests on an identical knowledge domain, and (b) intertwining of abilities related to both constructs can only be expected in tests developed to instigate such a process. Test features should guarantee that abilities can contribute to self-generated and goal-directed processes bringing forth solutions that are both new and applicable. We propose and discuss a test example that was developed to address these issues. PMID:28220098
Executive Functions Contribute Uniquely to Reading Competence in Minority Youth
ERIC Educational Resources Information Center
Jacobson, Lisa A.; Koriakin, Taylor; Lipkin, Paul; Boada, Richard; Frijters, Jan C.; Lovett, Maureen W.; Hill, Dina; Willcutt, Erik; Gottwald, Stephanie; Wolf, Maryanne; Bosson-Heenan, Joan; Gruen, Jeffrey R.; Mahone, E. Mark
2017-01-01
Competent reading requires various skills beyond those for basic word reading (i.e., core language skills, rapid naming, phonological processing). Contributing "higher-level" or domain-general processes include information processing speed and executive functions (working memory, strategic problem solving, attentional switching).…
Measuring the Value of AI in Space Science and Exploration
NASA Astrophysics Data System (ADS)
Blair, B.; Parr, J.; Diamond, B.; Pittman, B.; Rasky, D.
2017-10-01
FDL is tackling knowledge gaps useful to the space program by forming small teams of industrial partners, cutting-edge AI researchers and space science domain experts, and tasking them to solve problems that are important to NASA as well as humanity's future.
A spectral domain method for remotely probing stratified media
NASA Technical Reports Server (NTRS)
Schaubert, D. H.; Mittra, R.
1977-01-01
The problem of remotely probing a stratified, lossless, dielectric medium is formulated using the spectral domain method of probing. The response of the medium to a spectrum of plane waves incident at various angles is used to invert the unknown profile. For TE polarization, the electric field satisfies a Helmholtz equation. The inverse problem is solved by means of a new representation for the wave function. The principal step in this inversion is solving a second kind Fredholm equation which is very amenable to numerical computations. Several examples are presented including some which indicate that the method can be used with experimentally obtained data. When the fields exhibit a surface wave behavior, a unique inversion can be obtained only if information about the magnetic field is also available. In this case, the inversion is accomplished by a two-step procedure which employs a formula of Jost and Kohn. Some examples are presented, and an approach which greatly shortens the computations without greatly deteriorating the results is discussed.
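The "second kind Fredholm equation" step mentioned above can be illustrated with a generic Nystrom discretization (the kernel and right-hand side below are made up for the example and are not those arising in the profile-inversion problem):

```python
import numpy as np

def nystrom_fredholm2(kernel, g, a, b, n=200, lam=1.0):
    """Nystrom solution of f(x) = g(x) + lam * integral_a^b K(x, t) f(t) dt
    using trapezoidal quadrature on n nodes."""
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(t[:, None], t[None, :])
    A = np.eye(n) - lam * K * w[None, :]      # (I - lam K W) f = g
    return t, np.linalg.solve(A, g(t))

# Smooth example kernel; second-kind equations like this are well conditioned.
kernel = lambda x, t: 0.3 * np.exp(-np.abs(x - t))
g = lambda x: np.cos(x)
t, f = nystrom_fredholm2(kernel, g, 0.0, np.pi)
print(f[:3])
```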
NASA Astrophysics Data System (ADS)
Hosseini, E.; Loghmani, G. B.; Heydari, M.; Rashidi, M. M.
2017-02-01
In this paper, the boundary layer flow and heat transfer of unsteady flow over a porous accelerating stretching surface in the presence of the velocity slip and temperature jump effects are investigated numerically. A new effective collocation method based on rational Bernstein functions is applied to solve the governing system of nonlinear ordinary differential equations. This method solves the problem on the semi-infinite domain without truncating or transforming it to a finite domain. In addition, the presented method reduces the solution of the problem to the solution of a system of algebraic equations. Graphical and tabular results are presented to investigate the influence of the unsteadiness parameter A , Prandtl number Pr, suction parameter fw, velocity slip parameter γ and thermal slip parameter φ on the velocity and temperature profiles of the fluid. The numerical experiments are reported to show the accuracy and efficiency of the novel proposed computational procedure. Comparisons of present results are made with those obtained by previous works and show excellent agreement.
Jõgi, Anna-Liisa; Kikas, Eve
2016-06-01
Primary school math skills form a basis for academic success down the road. Different math skills have different antecedents and there is reason to believe that more complex math tasks require better self-regulation. The study aimed to investigate longitudinal interrelations of calculation and problem-solving skills, and task-persistent behaviour in Grade 1 and Grade 3, and the effect of non-verbal intelligence, linguistic abilities, and executive functioning on math skills and task persistence. Participants were 864 students (52.3% boys) from 33 different schools in Estonia. Students were tested twice - at the end of Grade 1 and at the end of Grade 3. Calculation and problem-solving skills, and teacher-rated task-persistent behaviour were measured at both time points. Non-verbal intelligence, linguistic abilities, and executive functioning were measured in Grade 1. Cross-lagged structural equation modelling indicated that calculation skills depend on previous math skills and linguistic abilities, while problem-solving skills also require non-verbal intelligence, executive functioning, and task persistence. Task-persistent behaviour in Grade 3 was predicted by previous problem-solving skills, linguistic abilities, and executive functioning. Gender and mother's educational level were added as covariates. The findings indicate that math skills and self-regulation are strongly related in primary grades and that solving complex tasks requires executive functioning and task persistence from children. Findings support the idea that instructional practices might benefit from supporting self-regulation in order to gain domain-specific, complex skill achievement. © 2015 The British Psychological Society.
Understanding Coreference in a System for Solving Physics Word Problems.
NASA Astrophysics Data System (ADS)
Bulko, William Charles
In this thesis, a computer program (BEATRIX) is presented which takes as input an English statement of a physics problem and a figure associated with it, understands the two kinds of input in combination, and produces a data structure containing a model of the physical objects described and the relationships between them. BEATRIX provides a mouse-based graphic interface with which the user sketches a picture and enters English sentences; meanwhile, BEATRIX creates a neutral internal representation of the picture similar to that which might be produced as the output of a vision system. It then parses the text and the picture representation, resolves the references between objects common to the two data sources, and produces a unified model of the problem world. The correctness and completeness of this model have been validated by applying it as input to a physics problem-solving program currently under development. Two descriptions of a world are said to be coreferent when they contain references to overlapping sets of objects. Resolving coreferences to produce a correct world model is a common task in scientific and industrial problem-solving: because English is typically not a good language for expressing spatial relationships, people in these fields frequently use diagrams to supplement textual descriptions. Elementary physics problems from college-level textbooks provide a useful and convenient domain for exploring the mechanisms of coreference. Because flexible, opportunistic control is necessary in order to recognize coreference and to act upon it, the understanding module of BEATRIX uses a blackboard control structure. The blackboard knowledge sources serve to identify physical objects in the picture, parse the English text, and resolve coreferences between the two. We believe that BEATRIX demonstrates a control structure and collection of knowledge that successfully implements understanding of text and picture by computer. We also believe that this organization can be applied successfully to similar understanding tasks in domains other than physics problem-solving, where data such as the output from vision systems and speech understanders can be used in place of text and pictures.
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
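The essence of output-error estimation (simulate the model from the measured inputs only and fit its parameters to the measured outputs) can be sketched in a few lines for a toy first-order model; the short-period aircraft model and the real-time, recursive aspects described above are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

# Fit a and b in y[k+1] = a*y[k] + b*u[k] to noisy measurements z.
rng = np.random.default_rng(3)
a_true, b_true, n = 0.9, 0.5, 200
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k]
z = y + 0.02 * rng.standard_normal(n)          # noisy measured output

def simulate(theta):
    a, b = theta
    yhat = np.zeros(n)
    for k in range(n - 1):
        yhat[k + 1] = a * yhat[k] + b * u[k]   # model driven by inputs only
    return yhat

theta_hat = least_squares(lambda th: simulate(th) - z, x0=[0.5, 0.1]).x
print(theta_hat)                               # close to [0.9, 0.5]
```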
Morgan, R M
2017-11-01
This paper builds on the FoRTE conceptual model presented in part I to address the forms of knowledge that are integral to the four components of the model. Articulating the different forms of knowledge within effective forensic reconstructions is valuable. It enables a nuanced approach to the development and use of evidence bases to underpin decision-making at every stage of a forensic reconstruction by enabling transparency in the reporting of inferences. It also enables appropriate methods to be developed to ensure quality and validity. It is recognised that the domains of practice, research, and policy/law intersect to form the nexus where forensic science is situated. Each domain has a distinctive infrastructure that influences the production and application of different forms of knowledge in forensic science. The channels that can enable the interaction between these domains, enhance the impact of research in theory and practice, increase access to research findings, and support quality are presented. The particular strengths within the different domains to deliver problem solving forensic reconstructions are thereby identified and articulated. It is argued that a conceptual understanding of forensic reconstruction that draws on the full range of both explicit and tacit forms of knowledge, and incorporates the strengths of the different domains pertinent to forensic science, offers a pathway to harness the full value of trace evidence for context sensitive, problem-solving forensic applications. Copyright © 2017 The Author. Published by Elsevier B.V. All rights reserved.
Action recognition in depth video from RGB perspective: A knowledge transfer manner
NASA Astrophysics Data System (ADS)
Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen
2018-03-01
Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, where features learned from RGB videos are used for action recognition in depth videos. More specifically, we take three steps to solve this problem. First, unlike an image, a video is more complex because it carries both spatial and temporal information; to encode this information better, the dynamic image method is used to represent each RGB or depth video as a single image, so that most feature extraction methods for images can be applied to videos. Second, since a video can be represented as an image, a standard CNN model can be used for training and testing on videos; besides, the CNN model can also be used for feature extraction because of its powerful representational ability. Third, since RGB videos and depth videos belong to two different domains, domain adaptation is used to make the two feature domains more similar, so that the features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and our method achieves more than 2% accuracy improvement by using domain adaptation from RGB to depth action recognition.
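The dynamic-image step can be sketched as a weighted temporal sum of frames. The weights below follow the approximate rank-pooling coefficients often quoted in the dynamic-image literature; the paper may use a different variant, and the clip here is random dummy data.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip of shape (T, H, W, C) into one image by a weighted
    temporal sum (approximate rank pooling), rescaled to 8-bit range."""
    T = frames.shape[0]
    harm = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, T + 1))))
    t = np.arange(1, T + 1)
    alpha = 2.0 * (T - t + 1) - (T + 1) * (harm[T] - harm[t - 1])
    di = np.tensordot(alpha, frames.astype(float), axes=(0, 0))
    di = 255.0 * (di - di.min()) / (np.ptp(di) + 1e-8)
    return di.astype(np.uint8)

clip = np.random.randint(0, 256, size=(16, 64, 64, 3), dtype=np.uint8)  # dummy clip
print(dynamic_image(clip).shape)   # (64, 64, 3), ready for a standard image CNN
```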
A blind deconvolution method based on L1/L2 regularization prior in the gradient space
NASA Astrophysics Data System (ADS)
Cai, Ying; Shi, Yu; Hua, Xia
2018-02-01
In image restoration, the restored result can differ greatly from the real image because of noise. In order to solve this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. Then the function is iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. In this paper, the information in the gradient domain is considered better for estimating the blur kernel, so the blur kernel is estimated in the gradient domain. This problem can be implemented quickly in the frequency domain by the fast Fourier transform. In addition, in order to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space can obtain a unique and stable solution in the image restoration process, which not only preserves the edges and details of the image but also ensures the accuracy of the results.
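The shrinkage-thresholding inner step mentioned above is easiest to see with plain L1 regularization. The 1-D ISTA sketch below uses an ordinary L1 penalty rather than the L1/L2 ratio prior of the paper, and all signals are synthetic.

```python
import numpy as np

def ista_deconvolve(y, h, lam=0.02, step=0.1, n_iter=500):
    """ISTA for min_x 0.5*||h * x - y||^2 + lam*||x||_1 (1-D, 'same' convolution)."""
    x = np.zeros_like(y)
    h_flip = h[::-1]
    for _ in range(n_iter):
        r = np.convolve(x, h, mode="same") - y           # residual
        grad = np.convolve(r, h_flip, mode="same")       # (approximate) adjoint of blur
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy example: a sparse spike train blurred by a Gaussian kernel
rng = np.random.default_rng(4)
x_true = np.zeros(128); x_true[[20, 60, 90]] = [1.0, -0.7, 0.5]
h = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2); h /= h.sum()
y = np.convolve(x_true, h, mode="same") + 0.01 * rng.standard_normal(128)
print(np.round(ista_deconvolve(y, h)[[20, 60, 90]], 2))  # values near the true spikes
```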
Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain
NASA Astrophysics Data System (ADS)
Yin, Xingyao; Li, Kun; Zong, Zhaoyun
2016-10-01
AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches to estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited in view of its poor stability and sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on the mixed-domain formulation of stationary convolution is proposed, which resolves the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in the inversion process. Furthermore, background constraints on the model parameters are taken into consideration to improve the stability and reliability of the inversion and to compensate for the low-frequency components of the seismic signals. Besides, the different frequency components of the seismic signals decouple automatically. This allows the inverse problem to be solved by multi-component successive iterations, improving the convergence precision. Thus, superior resolution compared with conventional time-domain pre-stack inversion can be achieved easily. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and verify its robustness to noise. Finally, application to a field data case demonstrates that the proposed method obtains stable inversion results for elastic parameters from pre-stack seismic data in conformity with the real logging data.
NASA Technical Reports Server (NTRS)
Zimmerman, Martin L.
1995-01-01
This manual explains the theory and operation of the finite-difference time domain code FDTD-ANT developed by Analex Corporation at the NASA Lewis Research Center in Cleveland, Ohio. This code can be used for solving electromagnetic problems that are electrically small or medium (on the order of 1 to 50 cubic wavelengths). Calculated parameters include transmission line impedance, relative effective permittivity, antenna input impedance, and far-field patterns in both the time and frequency domains. The maximum problem size may be adjusted according to the computer used. This code has been run on the DEC VAX and 486 PCs and on workstations such as the Sun Sparc and the IBM RS/6000.
Directive sources in acoustic discrete-time domain simulations based on directivity diagrams.
Escolano, José; López, José J; Pueo, Basilio
2007-06-01
Discrete-time domain methods provide a simple and flexible way to solve initial boundary value problems. With regard to the sources in such methods, only monopoles or dipoles can be considered. However, in many problems such as room acoustics, the radiation of realistic sources is directional-dependent and their directivity patterns have a clear influence on the total sound field. In this letter, a method to synthesize the directivity of sources is proposed, especially in cases where the knowledge is only based on discrete values of the directivity diagram. Some examples have been carried out in order to show the behavior and accuracy of the proposed method.
A comparative study on stress and compliance based structural topology optimization
NASA Astrophysics Data System (ADS)
Hailu Shimels, G.; Dereje Engida, W.; Fakhruldin Mohd, H.
2017-10-01
Most structural topology optimization problems are formulated and solved either to minimize compliance under a volume constraint or to minimize weight under stress constraints. Although much research has been conducted on these two formulations separately, there is no clear comparative study between the two approaches. This paper compares these formulation techniques so that an end user or designer can choose the better one for the problem at hand. Benchmark problems under the same boundary and loading conditions are defined and solved, and the results are compared across the formulations. Simulation results show that the two formulation techniques depend on the type of loading and boundary conditions defined. The maximum stress induced in the design domain is higher when the design domain is formulated using compliance-based formulations. Optimal layouts from compliance minimization have more complex geometry than stress-based ones, which can make manufacturing the optimal layouts challenging. Optimal layouts from compliance-based formulations depend on the amount of material to be distributed, whereas optimal layouts from the stress-based formulation depend on the type of material used to define the design domain. The high computational cost of stress-based topology optimization remains a challenge because stress constraints are defined at the element level. The results also show that adjusting the convergence criteria can be an alternative way to reduce the maximum stress developed in optimal layouts. Therefore, a designer or end user should choose a formulation based on the design domain and the boundary conditions considered.
NASA Astrophysics Data System (ADS)
Roul, Pradip; Warbhe, Ujwal
2017-08-01
The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining the approximate solutions for a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, it has been found that this method results in slowly convergent series. To overcome the shortcoming, we present a new reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen-diffusion in a spherical cell and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing seminumerical or numerical techniques. Numerical results reveal that only two or three iterations lead to high accuracy of the solution and this newly improved technique introduces a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Jakeman, John; Gittelson, Claude
2015-01-08
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
Linear homotopy solution of nonlinear systems of equations in geodesy
NASA Astrophysics Data System (ADS)
Paláncz, Béla; Awange, Joseph L.; Zaletnyik, Piroska; Lewis, Robert H.
2010-01-01
A fundamental task in geodesy is solving systems of equations. Many geodetic problems are represented as systems of multivariate polynomials. A common problem in solving such systems is improper initial starting values for iterative methods, leading to convergence to solutions with no physical meaning, or to convergence that requires global methods. Although symbolic methods such as Groebner bases or resultants have been shown to be very efficient, i.e., providing solutions for determined systems such as the 3-point problem of 3D affine transformation, the symbolic algebra can be very time consuming, even with special Computer Algebra Systems (CAS). This study proposes the Linear Homotopy method, which can be implemented easily in high-level computer languages like C++ and Fortran that are faster than CAS by at least two orders of magnitude. Using Mathematica, the power of homotopy is demonstrated in solving three nonlinear geodetic problems: resection, GPS positioning, and affine transformation. The method, which enlarges the domain of convergence, is found to be efficient, less sensitive to rounding errors, and of lower complexity compared with other local methods like Newton-Raphson.
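A minimal homotopy-continuation sketch is given below for a toy bivariate polynomial system, using the Newton homotopy H(x, t) = F(x) - (1 - t)F(x0) tracked by stepping t with a few Newton corrections. The start system, step counts, and test equations are assumptions for illustration, not the geodetic problems or the implementation discussed in the study.

```python
import numpy as np

def newton_homotopy(F, J, x0, steps=50, newton_iters=5):
    """Track H(x, t) = F(x) - (1 - t) * F(x0) from t = 0 (root x0)
    to t = 1 (root of F), correcting with a few Newton steps per t."""
    x = np.asarray(x0, dtype=float)
    F0 = F(x)
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            x = x - np.linalg.solve(J(x), F(x) - (1.0 - t) * F0)
    return x

# Toy polynomial system (a stand-in for a small geodetic problem):
#   x^2 + y^2 - 4 = 0,   x*y - 1 = 0
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = newton_homotopy(F, J, x0=[2.0, 1.0])
print(np.round(root, 6), np.round(F(root), 9))
```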
NASA Astrophysics Data System (ADS)
Negrello, Camille; Gosselet, Pierre; Rey, Christian
2018-05-01
An efficient method for solving large nonlinear problems combines Newton solvers and Domain Decomposition Methods (DDM). In the DDM framework, the boundary conditions can be chosen to be primal, dual or mixed. The mixed approach has the advantage of allowing the search for an optimal interface parameter (often called impedance), which can increase the convergence rate. The optimal value of this parameter is often too expensive to compute exactly in practice: an approximate version has to be sought, along with a compromise between efficiency and computational cost. In the context of parallel algorithms for solving nonlinear structural mechanics problems, we propose a new heuristic for the impedance which combines short- and long-range effects at a low computational cost.
Mental Health and Head Start: Teaching Adaptive Skills.
ERIC Educational Resources Information Center
Forness, Steven R.; Serna, Loretta A.; Kavale, Kenneth A.; Nielsen, Elizabeth
1998-01-01
Describes the use of a self-determination curriculum for mental-health intervention and primary prevention for Head Start children. The curriculum addresses critical adaptive-skills domains, including social skills, self-evaluation, self-direction, networking or friendship, collaboration or support seeking, problem solving and decision making, and…
Exploring the Use of Conceptual Metaphors in Solving Problems on Entropy
ERIC Educational Resources Information Center
Jeppsson, Fredrik; Haglund, Jesper; Amin, Tamer G.; Stromdahl, Helge
2013-01-01
A growing body of research has examined the experiential grounding of scientific thought and the role of experiential intuitive knowledge in science learning. Meanwhile, research in cognitive linguistics has identified many "conceptual metaphors" (CMs), metaphorical mappings between abstract concepts and experiential source domains,…
Beyond Ball-and-Stick: Students' Processing of Novel STEM Visualizations
ERIC Educational Resources Information Center
Hinze, Scott R.; Rapp, David N.; Williamson, Vickie M.; Shultz, Mary Jane; Deslongchamps, Ghislain; Williamson, Kenneth C.
2013-01-01
Students are frequently presented with novel visualizations introducing scientific concepts and processes normally unobservable to the naked eye. Despite being unfamiliar, students are expected to understand and employ the visualizations to solve problems. Domain experts exhibit more competency than novices when using complex visualizations, but…
Scientific Culture and School Culture: Epistemic and Procedural Components.
ERIC Educational Resources Information Center
Jimenez-Aleixandre, Maria Pilar; Diaz de Bustamante, Joaquin; Duschl, Richard A.
This paper discusses the elaboration and application of "scientific culture" categories to the analysis of students' discourse while solving problems in inquiry contexts. Scientific culture means the particular domain culture of science, the culture of science practitioners. The categories proposed include both epistemic operations and…
Solving Homeland Security’s Wicked Problems: A Design Thinking Approach
2015-09-01
This thesis provides a framework for how S&T can incorporate design-thinking principles that are working well in other domains to spur solutions. Galbraith's Star Model was used to analyze how DHS S&T, MindLab, and DARPA apply design-thinking principles to inform the framework to apply and
NASA Astrophysics Data System (ADS)
Dana, Saumik; Ganis, Benjamin; Wheeler, Mary F.
2018-01-01
In coupled flow and poromechanics phenomena representing hydrocarbon production or CO2 sequestration in deep subsurface reservoirs, the spatial domain in which fluid flow occurs is usually much smaller than the spatial domain over which significant deformation occurs. The typical approach is to either impose an overburden pressure directly on the reservoir thus treating it as a coupled problem domain or to model flow on a huge domain with zero permeability cells to mimic the no flow boundary condition on the interface of the reservoir and the surrounding rock. The former approach precludes a study of land subsidence or uplift and further does not mimic the true effect of the overburden on stress sensitive reservoirs whereas the latter approach has huge computational costs. In order to address these challenges, we augment the fixed-stress split iterative scheme with upscaling and downscaling operators to enable modeling flow and mechanics on overlapping nonmatching hexahedral grids. Flow is solved on a finer mesh using a multipoint flux mixed finite element method and mechanics is solved on a coarse mesh using a conforming Galerkin method. The multiscale operators are constructed using a procedure that involves singular value decompositions, a surface intersections algorithm and Delaunay triangulations. We numerically demonstrate the convergence of the augmented scheme using the classical Mandel's problem solution.
Physics-Aware Informative Coverage Planning for Autonomous Vehicles
2014-06-01
environment and find the optimal path connecting fixed nodes, which is equivalent to solving the Traveling Salesman Problem (TSP). While TSP is an NP...intended for application to USV harbor patrolling, it is applicable to many different domains. The problem of traveling over an area and gathering...environment. I. INTRODUCTION There are many applications that need persistent monitoring of a given area, requiring repeated travel over the area to
Knowledge Distance, Cognitive-Search Processes, and Creativity
Acar, Oguz Ali; van den Ende, Jan
2016-01-01
Prior research has provided conflicting arguments and evidence about whether people who are outsiders or insiders relative to a knowledge domain are more likely to demonstrate scientific creativity in that particular domain. We propose that the nature of the relationship between creativity and the distance of an individual’s expertise from a knowledge domain depends on his or her cognitive processes of problem solving (i.e., cognitive-search effort and cognitive-search variation). In an analysis of 230 solutions generated in a science contest platform, we found that distance was positively associated with creativity when problem solvers engaged in a focused search (i.e., low cognitive-search variation) and exerted a high level of cognitive effort. People whose expertise was close to a knowledge domain, however, were more likely to demonstrate creativity in that domain when they drew on a wide variety of different knowledge elements for recombination (i.e., high cognitive-search variation) and exerted substantial cognitive effort. PMID:27016241
Acar, Oguz Ali; van den Ende, Jan
2016-05-01
Prior research has provided conflicting arguments and evidence about whether people who are outsiders or insiders relative to a knowledge domain are more likely to demonstrate scientific creativity in that particular domain. We propose that the nature of the relationship between creativity and the distance of an individual's expertise from a knowledge domain depends on his or her cognitive processes of problem solving (i.e., cognitive-search effort and cognitive-search variation). In an analysis of 230 solutions generated in a science contest platform, we found that distance was positively associated with creativity when problem solvers engaged in a focused search (i.e., low cognitive-search variation) and exerted a high level of cognitive effort. People whose expertise was close to a knowledge domain, however, were more likely to demonstrate creativity in that domain when they drew on a wide variety of different knowledge elements for recombination (i.e., high cognitive-search variation) and exerted substantial cognitive effort. © The Author(s) 2016.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)
2002-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.
Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)
2001-01-01
A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach are demonstrated on several parallel computers.
Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach
NASA Astrophysics Data System (ADS)
Wang, Li; Lu, Zhong-Rong
2017-05-01
This paper aims to identify the parameters of the Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure; that is, identifying the model parameters is treated as an optimization problem with a nonlinear least-squares objective function. The enhanced response sensitivity approach, which has been shown to be convergent and well suited to this kind of problem, is then adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.
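A generic version of this inverse procedure can be sketched as follows: simulate a standard Bouc-Wen element with forward Euler and fit its parameters to noisy force data with scipy's least_squares. The model form, parameter values, and optimizer are assumptions for illustration; the paper's enhanced response sensitivity iteration is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def bouc_wen_force(params, x, dt):
    """Restoring force of a Bouc-Wen element for a displacement history x,
    integrated with forward Euler:
    z' = A*x' - beta*|x'|*|z|^(n-1)*z - gamma*x'*|z|^n,  F = alpha*k*x + (1-alpha)*k*z."""
    alpha, k, A, beta, gamma, n = params
    xdot = np.gradient(x, dt)
    z = np.zeros_like(x)
    for i in range(len(x) - 1):
        zdot = (A * xdot[i]
                - beta * abs(xdot[i]) * abs(z[i])**(n - 1) * z[i]
                - gamma * xdot[i] * abs(z[i])**n)
        z[i + 1] = z[i] + dt * zdot
    return alpha * k * x + (1.0 - alpha) * k * z

dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = 0.05 * np.sin(2.0 * np.pi * 0.5 * t)             # imposed displacement
true_params = [0.3, 1000.0, 1.0, 50.0, 25.0, 1.5]
rng = np.random.default_rng(2)
measured = bouc_wen_force(true_params, x, dt) + rng.normal(0.0, 0.2, t.size)

residual = lambda p: bouc_wen_force(p, x, dt) - measured
fit = least_squares(residual, x0=[0.5, 800.0, 1.0, 30.0, 10.0, 1.2],
                    bounds=([0, 1, 0.1, 0, 0, 1], [1, 5000, 10, 500, 500, 3]))
print(np.round(fit.x, 3))
```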
Challenges in building intelligent systems for space mission operations
NASA Technical Reports Server (NTRS)
Hartman, Wayne
1991-01-01
The purpose here is to provide a top-level look at the stewardship functions performed in space operations, and to identify the major issues and challenges that must be addressed to build intelligent systems that can realistically support operations functions. The focus is on decision support activities involving monitoring, state assessment, goal generation, plan generation, and plan execution. The bottom line is that problem solving in the space operations domain is a very complex process. A variety of knowledge constructs, representations, and reasoning processes are necessary to support effective human problem solving. Emulating these kinds of capabilities in intelligent systems offers major technical challenges that the artificial intelligence community is only beginning to address.
Parallel computation using boundary elements in solid mechanics
NASA Technical Reports Server (NTRS)
Chien, L. S.; Sun, C. T.
1990-01-01
The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for solving a demonstration problem on the Sequent Symmetry S81 parallel computing system.
Hambrick, David Z; Libarkin, Julie C; Petcovic, Heather L; Baker, Kathleen M; Elkins, Joe; Callahan, Caitlin N; Turner, Sheldon P; Rench, Tara A; Ladue, Nicole D
2012-08-01
Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco Root Mountains of Montana. A Visuospatial Ability × Geological Knowledge interaction was found, such that visuospatial ability positively predicted mapping performance at low, but not high, levels of geological knowledge. This finding suggests that high levels of domain knowledge may sometimes enable circumvention of performance limitations associated with cognitive abilities. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sackett, S.J.
JASON solves general electrostatics problems having either slab or cylindrical symmetry. More specifically, it solves the self-adjoint elliptic equation ∇·(K ∇V) - γV + ρ = 0 in an arbitrary two-dimensional domain. For electrostatics, V is the electrostatic potential, K is the dielectric tensor, and ρ is the free-charge density. The parameter γ is identically zero for electrostatics but may take a positive nonzero value in other cases (e.g., capillary surface problems with gravity loading). The system of algebraic equations used in JASON is generated by the finite element method. Four-node quadrilateral elements are used for most of the mesh. Triangular elements, however, are occasionally used on boundaries to avoid severe mesh distortions. 15 figures. (RWR)
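For a feel of the governing equation, the sketch below solves the 1D scalar-coefficient analogue d/dx(K dV/dx) - γV + ρ = 0 with finite differences and Dirichlet ends. This is only an illustration under assumed boundary values and coefficients, not the 2D finite-element formulation used in JASON.

```python
import numpy as np

def solve_elliptic_1d(K, gamma, rho, h, v_left=0.0, v_right=0.0):
    """Finite-difference solve of d/dx(K dV/dx) - gamma*V + rho = 0 on a
    uniform grid with Dirichlet values at both ends. K, gamma, rho are nodal
    arrays of equal length N; the unknowns are the interior nodes 1..N-2."""
    N = len(rho)
    A = np.zeros((N - 2, N - 2))
    b = -rho[1:-1].astype(float)
    K_half = 0.5 * (K[:-1] + K[1:])          # coefficient at cell midpoints
    for i in range(1, N - 1):
        w, e = K_half[i - 1] / h**2, K_half[i] / h**2
        row = i - 1
        A[row, row] = -(w + e) - gamma[i]
        if row > 0:
            A[row, row - 1] = w
        else:
            b[row] -= w * v_left                 # move known boundary value to RHS
        if row < N - 3:
            A[row, row + 1] = e
        else:
            b[row] -= e * v_right
    V = np.empty(N)
    V[0], V[-1] = v_left, v_right
    V[1:-1] = np.linalg.solve(A, b)
    return V

# Toy usage: uniform dielectric, unit charge density, grounded ends on [0, 1].
N, h = 51, 1.0 / 50
V = solve_elliptic_1d(K=np.ones(N), gamma=np.zeros(N), rho=np.ones(N), h=h)
print(round(V.max(), 4))   # analytic maximum for this case is 1/8 = 0.125
```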
Variational Trajectory Optimization Tool Set: Technical description and user's manual
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.
1993-01-01
The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.
NASA Astrophysics Data System (ADS)
Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.
2011-08-01
This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.
A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rockway, J D; Champagne, N J; Sharpe, R M
2004-01-14
Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the Asymptotic Wave Expansion (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective at quasi-static and low frequency full-wave simulations. Here it is applied to a full-wave MoM solver, capable of solving for metals, dielectrics, and coupled circuit-EM problems.
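The core idea of a moment-based fast frequency sweep can be sketched for a small parameterized linear system x(s) = (G + sC)^(-1) b: compute moments about an expansion point and sum the Taylor series. The matrices, expansion order, and sweep range below are assumptions, and the Padé step of the full AWE method is omitted here.

```python
import numpy as np

def moment_sweep(G, C, b, s0, s_values, order=8):
    """Approximate x(s) = (G + s*C)^{-1} b near s0 with Taylor moments:
    with A = (G + s0*C)^{-1}, m0 = A b and m_k = -(A C) m_{k-1}, so
    x(s) ~= sum_k m_k (s - s0)^k."""
    A = np.linalg.inv(G + s0 * C)
    moments = [A @ b]
    for _ in range(1, order):
        moments.append(-(A @ C) @ moments[-1])
    sweep = []
    for s in s_values:
        ds = s - s0
        sweep.append(sum(m * ds**k for k, m in enumerate(moments)))
    return np.array(sweep)

# Toy usage: a small, well-conditioned system swept around s0 = 1.0.
rng = np.random.default_rng(3)
G = np.diag(rng.uniform(2.0, 4.0, 5))
C = np.eye(5)
b = rng.normal(size=5)
s_values = np.linspace(0.8, 1.2, 5)
approx = moment_sweep(G, C, b, s0=1.0, s_values=s_values)
exact = np.array([np.linalg.solve(G + s * C, b) for s in s_values])
print(np.round(np.abs(approx - exact).max(), 8))   # should be tiny
```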
An investigation of the effects of interventions on problem-solving strategies and abilities
NASA Astrophysics Data System (ADS)
Cox, Charles Terrence, Jr.
Problem-solving has been described as the "heart" of the chemistry classroom, and students' development of problem-solving skills is essential for their success in chemistry. Despite the importance of problem-solving, there has been little research within the chemistry domain, largely because of the lack of tools to collect data for large populations. Problem-solving was assessed using a software package known as IMMEX (for Interactive Multimedia Exercises), which has an HTML tracking feature that allows problem-solving data to be collected in the background as students work the problems. The primary goal of this research was to develop methods (known as interventions) that could promote improvements in students' problem-solving and, most notably, aid their transition from the novice to the competent level. Three intervention techniques were incorporated within the chemistry curricula: collaborative grouping (face-to-face and distance), concept mapping, and peer-led team learning. The face-to-face collaborative grouping intervention was designed to probe the factors affecting the quality of the group interaction. Students' logical reasoning abilities were measured using the Group Assessment of Logical Thinking (GALT) test, which classifies students as formal, transitional, or concrete. These classifications essentially provide a basis for identifying scientific aptitude, and they were used as the basis for forming collaborative groups of two students. The six possible pairings (formal-formal, formal-transitional, etc.) were formed to determine how group composition influences the gains in student abilities observed from collaborative grouping interventions. Students were given three assignments (an individual pre-collaborative, an individual post-collaborative, and a collaborative assignment), each requiring them to work an IMMEX problem set. Similar performance gains of about 10% were observed for each group, with two exceptions: the transitional students who were paired with concrete students had a 15% gain, and the concrete students paired with other concrete students had only a marginal gain. In fact, there was no statistical difference between the pre-collaborative and post-collaborative student abilities for concrete-concrete groups. The distance collaborative intervention was completed using a new interface for the IMMEX software designed to mimic face-to-face collaboration. A stereochemistry problem set, which had a solved rate of 28% prior to collaboration, was chosen for incorporation into this distance collaboration study. (Abstract shortened by UMI.)
Do Knowledge-Component Models Need to Incorporate Representational Competencies?
ERIC Educational Resources Information Center
Rau, Martina Angela
2017-01-01
Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…
Practical Handbook of School Psychology: Effective Practices for the 21st Century
ERIC Educational Resources Information Center
Peacock, Gretchen Gimpel, Ed.; Ervin, Ruth A., Ed.; Daly, Edward J., III, Ed.; Merrell, Kenneth W., Ed.
2009-01-01
This authoritative guide addresses all aspects of school psychology practice in a response-to-intervention (RTI) framework. Thirty-four focused chapters present effective methods for problem-solving-based assessment, instruction, and intervention. Specific guidelines are provided for promoting success in core academic domains--reading, writing,…
The Implications of Research on Expertise for Curriculum and Pedagogy
ERIC Educational Resources Information Center
Feldon, David F.
2007-01-01
Instruction on problem solving in particular domains typically relies on explanations from experts about their strategies. However, research indicates that such self-reports often are incomplete or inaccurate (e.g., Chao & Salvendy, 1994; Cooke & Breedin, 1994). This article evaluates research on experts' cognition, the accuracy of experts'…
ERIC Educational Resources Information Center
Education Development Center, Inc., 2016
2016-01-01
In the domain of "Operations & Algebraic Thinking," Common Core State Standards indicate that in kindergarten, first grade, and second grade, children should demonstrate and expand their ability to understand, represent, and solve problems using the operations of addition and subtraction, laying the foundation for operations using…
Processes and Knowledge in Designing Instruction.
ERIC Educational Resources Information Center
Greeno, James G.; And Others
Results from a study of problem solving in the domain of instructional design are presented. Subjects were eight teacher trainees who were recent graduates of or were enrolled in the Stanford Teacher Education Program at Stanford University (California). Subjects studied a computer-based tutorial--the VST2000--about a fictitious vehicle. The…
Design Rationale for a Complex Performance Assessment
ERIC Educational Resources Information Center
Williamson, David M.; Bauer, Malcolm; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.; DeMark, Sarah F.
2004-01-01
In computer-based interactive environments meant to support learning, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function effectively as an assessment, a computer system must additionally be able to evoke and interpret observable evidence…
Design Features of Pedagogically-Sound Software in Mathematics.
ERIC Educational Resources Information Center
Haase, Howard; And Others
Weaknesses in educational software currently available in the domain of mathematics are discussed. A technique that was used for the design and production of mathematics software aimed at improving problem-solving skills which combines sound pedagogy and innovative programming is presented. To illustrate the design portion of this technique, a…
Seven Times Around A City: The Evolution Of Israeli Operational Art In Urban Operations
2016-05-26
urban conflict represents a complex adaptive ecology, the physical environment, the intangible domain, and the problem-solving approach will come...and therefore, also, a Jewish state. Perennial questions about the purpose of the military, its force structure and ethics, law and civil-military
ERIC Educational Resources Information Center
Jitendra, Asha K.; Lein, Amy E.; Star, Jon R.; Dupuis, Danielle N.
2013-01-01
Proportional thinking, which requires understanding fractions, ratios, and proportions, is an area of mathematics that is cognitively challenging for many children and adolescents (Fujimura, 2001; Lamon, 2007; Lobato, Ellis, Charles, & Zbiek, 2010; National Mathematics Advisory Panel [NMAP], 2008) and "transcends topical barriers in adult…
Data from: Solving the Robot-World Hand-Eye(s) Calibration Problem with
Iterative Methods | National Agricultural Library, Ag Data Commons. License: U.S. Public Domain. Funding Source(s): National Science Foundation IOS-1339211; Agricultural Research
Pedagogical Strategies for Human and Computer Tutoring.
ERIC Educational Resources Information Center
Reiser, Brian J.
The pedagogical strategies of human tutors in problem solving domains are described and the possibility of incorporating these techniques into computerized tutors is examined. GIL (Graphical Instruction in LISP), an intelligent tutoring system for LISP programming, is compared to human tutors teaching the same material in order to identify how the…
Toward High-Performance Communications Interfaces for Science Problem Solving
ERIC Educational Resources Information Center
Oviatt, Sharon L.; Cohen, Adrienne O.
2010-01-01
From a theoretical viewpoint, educational interfaces that facilitate communicative actions involving representations central to a domain can maximize students' effort associated with constructing new schemas. In addition, interfaces that minimize working memory demands due to the interface per se, for example by mimicking existing non-digital work…
Cognitive Correlates of Math Skills in Third-Grade Students
ERIC Educational Resources Information Center
Mannamaa, Mairi; Kikas, Eve; Peets, Katlin; Palu, Anu
2012-01-01
Math achievement is not a unidimensional construct but includes different skills that require different cognitive abilities. The focus of this study was to examine associations between a number of cognitive abilities and three domains of math skills (knowing, applying and problem solving) simultaneously in a multivariate framework. Participants…
Creative Performances and Gifted Education: Studies from Art Education
ERIC Educational Resources Information Center
Thomas, Kerry
2017-01-01
This paper acknowledges that there is widespread support in Gifted Education for students' creative aptitudes to be identified as a domain that includes imagination, originality, fluency, and problem solving. I explore where and when these concepts originated and briefly identify how they are represented in Gifted Education. Then various…
Strategy Training Eliminates Sex Differences in Spatial Problem Solving in a STEM Domain
ERIC Educational Resources Information Center
Stieff, Mike; Dixon, Bonnie L.; Ryu, Minjung; Kumi, Bryna C.; Hegarty, Mary
2014-01-01
Poor spatial ability can limit success in science, technology, engineering, and mathematics (STEM) disciplines. Many initiatives aim to increase STEM achievement and degree attainment through selective recruitment of high-spatial students or targeted training to improve spatial ability. The current study examines an alternative approach to…
Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.
ERIC Educational Resources Information Center
Hoppe, H. Ulrich
1994-01-01
Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)
ERIC Educational Resources Information Center
Osman, Magda
2008-01-01
Given the privileged status claimed for active learning in a variety of domains (visuomotor learning, causal induction, problem solving, education, skill learning), the present study examines whether action-based learning is a necessary, or a sufficient, means of acquiring the relevant skills needed to perform a task typically described as…
Nakahara, Soichiro; Medland, Sarah; Turner, Jessica A; Calhoun, Vince D; Lim, Kelvin O; Mueller, Bryon A; Bustillo, Juan R; O'Leary, Daniel S; Vaidya, Jatin G; McEwen, Sarah; Voyvodic, James; Belger, Aysenil; Mathalon, Daniel H; Ford, Judith M; Guffanti, Guia; Macciardi, Fabio; Potkin, Steven G; van Erp, Theo G M
2018-06-12
This study assessed genetic contributions to six cognitive domains, identified by the MATRICS Cognitive Consensus Battery as relevant for schizophrenia, cognition-enhancing, clinical trials. Psychiatric Genomics Consortium Schizophrenia polygenic risk scores showed significant negative correlations with each cognitive domain. Genome-wide association analyses identified loci associated with attention/vigilance (rs830786 within HNF4G), verbal memory (rs67017972 near NDUFS4), and reasoning/problem solving (rs76872642 within HDAC9). Gene set analysis identified unique and shared genes across cognitive domains. These findings suggest involvement of common and unique mechanisms across cognitive domains and may contribute to the discovery of new therapeutic targets to treat cognitive deficits in schizophrenia. Copyright © 2018 Elsevier B.V. All rights reserved.
"Fast" Is Not "Real-Time": Designing Effective Real-Time AI Systems
NASA Astrophysics Data System (ADS)
O'Reilly, Cindy A.; Cromarty, Andrew S.
1985-04-01
Realistic practical problem domains (such as robotics, process control, and certain kinds of signal processing) stand to benefit greatly from the application of artificial intelligence techniques. These problem domains are of special interest because they are typified by complex dynamic environments in which the ability to select and initiate a proper response to environmental events in real time is a strict prerequisite to effective environmental interaction. Artificial intelligence systems developed to date have been sheltered from this real-time requirement, however, largely by virtue of their use of simplified problem domains or problem representations. The plethora of colloquial and (in general) mutually inconsistent interpretations of the term "real-time" employed by workers in each of these domains further exacerbates the difficulties in effectively applying state-of-the-art problem solving techniques to time-critical problems. Indeed, the intellectual waters are by now sufficiently muddied that the pursuit of a rigorous treatment of intelligent real-time performance mandates the redevelopment of proper problem perspective on what "real-time" means, starting from first principles. We present a simple but nonetheless formal definition of real-time performance. We then undertake an analysis of both conventional techniques and AI technology with respect to their ability to meet substantive real-time performance criteria. This analysis provides a basis for specification of problem-independent design requirements for systems that would claim real-time performance. Finally, we discuss the application of these design principles to a pragmatic problem in real-time signal understanding.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ruipeng; Saad, Yousef
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman-Morrison-Woodbury formula, which yields two variants of the preconditioning methods. The low-rank expansion is computed by the Lanczos procedure with reorthogonalizations. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
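The low-rank correction step can be illustrated with a dense toy version of the Sherman-Morrison-Woodbury identity: apply (M + UV^T)^(-1) using only solves with M and one small capacitance system. The block-diagonal M, the rank-2 factors, and the sizes below are assumptions; the paper builds M from domain decomposition and obtains the factors with a Lanczos procedure.

```python
import numpy as np

def woodbury_solve(solve_M, U, V, b):
    """Return (M + U V^T)^{-1} b via the Sherman-Morrison-Woodbury identity,
    given a routine solve_M(x) that applies M^{-1} to a vector or matrix."""
    Minv_b = solve_M(b)
    Minv_U = solve_M(U)                       # columnwise solves
    S = np.eye(U.shape[1]) + V.T @ Minv_U     # small k x k capacitance matrix
    return Minv_b - Minv_U @ np.linalg.solve(S, V.T @ Minv_b)

# Toy usage: M is block diagonal (as in domain decomposition), plus a
# rank-2 coupling U V^T between the blocks.
rng = np.random.default_rng(4)
blocks = [rng.normal(size=(4, 4)) + 6 * np.eye(4) for _ in range(3)]
M = np.zeros((12, 12))
for i, B in enumerate(blocks):
    M[4 * i:4 * i + 4, 4 * i:4 * i + 4] = B
U, V = rng.normal(size=(12, 2)), rng.normal(size=(12, 2))
b = rng.normal(size=12)
x = woodbury_solve(lambda rhs: np.linalg.solve(M, rhs), U, V, b)
print(np.round(np.abs((M + U @ V.T) @ x - b).max(), 10))   # residual ~ 0
```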
The boundary element method applied to 3D magneto-electro-elastic dynamic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.
2017-11-01
Due to their coupling properties, magneto-electro-elastic materials have a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on weakly singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time-domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.
McCarty, David E.
2010-01-01
The rule of diagnostic parsimony—otherwise known as “Ockham's Razor”—teaches students of medicine to find a single unifying diagnosis to explain a given patient's symptoms. While this approach has merits in some settings, a more comprehensive approach is often needed for patients with chronic, nonspecific presentations for which there is a broad differential diagnosis. The cardinal manifestations of sleep disorders—daytime neurocognitive impairment and subjective sleep disturbances—are examples of such presentations. Successful sleep medicine clinicians therefore approach every patient with the knowledge that multiple diagnoses—rather than simply one—are likely to be found. Teaching an integrated and comprehensive approach to other clinicians in an organized and reproducible fashion is challenging, and the evaluation of effectiveness of such teaching is even more so. As a practical aid for teaching the approach to—and evaluation of—a comprehensive sleep medicine encounter, five functional domains of sleep medicine clinical problem-solving are presented as potential sources for sleep/wake disruption: (1) circadian misalignment, (2) pharmacologic factors, (3) medical factors, (4) psychiatric/psychosocial factors, and (5) primary sleep medicine diagnoses. These domains are presented and explained in an easy-to-remember “five finger” format. The five finger format can be used in real time to evaluate the completeness of a clinical encounter, or can be used in the design of standardized patients to identify areas of strength and potential weakness. A score sheet based upon this approach is offered as an alternative to commonly used Likert scales as a potentially more objective and practical measure of clinical problem-solving competence, making it useful for training programs striving to achieve or maintain fellowship accreditation. Citation: McCarty DE. Beyond Ockham's Razor: redefining problem-solving in clinical sleep medicine using a “five-finger” approach. J Clin Sleep Med 2010;6(3):292-269. PMID:20572425
Domain-specific rationality in human choices: violations of utility axioms and social contexts.
Wang, X T
1996-07-01
This study presents a domain-specific view of human decision rationality. It explores social and ecological domain-specific psychological mechanisms underlying choice biases and violations of utility axioms. Results from both the USA and China revealed a social group domain-specific choice pattern. The irrational preference reversal in a hypothetical life-death decision problem (a classical example of framing effects) was eliminated by providing a small group or family context in which most subjects favored a risky choice option regardless of the positive/negative framing of choice outcomes. The risk preference data also indicate that the subjective scope of small group domain is larger for Chinese subjects, suggesting that human choice mechanisms are sensitive to culturally specific features of group living. A further experiment provided evidence that perceived fairness might be one major factor regulating the choice preferences found in small group (kith-and-kin) contexts. Finally, the violation of the stochastic dominance axiom of the rational theory of choice was predicted and tested. The violations were found only when the "life-death" problem was presented in small group contexts; the strongest violation was found in a family context. These results suggest that human decisions and choices are regulated by domain-specific choice mechanisms designed to solve evolutionary recurrent and adaptively important problems.
Reinforcement learning in scheduling
NASA Technical Reports Server (NTRS)
Dietterich, Tom G.; Ok, Dokyeong; Zhang, Wei; Tadepalli, Prasad
1994-01-01
The goal of this research is to apply reinforcement learning methods to real-world problems like scheduling. In this preliminary paper, we show that learning to solve scheduling problems such as the Space Shuttle Payload Processing and the Automatic Guided Vehicle (AGV) scheduling can be usefully studied in the reinforcement learning framework. We discuss some of the special challenges posed by the scheduling domain to these methods and propose some possible solutions we plan to implement.
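As an illustration of casting scheduling as reinforcement learning, the sketch below runs tabular Q-learning on a toy single-machine problem that minimizes total weighted completion time. The job data, reward definition, and hyperparameters are assumptions; the Space Shuttle payload and AGV formulations in the paper are far richer.

```python
import numpy as np
from itertools import permutations

# Toy single-machine scheduling: each job is (processing_time, weight); the
# agent picks the next job and receives the negative weighted completion time.
jobs = [(3, 2), (1, 5), (4, 1), (2, 4)]
n, DONE = len(jobs), (1 << len(jobs)) - 1
rng = np.random.default_rng(5)
Q = {}   # key: (bitmask of already-scheduled jobs, index of next job)

def step(state, elapsed, job):
    p, w = jobs[job]
    return state | (1 << job), elapsed + p, -w * (elapsed + p)

def greedy(state, avail):
    return max(avail, key=lambda j: Q.get((state, j), 0.0))

for episode in range(5000):
    state, elapsed = 0, 0
    eps = max(0.05, 1.0 - episode / 4000)            # decaying exploration
    while state != DONE:
        avail = [j for j in range(n) if not state & (1 << j)]
        a = int(rng.choice(avail)) if rng.random() < eps else greedy(state, avail)
        nxt, nxt_elapsed, r = step(state, elapsed, a)
        nxt_avail = [j for j in range(n) if not nxt & (1 << j)]
        target = r + (max(Q.get((nxt, j), 0.0) for j in nxt_avail) if nxt_avail else 0.0)
        old = Q.get((state, a), 0.0)
        Q[(state, a)] = old + 0.1 * (target - old)   # undiscounted Q-learning update
        state, elapsed = nxt, nxt_elapsed

def total_cost(order):
    t = cost = 0
    for j in order:
        t += jobs[j][0]
        cost += jobs[j][1] * t
    return cost

state, learned = 0, []
while state != DONE:
    avail = [j for j in range(n) if not state & (1 << j)]
    a = greedy(state, avail)
    learned.append(a)
    state, _, _ = step(state, 0, a)   # only the transition matters for the order
print("learned:", total_cost(learned), "brute force:", min(total_cost(p) for p in permutations(range(n))))
```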
Exact semi-separation of variables in waveguides with non-planar boundaries
NASA Astrophysics Data System (ADS)
Athanassoulis, G. A.; Papoutsellis, Ch. E.
2017-05-01
Series expansions of unknown fields Φ =∑φn Zn in elongated waveguides are commonly used in acoustics, optics, geophysics, water waves and other applications, in the context of coupled-mode theories (CMTs). The transverse functions Zn are determined by solving local Sturm-Liouville problems (reference waveguides). In most cases, the boundary conditions assigned to Zn cannot be compatible with the physical boundary conditions of Φ, leading to slowly convergent series, and rendering CMTs mild-slope approximations. In the present paper, the heuristic approach introduced in Athanassoulis & Belibassakis (Athanassoulis & Belibassakis 1999 J. Fluid Mech. 389, 275-301) is generalized and justified. It is proved that an appropriately enhanced series expansion becomes an exact, rapidly convergent representation of the field Φ, valid for any smooth, non-planar boundaries and any smooth enough Φ. This series expansion can be differentiated termwise everywhere in the domain, including the boundaries, implementing an exact semi-separation of variables for non-separable domains. The efficiency of the method is illustrated by solving a boundary value problem for the Laplace equation, and computing the corresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations for nonlinear water waves. The present method provides accurate results with only a few modes for quite general domains. Extensions to general waveguides are also discussed.
A review of estimation of distribution algorithms in bioinformatics
Armañanzas, Rubén; Inza, Iñaki; Santana, Roberto; Saeys, Yvan; Flores, Jose Luis; Lozano, Jose Antonio; Peer, Yves Van de; Blanco, Rosa; Robles, Víctor; Bielza, Concha; Larrañaga, Pedro
2008-01-01
Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics problems. Genetic algorithms, the most well-known and representative evolutionary search technique, have been the subject of the major part of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that make use of EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain. PMID:18822112
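A minimal EDA can be sketched with the univariate marginal distribution algorithm (UMDA): estimate independent bitwise probabilities from the selected individuals, then resample. The weighted OneMax objective and all parameters below are assumptions standing in for, say, a feature-selection score.

```python
import numpy as np

def umda(fitness, n_bits, pop_size=100, n_select=30, generations=40, seed=0):
    """Univariate Marginal Distribution Algorithm: learn independent bitwise
    probabilities from the best individuals, then sample a new population."""
    rng = np.random.default_rng(seed)
    probs = np.full(n_bits, 0.5)
    best, best_fit = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < probs).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(fits)[-n_select:]]
        probs = np.clip(elite.mean(axis=0), 0.02, 0.98)   # keep some diversity
        if fits.max() > best_fit:
            best_fit, best = fits.max(), pop[fits.argmax()].copy()
    return best, best_fit

# Toy usage: a weighted OneMax objective; the optimum is the all-ones string.
weights = np.arange(1, 21)
best, score = umda(lambda bits: float(bits @ weights), n_bits=20)
print(best, score)   # should approach score 210
```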
Caplan, M; Weissberg, R P; Grober, J S; Sivo, P J; Grady, K; Jacoby, C
1992-02-01
This study assessed the impact of school-based social competence training on skills, social adjustment, and self-reported substance use of 282 sixth and seventh graders. Training emphasized broad-based competence promotion in conjunction with domain-specific application to substance abuse prevention. The 20-session program comprised six units: stress management, self-esteem, problem solving, substances and health information, assertiveness, and social networks. Findings indicated positive training effects on Ss' skills in handling interpersonal problems and coping with anxiety. Teacher ratings revealed improvements in Ss' constructive conflict resolution with peers, impulse control, and popularity. Self-report ratings indicated gains in problem-solving efficacy. Results suggest some preventive impact on self-reported substance use intentions and excessive alcohol use. In general, the program was found to be beneficial for both inner-city and suburban students.
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as forward model. For a high computational efficiency, the gradient of objective function is calculated using an adjoint equation technique. SQP algorithm is employed to solve the inverse problem and the regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posed problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
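A generic stand-in for this kind of gradient-based reconstruction is sketched below: recover a bounded coefficient vector from noisy linear measurements with scipy's SLSQP, an analytic gradient, and a first-difference smoothness penalty in place of the GGMRF prior. The linear forward model and all parameters are assumptions; the TD-RTE forward solve and adjoint gradient are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_params, n_meas = 20, 60
A = rng.normal(size=(n_meas, n_params))                  # stand-in forward model
mu_true = np.where(np.arange(n_params) < 10, 0.2, 0.8)   # piecewise-constant truth
y = A @ mu_true + 0.05 * rng.normal(size=n_meas)

lam = 1.0
D = np.eye(n_params, k=1)[:-1] - np.eye(n_params)[:-1]   # first-difference operator

def objective(mu):
    r, d = A @ mu - y, D @ mu
    return 0.5 * r @ r + 0.5 * lam * d @ d

def gradient(mu):
    return A.T @ (A @ mu - y) + lam * D.T @ (D @ mu)

res = minimize(objective, x0=np.full(n_params, 0.5), jac=gradient,
               method="SLSQP", bounds=[(0.0, 1.0)] * n_params)
print(np.round(res.x, 2))
```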
Grid adaption based on modified anisotropic diffusion equations formulated in the parametic domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagmeijer, R.
1994-11-01
A new grid-adaption algorithm for problems in computational fluid dynamics is presented. The basic equations are derived from a variational problem formulated in the parametric domain of the mapping that defines the existing grid. Modification of the basic equations provides desirable properties in boundary layers. The resulting modified anisotropic diffusion equations are solved for the computational coordinates as functions of the parametric coordinates and these functions are numerically inverted. Numerical examples show that the algorithm is robust, that shocks and boundary layers are well-resolved on the adapted grid, and that the flow solution becomes a globally smooth function of the computational coordinates.
Team formation and breakup in multiagent systems
NASA Astrophysics Data System (ADS)
Rao, Venkatesh Guru
The goal of this dissertation is to pose and solve problems involving team formation and breakup in two specific multiagent domains: formation travel and space-based interferometric observatories. The methodology employed comprises elements drawn from control theory, scheduling theory and artificial intelligence (AI). The original contribution of the work comprises three elements. The first contribution, the partitioned state-space approach is a technique for formulating and solving co-ordinated motion problem using calculus of variations techniques. The approach is applied to obtain optimal two-agent formation travel trajectories on graphs. The second contribution is the class of MixTeam algorithms, a class of team dispatchers that extends classical dispatching by accommodating team formation and breakup and exploration/exploitation learning. The algorithms are applied to observation scheduling and constellation geometry design for interferometric space telescopes. The use of feedback control for team scheduling is also demonstrated with these algorithms. The third contribution is the analysis of the optimality properties of greedy, or myopic, decision-making for a simple class of team dispatching problems. This analysis represents a first step towards the complete analysis of complex team schedulers such as the MixTeam algorithms. The contributions represent an extension to the literature on team dynamics in control theory. The broad conclusions that emerge from this research are that greedy or myopic decision-making strategies for teams perform well when specific parameters in the domain are weakly affected by an agent's actions, and that intelligent systems require a closer integration of domain knowledge in decision-making functions.
NASA Astrophysics Data System (ADS)
Kit Luk, Chuen; Chesi, Graziano
2015-11-01
This paper addresses the estimation of the domain of attraction for discrete-time nonlinear systems where the vector field is subject to changes. First, the paper considers the case of switched systems, where the vector field is allowed to arbitrarily switch among the elements of a finite family. Second, the paper considers the case of hybrid systems, where the state space is partitioned into several regions described by polynomial inequalities, and the vector field is defined on each region independently from the other ones. In both cases, the problem consists of computing the largest sublevel set of a Lyapunov function included in the domain of attraction. An approach is proposed for solving this problem based on convex programming, which provides a guaranteed inner estimate of the sought sublevel set. The conservatism of the provided estimate can be decreased by increasing the size of the optimisation problem. Some numerical examples illustrate the proposed approach.
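A crude, non-certified illustration of the underlying question is sketched below: for two switched nonlinear maps and a quadratic Lyapunov candidate, bisect for the largest sampled level c such that V decreases on {V <= c} under both modes. This sampling check is only a stand-in for intuition; it is not the convex-programming procedure with guaranteed inner estimates proposed in the paper.

```python
import numpy as np

# Two switched nonlinear discrete-time modes, a quadratic candidate
# V(x) = ||x||^2, and a sampled bisection for the largest level c such that
# V(f_i(x)) < V(x) holds on the sublevel set {V <= c} for both modes.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
f1 = lambda x: 0.5 * (1.0 + x @ x) * x
f2 = lambda x: 0.6 * (1.0 + x @ x) * (R @ x)
V = lambda x: float(x @ x)

def decreases_on_sublevel(c, n_samples=20000, seed=8):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-np.sqrt(c), np.sqrt(c), size=(n_samples, 2))
    mask = np.array([1e-9 < V(x) <= c for x in pts])
    return all(V(f(x)) < V(x) for x in pts[mask] for f in (f1, f2))

lo, hi = 0.0, 4.0                 # the decrease condition fails near level 4
for _ in range(25):               # bisection on the candidate level c
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if decreases_on_sublevel(mid) else (lo, mid)
print(round(lo, 3))               # sampled estimate (analytically ~0.667 here)
```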
Numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains.
Li, Hongwei; Guo, Yue
2017-12-01
The numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains is considered by applying the artificial boundary method in this paper. In order to design the local absorbing boundary conditions for the coupled nonlinear Schrödinger equations, we generalize the unified approach previously proposed [J. Zhang et al., Phys. Rev. E 78, 026709 (2008)PLEEE81539-375510.1103/PhysRevE.78.026709]. Based on the methodology underlying the unified approach, the original problem is split into two parts, linear and nonlinear terms, and we then achieve a one-way operator to approximate the linear term to make the wave out-going, and finally we combine the one-way operator with the nonlinear term to derive the local absorbing boundary conditions. Then we reduce the original problem into an initial boundary value problem on the bounded domain, which can be solved by the finite difference method. The stability of the reduced problem is also analyzed by introducing some auxiliary variables. Ample numerical examples are presented to verify the accuracy and effectiveness of our proposed method.
Two-dimensional frequency-domain acoustic full-waveform inversion with rugged topography
NASA Astrophysics Data System (ADS)
Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Li, Kun; Zhao, Dong-Dong; Huang, Xing-Xing
2015-09-01
We studied finite-element-method-based two-dimensional frequency-domain acoustic FWI under rugged topography conditions. The exponential attenuation boundary condition suitable for rugged topography is proposed to solve the cutoff boundary problem as well as to consider the requirement of using the same subdivision grid in joint multifrequency inversion. The proposed method introduces the attenuation factor, and by adjusting it, acoustic waves are sufficiently attenuated in the attenuation layer to minimize the cutoff boundary effect. Based on the law of exponential attenuation, expressions for computing the attenuation factor and the thickness of attenuation layers are derived for different frequencies. In multifrequency-domain FWI, the conjugate gradient method is used to solve equations in the Gauss-Newton algorithm and thus minimize the computation cost in calculating the Hessian matrix. In addition, the effect of initial model selection and frequency combination on FWI is analyzed. Examples using numerical simulations and FWI calculations are used to verify the efficiency of the proposed method.
Adaptive eigenspace method for inverse scattering problems in the frequency domain
NASA Astrophysics Data System (ADS)
Grote, Marcus J.; Kray, Marie; Nahum, Uri
2017-02-01
A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.
NASA Astrophysics Data System (ADS)
Bristol, S.
2013-12-01
Through the evolution of search technologies since the web was born, the problem of finding something of interest has been somewhat solved in many domains. If I want to purchase a pair of hiking boots or some other commercial product, there aren't many steps I need to go through before I can make a purchase. I might have to take time to find the best price, and I might want to do some reading to determine if the product is most suitable to my needs. But there aren't very many search and discovery steps between me and hitting the trail to break in my new boots. So, why haven't we solved this problem yet for scientific data, and why are we still talking about it? Is it that a dataset, database, data service, or some form of scientific data are all so different from a pair of shoes? Is it that there's often no direct profit motive associated with scientific data, or at least not on the same level with consumer products? Is it that government and academic institutions, the major producers of scientific data, aren't as technically adept as big commercial companies who have solved this problem in other domains? Or is it that maybe we aren't thinking about the problem in the right way, and we think our domain of scientific data is fundamentally different from all the places where the problem seems to be in the process of being solved? We definitely have issues of scale and complexity to deal with. A pair of shoes only has so many possible descriptive parameters, many of which can be shared across a wide array of other types of products. There are delivery issues as well. For those cases where we do have well established data centers and repositories, they are not exactly the same type of operation as a network of product distribution centers for a major online retailer. But perhaps there are similarities and lessons learned that can be effectively exploited to accelerate our ability to solve this problem for scientific data so that we are not struggling trying to answer the same questions in another 5-10 years. One concept that may be useful is that of the wholesale and retail dynamic. Products move through society and are found readily and often when they are available through many different outlets. The process of efficiently distributing products has developed where wholesalers do a great job on the backend with all the logistics of getting the product ready and out to market, and retailers do a great job of getting product directly into the hands of the consumer. So, how does this model play out with scientific data? How might it help us in examining the potentially flawed notion of 'one-stop shop' catalogs and portals and other things we've tried to make scientific data more discoverable? We may also benefit from determining where we are simply outgunned in developing a certain capability, adopting something that is already established, and shifting focus to the hard problems not yet solved. Recent developments to establish a profile for datasets as part of the schema.org methodology adopted by commercial search providers shows great promise as a way to distinguish scientific data from all the other possible search results. Rather than determining that big public search engines are a priori not suited to the discovery of scientific data, how might we experiment with applying those capabilities to our domain and so take advantage of all the other things the world is figuring out about needles in haystacks?
An accurate, fast, and scalable solver for high-frequency wave propagation
NASA Astrophysics Data System (ADS)
Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.
2017-12-01
In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low-order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.
A RADIATION TRANSFER SOLVER FOR ATHENA USING SHORT CHARACTERISTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Shane W.; Stone, James M.; Jiang Yanfei
2012-03-01
We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well-known and well-tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.
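The scattering iteration mentioned in the abstract above can be sketched in its simplest, unaccelerated form. The Python example below performs plain Lambda iteration for a 1D grey two-stream problem with source function S = (1 - eps) J + eps B, using a first-order short-characteristics formal solution along two rays; the optical-depth grid, the photon destruction probability eps, and the two-stream quadrature are illustrative assumptions, and the module described in the abstract uses accelerated Lambda iteration on multidimensional Cartesian domains rather than this toy setup.

```python
import numpy as np

# Plain Lambda iteration for a 1D grey slab, two-stream quadrature (mu = +/- 1/sqrt(3)).
# Source function S = (1 - eps) * J + eps * B with B = 1 (assumed).
ntau, eps, mu = 81, 1e-2, 1.0 / np.sqrt(3.0)
tau = np.logspace(-3, 2, ntau)            # optical depth, surface -> bottom
dtau = np.diff(tau)
B = np.ones(ntau)
S = B.copy()

for it in range(200):
    Iup = np.zeros(ntau)                  # upward intensity  (mu > 0)
    Idn = np.zeros(ntau)                  # downward intensity (mu < 0), zero at the surface
    Iup[-1] = B[-1]                       # thermalized lower boundary
    for i in range(ntau - 2, -1, -1):     # formal solution upward, bottom -> surface
        dt_ = dtau[i] / mu
        Sm = 0.5 * (S[i] + S[i + 1])      # first order: S taken constant on the cell
        Iup[i] = Iup[i + 1] * np.exp(-dt_) + Sm * (1.0 - np.exp(-dt_))
    for i in range(1, ntau):              # formal solution downward, surface -> bottom
        dt_ = dtau[i - 1] / mu
        Sm = 0.5 * (S[i] + S[i - 1])
        Idn[i] = Idn[i - 1] * np.exp(-dt_) + Sm * (1.0 - np.exp(-dt_))
    J = 0.5 * (Iup + Idn)                 # mean intensity
    S_new = (1.0 - eps) * J + eps * B
    if np.max(np.abs(S_new - S) / S) < 1e-6:
        break
    S = S_new

# Classical two-stream result: the surface source function tends to sqrt(eps) * B,
# but plain Lambda iteration approaches it very slowly -- the motivation for ALI.
print(it, S[0], np.sqrt(eps))
```

Running the sketch shows the surface value creeping toward sqrt(eps) without reaching it in a reasonable number of sweeps, which is exactly the convergence problem that accelerated Lambda iteration is designed to remove.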
Gust Acoustics Computation with a Space-Time CE/SE Parallel 3D Solver
NASA Technical Reports Server (NTRS)
Wang, X. Y.; Himansu, A.; Chang, S. C.; Jorgenson, P. C. E.; Reddy, D. R. (Technical Monitor)
2002-01-01
The benchmark Problem 2 in Category 3 of the Third Computational Aero-Acoustics (CAA) Workshop is solved using the space-time conservation element and solution element (CE/SE) method. This problem concerns the unsteady response of an isolated finite-span swept flat-plate airfoil bounded by two parallel walls to an incident gust. The acoustic field generated by the interaction of the gust with the flat-plate airfoil is computed by solving the 3D (three-dimensional) Euler equations in the time domain using a parallel version of a 3D CE/SE solver. The effect of the gust orientation on the far-field directivity is studied. Numerical solutions are presented and compared with analytical solutions, showing a reasonable agreement.
Creating a Complex Measurement Model Using Evidence Centered Design.
ERIC Educational Resources Information Center
Williamson, David M.; Bauer, Malcom; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.
In computer-based simulations meant to support learning, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function efficiently as an assessment, a simulation system must also be able to evoke and interpret observable evidence about targeted…
CASE: A Configurable Argumentation Support Engine
ERIC Educational Resources Information Center
Scheuer, O.; McLaren, B. M.
2013-01-01
One of the main challenges in tapping the full potential of modern educational software is to devise mechanisms to automatically analyze and adaptively support students' problem solving and learning. A number of such approaches have been developed to teach argumentation skills in domains as diverse as science, the Law, and ethics. Yet,…
ERIC Educational Resources Information Center
Cacioppo, John T.; Semin, Gun R.; Berntson, Gary G.
2004-01-01
Scientific realism holds that scientific theories are approximations of universal truths about reality, whereas scientific instrumentalism posits that scientific theories are intellectual structures that provide adequate predictions of what is observed and useful frameworks for answering questions and solving problems in a given domain. These…
Thinking in Patterns to Solve Multiplication, Division, and Fraction Problems in Second Grade
ERIC Educational Resources Information Center
Stokes, Patricia D.
2016-01-01
Experts think in patterns and structures using the specific "language" of their domains. For mathematicians, these patterns and structures are represented by numbers, symbols and their relationships (Stokes, 2014a). To determine whether elementary students in the United States could learn to think in mathematical patterns to solve…
Assessing Students' Reflective Responses to Chemistry-related Learning Tasks
ERIC Educational Resources Information Center
Tan, Kok Siang; Goh, Ngoh Khang
2008-01-01
Key to renewed concern on the affective domain of education (Fensham, 2007) and on school graduates' readiness for a world of work (DEST, 2008; WDA, 2006) is the student's inclination-to-reflect when engaged in a learning or problem-solving task. Reflective learning is not new to education (Dewey, 1933; Ellis, 2001). Since the…
Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.
ERIC Educational Resources Information Center
Skopec, Eric Wm.
Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…
ReACT!: An Interactive Educational Tool for AI Planning for Robotics
ERIC Educational Resources Information Center
Dogmus, Zeynep; Erdem, Esra; Patogulu, Volkan
2015-01-01
This paper presents ReAct!, an interactive educational tool for artificial intelligence (AI) planning for robotics. ReAct! enables students to describe robots' actions and change in dynamic domains without first having to know about the syntactic and semantic details of the underlying formalism, and to solve planning problems using…
ERIC Educational Resources Information Center
Zacharis, Nick Z.
2009-01-01
Rapid technological advances in the areas of telecommunications, computer technology and the Internet have made available to tutors and learners in the domain of online learning, a broad array of tools that provide the possibility to facilitate and enhance learning to higher levels of critical reflective thinking. Computer mediated communication…
Multilevel semantic analysis and problem-solving in the flight domain
NASA Technical Reports Server (NTRS)
Chien, R. T.; Chen, D. C.; Ho, W. P. C.; Pan, Y. C.
1982-01-01
A computer-based cockpit system capable of assisting the pilot in such important tasks as monitoring, diagnosis, and trend analysis was developed. The system is properly organized and is endowed with a knowledge base so that it enhances the pilot's control over the aircraft while simultaneously reducing his workload.
Learning Skills; Review and Domain Chart.
ERIC Educational Resources Information Center
Clark, N. Cecil; Thompson, Faith E.
A major goal of the elementary and secondary schools is to help each person become an efficient and autonomous learner. Outlined in this report are skills abstracted from the literature on such topics as verbal learning, problem solving, study habits, and behavior modification. The learner-oriented skills are presented so that they may be…
ERIC Educational Resources Information Center
Elvira, Quincy; Beausaert, Simon; Segers, Mien; Imants, Jeroen; Dankbaar, Ben
2016-01-01
Development of professional expertise is the process of continually transforming the repertoire of knowledge, skills and attitudes necessary to solve domain-specific problems which begins in late secondary education and continues during higher education and throughout professional life. One educational goal is to train students to think more like…
ERIC Educational Resources Information Center
Pulz, Michael; Lusti, Markus
PROJECTTUTOR is an intelligent tutoring system that enhances conventional classroom instruction by teaching problem solving in project planning. The domain knowledge covered by the expert module is divided into three functions. Structural analysis, identifies the activities that make up the project, time analysis, computes the earliest and latest…
A four stage approach for ontology-based health information system design.
Kuziemsky, Craig E; Lau, Francis
2010-11-01
To describe and illustrate a four stage methodological approach to capture user knowledge in a biomedical domain area, use that knowledge to design an ontology, and then implement and evaluate the ontology as a health information system (HIS). A hybrid participatory design-grounded theory (GT-PD) method was used to obtain data and code them for ontology development. Prototyping was used to implement the ontology as a computer-based tool. Usability testing evaluated the computer-based tool. An empirically derived domain ontology and set of three problem-solving approaches were developed as a formalized model of the concepts and categories from the GT coding. The ontology and problem-solving approaches were used to design and implement a HIS that tested favorably in usability testing. The four stage approach illustrated in this paper is useful for designing and implementing an ontology as the basis for a HIS. The approach extends existing ontology development methodologies by providing an empirical basis for theory incorporated into ontology design. Copyright © 2010 Elsevier B.V. All rights reserved.
Towards Inferring Protein Interactions: Challenges and Solutions
NASA Astrophysics Data System (ADS)
Zhang, Ya; Zha, Hongyuan; Chu, Chao-Hsien; Ji, Xiang
2006-12-01
Discovering interacting proteins has been an essential part of functional genomics. However, existing experimental techniques only uncover a small portion of any interactome. Furthermore, these data often have a very high false rate. By conceptualizing the interactions at the domain level, we provide a more abstract representation of the interactome, which also facilitates the discovery of unobserved protein-protein interactions. Although several domain-based approaches have been proposed to predict protein-protein interactions, they usually assume that domain interactions are independent of each other for the convenience of computational modeling. A new framework to predict protein interactions is proposed in this paper, where no such assumption is made about domain interactions. Protein interactions may be the result of multiple domain interactions which are dependent on each other. A conjunctive normal form representation is used to capture the relationships between protein interactions and domain interactions. The problem of interaction inference is then modeled as a constraint satisfiability problem and solved via linear programming. Experimental results on a combined yeast data set have demonstrated the robustness and the accuracy of the proposed algorithm. Moreover, we also map some predicted interacting domains to three-dimensional structures of protein complexes to show the validity of our predictions.
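A minimal flavour of casting interaction inference as a constraint problem solved by linear programming is sketched below with SciPy. The toy proteins, the candidate domain pairs, and the set-cover-style objective (explain every observed protein interaction with as few domain interactions as possible) are hypothetical simplifications, not the conjunctive-normal-form formulation used by the authors.

```python
import numpy as np
from scipy.optimize import linprog

# Candidate domain-domain interactions (the unknowns), indexed 0..3.
domain_pairs = [("D1", "D2"), ("D1", "D3"), ("D2", "D4"), ("D3", "D4")]

# Each observed interacting protein pair must be explained by at least one of
# the domain pairs its proteins carry (indices into domain_pairs, all assumed).
observed = [
    [0, 1],      # protein pair A-B could be explained by (D1,D2) or (D1,D3)
    [1, 3],      # protein pair A-C: (D1,D3) or (D3,D4)
    [2],         # protein pair B-D: only (D2,D4)
]

n = len(domain_pairs)
c = np.ones(n)                         # prefer the sparsest explanation
A_ub, b_ub = [], []
for expl in observed:                  # sum_{j in expl} x_j >= 1  ->  -sum <= -1
    row = np.zeros(n)
    row[expl] = -1.0
    A_ub.append(row)
    b_ub.append(-1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, 1)] * n)
for pair, x in zip(domain_pairs, res.x):
    print(pair, round(float(x), 3))
```

Values near 1 in the LP relaxation flag domain pairs that are needed to explain the observations; the authors' formulation additionally lets multiple mutually dependent domain interactions jointly explain a single protein interaction.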
Exploring quantum computing application to satellite data assimilation
NASA Astrophysics Data System (ADS)
Cheung, S.; Zhang, S. Q.
2015-12-01
This is an exploratory work on the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. The new quantum computer opens up a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case in the form of a quadratic programming optimization problem. We find a transformation of the problem that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be tested by solving QUBO instances defined on the Chimera graphs of the quantum computer.
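To make the QUBO framework mentioned above concrete, the short sketch below builds a tiny Quadratic Unconstrained Binary Optimization instance (a constrained selection problem with the constraint folded in as a quadratic penalty) and solves it by exhaustive enumeration; the cost values and the penalty weight are purely illustrative and are unrelated to the authors' data assimilation cost function or the Chimera-graph embedding.

```python
import itertools
import numpy as np

# Toy problem: pick exactly 2 of 4 items minimizing a quadratic cost.
# The constraint sum(x) = 2 is folded into the objective as P * (sum(x) - 2)^2,
# giving an unconstrained quadratic in binary variables, i.e. a QUBO.
lin = np.array([1.0, 2.0, 0.5, 1.5])            # linear costs (assumed)
quad = np.array([[0.0, 0.3, 0.0, 0.0],          # pairwise costs (assumed)
                 [0.0, 0.0, 0.4, 0.0],
                 [0.0, 0.0, 0.0, 0.2],
                 [0.0, 0.0, 0.0, 0.0]])
P = 10.0                                        # penalty weight (assumed)

def qubo_energy(x):
    x = np.asarray(x, dtype=float)
    return lin @ x + x @ quad @ x + P * (x.sum() - 2.0) ** 2

best = min(itertools.product([0, 1], repeat=4), key=qubo_energy)
print("best bitstring:", best, "energy:", qubo_energy(best))
```

A quantum annealer would search the same energy landscape natively in hardware; the brute-force loop here just demonstrates what the QUBO objective looks like once the constraints have been converted to penalties.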
Virtual manufacturing in reality
NASA Astrophysics Data System (ADS)
Papstel, Jyri; Saks, Alo
2000-10-01
SMEs play an important role in the manufacturing industry, but from time to time they lack the resources to complete a particular order on time. A number of systems have been introduced to produce digital information that supports product and process development activities. The main problem is the lack of opportunity for direct data transfer between design system modules when a temporary extension of design capacity (virtuality) is needed or when integrated concurrent product development principles are to be implemented. Planning experience in the field is also weakly exploited. The concept of virtual manufacturing is a supporting idea for solving this problem. At the same time, a number of practical problems must be solved, such as information conformity, data transfer, and the acceptance of unified technological concepts. The present paper describes proposed ways to solve these practical problems of virtual manufacturing. The general objective is to introduce a knowledge-based CAPP system as the missing module for virtual manufacturing in the selected product domain. A surface-centered planning concept based on STEP-based modeling principles and a knowledge-based process planning methodology will be used to reach the objectives. The expected result is a planning module supplied with design data through direct access, together with a supporting advising environment. A mould-producing SME will serve as the test basis.
Using domain decomposition in the multigrid NAS parallel benchmark on the Fujitsu VPP500
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J.C.H.; Lung, H.; Katsumata, Y.
1995-12-01
In this paper, we demonstrate how domain decomposition can be applied to the multigrid algorithm to convert the code for MPP architectures. We also discuss the performance and scalability of this implementation on the new product line of Fujitsu's vector parallel computer, the VPP500. This computer has Fujitsu's well-known vector processor as the PE, each rated at 1.6 GFLOPS. The high-speed crossbar network rated at 800 MB/s provides the inter-PE communication. The results show that physical domain decomposition is the best way to solve MG problems on the VPP500.
A Space Affine Matching Approach to fMRI Time Series Analysis.
Chen, Liang; Zhang, Weishi; Liu, Hongbo; Feng, Shigang; Chen, C L Philip; Wang, Huili
2016-07-01
For fMRI time series analysis, an important challenge is to overcome the potential delay between the hemodynamic response signal and the cognitive stimuli signal, namely the same frequency but different phase (SFDP) problem. In this paper, a novel space affine matching feature is presented by introducing time-domain and frequency-domain features. The time-domain feature is used to discern different stimuli, while the frequency-domain feature eliminates the delay. We then propose a space affine matching (SAM) algorithm to match fMRI time series using our affine feature, in which a normal vector is estimated by gradient descent to find the optimal time series match. The experimental results illustrate that the SAM algorithm is insensitive to the delay between the hemodynamic response signal and the cognitive stimuli signal. Our approach significantly outperforms the GLM method when such a delay is present. The approach can help solve the SFDP problem in fMRI time series matching and is thus of great promise for revealing brain dynamics.
NASA Technical Reports Server (NTRS)
Lyusternik, L. A.
1980-01-01
The mathematics involved in numerically solving the plane boundary value problem for the Laplace equation by the grid method is developed. The approximate solution of a boundary value problem for the Laplace equation on a domain by the grid method consists of finding values of u at the grid corners that satisfy the difference equation at the internal corners (u = Du) and certain boundary conditions at the boundary corners.
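A minimal version of the grid method described above, written in Python as a Jacobi sweep of the 5-point difference equation (each interior value becomes the average of its four neighbours) on a square domain with Dirichlet boundary values, is sketched below; the domain, the boundary data, and the stopping tolerance are illustrative choices rather than anything taken from the report.

```python
import numpy as np

# Jacobi iteration for the 5-point discrete Laplace equation on a unit square.
n = 51
u = np.zeros((n, n))
u[0, :] = 1.0                      # assumed Dirichlet data: u = 1 on one edge, 0 elsewhere

for _ in range(20000):
    new = u.copy()
    # interior corners: u equals the average of its four neighbours
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    if np.max(np.abs(new - u)) < 1e-6:
        break
    u = new

print("value at the centre of the square:", u[n // 2, n // 2])
```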
On the solution of the Helmholtz equation on regions with corners.
Serkh, Kirill; Rokhlin, Vladimir
2016-08-16
In this paper we solve several boundary value problems for the Helmholtz equation on polygonal domains. We observe that when the problems are formulated as the boundary integral equations of potential theory, the solutions are representable by series of appropriately chosen Bessel functions. In addition to being analytically perspicuous, the resulting expressions lend themselves to the construction of accurate and efficient numerical algorithms. The results are illustrated by a number of numerical examples.
On the solution of the Helmholtz equation on regions with corners
Serkh, Kirill; Rokhlin, Vladimir
2016-01-01
In this paper we solve several boundary value problems for the Helmholtz equation on polygonal domains. We observe that when the problems are formulated as the boundary integral equations of potential theory, the solutions are representable by series of appropriately chosen Bessel functions. In addition to being analytically perspicuous, the resulting expressions lend themselves to the construction of accurate and efficient numerical algorithms. The results are illustrated by a number of numerical examples. PMID:27482110
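For a concrete instance of the kind of corner expansion referred to in the two records above, consider a single corner of interior angle alpha with homogeneous Dirichlet data on both edges; separation of variables in polar coordinates centred at the corner gives a series of Bessel functions of generally non-integer order. This is a textbook local representation written out here for orientation only, under the stated Dirichlet assumption; the boundary-integral representations in the paper are more general.

```latex
% Local expansion of a Helmholtz solution near a corner of opening angle \alpha,
% with u = 0 on both edges \theta = 0 and \theta = \alpha (illustrative case):
(\Delta + k^{2})\,u = 0, \qquad
u(r,\theta) \;=\; \sum_{n=1}^{\infty} c_{n}\,
J_{\,n\pi/\alpha}(k r)\,\sin\!\Big(\frac{n\pi\theta}{\alpha}\Big).
```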
Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.
Mall, Susmita; Chakraverty, S
2016-08-01
A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, where the hidden layer is replaced by an expansion block of the input pattern using Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used for modifying the network parameters and minimizing the computed error function. The Van der Pol-Duffing and Duffing oscillator equations may not be solvable exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life applications of the Van der Pol-Duffing oscillator equation, namely extracting the features of early mechanical failure signals and weak signal detection, are solved using the proposed HeNN method. The HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method. Computed results are depicted in terms of graphs. After training the HeNN model, we may use it as a black box to obtain numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.
A fast numerical method for ideal fluid flow in domains with multiple stirrers
NASA Astrophysics Data System (ADS)
Nasser, Mohamed M. S.; Green, Christopher C.
2018-03-01
A collection of arbitrarily-shaped solid objects, each moving at a constant speed, can be used to mix or stir ideal fluid, and can give rise to interesting flow patterns. Assuming these systems of fluid stirrers are two-dimensional, the mathematical problem of resolving the flow field—given a particular distribution of any finite number of stirrers of specified shape and speed—can be formulated as a Riemann-Hilbert (R-H) problem. We show that this R-H problem can be solved numerically using a fast and accurate algorithm for any finite number of stirrers based around a boundary integral equation with the generalized Neumann kernel. Various systems of fluid stirrers are considered, and our numerical scheme is shown to handle highly multiply connected domains (i.e. systems of many fluid stirrers) with minimal computational expense.
Developing close combat behaviors for simulated soldiers using genetic programming techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, Richard J.; Schaller, Mark J.
2003-10-01
Genetic programming is a powerful methodology for automatically producing solutions to problems in a variety of domains. It has been used successfully to develop behaviors for RoboCup soccer players and simple combat agents. We will attempt to use genetic programming to solve a problem in the domain of strategic combat, keeping in mind the end goal of developing sophisticated behaviors for compound defense and infiltration. The simplified problem at hand is that of two armed agents in a small room, containing obstacles, fighting against each other for survival. The base case and three changes are considered: a memory of positions using stacks, context-dependent genetic programming, and strongly typed genetic programming. Our work demonstrates slight improvements from the first two techniques, and no significant improvement from the last.
A two-level stochastic collocation method for semilinear elliptic equations with random coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Luoping; Zheng, Bin; Lin, Guang
In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_{P}$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximated solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
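The cubic dynamic program referred to in this abstract can be sketched for the simplest case of linear interpolation between retained samples. In the Python sketch below, cost(y, i, j) is the squared error committed by interpolating linearly between retained samples i and j, and the recursion selects the best set of m retained samples; the toy signal, the error measure, and the interpolation rule are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def cost(y, i, j):
    """Squared error of linearly interpolating y between retained samples i and j."""
    if j - i < 2:
        return 0.0
    t = np.arange(i, j + 1)
    interp = y[i] + (y[j] - y[i]) * (t - i) / (j - i)
    return float(np.sum((y[i:j + 1] - interp) ** 2))

def best_subset(y, m):
    """Choose m samples (endpoints always kept) minimizing total interpolation error."""
    n = len(y)
    E = np.full((n, m), np.inf)          # E[j, k]: best error up to sample j, k+1 samples kept
    prev = np.zeros((n, m), dtype=int)
    E[0, 0] = 0.0
    for k in range(1, m):
        for j in range(k, n):
            for i in range(k - 1, j):
                e = E[i, k - 1] + cost(y, i, j)
                if e < E[j, k]:
                    E[j, k], prev[j, k] = e, i
    kept, j = [n - 1], n - 1             # backtrack from the last sample
    for k in range(m - 1, 0, -1):
        j = prev[j, k]
        kept.append(j)
    return kept[::-1], E[n - 1, m - 1]

t = np.linspace(0.0, 1.0, 120)
signal = np.exp(-((t - 0.5) / 0.02) ** 2) + 0.1 * np.sin(12 * np.pi * t)   # toy ECG-like trace
kept, err = best_subset(signal, m=12)
print("kept sample indices:", kept)
print("total squared interpolation error:", round(err, 4))
```

The triple loop is what makes the algorithm cubic in the signal length; in return, the retained samples are provably the best possible for the chosen error measure, which is the point the abstract makes against heuristic time-domain sample selection.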
Traffic Flow Density Distribution Based on FEM
NASA Astrophysics Data System (ADS)
Ma, Jing; Cui, Jianming
The analysis of normal traffic flow usually relies on static or dynamic models based on fluid mechanics for numerical analysis. However, such an approach requires extensive modeling and data handling, and its accuracy is not high. The Finite Element Method (FEM), a product of the combination of modern mathematics, mechanics, and computer technology, has been widely applied in domains such as engineering. Based on existing traffic flow theory, ITS, and developments in FEM, a FEM-based simulation approach for the problems arising in traffic flow is put forward. Using this approach with existing Finite Element Analysis (FEA) software, traffic flow is simulated and analyzed with fluid mechanics and dynamics. The heavy data-processing burden of manual modeling and numerical analysis is reduced, and the fidelity of the simulation is enhanced.
Plan-graph Based Heuristics for Conformant Probabilistic Planning
NASA Technical Reports Server (NTRS)
Ramakrishnan, Salesh; Pollack, Martha E.; Smith, David E.
2004-01-01
In this paper, we introduce plan-graph based heuristics to solve a variation of the conformant probabilistic planning (CPP) problem. In many real-world problems, it is the case that the sensors are unreliable or require too many resources to provide knowledge about the environment. These domains are better modeled as conformant planning problems. POMDP-based techniques are currently the most successful approach for solving CPP but have the limitation of state-space explosion. Recent advances in deterministic and conformant planning have shown that plan-graphs can be used to enhance the performance significantly. We show that this enhancement can also be translated to CPP. We describe our process for developing the plan-graph heuristics and estimating the probability of a partial plan. We compare the performance of our planner PVHPOP when used with different heuristics. We also perform a comparison with a POMDP solver to show over an order of magnitude improvement in performance.
Creativity and Ethics: The Relationship of Creative and Ethical Problem-Solving.
Mumford, Michael D; Waples, Ethan P; Antes, Alison L; Brown, Ryan P; Connelly, Shane; Murphy, Stephen T; Devenport, Lynn D
2010-02-01
Students of creativity have long been interested in the relationship between creativity and deviant behaviors such as criminality, mental disease, and unethical behavior. In the present study we wished to examine the relationship between creative thinking skills and ethical decision-making among scientists. Accordingly, 258 doctoral students in the health, biological, and social sciences were asked to complete a measure of creative processing skills (e.g., problem definition, conceptual combination, idea generation) and a measure of ethical decision-making examining four domains, data management, study conduct, professional practices, and business practices. It was found that ethical decision-making in all four of these areas was related to creative problem-solving processes with late cycle processes (e.g., idea generation and solution monitoring) proving particularly important. The implications of these findings for understanding the relationship between creative and deviant thought are discussed.
Creativity and Ethics: The Relationship of Creative and Ethical Problem-Solving
Mumford, Michael D.; Waples, Ethan P.; Antes, Alison L.; Brown, Ryan P.; Connelly, Shane; Murphy, Stephen T.; Devenport, Lynn D.
2010-01-01
Students of creativity have long been interested in the relationship between creativity and deviant behaviors such as criminality, mental disease, and unethical behavior. In the present study we wished to examine the relationship between creative thinking skills and ethical decision-making among scientists. Accordingly, 258 doctoral students in the health, biological, and social sciences were asked to complete a measure of creative processing skills (e.g., problem definition, conceptual combination, idea generation) and a measure of ethical decision-making examining four domains, data management, study conduct, professional practices, and business practices. It was found that ethical decision-making in all four of these areas was related to creative problem-solving processes with late cycle processes (e.g., idea generation and solution monitoring) proving particularly important. The implications of these findings for understanding the relationship between creative and deviant thought are discussed. PMID:21057603
Hybrid fully nonlinear BEM-LBM numerical wave tank with applications in naval hydrodynamics
NASA Astrophysics Data System (ADS)
Mivehchi, Amin; Grilli, Stephan T.; Dahl, Jason M.; O'Reilly, Chris M.; Harris, Jeffrey C.; Kuznetsov, Konstantin; Janssen, Christian F.
2017-11-01
The simulation of the complex dynamic response of ships in waves is typically modeled by nonlinear potential flow theory, usually solved with a higher-order BEM. In some cases, the viscous/turbulent effects around a structure and in its wake need to be accurately modeled to capture the salient physics of the problem. Here, we present a fully 3D model based on a hybrid perturbation method. In this method, the velocity and pressure are decomposed as the sum of an inviscid flow and a viscous perturbation. The inviscid part is solved over the whole domain using a BEM based on cubic spline elements. These inviscid results are then used to force a near-field perturbation solution on a smaller domain, which is solved with a NS model based on LBM-LES and implemented on GPUs. The BEM solution for large grids is greatly accelerated by using a parallelized FMM, which is efficiently implemented on large and small clusters, yielding an almost linear scaling with the number of unknowns. A new representation of corners and edges is implemented, which improves the global accuracy of the BEM solver, particularly for moving boundaries. We present model results and the recent improvements of the BEM, alongside results of the hybrid model, for applications to naval hydrodynamics problems. Office of Naval Research Grants N000141310687 and N000141612970.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, and the vector e is often referred to as "noise". Because of the ill-posedness inherent in the inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution and making it less sensitive to perturbations of the ill-posed problem. The illustrated results show that the TGSVD has many advantages, such as higher precision, better adaptability and noise immunity, compared with the TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and handling the ill-posed problems that arise when the method is used to identify moving forces on a bridge.
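The effect of truncating small singular values when solving a noisy Ax = b can be shown quickly. The Python example below uses a plain truncated SVD (the special case of TGSVD with the identity as regularization matrix L) on an ill-conditioned toy system; the matrix, the noise level, and the truncation parameter k are illustrative and unrelated to the bridge-deck model in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned toy system (Hilbert-like matrix) standing in for the MFI operator A.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-4 * rng.standard_normal(n)     # measured response with noise

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Truncated SVD solution keeping only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

x_naive = np.linalg.solve(A, b)                    # no regularization
x_tsvd = tsvd_solve(k=6)                           # assumed truncation parameter
print("error, naive solve :", np.linalg.norm(x_naive - x_true))
print("error, TSVD (k=6)  :", np.linalg.norm(x_tsvd - x_true))
```

The naive solve amplifies the noise through the tiny singular values, while the truncated solution stays close to the true vector; choosing L different from the identity, as in TGSVD, lets the truncation penalize roughness rather than plain magnitude.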
Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun
1997-01-01
Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength are employed in the numerical computation, high-quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high-quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.
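The resolution rule of thumb quoted above (eight or more mesh points per wavelength for the DRP scheme) translates directly into a bound on mesh spacing. A one-line check, with illustrative values assumed for the sound speed and the highest frequency of interest:

```python
c, f_max, ppw = 340.0, 2000.0, 8        # sound speed [m/s], max frequency [Hz], points per wavelength (assumed values)
dx_max = c / (f_max * ppw)              # largest admissible mesh spacing
print(f"shortest wavelength {c / f_max:.4f} m -> mesh spacing <= {dx_max:.4f} m")
```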
SIG: a general-purpose signal processing program. User's manual. Revision 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lager, D.; Azevedo, S.
1985-05-09
SIG is a general-purpose signal processing, analysis, and display program. Its main purpose is to perform manipulations on time-domain and frequency-domain signals. The manual contains a complete description of the SIG program from the user's standpoint. A brief exercise in using SIG is shown. Complete descriptions are given of each command in the SIG core. General information about the SIG structure, command processor, and graphics options is provided. An example usage of SIG for solving a problem is developed, and error message formats are briefly discussed. (LEW)
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible navier-stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.
The unified acoustic and aerodynamic prediction theory of advanced propellers in the time domain
NASA Technical Reports Server (NTRS)
Farassat, F.
1984-01-01
This paper presents some numerical results for the noise of an advanced supersonic propeller based on a formulation published last year. This formulation was derived to overcome some of the practical numerical difficulties associated with other acoustic formulations. The approach is based on the Ffowcs Williams-Hawkings equation and time domain analysis is used. To illustrate the method of solution, a model problem in three dimensions and based on the Laplace equation is solved. A brief sketch of derivation of the acoustic formula is then given. Another model problem is used to verify validity of the acoustic formulation. A recent singular integral equation for aerodynamic applications derived from the acoustic formula is also presented here.
NASA Technical Reports Server (NTRS)
Sharma, Naveen
1992-01-01
In this paper we briefly describe a combined symbolic and numeric approach for solving mathematical models on parallel computers. An experimental software system, PIER, is being developed in Common Lisp to synthesize computationally intensive and domain formulation dependent phases of finite element analysis (FEA) solution methods. Quantities for domain formulation like shape functions, element stiffness matrices, etc., are automatically derived using symbolic mathematical computations. The problem specific information and derived formulae are then used to generate (parallel) numerical code for FEA solution steps. A constructive approach to specify a numerical program design is taken. The code generator compiles application oriented input specifications into (parallel) FORTRAN77 routines with the help of built-in knowledge of the particular problem, numerical solution methods and the target computer.
Trans-dimensional Bayesian inversion of airborne electromagnetic data for 2D conductivity profiles
NASA Astrophysics Data System (ADS)
Hawkins, Rhys; Brodie, Ross C.; Sambridge, Malcolm
2018-02-01
This paper presents the application of a novel trans-dimensional sampling approach to a time domain airborne electromagnetic (AEM) inverse problem to solve for plausible conductivities of the subsurface. Geophysical inverse field problems, such as time domain AEM, are well known to have a large degree of non-uniqueness. Common least-squares optimisation approaches fail to take this into account and provide a single solution with linearised estimates of uncertainty that can result in overly optimistic appraisal of the conductivity of the subsurface. In this new non-linear approach, the spatial complexity of a 2D profile is controlled directly by the data. By examining an ensemble of proposed conductivity profiles it accommodates non-uniqueness and provides more robust estimates of uncertainties.
Transfer learning for visual categorization: a survey.
Shao, Ling; Zhu, Fan; Li, Xuelong
2015-05-01
Regular machine learning and data mining techniques study the training data for future inferences under a major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human-labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring it for use in target tasks. In recent years, with transfer learning being applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drift in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition.
ERIC Educational Resources Information Center
Demetriadis, S. N.; Papadopoulos, P. M.; Stamelos, I. G.; Fischer, F.
2008-01-01
This study investigates the hypothesis that students' learning and problem-solving performance in ill-structured domains can be improved, if elaborative question prompts are used to activate students' context-generating cognitive processes, during case study. Two groups of students used a web-based learning environment to criss-cross and study…
ERIC Educational Resources Information Center
Watts, Logan L.; Steele, Logan M.; Song, Hairong
2017-01-01
Prior studies have demonstrated inconsistent findings with regard to the relationship between need for cognition and creativity. In our study, measurement issues were explored as a potential source of these inconsistencies. Structural equation modeling techniques were used to examine the factor structure underlying the 18-item need for cognition…
How To Create Complex Measurement Models: A Case Study of Principled Assessment Design.
ERIC Educational Resources Information Center
Bauer, Malcolm; Williamson, David M.; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.
In computer-based simulations, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function effectively as an assessment, a simulation system must additionally be able to evoke and interpret observable evidence about targeted knowledge in a manner…
The Role of Human Intelligence in Computer-Based Intelligent Tutoring Systems.
ERIC Educational Resources Information Center
Epstein, Kenneth; Hillegeist, Eleanor
An Intelligent Tutoring System (ITS) consists of an expert problem-solving program in a subject domain, a tutoring model capable of remediation or primary instruction, and an assessment model that monitors student understanding. The Geometry Proof Tutor (GPT) is an ITS which was developed at Carnegie Mellon University and field tested in the…
ERIC Educational Resources Information Center
Bhagat, Kaushal Kumar; Spector, J. Michael
2017-01-01
Much of the focus on learning technologies has been on structuring innovative learning experiences and on managing distance and hybrid learning environments. This article focuses on the use of technology as an important formative assessment and feedback tool. The rationale for this focus is based on prior research findings that suggest that timely…
Methodology to estimate the relative pressure field from noisy experimental velocity data
NASA Astrophysics Data System (ADS)
Bolin, C. D.; Raguin, L. G.
2008-11-01
The determination of intravascular pressure fields is important to the characterization of cardiovascular pathology. We present a two-stage method that solves the inverse problem of estimating the relative pressure field from noisy velocity fields measured by phase contrast magnetic resonance imaging (PC-MRI) on an irregular domain with limited spatial resolution, and includes a filter for the experimental noise. For the pressure calculation, the Poisson pressure equation is solved by embedding the irregular flow domain into a regular domain. To lessen the propagation of the noise inherent to the velocity measurements, three filters - a median filter and two physics-based filters - are evaluated using a 2-D Couette flow. The two physics-based filters outperform the median filter for the estimation of the relative pressure field for realistic signal-to-noise ratios (SNR = 5 to 30). The most accurate pressure field results from a filter that applies in a least-squares sense three constraints simultaneously: consistency between measured and filtered velocity fields, divergence-free and additional smoothness conditions. This filter leads to a 5-fold gain in accuracy for the estimated relative pressure field compared to without noise filtering, in conditions consistent with PC-MRI of the carotid artery: SNR = 5, 20 x 20 discretized flow domain (25 X 25 computational domain).
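The relative-pressure step in the method above ultimately reduces to a Poisson solve driven by velocity derivatives. As a rough sketch, for steady 2D incompressible inviscid flow the divergence of the momentum equation gives laplacian(p) = -rho * (u_x^2 + 2 u_y v_x + v_y^2), which the Python snippet below solves on a regular grid with a Jacobi sweep; the analytic velocity field, the homogeneous Dirichlet pressure boundary (instead of the embedded irregular domain and the noise filters of the paper), and the grid size are all illustrative assumptions.

```python
import numpy as np

# Analytic 2D divergence-free test field (Taylor-Green style), standing in for PC-MRI data.
n, rho = 64, 1000.0
h = 1.0 / (n - 1)
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = np.sin(np.pi * x) * np.cos(np.pi * y)
v = -np.cos(np.pi * x) * np.sin(np.pi * y)

# Source term of the pressure Poisson equation for steady incompressible flow:
#   laplacian(p) = -rho * (u_x**2 + 2 * u_y * v_x + v_y**2)
u_x, u_y = np.gradient(u, h, h)
v_x, v_y = np.gradient(v, h, h)
rhs = -rho * (u_x**2 + 2.0 * u_y * v_x + v_y**2)

p = np.zeros((n, n))                      # assumed p = 0 on the boundary (simplification)
for _ in range(20000):
    p_new = p.copy()
    p_new[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                p[1:-1, :-2] + p[1:-1, 2:] - h**2 * rhs[1:-1, 1:-1])
    if np.max(np.abs(p_new - p)) < 1e-8:
        break
    p = p_new

print("relative pressure range:", p.min(), p.max())
```

Only pressure differences are meaningful here, which is why the abstract speaks of a relative pressure field; the paper's contribution lies in the noise filtering and the embedding of the irregular vascular domain, neither of which is reproduced in this sketch.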
Review of analytical models to stream depletion induced by pumping: Guide to model selection
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yang, Tao; Yeh, Hund-Der
2018-06-01
Stream depletion due to groundwater extraction by wells may impact aquatic ecosystems in streams, cause conflict over water rights, and lead to contamination of water from irrigation wells near polluted streams. A variety of studies have been devoted to addressing the issue of stream depletion, but a fundamental framework for analytical modeling developed from the aquifer viewpoint has not yet been established. This review shows key differences in existing models regarding the stream depletion problem and provides some guidelines for choosing a proper analytical model for the problem of concern. We introduce commonly used models composed of flow equations, boundary conditions, well representations and stream treatments for confined, unconfined, and leaky aquifers. They are briefly evaluated and classified according to six categories: aquifer type, flow dimension, aquifer domain, stream representation, stream channel geometry, and well type. Finally, we recommend promising analytical approaches that can solve the stream depletion problem in realistic settings with aquifer heterogeneity and irregular stream channel geometry. Several unsolved stream depletion problems are also identified.
Steady state solutions to dynamically loaded periodic structures
NASA Technical Reports Server (NTRS)
Kalinowski, A. J.
1980-01-01
The general problem of solving for the steady state (time domain) dynamic response (i.e., NASTRAN rigid format-8) of a general elastic periodic structure subject to a phase difference loading of the type encountered in traveling wave propagation problems was studied. Two types of structural configurations were considered; in the first type, the structure has a repeating pattern over a span that is long enough to be considered, for all practical purposes, as infinite; in the second type, the structure has structural rotational symmetry in the circumferential direction. The theory and a corresponding set of DMAP instructions which permits the NASTRAN user to automatically alter the rigid format-8 sequence to solve the intended class of problems are presented. Final results are recovered as with any ordinary rigid format-8 solution, except that the results are only printed for the typical periodic segment of the structure. A simple demonstration problem having a known exact solution is used to illustrate the implementation of the procedure.
Numerical formulation for the prediction of solid/liquid change of a binary alloy
NASA Technical Reports Server (NTRS)
Schneider, G. E.; Tiwari, S. N.
1990-01-01
A computational model is presented for the prediction of solid/liquid phase change energy transport including the influence of free convection fluid flow in the liquid phase region. The computational model considers the velocity components of all non-liquid phase change material control volumes to be zero but fully solves the coupled mass-momentum problem within the liquid region. The thermal energy model includes the entire domain and uses an enthalpy like model and a recently developed method for handling the phase change interface nonlinearity. Convergence studies are performed and comparisons made with experimental data for two different problem specifications. The convergence studies indicate that grid independence was achieved and the comparison with experimental data indicates excellent quantitative prediction of the melt fraction evolution. Qualitative data is also provided in the form of velocity vector diagrams and isotherm plots for selected times in the evolution of both problems. The computational costs incurred are quite low by comparison with previous efforts on solving these problems.
General-purpose abductive algorithm for interpretation
NASA Astrophysics Data System (ADS)
Fox, Richard K.; Hartigan, Julie
1996-11-01
Abduction, inference to the best explanation, is an information-processing task that is useful for solving interpretation problems such as diagnosis, medical test analysis, legal reasoning, theory evaluation, and perception. The task is a generative one in which an explanation comprising domain hypotheses is assembled and used to account for given findings. The explanation is taken to be an interpretation as to why the findings have arisen within the given situation. Research in abduction has led to the development of a general-purpose computational strategy which has been demonstrated on all of the above types of problems. This abduction strategy can be performed in layers so that different types of knowledge can come together in deriving an explanation at different levels of description. Further, the abduction strategy is tractable and offers a very useful tradeoff between confidence in the explanation and completeness of the explanation. This paper will describe this computational strategy for abduction and demonstrate its usefulness for perceptual problems by examining problem-solving systems in speech recognition and natural language understanding.
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis
2014-01-01
The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.
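A rough sketch of fitting data with a global-support basis driven by a metaheuristic. It is not the published algorithm: an ordinary polynomial basis replaces the paper's more general global-support curves, a plain least-squares energy replaces the weighted Bayesian functional, and Gaussian random steps stand in for Lévy flights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data to fit (illustrative; not from the paper).
x = np.linspace(-1.0, 1.0, 60)
y = np.sin(2.5 * x) + 0.05 * rng.standard_normal(x.size)

# Global-support basis: ordinary polynomials up to degree 5
# (the paper considers more general global-support curves).
B = np.vander(x, 6, increasing=True)          # shape (npts, 6)

def energy(coeffs):
    """Plain least-squares fitting energy; the paper optimizes a weighted
    Bayesian energy functional instead."""
    return np.sum((B @ coeffs - y) ** 2)

# Simplified cuckoo-search-style optimizer: Gaussian steps stand in for the
# Levy flights of the published algorithm, and pa is the abandonment fraction.
def cuckoo_search(dim, n_nests=25, pa=0.25, iters=300, step=0.2):
    nests = rng.standard_normal((n_nests, dim))
    fitness = np.array([energy(n) for n in nests])
    for _ in range(iters):
        # New solutions via random steps around each nest, each compared
        # against a randomly chosen nest and kept if better.
        for i in range(n_nests):
            trial = nests[i] + step * rng.standard_normal(dim)
            j = rng.integers(n_nests)
            if energy(trial) < fitness[j]:
                nests[j], fitness[j] = trial, energy(trial)
        # Abandon a fraction pa of the worst nests and re-seed them randomly.
        worst = np.argsort(fitness)[-int(pa * n_nests):]
        nests[worst] = rng.standard_normal((worst.size, dim))
        fitness[worst] = [energy(n) for n in nests[worst]]
    best = np.argmin(fitness)
    return nests[best], fitness[best]

coeffs, err = cuckoo_search(dim=B.shape[1])
print("best fitting energy found:", round(float(err), 4))
```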
Angle-domain inverse scattering migration/inversion in isotropic media
NASA Astrophysics Data System (ADS)
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inverting a generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wave field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. The typical GRT-style true-amplitude inversion procedure contains an amplitude compensation step after the weighted migration, in which the result is divided by an illumination-associated matrix whose elements are integrals over scattering angles. It is intuitive, to some extent, to perform the generalized linear inversion and the inversion of the GRT together through this process for direct inversion. However, carrying out such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to illumination that appears in the conventional case. We solve the linearized integral equation for the combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then solve the resulting over-determined problem for each parameter in the combination by a standard optimization procedure. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method obtains more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate the method's effectiveness and practicality.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, Ajit; Khalil, Mohammad; Pettit, Chris
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Parameter estimation problems for distributed systems using a multigrid method
NASA Technical Reports Server (NTRS)
Ta'asan, Shlomo; Dutt, Pravir
1990-01-01
The problem of estimating spatially varying coefficients of partial differential equations is considered from observation of the solution and of the right hand side of the equation. It is assumed that the observations are distributed in the domain and that enough observations are given. A method of discretization and an efficient multigrid method for solving the resulting discrete systems are described. Numerical results are presented for estimation of coefficients in an elliptic and a parabolic partial differential equation.
NASA Technical Reports Server (NTRS)
Smith, Philip J.; Giffin, Walter C.; Rockwell, Thomas H.; Thomas, Mark
1986-01-01
Twenty pilots with instrument flight ratings were asked to perform a fault-diagnosis task for which they had relevant domain knowledge. The pilots were asked to think out loud as they requested and interpreted information. Performances were then modeled as the activation and use of a frame system. Cognitive biases, memory distortions and losses, and failures to correctly diagnose the problem were studied in the context of this frame system model.
Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.
2017-10-01
The development of 3D boundary element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a varied integration step for computing the quadrature coefficients was applied, exploiting the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic cantilever was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by this modified scheme shows that computational efficiency is improved by using the combined formulas.
An efficient method for solving the steady Euler equations
NASA Technical Reports Server (NTRS)
Liou, M.-S.
1986-01-01
An efficient numerical procedure for solving a set of nonlinear partial differential equations, the steady Euler equations, using Newton's linearization procedure is presented. A theorem indicating quadratic convergence for the case of differential equations is demonstrated. A condition for the domain of quadratic convergence Omega(2) is obtained which indicates that whether an approximation lies in Omega(2) depends on the rate of change and the smoothness of the flow vectors, and hence is problem-dependent. The choice of spatial differencing, of particular importance for the present method, is discussed. The treatment of boundary conditions is addressed, and the system of equations resulting from the foregoing analysis is summarized and solution strategies are discussed. The convergence of calculated solutions is demonstrated by comparing them with exact solutions to one- and two-dimensional problems.
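A small illustration of Newton's linearization and the quadratic convergence the abstract discusses, applied to an arbitrary smooth 2x2 nonlinear system rather than the steady Euler equations.

```python
import numpy as np

# Newton's linearization on a small nonlinear system, to illustrate the
# quadratic convergence discussed in the abstract (this is not the steady
# Euler solver itself; the system below is an arbitrary smooth example).
def F(u):
    x, y = u
    return np.array([x**2 + y**2 - 4.0,
                     np.exp(x) + y - 1.0])

def J(u):
    x, y = u
    return np.array([[2.0 * x, 2.0 * y],
                     [np.exp(x), 1.0]])

u = np.array([1.0, -1.0])            # initial guess inside the convergence domain
for it in range(8):
    du = np.linalg.solve(J(u), -F(u))
    u += du
    print(f"iter {it}: |F(u)| = {np.linalg.norm(F(u)):.3e}")
# The residual norm roughly squares from one iteration to the next once the
# iterate enters the domain of quadratic convergence.
```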
Rodriguez-Jimenez, R; Dompablo, M; Bagney, A; Santabárbara, J; Aparicio, A I; Torio, I; Moreno-Ortega, M; Lopez-Anton, R; Lobo, A; Kern, R S; Green, M F; Jimenez-Arriero, M A; Santos, J L; Nuechterlein, K H; Palomo, T
2015-12-01
The MATRICS Consensus Cognitive Battery (MCCB) was administered to 293 schizophrenia outpatients and 210 community residents in Spain. Our first objective was to identify the age- and gender-corrected MCCB cognitive profile of patients with schizophrenia. The profile of schizophrenia patients showed deficits when compared to controls across the seven MCCB domains. Reasoning and Problem Solving and Social Cognition were the least impaired, while Visual Learning and Verbal Learning showed the greatest deficits. Our second objective was to study the effects on cognitive functioning of age and gender, in addition to diagnosis. Diagnosis was found to have the greatest effect on cognition (Cohen's d>0.8 for all MCCB domains); age and gender also had effects on cognitive functioning, although to a lesser degree (with age usually having slightly larger effects than gender). The effects of age were apparent in all domains (with better performance in younger subjects), except for Social Cognition. Gender had effects on Attention/Vigilance, Working Memory, Reasoning and Problem Solving (better performance in males), and Social Cognition (better performance in females). No interaction effects were found between diagnosis and age, or between diagnosis and gender. This lack of interactions suggests that age and gender effects are not different in patients and controls. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shlivinski, A., E-mail: amirshli@ee.bgu.ac.il; Lomakin, V., E-mail: vlomakin@eng.ucsd.edu
2016-03-01
Scattering or coupling of an electromagnetic beam field at a surface discontinuity separating two homogeneous or inhomogeneous media with different propagation characteristics is formulated using surface integral equations, which are solved by the Method of Moments with the aid of the Gabor-based Gaussian window frame set of basis and testing functions. The application of the Gaussian window frame provides (i) a mathematically exact and robust tool for spatial-spectral phase-space formulation and analysis of the problem; (ii) a system of linear equations in a transmission-line like form relating mode-like wave objects of one medium with mode-like wave objects of the second medium; (iii) furthermore, an appropriate setting of the frame parameters yields mode-like wave objects that blend plane wave properties (as if solving in the spectral domain) with Green's function properties (as if solving in the spatial domain); and (iv) a representation of the scattered field with Gaussian-beam propagators that may be used in many large (in terms of wavelengths) systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu
In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles
Crawford, Broderick; Paredes, Fernando; Norero, Enrique
2015-01-01
The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus own. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods. PMID:26078751
NASA Astrophysics Data System (ADS)
Martin, Bradley; Fornberg, Bengt
2017-04-01
In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles.
Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique
2015-01-01
The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus own. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods.
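For concreteness, a compact 9x9 solver that exploits the same pairwise "all different" structure through candidate filtering and fail-first backtracking; this is not the hybrid alldifferent-tabu search algorithm of the paper, only a baseline that shows how the constraints prune the search.

```python
# Compact 9x9 Sudoku sketch: backtracking with alldifferent-style candidate
# filtering on rows, columns, and subgrids. This is NOT the hybrid
# tabu-search algorithm of the paper; it only illustrates how the pairwise
# "all different" constraints prune the search.
def candidates(grid, r, c):
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return [v for v in range(1, 10) if v not in used]

def solve(grid):
    # Choose the empty cell with the fewest candidates (fail-first heuristic).
    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True
    r, c = min(empties, key=lambda rc: len(candidates(grid, *rc)))
    for v in candidates(grid, r, c):
        grid[r][c] = v
        if solve(grid):
            return True
        grid[r][c] = 0
    return False

puzzle = [
    [5,3,0,0,7,0,0,0,0],
    [6,0,0,1,9,5,0,0,0],
    [0,9,8,0,0,0,0,6,0],
    [8,0,0,0,6,0,0,0,3],
    [4,0,0,8,0,3,0,0,1],
    [7,0,0,0,2,0,0,0,6],
    [0,6,0,0,0,0,2,8,0],
    [0,0,0,4,1,9,0,0,5],
    [0,0,0,0,8,0,0,7,9],
]
if solve(puzzle):
    print("\n".join(" ".join(map(str, row)) for row in puzzle))
```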
Discussion summary: Fictitious domain methods
NASA Technical Reports Server (NTRS)
Glowinski, Rowland; Rodrigue, Garry
1991-01-01
Fictitious domain methods are constructed in the following manner: suppose a partial differential equation is to be solved on an open bounded set, Omega, in 2-D or 3-D. Let R be a rectangular domain containing the closure of Omega. The partial differential equation is first solved on R. Using the solution on R, the solution of the equation on Omega is then recovered by some procedure. The advantage of the fictitious domain method is that in many cases the solution of a partial differential equation on a rectangular region is easier to compute than on a nonrectangular region. Fictitious domain methods for solving elliptic PDEs on general regions are also very efficient when used on a parallel computer. The reason is that one can use the many domain decomposition methods that are available for solving the PDE on the fictitious rectangular region. The discussion on fictitious domain methods began with a talk by R. Glowinski in which he gave some examples of a variational approach to fictitious domain methods for solving the Helmholtz and Navier-Stokes equations.
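A minimal 1-D illustration of the embedding idea, using a crude penalty formulation rather than the variational or Lagrange-multiplier fictitious-domain methods discussed in the talk; the physical domain, right-hand side, and penalty value are all made-up choices.

```python
import numpy as np

# Minimal 1-D fictitious-domain illustration: solve -u'' = 1 on the physical
# domain (0, 0.5) with u(0) = u(0.5) = 0 by embedding it in the fictitious
# interval (0, 1) and penalizing u outside the physical domain.
# (Glowinski's variational fictitious-domain methods are more sophisticated;
# this penalty version only illustrates the embedding idea.)
n = 200                       # interior grid points on the embedding domain
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

penalty = 1.0e8               # large coefficient enforcing u ~ 0 outside
inside = x <= 0.5

# Standard second-difference operator on (0, 1) with homogeneous Dirichlet BCs.
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
A += np.diag(np.where(inside, 0.0, penalty))
b = np.where(inside, 1.0, 0.0)

u = np.linalg.solve(A, b)

# Exact solution on the physical domain: u(x) = x(0.5 - x)/2, max = 1/32.
err = np.max(np.abs(u[inside] - x[inside] * (0.5 - x[inside]) / 2.0))
print(f"max error on the physical domain: {err:.2e}")
```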
A framework for qualitative reasoning about solid objects
NASA Technical Reports Server (NTRS)
Davis, E.
1987-01-01
Predicting the behavior of a qualitatively described system of solid objects requires a combination of geometrical, temporal, and physical reasoning. Methods based upon formulating and solving differential equations are not adequate for robust prediction, since the behavior of a system over extended time may be much simpler than its behavior over local time. A first-order logic, in which one can state simple physical problems and derive their solution deductively, without recourse to solving the differential equations, is discussed. This logic is substantially more expressive and powerful than any previous AI representational system in this domain.
Optimel: Software for selecting the optimal method
NASA Astrophysics Data System (ADS)
Popova, Olga; Popov, Boris; Romanov, Dmitry; Evseeva, Marina
Optimel, software for selecting the optimal method, automates the process of selecting a solution method from the domain of optimization methods. Optimel offers practical novelty: it saves time and money in exploratory studies whose objective is to select the most appropriate method for solving an optimization problem. Optimel also offers theoretical novelty, because a new method of knowledge structuring was used to obtain the domain. The Optimel domain covers an extended set of methods and their properties, which allows identifying the level of scientific studies, enhancing the user's expertise, expanding the prospects the user faces, and opening up new research objectives. Optimel can be used both in scientific research institutes and in educational institutions.
NASA Astrophysics Data System (ADS)
Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin
2017-12-01
We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these types of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complements elimination.
Optimization of Operations Resources via Discrete Event Simulation Modeling
NASA Technical Reports Server (NTRS)
Joshi, B.; Morris, D.; White, N.; Unal, R.
1996-01-01
The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solving such optimization problems with integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
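A toy version of the approach: a genetic algorithm over integer-valued resource levels with a noisy, simulation-like cost function. The cost model below is a stand-in invented for illustration, not the authors' discrete event simulation of launch-vehicle operations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy genetic algorithm over integer-valued decision variables with a noisy
# (simulation-like) objective. The "simulation" below is a stand-in cost
# model, not the authors' launch-vehicle discrete event simulation.
N_RESOURCES, LOW, HIGH = 5, 1, 10

def simulate_cost(levels, noise=0.5):
    """Pretend discrete-event simulation: cost of resources plus a penalty for
    queueing delay when resource levels are too low, plus stochastic noise."""
    resource_cost = np.sum(levels)
    delay_penalty = np.sum(np.maximum(0, 6 - levels) ** 2)
    return resource_cost + 3.0 * delay_penalty + noise * rng.standard_normal()

def ga(pop_size=30, generations=60, mut_rate=0.2):
    pop = rng.integers(LOW, HIGH + 1, size=(pop_size, N_RESOURCES))
    for _ in range(generations):
        fitness = np.array([simulate_cost(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_RESOURCES)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(N_RESOURCES) < mut_rate   # integer mutation
            child[mask] = rng.integers(LOW, HIGH + 1, size=mask.sum())
            children.append(child)
        pop = np.array(children)
    fitness = np.array([simulate_cost(ind) for ind in pop])
    best = pop[np.argmin(fitness)]
    return best, fitness.min()

best, cost = ga()
print("best resource levels:", best, "estimated cost:", round(float(cost), 2))
```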
NASA Technical Reports Server (NTRS)
Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.
1994-01-01
Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting, and flame propagation. The directional solidification of semiconductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method, and a Picard-type iterative scheme.
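The core building block, Chebyshev pseudospectral collocation on a single domain, can be sketched in a few lines using Trefethen's differentiation matrix; the paper's solver adds domain decomposition, preconditioned iterations, and a moving melt-solid interface, none of which appear here.

```python
import numpy as np

# Minimal single-domain Chebyshev pseudospectral collocation sketch:
# solve u'' = exp(x) on (-1, 1) with u(-1) = u(1) = 0.
# (The paper combines such collocation with domain decomposition, a
# preconditioned iterative solver, and a moving melt/solid interface.)
def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 24
D, x = cheb(N)
D2 = D @ D

# Impose Dirichlet conditions by solving only on interior collocation points.
A = D2[1:N, 1:N]
f = np.exp(x[1:N])
u = np.zeros(N + 1)
u[1:N] = np.linalg.solve(A, f)

exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
print(f"max error: {np.max(np.abs(u - exact)):.2e}")   # spectral accuracy
```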
A fast numerical method for the valuation of American lookback put options
NASA Astrophysics Data System (ADS)
Song, Haiming; Zhang, Qi; Zhang, Ran
2015-10-01
A fast and efficient numerical method is proposed and analyzed for the valuation of American lookback options. The American lookback option pricing problem is essentially a two-dimensional unbounded nonlinear parabolic problem. We reformulate it into a two-dimensional parabolic linear complementarity problem (LCP) on an unbounded domain. The numeraire transformation and a domain truncation technique are employed to convert the two-dimensional unbounded LCP into a one-dimensional bounded one. Furthermore, the variational inequality (VI) form corresponding to the one-dimensional bounded LCP is derived. The resulting bounded VI is discretized by a finite element method. Meanwhile, the stability of the semi-discrete solution and the symmetric positive definiteness of the fully discrete matrix are established for the bounded VI. The discretized VI related to options is solved by a projection and contraction method. Numerical experiments are conducted to test the performance of the proposed method.
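The discrete problem the paper arrives at is a linear complementarity problem/variational inequality. As a generic illustration of that structure, the sketch below solves a tiny LCP with projected Gauss-Seidel; the matrix is an arbitrary SPD example rather than an option-pricing discretization, and the paper itself uses a projection and contraction method.

```python
import numpy as np

# Generic projected Gauss-Seidel sketch for a linear complementarity problem:
#   find z >= 0 with w = M z + q >= 0 and z^T w = 0.
# The paper solves the option-pricing LCP/VI with a projection-and-contraction
# method; this simpler iteration only illustrates the complementarity structure.
def projected_gauss_seidel(M, q, iters=200):
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])   # Gauss-Seidel step projected onto z >= 0
    return z

# Small SPD example (illustrative only).
M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
q = np.array([-1.0, 2.0, -3.0])

z = projected_gauss_seidel(M, q)
w = M @ z + q
print("z =", np.round(z, 4))
print("w =", np.round(w, 4))
print("complementarity z.w =", round(float(z @ w), 8))
```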
Accurate boundary conditions for exterior problems in gas dynamics
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.
1988-01-01
The numerical solution of exterior problems is typically accomplished by introducing an artificial, far field boundary and solving the equations on a truncated domain. For hyperbolic systems, boundary conditions at this boundary are often derived by imposing a principle of no reflection. However, waves with spherical symmetry in gas dynamics satisfy equations where incoming and outgoing Riemann variables are coupled. This suggests that natural reflections may be important. A reflecting boundary condition is proposed based on an asymptotic solution of the far field equations. Nonlinear energy estimates are obtained for the truncated problem and numerical experiments presented to validate the theory.
Accurate boundary conditions for exterior problems in gas dynamics
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.
1988-01-01
The numerical solution of exterior problems is typically accomplished by introducing an artificial, far-field boundary and solving the equations on a truncated domain. For hyperbolic systems, boundary conditions at this boundary are often derived by imposing a principle of no reflection. However, waves with spherical symmetry in gas dynamics satisfy equations where incoming and outgoing Riemann variables are coupled. This suggests that natural reflections may be important. A reflecting boundary condition is proposed based on an asymptotic solution of the far-field equations. Nonlinear energy estimates are obtained for the truncated problem and numerical experiments presented to validate the theory.
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral norms, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed to develop methods and algorithms for solving the reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
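For the classical Euclidean case, Tikhonov regularized least squares reduces to an augmented ordinary least-squares problem, sketched below; the polyhedral-norm generalizations the paper studies lead instead to mathematical programming problems and are not covered by this sketch. The test matrix and regularization values are arbitrary.

```python
import numpy as np

# Classical (Euclidean-norm) Tikhonov regularized least squares,
#   x_lambda = argmin ||A x - b||_2^2 + lambda^2 ||x||_2^2,
# solved via the equivalent augmented system [A; lambda I] x = [b; 0].
# The paper's generalizations to polyhedral (e.g. l1 / l-infinity) norms lead
# to mathematical programming problems and are not covered by this sketch.
rng = np.random.default_rng(0)

m, n = 20, 10
A = rng.standard_normal((m, n))
A[:, -1] = A[:, 0] + 1e-6 * rng.standard_normal(m)   # nearly dependent column -> ill-conditioned
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def tikhonov(A, b, lam):
    m, n = A.shape
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

for lam in (0.0, 1e-3, 1e-1):
    x = tikhonov(A, b, lam)
    print(f"lambda={lam:7.1e}  ||x||={np.linalg.norm(x):10.2f}  "
          f"residual={np.linalg.norm(A @ x - b):.4f}")
```

The run with lambda = 0 shows the large-norm solution produced by the ill-conditioned matrix; increasing lambda trades a slightly larger residual for a much smaller solution norm.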
Numerical Inverse Scattering for the Toda Lattice
NASA Astrophysics Data System (ADS)
Bilman, Deniz; Trogdon, Thomas
2017-06-01
We present a method to compute the inverse scattering transform (IST) for the famed Toda lattice by solving the associated Riemann-Hilbert (RH) problem numerically. Deformations for the RH problem are incorporated so that the IST can be evaluated in O(1) operations for arbitrary points in the (n, t)-domain, including short- and long-time regimes. No time-stepping is required to compute the solution because (n, t) appear as parameters in the associated RH problem. The solution of the Toda lattice is computed in long-time asymptotic regions where the asymptotics are not known rigorously.
Philip, Bobby; Berrill, Mark A.; Allu, Srikanth; ...
2015-01-26
We describe an efficient and nonlinearly consistent parallel solution methodology for solving coupled nonlinear thermal transport problems that occur in nuclear reactor applications over hundreds of individual 3D physical subdomains. Efficiency is obtained by leveraging knowledge of the physical domains, the physics on individual domains, and the couplings between them for preconditioning within a Jacobian Free Newton Krylov method. Details of the computational infrastructure that enabled this work, namely the open source Advanced Multi-Physics (AMP) package developed by the authors, are described. The details of verification and validation experiments, and parallel performance analysis in weak and strong scaling studies demonstrating the achieved efficiency of the algorithm, are presented. Moreover, numerical experiments demonstrate that the preconditioner developed is independent of the number of fuel subdomains in a fuel rod, which is particularly important when simulating different types of fuel rods. Finally, we demonstrate the power of the coupling methodology by considering problems with couplings between surface and volume physics and coupling of nonlinear thermal transport in fuel rods to an external radiation transport code.
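A minimal Jacobian-free Newton-Krylov example using SciPy's newton_krylov on a 1-D nonlinear diffusion-reaction residual; it only illustrates the JFNK idea and reproduces none of the AMP infrastructure, multi-domain couplings, or physics-based preconditioning described in the abstract.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Small Jacobian-free Newton-Krylov illustration using SciPy's newton_krylov:
# steady 1-D nonlinear diffusion-reaction, u'' = u**3 - 1 on (0, 1), u(0) = u(1) = 0.
# This only illustrates the JFNK idea; the AMP infrastructure, multi-domain
# coupling, and physics-based preconditioning described above are not reproduced.
n = 100
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate([[0.0], u, [0.0]])              # Dirichlet boundary values
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return lap - (u**3 - 1.0)

u0 = np.zeros(n)
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-8)
print("max residual:", float(np.max(np.abs(residual(u)))))
print("max of solution:", float(u.max()))
```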
Toward High-Performance Communications Interfaces for Science Problem Solving
NASA Astrophysics Data System (ADS)
Oviatt, Sharon L.; Cohen, Adrienne O.
2010-12-01
From a theoretical viewpoint, educational interfaces that facilitate communicative actions involving representations central to a domain can maximize students' effort associated with constructing new schemas. In addition, interfaces that minimize working memory demands due to the interface per se, for example by mimicking existing non-digital work practice, can preserve students' attentional focus on their learning task. In this research, we asked the question: what type of interface input capabilities provides the best support for science problem solving in both low- and high-performing students? High school students' ability to solve a diverse range of biology problems was compared over longitudinal sessions while they used: (1) hardcopy paper and pencil, (2) a digital paper and pen interface, (3) a pen tablet interface, and (4) a graphical tablet interface. Post-test evaluations revealed that time to solve problems, meta-cognitive control, solution correctness, and memory all were significantly enhanced when using the digital pen and paper interface, compared with tablet interfaces. The tangible pen and paper interface also was the only alternative that significantly facilitated skill acquisition in low-performing students. Paradoxically, all students nonetheless believed that the tablet interfaces provided the best support for their performance, revealing a lack of self-awareness about how to use computational tools to best advantage. Implications are discussed for how pen interfaces can be optimized for future educational purposes, and for establishing technology fluency curricula to improve students' awareness of the impact of digital tools on their performance.
Nonlinear Representation and Pulse Testing of Communication Subsystems.
1982-05-01
The Post-Doctoral Program provides an opportunity for faculty at participating universities to spend up to one year full time on exploratory ... development and problem-solving efforts, with the post-doctorals splitting their time between the customer location and their educational institutions. [Table-of-contents fragment:] Chapter II: z-Domain Characterization of the Quadratic Volterra System; 2.1 Continuous-Time Analysis ... Rational
ERIC Educational Resources Information Center
Saab, Nadira
2012-01-01
Computer-supported collaborative learning (CSCL) is an approach to learning in which learners can actively and collaboratively construct knowledge by means of interaction and joint problem solving. Regulation of learning is especially important in the domain of CSCL. Next to the regulation of task performance, the interaction between learners who…
ERIC Educational Resources Information Center
Manches, Andrew; O'Malley, Claire; Benford, Steve
2010-01-01
This research aims to explore the role of physical representations in young children's numerical learning then identify the benefits of using a graphical interface in order to understand the potential for developing interactive technologies in this domain. Three studies are reported that examined the effect of using physical representations…
ERIC Educational Resources Information Center
Toledo, Raciel Yera; Mota, Yailé Caballero
2014-01-01
The paper proposes a recommender system approach to cover online judge's domains. Online judges are e-learning tools that support the automatic evaluation of programming tasks done by individual users, and for this reason they are usually used for training students in programming contest and for supporting basic programming teachings. The…
ERIC Educational Resources Information Center
Trawick-Smith, Jeffrey; Russell, Heather; Swaminathan, Sudha
2011-01-01
Although previous research has explored the effects of various environmental influences on young children's play, the influence of toys has rarely been examined. This paucity of toy studies is due to a lack of a scientifically constructed observation system to evaluate the impact of play materials across developmental domains. The purpose of this…
ERIC Educational Resources Information Center
Recker, Margaret M.; Pirolli, Peter
Students learning to program recursive LISP functions in a typical school-like lesson on recursion were observed. The typical lesson contains text and examples and involves solving a series of programming problems. The focus of this study is on students' learning strategies in new domains. In this light, a Soar computational model of…
Stop Talking and Type: Comparing Virtual and Face-to-Face Mentoring in an Epistemic Game
ERIC Educational Resources Information Center
Bagley, E. A.; Shaffer, D. W.
2015-01-01
Research has shown that computer games and other virtual environments can support significant learning gains because they allow young people to explore complex concepts in simulated form. However, in complex problem-solving domains, complex thinking is learned not only by taking action, but also with the aid of mentors who provide guidance in the…
Box schemes and their implementation on the iPSC/860
NASA Technical Reports Server (NTRS)
Chattot, J. J.; Merriam, M. L.
1991-01-01
Research on algorithms for efficiently solving fluid flow problems on massively parallel computers is continued in the present paper. Attention is given to the implementation of a box scheme on the iPSC/860, a massively parallel computer with a peak speed of 10 Gflops and a memory of 128 Mwords. A domain decomposition approach to parallelism is used.
Yasuhara, Tomohisa; Sone, Tomomichi; Kohno, Takeyuki; Ogita, Kiyokazu
2015-01-01
A revised core curriculum model for pharmaceutical education, developed on the basis of the principles of outcome-based education, will be introduced in 2015. Inevitably, appropriate assessments of students' academic achievements will be required. Although evaluations of the cognitive domain can be carried out by paper tests, evaluation methods for the attitude domain and problem-solving abilities still need to be established. From the viewpoint of quality assurance for graduates, pharmaceutical education reform has become vital for evaluation as well as for learning strategies. To evaluate students' academic achievement in problem-solving abilities, authentic assessment is required. Authentic assessment is evaluation that mimics the contexts encountered in work and life. Specifically, it requires direct evaluation of performances, demonstrations, or the learners' own work that integrates a variety of knowledge and skills. To clarify the process of graduate research, we obtained qualitative data through focus group interviews with six teachers and analyzed the data using the modified grounded theory approach. Based on the results, we clarify the performance students should show in graduate research and create a rubric for evaluating performance in graduate research.
A Fluid Structure Algorithm with Lagrange Multipliers to Model Free Swimming
NASA Astrophysics Data System (ADS)
Sahin, Mehmet; Dilek, Ezgi
2017-11-01
A new monolithic approach is proposed to solve the fluid-structure interaction (FSI) problem with Lagrange multipliers in order to model free swimming/flying. In the present approach, the fluid domain is modeled by the incompressible Navier-Stokes equations and discretized using an Arbitrary Lagrangian-Eulerian (ALE) formulation based on the stable side-centered unstructured finite volume method. The solid domain is modeled by the constitutive laws for the nonlinear Saint Venant-Kirchhoff material, and the classical Galerkin finite element method is used to discretize the governing equations in a Lagrangian frame. In order to impose the body motion/deformation, the distance between the constraint pair nodes is imposed using the Lagrange multipliers, which is independent of the frame of reference. The resulting algebraic linear equations are solved in a fully coupled manner using a dual approach (null space method). The present numerical algorithm is initially validated for the classical FSI benchmark problems and then applied to the free swimming of three linked ellipses. The authors are grateful for the use of the computing resources provided by the National Center for High Performance Computing (UYBHM) under Grant Number 10752009 and the computing facilities at TUBITAK-ULAKBIM, High Performance and Grid Computing Center.
Structuring students’ analogical reasoning in solving algebra problem
NASA Astrophysics Data System (ADS)
Lailiyah, S.; Nusantara, T.; Sa'dijah, C.; Irawan, E. B.; Kusaeri; Asyhar, A. H.
2018-01-01
The average mathematics achievement of Indonesian students is ranked 38th out of 42 countries in the Trends in International Mathematics and Science Study (TIMSS) benchmark and 64th out of 65 countries in the Programme for International Student Assessment (PISA) survey. The low mathematics skills of Indonesian students have become an important reason to research reasoning and algebra in mathematics more deeply. Analogical reasoning is a very important component in mathematics because it is the key to creativity and it can make the learning process in the classroom effective. A major part of analogical reasoning is structuring, which includes the processes of inference and decision-making and involves a base domain and a target domain. Methodologically, the subjects of this research were 42 students from class XII. The data sources were think-aloud protocols, transcribed interviews, and videos taken while the subjects worked on the instruments and during the interviews. The collected data were analyzed using qualitative techniques. The results of this study describe the structuring characteristics of students' analogical reasoning in solving algebra problems across all research subjects.
FDTD method and models in optical education
NASA Astrophysics Data System (ADS)
Lin, Xiaogang; Wan, Nan; Weng, Lingdong; Zhu, Hao; Du, Jihe
2017-08-01
In this paper, the finite-difference time-domain (FDTD) method is proposed as a pedagogical tool in optics education. Meanwhile, FDTD Solutions, a simulation package based on the FDTD algorithm, is presented as a new tool that helps beginners build optical models and analyze optical problems. The core of the FDTD algorithm is that the time-dependent Maxwell's equations are discretized in their space and time partial derivatives, and the response of the interaction between an electromagnetic pulse and an ideal conductor or semiconductor is then simulated. Because the electromagnetic field is solved in the time domain, memory usage is reduced and broadband results can be obtained easily. Thus, promoting the FDTD algorithm in optics education is both feasible and efficient. FDTD enables us to design, analyze, and test modern passive and nonlinear photonic components (such as bio-particles, nanoparticles, and so on) for wave propagation, scattering, reflection, diffraction, polarization, and nonlinear phenomena. The different FDTD models can help teachers and students solve almost all of the optical problems encountered in optics education. Additionally, the GUI of FDTD Solutions is friendly enough that beginners can master it quickly.
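The core leapfrog update of the FDTD algorithm can be shown in a few lines; the sketch below is a bare 1-D vacuum Yee scheme in normalized units with reflecting (perfect electric conductor) ends, whereas FDTD Solutions layers geometry, materials, sources, and boundary models on top of this.

```python
import numpy as np

# Bare-bones 1-D FDTD (Yee) update in normalized units (c = 1, dz = 1,
# dt = 0.5 so the Courant condition is satisfied). This is only the core
# leapfrog update the abstract describes; the commercial FDTD Solutions
# package adds geometry, materials, sources, and boundary models on top.
nz, nsteps = 400, 600
ez = np.zeros(nz)        # electric field at integer grid points
hy = np.zeros(nz - 1)    # magnetic field at half grid points
dt = 0.5                 # normalized time step (Courant number 0.5)

for n in range(nsteps):
    # Update H from the curl of E, then E from the curl of H (leapfrog).
    hy += dt * (ez[1:] - ez[:-1])
    ez[1:-1] += dt * (hy[1:] - hy[:-1])
    # Soft Gaussian pulse source injected near the left edge.
    ez[20] += np.exp(-((n - 60) / 20.0) ** 2)
    # ez[0] and ez[-1] stay 0: perfect-electric-conductor ends that reflect
    # the pulse, which is the simplest (if crude) boundary treatment.

print("peak |Ez| after", nsteps, "steps:", float(np.max(np.abs(ez))))
```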
Executive Functions Contribute Uniquely to Reading Competence in Minority Youth.
Jacobson, Lisa A; Koriakin, Taylor; Lipkin, Paul; Boada, Richard; Frijters, Jan C; Lovett, Maureen W; Hill, Dina; Willcutt, Erik; Gottwald, Stephanie; Wolf, Maryanne; Bosson-Heenan, Joan; Gruen, Jeffrey R; Mahone, E Mark
Competent reading requires various skills beyond those for basic word reading (i.e., core language skills, rapid naming, phonological processing). Contributing "higher-level" or domain-general processes include information processing speed and executive functions (working memory, strategic problem solving, attentional switching). Research in this area has relied on largely Caucasian samples, with limited representation of children from racial or ethnic minority groups. This study examined contributions of executive skills to reading competence in 761 children of minority backgrounds. Hierarchical linear regressions examined unique contributions of executive functions (EF) to word reading, fluency, and comprehension. EF contributed uniquely to reading performance, over and above reading-related language skills; working memory contributed uniquely to all components of reading; while attentional switching, but not problem solving, contributed to isolated and contextual word reading and reading fluency. Problem solving uniquely predicted comprehension, suggesting that this skill may be especially important for reading comprehension in minority youth. Attentional switching may play a unique role in development of reading fluency in minority youth, perhaps as a result of the increased demand for switching between spoken versus written dialects. Findings have implications for educational and clinical practice with regard to reading instruction, remedial reading intervention, and assessment of individuals with reading difficulty.
The Sensitivity Analysis for the Flow Past Obstacles Problem with Respect to the Reynolds Number
Ito, Kazufumi; Li, Zhilin; Qiao, Zhonghua
2013-01-01
In this paper, numerical sensitivity analysis with respect to the Reynolds number for the flow past obstacle problem is presented. To carry out such analysis, at each time step we need to solve the incompressible Navier-Stokes equations on irregular domains twice, once for the primary variables and once for the sensitivity variables with homogeneous boundary conditions. The Navier-Stokes solver is the augmented immersed interface method for Navier-Stokes equations on irregular domains. One of the most important contributions of this paper is that our analysis can predict the critical Reynolds number at which vortex shedding begins to develop in the wake of the obstacle. Some interesting experiments are shown to illustrate how the critical Reynolds number varies with different geometric settings. PMID:24910780
The Sensitivity Analysis for the Flow Past Obstacles Problem with Respect to the Reynolds Number.
Ito, Kazufumi; Li, Zhilin; Qiao, Zhonghua
2012-02-01
In this paper, numerical sensitivity analysis with respect to the Reynolds number for the flow past obstacle problem is presented. To carry out such analysis, at each time step we need to solve the incompressible Navier-Stokes equations on irregular domains twice, once for the primary variables and once for the sensitivity variables with homogeneous boundary conditions. The Navier-Stokes solver is the augmented immersed interface method for Navier-Stokes equations on irregular domains. One of the most important contributions of this paper is that our analysis can predict the critical Reynolds number at which vortex shedding begins to develop in the wake of the obstacle. Some interesting experiments are shown to illustrate how the critical Reynolds number varies with different geometric settings.
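The idea of solving for the primary and sensitivity variables together can be illustrated on a scalar ODE whose sensitivity is known in closed form; the sketch below does this with SciPy and is only an analogy to the paper's Navier-Stokes sensitivity computation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# The paper solves the Navier-Stokes equations together with their sensitivity
# with respect to the Reynolds number. The same idea on a scalar ODE:
#   dy/dt = -p * y,  y(0) = 1;  the sensitivity s = dy/dp obeys
#   ds/dt = -p * s - y,  s(0) = 0,  with exact s(t) = -t * exp(-p t).
p = 2.0

def rhs(t, state):
    y, s = state
    return [-p * y, -p * s - y]

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-9, atol=1e-12,
                t_eval=np.linspace(0.0, 2.0, 5))
s_exact = -sol.t * np.exp(-p * sol.t)
print("max sensitivity error:", float(np.max(np.abs(sol.y[1] - s_exact))))
```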
NASA Astrophysics Data System (ADS)
Jamaludin, N. A.; Ahmedov, A.
2017-09-01
Many boundary value problems in the theory of partial differential equations can be solved by separation of variables. When the Schrödinger operator is considered, the influence of the singularity of the potential on the solution of the partial differential equation is of interest to researchers. In this paper, the uniform convergence of eigenfunction expansions corresponding to the Schrödinger operator with a potential from Sobolev classes is investigated. The spectral function corresponding to the Schrödinger operator is estimated in a closed domain. The isomorphism of the Nikolskii classes is applied to prove uniform convergence of eigenfunction expansions of the Schrödinger operator in a closed domain.
McCarty, David E
2010-06-15
The rule of diagnostic parsimony, otherwise known as "Ockham's Razor," teaches students of medicine to find a single unifying diagnosis to explain a given patient's symptoms. While this approach has merits in some settings, a more comprehensive approach is often needed for patients with chronic, nonspecific presentations for which there is a broad differential diagnosis. The cardinal manifestations of sleep disorders, daytime neurocognitive impairment and subjective sleep disturbances, are examples of such presentations. Successful sleep medicine clinicians therefore approach every patient with the knowledge that multiple diagnoses, rather than simply one, are likely to be found. Teaching an integrated and comprehensive approach to other clinicians in an organized and reproducible fashion is challenging, and the evaluation of effectiveness of such teaching is even more so. As a practical aid for teaching the approach to, and evaluation of, a comprehensive sleep medicine encounter, five functional domains of sleep medicine clinical problem-solving are presented as potential sources for sleep/wake disruption: (1) circadian misalignment, (2) pharmacologic factors, (3) medical factors, (4) psychiatric/psychosocial factors, and (5) primary sleep medicine diagnoses. These domains are presented and explained in an easy-to-remember "five finger" format. The five finger format can be used in real time to evaluate the completeness of a clinical encounter, or can be used in the design of standardized patients to identify areas of strength and potential weakness. A score sheet based upon this approach is offered as an alternative to commonly used Likert scales as a potentially more objective and practical measure of clinical problem-solving competence, making it useful for training programs striving to achieve or maintain fellowship accreditation.
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
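A minimal single-domain, forward-walk Monte Carlo estimator for a fixed-point system x = Hx + b conveys the random-walk mechanism; the paper analyzes the adjoint Neumann-Ulam variant and its domain-decomposed parallel behavior, which this toy (with an arbitrary small H) does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal forward-walk Monte Carlo estimator for a fixed-point system
#   x = H x + b   (convergent when the spectral radius of H is < 1),
# i.e. x = sum_k H^k b estimated by random walks. The paper studies the
# adjoint Neumann-Ulam variant and its domain-decomposed parallel behavior,
# which this single-domain toy does not reproduce.
H = np.array([[0.1, 0.3, 0.0],
              [0.2, 0.1, 0.2],
              [0.0, 0.3, 0.1]])
b = np.array([1.0, 2.0, 3.0])
n = len(b)
x_exact = np.linalg.solve(np.eye(n) - H, b)

def mc_component(i, n_walks=20000, p_continue=0.9):
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, b[i]
        while rng.random() < p_continue:
            nxt = rng.integers(n)                       # uniform transition
            weight *= H[state, nxt] * n / p_continue    # importance correction
            score += weight * b[nxt]
            state = nxt
        total += score
    return total / n_walks

x_mc = np.array([mc_component(i) for i in range(n)])
print("exact:", np.round(x_exact, 3))
print("MC   :", np.round(x_mc, 3))
```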
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
Insightful problem solving and creative tool modification by captive nontool-using rooks.
Bird, Christopher D; Emery, Nathan J
2009-06-23
The ability to use tools has been suggested to indicate advanced physical cognition in animals. Here we show that rooks, a member of the corvid family that do not appear to use tools in the wild, are capable of insightful problem solving related to sophisticated tool use, including spontaneously modifying and using a variety of tools, shaping hooks out of wire, and using a series of tools in a sequence to gain a reward. It is remarkable that a species that does not use tools in the wild appears to possess an understanding of tools rivaling that of habitual tool users such as New Caledonian crows and chimpanzees. Our findings suggest that the ability to represent tools may be a domain-general cognitive capacity rather than an adaptive specialization and question the relationship between physical intelligence and wild tool use.
A new scheme of the time-domain fluorescence tomography for a semi-infinite turbid medium
NASA Astrophysics Data System (ADS)
Prieto, Kernel; Nishimura, Goro
2017-04-01
A new scheme for reconstruction of a fluorophore target embedded in a semi-infinite medium was proposed and evaluated. In this scheme, we neglected the presence of the fluorophore target for the excitation light and used an analytical solution of the time-dependent radiative transfer equation (RTE) for the excitation light in a homogeneous semi-infinite media instead of solving the RTE numerically in the forward calculation. The inverse problem for imaging the fluorophore target was solved using the Landweber-Kaczmarz method with the concept of the adjoint fields. Numerical experiments show that the proposed scheme provides acceptable results of the reconstructed shape and location of the target. The computation times of the solution of the forward problem and the whole reconstruction process were reduced by about 40 and 15%, respectively.
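A linear Landweber iteration on a small ill-conditioned test problem illustrates the basic update and the role of early stopping; the paper's reconstruction uses the nonlinear Landweber-Kaczmarz method with adjoint radiative-transfer fields, which is not reproduced here, and the test matrix is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal linear Landweber iteration x_{k+1} = x_k + w * A^T (y - A x_k) for a
# small ill-conditioned linear inverse problem. The paper applies the nonlinear
# Landweber-Kaczmarz method with adjoint radiative-transfer fields; this toy
# only shows the basic update and the effect of early stopping.
m, n = 30, 30
A = np.array([[1.0 / (i + j + 1.0) for j in range(n)] for i in range(m)])  # Hilbert matrix, ill-conditioned
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 1e-4 * rng.standard_normal(m)          # noisy data

w = 1.0 / np.linalg.norm(A, 2) ** 2                     # step size below 2 / ||A||^2
x = np.zeros(n)
for k in range(1, 5001):
    x += w * A.T @ (y - A @ x)
    if k in (10, 100, 1000, 5000):
        print(f"iter {k:5d}: data misfit {np.linalg.norm(A @ x - y):.2e}, "
              f"error vs. truth {np.linalg.norm(x - x_true):.2e}")
# Early stopping acts as regularization: the error against the true coefficients
# typically stops improving (or worsens) long before the data misfit does.
```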
A stable partitioned FSI algorithm for incompressible flow and deforming beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L., E-mail: lil19@rpi.edu; Henshaw, W.D., E-mail: henshw@rpi.edu; Banks, J.W., E-mail: banksj3@rpi.edu
2016-05-01
An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme. Results for one benchmark problem modeling blood flow in a deforming artery are also compared with corresponding results available in the literature.
Gelman, Susan A; Noles, Nicholaus S
2011-09-01
Human cognition entails domain-specific cognitive processes that influence memory, attention, categorization, problem-solving, reasoning, and knowledge organization. This article examines domain-specific causal theories, which are of particular interest for permitting an examination of how knowledge structures change over time. We first describe the properties of commonsense theories, and how commonsense theories differ from scientific theories, illustrating with children's classification of biological and nonbiological kinds. We next consider the implications of domain-specificity for broader issues regarding cognitive development and conceptual change. We then examine the extent to which domain-specific theories interact, and how people reconcile competing causal frameworks. Future directions for research include examining how different content domains interact, the nature of theory change, the role of context (including culture, language, and social interaction) in inducing different frameworks, and the neural bases for domain-specific reasoning. WIREs Cogni Sci 2011 2 490-502 DOI: 10.1002/wcs.124 This article is categorized under: Psychology > Reasoning and Decision Making. Copyright © 2010 John Wiley & Sons, Ltd.
Drake, John H; Özcan, Ender; Burke, Edmund K
2016-01-01
Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
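As a rough illustration of the kind of control loop described above (a sketch under assumptions, not the HyFlex or Hyperion API; the problem interface, acceptance rule, and probabilities are hypothetical), a single-point selection hyper-heuristic that manages a bounded list of crossover partners at the hyper-heuristic level might look like this:

```python
# Minimal sketch of a single-point selection hyper-heuristic with a bounded
# list of potential crossover partners managed at the hyper-heuristic level.
# The `problem` object (random_solution, fitness) and the heuristic callables
# are hypothetical placeholders; fitness is assumed to be maximized.
import random

def hyper_heuristic(problem, low_level_heuristics, crossover_ops,
                    iterations=10_000, memory_size=5, p_crossover=0.2):
    current = problem.random_solution()
    memory = [current]                              # candidate partners for crossover
    for _ in range(iterations):
        if crossover_ops and random.random() < p_crossover:
            op = random.choice(crossover_ops)
            partner = random.choice(memory)         # partner choice controlled here
            candidate = op(current, partner)
        else:
            op = random.choice(low_level_heuristics)
            candidate = op(current)
        if problem.fitness(candidate) >= problem.fitness(current):
            current = candidate                     # simple improve-or-equal acceptance
            memory.append(candidate)
            memory = memory[-memory_size:]          # keep the partner list bounded
    return current
```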
Direct Numerical Simulation of Automobile Cavity Tones
NASA Technical Reports Server (NTRS)
Kurbatskii, Konstantin; Tam, Christopher K. W.
2000-01-01
The Navier Stokes equation is solved computationally by the Dispersion-Relation-Preserving (DRP) scheme for the flow and acoustic fields associated with a laminar boundary layer flow over an automobile door cavity. In this work, the flow Reynolds number is restricted to R(sub delta*) < 3400; the range of Reynolds number for which laminar flow may be maintained. This investigation focuses on two aspects of the problem, namely, the effect of boundary layer thickness on the cavity tone frequency and intensity and the effect of the size of the computation domain on the accuracy of the numerical simulation. It is found that the tone frequency decreases with an increase in boundary layer thickness. When the boundary layer is thicker than a certain critical value, depending on the flow speed, no tone is emitted by the cavity. Computationally, solutions of aeroacoustics problems are known to be sensitive to the size of the computation domain. Numerical experiments indicate that the use of a small domain could result in normal mode type acoustic oscillations in the entire computation domain leading to an increase in tone frequency and intensity. When the computation domain is expanded so that the boundaries are at least one wavelength away from the noise source, the computed tone frequency and intensity are found to be computation domain size independent.
Fuchs, Lynn S.; Geary, David C.; Compton, Donald L.; Fuchs, Douglas; Hamlett, Carol L.; Seethaler, Pamela M.; Bryant, Joan D.; Schatschneider, Christopher
2010-01-01
The purpose of this study was to examine the interplay between basic numerical cognition and domain-general abilities (such as working memory) in explaining school mathematics learning. First graders (n=280; 5.77 years) were assessed on 2 types of basic numerical cognition, 8 domain-general abilities, procedural calculations (PCs), and word problems (WPs) in fall and then reassessed on PCs and WPs in spring. Development was indexed via latent change scores, and the interplay between numerical and domain-general abilities was analyzed via multiple regression. Results suggest that the development of different types of formal school mathematics depends on different constellations of numerical versus general cognitive abilities. When controlling for 8 domain-general abilities, both aspects of basic numerical cognition were uniquely predictive of PC and WP development. Yet, for PC development, the additional amount of variance explained by the set of domain-general abilities was not significant, and only counting span was uniquely predictive. By contrast, for WP development, the set of domain- general abilities did provide additional explanatory value, accounting for about the same amount of variance as the basic numerical cognition variables. Language, attentive behavior, nonverbal problem solving, and listening span were uniquely predictive. PMID:20822213
Model Order Reduction Algorithm for Estimating the Absorption Spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.
The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator’s eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm only scales quadratically with respect to the problem dimension.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
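The covariance-tapering idea can be sketched in a few lines (an illustration only, not the authors' implementation; the covariance function and taper range are assumed inputs): multiply the covariance by a compactly supported taper so that the ordinary kriging system becomes sparse in effect, then solve it with an iterative method suited to symmetric indefinite systems.

```python
# Sketch of tapered ordinary kriging at a single query point x0.
# X: (n, d) array of sample locations, z: (n,) array of sample values,
# cov: covariance as a function of distance, theta: taper range.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import minres

def wendland_taper(h, theta):
    """Compactly supported Wendland-type taper: zero beyond range theta."""
    r = np.clip(h / theta, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

def tapered_ordinary_kriging(X, z, x0, cov, theta):
    n = len(z)
    H = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    C = cov(H) * wendland_taper(H, theta)          # tapered covariance matrix
    # Ordinary kriging system with a Lagrange multiplier for unbiasedness:
    # [C 1; 1^T 0] [w; mu] = [c0; 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    h0 = np.linalg.norm(X - x0, axis=1)
    b = np.append(cov(h0) * wendland_taper(h0, theta), 1.0)
    sol, _ = minres(csr_matrix(A), b)              # symmetric indefinite solver
    return float(sol[:n] @ z)                      # kriging estimate at x0

# Example usage with an assumed exponential covariance:
# est = tapered_ordinary_kriging(X, z, x0, cov=lambda h: np.exp(-h / 0.3), theta=0.5)
```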
Fast sweeping methods for hyperbolic systems of conservation laws at steady state II
NASA Astrophysics Data System (ADS)
Engquist, Björn; Froese, Brittany D.; Tsai, Yen-Hsi Richard
2015-04-01
The idea of using fast sweeping methods for solving stationary systems of conservation laws has previously been proposed for efficiently computing solutions with sharp shocks. We further develop these methods to allow for a more challenging class of problems including problems with sonic points, shocks originating in the interior of the domain, rarefaction waves, and two-dimensional systems. We show that fast sweeping methods can produce higher-order accuracy. Computational results validate the claims of accuracy, sharp shock curves, and optimal computational efficiency.
A Model-Free No-arbitrage Price Bound for Variance Options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu
2013-08-01
We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.
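A generic gradient-projection iteration of the type referred to above can be sketched as follows; the actual objective, discretization, and constraint set of the variance-option problem are not reproduced, and the quadratic example is purely illustrative.

```python
# Generic projected gradient descent on a bounded (box) domain.
import numpy as np

def projected_gradient(grad, project, x0, step=1e-2, iters=1000, tol=1e-8):
    x = project(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x_new = project(x - step * grad(x))   # gradient step, then projection
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative example: minimize ||x - c||^2 over the box [0, 1]^3.
c = np.array([0.2, 1.5, -0.3])
x_star = projected_gradient(lambda x: 2.0 * (x - c),
                            lambda x: np.clip(x, 0.0, 1.0),
                            x0=np.zeros(3))
```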
Comments on "Image denoising by sparse 3-D transform-domain collaborative filtering".
Hou, Yingkun; Zhao, Chunxia; Yang, Deyun; Cheng, Yong
2011-01-01
In order to resolve the problem that the denoising performance has a sharp drop when the noise standard deviation reaches 40, it was proposed to replace the wavelet transform with the DCT. In this comment, we argue that this replacement is unnecessary and that the problem can be solved by adjusting some numerical parameters. We also present this parameter modification approach here. Experimental results demonstrate that the proposed modification achieves better results in terms of both peak signal-to-noise ratio and subjective visual quality than the original method for strong noise.
1990-10-02
between neighboring goals in verbal protocols. 2.2 Materials: All subjects received the following problem: The manufacturer of Coca Cola wants to improve...his product. Recently, he has received complaints that Coca Cola does not taste as good any more as it used to. Therefore, he wants to investigate...what it is exactly that people taste when they drink Coca Cola. In order to be able to make a comparison with the competitors, Pepsi Cola and a house
Perception-oriented fusion of multi-sensor imagery: visible, IR, and SAR
NASA Astrophysics Data System (ADS)
Sidorchuk, D.; Volkov, V.; Gladilin, S.
2018-04-01
This paper addresses the problem of fusing optical (visible and thermal domain) data and radar data for the purpose of visualization. These types of images typically contain a lot of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that utilizes some peculiarities of human color perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are exemplified by satellite imagery.
Typed Linear Chain Conditional Random Fields and Their Application to Intrusion Detection
NASA Astrophysics Data System (ADS)
Elfers, Carsten; Horstmann, Mirko; Sohr, Karsten; Herzog, Otthein
Intrusion detection in computer networks faces the problem of a large number of both false alarms and unrecognized attacks. To improve the precision of detection, various machine learning techniques have been proposed. However, one critical issue is that the amount of reference data that contains serious intrusions is very sparse. In this paper we present an inference process with linear chain conditional random fields that aims to solve this problem by using domain knowledge about the alerts of different intrusion sensors represented in an ontology.
NASA Astrophysics Data System (ADS)
Aji Hapsoro, Cahyo; Purqon, Acep; Srigutomo, Wahyu
2017-07-01
2-D Time Domain Electromagnetic (TDEM) modeling has been successfully conducted to illustrate the distribution of the electric field under the Earth's surface. The electric field, together with the magnetic field, is used to analyze resistivity, one of the physical properties most important for determining the reservoir potential of geothermal systems as a source of renewable energy. In this modeling we used the Time Domain Electromagnetic method because it can solve EM field interaction problems with complex geometry and can analyze transient problems. TDEM methods are used to model the electric and magnetic fields as functions of time combined with distance and depth. The result of this modeling is the electric field intensity, which is capable of describing the structure of the Earth’s subsurface. The result of this modeling can be applied to describe the Earth's subsurface resistivity values and thus to determine the reservoir potential of geothermal systems.
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization. These algorithms have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The problem is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, provided they can be found fast enough and are sufficiently accurate for the purpose. In this paper we have performed an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
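A minimal genetic-algorithm sketch for a capacitated VRP is given below; this is illustrative only, and the encoding, operators, and parameters are assumptions rather than those used in the paper. A chromosome is a permutation of customers that is split greedily into routes respecting vehicle capacity.

```python
# Minimal GA sketch for a capacitated VRP (permutation encoding).
import random

def split_routes(perm, demand, capacity):
    """Greedily cut a customer permutation into capacity-feasible routes."""
    routes, route, load = [], [], 0
    for c in perm:
        if load + demand[c] > capacity and route:
            routes.append(route); route, load = [], 0
        route.append(c); load += demand[c]
    if route:
        routes.append(route)
    return routes

def route_cost(routes, dist, depot=0):
    total = 0.0
    for r in routes:
        path = [depot] + r + [depot]
        total += sum(dist[a][b] for a, b in zip(path, path[1:]))
    return total

def ga_vrp(customers, demand, capacity, dist, pop=50, gens=200, pmut=0.2):
    def fitness(p):
        return route_cost(split_routes(p, demand, capacity), dist)
    population = [random.sample(customers, len(customers)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[:pop // 2]               # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + [c for c in b if c not in a[:cut]]  # order crossover
            if random.random() < pmut:                            # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = parents + children
    return min(population, key=fitness)
```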
Children's patterns of reasoning about reading and addition concepts.
Farrington-Flint, Lee; Canobi, Katherine H; Wood, Clare; Faulkner, Dorothy
2010-06-01
Children's reasoning was examined within two educational contexts (word reading and addition) so as to understand the factors that contribute to relational reasoning in the two domains. Sixty-seven 5- to 7-year-olds were given a series of related words to read or single-digit addition items to solve (interspersed with unrelated items). The frequency, accuracy, and response times of children's self-reports on the conceptually related items provided a measure of relational reasoning, while performance on the unrelated addition and reading items provided a measure of procedural skill. The results indicated that the children's ability to use conceptual relations to solve both reading and addition problems enhanced speed and accuracy levels, increased with age, and was related to procedural skill. However, regression analyses revealed that domain-specific competencies can best explain the use of conceptual relations in both reading and addition. Moreover, a cluster analysis revealed that children differ according to the academic domain in which they first apply conceptual relations and these differences are related to individual variation in their procedural skills within these particular domains. These results highlight the developmental significance of relational reasoning in the context of reading and addition and underscore the importance of concept-procedure links in explaining children's literacy and arithmetical development.
NASA Astrophysics Data System (ADS)
Tan, Kian Lam; Lim, Chen Kim
2017-10-01
With the explosive growth of online information such as email messages, news articles, and scientific literature, many institutions and museums are converting their cultural collections from physical data to digital format. However, this conversion has resulted in issues of inconsistency and incompleteness. Besides, the usage of inaccurate keywords also results in the short query problem. Most of the time, the inconsistency and incompleteness are caused by aggregation faults in annotating a document itself, while the short query problem is caused by naive users who have limited prior knowledge and experience in the cultural heritage domain. In this paper, we present an approach to solve the problems of inconsistency, incompleteness, and short queries by incorporating a Term Similarity Matrix into the Language Model. Our approach is tested on the Cultural Heritage in CLEF (CHiC) collection, which consists of short queries and documents. The results show that the proposed approach is effective and improves accuracy at retrieval time.
Using Decision Procedures to Build Domain-Specific Deductive Synthesis Systems
NASA Technical Reports Server (NTRS)
VanBaalen, Jeffrey; Roach, Steven; Lau, Sonie (Technical Monitor)
1998-01-01
This paper describes a class of decision procedures that we have found useful for efficient, domain-specific deductive synthesis. These procedures are called closure-based ground literal satisfiability procedures. We argue that this is a large and interesting class of procedures and show how to interface these procedures to a theorem prover for efficient deductive synthesis. Finally, we describe some results we have observed from our implementation. Amphion/NAIF is a domain-specific, high-assurance software synthesis system. It takes an abstract specification of a problem in solar system mechanics, such as 'when will a signal sent from the Cassini spacecraft to Earth be blocked by the planet Saturn?', and automatically synthesizes a FORTRAN program to solve it.
Time-domain Surveys and Data Shift: Case Study at the intermediate Palomar Transient Factory
NASA Astrophysics Data System (ADS)
Rebbapragada, Umaa; Bue, Brian; Wozniak, Przemyslaw R.
2015-01-01
Next generation time-domain surveys are susceptible to the problem of data shift that is caused by upgrades to data processing pipelines and instruments. Data shift degrades the performance of automated machine learning classifiers that vet detections and classify source types because fundamental assumptions are violated when classifiers are built in one data regime but are deployed on data from another. This issue is not currently discussed within the astronomical community, but will be increasingly pressing over the next decade with the advent of new time domain surveys. We look at the problem of data shift that was caused by a data pipeline upgrade when the intermediate Palomar Transient Factory (iPTF) succeeded the Palomar Transient Factory (PTF) in January 2013. iPTF relies upon machine-learned Real-Bogus classifiers to vet sources extracted from subtracted images on a scale of zero to one, where zero indicates a bogus detection (image artifact) and one indicates a real astronomical transient, with the overwhelming majority of candidates scored as bogus. An effective Real-Bogus system filters all but the most promising candidates, which are presented to human scanners who make decisions about triggering follow-up assets. The Real-Bogus systems currently in operation at iPTF (RB4 and RB5) solve the data shift problem. The statistical models of RB4 and RB5 were built from the ground up using examples from iPTF alone, whereas an older system, RB2, was built using PTF data, but was deployed after iPTF launched. We discuss the machine learning assumptions that are violated when a system is trained on one domain (PTF) but deployed on another (iPTF) that experiences data shift. We provide illustrative examples of data parameters and statistics that experienced shift. Finally, we show results comparing the three systems in operation, demonstrating that systems that solve domain shift (RB4 and RB5) are superior to those that don't (RB2). Research described in this abstract was carried out at the Jet Propulsion Laboratory under contract with the National Aeronautics and Space Administration. US Government Support Acknowledged.
NASA Astrophysics Data System (ADS)
Glotov, V. V.; Ostroumov, I. V.; Romashchenko, M. A.
2018-05-01
To study the effect of the parameters of phase-shift signals on the EMC of REM, a generalized signal generation model for a radio transmitter was developed. The model allows obtaining digital representations of phase-shift signals, which form a continuous pulse in the time domain and on the frequency axis, for different shapes of the signal element envelope.
Iontophoretic transdermal drug delivery: a multi-layered approach.
Pontrelli, Giuseppe; Lauricella, Marco; Ferreira, José A; Pena, Gonçalo
2017-12-11
We present a multi-layer mathematical model to describe transdermal drug release from an iontophoretic system. The Nernst-Planck equation describes the basic convection-diffusion process, with the electric potential obtained by solving Laplace's equation. These equations are complemented with suitable interface and boundary conditions in a multi-layered domain. The stability of the mathematical problem is discussed in different scenarios, and a finite-difference method is used to solve the coupled system. Numerical experiments are included to illustrate the drug dynamics under different conditions. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
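A toy one-dimensional analogue of the potential sub-problem illustrates the finite-difference treatment of a layer interface with continuity of potential and flux; this is not the authors' model, and the layer thicknesses and conductivities below are made-up values.

```python
# Finite-difference solve of Laplace's equation across two layers with
# different conductivities s1, s2; node n1 is the interface, where continuity
# of potential and of flux (s * dphi/dx) is enforced.
import numpy as np

def two_layer_potential(n1=50, n2=50, L1=1.0, L2=1.0, s1=1.0, s2=0.2,
                        phi_left=1.0, phi_right=0.0):
    h1, h2 = L1 / n1, L2 / n2
    n = n1 + n2 + 1
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = phi_left            # Dirichlet condition, left boundary
    A[-1, -1] = 1.0; b[-1] = phi_right        # Dirichlet condition, right boundary
    for i in range(1, n - 1):
        if i != n1:                           # interior node of either layer
            A[i, i - 1:i + 2] = [1.0, -2.0, 1.0]
        else:                                 # interface node: flux continuity
            A[i, i - 1] = s1 / h1
            A[i, i] = -(s1 / h1 + s2 / h2)
            A[i, i + 1] = s2 / h2
    return np.linalg.solve(A, b)

phi = two_layer_potential()                   # piecewise-linear potential profile
```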
A Linear-Elasticity Solver for Higher-Order Space-Time Mesh Deformation
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2018-01-01
A linear-elasticity approach is presented for the generation of meshes appropriate for a higher-order space-time discontinuous finite-element method. The equations of linear-elasticity are discretized using a higher-order, spatially-continuous, finite-element method. Given an initial finite-element mesh, and a specified boundary displacement, we solve for the mesh displacements to obtain a higher-order curvilinear mesh. Alternatively, for moving-domain problems we use the linear-elasticity approach to solve for a temporally discontinuous mesh velocity on each time-slab and recover a continuous mesh deformation by integrating the velocity. The applicability of this methodology is presented for several benchmark test cases.
Recursive recovery of Markov transition probabilities from boundary value data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patch, Sarah Kathyrn
1994-04-01
In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue, Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.
[Application of CWT to extract characteristic monitoring parameters during spine surgery].
Chen, Penghui; Wu, Baoming; Hu, Yong
2005-10-01
It is necessary to monitor intraoperative spinal function in order to prevent spinal neurological deficits during spine surgery. This study aims to extract characteristic electrophysiological monitoring parameters during the surgical treatment of scoliosis. It also addresses the problem that monitoring parameters in the time domain are highly variable and sensitive to noise. By using the continuous wavelet transform to analyze the intraoperative cortical somatosensory evoked potential (CSEP), three new characteristic monitoring parameters in the time-frequency domain (TFD) are extracted. The results indicate that the variability of the CSEP characteristic parameters in the TFD is lower than that of the parameters in the time domain. Therefore, the TFD characteristic monitoring parameters are more stable and reliable than the latency and amplitude parameters in the time domain. The application of TFD monitoring parameters during spine surgery may help avoid spinal injury effectively.
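As an illustration of this kind of time-frequency feature extraction (not the authors' processing chain; the sampling rate and synthetic trace below are assumptions), a continuous wavelet transform can be computed with PyWavelets and scanned for its peak energy:

```python
# Continuous wavelet transform of an evoked-potential-like trace, from which
# time-frequency-domain features (peak scale, peak time, energy) can be read.
import numpy as np
import pywt

fs = 5000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
# Synthetic stand-in for a CSEP trace: a 300 Hz burst under a Gaussian envelope.
csep = np.exp(-((t - 0.03) / 0.005) ** 2) * np.sin(2 * np.pi * 300 * t)

scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(csep, scales, "morl", sampling_period=1 / fs)

energy = np.abs(coefs) ** 2
peak_scale, peak_time = np.unravel_index(np.argmax(energy), energy.shape)
```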
Shape and Reinforcement Optimization of Underground Tunnels
NASA Astrophysics Data System (ADS)
Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang
Design of the support system and selection of an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, shape selection and support design are based mainly on the designer's judgment and experience. Both of these problems can be viewed as material distribution problems, where one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved to be useful in solving these kinds of problems in structural design. Recently the application of topology optimization techniques in reinforcement design around underground excavations has been studied by some researchers. In this paper a three-phase material model will be introduced, changing between normal rock, reinforced rock, and void. Using such a material model, both the shape and the reinforcement design problems can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique has been extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. The validity and capability of the proposed approach have been investigated through some examples.
NASA Technical Reports Server (NTRS)
Prince, Mary Ellen
1987-01-01
The expert system is a computer program which attempts to reproduce the problem-solving behavior of an expert, who is able to view problems from a broad perspective and arrive at conclusions rapidly, using intuition, shortcuts, and analogies to previous situations. Expert systems are a departure from the usual artificial intelligence approach to problem solving. Researchers have traditionally tried to develop general modes of human intelligence that could be applied to many different situations. Expert systems, on the other hand, tend to rely on large quantities of domain specific knowledge, much of it heuristic. The reasoning component of the system is relatively simple and straightforward. For this reason, expert systems are often called knowledge based systems. The report expands on the foregoing. Section 1 discusses the architecture of a typical expert system. Section 2 deals with the characteristics that make a problem a suitable candidate for expert system solution. Section 3 surveys current technology, describing some of the software aids available for expert system development. Section 4 discusses the limitations of the latter. The concluding section makes predictions of future trends.
Performance of a parallel thermal-hydraulics code TEMPEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fann, G.I.; Trent, D.S.
The authors describe the parallelization of the TEMPEST thermal-hydraulics code. The serial version of this code is used for production-quality 3-D thermal-hydraulics simulations. Good speedup was obtained with a parallel diagonally preconditioned BiCGStab non-symmetric linear solver, using a spatial domain decomposition approach for the semi-iterative pressure-based and mass-conserved algorithm. The test case used here to illustrate the performance of the BiCGStab solver is a 3-D natural convection problem modeled using finite volume discretization in cylindrical coordinates. The BiCGStab solver replaced the LSOR-ADI method for solving the pressure equation in TEMPEST. BiCGStab also solves the coupled thermal energy equation. Scaling performance for 3 problem sizes (221220 nodes, 358120 nodes, and 701220 nodes) is presented. These problems were run on 2 different parallel machines: IBM-SP and SGI PowerChallenge. The largest problem attains a speedup of 68 on a 128 processor IBM-SP. In real terms, this is over 34 times faster than the fastest serial production time using the LSOR-ADI solver.
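A serial illustration of the solver choice (not the TEMPEST code itself; the test matrix is a simple stand-in for a discretized pressure equation) shows BiCGStab with a diagonal (Jacobi) preconditioner:

```python
# BiCGStab with a Jacobi (diagonal) preconditioner via SciPy.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, LinearOperator

n = 10_000
# Tridiagonal, diagonally dominant test matrix standing in for a pressure system.
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)  # Jacobi preconditioner

x, info = bicgstab(A, b, M=M, atol=1e-10)
assert info == 0  # info == 0 indicates convergence
```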
Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems
NASA Astrophysics Data System (ADS)
Zuchowski, Loïc; Brun, Michael; De Martin, Florent
2018-05-01
The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in a soil/structure interaction problem. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.
Mobile transporter path planning
NASA Technical Reports Server (NTRS)
Baffes, Paul; Wang, Lui
1990-01-01
The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA is also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
Research and implementation of simulation for TDICCD remote sensing in vibration of optical axis
NASA Astrophysics Data System (ADS)
Liu, Zhi-hong; Kang, Xiao-jun; Lin, Zhe; Song, Li
2013-12-01
During the exposure time of a space-borne TDICCD push-broom camera, the charge transfer speed in the push-broom direction and the line-by-line scanning speed of the sensor must match each other strictly. However, since attitude disturbance of the satellite and vibration of the camera are inevitable, the speed mismatch cannot be eliminated; it causes the signals of different targets to overlap and results in a decline in image resolution. The effects of the velocity mismatch can be visually observed and analyzed by simulating the degradation of image quality caused by vibration of the optical axis, which is significant for evaluating image quality and designing image restoration algorithms. The first problem to be solved is how to model the imaging process in the time and space domains during the imaging time. Because the vibration information used for simulation is usually given as a continuous curve while the pixels of the original image matrix and the sensor matrix are discrete, the two cannot always be matched well. The effect of the simulation is also influenced by discrete sampling over the integration time. An appropriate discrete modeling and simulation method is therefore important for improving simulation accuracy and efficiency. This paper analyzes discretization schemes in the time and space domains and presents a method, based on the principle of the TDICCD sensor, to simulate the image quality of the optical system under vibration of the line of sight. The gray values of the pixels in the sensor matrix are obtained by a weighted average, which solves the pixel mismatch problem. Comparison with hardware test experiments indicates that the simulation system performs well in terms of accuracy and reliability.
Parental Obesity and Early Childhood Development.
Yeung, Edwina H; Sundaram, Rajeshwari; Ghassabian, Akhgar; Xie, Yunlong; Buck Louis, Germaine
2017-02-01
Previous studies identified associations between maternal obesity and childhood neurodevelopment, but few examined paternal obesity despite potentially distinct genetic/epigenetic effects related to developmental programming. Upstate KIDS (2008-2010) recruited mothers from New York State (excluding New York City) at ∼4 months postpartum. Parents completed the Ages and Stages Questionnaire (ASQ) when their children were 4, 8, 12, 18, 24, 30, and 36 months of age corrected for gestation. The ASQ is validated to screen for delays in 5 developmental domains (ie, fine motor, gross motor, communication, personal-social functioning, and problem-solving ability). Analyses included 3759 singletons and 1062 nonrelated twins with ≥1 ASQs returned. Adjusted odds ratios (aORs) and 95% confidence intervals were estimated by using generalized linear mixed models accounting for maternal covariates (ie, age, race, education, insurance, marital status, parity, and pregnancy smoking). Compared with normal/underweight mothers (BMI <25), children of obese mothers (26% with BMI ≥30) had increased odds of failing the fine motor domain (aOR 1.67; confidence interval 1.12-2.47). The association remained after additional adjustment for paternal BMI (1.67; 1.11-2.52). Paternal obesity (29%) was associated with increased risk of failing the personal-social domain (1.75; 1.13-2.71), albeit attenuated after adjustment for maternal obesity (aOR 1.71; 1.08-2.70). Children whose parents both had BMI ≥35 were likely to additionally fail the problem-solving domain (2.93; 1.09-7.85). Findings suggest that maternal and paternal obesity are each associated with specific delays in early childhood development, emphasizing the importance of family information when screening child development. Copyright © 2017 by the American Academy of Pediatrics.
Parental Obesity and Early Childhood Development
Sundaram, Rajeshwari; Ghassabian, Akhgar; Xie, Yunlong; Buck Louis, Germaine
2017-01-01
BACKGROUND: Previous studies identified associations between maternal obesity and childhood neurodevelopment, but few examined paternal obesity despite potentially distinct genetic/epigenetic effects related to developmental programming. METHODS: Upstate KIDS (2008–2010) recruited mothers from New York State (excluding New York City) at ∼4 months postpartum. Parents completed the Ages and Stages Questionnaire (ASQ) when their children were 4, 8, 12, 18, 24, 30, and 36 months of age corrected for gestation. The ASQ is validated to screen for delays in 5 developmental domains (ie, fine motor, gross motor, communication, personal-social functioning, and problem-solving ability). Analyses included 3759 singletons and 1062 nonrelated twins with ≥1 ASQs returned. Adjusted odds ratios (aORs) and 95% confidence intervals were estimated by using generalized linear mixed models accounting for maternal covariates (ie, age, race, education, insurance, marital status, parity, and pregnancy smoking). RESULTS: Compared with normal/underweight mothers (BMI <25), children of obese mothers (26% with BMI ≥30) had increased odds of failing the fine motor domain (aOR 1.67; confidence interval 1.12–2.47). The association remained after additional adjustment for paternal BMI (1.67; 1.11–2.52). Paternal obesity (29%) was associated with increased risk of failing the personal-social domain (1.75; 1.13–2.71), albeit attenuated after adjustment for maternal obesity (aOR 1.71; 1.08–2.70). Children whose parents both had BMI ≥35 were likely to additionally fail the problem-solving domain (2.93; 1.09–7.85). CONCLUSIONS: Findings suggest that maternal and paternal obesity are each associated with specific delays in early childhood development, emphasizing the importance of family information when screening child development. PMID:28044047
Unlocking the spatial inversion of large scanning magnetic microscopy datasets
NASA Astrophysics Data System (ADS)
Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.
2013-12-01
Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
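The spatial-domain inversion setup can be sketched as follows; this is an illustration only, with SciPy's nnls standing in for the faster TNT solver described above, and the grid, sensor height, and moment values are made-up numbers.

```python
# Build a forward matrix mapping vertical dipole moments to the measured
# vertical field component, then solve the non-negative least-squares problem.
import numpy as np
from scipy.optimize import nnls

def bz_kernel(obs, src, height):
    """Vertical-field kernel of unit vertical dipoles at `src`, observed at
    `obs` a distance `height` above the sample plane (point-dipole model)."""
    mu0 = 4e-7 * np.pi
    r = np.sqrt((obs[:, 0, None] - src[None, :, 0]) ** 2
                + (obs[:, 1, None] - src[None, :, 1]) ** 2
                + height ** 2)
    return mu0 / (4 * np.pi) * (3 * height ** 2 - r ** 2) / r ** 5

# Hypothetical small example: a 20 x 20 source grid observed on the same grid.
xs, ys = np.meshgrid(np.linspace(0, 1e-3, 20), np.linspace(0, 1e-3, 20))
grid = np.column_stack([xs.ravel(), ys.ravel()])
A = bz_kernel(grid, grid, height=1e-4)

m_true = np.random.default_rng(0).uniform(0, 1e-14, size=grid.shape[0])
bz = A @ m_true                       # synthetic "measured" field

m_est, residual = nnls(A, bz)         # non-negative moment magnitudes
```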
Using adaptive grid in modeling rocket nozzle flow
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1992-01-01
The mechanical behavior of a rocket motor internal flow field results in a system of nonlinear partial differential equations which cannot be solved analytically. However, this system of equations called the Navier-Stokes equations can be solved numerically. The accuracy and the convergence of the solution of the system of equations will depend largely on how precisely the sharp gradients in the domain of interest can be resolved. With the advances in computer technology, more sophisticated algorithms are available to improve the accuracy and convergence of the solutions. An adaptive grid generation is one of the schemes which can be incorporated into the algorithm to enhance the capability of numerical modeling. It is equivalent to putting intelligence into the algorithm to optimize the use of computer memory. With this scheme, the finite difference domain of the flow field called the grid does neither have to be very fine nor strategically placed at the location of sharp gradients. The grid is self adapting as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by taking the refinement part of grid generation out of the hands of computational fluid dynamics (CFD) specialists and place it into the computer algorithm itself.
A methodology for constraining power in finite element modeling of radiofrequency ablation.
Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng
2017-07-01
Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for the electric potential augmented with the constraint of constant power were solved simultaneously using the Newton-Raphson algorithm. Three validation problems were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant-power RFA with a firm mathematical basis and opens a pathway for achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
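In symbols, the constant-power constraint described above can be sketched as follows (notation ours, not necessarily the paper's): the potential satisfies a Laplace-type equation with temperature-dependent conductivity, the electrode voltage is itself an unknown, and the product of that voltage with the total current through the electrode surface is held at the prescribed power.

```latex
% Sketch of the constant-power formulation (illustrative notation only).
\begin{align*}
  \nabla\cdot\bigl(\sigma(T)\,\nabla\phi\bigr) &= 0 && \text{in } \Omega, \\
  \phi &= V_e \ \text{(unknown)} && \text{on the electrode surface } \Gamma_e, \\
  V_e \int_{\Gamma_e} \sigma(T)\,\nabla\phi\cdot\mathbf{n}\;\mathrm{d}\Gamma &= P_0
  && \text{(constant-power constraint, enforced via a Lagrange multiplier).}
\end{align*}
```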
On flows of viscoelastic fluids under threshold-slip boundary conditions
NASA Astrophysics Data System (ADS)
Baranovskii, E. S.
2018-03-01
We investigate a boundary-value problem for the steady isothermal flow of an incompressible viscoelastic fluid of Oldroyd type in a 3D bounded domain with impermeable walls. We use the Fujita threshold-slip boundary condition. This condition states that the fluid can slip along a solid surface when the shear stresses reach a certain critical value; otherwise the slipping velocity is zero. Assuming that the flow domain is not rotationally symmetric, we prove an existence theorem for the corresponding slip problem in the framework of weak solutions. The proof uses methods for solving variational inequalities with pseudo-monotone operators and convex functionals, the method of introduction of auxiliary viscosity, as well as a passage-to-limit procedure based on energy estimates of approximate solutions, Korn’s inequality, and compactness arguments. Also, some properties and estimates of weak solutions are established.
Spacelike matching to null infinity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zenginoglu, Anil; Tiglio, Manuel
2009-07-15
We present two methods to include the asymptotic domain of a background spacetime in null directions for numerical solutions of evolution equations so that both the radiation extraction problem and the outer boundary problem are solved. The first method is based on the geometric conformal approach, the second is a coordinate based approach. We apply these methods to the case of a massless scalar wave equation on a Kerr spacetime. Our methods are designed to allow existing codes to reach the radiative zone by including future null infinity in the computational domain with relatively minor modifications. We demonstrate the flexibility of the methods by considering both Boyer-Lindquist and ingoing Kerr coordinates near the black hole. We also confirm numerically predictions concerning tail decay rates for scalar fields at null infinity in Kerr spacetime due to Hod for the first time.
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
Chen, Jianhui; Liu, Ji; Ye, Jieping
2013-01-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and an Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in details. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
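The two building blocks of such a projected gradient scheme can be sketched as follows (minimal versions for illustration, not the authors' implementation): soft-thresholding for the sparse (l1) part and Euclidean projection onto a trace-norm ball for the low-rank part, the latter reducing to a simplex-type projection of the singular values.

```python
# Building blocks for a projected gradient scheme on sparse + low-rank models.
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_1 (entrywise soft-thresholding)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def project_trace_norm_ball(X, radius):
    """Euclidean projection of X onto {Z : ||Z||_* <= radius}: project the
    singular values onto the l1 ball of the given radius."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if s.sum() <= radius:
        return X
    u = np.sort(s)[::-1]                       # singular values, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - radius) / (np.arange(len(u)) + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)    # shrinkage threshold
    return U @ np.diag(np.maximum(s - theta, 0.0)) @ Vt
```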
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.
Chen, Jianhui; Liu, Ji; Ye, Jieping
2012-02-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and an Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in details. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.
Extending substructure based iterative solvers to multiple load and repeated analyses
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1993-01-01
Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--often also called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
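The reuse idea can be illustrated with a simple serial sketch that is not the paper's method: previous solutions span a subspace onto which each new right-hand side is first projected in the K-inner product (a Galerkin coarse solve), and conjugate gradients then starts from that coarse solution rather than from scratch.

```python
# Conceptual sketch of solution-subspace reuse across repeated right-hand sides.
# A is a symmetric positive definite matrix (dense ndarray or SciPy sparse);
# rhs_list is a sequence of right-hand-side vectors.
import numpy as np
from scipy.sparse.linalg import cg

def solve_with_reuse(A, rhs_list, atol=1e-10):
    n = A.shape[0]
    W = np.zeros((n, 0))                      # stored solution directions
    solutions = []
    for b in rhs_list:
        if W.shape[1] > 0:
            AW = A @ W
            # Galerkin (K-orthogonal) projection onto span(W): coarse correction.
            coeffs = np.linalg.lstsq(W.T @ AW, W.T @ b, rcond=None)[0]
            x0 = W @ coeffs
        else:
            x0 = np.zeros(n)
        x, info = cg(A, b, x0=x0, atol=atol)  # refine from the coarse start
        solutions.append(x)
        W = np.column_stack([W, x])           # enrich the reuse subspace
    return solutions
```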
Merchán-Naranjo, Jessica; Boada, Leticia; del Rey-Mejías, Ángel; Mayoral, María; Llorente, Cloe; Arango, Celso; Parellada, Mara
2016-01-01
Studies of executive function in autism spectrum disorder without intellectual disability (ASD-WID) patients are contradictory. We assessed a wide range of executive functioning cognitive domains in a sample of children and adolescents with ASD-WID and compared them with age-, sex-, and intelligence quotient (IQ)-matched healthy controls. Twenty-four ASD-WID patients (mean age 12.8±2.5 years; 23 males; mean IQ 99.20±18.81) and 32 healthy controls (mean age 12.9±2.7 years; 30 males; mean IQ 106.81±11.02) were recruited. Statistically significant differences were found in all cognitive domains assessed, with better performance by the healthy control group: attention (U=185.0; P=.0005; D=0.90), working memory (T51.48=2.597; P=.006; D=0.72), mental flexibility (U=236.0; P=.007; D=0.67), inhibitory control (U=210.0; P=.002; D=0.71), and problem solving (U=261.0; P=0.021; D=0.62). These statistically significant differences were also found after controlling for IQ. Children and adolescents with ASD-WID have difficulties transforming and mentally manipulating verbal information, longer response latency, attention problems (difficulty set shifting), trouble with automatic response inhibition and problem solving, despite having normal IQ. Considering the low executive functioning profile found in those patients, we recommend a comprehensive intervention including work on non-social problems related to executive cognitive difficulties. Copyright © 2015 SEP y SEPB. Published by Elsevier España. All rights reserved.