Sample records for model problem based

  1. Improving mathematical problem solving ability through problem-based learning and authentic assessment for the students of Bali State Polytechnic

    NASA Astrophysics Data System (ADS)

    Darma, I. K.

    2018-01-01

    This research aimed to determine: 1) the difference in mathematical problem solving ability between students facilitated with the problem-based learning model and those facilitated with the conventional learning model, 2) the difference in mathematical problem solving ability between students facilitated with the authentic assessment model and those facilitated with the conventional assessment model, and 3) the interaction effect between the learning model and the assessment model on mathematical problem solving. The research was conducted in Bali State Polytechnic using a 2×2 factorial experimental design with a sample of 110 students. The data were collected using a theoretically and empirically validated test; the instruments were validated using Aiken’s content-validity technique and item analysis, and the data were then analyzed using analysis of variance (ANOVA). The analysis shows that the students facilitated with the problem-based learning and authentic assessment models obtained the highest average scores, in both concept understanding and mathematical problem solving. The hypothesis tests show that, significantly: 1) there is a difference in mathematical problem solving ability between students facilitated with the problem-based learning model and those facilitated with the conventional learning model, 2) there is a difference in mathematical problem solving ability between students facilitated with the authentic assessment model and those facilitated with the conventional assessment model, and 3) there is an interaction effect between the learning model and the assessment model on mathematical problem solving. To improve the effectiveness of mathematics learning, combining the problem-based learning model with authentic assessment can be considered as a classroom learning model.
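
The 2×2 factorial analysis this abstract describes can be sketched in a few lines. The scores and factor labels below are invented for illustration and only mirror the design (learning model × assessment model); they are not the study's data.

```python
# Illustrative two-way ANOVA for a balanced 2x2 factorial design.
import numpy as np
from scipy import stats

def two_way_anova_balanced(cells):
    """cells[i][j] holds the scores for level i of factor A and level j of B."""
    cells = np.asarray(cells, dtype=float)   # shape (2, 2, n)
    a, b, n = cells.shape
    grand = cells.mean()
    row_means = cells.mean(axis=(1, 2))      # factor A level means
    col_means = cells.mean(axis=(0, 2))      # factor B level means
    cell_means = cells.mean(axis=2)

    ss_a = n * b * np.sum((row_means - grand) ** 2)
    ss_b = n * a * np.sum((col_means - grand) ** 2)
    ss_ab = n * np.sum((cell_means
                        - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ss_err = np.sum((cells - cell_means[..., None]) ** 2)

    df_err = a * b * (n - 1)
    out = {}
    for name, ss, df in [("A", ss_a, a - 1), ("B", ss_b, b - 1),
                         ("AxB", ss_ab, (a - 1) * (b - 1))]:
        f = (ss / df) / (ss_err / df_err)
        out[name] = (f, stats.f.sf(f, df, df_err))  # F statistic and p-value
    return out

# Factor A: learning model (PBL vs conventional);
# Factor B: assessment model (authentic vs conventional). Scores invented.
scores = [[[9, 11], [7, 9]],   # PBL row
          [[6, 8],  [4, 6]]]   # conventional row
result = two_way_anova_balanced(scores)
for effect, (f, p) in result.items():
    print(f"{effect}: F = {f:.2f}, p = {p:.3f}")
```

With these constructed scores the main effects come out F = 9 (learning) and F = 4 (assessment), with a zero interaction term by design.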

  2. Case Problems for Problem-Based Pedagogical Approaches: A Comparative Analysis

    ERIC Educational Resources Information Center

    Dabbagh, Nada; Dass, Susan

    2013-01-01

    A comparative analysis of 51 case problems used in five problem-based pedagogical models was conducted to examine whether there are differences in their characteristics and the implications of such differences on the selection and generation of ill-structured case problems. The five pedagogical models were: situated learning, goal-based scenario,…

  3. The Implementation and Evaluation of a Project-Oriented Problem-Based Learning Module in a First Year Engineering Programme

    ERIC Educational Resources Information Center

    McLoone, Seamus C.; Lawlor, Bob J.; Meehan, Andrew R.

    2016-01-01

    This paper describes how a circuits-based project-oriented problem-based learning educational model was integrated into the first year of a Bachelor of Engineering in Electronic Engineering programme at Maynooth University, Ireland. While many variations of problem based learning exist, the presented model is closely aligned with the model used in…

  4. A Model for Ubiquitous Serious Games Development Focused on Problem Based Learning

    ERIC Educational Resources Information Center

    Dorneles, Sandro Oliveira; da Costa, Cristiano André; Rigo, Sandro José

    2015-01-01

    The possibility of using serious games with problem-based learning opens up huge opportunities to connect the experiences of daily life of students with learning. In this context, this article presents a model for serious and ubiquitous games development, focusing on problem based learning methodology. The model allows teachers to create games…

  5. The implementation of multiple intelligences based teaching model to improve mathematical problem solving ability for student of junior high school

    NASA Astrophysics Data System (ADS)

    Fasni, Nurli; Fatimah, Siti; Yulanda, Syerli

    2017-05-01

    This research aims to determine: whether the mathematical problem solving ability of students who have learned mathematics using the Multiple Intelligences based teaching model is higher than that of students who have learned mathematics using cooperative learning; the improvement in the mathematical problem solving ability of students who have learned mathematics using the Multiple Intelligences based teaching model; the improvement in the mathematical problem solving ability of students who have learned mathematics using cooperative learning; and the attitude of the students towards the Multiple Intelligences based teaching model. The method employed is a quasi-experiment controlled by a pre-test and post-test. The population of this research is all grade VII students of SMP Negeri 14 Bandung in the even semester of 2013/2014, from which two classes were taken as the samples. One class was taught using the Multiple Intelligences based teaching model and the other was taught using cooperative learning. The data were obtained from a test of mathematical problem solving, a scale questionnaire on student attitudes, and observation. The results show that the mathematical problem solving ability of the students who learned mathematics using the Multiple Intelligences based teaching model is higher than that of the students who learned using cooperative learning; that the mathematical problem solving ability of both groups is at an intermediate level; and that the students showed a positive attitude towards learning mathematics using the Multiple Intelligences based teaching model. As a recommendation for future researchers, the Multiple Intelligences based teaching model can be tested on other subjects and other abilities.

  6. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model, including uncertainty sets and fuzzy factors. Since these problems are not well defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean-absolute deviation and performing equivalent transformations.
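
The mean-absolute-deviation (MAD) substitution the abstract ends with is the classic trick for turning a mean-risk portfolio problem into a linear program. A minimal sketch under invented return scenarios (not the paper's formulation):

```python
# Minimize mean absolute deviation subject to a target expected return.
import numpy as np
from scipy.optimize import linprog

# Return scenarios (rows) for three assets; all numbers invented.
returns = np.array([
    [0.02,  0.010,  0.03],
    [0.00,  0.015, -0.01],
    [0.01,  0.012,  0.04],
    [0.03,  0.013,  0.02],
])
T, n = returns.shape
mu = returns.mean(axis=0)            # expected returns per asset
dev = returns - mu                   # deviations from the mean
target = 0.014                       # required expected portfolio return

# Variables x = [w_1..w_n, d_1..d_T]; minimize (1/T) sum_t d_t.
c = np.concatenate([np.zeros(n), np.full(T, 1.0 / T)])

# d_t >= +/-(dev_t . w)  <=>  +/-(dev_t . w) - d_t <= 0;  and  -mu.w <= -target.
I = np.eye(T)
A_ub = np.vstack([
    np.hstack([dev, -I]),
    np.hstack([-dev, -I]),
    np.concatenate([-mu, np.zeros(T)])[None, :],
])
b_ub = np.concatenate([np.zeros(2 * T), [-target]])
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # weights sum to 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + T))
w = res.x[:n]
print("weights:", np.round(w, 3), " MAD:", round(res.fun, 5))
```

Each auxiliary variable d_t bounds the absolute deviation in scenario t from both sides, which is exactly the linearization that makes the deterministic equivalent tractable.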

  7. A dependency-based modelling mechanism for problem solving

    NASA Technical Reports Server (NTRS)

    London, P.

    1978-01-01

    The paper develops a technique of dependency net modeling which relies on an explicit representation of justifications for beliefs held by the problem solver. Using these justifications, the modeling mechanism is able to determine the relevant lines of inference to pursue during problem solving. Three particular problem-solving difficulties which may be handled by the dependency-based technique are discussed: (1) subgoal violation detection, (2) description binding, and (3) maintaining a consistent world model.
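
The justification bookkeeping this abstract describes resembles a truth-maintenance system: each belief records the beliefs that justify it, and retracting a premise invalidates everything downstream, which is how subgoal violations surface. A toy sketch (all belief names invented):

```python
# Toy dependency net: beliefs, justifications, and transitive retraction.
class DependencyNet:
    def __init__(self):
        self.justifications = {}   # belief -> list of supporting beliefs
        self.held = set()

    def assert_belief(self, belief, supports=()):
        """Hold a belief only if all of its justifications are currently held."""
        if all(s in self.held for s in supports):
            self.justifications[belief] = list(supports)
            self.held.add(belief)

    def retract(self, belief):
        """Retract a belief and, transitively, every belief that depends on it."""
        if belief not in self.held:
            return
        self.held.discard(belief)
        for dependent, supports in list(self.justifications.items()):
            if belief in supports and dependent in self.held:
                self.retract(dependent)

net = DependencyNet()
net.assert_belief("block-A-clear")                            # premise
net.assert_belief("can-pick-A", supports=["block-A-clear"])
net.assert_belief("goal-reachable", supports=["can-pick-A"])
net.retract("block-A-clear")                                  # premise fails
print(sorted(net.held))   # every dependent belief is gone too
```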

  8. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    ERIC Educational Resources Information Center

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLAB™. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…

  9. Middle School Children's Problem-Solving Behavior: A Cognitive Analysis from a Reading Comprehension Perspective

    ERIC Educational Resources Information Center

    Pape, Stephen J.

    2004-01-01

    Many children read mathematics word problems and directly translate them to arithmetic operations. More sophisticated problem solvers transform word problems into object-based or mental models. Subsequent solutions are often qualitatively different because these models differentially support cognitive processing. Based on a conception of problem…

  10. Model-based conifer crown surface reconstruction from multi-ocular high-resolution aerial imagery

    NASA Astrophysics Data System (ADS)

    Sheng, Yongwei

    2000-12-01

    Tree crown parameters such as width, height, shape and crown closure are desirable in forestry and ecological studies, but they are time-consuming and labor-intensive to measure in the field. The stereoscopic capability of high-resolution aerial imagery provides a way to reconstruct crown surfaces. Existing photogrammetric algorithms designed to map terrain surfaces, however, cannot adequately extract crown surfaces, especially for steep conifer crowns. Considering crown surface reconstruction in the broader context of tree characterization from aerial images, we develop a rigorous perspective tree image formation model to bridge image-based tree extraction and crown surface reconstruction, and an integrated model-based approach to conifer crown surface reconstruction. Based on the fact that most conifer crowns take a solid geometric form, conifer crowns are modeled as generalized hemi-ellipsoids. Both automatic and semi-automatic approaches to optimal tree model development from multi-ocular images are investigated. The semi-automatic 3D tree interpreter developed in this thesis is able to efficiently extract reliable tree parameters and tree models in complicated tree stands. The thesis starts with a sophisticated stereo matching algorithm and incorporates tree models to guide stereo matching. The following critical problems are addressed in the model-based surface reconstruction process: (1) surface model composition from tree models, (2) occlusion in disparity prediction from tree models, (3) integration of the predicted disparities into image matching, (4) reduction of tree model edge effects on the disparity map, (5) occlusion in orthophoto production, and (6) foreshortening in image matching, which is very serious for conifer crown surfaces. Solutions to these problems are necessary for successful crown surface reconstruction. The model-based approach was applied to recover the canopy surface of a dense redwood stand using tri-ocular high-resolution images scanned from 1:2,400 aerial photographs. The results demonstrate the approach's ability to reconstruct complicated stands. The model-based approach proposed in this thesis is potentially applicable to other surface recovery problems with a priori knowledge about the objects.

  11. A Comparison of Filter-based Approaches for Model-based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Saha, Bhaskar; Goebel, Kai

    2012-01-01

    Model-based prognostics approaches use domain knowledge about a system and its failure modes through the use of physics-based models. Model-based prognosis is generally divided into two sequential problems: a joint state-parameter estimation problem, in which, using the model, the health of a system or component is determined based on the observations; and a prediction problem, in which, using the model, the state-parameter distribution is simulated forward in time to compute the end of life and remaining useful life. The first problem is typically solved through the use of a state observer, or filter. The choice of filter depends on the assumptions that may be made about the system and on the desired algorithm performance. In this paper, we review three separate filters for the solution of the first problem: the Daum filter, an exact nonlinear filter; the unscented Kalman filter, which approximates nonlinearities through the use of a deterministic sampling method known as the unscented transform; and the particle filter, which approximates the state distribution using a finite set of discrete, weighted samples, called particles. Using a centrifugal pump as a case study, we conduct a number of simulation-based experiments investigating the performance of the different algorithms as applied to prognostics.
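
Of the three filters reviewed, the particle filter is the simplest to sketch. Below is a minimal version for the state-estimation step, applied to an invented scalar "health" degradation model (not the paper's pump model); the wear rate, noise levels, and data are all illustrative.

```python
# Minimal bootstrap particle filter tracking a degrading health state.
import numpy as np

rng = np.random.default_rng(0)
N = 2000                           # number of particles
wear, q, r = 0.01, 0.002, 0.05     # wear rate, process noise, observation noise

# Simulate the true system and its noisy measurements.
true_h, ys = [1.0], []
for _ in range(50):
    true_h.append(true_h[-1] - wear + rng.normal(0, q))
    ys.append(true_h[-1] + rng.normal(0, r))

particles = rng.normal(1.0, 0.05, N)                     # initial belief
for y in ys:
    particles = particles - wear + rng.normal(0, q, N)   # predict step
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)        # Gaussian likelihood
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)       # resample step
est = particles.mean()
print(f"estimated health {est:.3f}, true health {true_h[-1]:.3f}")
```

The prediction problem the abstract mentions would then propagate these particles forward (without measurement updates) until they cross a failure threshold, giving a remaining-useful-life distribution.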

  12. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  13. Modeling Students' Problem Solving Performance in the Computer-Based Mathematics Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2017-01-01

    Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
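
Regularized logistic regression of the kind this abstract describes can be sketched with plain gradient descent; the features (prior skill, hints used) and labels below are synthetic stand-ins, not the paper's data.

```python
# L2-regularized logistic regression on synthetic problem-solving data.
import numpy as np

rng = np.random.default_rng(42)
n = 400
# Features: prior skill (standardized), hints used (more hints -> lower odds).
X = np.column_stack([rng.normal(0, 1, n), rng.poisson(2, n)])
logits = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

Xb = np.column_stack([np.ones(n), X])        # add intercept column
w = np.zeros(3)
lam = 1.0                                    # L2 regularization strength
for _ in range(2000):                        # plain gradient descent
    p = 1 / (1 + np.exp(-Xb @ w))
    reg = lam * np.r_[0.0, w[1:]] / n        # intercept not regularized
    w -= 0.1 * (Xb.T @ (p - y) / n + reg)
acc = ((1 / (1 + np.exp(-Xb @ w)) > 0.5) == (y > 0.5)).mean()
print("weights:", np.round(w, 2), "training accuracy:", round(acc, 3))
```

The fitted model predicts whether a student solves a problem; the recovered signs (positive for skill, negative for hints) match the generating process.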

  14. Project-Based Learning Using Discussion and Lesson-Learned Methods via Social Media Model for Enhancing Problem Solving Skills

    ERIC Educational Resources Information Center

    Jewpanich, Chaiwat; Piriyasurawong, Pallop

    2015-01-01

    This research aims to 1) develop the project-based learning using discussion and lesson-learned methods via social media model (PBL-DLL SoMe Model) used for enhancing problem solving skills of undergraduate in education student, and 2) evaluate the PBL-DLL SoMe Model used for enhancing problem solving skills of undergraduate in education student.…

  15. The Use of Problem-Based Learning Model to Improve Quality Learning Students Morals

    ERIC Educational Resources Information Center

    Nurzaman

    2017-01-01

    The model of moral cultivation in MTsN Bangunharja is implemented using three methods: classical cultivation methods; extra-curricular activities in the form of religious activities, scouting, sports, and Islamic art; and habituation of morals. Problem-based learning models in MTsN Bangunharja are applied using the following steps: find the problem, define the…

  16. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    NASA Astrophysics Data System (ADS)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One of the urgent directions for enhancing the efficiency of production processes and enterprise activity management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, a case-based reasoning approach is adopted. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions is represented. A generalized problem-oriented corporate knowledge base structural chart and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases to support decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows the suggested models to be used to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested that the developed models be used to build corporate knowledge bases for enterprises that operate engineering systems and networks at large production facilities.

  17. Computer-Mediated Assessment of Higher-Order Thinking Development

    ERIC Educational Resources Information Center

    Tilchin, Oleg; Raiyn, Jamal

    2015-01-01

    Solving complicated problems in a contemporary knowledge-based society requires higher-order thinking (HOT). The most productive way to encourage development of HOT in students is through use of the Problem-based Learning (PBL) model. This model organizes learning by solving corresponding problems relative to study courses. Students are directed…

  18. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved by reformulating the rational problem into a polynomial form.

  19. Restart Operator Meta-heuristics for a Problem-Oriented Evolutionary Strategies Algorithm in Inverse Mathematical MISO Modelling Problem Solving

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.

    2017-02-01

    This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which determines the system with multiple inputs and a single output, together with a vector of initial point coordinates. The described problem is complex and multimodal, and for this reason the proposed evolutionary-based optimization technique, which is oriented towards the dynamical system identification problem, was applied. To improve its performance, an algorithm restart operator was implemented.

  20. Problem Solving: Physics Modeling-Based Interactive Engagement

    ERIC Educational Resources Information Center

    Ornek, Funda

    2009-01-01

    The purpose of this study was to investigate how modeling-based instruction combined with an interactive-engagement teaching approach promotes students' problem solving abilities. I focused on students in a calculus-based introductory physics course, based on the matter and interactions curriculum of Chabay & Sherwood (2002) at a large state…

  1. Representative Structural Element - A New Paradigm for Multi-Scale Structural Modeling

    DTIC Science & Technology

    2016-07-05

    developed by NASA Glenn Research Center based on Aboudi’s micromechanics theories [5] that provides a wide range of capabilities for modeling… to use appropriate models for related problems based on the capability of corresponding approaches. Moreover, the analyses will give a general… interface of heterogeneous materials but also help engineers…

  2. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
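
The measure-theoretic formulation can be caricatured with simple Monte Carlo: push prior samples of the uncertain parameter through a forward model, then estimate the probability of the parameter set (the pre-image) corresponding to an observed interval of the output quantity of interest. The forward map below is an invented one-line stand-in, not ADCIRC.

```python
# Sampling-based sketch of a stochastic inverse problem for Manning's n.
import numpy as np

rng = np.random.default_rng(3)

def forward(n_manning):
    """Toy stand-in: peak water elevation decreasing with roughness."""
    return 2.0 / (1.0 + 5.0 * n_manning)

samples = rng.uniform(0.01, 0.1, 100_000)        # prior on Manning's n
outputs = forward(samples)
observed = (outputs > 1.3) & (outputs < 1.5)     # observed elevation interval
p = observed.mean()                              # probability of the pre-image
post = samples[observed]
print(f"P(pre-image) = {p:.3f}, n in [{post.min():.3f}, {post.max():.3f}]")
```

The "condition" notion in the abstract concerns how well such output intervals pin down the parameter field; here a single interval already isolates a sub-range of n.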

  3. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.

  4. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    PubMed

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
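
The policy experiment in this abstract can be caricatured with a minimal agent-based sketch: computer agents either follow a usage policy (sleep when idle) or not, and total energy is compared. Every number below is invented.

```python
# Tiny agent-based comparison of energy use with and without an idle policy.
import numpy as np

rng = np.random.default_rng(7)

def simulate(num_agents, policy, hours=24):
    total_wh = 0.0
    for _ in range(num_agents):
        for _ in range(hours):
            busy = rng.random() < 0.3           # 30% of hours are active use
            if busy:
                total_wh += 120                 # watt-hours while active
            else:
                total_wh += 5 if policy else 60 # policy: sleep when idle
    return total_wh / 1000                      # kWh

no_policy = simulate(50, policy=False)
with_policy = simulate(50, policy=True)
print(f"no policy: {no_policy:.1f} kWh, with policy: {with_policy:.1f} kWh")
```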

  5. The Spiritual and Social Attitudes of Students towards Integrated Problem Based Learning Models

    ERIC Educational Resources Information Center

    Bachtiar, Suhaedir; Zubaidah, Siti; Corebima, Aloysius Duran; Indriwati, Sri Endah

    2018-01-01

    This research aimed to investigate the spiritual and social attitudes of students with different academic abilities towards four educational models: problem based learning (PBL); numbered heads together (NHT); integrated PBL and NHT; and multi-strategies model. This quasi-experimental investigation employed a pretest-posttest non-equivalent…

  6. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
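
GAMBIT's model-building machinery is beyond an abstract, but the basic mixed-integer evolutionary idea it builds on — mutate continuous and integer variables jointly, then select the best — can be sketched as a plain (mu + lambda) loop. The objective and all parameters below are invented.

```python
# Toy mixed-integer evolutionary search (not GAMBIT itself).
import numpy as np

rng = np.random.default_rng(1)

def f(x, k):
    """Mixed-integer objective; optimum at x = 2.5 (continuous), k = 4 (integer)."""
    return (x - 2.5) ** 2 + (k - 4) ** 2

pop = [(rng.uniform(-10, 10), int(rng.integers(-10, 10))) for _ in range(20)]
for _ in range(200):
    children = []
    for x, k in pop:
        children.append((x + rng.normal(0, 0.3),            # continuous mutation
                         int(k + rng.integers(-1, 2))))     # integer +/-1 mutation
    pop = sorted(pop + children, key=lambda s: f(*s))[:20]  # (mu + lambda) selection
best = pop[0]
print("best solution:", best, "objective:", f(*best))
```

A model-based EA like GAMBIT would additionally learn dependence structure between the two variable types instead of mutating them independently.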

  7. Theory of the decision/problem state

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A theory of the decision-problem state was introduced and elaborated. Starting with the basic model of a decision-problem condition, an attempt was made to explain how a major decision-problem may consist of subsets of decision-problem conditions composing different condition sequences. In addition, the basic classical decision-tree model was modified to allow for the introduction of a series of characteristics that may be encountered in an analysis of a decision-problem state. The resulting hierarchical model reflects the unique attributes of the decision-problem state. The basic model of a decision-problem condition was used as a base to evolve a more complex model that is more representative of the decision-problem state and may be used to initiate research on decision-problem states.
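
The classical decision-tree model this abstract takes as its base evaluates by backward induction: decision nodes pick the best branch, chance nodes take the probability-weighted average. A small sketch with invented payoffs and probabilities:

```python
# Backward-induction evaluation of a classical decision tree.
def evaluate(node):
    kind = node["kind"]
    if kind == "outcome":
        return node["value"]
    values = [evaluate(child) for child in node["children"]]
    if kind == "decision":
        return max(values)                       # choose the best action
    probs = node["probs"]                        # chance node
    return sum(p * v for p, v in zip(probs, values))

tree = {
    "kind": "decision",
    "children": [
        {"kind": "chance", "probs": [0.6, 0.4],  # risky action
         "children": [{"kind": "outcome", "value": 100},
                      {"kind": "outcome", "value": -50}]},
        {"kind": "outcome", "value": 30},        # safe alternative
    ],
}
print("expected value of the best decision:", evaluate(tree))  # → 40
```

The hierarchical model the abstract proposes would attach further condition-specific attributes to such nodes; this sketch shows only the baseline evaluation the modification starts from.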

  8. The Role of Model Building in Problem Solving and Conceptual Change

    ERIC Educational Resources Information Center

    Lee, Chwee Beng; Jonassen, David; Teo, Timothy

    2011-01-01

    This study examines the effects of the activity of building systems models for school-based problems on problem solving and on conceptual change in elementary science classes. During a unit on the water cycle in an Asian elementary school, students constructed systems models of the water cycle. We found that representing ill-structured problems as…

  9. Teaching Problem-Solving and Critical-Thinking Skills Online Using Problem-Based Learning

    ERIC Educational Resources Information Center

    Romero, Liz; Orzechowski, Agnes; Rahatka, Ola

    2014-01-01

    The availability of technological tools is promoting a shift toward more student-centered online instruction. This article describes the implementation of a Problem-Based Learning (PBL) model and the technological tools used to meet the expectations of the model as well as the needs of the students. The end product is a hybrid course with eight…

  10. A Comparison of Reasoning Processes in a Collaborative Modelling Environment: Learning about genetics problems using virtual chat

    NASA Astrophysics Data System (ADS)

    Pata, Kai; Sarapuu, Tago

    2006-09-01

    This study investigated the possible activation of different types of model-based reasoning processes in two learning settings, and the influence of various types of reasoning on the development of learners’ problem representations. Changes in 53 students’ problem representations of a genetics issue were analysed while they worked with different modelling tools in a synchronous network-based environment. The discussion log-files were used for the “microgenetic” analysis of reasoning types. For studying the stages of students’ problem representation development, individual pre- and post-essays and the students’ utterances during two reasoning phases were used. An approach for mapping problem representations was developed. Characterizing the elements of mental models and their reasoning level enabled the description of five hierarchical categories of problem representations. Learning in the exploratory and experimental settings was registered as a shift towards more complex stages of problem representation in genetics. The effect of different types of reasoning could be observed as the divergent development of problem representations within the hierarchical categories.

  11. Coupled electromagnetic-thermodynamic simulations of microwave heating problems using the FDTD algorithm.

    PubMed

    Kopyt, Paweł; Celuch, Małgorzata

    2007-01-01

    A practical implementation of a hybrid simulation system capable of modeling coupled electromagnetic-thermodynamic problems typical in microwave heating is described. The paper presents two approaches to modeling such problems. Both are based on an FDTD-based commercial electromagnetic solver coupled to an external thermodynamic analysis tool required for calculations of heat diffusion. The first approach utilizes a simple FDTD-based thermal solver while in the second it is replaced by a universal commercial CFD solver. The accuracy of the two modeling systems is verified against the original experimental data as well as the measurement results available in literature.
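
The thermal half of such a coupled simulation is a diffusion solver driven by the electromagnetically deposited power. A 1D explicit finite-difference sketch with invented material constants and an assumed exponentially decaying absorbed-power profile (not the paper's FDTD system):

```python
# Explicit 1D heat diffusion with a microwave-like volumetric source term.
import numpy as np

nx, dx, dt = 50, 1e-3, 0.01           # grid cells, cell size (m), time step (s)
alpha = 1.4e-7                        # thermal diffusivity, water-like load (m^2/s)
rho_cp = 4.2e6                        # volumetric heat capacity (J/m^3/K)

T = np.full(nx, 20.0)                 # initial temperature (deg C)
x = np.arange(nx) * dx
q = 5e6 * np.exp(-x / 0.01)           # absorbed microwave power density (W/m^3)

r = alpha * dt / dx**2                # explicit scheme stable only for r < 0.5
assert r < 0.5
for _ in range(1000):                 # 10 s of heating
    lap = np.zeros(nx)
    lap[1:-1] = T[:-2] - 2 * T[1:-1] + T[2:]
    T = T + r * lap + dt * q / rho_cp
print(f"surface: {T[0]:.1f} C, depth: {T[-1]:.1f} C")
```

In the full coupling, the electromagnetic solver would recompute q whenever the temperature-dependent material properties change, which is exactly the feedback loop the hybrid system in the abstract implements.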

  12. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach

    PubMed Central

    2016-01-01

    Background Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results The experiments demonstrated two important results: first, a CABC-based modeling approach such as agent-based modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235

  13. An introductory pharmacy practice experience based on a medication therapy management service model.

    PubMed

    Agness, Chanel F; Huynh, Donna; Brandt, Nicole

    2011-06-10

    To implement and evaluate an introductory pharmacy practice experience (IPPE) based on the medication therapy management (MTM) service model. Patient Care 2 is an IPPE that introduces third-year pharmacy students to the MTM service model. Students interacted with older adults to identify medication-related problems and develop recommendations using core MTM elements. Course outcome evaluations were based on the number of documented medication-related problems, recommendations, and student reviews. Fifty-seven older adults participated in the course. Students identified 52 medication-related problems and 66 medical problems, and documented 233 recommendations relating to health maintenance and wellness, pharmacotherapy, referrals, and education. Students reported having adequate experience performing core MTM elements. Patient Care 2 may serve as an experiential learning model for pharmacy schools to teach the core elements of MTM and provide patient care services to the community.

  14. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter, and the effect on model predictions is analyzed.
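    A toy version of the measure-theoretic idea can be sketched by sampling a prior on the parameter, pushing the samples through a forward map, and estimating the probability of the parameter set that maps into an observed interval of the output quantity of interest. Everything below (the forward map, the interval, the prior range) is invented for illustration and is not the ADCIRC setup.

```python
import random

random.seed(3)

def forward(n):
    # Hypothetical monotone response: higher roughness lowers the peak level.
    return 3.0 / (1.0 + n)

# Uniform prior on a Manning-type roughness coefficient n.
samples = [random.uniform(0.01, 0.1) for _ in range(100_000)]

# Probability (under the prior) of the set of n values whose predicted peak
# water level lies in the observed interval [2.8, 2.9].
hits = [n for n in samples if 2.8 <= forward(n) <= 2.9]
prob = len(hits) / len(samples)
print(round(prob, 3), round(min(hits), 3), round(max(hits), 3))
```

    The accepted samples approximate the inverse image of the output interval, and their relative frequency approximates its probability under the prior measure.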

  15. An Instructional Model to Support Problem-Based Historical Inquiry: The Persistent Issues in History Network

    ERIC Educational Resources Information Center

    Brush, Thomas; Saye, John

    2014-01-01

    For over a decade, we have collaborated with secondary school history teachers in an evolving line of inquiry that applies research-based propositions to the design and testing of a problem-based learning framework and a set of wise practices that represent a professional teaching knowledge base for implementing a particular model of instruction,…

  16. Student Modeling Based on Problem Solving Times

    ERIC Educational Resources Information Center

    Pelánek, Radek; Jarušek, Petr

    2015-01-01

    Student modeling in intelligent tutoring systems is mostly concerned with modeling correctness of students' answers. As interactive problem solving activities become increasingly common in educational systems, it is useful to focus also on timing information associated with problem solving. We argue that the focus on timing is natural for certain…

  17. Integrating Computers into the Problem-Solving Process.

    ERIC Educational Resources Information Center

    Lowther, Deborah L.; Morrison, Gary R.

    2003-01-01

    Asserts that within the context of problem-based learning environments, professors can encourage students to use computers as problem-solving tools. The ten-step Integrating Technology for InQuiry (NteQ) model guides professors through the process of integrating computers into problem-based learning activities. (SWM)

  18. Significance testing of rules in rule-based models of human problem solving

    NASA Technical Reports Server (NTRS)

    Lewis, C. M.; Hammer, J. M.

    1986-01-01

    Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.

  19. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.

  20. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
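    The surrogate idea can be sketched in a few lines: fit a cheap stand-in for the forward model, estimate the modeling error it introduces, inflate the data noise accordingly, and run Metropolis sampling against the surrogate only. The forward map, noise levels, and polynomial surrogate below are illustrative stand-ins, not the GPR full-waveform setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    # Stand-in for an expensive forward solver; monotone so the toy
    # inverse problem has a single solution.
    return np.tanh(m) + m

# Train a fast polynomial surrogate on a modest number of forward runs.
m_train = np.linspace(-3.0, 3.0, 50)
coeffs = np.polyfit(m_train, forward(m_train), deg=7)
surrogate = lambda m: np.polyval(coeffs, m)

# Quantify the surrogate's modeling error and fold it into the noise model.
model_err = np.std(forward(m_train) - surrogate(m_train))
sigma_obs = 0.1
sigma = np.hypot(sigma_obs, model_err)

m_true = 1.2
d_obs = forward(m_true) + rng.normal(0.0, sigma_obs)

def log_like(m):
    return -0.5 * ((d_obs - surrogate(m)) / sigma) ** 2

# Metropolis sampling that only ever calls the cheap surrogate.
m, samples = 0.0, []
for _ in range(20000):
    prop = m + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_like(prop) - log_like(m):
        m = prop
    samples.append(m)
post_mean = float(np.mean(samples[2000:]))
print(round(post_mean, 2))
```

    The posterior samples concentrate near the true parameter, while every likelihood evaluation costs a polynomial evaluation rather than a full forward solve.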

  1. Promoting Post-Formal Thinking in a U.S. History Survey Course: A Problem-Based Approach

    ERIC Educational Resources Information Center

    Wynn, Charles T.; Mosholder, Richard S.; Larsen, Carolee A.

    2016-01-01

    This article presents a problem-based learning (PBL) model for teaching a college U.S. history survey course (U.S. history since 1890) designed to promote postformal thinking skills and identify and explain thinking systems inherent in adult complex problem-solving. We also present the results of a study in which the outcomes of the PBL model were…

  2. Working Towards a Scalable Model of Problem-Based Learning Instruction in Undergraduate Engineering Education

    ERIC Educational Resources Information Center

    Mantri, Archana

    2014-01-01

    The intent of the study presented in this paper is to show that the model of problem-based learning (PBL) can be made scalable by designing curriculum around a set of open-ended problems (OEPs). The detailed statistical analysis of the data collected to measure the effects of traditional and PBL instructions for three courses in Electronics and…

  3. Problem-Based Learning: Modifying the Medical School Model for Teaching High School Economics.

    ERIC Educational Resources Information Center

    Maxwell, Nan L.; Bellisimo, Yolanda; Mergendoller, John

    2001-01-01

    Provides background information on the problem-based learning (PBL) model used in medical education that was adapted for high school economics. Describes the high school economics curriculum and outline the stages of the PBL model using examples from a unit called "The High School Food Court." Discusses the design considerations. (CMK)

  4. Problem-Based Learning--Buginese Cultural Knowledge Model--Case Study: Teaching Mathematics at Junior High School

    ERIC Educational Resources Information Center

    Cheriani, Cheriani; Mahmud, Alimuddin; Tahmir, Suradi; Manda, Darman; Dirawan, Gufran Darma

    2015-01-01

    This study aims to determine the differences in learning output by using Problem Based Model combines with the "Buginese" Local Cultural Knowledge (PBL-Culture). It is also explores the students activities in learning mathematics subject by using PBL-Culture Models. This research is using Mixed Methods approach that combined quantitative…

  5. Mathematical modeling of moving boundary problems in thermal energy storage

    NASA Technical Reports Server (NTRS)

    Solomon, A. D.

    1980-01-01

    The capability for predicting the performance of thermal energy storage (TES) subsystems and components using PCMs, based on mathematical and physical models, is developed. Mathematical models of the dynamic thermal behavior of TES subsystems using PCMs, based on solutions of the moving boundary thermal conduction problem and on heat and mass transfer engineering correlations, are also discussed.

  6. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Basis selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users’ experience. Also, for sequential data assimilation problems, the bases kept in the PCE expansion remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrates great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
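    For reference, the EnKF analysis step that PCKF is compared against can be sketched for a toy two-parameter problem. The linear observation operator, ensemble size, and noise values below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, obs_op, obs_var):
    """One EnKF analysis step; ensemble has shape (n_members, n_params)."""
    predicted = np.array([obs_op(m) for m in ensemble])
    # Perturbed-observation variant: each member sees a noisy copy of the datum.
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), len(ensemble))
    cov = np.cov(ensemble.T, predicted)      # stacks predicted as the last row
    P_xy = cov[:-1, -1]                      # parameter-data cross-covariance
    P_yy = cov[-1, -1] + obs_var             # innovation variance
    gain = P_xy / P_yy                       # Kalman gain (scalar datum)
    return ensemble + np.outer(perturbed - predicted, gain)

obs_op = lambda m: m[0] + m[1]               # toy linear observation
prior = rng.normal(0.0, 1.0, size=(300, 2))
posterior = enkf_update(prior, obs=1.0, obs_op=obs_op, obs_var=0.01)
print(round(float(np.mean([obs_op(m) for m in posterior])), 2))
```

    After the update, the ensemble's predicted datum is pulled from its prior mean near zero toward the observation; PCKF performs the analogous update on PCE coefficients instead of ensemble members.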

  7. DEVELOPMENT OF A PORTABLE SOFTWARE LANGUAGE FOR PHYSIOLOGICALLY-BASED PHARMACOKINETIC (PBPK) MODELS

    EPA Science Inventory

    The PBPK modeling community has had a long-standing problem with modeling software compatibility. The numerous software packages used for PBPK models are, at best, minimally compatible. This creates problems ranging from model obsolescence due to software support discontinuation...

  8. Constraint-Based Local Search for Constrained Optimum Paths Problems

    NASA Astrophysics Data System (ADS)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  9. Students’ errors in solving combinatorics problems observed from the characteristics of RME modeling

    NASA Astrophysics Data System (ADS)

    Meika, I.; Suryadi, D.; Darhim

    2018-01-01

    This article was written based on the learning evaluation results of students’ errors in solving combinatorics problems, observed from the characteristics of Realistic Mathematics Education (RME), namely modeling. A descriptive method was employed, involving 55 students from two international-based pilot state senior high schools in Banten. The findings of the study suggested that the students still committed errors in simplifying the problem (as much as 46%), errors in making the mathematical model (horizontal mathematization, 60%), errors in finishing the mathematical model (vertical mathematization, 65%), and errors in interpretation as well as validation (66%).

  10. A Descriptive Model of Information Problem Solving while Using Internet

    ERIC Educational Resources Information Center

    Brand-Gruwel, Saskia; Wopereis, Iwan; Walraven, Amber

    2009-01-01

    This paper presents the IPS-I-model: a model that describes the process of information problem solving (IPS) in which the Internet (I) is used to search for information. The IPS-I-model is based on three studies, in which students in secondary and (post) higher education were asked to solve information problems while thinking aloud. In-depth analyses…

  11. A majorized Newton-CG augmented Lagrangian-based finite element method for 3D restoration of geological models

    NASA Astrophysics Data System (ADS)

    Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia

    2016-04-01

    In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it as a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.

  12. Problem-Based Learning: A Critical Rationalist Perspective

    ERIC Educational Resources Information Center

    Parton, Graham; Bailey, Richard

    2008-01-01

    Although problem-based learning is being adopted by many institutions around the world as an effective model of learning in higher education, there is a surprising lack of critique in the problem-based learning literature in relation to its philosophical characteristics. This paper explores epistemology as a starting point for investigating the…

  13. Enhancing Large-Group Problem-Based Learning in Veterinary Medical Education.

    ERIC Educational Resources Information Center

    Pickrell, John A.

    This project for large-group, problem-based learning at Kansas State University College of Veterinary Medicine developed 47 case-based videotapes that are used to model clinical conditions, and it involved veterinary practitioners in formulating true practice cases into student learning opportunities. Problem-oriented, computer-assisted diagnostic…

  14. A simulation-based approach for solving assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyu

    2017-09-01

    The assembly line balancing problem is directly related to production efficiency; it has been discussed since the last century, and many researchers are still studying this topic. In this paper, the assembly line problem is studied by establishing a mathematical model and by simulation. First, the model for determining the smallest production beat under a given number of work stations is analyzed. Based on this model, the exponential smoothing approach is applied to improve the algorithm's efficiency. After this basic work, the gas Stirling engine assembly line balancing problem is discussed as a case study. Both algorithms are implemented in the Lingo programming environment, and the simulation results demonstrate the validity of the new methods.
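    Ignoring precedence constraints, the station-assignment part of line balancing reduces to bin packing, for which a largest-candidate (first-fit decreasing) heuristic is a common baseline. The task times and cycle time below are invented for illustration; this is not the paper's model.

```python
def balance_line(task_times, cycle_time):
    """Assign tasks to stations without exceeding the cycle time, taking the
    largest remaining task first (precedence constraints are ignored here)."""
    stations = []
    for t in sorted(task_times, reverse=True):
        for station in stations:
            if sum(station) + t <= cycle_time:
                station.append(t)   # fits in an existing station
                break
        else:
            stations.append([t])    # open a new station
    return stations

tasks = [4, 3, 7, 2, 5, 6, 1]
result = balance_line(tasks, cycle_time=10)
print(result)  # 3 stations, each within the cycle time of 10
```

    The heuristic gives an upper bound on the number of stations; exact approaches such as the paper's Lingo model add precedence and optimize the production beat directly.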

  15. Using Technology to Facilitate and Enhance Project-based Learning in Mathematical Physics

    NASA Astrophysics Data System (ADS)

    Duda, Gintaras

    2011-04-01

    Problem-based and project-based learning are two pedagogical techniques that have several clear advantages over traditional instructional methods: 1) both techniques are active and student centered, 2) students confront real-world and/or highly complex problems, and 3) such exercises model the way science and engineering are done professionally. This talk will present an experiment in project/problem-based learning in a mathematical physics course. The group project in the course involved modeling a zombie outbreak of the type seen in AMC's ``The Walking Dead.'' Students researched, devised, and solved their mathematical models for the spread of zombie-like infection. Students used technology in all stages; in fact, since analytical solutions to the models were often impossible, technology was a necessary and critical component of the challenge. This talk will explore the use of technology in general in problem and project-based learning and will detail some specific examples of how technology was used to enhance student learning in this course. A larger issue of how students use the Internet to learn will also be explored.
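    A simplified susceptible-zombie-removed (SZR) model of the kind the students built can be integrated with forward Euler in a few lines. The rate constants and initial population here are illustrative only, and the resurrection term of the full model is omitted.

```python
def szr(S0, Z0, beta, alpha, dt, steps):
    """Forward-Euler integration of a simplified SZR outbreak model:
    S' = -beta*S*Z, Z' = (beta - alpha)*S*Z, R' = alpha*S*Z."""
    S, Z, R = S0, Z0, 0.0
    history = []
    for _ in range(steps):
        dS = -beta * S * Z              # susceptibles bitten
        dZ = beta * S * Z - alpha * S * Z  # new zombies minus destroyed ones
        dR = alpha * S * Z              # zombies permanently removed
        S, Z, R = S + dt * dS, Z + dt * dZ, R + dt * dR
        history.append((S, Z, R))
    return history

hist = szr(S0=500.0, Z0=1.0, beta=0.001, alpha=0.0005, dt=0.1, steps=2000)
S_end, Z_end, R_end = hist[-1]
print(round(S_end, 1), round(Z_end, 1), round(R_end, 1))
```

    With these parameters the infection term dominates the removal term, so the susceptible population collapses, which mirrors the qualitative conclusion such models reach; the total population is conserved by construction.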

  16. The effectiveness of clinical problem-based learning model of medico-jurisprudence education on general law knowledge for Obstetrics/Gynecological interns.

    PubMed

    Chang, Hui-Chin; Wang, Ning-Yen; Ko, Wen-Ru; Yu, You-Tsz; Lin, Long-Yau; Tsai, Hui-Fang

    2017-06-01

    The most effective method of teaching medico-jurisprudence to medical students is unclear. This study was designed to evaluate the effectiveness of a problem-based learning (PBL) model for teaching medico-jurisprudence in a clinical setting on the General Law Knowledge (GLK) of medical students. Senior medical students attending either a campus-based law curriculum or the Obstetrics/Gynecology (Ob/Gyn) clinical-setting morning meeting from February to July 2015 were enrolled. A validated questionnaire comprising 45 questions was completed before and after the law education. The interns attending the clinical-setting, small-group, improvisation-based medico-jurisprudence PBL sessions had significantly better GLK scores after the study period than the students attending the campus-based medical law course. The PBL teaching model of medico-jurisprudence is an ideal alternative pedagogy in the medical law education curriculum. Copyright © 2017. Published by Elsevier B.V.

  17. A Model of e-Learning by Constructivism Approach Using Problem-Based Learning to Develop Thinking Skills for Students in Rajabhat University

    ERIC Educational Resources Information Center

    Shutimarrungson, Werayut; Pumipuntu, Sangkom; Noirid, Surachet

    2014-01-01

    This research aimed to develop a model of e-learning using Problem-Based Learning (PBL) to develop thinking skills for students in Rajabhat University. The research is divided into three phases through the e-learning model via PBL with a constructivism approach, as follows: Phase 1 was to study characteristics and factors through the model to…

  18. Validity of Students Worksheet Based Problem-Based Learning for 9th Grade Junior High School in living organism Inheritance and Food Biotechnology.

    NASA Astrophysics Data System (ADS)

    Jefriadi, J.; Ahda, Y.; Sumarmin, R.

    2018-04-01

    Preliminary research showed that the student worksheets used by teachers have several disadvantages: the worksheets direct learners to conduct an investigation without first orienting them to a problem or providing stimulation, they do not provide concrete images, and the presentation of activities does not follow any of the learning models recommended by the curriculum. To address these problems, a student worksheet based on problem-based learning was developed. This is development research using the Plomp model, whose phases are preliminary research, development, and assessment. The instruments used in data collection include observation/interview sheets, a self-evaluation instrument, and validity instruments. Expert validation of the student worksheet yielded a valid result, with an average value of 80.1%. The problem-based-learning student worksheet for 9th grade junior high school on living organism inheritance and food biotechnology is thus in the valid category.

  19. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

    In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed by incorporating the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
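    The receding-horizon structure can be illustrated on a double integrator, with the QP reduced to a regularized least-squares problem. The state constraints and the neural-network QP solver of the paper are omitted; the matrices, horizon length, and target are invented for the sketch.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # position/velocity, dt = 0.1
B = np.array([[0.005], [0.1]])
N = 15                                   # prediction horizon

def mpc_step(x, target):
    # Predicted positions: pos_k = (A^(k+1) x)[0] + sum_j (A^(k-j) B)[0] u_j
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1)[0:1] for k in range(N)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    # Minimize ||G u - (target - Phi x)||^2 plus a small control penalty
    # (the unconstrained analogue of the QP in the guidance law).
    H = G.T @ G + 1e-3 * np.eye(N)
    u = np.linalg.solve(H, G.T @ (target - Phi @ x))
    return u[0]                          # receding horizon: apply only u_0

x = np.array([0.0, 0.0])
for _ in range(100):
    u = mpc_step(x, target=5.0)
    x = A @ x + B.flatten() * u
print(round(x[0], 2))
```

    Re-solving the horizon problem at every step and applying only the first control is the defining receding-horizon move; adding inequality constraints turns each step into the constrained QP the paper solves with a primal-dual neural network.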

  20. Conceptualization of an R&D Based Learning-to-Innovate Model for Science Education

    ERIC Educational Resources Information Center

    Lai, Oiki Sylvia

    2013-01-01

    The purpose of this research was to conceptualize an R&D-based learning-to-innovate (LTI) model. The problem to be addressed was the lack of a theoretical LTI model, which would inform science pedagogy. The absorptive capacity (ACAP) lens was adopted to untangle the R&D LTI phenomenon into four learning processes: problem-solving via…

  1. The Effect of Inquiry Training Learning Model Based on Just in Time Teaching for Problem Solving Skill

    ERIC Educational Resources Information Center

    Turnip, Betty; Wahyuni, Ida; Tanjung, Yul Ifda

    2016-01-01

    One of the factors that can support successful learning activity is the use of learning models suited to the objectives to be achieved. This study aimed to analyze the differences in physics problem-solving ability between students taught with the Inquiry Training learning model based on Just In Time Teaching (JITT) and students taught with conventional learning using the cooperative model…

  2. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including:
    * occurrence or occupancy models for estimating species distribution
    * abundance models based on many sampling protocols, including distance sampling
    * capture-recapture models with individual effects
    * spatial capture-recapture models based on camera trapping and related methods
    * population and metapopulation dynamic models
    * models of biodiversity, community structure and dynamics
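    The first model family mentioned, occupancy models with imperfect detection, is small enough to sketch end to end: simulate detection histories, then recover the occupancy probability psi and detection probability p from the marginal likelihood. The parameter values and the crude grid search below are illustrative and are not the book's code.

```python
import math
import random
from collections import Counter

random.seed(7)

# Simulate a basic occupancy model: each of n_sites is occupied with
# probability psi; an occupied site yields a detection on each of J visits
# with probability p (all values here are illustrative).
psi_true, p_true, n_sites, J = 0.6, 0.4, 400, 5
detections = []
for _ in range(n_sites):
    occupied = random.random() < psi_true
    y = sum(random.random() < p_true for _ in range(J)) if occupied else 0
    detections.append(y)
counts = Counter(detections)

def neg_log_lik(psi, p):
    # Marginal likelihood: a site with zero detections is either occupied
    # but never detected, or simply unoccupied.
    ll = 0.0
    for y, n in counts.items():
        like = psi * math.comb(J, y) * p**y * (1 - p) ** (J - y)
        if y == 0:
            like += 1 - psi
        ll += n * math.log(like)
    return -ll

# Crude grid search; a real analysis would use MLE routines or MCMC.
grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = min(((a, b) for a in grid for b in grid),
                     key=lambda ab: neg_log_lik(*ab))
print(psi_hat, p_hat)
```

    Marginalizing over the latent occupancy state is exactly the hierarchical move the book builds on: an explicit state model combined with an explicit observation model.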

  3. Quantum computation with coherent spin states and the close Hadamard problem

    NASA Astrophysics Data System (ADS)

    Adcock, Mark R. A.; Høyer, Peter; Sanders, Barry C.

    2016-04-01

    We study a model of quantum computation based on the continuously parameterized yet finite-dimensional Hilbert space of a spin system. We explore the computational powers of this model by analyzing a pilot problem we refer to as the close Hadamard problem. We prove that the close Hadamard problem can be solved in the spin system model with arbitrarily small error probability in a constant number of oracle queries. We conclude that this model of quantum computation is suitable for solving certain types of problems. The model is effective for problems where symmetries between the structure of the information associated with the problem and the structure of the unitary operators employed in the quantum algorithm can be exploited.

  4. Inverse problems in the design, modeling and testing of engineering systems

    NASA Technical Reports Server (NTRS)

    Alifanov, Oleg M.

    1991-01-01

    Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.

  5. Problem-Posing in Education: Transformation of the Practice of the Health Professional.

    ERIC Educational Resources Information Center

    Casagrande, L. D. R.; Caron-Ruffino, M.; Rodrigues, R. A. P.; Vendrusculo, D. M. S.; Takayanagui, A. M. M.; Zago, M. M. F.; Mendes, M. D.

    1998-01-01

    Studied the use of a problem-posing model in health education. The model based on the ideas of Paulo Freire is presented. Four innovative experiences of teaching-learning in environmental and occupational health and patient education are reported. Notes that the problem-posing model has the capability to transform health-education practice.…

  6. Systematizing Scaffolding for Problem-Based Learning: A View from Case-Based Reasoning

    ERIC Educational Resources Information Center

    Tawfik, Andrew A.; Kolodner, Janet L.

    2016-01-01

    Current theories and models of education often argue that instruction is best administered when knowledge is situated within a context. Problem-based learning (PBL) provides an approach to education that has particularly powerful affordances for learning disciplinary content and practices by solving authentic problems within a discipline. However,…

  7. Problem-Based Educational Game Becomes Student-Centered Learning Environment

    ERIC Educational Resources Information Center

    Rodkroh, Pornpimon; Suwannatthachote, Praweenya; Kaemkate, Wannee

    2013-01-01

    Problem-based educational games are able to provide a fun and motivating environment for teaching and learning of certain subjects. However, most educational game models do not address the learning elements of problem-based educational games. This study aims to synthesize and to propose the important elements to facilitate the learning process and…

  8. A review on the modelling of collection and distribution of blood donation based on vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Azezan, Nur Arif; Ramli, Mohammad Fadzli; Masran, Hafiz

    2017-11-01

    In this paper, we discuss the literature on blood collection and distribution based on the vehicle routing problem. This problem emerges when the process from collection to stocking must be completed in a timely manner. We also modified the mathematical model so that it suits the general collection of blood. A brief discussion of its algorithms and solution methods is also given in this paper.
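    A minimal routing baseline of the kind such models are compared against is the nearest-neighbour construction heuristic. The depot and donation-site coordinates below are invented for illustration; capacities and time windows, central to real blood logistics, are omitted.

```python
import math

def nearest_neighbour_route(coords):
    """Single-vehicle tour: start at the blood bank (site 0), always drive to
    the closest unvisited donation site, then return to the depot."""
    unvisited = set(range(1, len(coords)))
    route, current = [0], 0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(coords[current], coords[j]))
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    route.append(0)  # return to the depot
    return route

def route_length(route, coords):
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(route, route[1:]))

sites = [(0, 0), (2, 1), (5, 0), (1, 4), (4, 3)]  # depot first
tour = nearest_neighbour_route(sites)
print(tour, round(route_length(tour, sites), 2))
```

    Construction heuristics like this give quick feasible tours that exact or metaheuristic VRP methods then improve, which is where the timeliness constraints of blood collection enter.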

  9. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
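
    A fixed-cost GBLP of the type described can be written in generic form (our notation, an assumption about the paper's exact model):

```latex
\min \; \sum_{j \in J} f_j\, y_j \;+\; \sum_{i \in I}\sum_{j \in J} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{j \in J} x_{ij} = 1 \;\;\forall i \in I,
\qquad
x_{ij} \le y_j \;\;\forall i \in I,\, j \in J,
\qquad
x_{ij},\, y_j \in \{0,1\},
```

where \(y_j\) opens grid cell \(j\) at fixed cost \(f_j\) and \(x_{ij}\) assigns demand point \(i\) to an open cell. A decomposition heuristic of the kind the abstract mentions would, presumably, solve smaller ILPs of this form over subregions of the grid rather than the full instance at once.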

  10. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.

  11. A modified active appearance model based on an adaptive artificial bee colony.

    PubMed

    Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali

    2014-01-01

    Active appearance model (AAM) is one of the most popular model-based approaches and has been used extensively to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can be applied to resolve this problem, yet applying optimization brings its own difficulties. Hence, in this paper we propose an AAM-based face recognition technique capable of resolving the fitting problem of AAM by introducing a new adaptive ABC (artificial bee colony) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed face recognition technique performs effectively in terms of face recognition accuracy.

  12. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher-order derivatives have been proposed. Euler's elastica model is one such higher-order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher-order models are complicated and computationally expensive. In this paper, we present an efficient minimization algorithm based upon graph cuts for minimizing the energy in Euler's elastica model, by simplifying the problem to that of solving a sequence of easily graph-representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
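
    The two energies being contrasted, in their common continuous forms (standard notation; the discretizations used in the paper differ):

```latex
E_{\mathrm{TV}}(u) = \int_\Omega |\nabla u|\,dx \;+\; \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx,
\qquad
E_{\mathrm{elastica}}(u) = \int_\Omega \bigl(a + b\,\kappa^2\bigr)\,|\nabla u|\,dx \;+\; \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx,
```

where \(f\) is the noisy image and \(\kappa = \nabla\cdot\!\left(\nabla u / |\nabla u|\right)\) is the curvature of the level line through each point; penalizing \(\kappa^2\) along level lines suppresses the staircasing that pure TV permits.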

  13. Development of Learning Management Model Based on Constructivist Theory and Reasoning Strategies for Enhancing the Critical Thinking of Secondary Students

    ERIC Educational Resources Information Center

    Chaipichit, Dudduan; Jantharajit, Nirat; Chookhampaeng, Sumalee

    2015-01-01

    The objectives of this research were to study issues around the management of science learning, problems that are encountered, and to develop a learning management model to address those problems. The development of that model and the findings of its study were based on Constructivist Theory and literature on reasoning strategies for enhancing…

  14. Multi-Fidelity Framework for Modeling Combustion Instability

    DTIC Science & Technology

    2016-07-27

    generated from the reduced-domain dataset. Evaluations of the framework are performed based on simplified test problems for a model rocket combustor…

  15. Multi-agent cooperation pursuit based on an extension of AALAADIN organisational model

    NASA Astrophysics Data System (ADS)

    Souidi, Mohammed El Habib; Songhao, Piao; Guo, Li; Lin, Chang

    2016-11-01

    An approach to the cooperative pursuit of multiple mobile targets based on a multi-agent system is discussed. In this kind of problem, the pursuit process is divided into two tasks. The first (the coalition problem) addresses the formation of the pursuit team. To achieve this mission, we use an innovative method based on dynamic organisation and reorganisation of the pursuers' groups. We introduce our coalition strategy, extended from the organisational agent, group, role model, by assigning to the groups an access mechanism inspired by fuzzy logic principles. The second task (the motion problem) treats the pursuers' motion strategy. To manage this problem we apply the principles of the Markov decision process. Simulation results show the feasibility and validity of the proposal.

  16. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    PubMed

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

    To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent the numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability, since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial start state, generate an action on demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate near-optimal solutions in about 100 s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
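
    The sample-based planning idea in this abstract can be sketched on a toy admissions problem. Everything below (capacity, costs, base policy) is invented for illustration and is not the authors' model:

```python
import random

# Toy elective-admissions MDP (illustrative only, not the paper's model):
# state = number of occupied beds out of CAPACITY; action = patients admitted.
CAPACITY = 10
ACTIONS = [0, 1, 2, 3]

def step(state, action, rng):
    """Admit `action` patients, then a random number of patients depart."""
    occupied = min(CAPACITY, state + action)
    overflow = max(0, state + action - CAPACITY)   # turned-away patients
    discharged = rng.randint(0, 2)
    next_state = max(0, occupied - discharged)
    # Admissions earn reward; overflow and idle beds are penalised.
    reward = action - 5 * overflow - 0.1 * (CAPACITY - occupied)
    return next_state, reward

def rollout_value(state, rng, horizon=20):
    """Estimate a state's value by simulating a fixed base policy."""
    total = 0.0
    for _ in range(horizon):
        state, reward = step(state, 1, rng)   # base policy: admit one per day
        total += reward
    return total

def plan(state, rng, n_rollouts=200):
    """One-step lookahead: pick the action with the best sampled return.

    Only states reachable from `state` are ever simulated, which is the
    key scalability advantage over enumerating the full model.
    """
    best_action, best_value = None, float("-inf")
    for action in ACTIONS:
        value = 0.0
        for _ in range(n_rollouts):
            nxt, reward = step(state, action, rng)
            value += reward + rollout_value(nxt, rng)
        value /= n_rollouts
        if value > best_value:
            best_action, best_value = action, value
    return best_action

print(plan(5, random.Random(0)))   # an action from ACTIONS
```

    With a full ward (state 10) every admission is pure overflow penalty, so the planner reliably chooses to admit nobody, without ever enumerating the state space.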

  17. Hierarchical calibration and validation of computational fluid dynamics models for solid sorbent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Canhai; Xu, Zhijie; Pan, Wenxiao

    2016-01-01

    To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems with increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among different unit problems performed within the said hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.

  18. Problem-Based Learning in Graduate Management Education: An Integrative Model and Interdisciplinary Application

    ERIC Educational Resources Information Center

    Brownell, Judi; Jameson, Daphne A.

    2004-01-01

    This article develops a model of problem-based learning (PBL) and shows how PBL has been used for a decade in one graduate management program. PBL capitalizes on synergies among cognitive, affective, and behavioral learning. Although management education usually privileges cognitive learning, affective learning is equally important. By focusing on…

  19. Feedback and Feed-Forward for Promoting Problem-Based Learning in Online Learning Environments

    ERIC Educational Resources Information Center

    Webb, Ashley; Moallem, Mahnaz

    2016-01-01

    Purpose: The study aimed to (1) review the literature to construct conceptual models that could guide instructional designers in developing problem/project-based learning environments while applying effective feedback strategies, (2) use the models to design, develop, and implement an online graduate course, and (3) assess the efficiency of the…

  20. The effect of discovery learning and problem-based learning on middle school students’ self-regulated learning

    NASA Astrophysics Data System (ADS)

    Miatun, A.; Muntazhimah

    2018-01-01

    The aim of this research was to determine the effect of learning models on mathematics achievement viewed from students' self-regulated learning. The learning models compared were discovery learning and problem-based learning. The population was all grade VIII students of junior high schools in Boyolali regency. The samples were students of SMPN 4 Boyolali, SMPN 6 Boyolali, and SMPN 4 Mojosongo. The instruments used were mathematics achievement tests and a self-regulated learning questionnaire. The data were analyzed using unbalanced two-way ANOVA. The conclusions were as follows: (1) discovery learning gives better achievement than problem-based learning. (2) Achievement of students who have high self-regulated learning was better than that of students who have medium or low self-regulated learning. (3) For discovery learning, achievement of students who have high self-regulated learning was better than that of students who have medium or low self-regulated learning. For problem-based learning, students who have high or medium self-regulated learning have the same achievement. (4) For students who have high self-regulated learning, discovery learning gives better achievement than problem-based learning. For students who have medium or low self-regulated learning, both learning models give the same achievement.

  1. Development of syntax of intuition-based learning model in solving mathematics problems

    NASA Astrophysics Data System (ADS)

    Yeni Heryaningsih, Nok; Khusna, Hikmatul

    2018-01-01

    The aim of the research was to produce a syntax of the Intuition Based Learning (IBL) model for solving mathematics problems, for improving mathematics students' achievement, that is valid, practical and effective. The subjects of the research were two classes of grade XI students of SMAN 2 Sragen, Central Java. The research was of the Research and Development (R&D) type. The development process adopted the Plomp and Borg & Gall development models: a preliminary investigation step, a design step, a realization step, and an evaluation and revision step. The development steps were as follows: (1) collected information and studied theories in the preliminary investigation step, studying intuition, learning model development, students' condition, and topic analysis; (2) designed a syntax that could bring up intuition in solving mathematics problems and then designed the research instruments, with several phases that could bring up intuition: a preparation phase, an incubation phase, an illumination phase and a verification phase; (3) realized the syntax of the Intuition Based Learning model that had been designed as the first draft; (4) submitted the first draft to the validators for validation; (5) tested the syntax of the Intuition Based Learning model in classrooms to establish the effectiveness of the syntax; (6) conducted a Focus Group Discussion (FGD) to evaluate the results of the syntax model testing in the classrooms, and then revised the syntax of the IBL model. The result of the research was a syntax of the IBL model for solving mathematics problems that is valid, practical and effective.
The syntax of the IBL model in the classroom was: (1) opening with apperception, motivation and building students' positive perceptions; (2) the teacher explains the material generally; (3) group discussion of the material; (4) the teacher gives students mathematics problems; (5) doing exercises individually to solve mathematics problems with steps that could bring up students' intuition: preparation, incubation, illumination, and verification; (6) closure with a review of what students have learned or giving homework.

  2. Edgar Schein's Process versus Content Consultation Models.

    ERIC Educational Resources Information Center

    Rockwood, Gary F.

    1993-01-01

    Describes Schein's three models of consultation based on assumptions inherent in different helping styles: purchase of expertise and doctor-patient models, which focus on content of organization problems; and process consultation model, which focuses on how organizational problems are solved. Notes that Schein has suggested that consultants begin…

  3. Service-based Solutions.

    ERIC Educational Resources Information Center

    Cummings, Lynda; Winston, Michael

    1998-01-01

    Describes the Solutions model used at Shelley High School in Idaho which gives students the opportunity to gain practical experience while tackling community problems. This approach is built on the three fundamentals of an integrated curriculum, a problem-solving focus, and service-based learning. Sample problems include increasing certain trout…

  4. Gaze data reveal distinct choice processes underlying model-based and model-free reinforcement learning

    PubMed Central

    Konovalov, Arkady; Krajbich, Ian

    2016-01-01

    Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
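
    The distinction drawn here can be made concrete on a toy task: a model-based learner plans on the known transition and reward structure, while a model-free learner only reinforces sampled actions, yet both converge to the same action values. The MDP below is a minimal stand-in, not the two-stage task used in the study:

```python
import random

# Tiny deterministic MDP (illustrative, not the paper's two-stage task):
# two states, two actions; taking action a always moves to state a.
R = [[0.0, 1.0], [2.0, 0.0]]   # R[s][a]
GAMMA = 0.9

def model_based_q(iters=200):
    """Value iteration on the *known* model (the forward-looking learner)."""
    Q = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(iters):
        # Synchronous Bellman backup: next state after action a is state a.
        Q = [[R[s][a] + GAMMA * max(Q[a]) for a in (0, 1)] for s in (0, 1)]
    return Q

def model_free_q(steps=5000, seed=0):
    """Q-learning from sampled transitions only (no access to R or dynamics)."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]
    s = 0
    for _ in range(steps):
        a = rng.randint(0, 1)             # pure exploration
        r, s_next = R[s][a], a            # environment step, hidden from agent
        # Learning rate 1.0 is safe here because the MDP is deterministic.
        Q[s][a] = r + GAMMA * max(Q[s_next])
        s = s_next
    return Q

qb, qf = model_based_q(), model_free_q()
gap = max(abs(qb[s][a] - qf[s][a]) for s in (0, 1) for a in (0, 1))
print(gap)   # both learners agree on the action values
```

    The two strategies end at the same values; what differs, as the abstract argues, is *when* the computation happens: the model-based learner could have committed to a choice before ever acting.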

  5. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    PubMed Central

    Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi

    2016-01-01

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003

  6. Optimal blood glucose control in diabetes mellitus treatment using dynamic programming based on Ackerman’s linear model

    NASA Astrophysics Data System (ADS)

    Pradanti, Paskalia; Hartono

    2018-03-01

    Determination of the insulin injection dose in diabetes mellitus treatment can be considered an optimal control problem. This article aims to simulate optimal blood glucose control for a patient with diabetes mellitus. The blood glucose regulation of a diabetic patient is represented by Ackerman’s Linear Model. The problem is then solved using the dynamic programming method. The desired blood glucose level is obtained by minimizing a performance index in Lagrange form. The results show that dynamic programming based on Ackerman’s Linear Model solves the problem quite well.
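
    The dynamic-programming approach can be sketched for a linear-quadratic stand-in. Ackerman's model is a two-state linear system; the one-state version below, with made-up (non-clinical) parameters, keeps the backward Riccati recursion readable:

```python
# Scalar stand-in for the glucose-regulation problem (illustrative only).
# Dynamics: x[k+1] = a*x[k] + b*u[k], where x is the glucose deviation from
# the desired level and u the insulin injection rate.
# Quadratic performance index: sum of q*x^2 + r*u^2 over the horizon.
a, b = 0.95, -0.1    # glucose deviation decays slowly; insulin lowers it
q, r = 1.0, 0.1
N = 50               # horizon length

# Backward dynamic-programming (Riccati) recursion for the feedback gains.
P = q                # terminal cost weight
gains = []
for _ in range(N):
    K = (b * P * a) / (r + b * P * b)   # optimal gain at this stage
    P = q + a * P * (a - b * K)         # cost-to-go update
    gains.append(K)
gains.reverse()      # gains[k] is the gain applied at stage k

# Forward simulation from an elevated glucose level.
x = 5.0
for k in range(N):
    u = -gains[k] * x    # optimal feedback dose
    x = a * x + b * u
print(abs(x))            # deviation driven near zero by the end of the horizon
```

    The closed-loop factor `a - b*K` stays below one at every stage, so the deviation contracts toward the desired level, which is the qualitative behaviour the simulation in the paper reports.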

  7. Effects of Problem-Based Learning Model versus Expository Model and Motivation to Achieve for Student's Physic Learning Result of Senior High School at Class XI

    ERIC Educational Resources Information Center

    Prayekti

    2016-01-01

    "Problem-based learning" (PBL) is an innovative learning model that can provide active learning to students, including the motivation to achieve shown by students while learning is in progress. This research aims to determine: (1) the differences in physics learning results for the student group taught by PBL versus the expository…

  8. Problem Solving Under Time-Constraints.

    ERIC Educational Resources Information Center

    Richardson, Michael; Hunt, Earl

    A model of how automated and controlled processing can be mixed in computer simulations of problem solving is proposed. It is based on previous work by Hunt and Lansman (1983), who developed a model of problem solving that could reproduce the data obtained with several attention and performance paradigms, extending production-system notation to…

  9. Junior high school students' cognitive process in solving the developed algebraic problems based on information processing taxonomy model

    NASA Astrophysics Data System (ADS)

    Purwoko, Saad, Noor Shah; Tajudin, Nor'ain Mohd

    2017-05-01

    This study aims to: i) develop problem-solving questions on the Linear Equations System of Two Variables (LESTV) based on the levels of the IPT Model; ii) explain the level of students' skill in information processing in solving LESTV problems; iii) explain students' skill in information processing in solving LESTV problems; and iv) explain students' cognitive process in solving LESTV problems. The study involved three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey method analyzing students' level of information-processing skill; and iii) a qualitative case study method analyzing students' cognitive process. The population of the study was 545 eighth-grade students, represented by a sample of 170 students from five junior high schools in the Hilir Barat Zone, Palembang (Indonesia), chosen using cluster sampling. Fifteen of these students were drawn as a sample for the interview session, which continued until the information obtained was saturated. The data were collected using the LESTV problem-solving test and the interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The finding of this study indicated that students' cognitive process reached only the steps of identifying external sources and executing algorithms in short-term memory fluently. Only 15.29% of students could retrieve type A information and 5.88% could retrieve type B information from long-term memory. The implication was that the developed LESTV problems validated the IPT Model in modelling students' assessment at different levels of the hierarchy.

  10. Video Analysis of a Plucked String: An Example of Problem-based Learning

    NASA Astrophysics Data System (ADS)

    Wentworth, Christopher D.; Buse, Eric

    2009-11-01

    Problem-based learning is a teaching methodology that grounds learning within the context of solving a real problem. Typically the problem initiates learning of concepts rather than simply being an application of the concept, and students take the lead in identifying what must be developed to solve the problem. Problem-based learning in upper-level physics courses can be challenging, because of the time and financial requirements necessary to generate real data. Here, we present a problem that motivates learning about partial differential equations and their solution in a mathematical methods for physics course. Students study a plucked elastic cord using high speed digital video. After creating video clips of the cord motion under different tensions they are asked to create a mathematical model. Ultimately, students develop and solve a model that includes damping effects that are clearly visible in the videos. The digital video files used in this project are available on the web at http://physics.doane.edu .
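
    A minimal version of the model the students arrive at, a damped wave equation solved by finite differences, might look like this (illustrative parameters; in the course project the damping would be fitted to the video data):

```python
# Finite-difference sketch of the damped wave equation for a plucked cord:
#   u_tt = c^2 * u_xx - gamma * u_t,   u(0,t) = u(L,t) = 0
N = 100                  # number of spatial intervals
L, c, gamma = 1.0, 1.0, 0.5
dx = L / N
dt = 0.5 * dx / c        # satisfies the CFL stability condition c*dt/dx <= 1
r2 = (c * dt / dx) ** 2

# Triangular "pluck" initial shape, initially at rest.
u_prev = [min(x, L - x) for x in (i * dx for i in range(N + 1))]
u = u_prev[:]            # zero initial velocity: u(t=0) equals u(t=-dt)

for _ in range(4000):
    u_next = [0.0] * (N + 1)    # fixed ends stay at zero
    for i in range(1, N):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                     - gamma * dt * (u[i] - u_prev[i]))
    u_prev, u = u, u_next

peak = max(abs(v) for v in u)
print(peak)   # well below the initial 0.5 amplitude: the damping is visible
```

    Comparing the decay of `peak` over time against the measured amplitude in the video clips is exactly the kind of model-versus-data confrontation the problem is designed to provoke.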

  11. Dynamic emulation modelling for the optimal operation of water systems: an overview

    NASA Astrophysics Data System (ADS)

    Castelletti, A.; Galelli, S.; Giuliani, M.

    2014-12-01

    Despite sustained increase in computing power over recent decades, computational limitations remain a major barrier to the effective and systematic use of large-scale, process-based simulation models in rational environmental decision-making. Whereas complex models may provide clear advantages when the goal of the modelling exercise is to enhance our understanding of the natural processes, they introduce problems of model identifiability caused by over-parameterization and suffer from high computational burden when used in management and planning problems. As a result, increasing attention is now being devoted to emulation modelling (or model reduction) as a way of overcoming these limitations. An emulation model, or emulator, is a low-order approximation of the process-based model that can be substituted for it in order to solve high resource-demanding problems. In this talk, an overview of emulation modelling within the context of the optimal operation of water systems will be provided. Particular emphasis will be given to Dynamic Emulation Modelling (DEMo), a special type of model complexity reduction in which the dynamic nature of the original process-based model is preserved, with consequent advantages in a wide range of problems, particularly feedback control problems. This will be contrasted with traditional non-dynamic emulators (e.g. response surface and surrogate models) that have been studied extensively in recent years and are mainly used for planning purposes. A number of real world numerical experiences will be used to support the discussion ranging from multi-outlet water quality control in water reservoir through erosion/sedimentation rebalancing in the operation of run-off-river power plants to salinity control in lake and reservoirs.
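
    The emulation idea can be illustrated with the simplest possible static emulator: a lookup table of expensive-model runs plus linear interpolation, which then stands in for the simulator inside an optimisation loop. This is a toy stand-in; the DEMo emulators discussed in the talk are dynamic, reduced-order models:

```python
import math

def expensive_model(release):
    """Stand-in for a process-based simulation (pretend this takes hours)."""
    return math.exp(-release) + 0.5 * release ** 2

# "Fit" the emulator: run the expensive model at a handful of design points.
design = [0.0, 0.5, 1.0, 1.5, 2.0]
table = [(x, expensive_model(x)) for x in design]

def emulator(x):
    """Piecewise-linear interpolation through the stored runs."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            w = (x - x0) / (x1 - x0)
            return (1 - w) * y0 + w * y1
    raise ValueError("outside the fitted range")

# The emulator now substitutes for the simulator in a (toy) planning problem:
# thousands of cheap evaluations instead of thousands of expensive runs.
xs = [i / 100 for i in range(201)]
best = min(xs, key=emulator)
err = max(abs(emulator(x) - expensive_model(x)) for x in xs)
print(best, err)
```

    The approximation error `err` is the price paid for speed; the talk's point is that for management and planning problems this trade is usually worth making, provided the error is kept under control.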

  12. Developing a Blended Learning-Based Method for Problem-Solving in Capability Learning

    ERIC Educational Resources Information Center

    Dwiyogo, Wasis D.

    2018-01-01

    The main objectives of the study were to develop and investigate the implementation of blended learning based method for problem-solving. Three experts were involved in the study and all three had stated that the model was ready to be applied in the classroom. The implementation of the blended learning-based design for problem-solving was…

  13. Task-Analytic Design of Graphic Presentations

    DTIC Science & Technology

    1990-05-18

    An important premise of Larkin and Simon's work is that, when comparing alternative presentations, it is fruitful to characterize graphic-based problem solving using the same information-processing models used to help understand problem solving with other representations [Newell and Simon, 1972]...during execution of graphic presentation-based problem-solving procedures. Chapter 2 reviews other work related to the problem of designing graphic…

  14. A Multi-layer Dynamic Model for Coordination Based Group Decision Making in Water Resource Allocation and Scheduling

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying

    Management of group decision-making is an important issue in water resource management development. To overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in water resource allocation and scheduling based on group decision making. By introducing the scheme-recognized cooperative satisfaction index and the scheme-adjusted rationality index, the proposed model can solve the problem of poor convergence of the multi-round decision-making process in water resource allocation and scheduling. Furthermore, the coordination problem in the limited-resource group decision-making process can be addressed through effective distance-based group conflict resolution. The simulation results show that the proposed model has better convergence than the existing models.

  15. Foundation for Problem-Based Gaming

    ERIC Educational Resources Information Center

    Kiili, Kristian

    2007-01-01

    Educational games may offer a viable strategy for developing students' problem-solving skills. However, the state of art of educational game research does not provide an account for that. Thus, the aim of this research is to develop an empirically allocated model about problem-based gaming that can be utilised to design pedagogically meaningful…

  16. PDE-based geophysical modelling using finite elements: examples from 3D resistivity and 2D magnetotellurics

    NASA Astrophysics Data System (ADS)

    Schaa, R.; Gross, L.; du Plessis, J.

    2016-04-01

    We present a general finite-element solver, escript, tailored to solve geophysical forward and inverse modeling problems in terms of partial differential equations (PDEs) with suitable boundary conditions. Escript’s abstract interface allows geoscientists to focus on solving the actual problem without being experts in numerical modeling. General-purpose finite element solvers have found wide use especially in engineering fields and find increasing application in the geophysical disciplines as these offer a single interface to tackle different geophysical problems. These solvers are useful for data interpretation and for research, but can also be a useful tool in educational settings. This paper serves as an introduction into PDE-based modeling with escript where we demonstrate in detail how escript is used to solve two different forward modeling problems from applied geophysics (3D DC resistivity and 2D magnetotellurics). Based on these two different cases, other geophysical modeling work can easily be realized. The escript package is implemented as a Python library and allows the solution of coupled, linear or non-linear, time-dependent PDEs. Parallel execution for both shared and distributed memory architectures is supported and can be used without modifications to the scripts.

  17. Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model

    NASA Astrophysics Data System (ADS)

    Mejer Hansen, Thomas

    2017-04-01

    Probabilistic formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle both advanced prior information, such as based on geostatistics, and complex non-linear forward physical models can be considered. However, in practice these methods can be associated with huge computational costs that in practice limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error, that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
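
    The core recipe, sampling with a fast approximate forward model while folding a quantified modeling-error variance into the likelihood, can be sketched on a 1D toy problem. All numbers below are invented, and a simple biased linear model plays the role the trained neural network plays in the paper:

```python
import math, random

rng = random.Random(42)

d = 10.0                    # source-receiver distance
s_true = 0.25               # true slowness
sigma_obs = 0.05            # observational noise std
t_obs = d * s_true + rng.gauss(0.0, sigma_obs)   # synthetic travel time

def fast_forward(s):
    """Cheap approximate forward model (here: a 2% systematic error)."""
    return 1.02 * d * s

# Modeling error quantified beforehand (e.g. by comparing the fast model
# with the accurate one on training models): roughly zero-mean, this std.
sigma_model = 0.06
sigma2 = sigma_obs ** 2 + sigma_model ** 2   # combined likelihood variance

def log_post(s):
    if not 0.1 <= s <= 0.4:                  # uniform prior bounds
        return -math.inf
    return -0.5 * (t_obs - fast_forward(s)) ** 2 / sigma2

# Random-walk Metropolis sampling of the posterior.
s, lp = 0.2, log_post(0.2)
samples = []
for _ in range(20000):
    s_prop = s + rng.gauss(0.0, 0.02)
    lp_prop = log_post(s_prop)
    if math.log(rng.random()) < lp_prop - lp:
        s, lp = s_prop, lp_prop
    samples.append(s)

mean = sum(samples[5000:]) / len(samples[5000:])
print(mean)   # posterior centred near the true slowness
```

    Because the likelihood is widened by `sigma_model`, the posterior remains honest about the fast model's imperfection instead of becoming overconfidently biased, which is the probabilistic accounting of modeling error the abstract describes.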

  18. A spline-based parameter estimation technique for static models of elastic structures

    NASA Technical Reports Server (NTRS)

    Dutt, P.; Taasan, S.

    1986-01-01

    The problem of identifying the spatially varying coefficient of elasticity from an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first-order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem, and, based on these results, a spline-based technique is proposed for approximating the unknown coefficient. The convergence of the numerical scheme is established and error estimates are obtained.
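
    A minimal sketch of the idea, assuming a toy elliptic problem and a hat-function (linear spline) basis; the coefficient profile, noise level, and knot count are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "observed" solution u of (a(x) u'(x))' = f(x) on [0, 1]
x = np.linspace(0.05, 0.95, 60)
a_true = 1.0 + 0.5 * x                          # hypothetical elasticity profile
du = np.pi * np.cos(np.pi * x)                  # derivative of observed u = sin(pi x)
F = a_true * du + rng.normal(0, 0.01, x.size)   # noisy data for a(x) * u'(x)

# integrating the equation once gives the first-order relation a(x) u'(x) = F(x);
# expand a in a linear-spline (hat function) basis and solve least squares
knots = np.linspace(0, 1, 8)
h = knots[1] - knots[0]
B = np.maximum(0, 1 - np.abs((x[:, None] - knots[None, :]) / h))
coef, *_ = np.linalg.lstsq(du[:, None] * B, F, rcond=None)
a_fit = B @ coef

err = float(np.max(np.abs(a_fit - a_true)))
```

    Fitting the spline expansion against the data-weighted relation (rather than dividing pointwise by u') avoids blow-up where u' passes through zero.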

  19. Analysis of creative mathematic thinking ability in problem based learning model based on self-regulation learning

    NASA Astrophysics Data System (ADS)

    Munahefi, D. N.; Waluya, S. B.; Rochmad

    2018-03-01

    This research identified the effectiveness of the Problem Based Learning (PBL) model based on Self-Regulation Learning (SRL) on mathematical creative thinking ability and analyzed the mathematical creative thinking of high school students in solving mathematical problems. The population of this study was grade X students of SMA N 3 Klaten. The research method was sequential explanatory: a quantitative stage with a simple random sampling technique, in which two classes were selected randomly, an experimental class taught with the PBL model based on SRL and a control class taught with an expository model. Sample selection at the qualitative stage used a non-probability sampling technique in which 3 students were selected from each of the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students’ mathematical creative thinking ability. Students of low academic level taught with the PBL model based on SRL achieved the aspects of fluency and flexibility. Students of medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well structured. Students of high academic level could reach the aspect of originality.

  20. Students' Problem-Solving in Mechanics: Preference of a Process Based Model.

    ERIC Educational Resources Information Center

    Stavy, Ruth; And Others

    Research in science and mathematics education has indicated that students often use inappropriate models for solving problems because they tend to mentally represent a problem according to surface features instead of referring to scientific concepts and features. The objective of the study reported in this paper was to determine whether 34 Israeli…

  1. Why Inquiry Is Inherently Difficult...and Some Ways to Make It Easier

    ERIC Educational Resources Information Center

    Meyer, Daniel Z.; Avery, Leanne M.

    2010-01-01

    In this article, the authors offer a framework that identifies two critical problems in designing inquiry-based instruction and suggests three models for developing instruction that overcomes those problems. The Protocol Model overcomes the Getting on Board Problem by providing students an initial experience through clearly delineated steps with a…

  2. Application of Model-Based Systems Engineering (MBSE) to Compare Legacy and Future Forces in Mine Warfare (MIW) Missions

    DTIC Science & Technology

    2014-12-01

    Within the model, the list is sorted by using a “nearest neighbor” approach as a solution to the “traveling salesman problem” to create a list of targets in the…

  3. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method to the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique involves burdensome meshing tasks, and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods: the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
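
    The iterative regularized Gauss-Newton step used for such inverse problems has the generic damped form Δp = (JᵀJ + λI)⁻¹ Jᵀr. A toy one-parameter sketch (the exponential forward map and all constants are hypothetical, not the EIT model):

```python
import numpy as np

rng = np.random.default_rng(2)

# toy forward map: d_i = exp(-k * x_i); recover k from noisy observations
x = np.linspace(0.0, 2.0, 20)
k_true = 1.5
d = np.exp(-k_true * x) + rng.normal(0, 0.01, x.size)

k, lam = 0.1, 1e-3                 # initial guess and damping weight
for _ in range(30):
    f = np.exp(-k * x)
    r = d - f                      # residual between data and prediction
    J = (-x * f)[:, None]          # Jacobian of the forward map w.r.t. k
    dk = np.linalg.solve(J.T @ J + lam * np.eye(1), J.T @ r)
    k += float(dk[0])
```

    In a real EIT reconstruction, k becomes a conductivity vector, J the (hybrid-method-computed) Jacobian, and λ a Tikhonov-style regularization weight.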

  4. A fuzzy case based reasoning tool for model based approach to rocket engine health monitoring

    NASA Technical Reports Server (NTRS)

    Krovvidy, Srinivas; Nolan, Adam; Hu, Yong-Lin; Wee, William G.

    1992-01-01

    In this system we develop a fuzzy case-based reasoner that builds a case representation for several past detected anomalies, and we develop case retrieval methods that use fuzzy sets to index a relevant case when a new problem (case) is presented. The choice of fuzzy sets is justified by the uncertainty in the data. The new problem can be solved using knowledge of the model along with the old cases. This system can then be used to generalize the knowledge from previous cases and use this generalization to refine the existing model definition. This in turn can help to detect failures using the model-based algorithms.

  5. Model for the design of distributed data bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ram, S.

    This research focuses on developing a model to solve the File Allocation Problem (FAP). The model integrates two major design issues, namely Concurrency Control and Data Distribution. A central node locking mechanism is incorporated in developing a nonlinear integer programming model. Two solution algorithms are proposed, one of which was implemented in FORTRAN V. The allocation of data bases and programs is examined using this heuristic. Several decision rules were also formulated based on the results of the heuristic. A second, more comprehensive heuristic, based on the knapsack problem, was proposed. The development and implementation of this algorithm has been left as a topic for future research.
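
    The abstract does not give the knapsack-based heuristic itself; a generic ratio-greedy knapsack sketch, with hypothetical file values (access benefit) and weights (size), illustrates the flavor of such an allocation rule:

```python
# ratio-ordered greedy heuristic for the 0/1 knapsack problem
def knapsack_greedy(items, capacity):
    """items: list of (name, value, weight); returns chosen names and total value."""
    chosen, total_v, total_w = [], 0, 0
    # consider items in decreasing value-per-unit-weight order
    for name, v, w in sorted(items, key=lambda t: t[1] / t[2], reverse=True):
        if total_w + w <= capacity:
            chosen.append(name)
            total_v += v
            total_w += w
    return chosen, total_v

# hypothetical example: files with access benefit (value) and size (weight)
files = [("F1", 60, 10), ("F2", 100, 20), ("F3", 120, 30)]
result = knapsack_greedy(files, 50)
```

    The greedy rule is fast but approximate, which is why the abstract calls it a heuristic rather than an exact solution to the allocation problem.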

  6. College Students Solving Chemistry Problems: A Theoretical Model of Expertise

    ERIC Educational Resources Information Center

    Taasoobshirazi, Gita; Glynn, Shawn M.

    2009-01-01

    A model of expertise in chemistry problem solving was tested on undergraduate science majors enrolled in a chemistry course. The model was based on Anderson's "Adaptive Control of Thought-Rational" (ACT-R) theory. The model shows how conceptualization, self-efficacy, and strategy interact and contribute to the successful solution of quantitative,…

  7. A Modified Active Appearance Model Based on an Adaptive Artificial Bee Colony

    PubMed Central

    Othman, Zulaiha Ali

    2014-01-01

    The active appearance model (AAM) is one of the most popular model-based approaches and has been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods are applicable to this problem, although applying optimization raises difficulties of its own. Hence, in this paper we propose an AAM-based face recognition technique that is capable of resolving the fitting problem of the AAM by introducing a new adaptive artificial bee colony (ABC) algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results reveal that the proposed technique performs effectively in terms of accuracy of face recognition. PMID:25165748

  8. Clarification process: Resolution of decision-problem conditions

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A model of a general process which occurs in both decision-making and problem-solving tasks is presented. It is called the clarification model and is highly dependent on information flow. The model addresses the possible constraints of individual differences and experience in achieving success in resolving decision-problem conditions. As indicated, the application of the clarification process model is only necessary for certain classes of the basic decision-problem condition. With less complex decision-problem conditions, certain phases of the model may be omitted. The model may be applied across a wide range of decision-problem conditions. The model consists of two major components: (1) the five-phase prescriptive sequence (based on previous approaches to both concepts) and (2) the information manipulation function (which draws upon current ideas in the areas of information processing, computer programming, memory, and thinking). The two components are linked together to provide a structure that assists in understanding the process of resolving problems and making decisions.

  9. Fuzzy logic and causal reasoning with an 'n' of 1 for diagnosis and treatment of the stroke patient.

    PubMed

    Helgason, Cathy M; Jobe, Thomas H

    2004-03-01

    The current scientific model for clinical decision-making is founded on binary or Aristotelian logic, classical set theory and probability-based statistics. Evidence-based medicine has been established as the basis for clinical recommendations. There is a problem with this scientific model when the physician must diagnose and treat the individual patient. The problem is a paradox: the scientific model of evidence-based medicine is based upon hypotheses aimed at the group, and therefore its conclusions can be extrapolated to the individual patient only to a degree. This extrapolation is dependent upon the expertise of the physician. A fuzzy logic, multivalued-based scientific model allows this expertise to be numerically represented and solves the clinical paradox of evidence-based medicine.

  10. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is then translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is applied to reform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning: in the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method solves the dead point problem effectively.
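
    The baseline APF method that the paper improves upon can be sketched as gradient-style descent on an attractive goal potential plus a repulsive obstacle potential; all gains, radii, and coordinates below are hypothetical:

```python
import numpy as np

# descend the combined potential: attraction to the goal, repulsion
# from each obstacle inside an influence radius d0
def apf_path(start, goal, obstacles, k_att=1.0, k_rep=50.0, d0=1.0,
             step=0.05, max_iter=2000):
    p = np.asarray(start, float)
    goal = np.asarray(goal, float)
    path = [p.copy()]
    for _ in range(max_iter):
        force = k_att * (goal - p)                     # attraction toward goal
        for ob in obstacles:
            diff = p - np.asarray(ob, float)
            d = np.linalg.norm(diff)
            if d < d0:                                 # repulsion only inside d0
                force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
        p = p + step * force / (np.linalg.norm(force) + 1e-9)
        path.append(p.copy())
        if np.linalg.norm(goal - p) < 0.1:
            break
    return np.array(path)

path = apf_path((0.0, 0.0), (5.0, 5.0), obstacles=[(3.0, 2.0)])
```

    The dead point problem the paper addresses arises exactly when the attractive and repulsive forces above cancel; the additional control force is introduced to escape such equilibria.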

  11. Introduction: Occam’s Razor (SOT - Fit for Purpose workshop introduction)

    EPA Science Inventory

    Mathematical models provide important, reproducible, and transparent information for risk-based decision making. However, these models must be constructed to fit the needs of the problem to be solved. A “fit for purpose” model is an abstraction of a complicated problem that allow...

  12. Epistemological beliefs of physics undergraduate and graduate students and faculty in the context of a well-structured and an ill-structured problem

    NASA Astrophysics Data System (ADS)

    Mercan, Fatih C.

    This study examines epistemological beliefs of physics undergraduate and graduate students and faculty in the context of solving a well-structured and an ill-structured problem. The data collection consisted of a think-aloud problem solving session followed by a semi-structured interview conducted with 50 participants, 10 each at the freshman, senior, masters, PhD, and faculty levels. The data analysis involved (a) identification of the range of beliefs about knowledge in the context of the well-structured and the ill-structured problem solving, (b) construction of a framework that unites the individual beliefs identified in each problem context under the same conceptual base, and (c) comparisons of the problem contexts and expertise level groups using the framework. The results of the comparison of the contexts of the well-structured and the ill-structured problem showed that (a) authoritative beliefs about knowledge were expressed in the well-structured problem context, (b) relativistic and religious beliefs about knowledge were expressed in the ill-structured problem context, and (c) rational, empirical, and modeling beliefs about knowledge were expressed in both problem contexts. The results of the comparison of the expertise level groups showed that (a) undergraduates expressed authoritative beliefs about knowledge more than graduate students did, and faculty did not express authoritative beliefs, (b) faculty expressed modeling beliefs about knowledge more than graduate students did, and undergraduates did not express modeling beliefs, and (c) there were no differences in rational, empirical, experiential, relativistic, and religious beliefs about knowledge among the expertise level groups. As the expertise level increased, the number of participants who expressed authoritative beliefs about knowledge decreased and the number of participants who expressed modeling-based beliefs about knowledge increased.
The results of this study implied that existing developmental and cognitive models of personal epistemology can explain personal epistemology in physics only to a limited extent; these models cannot adequately account for the variation of epistemological beliefs across problem contexts. Modeling beliefs about knowledge emerged as a part of personal epistemology and an indicator of epistemological sophistication that does not develop until after extensive experience in the field. Based on these findings, the researcher recommended providing students with opportunities to practice model construction.

  13. The Views of Preservice Teachers for Problem Based Learning Model Supported by Geocaching in Environmental Education

    ERIC Educational Resources Information Center

    Adanali, Rukiye; Alim, Mete

    2017-01-01

    The purpose of this study is to investigate the usability of the Problem-Based Learning model supported by an Instructional Geocaching Game (PBL-IGG). The study was conducted in Turkey, in the 2015-2016 spring term, with 19 geography teacher candidates who were chosen by the convenience sampling method. In this study, within Educational Geocaching Game (IGG) which is…

  14. Summer Teacher Enhancement Institute for Science, Mathematics, and Technology Using the Problem-Based Learning Model

    NASA Technical Reports Server (NTRS)

    Petersen, Richard H.

    1997-01-01

    The objectives of the Institute were: (a) increase participants' content knowledge about aeronautics, science, mathematics, and technology, (b) model and promote the use of scientific inquiry through problem-based learning, (c) investigate the use of instructional technologies and their applications to curricula, and (d) encourage the dissemination of TEI experiences to colleagues, students, and parents.

  15. A Model of Small-Group Problem-Based Learning in Pharmacy Education: Teaching in the Clinical Environment

    ERIC Educational Resources Information Center

    Khumsikiew, Jeerisuda; Donsamak, Sisira; Saeteaw, Manit

    2015-01-01

    Problem-based Learning (PBL) is an alternative method of instruction that incorporates basic elements of cognitive learning theory. Colleges of pharmacy use PBL to aid anticipated learning outcomes and practice competencies for pharmacy students. The purpose of this study was to implement and evaluate a model of small group PBL for 5th year pharmacy…

  16. Characterizations of Social-Based and Self-Based Contexts Associated with Students' Awareness, Evaluation, and Regulation of Their Thinking during Small-Group Mathematical Modeling

    ERIC Educational Resources Information Center

    Magiera, Marta T.; Zawojewski, Judith S.

    2011-01-01

    This exploratory study focused on characterizing problem-solving situations associated with spontaneous metacognitive activity. The results came from connected case studies of a group of 3 purposefully selected 9th-grade students working collaboratively on a series of 5 modeling problems. Students' descriptions of their own thinking during…

  17. Problem-Solving Models for Computer Literacy: Getting Smarter at Solving Problems. Student Lessons.

    ERIC Educational Resources Information Center

    Moursund, David

    This book is intended for use as a student guide. It is about human problem solving and provides information on how the mind works, placing a major emphasis on the role of computers as an aid in problem solving. The book is written with the underlying philosophy of discovery-based learning based on two premises: first, through the appropriate…

  18. A rigorous approach to investigating common assumptions about disease transmission: Process algebra as an emerging modelling methodology for epidemiology.

    PubMed

    McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron

    2011-03-01

    Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals, and is a technique from computer science. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
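
    The population-level mean behaviour that such derivations arrive at can be illustrated with the standard SIR equations, integrated here by forward Euler; the rates and initial conditions are hypothetical:

```python
# forward-Euler integration of the population-level SIR equations that
# mean-field analysis derives from individual-level contact rules
def sir(beta, gamma, s0, i0, r0, dt=0.01, t_end=50.0):
    s, i, r = s0, i0, r0
    for _ in range(int(t_end / dt)):
        new_inf = beta * s * i * dt      # transmission: S + I -> 2I
        new_rec = gamma * i * dt         # recovery:    I -> R
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

s, i, r = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0)
```

    The point of the process-algebra route is that these mean equations are derived rigorously from the stochastic individual behaviour rather than assumed at the outset.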

  19. Building an information model (with the help of PSL/PSA). [Problem Statement Language/Problem Statement Analyzer

    NASA Technical Reports Server (NTRS)

    Callender, E. D.; Farny, A. M.

    1983-01-01

    Problem Statement Language/Problem Statement Analyzer (PSL/PSA) applications, which were once a one-step process in which product system information was immediately translated into PSL statements, have in light of experience been shown to result in inconsistent representations. These shortcomings have prompted the development of an intermediate step, designated the Product System Information Model (PSIM), which provides a basis for the mutual understanding of customer terminology and the formal, conceptual representation of that product system in a PSA data base. The PSIM is initially captured as a paper diagram, followed by formal capture in the PSL/PSA data base.

  20. The continuum fusion theory of signal detection applied to a bi-modal fusion problem

    NASA Astrophysics Data System (ADS)

    Schaum, A.

    2011-05-01

    A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio (GLR) test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.

  1. A Structural Equation Model to Analyse the Antecedents to Students' Web-Based Problem-Solving Performance

    ERIC Educational Resources Information Center

    Hwang, Gwo-Jen; Kuo, Fan-Ray

    2015-01-01

    Web-based problem-solving, a compound ability of critical thinking, creative thinking, reasoning thinking and information-searching abilities, has been recognised as an important competence for elementary school students. Some researchers have reported the possible correlations between problem-solving competence and information searching ability;…

  2. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    NASA Astrophysics Data System (ADS)

    He, K.; Zhu, W. D.

    2011-07-01

    A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistical function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
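
    The logistical-function transformation that converts the constrained optimization to an unconstrained one can be sketched as a logistic map between a bounded parameter and a free variable; the toy objective and bounds below are hypothetical, not the paper's damage-detection problem:

```python
import numpy as np

# logistic map between an unconstrained variable y and a bounded one a in (lo, hi)
def to_constrained(y, lo=0.0, hi=1.0):
    return lo + (hi - lo) / (1.0 + np.exp(-y))

def to_unconstrained(a, lo=0.0, hi=1.0):
    t = (a - lo) / (hi - lo)
    return np.log(t / (1.0 - t))

# minimize (a - 1.2)^2 over a in (0, 1) by plain gradient descent in y;
# the bound is enforced automatically by the transformation
y = 0.0
for _ in range(500):
    a = to_constrained(y)
    grad_a = 2.0 * (a - 1.2)       # objective gradient w.r.t. a
    da_dy = a * (1.0 - a)          # logistic chain-rule factor (for lo=0, hi=1)
    y -= 0.5 * grad_a * da_dy
a_opt = float(to_constrained(y))
```

    The unconstrained target lies outside the box, so the iterate approaches the bound from inside but can never violate it; in the paper the same device lets a trust-region Levenberg-Marquardt iteration run without explicit bound handling.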

  3. Science modelling in pre-calculus: how to make mathematics problems contextually meaningful

    NASA Astrophysics Data System (ADS)

    Sokolowski, Andrzej; Yalvac, Bugrahan; Loving, Cathleen

    2011-04-01

    'Use of mathematical representations to model and interpret physical phenomena and solve problems is one of the major teaching objectives in high school math curriculum' (National Council of Teachers of Mathematics (NCTM), Principles and Standards for School Mathematics, NCTM, Reston, VA, 2000). Commonly used pre-calculus textbooks provide a wide range of application problems. However, these problems focus students' attention on evaluating or solving pre-arranged formulas for given values. The role of scientific content is reduced to provide a background for these problems instead of being sources of data gathering for inducing mathematical tools. Students are neither required to construct mathematical models based on the contexts nor are they asked to validate or discuss the limitations of applied formulas. Using these contexts, the instructor may think that he/she is teaching problem solving, where in reality he/she is teaching algorithms of the mathematical operations (G. Kulm (ed.), New directions for mathematics assessment, in Assessing Higher Order Thinking in Mathematics, Erlbaum, Hillsdale, NJ, 1994, pp. 221-240). Without a thorough representation of the physical phenomena and the mathematical modelling processes undertaken, problem solving unintentionally appears as simple algorithmic operations. In this article, we deconstruct the representations of mathematics problems from selected pre-calculus textbooks and explicate their limitations. We argue that the structure and content of those problems limits students' coherent understanding of mathematical modelling, and this could result in weak student problem-solving skills. Simultaneously, we explore the ways to enhance representations of those mathematical problems, which we have characterized as lacking a meaningful physical context and limiting coherent student understanding. 
In light of our discussion, we recommend an alternative to strengthen the process of teaching mathematical modelling - utilization of computer-based science simulations. Although there are several exceptional computer-based science simulations designed for mathematics classes (see, e.g. Kinetic Book (http://www.kineticbooks.com/) or Gizmos (http://www.explorelearning.com/)), we concentrate mainly on the PhET Interactive Simulations developed at the University of Colorado at Boulder (http://phet.colorado.edu/) in generating our argument that computer simulations more accurately represent the contextual characteristics of scientific phenomena than their textual descriptions.

  4. Ontology-based vector space model and fuzzy query expansion to retrieve knowledge on medical computational problem solutions.

    PubMed

    Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos

    2007-01-01

    Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
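
    The interplay of an ontology-derived fuzzy term-similarity matrix, query expansion, and vector-space matching can be sketched as follows; the vocabulary and similarity values are hypothetical, not drawn from UMLS:

```python
import numpy as np

terms = ["fever", "pyrexia", "cough"]          # hypothetical vocabulary
# fuzzy term-to-term similarity (e.g. from ontology synonym/broader links)
sim = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

def expand(query_vec, sim, alpha=0.5):
    # fuzzy expansion: each term inherits weight from related terms
    expanded = np.maximum(query_vec, alpha * sim @ query_vec)
    return expanded / np.linalg.norm(expanded)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

q = np.array([1.0, 0.0, 0.0])                  # user searched "fever" only
doc = np.array([0.0, 1.0, 0.0])                # document uses "pyrexia"

score_plain = cosine(q, doc)
score_expanded = cosine(expand(q, sim), doc)
```

    Without expansion the synonym match is invisible to the VSM; the fuzzy thesaurus bridges it.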

  5. Text Summarization Model based on Facility Location Problem

    NASA Astrophysics Data System (ADS)

    Takamura, Hiroya; Okumura, Manabu

    We propose a novel multi-document generic summarization model based on the budgeted median problem, which is a facility location problem. The summarization method based on our model is an extractive method: it selects sentences from the given document cluster and generates a summary. Each sentence in the document cluster is assigned to one of the selected sentences, by which it is supposed to be represented. Our method selects sentences so as to generate a summary that yields a good sentence assignment and hence covers the whole content of the document cluster. An advantage of this method is that it can incorporate asymmetric relations between sentences, such as textual entailment. Through experiments, we show that the proposed method yields good summaries on the DUC'04 dataset.
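
    The coverage-by-assignment objective can be illustrated with a greedy stand-in (the paper solves a budgeted median facility-location problem; greedy selection and the similarity matrix below are hypothetical simplifications):

```python
import numpy as np

def summarize(sim, costs, budget):
    """Greedy selection of sentence indices: every sentence is 'assigned' to
    its most similar selected sentence; maximize total assignment similarity
    subject to a summary-length budget."""
    n = len(costs)
    selected, spent = [], 0

    def objective(S):
        if not S:
            return 0.0
        return float(sum(max(sim[i][j] for j in S) for i in range(n)))

    while True:
        best, best_gain = None, 0.0
        for j in range(n):
            if j in selected or spent + costs[j] > budget:
                continue
            gain = objective(selected + [j]) - objective(selected)
            if gain / costs[j] > best_gain:      # gain per unit of budget
                best, best_gain = j, gain / costs[j]
        if best is None:
            break
        selected.append(best)
        spent += costs[best]
    return selected

# hypothetical sentence-similarity matrix (may be asymmetric, e.g. entailment)
sim = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.1],
                [0.1, 0.1, 1.0]])
costs = [5, 5, 5]
chosen = summarize(sim, costs, budget=10)
```

    Sentences 0 and 1 are near-duplicates, so the selection covers the cluster by taking one of them plus the dissimilar sentence 2, rather than both duplicates.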

  6. Design, Development and Validation of a Model of Problem Solving for Egyptian Science Classes

    ERIC Educational Resources Information Center

    Shahat, Mohamed A.; Ohle, Annika; Treagust, David F.; Fischer, Hans E.

    2013-01-01

    Educators and policymakers envision the future of education in Egypt as enabling learners to acquire scientific inquiry and problem-solving skills. In this article, we describe the validation of a model for problem solving and the design of instruments for evaluating new teaching methods in Egyptian science classes. The instruments were based on…

  7. A Model for Predicting Behavioural Sleep Problems in a Random Sample of Australian Pre-Schoolers

    ERIC Educational Resources Information Center

    Hall, Wendy A.; Zubrick, Stephen R.; Silburn, Sven R.; Parsons, Deborah E.; Kurinczuk, Jennifer J.

    2007-01-01

    Behavioural sleep problems (childhood insomnias) can cause distress for both parents and children. This paper reports a model describing predictors of high sleep problem scores in a representative population-based random sample survey of non-Aboriginal singleton children born in 1995 and 1996 (1085 girls and 1129 boys) in Western Australia.…

  8. Reverse engineering a social agent-based hidden markov model--visage.

    PubMed

    Chen, Hung-Ching Justin; Goldberg, Mark; Magdon-Ismail, Malik; Wallace, William A

    2008-12-01

    We present a machine learning approach to discover the agent dynamics that drive the evolution of the social groups in a community. We set up the problem by introducing an agent-based hidden Markov model for the agent dynamics: an agent's actions are determined by micro-laws. Nevertheless, we learn the agent dynamics from the observed communications without knowing the state transitions. Our approach is to identify the appropriate micro-laws, which corresponds to identifying the appropriate parameters in the model. The model identification problem is then formulated as a mixed optimization problem. To solve the problem, we develop a multistage learning process for determining the group structure, the group evolution, and the micro-laws of a community based on the observed set of communications among actors, without knowing the semantic contents. Finally, to test the quality of our approximations and the feasibility of the approach, we present the results of extensive experiments on synthetic data as well as results on real communities, such as Enron email and movie newsgroups. Insight into agent dynamics helps us understand the driving forces behind social evolution.

  9. Traffic Flow Density Distribution Based on FEM

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Cui, Jianming

    Analysis of normal traffic flow usually relies on static or dynamic models analyzed numerically on the basis of fluid mechanics. However, this approach entails massive modeling and data-handling effort, and its accuracy is not high. The Finite Element Method (FEM) is a product of the combination of modern mathematics, mechanics, and computer technology, and it has been widely applied in engineering and other domains. Based on existing traffic flow theory, ITS, and the development of FEM, a simulation theory applying FEM to problems in traffic flow is put forward. Based on this theory, and using existing Finite Element Analysis (FEA) software, traffic flow is simulated and analyzed with fluid mechanics and dynamics. The massive data-processing problem of manual modeling and numerical analysis is solved, and the authenticity of the simulation is enhanced.

  10. Teaching genetics using hands-on models, problem solving, and inquiry-based methods

    NASA Astrophysics Data System (ADS)

    Hoppe, Stephanie Ann

    Teaching genetics can be challenging because of the difficulty of the content and the misconceptions students might hold. This thesis focused on using hands-on model activities, problem solving, and inquiry-based teaching/learning methods in order to increase student understanding of genetics in an introductory biology class. Various activities using these three methods were implemented in the classes to address misconceptions and increase student learning of the difficult concepts. The activities were shown to be successful based on pre-post assessment score comparison. The students were assessed on inheritance patterns, meiosis, and protein synthesis, and demonstrated growth in all of these areas. It was found that hands-on models, problem solving, and inquiry-based activities were more successful for learning genetics concepts, and the students were more engaged, than with traditional lecture styles.

  11. Research on vehicles and cargos matching model based on virtual logistics platform

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan

    2018-04-01

    Highway less-than-truckload (LTL) transportation raises a vehicle-cargo matching problem, a joint optimization of vehicle routing and loading that is also a hot topic in operational research. Based on the demands of a virtual logistics platform, this article sets up a matching model between idle vehicles and transportation orders for highway LTL transportation and designs a corresponding genetic algorithm, which is implemented in Java. The simulation results show that the solution is satisfactory.
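    The abstract does not detail the genetic algorithm, but its core loop can be sketched for a toy order-to-vehicle assignment. The cost matrix, population size, and operators below are illustrative assumptions, not the paper's actual design.

```python
import random

# Minimal genetic-algorithm sketch for matching transport orders to idle
# vehicles; an individual is a permutation mapping order i to vehicle perm[i].
random.seed(0)

COST = [[4, 2, 8],   # COST[order][vehicle]: hypothetical matching cost
        [6, 3, 1],
        [5, 7, 2]]
N = len(COST)

def fitness(perm):                       # lower total cost = better
    return sum(COST[o][v] for o, v in enumerate(perm))

def crossover(p1, p2):                   # order crossover keeps permutations valid
    cut = random.randrange(1, N)
    head = p1[:cut]
    return head + [v for v in p2 if v not in head]

def mutate(perm):                        # swap two assignments in place
    i, j = random.sample(range(N), 2)
    perm[i], perm[j] = perm[j], perm[i]

pop = [random.sample(range(N), N) for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness)                # elitism: keep the 10 best intact
    pop = pop[:10] + [crossover(*random.sample(pop[:10], 2)) for _ in range(10)]
    for ind in pop[10:]:
        if random.random() < 0.2:
            mutate(ind)
best = min(pop, key=fitness)
```

    Encoding the match as a permutation guarantees every offspring is a feasible one-to-one assignment, so no repair step is needed.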

  12. A nonlinear bi-level programming approach for product portfolio management.

    PubMed

    Ma, Shuang

    2016-01-01

    Product portfolio management (PPM) is a critical decision-making activity for companies across various industries in today's competitive environment. Traditional studies of the PPM problem have been oriented toward engineering feasibility and marketing, paying relatively little attention to competitors' actions and competitive relations, especially in the mathematical optimization domain. The key challenge lies in how to construct a mathematical optimization model that describes this Stackelberg-game-based leader-follower PPM problem and the competitive relations between the two parties. The primary contribution of this paper is a decision framework and an optimization model that leverage the PPM problems of leader and follower. A nonlinear, integer bi-level programming model is developed based on the decision framework. Furthermore, a bi-level nested genetic algorithm is put forward to solve this nonlinear bi-level programming model for the leader-follower PPM problem. A case study of notebook-computer product portfolio optimization is reported. Results and analyses reveal that the leader-follower bi-level optimization model is robust and can empower product portfolio optimization.
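    The leader-follower structure behind the bi-level model can be illustrated with a toy Stackelberg game solved by enumeration: the leader optimizes while anticipating the follower's best response. Both payoff tables are hypothetical stand-ins for the paper's far richer PPM formulation.

```python
# Toy Stackelberg (leader-follower) game solved by full enumeration.
# leader_payoff[x][y] / follower_payoff[x][y] for leader choice x,
# follower choice y; all numbers are illustrative.

leader_payoff   = [[3, 1], [5, 0], [2, 4]]
follower_payoff = [[1, 2], [2, 1], [0, 3]]

def follower_best_response(x):
    # The follower observes x and maximizes its own payoff.
    return max(range(2), key=lambda y: follower_payoff[x][y])

# The leader moves first, anticipating the follower's reaction.
best_x = max(range(3), key=lambda x: leader_payoff[x][follower_best_response(x)])
best_y = follower_best_response(best_x)
```

    In the paper the inner maximization is itself a hard integer program, which is why a nested genetic algorithm replaces this brute-force best-response step.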

  13. Problem-Based Learning in Web-Based Science Classroom.

    ERIC Educational Resources Information Center

    Kim, Heeyoung; Chung, Ji-Sook; Kim, Younghoon

    The purpose of this paper is to discuss how general problem-based learning (PBL) models and social-constructivist perspectives are applied to the design and development of a Web-based science program, which emphasizes inquiry-based learning for fifth grade students. The paper also deals with the general features and learning process of a Web-based…

  14. Train repathing in emergencies based on fuzzy linear programming.

    PubMed

    Meng, Xuelei; Cui, Bingmou

    2014-01-01

    Train pathing is the typical problem of assigning train trips to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem of determining the paths of train trips in emergencies. We analyze the factors influencing train pathing, such as transfer cost, running cost, and the cost of adverse social effects. With overall consideration of segment and station capacity constraints, we build a fuzzy linear programming model of the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients, and introduce contraction-expansion factors to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those ranges. We propose a method based on triangular fuzzy coefficients that transfers the train pathing model (a fuzzy linear program) into a determinate linear model. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway. The model was solved, and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
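    The abstract does not spell out the transfer to a determinate linear model. A common textbook route, shown here as an assumption rather than the paper's exact method, contracts or expands each triangular fuzzy coefficient's support around its mode by a factor k and then defuzzifies it with the centroid rule, after which an ordinary LP solver applies.

```python
# Hedged sketch: defuzzifying a triangular fuzzy coefficient (l, m, u)
# so a fuzzy LP can be handed to a crisp LP solver.  The centroid rule
# and the sample numbers are illustrative, not the paper's exact transfer.

def scale_support(l, m, u, k):
    """Contract (k < 1) or expand (k > 1) the value range around the mode m."""
    return m - k * (m - l), m, m + k * (u - m)

def centroid(l, m, u):
    """Crisp representative of the triangular fuzzy number (l, m, u)."""
    return (l + m + u) / 3.0

cost = scale_support(8.0, 10.0, 14.0, 0.5)   # a fuzzy running cost, contracted
crisp_cost = centroid(*cost)
```

    Applying this to every fuzzy coefficient in the objective and constraints yields the determinate linear model the paper then solves.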

  15. Influence of Problem Based Learning on Critical Thinking Skills and Competence Class VIII SMPN 1 Gunuang Omeh, 2016/2017

    NASA Astrophysics Data System (ADS)

    Aswan, D. M.; Lufri, L.; Sumarmin, R.

    2018-04-01

    This research intends to determine the effect of the Problem Based Learning model on students' critical thinking skills and competences. The study was quasi-experimental. The population was the students of class VIII of SMPN 1 Subdistrict Gunuang Omeh, and samples were selected by randomizing at the class level: class VIII3 was chosen as the experimental class, which received problem-based instruction, and class VIII1 as the control class, which received conventional instruction. The instruments consisted of a critical thinking test, cognitive tests, and observation sheets for the affective and psychomotor domains. The independent t-test and the Mann-Whitney U test were used for the analysis. Results showed a significant difference (sig < 0.05) between the control and experimental groups. The conclusion of this study is that the Problem Based Learning model affected the students' critical thinking skills and competences.

  16. How to Enhance Interdisciplinary Competence--Interdisciplinary Problem-Based Learning versus Interdisciplinary Project-Based Learning

    ERIC Educational Resources Information Center

    Brassler, Mirjam; Dettmers, Jan

    2017-01-01

    Interdisciplinary competence is important in academia for both employability and sustainable development. However, to date, there are no specific interdisciplinary education models and, naturally, no empirical studies to assess them. Since problem-based learning (PBL) and project-based learning (PjBL) are learning approaches that emphasize…

  17. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    NASA Astrophysics Data System (ADS)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics lessons relate to students' daily experience. Before studying formally in class, students therefore already hold visualizations and prior knowledge about natural phenomena, and can broaden them on their own. The learning process in class should aim to detect, process, construct, and use students' mental models, so that those mental models agree with, and are built on, the right concepts. A previous study conducted at MAN 1 Muna found that teachers did not pay attention to students' mental models during the learning process; as a consequence, instruction had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving-based learning model with a multiple-representations approach. The study used a pre-experimental, one-group pretest-posttest design and was conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into 3 categories: High Mental Modelling Ability (H-MMA) for 7

  18. Authentic assessment based showcase portfolio on learning of mathematical problem solving in senior high school

    NASA Astrophysics Data System (ADS)

    Sukmawati, Zuhairoh, Faihatuz

    2017-05-01

    The purpose of this research was to develop an authentic assessment model based on a showcase portfolio for learning mathematical problem solving. The research used the research and development (R&D) method, whose stages of development include: Phase I, conducting a preliminary study; Phase II, determining the purpose of the development and preparing the initial model; and Phase III, trial testing of the instrument for the initial draft model and the initial product. The respondents of this research were the students of SMAN 8 and SMAN 20 Makassar. Data were collected through observation, interviews, documentation, a student questionnaire, and tests of mathematical problem-solving ability, and were analyzed with descriptive and inferential statistics. The results of this research are an authentic assessment model design based on a showcase portfolio that involves: 1) steps in implementing showcase-based authentic assessment, with assessment rubrics for the cognitive, affective, and skill aspects; and 2) the finding that the students' average problem-solving ability, scored using the showcase-portfolio-based authentic assessment, was in the high category, and the students' responses were in the good category.

  19. Stochastic search, optimization and regression with energy applications

    NASA Astrophysics Data System (ADS)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to evaluate those energy systems economically. Solutions to these mathematical problems, however, are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or that support solution methods: R&D portfolio optimization, nonparametric regression, and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem, avoiding the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values (which depend on the selected portfolio) to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples of when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and a categorical response. 
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression such as CART, Bayesian trees, and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems in which a noisy objective function value is observed after a decision is made. Many stochastic search problems have behavior that depends on an exogenous state variable affecting the shape of the objective function. Currently, there is no general-purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet-process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
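    Once marginal project values linearize the objective, the 0-1 portfolio selection reduces to a knapsack problem. The sketch below solves a toy instance by dynamic programming; the values, costs, and budget are hypothetical, and the paper's marginal-value computation is not shown.

```python
# 0-1 knapsack by dynamic programming: the subproblem that arises once
# marginal project values make the portfolio objective linear.

def knapsack(values, costs, budget):
    """Return the best achievable total value within the budget."""
    best = [0] * (budget + 1)
    for v, c in zip(values, costs):
        for b in range(budget, c - 1, -1):   # descending: each project used once
            best[b] = max(best[b], best[b - c] + v)
    return best[budget]

# marginal values of four candidate R&D projects and their funding costs
portfolio_value = knapsack(values=[6, 10, 12, 7], costs=[1, 2, 3, 2], budget=5)
```

    Because the marginal values themselves depend on the selected portfolio, the paper's heuristic would re-estimate them and re-solve, rather than stopping after one knapsack pass.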

  20. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
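    The interleaved image-space update is an EM-type deconvolution. A 1-D Richardson-Lucy iteration, shown here with a toy symmetric kernel as an illustrative stand-in for the paper's actual PET reconstruction, captures the idea of running several cheap deconvolution iterations per (expensive) tomographic iteration.

```python
# Illustrative 1-D Richardson-Lucy (EM) deconvolution: the kind of
# image-space resolution-model update repeated several times per
# tomographic iteration.  Kernel and data are toy values.

def convolve(x, kernel):
    k = len(kernel) // 2
    return [sum(kernel[j] * x[i + j - k]
                for j in range(len(kernel)) if 0 <= i + j - k < len(x))
            for i in range(len(x))]

def richardson_lucy(blurred, kernel, iters):
    est = [1.0] * len(blurred)                  # flat initial estimate
    for _ in range(iters):                      # the "nested" EM iterations
        reblurred = convolve(est, kernel)
        ratio = [b / r if r > 0 else 0.0 for b, r in zip(blurred, reblurred)]
        corr = convolve(ratio, kernel[::-1])    # adjoint; kernel is symmetric here
        est = [e * c for e, c in zip(est, corr)]
    return est

kernel = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 4.0, 0.0, 0.0]               # a point source
blurred = convolve(truth, kernel)
sharp = richardson_lucy(blurred, kernel, iters=50)
```

    Each iteration is a cheap element-wise multiply and two small convolutions, which is why stacking many of them adds little to the overall reconstruction time.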

  1. A new parallel DNA algorithm to solve the task scheduling problem based on inspired computational model.

    PubMed

    Wang, Zhaocai; Ji, Zuwen; Wang, Xiaoming; Wu, Tunhua; Huang, Wei

    2017-12-01

    As a promising approach to computationally intractable problems, DNA computing is an emerging research area spanning mathematics, computer science, and molecular biology. The task scheduling problem, a well-known NP-complete problem, assigns n jobs to m individuals and seeks the minimum completion time of the last individual to finish. In this paper, we use a biologically inspired computational model and describe a new parallel algorithm that solves the task scheduling problem with basic DNA molecular operations. We design flexible-length DNA strands to represent the elements of the allocation matrix, apply appropriate biological experimental operations, and obtain solutions of the task scheduling problem in the proper length range with less than O(n²) time complexity.
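    The underlying combinatorial problem can be stated compactly: for a small instance, exhaustive search gives the exact minimum makespan (job durations hypothetical). The paper's contribution is performing this search with massively parallel DNA operations rather than a sequential loop.

```python
from itertools import product

# The task scheduling problem from the abstract: assign n jobs to m
# individuals and minimize the completion time of the last one to finish
# (the makespan).  Exhaustive search over all m**n assignments is only a
# correctness reference for tiny instances.

def min_makespan(jobs, m):
    best = float("inf")
    for assignment in product(range(m), repeat=len(jobs)):
        loads = [0] * m
        for job, worker in zip(jobs, assignment):
            loads[worker] += job
        best = min(best, max(loads))
    return best

result = min_makespan(jobs=[3, 5, 2, 7, 4], m=2)
```

    The m**n blow-up of this loop is exactly what makes the problem NP-complete and motivates the parallel DNA encoding.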

  2. Essays on variational approximation techniques for stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Deride Silva, Julio A.

    This dissertation presents five essays on approximation and modeling techniques, based on variational analysis, applied to stochastic optimization problems. It is divided into two parts, where the first is devoted to equilibrium problems and maxinf optimization, and the second corresponds to two essays in statistics and uncertainty modeling. Stochastic optimization lies at the core of this research as we were interested in relevant equilibrium applications that contain an uncertain component, and the design of a solution strategy. In addition, every stochastic optimization problem relies heavily on the underlying probability distribution that models the uncertainty. We studied these distributions, in particular, their design process and theoretical properties such as their convergence. Finally, the last aspect of stochastic optimization that we covered is the scenario creation problem, in which we described a procedure based on a probabilistic model to create scenarios for the applied problem of power estimation of renewable energies. In the first part, Equilibrium problems and maxinf optimization, we considered three Walrasian equilibrium problems: from economics, we studied a stochastic general equilibrium problem in a pure exchange economy, described in Chapter 3, and a stochastic general equilibrium with financial contracts, in Chapter 4; finally from engineering, we studied an infrastructure planning problem in Chapter 5. We stated these problems as belonging to the maxinf optimization class and, in each instance, we provided an approximation scheme based on the notion of lopsided convergence and non-concave duality. This strategy is the foundation of the augmented Walrasian algorithm, whose convergence is guaranteed by lopsided convergence, that was implemented computationally, obtaining numerical results for relevant examples. 
The second part, Essays about statistics and uncertainty modeling, contains two essays covering a convergence problem for a sequence of estimators, and a problem for creating probabilistic scenarios on renewable energies estimation. In Chapter 7 we re-visited one of the "folk theorems" in statistics, where a family of Bayes estimators under 0-1 loss functions is claimed to converge to the maximum a posteriori estimator. This assertion is studied under the scope of the hypo-convergence theory, and the density functions are included in the class of upper semicontinuous functions. We conclude this chapter with an example in which the convergence does not hold true, and we provided sufficient conditions that guarantee convergence. The last chapter, Chapter 8, addresses the important topic of creating probabilistic scenarios for solar power generation. Scenarios are a fundamental input for the stochastic optimization problem of energy dispatch, especially when incorporating renewables. We proposed a model designed to capture the constraints induced by physical characteristics of the variables based on the application of an epi-spline density estimation along with a copula estimation, in order to account for partial correlations between variables.

  3. Problem-posing in education: transformation of the practice of the health professional.

    PubMed

    Casagrande, L D; Caron-Ruffino, M; Rodrigues, R A; Vendrúsculo, D M; Takayanagui, A M; Zago, M M; Mendes, M D

    1998-02-01

    This study was developed by a group of professionals from different areas (nurses and educators) concerned with health education. It proposes the use of a problem-posing model for the transformation of professional practice. The concept and functions of the model and their relationships with the educative practice of health professionals are discussed. The model of problem-posing education is presented (compared to traditional, "banking" education), and four innovative experiences of teaching-learning are reported based on this model. These experiences, carried out in areas of environmental and occupational health and patient education have shown the applicability of the problem-posing model to the practice of the health professional, allowing transformation.

  4. Using the Big Six Information Skills as a Metacognitive Scaffold To Solve Information Based Problems.

    ERIC Educational Resources Information Center

    Wolf, Sara Elizabeth; Brush, Thomas

    The purpose of this research study was to determine whether a specific information problem-solving skills model was an effective metacognitive scaffold for students solving information-based problems. Specifically, 35 eighth grade students in two intact classes were asked to write newspaper articles that summarized the events surrounding the Selma…

  5. Group problems in problem-based learning.

    PubMed

    Hendry, Graham D; Ryan, Greg; Harris, Jennifer

    2003-11-01

    Successful small-group learning in problem-based learning (PBL) educational programmes relies on functional group processes. However, there has been limited research on PBL group problems, and no studies have been conducted on problems as perceived by both students and tutors in the same educational context. The authors investigated PBL group problems in a graduate-entry medical programme, and report the most common group problems, and those that hinder students' learning the most. The possible causes of individual quietness and dominant behaviour, and potential influences that group problems may have on the tutorial process are summarized in an exploratory model of PBL group dysfunction that could be used to guide further research. Specifically, there is a need for further evidence on which to base guidelines for tutors and students to effectively manage group problems.

  6. Problem-based learning through field investigation: Boosting questioning skill, biological literacy, and academic achievement

    NASA Astrophysics Data System (ADS)

    Suwono, Hadi; Wibowo, Agung

    2018-01-01

    Biology learning emphasizes problem-based learning as a strategy to develop students' ability to identify and solve problems in the surrounding environment. Problem identification skills are closely correlated with questioning skills: students who hold them tend to pose procedural rather than merely descriptive questions. Problem-based learning through field investigation is an instructional model that directly exposes students to problems or phenomena occurring in the environment; the students then design field investigation activities to solve these problems. The purpose of this research was to describe the improvement of undergraduate biology students' questioning skills, biological literacy, and academic achievement under problem-based learning through field investigation (PBFI) compared with lecture-based instruction (LBI). The research used a time-series quasi-experimental design, was conducted from August to December 2015, and involved 26 undergraduate biology students at the State University of Malang in the Freshwater Ecology course. Data were collected during learning with LBI and PBFI; questioning skills, biological literacy, and academic achievement were each measured 3 times under each learning model. The data showed that in PBFI students produced procedural, correlative, and causal questions to guide them in conducting investigations and problem solving. The biological literacy and academic achievement of the students in PBFI were significantly higher than in LBI. The results show that PBFI increases the questioning skills, biological literacy, and academic achievement of undergraduate biology students.

  7. Research on air and missile defense task allocation based on extended contract net protocol

    NASA Astrophysics Data System (ADS)

    Zhang, Yunzhi; Wang, Gang

    2017-10-01

    Against the background of distributed cooperative engagement for air and missile defense, the problem of allocating interception tasks among multiple weapon units facing multiple targets under networked conditions is analyzed. Firstly, a mathematical model of task allocation is established through combat task decomposition. Secondly, an initial assignment based on auction contracts and an adjustment scheme based on swap contracts are introduced into the task allocation. Finally, simulation of a typical situation shows that the model can solve the task allocation problem in a complex combat environment.

  8. Tracking student progress in a game-like physics learning environment with a Monte Carlo Bayesian knowledge tracing model

    NASA Astrophysics Data System (ADS)

    Gweon, Gey-Hong; Lee, Hee-Sun; Dorsey, Chad; Tinker, Robert; Finzer, William; Damelin, Daniel

    2015-03-01

    In tracking student learning in online learning systems, the Bayesian knowledge tracing (BKT) model is popular. However, the model has well-known problems, such as the identifiability problem and the empirical degeneracy problem; understanding of these problems remains unclear, and solutions to them remain subjective. Here, we analyze log data from an online physics learning program with our new model, a Monte Carlo BKT model. With this approach, we are able to perform a completely unbiased analysis, which can then be used to classify student learning patterns and performances. Furthermore, a theoretical analysis of the BKT model and our computational work shed new light on the nature of the aforementioned problems. This material is based upon work supported by the National Science Foundation under Grants REC-1147621 and REC-1435470.
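    The standard (non-Monte-Carlo) BKT update that the paper builds on is a two-step Bayes rule: condition the mastery probability on the observed response, then apply the chance of learning on that step. The parameter values below are illustrative.

```python
# Standard Bayesian knowledge tracing update (not the paper's Monte Carlo
# variant): posterior probability that a skill is mastered after one
# observed response.  Slip/guess/learn values are illustrative.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    if correct:
        cond = (p_mastery * (1 - p_slip)) / (
            p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
    else:
        cond = (p_mastery * p_slip) / (
            p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
    return cond + (1 - cond) * p_learn      # chance of learning on this step

p = 0.4
p_after_correct = bkt_update(p, correct=True)
p_after_wrong = bkt_update(p, correct=False)
```

    The identifiability problem mentioned in the abstract arises because different (slip, guess, learn) combinations can produce nearly identical response likelihoods under this update.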

  9. John Dewey--Problem Solving and History Teaching

    ERIC Educational Resources Information Center

    Martorella, Peter H.

    1978-01-01

    Presents a model for introducing inquiry and problem-solving into middle grade history classes. It is based on an educational approach suggested by John Dewey. The author uses the model to explore two seemingly contradictory statements by Abraham Lincoln about slavery. (AV)

  10. Application of Artificial Intelligence for Bridge Deterioration Model.

    PubMed

    Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun

    2015-01-01

    The problem of updating a deterministic bridge deterioration model is well established in bridge management, but traditional methods and approaches require manual intervention. This paper presents an artificial-intelligence-based approach that self-updates the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information, and this posterior is used to update the model parameters. The approach is applied to the case of updating the parameters of a bridge deterioration model with data collected from bridges in 12 districts of Shanghai from 2004 to 2013; the results show that it is an accurate, effective, and satisfactory way to deal with the parameter-updating problem without manual intervention.
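    The Bayesian update the paper automates can be illustrated with the simplest conjugate case. The Beta-Binomial model and the numbers below are hypothetical stand-ins for the actual deterioration model: a Beta prior from historical inspections is combined with new inspection counts into a posterior, with no manual intervention.

```python
# Minimal conjugate Bayesian update: prior (historical data) + new data
# -> posterior, via Bayes' theorem.  A Beta-Binomial model for the yearly
# probability of a bridge element dropping a condition grade is an
# illustrative stand-in for the paper's deterioration model.

def beta_update(alpha, beta, drops, stays):
    """Posterior Beta parameters after observing new inspection data."""
    return alpha + drops, beta + stays

alpha, beta = 2.0, 18.0          # prior from historical data (mean 0.10)
alpha, beta = beta_update(alpha, beta, drops=6, stays=24)
posterior_mean = alpha / (alpha + beta)
```

    Because the posterior is again a Beta distribution, it can serve directly as the prior when the next batch of inspection data arrives, which is what makes the self-updating loop possible.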

  11. Application of Artificial Intelligence for Bridge Deterioration Model

    PubMed Central

    Chen, Zhang; Wu, Yangyang; Sun, Lijun

    2015-01-01

    The problem of updating a deterministic bridge deterioration model is well established in bridge management, but traditional methods and approaches require manual intervention. This paper presents an artificial-intelligence-based approach that self-updates the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information, and this posterior is used to update the model parameters. The approach is applied to the case of updating the parameters of a bridge deterioration model with data collected from bridges in 12 districts of Shanghai from 2004 to 2013; the results show that it is an accurate, effective, and satisfactory way to deal with the parameter-updating problem without manual intervention. PMID:26601121

  12. The emergence of the Activity Reduces Conflict Associated Strain (ARCAS) model: a test of a conditional mediation model of workplace conflict and employee strain.

    PubMed

    Dijkstra, Maria T M; Beersma, Bianca; Cornelissen, Roosmarijn A W M

    2012-07-01

    To test and extend the emerging Activity Reduces Conflict-Associated Strain (ARCAS) model, we predicted that the relationship between task conflict and employee strain would be weakened to the extent that people experience high organization-based self-esteem (OBSE). A survey among Dutch employees demonstrated that, consistent with the model, the conflict-employee strain relationship was weaker the higher employees' OBSE and the more they engaged in active problem-solving conflict management. Our data also revealed that higher levels of OBSE were related to more problem-solving conflict management. Moreover, consistent with the ARCAS model, we could confirm a conditional mediation model in which organization-based self-esteem through its relationship with problem-solving conflict management weakened the relationship between task conflict and employee strain. Potential applications of the results are discussed.

  13. Development of Learning Models Based on Problem Solving and Meaningful Learning Standards by Expert Validity for Animal Development Course

    NASA Astrophysics Data System (ADS)

    Lufri, L.; Fitri, R.; Yogica, R.

    2018-04-01

    The purpose of this study is to produce a learning model based on problem solving and meaningful learning standards, validated by expert assessment, for the Animal Development course. This is development research producing a product in the form of a learning model, consisting of two sub-products: the syntax of the learning model and student worksheets. All of these products were standardized through expert validation. The research data are the validity levels of all sub-products, obtained using questionnaires filled in by validators from various fields of expertise (the field of study, learning strategy, and Bahasa), and analyzed using descriptive statistics. The results show that the problem-solving and meaningful-learning model has been produced; the sub-products declared appropriate by the experts include the syntax of the learning model and the student worksheets.

  14. Intervention Fidelity in Family-Based Prevention Counseling for Adolescent Problem Behaviors

    ERIC Educational Resources Information Center

    Hogue, Aaron; Liddle, Howard A.; Singer, Alisa; Leckrone, Jodi

    2005-01-01

    This study examined fidelity in multidimensional family prevention (MDFP), a family-based prevention counseling model for adolescents at high risk for substance abuse and related behavior problems, in comparison to two empirically based treatments for adolescent drug abuse: multidimensional family therapy (MDFT) and cognitive-behavioral therapy…

  15. Probabilities and predictions: modeling the development of scientific problem-solving skills.

    PubMed

    Stevens, Ron; Johnson, David F; Soller, Amy

    2005-01-01

    The IMMEX (Interactive Multi-Media Exercises) Web-based problem set platform enables the online delivery of complex, multimedia simulations and the rapid collection of student performance data, and has already been used in several genetic simulations. The next step is the use of these data to understand and improve student learning in a formative manner. This article describes the development of probabilistic models of undergraduate student problem solving in molecular genetics that detailed the spectrum of strategies students used when problem solving, and how the strategic approaches evolved with experience. The actions of 776 university sophomore biology majors from three molecular biology lecture courses were recorded and analyzed. Performances on each of six simulations were first grouped by artificial neural network clustering to provide individual performance measures, and sequences of these performances were then probabilistically modeled by hidden Markov modeling to provide measures of progress. The models showed that students with different initial problem-solving abilities chose different strategies. Initial and final strategies varied across different sections of the same course and were not strongly correlated with other achievement measures. In contrast to previous studies, we observed no significant gender differences. We suggest that instructor interventions based on early student performances with these simulations may assist students to recognize effective and efficient problem-solving strategies and enhance learning.
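
    The second modeling stage described above, scoring a student's sequence of clustered performances with a hidden Markov model, can be sketched with the standard forward algorithm. The two strategy states, the performance-cluster labels, and all probabilities below are illustrative assumptions, not values from the IMMEX study.

```python
# Forward algorithm: likelihood of an observation sequence under an HMM.
# States play the role of latent strategies; observations are the
# performance clusters produced by the neural-network stage.

def forward(obs, start_p, trans_p, emit_p):
    """Return P(observation sequence | HMM)."""
    states = list(start_p)
    # alpha[s] = P(obs[:t+1], state_t = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans_p[r][s] for r in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

# Two hypothetical strategy states; "A"/"B" are performance clusters.
start = {"novice": 0.7, "efficient": 0.3}
trans = {"novice": {"novice": 0.6, "efficient": 0.4},
         "efficient": {"novice": 0.1, "efficient": 0.9}}
emit = {"novice": {"A": 0.6, "B": 0.4},
        "efficient": {"A": 0.2, "B": 0.8}}

likelihood = forward(["A", "B", "B"], start, trans, emit)
print(round(likelihood, 4))  # → 0.1928
```

    Fitting such a model to many student sequences (e.g. by Baum-Welch) is what yields the progress measures mentioned in the abstract; the sketch only shows how one sequence is scored.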

  16. Navigating complex decision spaces: Problems and paradigms in sequential choice

    PubMed Central

    Walsh, Matthew M.; Anderson, John R.

    2015-01-01

    To behave adaptively, we must learn from the consequences of our actions. Doing so is difficult when the consequences of an action follow a delay. This introduces the problem of temporal credit assignment. When feedback follows a sequence of decisions, how should the individual assign credit to the intermediate actions that comprise the sequence? Research in reinforcement learning provides two general solutions to this problem: model-free reinforcement learning and model-based reinforcement learning. In this review, we examine connections between stimulus-response and cognitive learning theories, habitual and goal-directed control, and model-free and model-based reinforcement learning. We then consider a range of problems related to temporal credit assignment. These include second-order conditioning and secondary reinforcers, latent learning and detour behavior, partially observable Markov decision processes, actions with distributed outcomes, and hierarchical learning. We ask whether humans and animals, when faced with these problems, behave in a manner consistent with reinforcement learning techniques. Throughout, we seek to identify neural substrates of model-free and model-based reinforcement learning. The former class of techniques is understood in terms of the neurotransmitter dopamine and its effects in the basal ganglia. The latter is understood in terms of a distributed network of regions including the prefrontal cortex, medial temporal lobes, cerebellum, and basal ganglia. Not only do reinforcement learning techniques have a natural interpretation in terms of human and animal behavior, but they also provide a useful framework for understanding neural reward valuation and action selection. PMID:23834192
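
    The model-free side of this contrast, and the credit-assignment problem itself, can be illustrated with tabular Q-learning on an invented corridor task: reward arrives only at the final state, so credit must propagate back through the intermediate actions. The environment and all parameters are assumptions made for the sketch, not anything from the review.

```python
import random

# 4-state corridor: states 0..3, reward 1 only on reaching state 3.
# Q-learning (model-free) must assign credit to the earlier "right" moves.
N_STATES, GOAL = 4, 3
ACTIONS = [-1, +1]            # move left / move right

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):               # episodes
    s, done = 0, False
    while not done:
        if random.random() < eps:                       # explore
            a = random.choice(ACTIONS)
        else:                                           # exploit
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(Q[(s2, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After learning, "right" should dominate in every non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)
```

    A model-based learner would instead plan over a learned transition model; the contrast in the review is between this cached-value update and such explicit planning.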

  17. A Study of Collaborative Software Development Using Groupware Tools

    ERIC Educational Resources Information Center

    Defranco-Tommarello, Joanna; Deek, Fadi P.

    2005-01-01

    The experimental results of a collaborative problem-solving and program-development model that takes into consideration the cognitive and social activities occurring during software development are presented in this paper. This collaborative model is based on the Dual Common Model that focuses on individual cognitive aspects of problem solving and…

  18. Effectiveness of discovery learning model on mathematical problem solving

    NASA Astrophysics Data System (ADS)

    Herdiana, Yunita; Wahyudin, Sispiyati, Ririn

    2017-08-01

    This research aims to describe the effectiveness of the discovery learning model on mathematical problem solving by investigating students' problem-solving competency before and after learning with the model. The population was grade VII students in a junior high school in West Bandung Regency. From nine classes, class VII B was randomly selected as the experimental class and class VII C as the control class, each consisting of 35 students. The method was a quasi-experiment, and the instruments were a pre-test, worksheets, and a post-test on mathematical problem solving. Based on the research, it can be concluded that the problem-solving competency of students taught with the discovery learning model reached the 80% level, which falls in the medium category, showing that the discovery learning model is effective in improving mathematical problem solving.

  19. Teaching High School Students with Learning Disabilities to Use Model Drawing Strategy to Solve Fraction and Percentage Word Problems

    ERIC Educational Resources Information Center

    Dennis, Minyi Shih; Knight, Jacqueline; Jerman, Olga

    2016-01-01

    This article describes how to teach fraction and percentage word problems using a model-drawing strategy. This cognitive strategy places emphasis on explicitly teaching students how to draw a schematic diagram to represent the qualitative relations described in the problem, and how to formulate the solution based on the schematic diagram. The…

  20. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main modes of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design frame and its flow model for innovating products are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed, comprising requirement analysis, total function analysis and decomposition, engineering problem analysis, solution finding, and preliminary design; this establishes the basis for the innovative design of existing products.

  1. Model-Based Detection of Radioactive Contraband for Harbor Defense Incorporating Compton Scattering Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J V; Chambers, D H; Breitfeller, E F

    2010-03-02

    The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when the materials are concealed by various types of shielding that complicate the transport physics significantly. The problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to attack this challenging problem. A basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of the portable radiation detection systems used to interdict contraband. It is shown that this representation can incorporate scattering physics, leading to an 'extended' model-based structure from which an effective sequential detection technique can be developed. The resulting model-based processor is shown to perform quite well on data obtained from a controlled experiment.

  2. Diagnosing and dealing with multicollinearity.

    PubMed

    Schroeder, M A

    1990-04-01

    The purpose of this article was to increase nurse researchers' awareness of the effects of collinear data in developing theoretical models for nursing practice. Collinear data distort the true value of the estimates generated from ordinary least-squares analysis. Theoretical models developed to provide the underpinnings of nursing practice need not be abandoned, however, because they fail to produce consistent estimates over repeated applications. It is also important to realize that multicollinearity is a data problem, not a problem associated with misspecification of a theoretical model. An investigator must first be aware of the problem, and then it is possible to develop an educated solution based on the degree of multicollinearity, theoretical considerations, and sources of error associated with alternative biased least-squares regression techniques. Decisions based on theoretical and statistical considerations will further the development of theory-based nursing practice.
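
    A common first step in "being aware of the problem" is the variance inflation factor (VIF). With exactly two predictors it reduces to 1 / (1 - r^2), where r is their Pearson correlation; values above roughly 10 are a widely used rule of thumb for serious collinearity. The data below are invented for illustration.

```python
# Diagnose collinearity between two predictors via the variance
# inflation factor. For the two-predictor case, VIF = 1 / (1 - r^2).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def vif_two_predictors(x1, x2):
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r * r)

x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0, 16.2]   # nearly 2 * x1
print(round(vif_two_predictors(x1, x2), 1))        # far above the ~10 threshold
```

    With more than two predictors, the same idea applies per predictor: regress it on all the others and compute 1 / (1 - R^2).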

  3. Investigating and developing engineering students' mathematical modelling and problem-solving skills

    NASA Astrophysics Data System (ADS)

    Wedelin, Dag; Adawi, Tom; Jahan, Tabassum; Andersson, Sven

    2015-09-01

    How do engineering students approach mathematical modelling problems and how can they learn to deal with such problems? In the context of a course in mathematical modelling and problem solving, and using a qualitative case study approach, we found that the students had little prior experience of mathematical modelling. They were also inexperienced problem solvers, unaware of the importance of understanding the problem and exploring alternatives, and impeded by inappropriate beliefs, attitudes and expectations. Important impacts of the course belong to the metacognitive domain. The nature of the problems, the supervision and the follow-up lectures were emphasised as contributing to the impacts of the course, where students show major development. We discuss these empirical results in relation to a framework for mathematical thinking and the notion of cognitive apprenticeship. Based on the results, we argue that this kind of teaching should be considered in the education of all engineers.

  4. The Relationship Between Motor Skills, Social Problems, and ADHD Symptomatology: Does It Vary According to Parent and Teacher Report?

    PubMed

    Goulardins, Juliana B; Rigoli, Daniela; Loh, Pek Ru; Kane, Robert; Licari, Melissa; Hands, Beth; Oliveira, Jorge A; Piek, Jan

    2018-06-01

    This study investigated the relationship between motor performance; attentional, hyperactive, and impulsive symptoms; and social problems. Correlations between parents' versus teachers' ratings of social problems and ADHD symptomatology were also examined. A total of 129 children aged 9 to 12 years were included. ADHD symptoms and social problems were identified based on Conners' Rating Scales-Revised: L, and the McCarron Assessment of Neuromuscular Development was used to assess motor skills. After controlling for ADHD symptomatology, motor skills remained a significant predictor of social problems in the teacher model but not in the parent model. After controlling for motor skills, inattentive (not hyperactive-impulsive) symptoms were a significant predictor of social problems in the parent model, whereas hyperactive-impulsive (not inattentive) symptoms were a significant predictor of social problems in the teacher model. The findings suggested that intervention strategies should consider the interaction between symptoms and environmental contexts.

  5. The Effect of a Case-Based Reasoning Instructional Model on Korean High School Students' Awareness in Climate Change Unit

    ERIC Educational Resources Information Center

    Jeong, Jinwoo; Kim, Hyoungbum; Chae, Dong-hyun; Kim, Eunjeong

    2014-01-01

    The purpose of this study is to investigate the effects of the case-based reasoning instructional model on learning about climate change unit. Results suggest that students showed interest because it allowed them to find the solution to the problem and solve the problem for themselves by analogy from other cases such as crossword puzzles in an…

  6. Examining the Implementation of a Problem-Based Learning and Traditional Hybrid Model of Instruction in Remedial Mathematics Classes Designed for State Testing Preparation of Eleventh Grade Students

    ERIC Educational Resources Information Center

    Rodgers, Lindsay D.

    2011-01-01

    The following paper examined the effects of a new method of teaching for remedial mathematics, named the hybrid model of instruction. Due to increasing importance of high stakes testing, the study sought to determine if this method of instruction, that blends traditional teaching and problem-based learning, had different learning effects on…

  7. An optimization-based approach for solving a time-harmonic multiphysical wave problem with higher-order schemes

    NASA Astrophysics Data System (ADS)

    Mönkölä, Sanna

    2013-06-01

    This study considers developing numerical solution techniques for the computer simulation of time-harmonic fluid-structure interaction between acoustic and elastic waves. The focus is on the efficiency of an iterative solution method based on a controllability approach and spectral elements. We concentrate on the model in which the acoustic waves in the fluid domain are modeled by using the velocity potential and the elastic waves in the structure domain are modeled by using displacement. Traditionally, the complex-valued time-harmonic equations are used for solving time-harmonic problems. Instead, we seek time-periodic solutions without solving the time-harmonic equations directly. The time-dependent equations can be simulated with respect to time until a time-harmonic solution is reached, but this approach suffers from poor convergence. To overcome this challenge, we follow the approach first suggested and developed for the acoustic wave equations by Bristeau, Glowinski, and Périaux, and accelerate the convergence rate by employing a controllability method. The problem is formulated as a least-squares optimization problem, which is solved with the conjugate gradient (CG) algorithm. Computation of the gradient of the functional is done directly for the discretized problem. A graph-based multigrid method is used for preconditioning the CG algorithm.
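
    The optimization loop above hinges on the conjugate gradient algorithm. A bare-bones, unpreconditioned CG for a tiny symmetric positive-definite system shows the mechanics; the 2x2 system is toy data, not the controllability functional itself.

```python
# Conjugate gradient for A x = b, with A symmetric positive definite.

def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x for x = 0
    p = r[:]                       # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # new direction: residual made conjugate to previous directions
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
print([round(v, 4) for v in x])    # → [0.0909, 0.6364], i.e. (1/11, 7/11)
```

    In the paper's setting the "matrix-vector product" is a wave simulation over one period and the preconditioner is a graph-based multigrid cycle; the iteration skeleton is the same.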

  8. Developing Metacognitive and Problem-Solving Skills through Problem Manipulation

    ERIC Educational Resources Information Center

    Parker Siburt, Claire J.; Bissell, Ahrash N.; Macphail, Richard A.

    2011-01-01

    In a collaborative effort between our university's department of chemistry and the academic resource center, we designed a model for general chemistry recitation based on a problem-manipulation method in which students actively assess the skills and knowledge used to answer a chemical problem and then manipulate the problem to create a new…

  9. Case Study on Optimal Routing in Logistics Network by Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoguang; Lin, Lin; Gen, Mitsuo; Shiota, Mitsushige

    Recently, research on logistics has attracted more and more attention. One of the important issues in logistics systems is finding optimal delivery routes with the least cost for product delivery, and numerous models have been developed for that purpose. However, due to the diversity and complexity of practical problems, the existing models often cannot find solutions efficiently and conveniently. In this paper, we treat a real-world logistics case involving a company, ABC Co., Ltd., in Kitakyushu, Japan. Firstly, based on the nature of this conveyance routing problem, we formulate it as a minimum cost flow (MCF) model, an extension of the transportation problem (TP) and the fixed charge transportation problem (fcTP). Due to the complexity of the fcTP, we propose a priority-based genetic algorithm (pGA) approach to find an acceptable solution, in which a two-stage path-decoding method develops delivery paths from a chromosome. We apply the pGA approach to the problem, compare our results with the current logistics network situation, and calculate the improvement in logistics cost to help management make decisions. Finally, to check the effectiveness of the proposed method, the results are compared with those obtained from two solvers, LINDO and CPLEX.
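
    The core of priority-based encoding is that each node of the network carries a priority gene, and a path is decoded by repeatedly stepping to the unvisited neighbour with the highest priority. The network and priority values below are invented for illustration and are not the paper's two-stage decoder.

```python
# Decode a priority chromosome into a delivery path from source to sink.

def decode_path(adj, priority, source, sink):
    path, node = [source], source
    while node != sink:
        candidates = [n for n in adj[node] if n not in path]
        if not candidates:                 # dead end: decoding fails
            return None
        node = max(candidates, key=lambda n: priority[n])
        path.append(node)
    return path

# Hypothetical 4-node network and one chromosome (node -> priority).
adj = {"s": ["a", "b"], "a": ["b", "t"], "b": ["a", "t"], "t": []}
priority = {"s": 0, "a": 3, "b": 5, "t": 1}
print(decode_path(adj, priority, "s", "t"))   # → ['s', 'b', 'a', 't']
```

    A GA then evolves the priority vectors rather than the paths themselves, which is what keeps every chromosome decodable and avoids the feasibility problems of permutation encodings.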

  10. Two Methods for Efficient Solution of the Hitting-Set Problem

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Fijany, Amir

    2005-01-01

    A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors' primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that, in the set-theory representation of the components of a system, the minimal diagnoses of the system are its minimal hitting sets. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer-programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
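
    The hitting-set/diagnosis connection stated above can be made concrete on a toy instance. The paper maps the problem to SAT and integer programming; exhaustive search is used here only because the example, three conflict sets over three components, is tiny and invented.

```python
from itertools import combinations

# Minimal hitting sets of a family of sets, by brute force. In the
# diagnosis reading, the sets are conflict sets of components and each
# minimal hitting set is a minimal diagnosis.

def minimal_hitting_sets(sets):
    universe = sorted(set().union(*sets))
    hits = []
    for k in range(1, len(universe) + 1):
        for cand in combinations(universe, k):
            if all(set(cand) & s for s in sets):           # hits every set
                # keep only candidates with no smaller hitting subset
                if not any(set(h) <= set(cand) for h in hits):
                    hits.append(cand)
    return hits

conflicts = [{"c1", "c2"}, {"c2", "c3"}, {"c1", "c3"}]
print(minimal_hitting_sets(conflicts))
# → [('c1', 'c2'), ('c1', 'c3'), ('c2', 'c3')]
```

    Each pair of components hits all three conflicts and no single component does, so the minimal diagnoses are exactly the three pairs.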

  11. Water-Based Pressure-Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Jordan, Jeffrey D.; Watkins, A. Neal; Oglesby, Donald M.; Ingram, JoAnne L.

    2006-01-01

    Water-based pressure-sensitive paints (PSPs) have been invented as alternatives to conventional organic-solvent-based pressure-sensitive paints, which are used primarily for indicating distributions of air pressure on wind-tunnel models. Typically, PSPs are sprayed onto aerodynamic models after they have been mounted in wind tunnels. When conventional organic-solvent-based PSPs are used, this practice creates a problem of removing toxic fumes from inside the wind tunnels. The use of water-based PSPs eliminates this problem. The water-based PSPs offer high performance as pressure indicators, plus all the advantages of common water-based paints (low toxicity, low concentrations of volatile organic compounds, and easy cleanup by use of water).

  12. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and produce beam-hardening artifacts (BHA). Existing BHA correction approaches either require calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water-bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach that allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log; it does require accurate spectrum information about the source-detector system, which for experimental data can be predicted by a Monte Carlo simulator. Following Bayesian inference, the decomposition fractions and observation variance are estimated by joint maximum a posteriori (MAP) estimation. Subject to an adaptive prior model assigned to the variance, the joint estimation problem reduces to a single minimization problem with a nonquadratic cost function, which is solved with a monotone conjugate gradient algorithm with suboptimal descent steps. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the approach is robust to noise and materials: for materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. Compared to the approaches based on linear forward models and to the BHA correction approaches, the proposed approach has advantages in noise robustness and reconstruction accuracy.

  13. An electromagnetism-like metaheuristic for open-shop problems with no buffer

    NASA Astrophysics Data System (ADS)

    Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi

    2012-12-01

    This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, such as the plastic molding, chemical, and food processing industries. The paper mathematically formulates the problem as a mixed integer linear program, by which the problem can be solved optimally. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve large-sized problems. Two computational experiments are conducted. The first includes small-sized instances by which the mathematical model and the general performance of the proposed metaheuristic are evaluated. The second evaluates the metaheuristic's performance on large-sized instances. The results show that the model and algorithm are effective in dealing with the problem.

  14. Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
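
    A toy analogue of such a hill-climbing model problem is a GA maximizing an invented two-hill fitness function over a bit string. All parameters, the landscape, and the operators below are illustrative assumptions, not those of the NASA study.

```python
import random

# Single-objective GA on a bit-string "hills" problem: two modes, with
# the higher hill at x = 0.75. Truncation selection keeps the best half,
# so the best fitness found never decreases.

random.seed(1)
GENES, POP, GENS = 16, 40, 60

def fitness(bits):
    x = int("".join(map(str, bits)), 2) / (2 ** GENES - 1)  # map to [0, 1]
    return max(1.0 - abs(x - 0.25) * 8,       # lower hill
               2.0 - abs(x - 0.75) * 8,       # higher hill
               0.0)

def offspring(p1, p2):
    cut = random.randrange(1, GENES)          # one-point crossover
    child = p1[:cut] + p2[cut:]
    if random.random() < 0.1:                 # occasional bit-flip mutation
        child[random.randrange(GENES)] ^= 1
    return child

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    pop = parents + [offspring(*random.sample(parents, 2))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(round(fitness(best), 2))
```

    Any final fitness above 1.0 means the GA found the higher mode rather than stalling on the lower hill, which is the multi-mode reliability the abstract is testing.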

  15. A Systematic Review of Research on the Use of Problem-Based Learning in the Preparation and Development of School Leaders

    ERIC Educational Resources Information Center

    Hallinger, Philip; Bridges, Edwin M.

    2017-01-01

    Problem: Problem-based learning (PBL) was introduced into the parlance of educational leadership and management almost 30 years ago. During the ensuing decades, a global community of professors, doctoral students, and curriculum designers has built upon early models with the goal of increasing the impact of school leadership preparation. This…

  16. A ripple-spreading genetic algorithm for the aircraft sequencing problem.

    PubMed

    Hu, Xiao-Bing; Di Paolo, Ezequiel A

    2011-01-01

    When genetic algorithms (GAs) are applied to combinatorial problems, permutation representations are usually adopted. As a result, such GAs are often confronted with feasibility and memory-efficiency problems. With the aircraft sequencing problem (ASP) as a study case, this paper reports on a novel binary-representation-based GA scheme for combinatorial problems. Unlike existing GAs for the ASP, which typically use permutation representations based on aircraft landing order, the new GA introduces a novel ripple-spreading model which transforms the original landing-order-based ASP solutions into value-based ones. In the new scheme, arriving aircraft are projected as points into an artificial space. A deterministic method inspired by the natural phenomenon of ripple-spreading on liquid surfaces is developed, which uses a few parameters as input to connect points on this space to form a landing sequence. A traditional GA, free of feasibility and memory-efficiency problems, can then be used to evolve the ripple-spreading related parameters in order to find an optimal sequence. Since the ripple-spreading model is the centerpiece of the new algorithm, it is called the ripple-spreading GA (RSGA). The advantages of the proposed RSGA are illustrated by extensive comparative studies for the case of the ASP.

  17. Modeling an integrated hospital management planning problem using integer optimization approach

    NASA Astrophysics Data System (ADS)

    Sitepu, Suryati; Mawengkang, Herman; Irvan

    2017-09-01

    Hospitals are very important institutions for providing people with health care, and it is not surprising that demand for hospital services is increasing. At the same time, the cost of healthcare services is rising, so hospitals must pursue efficiency in order to cope with both pressures. This paper deals with an integrated strategy of staff capacity management and bed allocation planning to tackle these problems. Mathematically, the strategy can be modeled as an integer linear programming problem, which we solve using a direct neighborhood search approach based on the notion of superbasic variables.
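
    A toy integer program conveys the shape of such an integrated model: choose integer numbers of nurses and beds to minimize cost subject to capacity constraints. All coefficients are invented, and exhaustive search stands in for the paper's neighborhood search, which is needed only at realistic problem sizes.

```python
from itertools import product

# Minimize NURSE_COST * nurses + BED_COST * beds
# subject to: beds >= DEMAND  and  nurses * CARE_PER_NURSE >= DEMAND,
# with nurses and beds restricted to integers.

NURSE_COST, BED_COST = 5, 2      # cost units per nurse / per bed
DEMAND = 30                      # patients to accommodate
CARE_PER_NURSE = 6               # patients one nurse can cover

best = None
for nurses, beds in product(range(12), range(40)):
    if beds >= DEMAND and nurses * CARE_PER_NURSE >= DEMAND:
        cost = NURSE_COST * nurses + BED_COST * beds
        if best is None or cost < best[0]:
            best = (cost, nurses, beds)

print(best)   # → (85, 5, 30): 5 nurses and 30 beds at cost 85
```

    The optimum simply saturates both constraints here; in the full model, shift patterns and stochastic length-of-stay couple the two decisions nontrivially.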

  18. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse design problem and on its reformulation as a mixed constrained global optimization problem. The methodology relies on analytical models describing the corresponding optimization problem, combined with exact global optimization software, named IBBA and developed by the second author, to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  19. Family process and youth internalizing problems: A triadic model of etiology and intervention.

    PubMed

    Schleider, Jessica L; Weisz, John R

    2017-02-01

    Despite major advances in the development of interventions for youth anxiety and depression, approximately 30% of youths with anxiety do not respond to cognitive behavioral treatment, and youth depression treatments yield modest symptom decreases overall. Identifying networks of modifiable risk and maintenance factors that contribute to both youth anxiety and depression (i.e., internalizing problems) may enhance and broaden treatment benefits by informing the development of mechanism-targeted interventions. A particularly powerful network is the rich array of family processes linked to internalizing problems (e.g., parenting styles, parental mental health problems, and sibling relationships). Here, we propose a new theoretical model, the triadic model of family process, to organize theory and evidence around modifiable, transdiagnostic family factors that may contribute to youth internalizing problems. We describe the model's implications for intervention, and we propose strategies for testing the model in future research. The model provides a framework for studying associations among family processes, their relation to youth internalizing problems, and family-based strategies for strengthening prevention and treatment.

  20. A General Architecture for Intelligent Tutoring of Diagnostic Classification Problem Solving

    PubMed Central

    Crowley, Rebecca S.; Medvedeva, Olga

    2003-01-01

    We report on a general architecture for creating knowledge-based medical training systems to teach diagnostic classification problem solving. The approach is informed by our previous work describing the development of expertise in classification problem solving in Pathology. The architecture envelops the traditional Intelligent Tutoring System design within the Unified Problem-solving Method description Language (UPML) architecture, supporting component modularity and reuse. Based on the domain ontology, domain task ontology and case data, the abstract problem-solving methods of the expert model create a dynamic solution graph. Student interaction with the solution graph is filtered through an instructional layer, which is created by a second set of abstract problem-solving methods and pedagogic ontologies, in response to the current state of the student model. We outline the advantages and limitations of this general approach, and describe its implementation in SlideTutor, a developing Intelligent Tutoring System in Dermatopathology. PMID:14728159

  1. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controller designed for large structures based on models obtained by finite element techniques yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent in the application of control theory to the design of civil structures in order increase their safety and reliability, several challenging issues are open problems for real-time implementation. This dissertation addresses with the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: nodal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control are used as a framework for a comparative study of model and controller reduction techniques. 
It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach that guarantees closed-loop stability is proposed. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.

  2. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods, including the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem dimension reduction strategies.
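The Pareto-optimality criterion behind such a suite of models can be sketched with a minimal non-dominated filter. This is a generic pure-Python illustration, not the paper's implementation; the two-objective (misfit, regularization) values below are hypothetical:

```python
def dominates(a, b):
    """True if objective vector a is no worse than b in every component
    (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Filter a list of objective vectors down to the non-dominated suite."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (data misfit, regularization) pairs for four candidate models:
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(candidates)  # (3.0, 4.0) is dominated by (2.0, 3.0)
```

Returning the whole front, rather than one weighted-sum minimizer, is what lets the interpreter trade data misfit against regularization after the fact.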

  3. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

Ordinarily, the Job Shop Scheduling Problem (JSSP) is known as an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improved heuristics for optimizing it. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process of the JSSP in local optima. To address this, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfying the constraints by means of consistency technology and a constraint spreading algorithm in order to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstrating the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
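The generic ACO mechanics referred to above (pheromone evaporation/deposit and probabilistic move selection) can be sketched as follows; the evaporation rate, deposit rule, and parameter values are textbook defaults, not the paper's tuned constraint-handling variant:

```python
import random

def update_pheromone(tau, tours, rho=0.1, q=1.0):
    """Standard ACO update: evaporate every trail, then deposit pheromone
    on the edges of each ant's tour, inversely proportional to tour cost."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    for tour, cost in tours:
        for edge in tour:
            tau[edge] += q / cost
    return tau

def choose_next(tau, eta, edges, alpha=1.0, beta=2.0):
    """Probabilistic edge selection weighted by pheromone tau and a
    heuristic desirability eta (e.g. inverse processing time)."""
    weights = [tau[e] ** alpha * eta[e] ** beta for e in edges]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for e, w in zip(edges, weights):
        acc += w
        if r <= acc:
            return e
    return edges[-1]
```

In a constraint-handling variant like the paper's, `edges` would first be filtered to moves that satisfy the processing-time-tolerance constraints before selection.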

  4. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equations from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  5. Minimal subspace rotation on the Stiefel manifold for stabilization and enhancement of projection-based reduced order models for the compressible Navier–Stokes equations

    DOE PAGES

    Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl

    2016-05-25

For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and the Navier–Stokes equations from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. The reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.

  6. GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL

    EPA Science Inventory

    Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be
    addressed with distributed models that can compute runoff and erosion at different spatial a...

  7. The Two-Capacitor Problem Revisited: A Mechanical Harmonic Oscillator Model Approach

    ERIC Educational Resources Information Center

    Lee, Keeyung

    2009-01-01

    The well-known two-capacitor problem, in which exactly half the stored energy disappears when a charged capacitor is connected to an identical capacitor, is discussed based on the mechanical harmonic oscillator model approach. In the mechanical harmonic oscillator model, it is shown first that "exactly half" the work done by a constant applied…

  8. Importance of fish behaviour in modelling conservation problems: food limitation as an example

    Treesearch

    Steven Railsback; Bret Harvey

    2011-01-01

    Simulation experiments using the inSTREAM individual-based brown trout Salmo trutta population model explored the role of individual adaptive behaviour in food limitation, as an example of how behaviour can affect managers’ understanding of conservation problems. The model includes many natural complexities in habitat (spatial and temporal variation in characteristics...

  9. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  10. Explaining evolution via constrained persistent perfect phylogeny

    PubMed Central

    2014-01-01

Background The perfect phylogeny is an often used model in phylogenetics since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, e.g., back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistency, that is, a character is allowed to return to the ancestral state. In this model characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters.
Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model makes it possible to efficiently explain data that do not conform to the classical perfect phylogeny model. PMID:25572381

  11. Implications of Middle School Behavior Problems for High School Graduation and Employment Outcomes of Young Adults: Estimation of a Recursive Model.

    PubMed

    Karakus, Mustafa C; Salkever, David S; Slade, Eric P; Ialongo, Nicholas; Stuart, Elizabeth

    2012-01-01

The potentially serious adverse impacts of behavior problems during adolescence on employment outcomes in adulthood provide a key economic rationale for early intervention programs. However, the extent to which lower educational attainment accounts for the total impact of adolescent behavior problems on later employment remains unclear. As an initial step in exploring this issue, we specify and estimate a recursive bivariate probit model that 1) relates middle school behavior problems to high school graduation and 2) models later employment in young adulthood as a function of these behavior problems and of high school graduation. Our model thus allows for both a direct effect of behavior problems on later employment as well as an indirect effect that operates via graduation from high school. Our empirical results, based on analysis of data from the NELS, suggest that the direct effects of externalizing behavior problems on later employment are not significant but that these problems have important indirect effects operating through high school graduation.

  12. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  13. Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi

    2018-03-01

Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of the key and most difficult problems within it. Most existing methods of Tibetan abbreviated word recognition are rule-based approaches, which need vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can be combined easily with the segmentation model, which significantly improves the effectiveness of Tibetan word segmentation.

  14. The Waterfall Model in Large-Scale Development

    NASA Astrophysics Data System (ADS)

    Petersen, Kai; Wohlin, Claes; Baca, Dejan

Waterfall development is still a widely used way of working in software development companies, and many problems related to the model have been reported. Commonly accepted problems include, for example, coping with change and the fact that defects are all too often detected too late in the software development process. However, many of the problems mentioned in the literature are based on beliefs and experiences, not on empirical evidence. To address this research gap, we compare the problems in the literature with the results of a case study at Ericsson AB in Sweden investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs about what the problems in waterfall development are through empirical research.

  15. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in their own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
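The joint state-parameter estimation idea can be sketched with a minimal bootstrap particle filter, in which the unknown parameter rides along in each particle and evolves by a small random walk (so-called artificial dynamics). The scalar wear model, noise levels, and true wear rate of 0.1 below are illustrative assumptions, not the centrifugal pump model from the paper:

```python
import math
import random

def pf_step(particles, z, dt=1.0, q=0.05, r=0.2):
    """One bootstrap-filter cycle for joint state-parameter estimation.
    Each particle is (state, wear_rate); the unknown wear rate is refined
    purely through resampling against the measurements."""
    proposed = [(x + w * dt + random.gauss(0, q),       # predict state
                 abs(w + random.gauss(0, 0.01)))        # random-walk parameter
                for x, w in particles]
    weights = [math.exp(-((z - x) ** 2) / (2 * r * r))  # measurement likelihood
               for x, _ in proposed]
    total = sum(weights) or 1.0
    return random.choices(proposed, weights=[w / total for w in weights],
                          k=len(proposed))

random.seed(1)
particles = [(0.0, random.uniform(0.0, 0.2)) for _ in range(500)]
for step in range(1, 21):
    z = 0.1 * step + random.gauss(0, 0.2)  # synthetic measurements, true rate 0.1
    particles = pf_step(particles, z)
rate_estimate = sum(w for _, w in particles) / len(particles)
```

The spread of `wear_rate` across the surviving particles is the uncertainty bound that the paper's variance control mechanism regulates.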

  16. Students’ Mathematical Problem-Solving Abilities Through The Application of Learning Models Problem Based Learning

    NASA Astrophysics Data System (ADS)

    Nasution, M. L.; Yerizon, Y.; Gusmiyanti, R.

    2018-04-01

One purpose of mathematics learning is to develop problem-solving ability, which is acquired through experience with non-routine questions. Improving students' mathematical problem-solving abilities requires an appropriate strategy in learning activities, one of which is the problem-based learning (PBL) model. The purpose of this research is therefore to determine whether the mathematical problem-solving abilities of students who learn with PBL are better than those of students taught with conventional learning. This research was a quasi-experiment with a static group design; the population was students of class XI MIA of SMAN 1 Lubuk Alung, with XI MIA 5 as the experimental class and XI MIA 6 as the control class. The final test instrument for students' mathematical problem solving used essay form, and the final test data were analyzed with a t-test. The result is that the mathematical problem-solving abilities of students who learned with PBL are better than those of students taught with conventional learning. This is seen from the high percentage achieved, for each indicator of mathematical problem solving, by the group of students who learned with PBL.

  17. Evaluating the effects of real power losses in optimal power flow based storage integration

    DOE PAGES

    Castillo, Anya; Gayme, Dennice

    2017-03-27

This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.

  18. Approximate Solutions for a Self-Folding Problem of Carbon Nanotubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Y Mikata

    2006-08-22

This paper treats approximate solutions for a self-folding problem of carbon nanotubes. It has been observed in the molecular dynamics calculations [1] that a carbon nanotube with a large aspect ratio can self-fold due to van der Waals force between the parts of the same carbon nanotube. The main issue in the self-folding problem is to determine the minimum threshold length of the carbon nanotube at which it becomes possible for the carbon nanotube to self-fold due to the van der Waals force. An approximate mathematical model based on the force method is constructed for the self-folding problem of carbon nanotubes, and it is solved exactly as an elastica problem using elliptic functions. Additionally, three other mathematical models are constructed based on the energy method. As a particular example, the lower and upper estimates for the critical threshold (minimum) length are determined based on both methods for the (5,5) armchair carbon nanotube.

  19. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

Sensing coverage is a fundamental problem in wireless sensor networks (WSNs) and has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation to the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of a target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902
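For independent probabilistic sensors, the joint detection probability analyzed above follows the standard complement rule, 1 minus the probability that every sensor misses. This is a generic sketch of that rule only; the paper's sensing model may additionally depend on sensor-target distance:

```python
def joint_detection_probability(probs):
    """P(target detected by at least one sensor) = 1 - prod(1 - p_i),
    assuming independent per-sensor detection probabilities p_i."""
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# Three sensors, each detecting with probability 0.5:
# joint probability is 1 - 0.5**3 = 0.875.
```

An ϵ-detection requirement then amounts to choosing a sensor subset whose joint probability reaches at least the threshold 1 - ϵ.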

  20. Spectrum Sharing Based on a Bertrand Game in Cognitive Radio Sensor Networks

    PubMed Central

    Zeng, Biqing; Zhang, Chi; Hu, Pianpian; Wang, Shengyu

    2017-01-01

In studies of pricing-based power control and allocation, the utility of secondary users is usually studied from the perspective of the signal to noise ratio. Studying secondary user utility from the perspective of communication demand can not only help secondary users meet their maximum communication needs but also maximize the utilization of spectrum resources; however, research in this area is lacking. From the viewpoint of meeting the demand of network communication, this paper therefore designs a two-stage model to solve the spectrum leasing and allocation problem in cognitive radio sensor networks (CRSNs). In the first stage, the secondary base station collects the secondary network communication requirements and rents spectrum resources from several primary base stations, using a Bertrand game to model the transaction behavior of the primary and secondary base stations. In the second stage, the subcarrier and power allocation problem of the secondary base stations is defined as a nonlinear programming problem solved by Nash bargaining. The simulation results show that the proposed model can satisfy the communication requirements of each user in a fair and efficient way compared to other spectrum sharing schemes. PMID:28067850
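The Bertrand competition used to model the primary base stations' pricing can be sketched in its textbook form, where sellers with identical marginal costs undercut each other until price reaches cost. This is the classical dynamic only, with an illustrative undercut step; it is not the paper's CRSN-specific utility or demand model:

```python
def bertrand_best_response(rival_price, marginal_cost, epsilon=0.01):
    """Undercut the rival slightly while price stays above cost;
    otherwise price at marginal cost."""
    if rival_price > marginal_cost + epsilon:
        return rival_price - epsilon
    return marginal_cost

def bertrand_equilibrium(cost_a, cost_b, start=10.0, rounds=10_000):
    """Iterate best responses from a common starting price."""
    pa, pb = start, start
    for _ in range(rounds):
        pa = bertrand_best_response(pb, cost_a)
        pb = bertrand_best_response(pa, cost_b)
    return pa, pb
```

With equal costs the iteration converges to pricing at marginal cost, the classical Bertrand outcome that disciplines the spectrum-leasing prices in the first stage of such models.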

  1. Adaptive Greedy Dictionary Selection for Web Media Summarization.

    PubMed

    Cong, Yang; Liu, Ji; Sun, Gan; You, Quanzeng; Li, Yuncheng; Luo, Jiebo

    2017-01-01

Initializing an effective dictionary is an indispensable step for sparse representation. In this paper, we focus on the dictionary selection problem, with the objective of selecting a compact subset of basis atoms from the original training data instead of learning a new dictionary matrix as dictionary learning models do. We first design a new dictionary selection model via the ℓ2,0 norm. For model optimization, we propose two methods: one is the standard forward-backward greedy algorithm, which is not suitable for large-scale problems; the other is based on the gradient cues at each forward iteration and speeds up the process dramatically. In comparison with the state-of-the-art dictionary selection models, our model is not only more effective and efficient, but can also control the sparsity. To evaluate the performance of our new model, we select two practical web media summarization problems: 1) we build a new data set consisting of around 500 users, 3000 albums, and 1 million images, and achieve effective assisted albuming based on our model and 2) by formulating the video summarization problem as a dictionary selection issue, we employ our model to extract keyframes from a video sequence in a more flexible way. Generally, our model outperforms the state-of-the-art methods in both these two tasks.
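A forward greedy selection step of the kind mentioned above can be sketched in matching-pursuit style: repeatedly pick the candidate atom most correlated with the current residual. This pure-Python toy is illustrative only; it is not the authors' ℓ2,0 formulation or their gradient-accelerated variant:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_select(atoms, signal, k):
    """Pick k atoms from the candidate pool, each time choosing the one
    with the largest normalized correlation to the current residual."""
    residual = list(signal)
    chosen = []
    for _ in range(k):
        best = max(range(len(atoms)),
                   key=lambda i: abs(dot(residual, atoms[i]))
                   / (dot(atoms[i], atoms[i]) ** 0.5 or 1.0))
        # Subtract the projection onto the chosen atom (forward step).
        coeff = dot(residual, atoms[best]) / dot(atoms[best], atoms[best])
        residual = [r - coeff * a for r, a in zip(residual, atoms[best])]
        chosen.append(best)
    return chosen

atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]
signal = [3.0, 0.1, 0.0]
# The first pick is the atom best aligned with the signal.
```

A forward-backward variant would additionally try removing previously chosen atoms whenever doing so barely increases the residual.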

  2. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  3. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

Priority inversion can make real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.

  4. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    NASA Astrophysics Data System (ADS)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods are used together with incomplete expressions of gradients. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.

  5. A Comparison of Two Mathematics Problem-Solving Strategies: Facilitate Algebra-Readiness

    ERIC Educational Resources Information Center

    Xin, Yan Ping; Zhang, Dake; Park, Joo Young; Tom, Kinsey; Whipple, Amanda; Si, Luo

    2011-01-01

    The authors compared a conceptual model-based problem-solving (COMPS) approach with a general heuristic instructional approach for teaching multiplication-division word-problem solving to elementary students with learning problems (LP). The results indicate that only the COMPS group significantly improved, from pretests to posttests, their…

  6. Deformed Palmprint Matching Based on Stable Regions.

    PubMed

    Wu, Xiangqian; Zhao, Qiushi

    2015-12-01

Palmprint recognition (PR) is an effective technology for personal recognition. A main problem, which deteriorates the performance of PR, is the deformation of palmprint images. This problem becomes more severe on contactless occasions, in which images are acquired without any guiding mechanisms, and hence critically limits the applications of PR. To solve the deformation problem, in this paper, a model for non-linearly deformed palmprint matching is derived by approximating non-linearly deformed palmprint images with piecewise-linearly deformed stable regions. Based on this model, a novel approach for deformed palmprint matching, named key point-based block growing (KPBG), is proposed. In KPBG, an iterative M-estimator sample consensus algorithm based on scale invariant feature transform features is devised to compute piecewise-linear transformations to approximate the non-linear deformations of palmprints, and then the stable regions complying with the linear transformations are decided using a block growing algorithm. Palmprint feature extraction and matching are performed over these stable regions to compute matching scores for decision. Experiments on several public palmprint databases show that the proposed model and the KPBG approach can effectively solve the deformation problem in palmprint verification and outperform the state-of-the-art methods.

  7. Large biases in regression-based constituent flux estimates: causes and diagnostic tools

    USGS Publications Warehouse

    Hirsch, Robert M.

    2014-01-01

It has been documented in the literature that, in some cases, widely used regression-based models can produce severely biased estimates of long-term mean river fluxes of various constituents. These models, estimated using sample values of concentration, discharge, and date, are used to compute estimated fluxes for a multiyear period at a daily time step. This study compares results of the LOADEST seven-parameter model, the LOADEST five-parameter model, and the Weighted Regressions on Time, Discharge, and Season (WRTDS) model using subsampling of six very large datasets to better understand this bias problem. The analysis considers sample datasets for dissolved nitrate and total phosphorus. The results show that LOADEST-7 and LOADEST-5, although they often produce very nearly unbiased results, can in some cases be severely biased. This study identifies three conditions that can give rise to these severe biases: (1) lack of fit of the log of concentration vs. log discharge relationship, (2) substantial differences in the shape of this relationship across seasons, and (3) severely heteroscedastic residuals. The WRTDS model is more resistant to the bias problem than the LOADEST models but is not immune to it. Understanding the causes of the bias problem is crucial to selecting an appropriate method for flux computations. Diagnostic tools for identifying the potential for bias problems are introduced, and strategies for resolving bias problems are described.
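One well-known contributor to bias in log-space regression models of this kind is retransformation: exponentiating a prediction of mean log concentration systematically underestimates the mean concentration itself (Jensen's inequality). A toy illustration with synthetic lognormal data, not river data:

```python
import math
import random

random.seed(42)
# Synthetic lognormal "concentrations": log c ~ Normal(mu, sigma).
mu, sigma = 0.0, 1.0
samples = [math.exp(random.gauss(mu, sigma)) for _ in range(100_000)]

# True mean on the original scale is exp(mu + sigma**2 / 2) ~= 1.65.
true_mean = sum(samples) / len(samples)
# Naive back-transform of the mean log value recovers only exp(mu) ~= 1.0,
# an underestimate of roughly 40% at sigma = 1.
naive = math.exp(sum(math.log(s) for s in samples) / len(samples))
```

Methods such as LOADEST apply a retransformation correction for exactly this effect, which is why the residual bias documented in the study stems from lack of fit and heteroscedasticity rather than retransformation alone.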

  8. Promoter Sequences Prediction Using Relational Association Rule Mining

    PubMed Central

    Czibula, Gabriela; Bocicor, Maria-Iuliana; Czibula, Istvan Gergely

    2012-01-01

In this paper we approach, from a computational perspective, the problem of promoter sequence prediction, an important problem within the field of bioinformatics. As the conditions for a DNA sequence to function as a promoter are not known, machine learning based classification models are still being developed to approach the problem of promoter identification in DNA. We propose a classification model based on relational association rule mining. Relational association rules are a particular type of association rules and describe numerical orderings between attributes that commonly occur over a data set. Our classifier is based on the discovery of relational association rules for predicting whether or not a DNA sequence contains a promoter region. An experimental evaluation of the proposed model and a comparison with similar existing approaches are provided. The obtained results show that our classifier outperforms the existing techniques for identifying promoter sequences, confirming the potential of our proposal. PMID:22563233
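A relational association rule of the kind described above asserts an ordering between attributes that holds over most records; its support is simply the fraction of records satisfying the relation. A minimal sketch with hypothetical two-column records, not the paper's DNA encoding:

```python
def rule_support(rows, i, j, relation=lambda a, b: a < b):
    """Fraction of rows in which the relation holds between columns i and j."""
    hits = sum(1 for row in rows if relation(row[i], row[j]))
    return hits / len(rows)

rows = [(1, 3), (2, 5), (4, 4), (0, 9)]
# The rule "column 0 < column 1" holds in 3 of the 4 rows.
```

A mining pass would enumerate candidate (i, j, relation) triples and keep those whose support clears a minimum threshold; the classifier then compares how well a new sequence matches the rule sets mined from each class.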

  9. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE PAGES

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab; ...

    2016-11-21

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.
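    A minimal sketch of the quantile-based idea, under an assumed toy cost model and demand distribution (not the paper's supply chain formulation): rank candidate decisions by an upper quantile of cost over sampled scenarios rather than by expected cost.

```python
import random

def empirical_quantile(values, q):
    """Nearest-rank empirical quantile (illustrative)."""
    s = sorted(values)
    return s[min(len(s) - 1, int(q * len(s)))]

def best_by_quantile(decisions, scenarios, cost, q=0.9):
    """Pick the decision whose q-quantile cost over the scenarios is lowest:
    a robust alternative to minimizing the scenario-average cost."""
    return min(decisions,
               key=lambda d: empirical_quantile([cost(d, s) for s in scenarios], q))

random.seed(0)
demand_scenarios = [random.gauss(100, 20) for _ in range(1000)]  # stochastic demand

def cost(supply, demand):
    # Hypothetical cost: pay for contracted supply, heavy penalty on shortfall
    return supply + 5.0 * max(0.0, demand - supply)

best = best_by_quantile([80, 100, 120, 140], demand_scenarios, cost)
```

Because shortfalls are penalized heavily, the 90th-percentile criterion selects a contracted supply well above the mean demand, which is exactly the robustness the quantile view buys.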

  10. A quantile-based scenario analysis approach to biomass supply chain optimization under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamar, David S.; Gopaluni, Bhushan; Sokhansanj, Shahab

    Supply chain optimization for biomass-based power plants is an important research area due to greater emphasis on renewable power energy sources. Biomass supply chain design and operational planning models are often formulated and studied using deterministic mathematical models. While these models are beneficial for making decisions, their applicability to real world problems may be limited because they do not capture all the complexities in the supply chain, including uncertainties in the parameters. This study develops a statistically robust quantile-based approach for stochastic optimization under uncertainty, which builds upon scenario analysis. We apply and evaluate the performance of our approach to address the problem of analyzing competing biomass supply chains subject to stochastic demand and supply. Finally, the proposed approach was found to outperform alternative methods in terms of computational efficiency and ability to meet the stochastic problem requirements.

  11. [Problem based learning (PBL)--possible adaptation in psychiatry (debate)].

    PubMed

    Adamowski, Tomasz; Frydecka, Dorota; Kiejna, Andrzej

    2007-01-01

    Teaching psychiatry mainly concerns the education of students of medicine and clinical psychology, but it also concerns the professional training of people specializing in psychiatry and in other fields of medicine. Since the requirements that medical professionals must meet are ever higher, it is essential to provide the highest possible quality of teaching and, to that end, to use the best possible teaching models. One of the modern educational models is Problem Based Learning (PBL). Barrows' and Dreyfus' research, as well as the development of andragogy, had a major impact on the introduction of this model of teaching. Favourable experiences with PBL in teaching psychiatry have been reported, especially in the field of psychosomatics. Problem Based Learning is gradually becoming part of modern curricula in Western Europe. For this reason it is worth keeping PBL's principles in mind and applying them knowingly in practice, all the more so because the reported educational effects of this method are very promising.

  12. Models of performance of evolutionary program induction algorithms based on indicators of problem difficulty.

    PubMed

    Graff, Mario; Poli, Riccardo; Flores, Juan J

    2013-01-01

    Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform in all cases our previous performance models.

  13. The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.

    ERIC Educational Resources Information Center

    Gilbert, R.

    1979-01-01

    Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)

  14. A game theory-based trust measurement model for social networks.

    PubMed

    Wang, Yingjie; Cai, Zhipeng; Yin, Guisheng; Gao, Yang; Tong, Xiangrong; Han, Qilong

    2016-01-01

    In social networks, trust is a complex social concept. Participants in online social networks want to share information and experiences with as many reliable users as possible. However, the modeling of trust is complicated and application dependent: it needs to consider interaction history, recommendations, user behaviors and so on. Therefore, modeling trust is an important focus for online social networks. We propose a game theory-based trust measurement model for social networks. The trust degree is calculated from three aspects (service reliability, feedback effectiveness, and recommendation credibility) to obtain a more accurate result. In addition, to alleviate the free-riding problem, we propose a game theory-based punishment mechanism for specific trust and global trust, respectively. We prove that the proposed trust measurement model is effective. The free-riding problem can be resolved effectively by adding the proposed punishment mechanism.
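    As a rough illustration of combining the three aspects into a single trust degree with a punishment step, here is a hedged sketch; the weighted-sum form, the weights, and the penalty value are assumptions for illustration, not the paper's exact formulas:

```python
def trust_degree(service, feedback, recommendation, weights=(0.5, 0.3, 0.2)):
    """Combine service reliability, feedback effectiveness and recommendation
    credibility into one trust degree in [0, 1].  The weighted-sum form and
    the weights are illustrative assumptions."""
    w1, w2, w3 = weights
    return w1 * service + w2 * feedback + w3 * recommendation

def punish_free_rider(trust, contributed, penalty=0.2):
    """Punishment sketch: reduce the trust of a participant who consumed
    services without contributing (the free-riding problem)."""
    return trust if contributed else max(0.0, trust - penalty)

t = trust_degree(0.9, 0.8, 0.7)                        # 0.83
t_punished = punish_free_rider(t, contributed=False)   # 0.63
```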

  15. Bi-national cross-validation of an evidence-based conduct problem prevention model.

    PubMed

    Porta, Carolyn M; Bloomquist, Michael L; Garcia-Huidobro, Diego; Gutiérrez, Rafael; Vega, Leticia; Balch, Rosita; Yu, Xiaohui; Cooper, Daniel K

    2018-04-01

    To (a) explore the preferences of Mexican parents and Spanish-speaking professionals working with migrant Latino families in Minnesota regarding the Mexican-adapted brief model versus the original conduct problems intervention and (b) identify the potential challenges, and preferred solutions, to implementation of a conduct problems preventive intervention. The core practice elements of a conduct problems prevention program originating in the United States were adapted for prevention efforts in Mexico. Three focus groups were conducted in the United States, with Latino parents (n = 24; 2 focus groups) and professionals serving Latino families (n = 9; 1 focus group), to compare and discuss the Mexican-adapted model and the original conduct problems prevention program. Thematic analysis was conducted on the verbatim focus group transcripts in the original language spoken. Participants preferred the Mexican-adapted model. The following key areas were identified for cultural adaptation when delivering a conduct problems prevention program with Latino families: recruitment/enrollment strategies, program delivery format, and program content (i.e., child skills training, parent skills training, child-parent activities, and child-parent support). For both models, strengths, concerns, barriers, and strategies for overcoming concerns and barriers were identified. We summarize recommendations offered by participants to strengthen the effective implementation of a conduct problems prevention model with Latino families in the United States. This project demonstrates the strength in binational collaboration to critically examine cultural adaptations of evidence-based prevention programs that could be useful to diverse communities, families, and youth in other settings.

  16. Dynamic motion planning of 3D human locomotion using gradient-based optimization.

    PubMed

    Kim, Hyung Joo; Wang, Qian; Rahmatalla, Salam; Swan, Colby C; Arora, Jasbir S; Abdel-Malek, Karim; Assouline, Jose G

    2008-06-01

    Since humans can walk with an infinite variety of postures and limb movements, there is no unique solution to the modeling problem to predict human gait motions. Accordingly, we test herein the hypothesis that the redundancy of human walking mechanisms makes solving for human joint profiles and force time histories an indeterminate problem best solved by inverse dynamics and optimization methods. A new optimization-based human-modeling framework is thus described for predicting three-dimensional human gait motions on level and inclined planes. The basic unknowns in the framework are the joint motion time histories of a 25-degree-of-freedom human model and its six global degrees of freedom. The joint motion histories are calculated by minimizing an objective function such as deviation of the trunk from upright posture that relates to the human model's performance. A variety of important constraints are imposed on the optimization problem, including (1) satisfaction of dynamic equilibrium equations by requiring the model's zero moment point (ZMP) to lie within the instantaneous geometrical base of support, (2) foot collision avoidance, (3) limits on ground-foot friction, and (4) vanishing yawing moment. Analytical forms of objective and constraint functions are presented and discussed for the proposed human-modeling framework in which the resulting optimization problems are solved using gradient-based mathematical programming techniques. When the framework is applied to the modeling of bipedal locomotion on level and inclined planes, acyclic human walking motions that are smooth and realistic as opposed to less natural robotic motions are obtained. The aspects of the modeling framework requiring further investigation and refinement, as well as potential applications of the framework in biomechanics, are discussed.
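    One of the constraints above, keeping the zero moment point inside the base of support, reduces to a point-in-convex-polygon test. A minimal sketch (the geometry and numbers are illustrative, not the paper's 25-degree-of-freedom model):

```python
def zmp_inside_support(zmp, polygon):
    """Check that the zero moment point lies inside a convex support
    polygon given as counter-clockwise vertices: for every edge, the
    cross product with the point must be non-negative."""
    x, y = zmp
    n = len(polygon)
    for k in range(n):
        x1, y1 = polygon[k]
        x2, y2 = polygon[(k + 1) % n]
        # Negative cross product => point is to the right of a CCW edge
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

foot = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.1), (0.0, 0.1)]  # CCW rectangle (m)
ok = zmp_inside_support((0.1, 0.05), foot)    # equilibrium constraint holds
bad = zmp_inside_support((0.3, 0.05), foot)   # constraint violated
```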

  17. Connected Component Model for Multi-Object Tracking.

    PubMed

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from only two adjacent frames. Since straightforwardly obtaining data associations from multiple frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is an equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
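    The equivalence-partitioning step can be illustrated with a standard union-find: observations linked by shared association constraints fall into the same connected component, and each component becomes an independent subproblem. A sketch over illustrative data (not the paper's CCM implementation):

```python
class DisjointSet:
    """Union-find used to split data-association constraints into
    independent connected components, in the spirit of CCM."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

# Observations 0..5; an edge means two observations share a constraint
# (e.g. they could belong to the same trajectory).  Edges are illustrative.
edges = [(0, 1), (1, 2), (3, 4)]
ds = DisjointSet(6)
for a, b in edges:
    ds.union(a, b)

components = {}
for obs in range(6):
    components.setdefault(ds.find(obs), []).append(obs)
subproblems = sorted(components.values())   # [[0, 1, 2], [3, 4], [5]]
```

Each sublist can then be solved as its own small assignment problem, which is what makes the global solution tractable.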

  18. Case study method and problem-based learning: utilizing the pedagogical model of progressive complexity in nursing education.

    PubMed

    McMahon, Michelle A; Christopher, Kimberly A

    2011-08-19

    As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students.

  19. Optimization of Regional Geodynamic Models for Mantle Dynamics

    NASA Astrophysics Data System (ADS)

    Knepley, M.; Isaac, T.; Jadamec, M. A.

    2016-12-01

    The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.
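    The half-space cooling model mentioned above has a closed form, T(z, t) = Ts + (Tm - Ts) * erf(z / (2 * sqrt(kappa * t))). A sketch with typical textbook parameter values (not necessarily those used by SubductionGenerator):

```python
import math

def half_space_temperature(z, age, T_s=0.0, T_m=1350.0, kappa=1.0e-6):
    """Half-space cooling model for oceanic lithosphere:
    T(z, t) = T_s + (T_m - T_s) * erf(z / (2 * sqrt(kappa * t))),
    with depth z in m, age in s, temperatures in deg C, and thermal
    diffusivity kappa in m^2/s.  Parameter values are typical textbook
    numbers, assumed for illustration."""
    return T_s + (T_m - T_s) * math.erf(z / (2.0 * math.sqrt(kappa * age)))

MYR = 3.15576e13                                  # seconds per million years
T_ref = half_space_temperature(50e3, 60 * MYR)    # ~790 deg C at 50 km, 60 Myr
```

Temperature increases with depth and, at fixed depth, decreases with plate age, which is what lets sea floor ages constrain the initial thermal structure.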

  20. Implementation of Problem Based Learning Model in Concept Learning Mushroom as a Result of Student Learning Improvement Efforts Guidelines for Teachers

    ERIC Educational Resources Information Center

    Rubiah, Musriadi

    2016-01-01

    Problem based learning is a training strategy, students work together in groups, and take responsibility for solving problems in a professional manner. Instructional materials such as textbooks become the main reference of students in study of mushrooms, especially the material is considered less effective in responding to the information needs of…

  1. Problem Solving Model for Science Learning

    NASA Astrophysics Data System (ADS)

    Alberida, H.; Lufri; Festiyed; Barlian, E.

    2018-04-01

    This research aims to develop a problem solving model for science learning in junior high school. The learning model was developed using the ADDIE model. The analysis phase includes curriculum analysis, analysis of students of SMP Kota Padang, analysis of SMP science teachers, learning analysis, as well as a literature review. The design phase includes planning the product, a problem-solving model for science learning, which consists of syntax, reaction principle, social system, support system, and instructional impact and support. The model was implemented in science learning to improve students' science process skills. The development stage consists of three steps: a) designing a prototype, b) performing a formative evaluation and c) revising the prototype. The implementation stage was carried out through a limited trial, conducted on 24 and 26 August 2015 in Class VII 2 SMPN 12 Padang. The evaluation phase was conducted in the form of experiments at SMPN 1 Padang, SMPN 12 Padang and SMP National Padang. Based on the development research done, the syntax of the problem solving model for science learning at junior high school consists of introduction, observation, initial problems, data collection, data organization, data analysis/generalization, and communicating.

  2. An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smoothed particle hydrodynamics models.

  3. Critical social theory as a model for the informatics curriculum for nursing.

    PubMed

    Wainwright, P; Jones, P G

    2000-01-01

    It is widely acknowledged that the education and training of nurses in information management and technology is problematic. Drawing from recent research this paper presents a theoretical framework within which the nature of the problems faced by nurses in the use of information may be analyzed. This framework, based on the critical social theory of Habermas, also provides a model for the informatics curriculum. The advantages of problem based learning and multi-media web-based technologies for the delivery of learning materials within this area are also discussed.

  4. Social Models: Blueprints or Processes?

    ERIC Educational Resources Information Center

    Little, Graham R.

    1981-01-01

    Discusses the nature and implications of two different models for societal planning: (1) the problem-solving process approach based on Karl Popper; and (2) the goal-setting "blueprint" approach based on Karl Marx. (DC)

  5. Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

    PubMed Central

    2017-01-01

    We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927

  6. Perspectives on the Use of the Problem-Solving Model from the Viewpoint of a School Psychologist, Administrator, and Teacher from a Large Midwest Urban School District

    ERIC Educational Resources Information Center

    Lau, Matthew Y.; Sieler, Jay D.; Muyskens, Paul; Canter, Andrea; VanKeuren, Barbara; Marston, Doug

    2006-01-01

    The Minneapolis Public School System has been implementing an intervention-based approach to special education placement. This Problem-Solving Model (PSM) was designed to de-emphasize the role of norm-referenced tests and to provide early instructional interventions. The basic outline of the PSM is to define the problem, determine the best…

  7. A quadratic-tensor model algorithm for nonlinear least-squares problems with linear constraints

    NASA Technical Reports Server (NTRS)

    Hanson, R. J.; Krogh, Fred T.

    1992-01-01

    A new algorithm for solving nonlinear least-squares and nonlinear equation problems is proposed which is based on approximating the nonlinear functions using the quadratic-tensor model by Schnabel and Frank. The algorithm uses a trust region defined by a box containing the current values of the unknowns. The algorithm is found to be effective for problems with linear constraints and dense Jacobian matrices.

  8. Market-oriented Programming Using Small-world Networks for Controlling Building Environments

    NASA Astrophysics Data System (ADS)

    Shigei, Noritaka; Miyajima, Hiromi; Osako, Tsukasa

    The market model, one of the models of economic activity, can be formulated as an agent system, and applying it to resource allocation problems has been studied. For the air conditioning control of buildings, which is one such resource allocation problem, an effective agent-based method using auctions has been proposed as an alternative to the traditional PID controller. This method has been regarded as decentralized control; however, its decentralization is not complete, and its performance is not sufficient. In this paper, we first propose a perfectly decentralized agent model and show its performance. Second, in order to improve the model, we propose an agent model based on the small-world model. The effectiveness of the proposed model is shown by simulation.

  9. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column: Original Research Article: Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e., CO2 mass transfer across a falling monoethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
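    The first calibrated parameter, Henry's constant, sets the equilibrium dissolved-gas concentration at the interface. A minimal sketch of that relation (the constant's value below is an assumed ballpark figure for N2O in aqueous solution, not a calibrated posterior from the study):

```python
def equilibrium_concentration(partial_pressure, henry_constant):
    """Henry's law at the gas-liquid interface: c = p / H, with p in Pa
    and H in Pa*m^3/mol, giving c in mol/m^3.  The H value used below is
    an assumed order-of-magnitude figure for illustration only."""
    return partial_pressure / henry_constant

# N2O at ~1 atm with an assumed H of ~4.0e3 Pa*m^3/mol
c = equilibrium_concentration(101325.0, 4.0e3)   # roughly 25 mol/m^3
```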

  10. Study on Multi-stage Logistics System Design Problem with Inventory Considering Demand Change by Hybrid Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Inoue, Hisaki; Gen, Mitsuo

    The logistics model used in this study is a 3-stage model employed by an automobile company, which aims to solve its traffic problems at a minimum total cost. Recently, research on metaheuristic methods has advanced as an approximate means of solving optimization problems like this model. Such problems can be solved using various methods such as the genetic algorithm (GA), simulated annealing, and tabu search. GA is superior in robustness and adjustability toward a change in the structure of these problems. However, GA has a disadvantage in that its search performance is somewhat inefficient because it carries out a multi-point search. A hybrid GA that combines GA with another method is attracting considerable attention, since it can compensate for the tendency of early convergence to a partial solution to degrade the result. In this study, we propose a novel hybrid random key-based GA (h-rkGA) that combines local search with parameter tuning of the crossover rate and mutation rate; h-rkGA is an improved version of the random key-based GA (rk-GA). We conducted comparative experiments with the spanning tree-based GA, the priority-based GA and the random key-based GA, as well as with “h-GA by only local search” and “h-GA by only parameter tuning”. We report the effectiveness of the proposed method on the basis of the results of these experiments.
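    The "random key" representation that rk-GA and h-rkGA build on decodes a real-valued chromosome into a permutation by sorting, so ordinary real-valued crossover always yields a feasible ordering (e.g. a delivery sequence). A minimal sketch of the decoding step (not the paper's h-rkGA):

```python
def decode_random_keys(keys):
    """Random-key decoding: the permutation is the index order induced by
    sorting the key values.  Any chromosome of reals decodes to a valid
    permutation, which is the representation's main appeal."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

keys = [0.46, 0.91, 0.33, 0.75, 0.51]     # one chromosome of random keys
order = decode_random_keys(keys)          # visiting order: [2, 0, 4, 3, 1]
```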

  11. Developing a Parent-Professional Team Leadership Model in Group Work: Work with Families with Children Experiencing Behavioral and Emotional Problems

    ERIC Educational Resources Information Center

    Ruffolo, Mary C.; Kuhn, Mary T.; Evans, Mary E.

    2006-01-01

    Building on the respective strengths of parent-led and professional-led groups, a parent-professional team leadership model for group interventions was developed and evaluated for families of youths with emotional and behavioral problems. The model was developed based on feedback from 26 parents in focus group sessions and recommendations from…

  12. Optimal harvesting for a predator-prey agent-based model using difference equations.

    PubMed

    Oremland, Matthew; Laubenbacher, Reinhard

    2015-03-01

    In this paper, a method known as Pareto optimization is applied in the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory—we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
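    The Pareto optimization step returns a set of non-dominated trade-offs rather than a single optimum. A minimal sketch of extracting a Pareto front for two minimized objectives (the data are illustrative, not the ABM's outputs):

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing both objectives.
    A point q dominates p if q is no worse in both coordinates and, being
    a different point, strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (objective1, objective2) pairs, e.g. (yield loss, effort)
points = [(1, 5), (2, 3), (3, 4), (4, 2), (5, 1), (3, 3)]
front = pareto_front(points)   # (3, 4) and (3, 3) are dominated by (2, 3)
```

The front is the "suite of solutions" the abstract refers to: a decision maker then picks one trade-off from it.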

  13. Classroom Crisis Intervention through Contracting: A Moral Development Model.

    ERIC Educational Resources Information Center

    Smaby, Marlowe H.; Tamminen, Armas W.

    1981-01-01

    A counselor can arbitrate problem situations using a systematic approach to classroom intervention which includes meetings with the teacher and students. This crisis intervention model based on moral development can be more effective than reliance on guidance activities disconnected from the actual classroom settings where the problems arise.…

  14. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  15. Modeling the Round Earth through Diagrams

    ERIC Educational Resources Information Center

    Padalkar, Shamin; Ramadas, Jayashree

    2008-01-01

    Earlier studies have found that students, including adults, have problems understanding the scientifically accepted model of the Sun-Earth-Moon system and explaining day-to-day astronomical phenomena based on it. We have been examining such problems in the context of recent research on visual-spatial reasoning. Working with middle school students…

  16. Parallel computing in enterprise modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  17. The effect of mathematical model development on the instruction of acceleration to introductory physics students

    NASA Astrophysics Data System (ADS)

    Sauer, Tim Allen

    The purpose of this study was to evaluate the effectiveness of utilizing student-constructed theoretical math models when teaching acceleration to high school introductory physics students. The goal of the study was for the students to be able to utilize mathematical modeling strategies to improve their problem solving skills, as well as their standardized scientific and conceptual understanding. This study was based on mathematical modeling research, conceptual change research and constructivist theory of learning, all of which suggest that mathematical modeling is an effective way to influence students' conceptual connectedness and sense making of formulaic equations and problem solving. A total of 48 students in two sections of high school introductory physics classes received constructivist, inquiry-based, cooperative learning, and conceptual change-oriented instruction. The difference in the instruction for the 24 students in the mathematical modeling treatment group was that they constructed every formula they needed to solve problems from data they collected. In contrast, the instructional design for the control group of 24 students provided the same instruction, with assigned problems solved using formulas given to them without explanation. The results indicated that the mathematical modeling students were able to solve less familiar and more complicated problems with greater confidence and mental flexibility than the control group students. The mathematical modeling group also maintained fewer alternative conceptions in the interviews than did the control group. The implications of these results for acceleration instruction are discussed.

  18. A geostatistical approach to the change-of-support problem and variable-support data fusion in spatial analysis

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Wang, Yang; Zeng, Hui

    2016-01-01

    A key issue to address in synthesizing spatial data with variable support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems. This approach is based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions on a target support and prediction uncertainties, based on one or more measurements, while honoring those measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity and can lead to computational challenges due to the large data size. We developed a local-window geostatistical inverse modeling approach to accommodate spatial nonstationarity and alleviate the computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated, aggregated to multiple supports, and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. It is shown that the suggested local-window geostatistical inverse modeling approach offers a practical way to solve the well-known change-of-support problem and the variable-support data fusion problem in spatial analysis and modeling.
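
The core prediction step the abstract describes, a geostatistical (kriging-based) transfer between supports, can be sketched in a few lines. This is a minimal 1-D illustration under assumed values (exponential covariance, one coarse measurement equal to the average of four fine cells), not the authors' implementation:

```python
import math

# Hypothetical 1-D example: four fine-support cells, one coarse measurement
# equal to their average. Exponential covariance with range parameter L.
n, L, mu, sill = 4, 2.0, 10.0, 1.0

def cov(h):
    return sill * math.exp(-abs(h) / L)

# C_ff[i][j]: covariance between fine cells i and j (unit spacing).
C_ff = [[cov(i - j) for j in range(n)] for i in range(n)]

# Covariance between each fine cell and the aggregate (average of all cells),
# obtained by averaging rows of C_ff -- this is where support enters.
C_fa = [sum(C_ff[i]) / n for i in range(n)]
# Variance of the aggregate itself.
C_aa = sum(sum(row) for row in C_ff) / n**2

z_coarse = 12.0                      # observed coarse-support value
w = [c / C_aa for c in C_fa]         # simple-kriging weights
z_fine = [mu + w[i] * (z_coarse - mu) for i in range(n)]      # predictions
var_fine = [C_ff[i][i] - w[i] * C_fa[i] for i in range(n)]    # uncertainties

print(round(sum(z_fine) / n, 6))   # → 12.0
```

Re-aggregating the downscaled predictions reproduces the coarse measurement, which is the "honoring measurements" property mentioned above.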

  19. Workflow Agents vs. Expert Systems: Problem Solving Methods in Work Systems Design

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Seah, Chin

    2009-01-01

    During the 1980s, a community of artificial intelligence researchers became interested in formalizing problem solving methods as part of an effort called "second generation expert systems" (2nd GES). How do the motivations and results of this research relate to building tools for the workplace today? We provide an historical review of how the theory of expertise has developed, a progress report on a tool for designing and implementing model-based automation (Brahms), and a concrete example of how we apply 2nd GES concepts today in an agent-based system for space flight operations (OCAMS). Brahms incorporates an ontology for modeling work practices, what people are doing in the course of a day, characterized as "activities." OCAMS was developed using a simulation-to-implementation methodology, in which a prototype tool was embedded in a simulation of future work practices. OCAMS uses model-based methods to interactively plan its actions and keep track of the work to be done. The problem solving methods of practice are interactive, employing reasoning for and through action in the real world. Analogously, it is as if a medical expert system were charged not just with interpreting culture results, but actually interacting with a patient. Our perspective shifts from building a "problem solving" (expert) system to building an actor in the world. The reusable components in work system designs include entire "problem solvers" (e.g., a planning subsystem), interoperability frameworks, and workflow agents that use and revise models dynamically in a network of people and tools. Consequently, the research focus shifts so that "problem solving methods" include ways of knowing that models do not fit the world, and ways of interacting with other agents and people to gain or verify information and (ultimately) adapt rules and procedures to resolve problematic situations.

  20. The effectiveness of problem-based learning on students’ problem solving ability in vector analysis course

    NASA Astrophysics Data System (ADS)

    Mushlihuddin, R.; Nurafifah; Irvan

    2018-01-01

    Students’ low ability in mathematical problem solving points to a less than effective learning process in the classroom. Effective learning is learning that improves students’ mathematical skills, one of which is problem-solving ability. Problem-solving ability consists of several stages: understanding the problem, planning the solution, solving the problem as planned, and re-examining the procedure and the outcome. The purpose of this research was to determine: (1) was there any influence of the PBL model in improving students’ mathematical problem-solving ability in a vector analysis course?; (2) was the PBL model effective in improving students’ mathematical problem-solving skills in vector analysis courses? This research was a quasi-experiment. Data analysis proceeded through descriptive statistics, a normality test as a prerequisite, and hypothesis testing using the ANCOVA and gain tests. The results showed that: (1) there was an influence of the PBL model in improving students’ mathematical problem-solving abilities in vector analysis courses; (2) the PBL model was effective in improving students’ problem-solving skills in vector analysis courses, with a medium-category gain.
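
The gain test mentioned above is commonly computed as Hake's normalized gain; a sketch with illustrative scores (not the study's data):

```python
# Hake's normalized gain <g> for pre/post test scores:
# g = (post - pre) / (max_score - pre). A gain of 0.3 <= g < 0.7 is
# conventionally read as a "medium" category. Scores are illustrative.
def normalized_gain(pre, post, max_score=100.0):
    return (post - pre) / (max_score - pre)

print(round(normalized_gain(40.0, 70.0), 2))   # → 0.5, a medium gain
```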

  1. The Promise of Technology to Confront Dilemmas in Teacher Education: The Use of WebQuests in Problem-Based Methods Courses

    ERIC Educational Resources Information Center

    Smith, Leigh K.; Draper, Roni Jo; Sabey, Brenda L.

    2005-01-01

    This qualitative study examined the use of WebQuests as a teaching tool in problem-based elementary methods courses. We explored the potential of WebQuests to address three dilemmas faced in teacher education: (a) modeling instruction that is based on current learning theory and research-based practices, (b) providing preservice teachers with…

  2. AI tools in computer based problem solving

    NASA Technical Reports Server (NTRS)

    Beane, Arthur J.

    1988-01-01

    The use of computers to solve value-oriented, deterministic, algorithmic problems has evolved a structured life-cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different, much less structured model. Traditionally, the two approaches have been used completely independently. With the advent of low-cost, high-performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples of both development and deployment of applications involving a blending of AI and traditional techniques are given.

  3. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged across Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
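
The idea of using a functional ANOVA decomposition to decide where basis functions are worth spending can be sketched on a toy two-input model; the function and grid below are illustrative assumptions, not the paper's test cases:

```python
# First-order functional ANOVA on a grid: estimate the main effect of each
# input and its variance contribution, which an adaptive scheme can use to
# decide which random dimensions deserve more polynomial chaos basis
# functions. Toy model: x1 matters much more than x2.
def f(x1, x2):
    return 4.0 * x1 ** 2 + 0.1 * x2

grid = [i / 10 for i in range(11)]          # uniform grid on [0, 1]
vals = [[f(a, b) for b in grid] for a in grid]
m = len(grid)

f0 = sum(sum(r) for r in vals) / m**2                       # overall mean
f1 = [sum(vals[i]) / m - f0 for i in range(m)]              # main effect of x1
f2 = [sum(vals[i][j] for i in range(m)) / m - f0 for j in range(m)]  # of x2

V1 = sum(v * v for v in f1) / m    # variance explained by x1
V2 = sum(v * v for v in f2) / m    # variance explained by x2
print(V1 > V2)   # → True: spend the richer basis on dimension 1
```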

  4. LTI system order reduction approach based on asymptotical equivalence and the Co-operation of biology-related algorithms

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.

    2017-02-01

    A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all the parameters of the model with lower order to be identified and by definition, provides the model with the required steady-state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.

  5. Nonlinear Schrödinger approach to European option pricing

    NASA Astrophysics Data System (ADS)

    Wróblewski, Marcin

    2017-05-01

    This paper deals with numerical option pricing methods based on a Schrödinger model rather than the Black-Scholes model. Nonlinear Schrödinger boundary value problems seem to be alternatives to linear models that better reflect the complexity and behavior of real markets. Therefore, building on the nonlinear Schrödinger option pricing model proposed in the literature, in this paper a model augmented by external atomic potentials is proposed and numerically tested. In terms of statistical physics, the developed model describes the option in analogy to a pair of two identical quantum particles occupying the same state. The proposed model is used to price European call options on a stock index. The model is calibrated using the Levenberg-Marquardt algorithm based on market data. A Runge-Kutta method is used to solve the discretized boundary value problem numerically. Numerical results are provided and discussed. It seems that our proposal models phenomena observed in the real market more accurately than linear models do.
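
The paper uses a Runge-Kutta method for the discretized problem; as a hedged illustration, here is the classical fourth-order step applied to a simple test equation (not the option-pricing boundary value problem itself):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustration on y' = -y, y(0) = 1, whose exact solution is exp(-t).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):                 # integrate to t = 1
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(abs(y - math.exp(-1)))   # global error around 1e-10
```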

  6. Model based defect characterization in composites

    NASA Astrophysics Data System (ADS)

    Roberts, R.; Holland, S.

    2017-02-01

    Work is reported on model-based defect characterization in CFRP composites. The work utilizes computational models of the interaction of NDE probing energy fields (ultrasound and thermography), to determine 1) the measured signal dependence on material and defect properties (forward problem), and 2) an assessment of performance-critical defect properties from analysis of measured NDE signals (inverse problem). Work is reported on model implementation for inspection of CFRP laminates containing multi-ply impact-induced delamination, with application in this paper focusing on ultrasound. A companion paper in these proceedings summarizes corresponding activity in thermography. Inversion of ultrasound data is demonstrated showing the quantitative extraction of damage properties.

  7. An inverse finance problem for estimation of the volatility

    NASA Astrophysics Data System (ADS)

    Neisy, A.; Salmani, K.

    2013-01-01

    The Black-Scholes model, as a base model for pricing in derivatives markets, has some deficiencies, such as ignoring market jumps and treating market volatility as a constant factor. In this article, we introduce a pricing model for European options under a jump-diffusion underlying asset. Then, using appropriate numerical methods, we solve this model, which contains an integral term as well as derivative terms. Finally, considering volatility as an unknown parameter, we estimate it using our proposed model. For the purpose of estimating volatility, we formulate an inverse problem: the inverse problem model is first defined, and volatility is then estimated by minimizing a functional with Tikhonov regularization.
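
For a linear forward operator, minimization with Tikhonov regularization reduces to the regularized normal equations. A small self-contained sketch with made-up data (the article's forward model is nonlinear; this only illustrates the regularization step):

```python
# Tikhonov-regularized least squares for a small linear test problem:
# minimize ||A x - b||^2 + alpha * ||x||^2, solved via the normal
# equations (A^T A + alpha I) x = A^T b (2x2 case, Cramer's rule).
A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]   # illustrative design matrix
b = [2.1, 2.9, 4.2]                        # illustrative observations
alpha = 1e-3                               # regularization parameter

# Build M = A^T A + alpha I and rhs = A^T b.
M = [[sum(A[k][i] * A[k][j] for k in range(3)) + (alpha if i == j else 0.0)
      for j in range(2)] for i in range(2)]
rhs = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
x = [(rhs[0] * M[1][1] - rhs[1] * M[0][1]) / det,
     (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det]
print(x)   # close to the unregularized fit, slightly damped by alpha
```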

  8. Safety Assessment of Dangerous Goods Transport Enterprise Based on the Relative Entropy Aggregation in Group Decision Making Model

    PubMed Central

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    The safety of dangerous goods transport is directly related to the operational safety of a dangerous goods transport enterprise. Aiming at the problem of the high accident rate and large harm in dangerous goods logistics transportation, this paper cast the group decision making problem, based on the idea of integration and coordination, as a multiagent multiobjective group decision making problem; a two-stage decision model was established and applied to the safety assessment of dangerous goods transport enterprises. First, we used a dynamic multivalue background and entropy theory to build the first-level multiobjective decision model. Second, experts were weighted according to the principle of clustering analysis, and relative entropy theory was used to establish a second-stage aggregation optimization model based on relative entropy in group decision making; the solution of the model is discussed. Then, after investigation and analysis, we establish the dangerous goods transport enterprise safety evaluation index system. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of this model for assessing dangerous goods transport enterprises, which provides a vital decision making basis for evaluating such enterprises. PMID:25477954
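
One standard way relative entropy enters such aggregation: the consensus distribution minimizing the weighted sum of KL divergences KL(p || p_i) to the experts' judgements p_i is their normalized weighted geometric mean. A sketch with illustrative opinions and weights, not the paper's data or exact model:

```python
# Relative-entropy aggregation of expert judgements: the minimizer of
# sum_i w_i * KL(p || p_i) over probability vectors p is the normalized
# weighted geometric mean of the p_i. All numbers below are illustrative.
experts = [
    [0.6, 0.3, 0.1],      # expert 1's scores over three safety grades
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
]
w = [0.5, 0.3, 0.2]        # expert weights (e.g. from clustering analysis)

# Weighted geometric mean per category, then normalize.
raw = [1.0] * 3
for p, wi in zip(experts, w):
    raw = [r * (pk ** wi) for r, pk in zip(raw, p)]
total = sum(raw)
consensus = [r / total for r in raw]
print([round(c, 3) for c in consensus])
```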

  9. Safety assessment of dangerous goods transport enterprise based on the relative entropy aggregation in group decision making model.

    PubMed

    Wu, Jun; Li, Chengbing; Huo, Yueying

    2014-01-01

    The safety of dangerous goods transport is directly related to the operational safety of a dangerous goods transport enterprise. Aiming at the problem of the high accident rate and large harm in dangerous goods logistics transportation, this paper cast the group decision making problem, based on the idea of integration and coordination, as a multiagent multiobjective group decision making problem; a two-stage decision model was established and applied to the safety assessment of dangerous goods transport enterprises. First, we used a dynamic multivalue background and entropy theory to build the first-level multiobjective decision model. Second, experts were weighted according to the principle of clustering analysis, and relative entropy theory was used to establish a second-stage aggregation optimization model based on relative entropy in group decision making; the solution of the model is discussed. Then, after investigation and analysis, we establish the dangerous goods transport enterprise safety evaluation index system. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of this model for assessing dangerous goods transport enterprises, which provides a vital decision making basis for evaluating such enterprises.

  10. Evidence-based ergonomics: a model and conceptual structure proposal.

    PubMed

    Silveira, Dierci Marcio

    2012-01-01

    In Human Factors and Ergonomics Science (HFES), it is difficult to identify the best approach for tackling the workplace and systems design problems that need to be solved, and the question "How to solve the human factors and ergonomics problems that are identified?" has also been advocated as transdisciplinary and multidisciplinary. The proposition of this study is to combine the theoretical approach of Sustainability Science, the taxonomy of the Human Factors and Ergonomics (HFE) discipline and the framework of Evidence-Based Medicine in an attempt to apply them to Human Factors and Ergonomics. Applications of ontologies are known in the fields of medical research and computer science. By scrutinizing the key requirements for the structuring of HFES knowledge, a reference model was designed. First, the important requirements for HFES concept structuring, as regarded by Meister, were identified. Second, an evidence-based ergonomics framework was developed as a reference model composed of six levels based on these requirements. Third, a mapping tool was devised using linguistic resources to translate human work, systems environments and the complexities inherent in their hierarchical relationships, to support future development at Level 2 of the reference model and to meet the two major challenges for HFES, namely, identifying what problems should be addressed in HFE as an autonomous science itself and proposing solutions by integrating the concepts and methods applied in HFES to those problems.

  11. Mathematical modeling of swirled flows in industrial applications

    NASA Astrophysics Data System (ADS)

    Dekterev, A. A.; Gavrilov, A. A.; Sentyabov, A. V.

    2018-03-01

    Swirling flows are widely used in technological devices and are characterized by a wide range of flow regimes. 3D mathematical modeling of flows is widely used in research and design. For correct mathematical modeling of such flows, it is necessary to use turbulence models that take into account the important features of the flow. Based on experience in the computational modeling of a wide class of problems with swirling flows, recommendations on the use of turbulence models for calculating applied problems are proposed.

  12. Numerical Modelling of Foundation Slabs with use of Schur Complement Method

    NASA Astrophysics Data System (ADS)

    Koktan, Jiří; Brožovský, Jiří

    2017-10-01

    The paper discusses numerical modelling of foundation slabs with the use of advanced numerical approaches that are suitable for parallel processing. The solution is based on the Finite Element Method with slab-type elements. The subsoil is modelled with a Winkler-type contact model (alternatively, a multi-parameter model can be used). The proposed modelling approach uses the Schur complement method to speed up the computations. The method is based on a special division of the analyzed model into several substructures. It adds some complexity to the numerical procedures, especially when subsoil models are used inside the finite element solution. On the other hand, this method makes a fast solution of large models possible, though it introduces further complications into the process. Thus, the main aim of this paper is to verify that such a method can be successfully used for this type of problem. The most suitable finite elements are discussed, along with the finite element mesh and the limitations on its construction for such a problem. The core approaches of the implementation of the Schur complement method for this type of problem are also presented. The proposed approach was implemented in the form of a computer program, which is briefly introduced, and results of example computations are presented. They demonstrate an important speed-up of the solution, even in the case of non-parallel processing, and the ability to bypass size limitations of numerical models with the use of the discussed approach.
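
The Schur complement solution pattern referred to above, eliminating substructure interiors and solving a reduced interface system, looks as follows on a toy block system (all numbers are illustrative):

```python
# Solving a block system with the Schur complement: eliminate the interior
# unknowns x of the substructure, solve the reduced interface system
# S y = g - B^T A^{-1} f, then back-substitute for x.
# Toy numbers (A diagonal so A^{-1} is trivial):
A = [2.0, 4.0]                 # diagonal of the substructure block
B = [[1.0], [1.0]]             # coupling to one interface unknown
C = [[5.0]]
f = [6.0, 8.0]
g = [7.0]

Ainv_f = [f[i] / A[i] for i in range(2)]
Ainv_B = [[B[i][0] / A[i]] for i in range(2)]

S = C[0][0] - sum(B[i][0] * Ainv_B[i][0] for i in range(2))   # Schur complement
y = (g[0] - sum(B[i][0] * Ainv_f[i] for i in range(2))) / S   # interface solve
x = [Ainv_f[i] - Ainv_B[i][0] * y for i in range(2)]          # back-substitute

# Residual check against the full coupled system:
r1 = [A[i] * x[i] + B[i][0] * y - f[i] for i in range(2)]
r2 = B[0][0] * x[0] + B[1][0] * x[1] + C[0][0] * y - g[0]
print(max(abs(r1[0]), abs(r1[1]), abs(r2)))   # effectively zero
```

In the parallel setting, each substructure forms its contribution to S independently, which is what makes the method attractive for the slab models above.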

  13. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation, solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
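
The two-step idea, independent subproblems coordinated by a penalty on shared-resource conflicts, can be sketched with two one-slot jobs competing for one resource; this is a toy illustration of the penalty-function coordination, not the paper's TA algorithm:

```python
# Penalty-method coordination sketch: two jobs (subproblems) each choose a
# start slot to minimize their own completion time plus a penalty for using
# the shared resource in the same slot. Iterating with a growing penalty
# weight resolves the conflict.
slots = range(5)
rho = 0.0
start = [0, 0]                       # both jobs initially want slot 0
for _ in range(10):
    rho += 1.0                       # increase the penalty weight
    for j in (0, 1):
        other = start[1 - j]
        # local subproblem: earliest slot, penalized if it collides
        start[j] = min(slots, key=lambda s: s + (rho if s == other else 0.0))
print(sorted(start))   # → [0, 1]: the conflict is resolved
```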

  14. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
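
A minimal sketch of modal-decomposition reduction: once a thermal model is diagonalized into modes, truncation keeps only the significant (slow) modes. The decay rates and weights below are hypothetical, not hyperthermia data:

```python
import math

# Modal-decomposition reduction sketch: a model diagonalized into modes
# with decay rates lam[i] and output weights c[i]. Truncating to the
# slowest (most significant) modes gives the reduced-order response.
lam = [0.5, 2.0, 40.0, 90.0]       # modal decay rates (1/s), hypothetical
c   = [1.0, 0.6, 0.05, 0.02]       # contribution of each mode to the output

def response(t, n_modes):
    """Output at time t using the first n_modes modes."""
    return sum(c[i] * math.exp(-lam[i] * t) for i in range(n_modes))

t = 0.5
full = response(t, 4)       # full-order model
reduced = response(t, 2)    # keep only the two slow modes
print(abs(full - reduced))  # tiny: the fast modes have already decayed
```

Unlike balanced realization, this truncation needs no knowledge of sensor or actuator placement, which is the robustness advantage noted above.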

  15. A Framework for Sharing and Integrating Remote Sensing and GIS Models Based on Web Service

    PubMed Central

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a “black box” and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users. PMID:24901016

  16. A framework for sharing and integrating remote sensing and GIS models based on Web service.

    PubMed

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models are critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on a Web service for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models into standard Web services for sharing and interoperation and then to integrate the RS and GIS models using Web services. For the former, a "black box" and a visual method are employed to facilitate the publishing of the models as Web services. For the latter, model integration based on the geospatial workflow and semantic supported marching method is introduced. Under this framework, model sharing and integration is applied for developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.

  17. The Applicability of Market-Based Solutions to Public Sector Problems.

    ERIC Educational Resources Information Center

    Kelley, Carolyn

    This paper examines the ways in which private- and public-sector location affects organizational structure and functions, and the implications for school reform. It identifies the differences that are often overlooked when policymakers utilize market-based organizational reform models to address public school problems. Two fundamental questions…

  18. Simplifications for hydronic system models in modelica

    DOE PAGES

    Jorissen, F.; Wetter, M.; Helsen, L.

    2018-01-12

    Building systems and their heating, ventilation and air conditioning flow networks, are becoming increasingly complex. Some building energy simulation tools simulate these flow networks using pressure drop equations. These flow network models typically generate coupled algebraic nonlinear systems of equations, which become increasingly more difficult to solve as their sizes increase. This leads to longer computation times and can cause the solver to fail. These problems also arise when using the equation-based modelling language Modelica and Annex 60-based libraries. This may limit the applicability of the library to relatively small problems unless problems are restructured. This paper discusses two algebraic loop types and presents an approach that decouples algebraic loops into smaller parts, or removes them completely. The approach is applied to a case study model where an algebraic loop of 86 iteration variables is decoupled into smaller parts with a maximum of five iteration variables.

  19. On Multifunctional Collaborative Methods in Engineering Science

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2001-01-01

    Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple method approach is advantageous when interfacing diverse disciplines in which each of the method's strengths are utilized.

  20. A charge-based model of Junction Barrier Schottky rectifiers

    NASA Astrophysics Data System (ADS)

    Latorre-Rey, Alvaro D.; Mudholkar, Mihir; Quddus, Mohammed T.; Salih, Ali

    2018-06-01

    A new charge-based model of the electric field distribution for Junction Barrier Schottky (JBS) diodes is presented, based on the description of the charge-sharing effect between the vertical Schottky junction and the lateral pn-junctions that constitute the active cell of the device. In our model, the inherently 2-D problem is transformed into a simple but accurate 1-D problem which has a closed analytical solution that captures the reshaping and reduction of the electric field profile responsible for the improved electrical performance of these devices, while preserving physically meaningful expressions that depend on relevant device parameters. The model is validated by comparing calculated electric field profiles with drift-diffusion simulations of a JBS device, showing good agreement. Even though other, fully 2-D models provide higher accuracy, they lack physical insight, making the proposed model a useful tool for device design.

  1. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
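The GA-plus-FLC pairing described above can be sketched with a toy genetic algorithm. Everything here is an assumption for illustration: the two tuned values stand in for FLC membership-function parameters, and the quadratic fitness stands in for controller performance, which in the real system would come from running the pH control loop.

```python
import random

random.seed(42)

# Toy stand-in for controller performance: we "tune" two hypothetical FLC
# membership parameters whose (made-up) optimum is (5.0, 2.0).
def fitness(ind):
    return -((ind[0] - 5.0) ** 2 + (ind[1] - 2.0) ** 2)

def tournament(pop, k=3):
    # pick the fittest of k randomly chosen individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    w = random.random()  # blend crossover
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(ind, rate=0.2, sigma=0.5):
    return [g + random.gauss(0, sigma) if random.random() < rate else g
            for g in ind]

pop = [[random.uniform(0, 10), random.uniform(0, 10)] for _ in range(30)]
for _ in range(60):  # generational loop: select, recombine, mutate
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]
best = max(pop, key=fitness)
print(best)
```

The selection pressure drives the population toward the optimum even though the fitness landscape is never examined analytically, which is the property that makes GAs usable when fitness can only be measured by running the plant.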

  2. Distributed optimisation problem with communication delay and external disturbance

    NASA Astrophysics Data System (ADS)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the problem for MASs subject to both disturbance and communication delay. In the proposed algorithm, each agent interacts with its neighbours through the connected topology, and delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.

  3. Parent-Reported Behavioral and Psychiatric Problems Mediate the Relationship between Sleep-Disordered Breathing and Cognitive Deficits in School-Aged Children.

    PubMed

    Smith, Dale L; Gozal, David; Hunter, Scott J; Kheirandish-Gozal, Leila

    2017-01-01

    Numerous studies over the past several decades have illustrated that children who suffer from sleep-disordered breathing (SDB) are at greater risk for cognitive, behavioral, and psychiatric problems. Although behavioral problems have been proposed as a potential mediator between SDB and cognitive functioning, these relationships have not been critically examined. This analysis is based on a community-based cohort of 1,115 children who underwent overnight polysomnography and cognitive and behavioral phenotyping. A structural model of the relationships between SDB, behavior, and cognition, together with two recently developed mediation approaches based on propensity score weighting and resampling, was used to assess the mediational role of parent-reported behavior and psychiatric problems in the relationship between SDB and cognitive functioning. Multiple models utilizing two different SDB definitions further explored direct effects of SDB on cognition as well as indirect effects through behavioral pathology. All models were adjusted for age, sex, race, BMI z-score, and asthma status. Indirect effects of SDB through behavior problems were significant in all mediation models, while direct effects of SDB on cognition were not. The findings were consistent across different mediation procedures and remained essentially unaltered when different criteria for SDB, behavior, and cognition were used. Potential effects of SDB on cognitive functioning appear to occur through behavioral problems that are detectable in this pediatric population. Thus, early attentional or behavioral pathology may be implicated in the cognitive functioning deficits associated with SDB, and may present an early morbidity-related susceptibility biomarker.

  4. A toolbox and record for scientific models

    NASA Technical Reports Server (NTRS)

    Ellman, Thomas

    1994-01-01

    Computational science presents a host of challenges for the field of knowledge-based software design. Scientific computation models are difficult to construct. Models constructed by one scientist are easily misapplied by other scientists to problems for which they are not well-suited. Finally, models constructed by one scientist are difficult for others to modify or extend to handle new types of problems. Construction of scientific models actually involves much more than the mechanics of building a single computational model. In the course of developing a model, a scientist will often test a candidate model against experimental data or against a priori expectations. Test results often lead to revisions of the model and a consequent need for additional testing. During a single model development session, a scientist typically examines a whole series of alternative models, each using different simplifying assumptions or modeling techniques. A useful scientific software design tool must support these aspects of the model development process as well. In particular, it should propose and carry out tests of candidate models. It should analyze test results and identify models and parts of models that must be changed. It should determine what types of changes can potentially cure a given negative test result. It should organize candidate models, test data, and test results into a coherent record of the development process. Finally, it should exploit the development record for two purposes: (1) automatically determining the applicability of a scientific model to a given problem; (2) supporting revision of a scientific model to handle a new type of problem. Existing knowledge-based software design tools must be extended in order to provide these facilities.

  5. Inference of Vohradský's Models of Genetic Networks by Solving Two-Dimensional Function Optimization Problems

    PubMed Central

    Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko

    2013-01-01

    The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the proposed method is not the fastest, it estimates the parameters of Vohradský's models more effectively within sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
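Vohradský's model expresses each gene's rate of change as a sigmoid of the weighted expression of its regulators minus linear degradation. A minimal simulation sketch follows; the two-gene network (weights `W`, biases `b`, rate constants `k1`, `k2`) is made up for illustration and is not taken from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(z0, W, b, k1, k2, dt=0.01, steps=2000):
    """Euler integration of dz_i/dt = k1_i*sigmoid(sum_j W_ij*z_j + b_i) - k2_i*z_i."""
    z = list(z0)
    n = len(z)
    for _ in range(steps):
        dz = [k1[i] * sigmoid(sum(W[i][j] * z[j] for j in range(n)) + b[i])
              - k2[i] * z[i] for i in range(n)]
        z = [z[i] + dt * dz[i] for i in range(n)]
    return z

# Hypothetical 2-gene network: gene 1 activates gene 0's repressor input, etc.
W = [[0.0, -5.0],
     [5.0,  0.0]]
b = [2.0, -2.0]
k1 = [1.0, 1.0]
k2 = [0.5, 0.5]
print(simulate([0.1, 0.1], W, b, k1, k2))
```

Fitting `W`, `b`, `k1`, `k2` to observed time series is what blows up into a high-dimensional optimization problem as the gene count grows, which is the difficulty the paper's two-dimensional decomposition targets.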

  6. High-performance biocomputing for simulating the spread of contagion over large contact networks

    PubMed Central

    2012-01-01

    Background Many important biological problems can be modeled as contagion diffusion processes over interaction networks. This article shows how the EpiSimdemics interaction-based simulation system can be applied to the general contagion diffusion problem. Two specific problems, computational epidemiology and human immune system modeling, are given as examples. We then show how the graphics processing unit (GPU) within each compute node of a cluster can effectively be used to speed-up the execution of these types of problems. Results We show that a single GPU can accelerate the EpiSimdemics computation kernel by a factor of 6 and the entire application by a factor of 3.3, compared to the execution time on a single core. When 8 CPU cores and 2 GPU devices are utilized, the speed-up of the computational kernel increases to 9.5. When combined with effective techniques for inter-node communication, excellent scalability can be achieved without significant loss of accuracy in the results. Conclusions We show that interaction-based simulation systems can be used to model disparate and highly relevant problems in biology. We also show that offloading some of the work to GPUs in distributed interaction-based simulations can be an effective way to achieve increased intra-node efficiency. PMID:22537298
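The reported speed-ups are mutually consistent under an Amdahl's-law argument. A quick back-of-the-envelope check (assuming, for illustration, that the kernel is the only accelerated part of the application):

```python
# If the GPU accelerates the kernel by s = 6x but the whole application
# only by S = 3.3x, Amdahl's law tells us what fraction f of the original
# runtime the kernel must account for.
s_kernel = 6.0
S_app = 3.3

# S = 1 / ((1 - f) + f / s)  =>  f = (1 - 1/S) / (1 - 1/s)
f = (1 - 1 / S_app) / (1 - 1 / s_kernel)
print(round(f, 3))   # ~0.836: kernel is roughly 84% of the original runtime

# sanity check: plug f back into Amdahl's law
S_check = 1.0 / ((1 - f) + f / s_kernel)
print(round(S_check, 2))
```

The remaining ~16% of un-accelerated work also caps the achievable whole-application speed-up at about 1/(1-f) ≈ 6.1x no matter how fast the kernel becomes, which is why combining GPU offload with better inter-node communication matters for scalability.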

  7. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on modeling nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, which can outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from the modeling of specific real phenomena, and their novelty in comparison with alternative existing algorithms for optimization. We first review important concepts on optimization problems, search spaces, and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion of the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation completes the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can ease the implementation of these algorithms.

  8. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Reverse logistics is therefore gaining power and great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) while minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. For solving this problem, we then propose a priority-based Genetic Algorithm (priGA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach by comparison with recent research.
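The priority-based encoding that such chromosomes use can be illustrated for a single transportation stage. This is a simplified Gen-and-Cheng-style decoding sketch, not the paper's three-stage procedure; the node names, supplies, demands, and costs are made up.

```python
def decode(priority, supply, demand, cost):
    """Decode a priority chromosome into shipments for one transportation stage.

    priority: node -> priority value over sources and sinks.
    Returns a dict (source, sink) -> shipped amount."""
    supply, demand = dict(supply), dict(demand)  # work on copies
    ship = {}
    while any(demand.values()):
        # pick the highest-priority node that still has supply or demand
        live = [n for n in priority
                if supply.get(n, 0) > 0 or demand.get(n, 0) > 0]
        node = max(live, key=lambda n: priority[n])
        if node in supply:  # a source: pair it with the cheapest open sink
            s = node
            t = min((t for t in demand if demand[t] > 0),
                    key=lambda t: cost[s][t])
        else:               # a sink: pair it with the cheapest open source
            t = node
            s = min((s for s in supply if supply[s] > 0),
                    key=lambda s: cost[s][t])
        q = min(supply[s], demand[t])  # ship as much as both ends allow
        ship[(s, t)] = ship.get((s, t), 0) + q
        supply[s] -= q
        demand[t] -= q
    return ship

supply = {"S1": 30, "S2": 20}
demand = {"D1": 25, "D2": 25}
cost = {"S1": {"D1": 4, "D2": 6}, "S2": {"D1": 5, "D2": 2}}
priority = {"S1": 4, "D2": 3, "S2": 2, "D1": 1}  # one chromosome
print(decode(priority, supply, demand, cost))
```

Because any permutation of priorities decodes to a feasible shipping plan, a GA can search over priority vectors with ordinary permutation-preserving operators, which is the role WMX plays in the paper.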

  9. Resource-Competing Oscillator Network as a Model of Amoeba-Based Neurocomputer

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki

    An amoeboid organism, Physarum, exhibits rich spatiotemporal oscillatory behavior and various computational capabilities. Previously, the authors created a recurrent neurocomputer incorporating the amoeba as a computing substrate to solve optimization problems. In this paper, considering the amoeba to be a network of oscillators coupled such that they compete for constant amounts of resources, we present a model of the amoeba-based neurocomputer. The model generates a number of oscillation modes and produces not only simple behavior that stabilizes a single mode but also complex behavior that spontaneously switches among different modes, which reproduces well the experimentally observed behavior of the amoeba. To explore the significance of the complex behavior, we pose a test problem for comparing the computational performances of the oscillation modes. The problem is an optimization problem of how to allocate a limited amount of resource to oscillators such that conflicts among them are minimized. We show that the complex behavior enables the network to attain a wider variety of solutions to the problem and produces better performance than the simple behavior.

  10. Progressive Transitions from Algorithmic to Conceptual Understanding in Student Ability To Solve Chemistry Problems: A Lakatosian Interpretation.

    ERIC Educational Resources Information Center

    Niaz, Mansoor

    The main objective of this study is to construct models based on strategies students use to solve chemistry problems and to show that these models form sequences of progressive transitions similar to what Lakatos (1970) in the history of science refers to as progressive 'problemshifts' that increase the explanatory and heuristic power of the models.…

  11. The Effect on Pupils' Science Performance and Problem-Solving Ability through Lego: An Engineering Design-Based Modeling Approach

    ERIC Educational Resources Information Center

    Li, Yanyan; Huang, Zhinan; Jiang, Menglu; Chang, Ting-Wen

    2016-01-01

    Incorporating scientific fundamentals via engineering through a design-based methodology has proven to be highly effective for STEM education. Engineering design can be instantiated for learning as they involve mental and physical stimulation and develop practical skills especially in solving problems. Lego bricks, as a set of toys based on design…

  12. Analyzing Log Files to Predict Students' Problem Solving Performance in a Computer-Based Physics Tutor

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2015-01-01

    This study investigates whether information saved in the log files of a computer-based tutor can be used to predict the problem solving performance of students. The log files of a computer-based physics tutoring environment called Andes Physics Tutor was analyzed to build a logistic regression model that predicted success and failure of students'…
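A logistic regression of problem-solving success on log-file features can be sketched as follows. The features (hint count, incorrect attempts) and labels are synthetic stand-ins for the Andes data, and the threshold generating the labels is arbitrary.

```python
import math

# Hypothetical per-problem features mined from tutor log files:
# (number of hints requested, number of incorrect attempts).
# Synthetic labels (1 = eventually solved) stand in for the Andes logs.
data = [((h, a), 1 if h + a < 5 else 0)
        for h in range(6) for a in range(6)]

def predict(w, x):
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1.0 / (1.0 + math.exp(-z))  # probability of success

# batch gradient descent on the logistic log-loss
w = [0.0, 0.0, 0.0]
for _ in range(5000):
    g = [0.0, 0.0, 0.0]
    for x, y in data:
        e = predict(w, x) - y
        g[0] += e
        g[1] += e * x[0]
        g[2] += e * x[1]
    w = [wi - 0.1 * gi / len(data) for wi, gi in zip(w, g)]

acc = sum((predict(w, x) > 0.5) == bool(y) for x, y in data) / len(data)
print(round(acc, 2))
```

In a real analysis the model would of course be fit on one set of students and evaluated on held-out students, but the fitting step itself looks like this.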

  13. Phase averaging method for the modeling of the multiprobe and cutaneous cryosurgery

    NASA Astrophysics Data System (ADS)

    Shilnikov, K. E.; Kudryashov, N. A.; Gaiur, I. Y.

    2017-12-01

    In this paper we consider the problem of planning and optimization of cutaneous and multiprobe cryosurgery operations. An explicit scheme based on a finite volume approximation of the phase-averaged Pennes bioheat transfer model is applied. The flux relaxation method is used to improve the stability of the scheme. Skin tissue is considered as a strongly inhomogeneous medium. The computerized planning tool is tested on model cryotip-based and cutaneous cryosurgery problems. For cutaneous cryosurgery, mounting an additional freezing element is studied as an approach to optimizing the propagation of the cellular necrosis front.
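To give a feel for the underlying model, here is a minimal 1-D explicit finite-difference sketch of the Pennes bioheat equation with a cold boundary standing in for a cryoprobe. This is a single-phase toy, far simpler than the paper's phase-averaged finite-volume scheme with flux relaxation, and all material parameters are assumed typical values, not the paper's.

```python
# Pennes bioheat equation: rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_m
rho_c = 3.6e6   # tissue volumetric heat capacity, J/(m^3 K)  (assumed)
k = 0.5         # thermal conductivity, W/(m K)               (assumed)
perf = 2.0e3    # perfusion term w_b*rho_b*c_b, W/(m^3 K)     (assumed)
q_m = 400.0     # metabolic heat source, W/m^3                (assumed)
T_a = 37.0      # arterial blood temperature, deg C

nx, dx, dt = 51, 1e-3, 0.5                   # 50 mm domain, 1 mm cells
assert dt < rho_c * dx**2 / (2 * k)          # explicit stability bound

T = [37.0] * nx
T[0] = -40.0                                 # cryoprobe tip held at -40 C
for _ in range(600):                         # 300 s of cooling
    Tn = T[:]                                # snapshot of previous time level
    for i in range(1, nx - 1):
        cond = k * (Tn[i + 1] - 2 * Tn[i] + Tn[i - 1]) / dx**2
        T[i] = Tn[i] + dt / rho_c * (cond + perf * (T_a - Tn[i]) + q_m)
    T[0], T[-1] = -40.0, 37.0                # Dirichlet boundaries
print(round(T[1], 1), round(T[25], 1))
```

Even this toy shows the planning trade-off the paper optimizes: the freezing front advances only a few millimetres in minutes, while perfusion and metabolic heating continuously oppose it.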

  14. Evaluation of parameters of color profile models of LCD and LED screens

    NASA Astrophysics Data System (ADS)

    Zharinov, I. O.; Zharinov, O. O.

    2017-12-01

    The research addresses the problem of parametric identification of the color profile model of LCD (liquid crystal display) and LED (light emitting diode) screens. The color profile model of a screen is based on Grassmann's law of additive color mixture. Mathematically, the problem is to evaluate the unknown parameters (numerical coefficients) of the matrix transformation between different color spaces. Several methods for evaluating these screen profile coefficients were developed, based either on the processing of colorimetric measurements or on the processing of technical documentation data.
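Under Grassmann's additivity, the screen profile is a 3x3 matrix mapping linear (R, G, B) drive to (X, Y, Z), and its columns can be identified directly from colorimetric measurements of the pure primaries. The primary measurements below are hypothetical sRGB-like numbers, not data from the paper.

```python
# Measured XYZ of each primary at full drive (hypothetical values).
XYZ_R = (41.2, 21.3, 1.9)
XYZ_G = (35.8, 71.5, 11.9)
XYZ_B = (18.0, 7.2, 95.0)

# The profile matrix M has the primary measurements as its columns.
M = [[XYZ_R[i], XYZ_G[i], XYZ_B[i]] for i in range(3)]

def to_xyz(rgb):
    # matrix-vector product: XYZ = M * (R, G, B)
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))

# By additivity, white = R + G + B at full drive: components simply add.
white = to_xyz((1.0, 1.0, 1.0))
print([round(v, 1) for v in white])
```

When more than three colorimetric measurements are available, the same coefficients would instead be estimated by least squares over all measured patches, which is the "processing of colorimetric measurements" route the abstract mentions.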

  15. Implications of Middle School Behavior Problems for High School Graduation and Employment Outcomes of Young Adults: Estimation of a Recursive Model

    PubMed Central

    Karakus, Mustafa C.; Salkever, David S.; Slade, Eric P.; Ialongo, Nicholas; Stuart, Elizabeth

    2013-01-01

    The potentially serious adverse impacts of behavior problems during adolescence on employment outcomes in adulthood provide a key economic rationale for early intervention programs. However, the extent to which lower educational attainment accounts for the total impact of adolescent behavior problems on later employment remains unclear. As an initial step in exploring this issue, we specify and estimate a recursive bivariate probit model that 1) relates middle school behavior problems to high school graduation and 2) models later employment in young adulthood as a function of these behavior problems and of high school graduation. Our model thus allows for both a direct effect of behavior problems on later employment as well as an indirect effect that operates via graduation from high school. Our empirical results, based on analysis of data from the NELS, suggest that the direct effects of externalizing behavior problems on later employment are not significant but that these problems have important indirect effects operating through high school graduation. PMID:23576834

  16. Research Progress on Dark Matter Model Based on Weakly Interacting Massive Particles

    NASA Astrophysics Data System (ADS)

    He, Yu; Lin, Wen-bin

    2017-04-01

    The cosmological model of cold dark matter (CDM) with dark energy and a scale-invariant adiabatic primordial power spectrum has been considered the standard cosmological model, i.e. the ΛCDM model. Weakly interacting massive particles (WIMPs) are a prominent candidate for the CDM. Many models extended from the standard model can provide the WIMPs naturally. The standard calculations of the relic abundance of dark matter show that the WIMPs are well in agreement with the astronomical observation of Ω_DM h² ≈ 0.11. The WIMPs have a relatively large mass and a relatively slow velocity, so they easily aggregate into clusters, and the results of numerical simulations based on the WIMPs agree well with the observational results of cosmic large-scale structures. On the experimental side, the present accelerator and non-accelerator direct/indirect detections are mostly designed for the WIMPs. Thus, wide attention has been paid to the CDM model based on the WIMPs. However, the ΛCDM model has a serious problem in explaining the small-scale structures under one Mpc. Different dark matter models have been proposed to alleviate the small-scale problem, but so far there is no evidence strong enough to exclude the CDM model. We introduce the research progress of the dark matter model based on the WIMPs, such as the WIMPs miracle, numerical simulation, the small-scale problem, and the direct/indirect detection, analyze the criteria for discriminating "cold", "hot", and "warm" dark matter, and present the future prospects for the study in this field.

  17. Process-based models are required to manage ecological systems in a changing world

    Treesearch

    K. Cuddington; M.-J. Fortin; L.R. Gerber; A. Hastings; A. Liebhold; M. OConnor; C. Ray

    2013-01-01

    Several modeling approaches can be used to guide management decisions. However, some approaches are better fitted than others to address the problem of prediction under global change. Process-based models, which are based on a theoretical understanding of relevant ecological processes, provide a useful framework to incorporate specific responses to altered...

  18. A connectionist model for diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Peng, Yun; Reggia, James A.

    1989-01-01

    A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.
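The competition dynamic can be illustrated with a schematic sketch. This is not the exact Peng-Reggia activation rule: here disorder nodes simply compete for each present manifestation's "support" in proportion to activation times causal strength, then relax toward the support they won. All causal strengths are made up.

```python
# strength[disorder][manifestation]: hypothetical causal strengths.
strength = {
    "d1": {"m1": 0.9, "m2": 0.8},
    "d2": {"m1": 0.7, "m3": 0.9},
    "d3": {"m2": 0.2, "m3": 0.3},
}
present = ["m1", "m2", "m3"]       # observed manifestations
act = {d: 0.5 for d in strength}   # initial activations

for _ in range(100):
    support = {d: 0.0 for d in strength}
    for m in present:
        # disorders split this manifestation's unit of support
        weights = {d: act[d] * strength[d].get(m, 0.0) for d in strength}
        total = sum(weights.values()) or 1.0
        for d in strength:
            support[d] += weights[d] / total
    for d in strength:             # relax toward normalized won support
        act[d] += 0.2 * (support[d] / len(present) - act[d])

ranked = sorted(act, key=act.get, reverse=True)
print(ranked)
```

The rich-get-richer feedback is the point: disorders that explain manifestations strongly accumulate activation and crowd out weak competitors, so the network settles into a locally optimal explanation without any global search.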

  19. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    NASA Technical Reports Server (NTRS)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
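The decomposition in question (Gupta et al., 2009) writes NSE = 2·α·r − α² − β_n², where α = σ_s/σ_o is the variability ratio, β_n = (μ_s − μ_o)/σ_o the normalized bias, and r the linear correlation. The identity can be checked numerically; the streamflow values below are made-up illustrative numbers.

```python
import statistics as st

obs = [2.1, 3.4, 5.9, 4.2, 6.8, 3.0, 2.5, 5.1]  # observed (illustrative)
sim = [2.4, 3.1, 5.2, 4.8, 6.1, 3.5, 2.9, 4.6]  # simulated (illustrative)

mu_o, mu_s = st.fmean(obs), st.fmean(sim)
sd_o, sd_s = st.pstdev(obs), st.pstdev(sim)     # population std. deviations
r = sum((o - mu_o) * (s - mu_s)
        for o, s in zip(obs, sim)) / (len(obs) * sd_o * sd_s)

# direct definition: NSE = 1 - MSE / var(obs)
mse = st.fmean((s - o) ** 2 for s, o in zip(sim, obs))
nse_direct = 1 - mse / sd_o ** 2

# Gupta et al. decomposition: NSE = 2*alpha*r - alpha^2 - beta_n^2
alpha = sd_s / sd_o
beta_n = (mu_s - mu_o) / sd_o
nse_decomp = 2 * alpha * r - alpha ** 2 - beta_n ** 2

print(round(nse_direct, 4), round(nse_decomp, 4))
```

The decomposition makes the calibration pathology visible: maximizing NSE rewards r but penalizes α quadratically, so the optimum tends to underestimate variability (α < 1) whenever r < 1, which is one of the interactions the paper discusses.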

  20. Augmented Lagrange Hopfield network for solving economic dispatch problem in competitive environment

    NASA Astrophysics Data System (ADS)

    Vo, Dieu Ngoc; Ongsakul, Weerakorn; Nguyen, Khai Phuc

    2012-11-01

    This paper proposes an augmented Lagrange Hopfield network (ALHN) for solving the economic dispatch (ED) problem in the competitive environment. The proposed ALHN is a continuous Hopfield network with its energy function based on an augmented Lagrange function for efficiently dealing with constrained optimization problems. The ALHN method can overcome the drawbacks of the conventional Hopfield network such as local optima, long computational time, and linear constraints. The proposed method is used for solving the ED problem with two revenue models: payment for power delivered and payment for reserve allocated. The proposed ALHN has been tested on two systems of 3 units and 10 units for the two considered revenue models. The obtained results from the proposed method are compared to those from differential evolution (DE) and particle swarm optimization (PSO) methods. The comparison indicates that the proposed method is very efficient for solving the problem. Therefore, the proposed ALHN could be a favorable tool for the ED problem in the competitive environment.

  1. A pressure relaxation closure model for one-dimensional, two-material Lagrangian hydrodynamics based on the Riemann problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R; Shashkov, Mikhail J

    2009-01-01

    Despite decades of development, Lagrangian hydrodynamics of strength-free materials presents numerous open issues, even in one dimension. We focus on the problem of closing a system of equations for a two-material cell under the assumption of a single velocity model. There are several existing models and approaches, each possessing different levels of fidelity to the underlying physics and each exhibiting unique features in the computed solutions. We consider the case in which the change in heat in the constituent materials in the mixed cell is assumed equal. An instantaneous pressure equilibration model for a mixed cell can be cast as four equations in four unknowns, comprised of the updated values of the specific internal energy and the specific volume for each of the two materials in the mixed cell. The unique contribution of our approach is a physics-inspired, geometry-based model in which the updated values of the sub-cell, relaxing-toward-equilibrium constituent pressures are related to a local Riemann problem through an optimization principle. This approach couples the modeling problem of assigning sub-cell pressures to the physics associated with the local, dynamic evolution. We package our approach in the framework of a standard predictor-corrector time integration scheme. We evaluate our model using idealized, two-material problems using either ideal-gas or stiffened-gas equations of state and compare these results to those computed with the method of Tipton and with corresponding pure-material calculations.

  2. Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.

    PubMed

    Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante

    2014-10-01

    In this paper, the well-known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets, using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem is presented as an unbiased alternative to the already existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns according to the order of the targets enables the classifier to further tackle ordinal regression problems. The proposed method has been validated in an experimental study comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.

  3. Implementation of DRG Payment in France: issues and recent developments.

    PubMed

    Or, Zeynep

    2014-08-01

    In France, a DRG-based payment system was introduced in 2004/2005 for funding acute services in all hospitals with the objectives of improving hospital efficiency, transparency and fairness in payments to public and private hospitals. Despite the initial consensus on the necessity of the reform, providers have become increasingly critical of the system because of the problems encountered during the implementation. In 2012 the government announced its intention to modify the payment model to better deal with its adverse effects. The paper reports on the issues raised by the DRG-based payment in the French hospital sector and provides an overview of the main problems with the French DRG payment model. It also summarises the evidence on its impact and presents recent developments for reforming the current model. DRG-based payment addressed some of the chronic problems inherent in the French hospital market and improved accountability and productivity of health-care facilities. However, it has also created new problems for controlling hospital activity and ensuring that care provided is medically appropriate. In order to alter its adverse effects the French DRG model needs to better align greater efficiency with the objectives of better quality and effectiveness of care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. A Multidirectional Model for Assessing Learning Disabled Students' Intelligence: An Information-Processing Framework.

    ERIC Educational Resources Information Center

    Swanson, H. Lee

    1982-01-01

    An information-processing approach to the assessment of learning disabled students' intellectual performance is presented. The model is based on the assumption that intelligent behavior comprises a variety of problem-solving strategies. An account of child problem solving is explained and illustrated with a "thinking aloud" protocol.…

  5. Perceptions of Problem Behavior in Adolescents' Families: Perceiver, Target, and Family Effects

    ERIC Educational Resources Information Center

    Manders, Willeke A.; Janssens, Jan M. A. M.; Cook, William L.; Oud, Johan H. L.; De Bruyn, Eric E. J.; Scholte, Ron H. J.

    2009-01-01

    Considerable research has focused on the reliability and validity of informant reports of family behavior, especially maternal reports of adolescent problem behavior. None of these studies, however, has based their orientation on a theoretical model of interpersonal perception. In this study we used the social relations model (SRM) to examine…

  6. "Breaking the Mold" in the Dissertation: Implementing a Problem-Based, Decision-Oriented Thesis Project

    ERIC Educational Resources Information Center

    Archbald, Doug

    2010-01-01

    This article offers lessons from an initiative refashioning the doctoral thesis in an education leadership program. The program serves a practitioner clientele; most are teachers and administrators. The new model for the thesis emphasizes leadership, problem solving, decision making, and organizational improvement. The former model was a…

  7. Developing Environmental Decision-making in Middle School Classes.

    ERIC Educational Resources Information Center

    Rowland, Paul McD.; Adkins, Carol R.

    This paper presents Rowland's Ways of Knowing and Decision-making Model for curriculum development and how it can be applied to environmental education curricula. The model uses a problem solving approach based on steps of: (1) coming to know the problem through the ways of knowing of the disciplines and personal knowledge; (2) proposing solutions…

  8. Two dimensional finite element heat transfer models for softwood

    Treesearch

    Hongmei Gu; John F. Hunt

    2004-01-01

    The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require analyses be based on fundamental material properties of the wood structure. Most heat transfer models use average thermal properties across either the radial or tangential directions and have not differentiated the effects of cellular alignment, earlywood/latewood...

  9. Causal Client Models in Selecting Effective Interventions: A Cognitive Mapping Study

    ERIC Educational Resources Information Center

    de Kwaadsteniet, Leontien; Hagmayer, York; Krol, Nicole P. C. M.; Witteman, Cilia L. M.

    2010-01-01

    An important reason to choose an intervention to treat psychological problems of clients is the expectation that the intervention will be effective in alleviating the problems. The authors investigated whether clinicians base their ratings of the effectiveness of interventions on models that they construct representing the factors causing and…

  10. Effect of Worked Examples on Mental Model Progression in a Computer-Based Simulation Learning Environment

    ERIC Educational Resources Information Center

    Darabi, Aubteen; Nelson, David W.; Meeker, Richard; Liang, Xinya; Boulware, Wilma

    2010-01-01

    In a diagnostic problem solving operation of a computer-simulated chemical plant, chemical engineering students were randomly assigned to two groups: one studying product-oriented worked examples, the other practicing conventional problem solving. Effects of these instructional strategies on the progression of learners' mental models were examined…

  11. A 1D-2D Shallow Water Equations solver for discontinuous porosity field based on a Generalized Riemann Problem

    NASA Astrophysics Data System (ADS)

    Ferrari, Alessia; Vacondio, Renato; Dazzi, Susanna; Mignosa, Paolo

    2017-09-01

    A novel augmented Riemann Solver capable of handling porosity discontinuities in 1D and 2D Shallow Water Equation (SWE) models is presented. With the aim of accurately approximating the porosity source term, a Generalized Riemann Problem is derived by adding an additional fictitious equation to the SWEs system and imposing mass and momentum conservation across the porosity discontinuity. The modified Shallow Water Equations are theoretically investigated, and the implementation of an augmented Roe Solver in a 1D Godunov-type finite volume scheme is presented. Robust treatment of transonic flows is ensured by introducing an entropy fix based on the wave pattern of the Generalized Riemann Problem. An Exact Riemann Solver is also derived in order to validate the numerical model. As an extension of the 1D scheme, an analogous 2D numerical model is also derived and validated through test cases with radial symmetry. The capability of the 1D and 2D numerical models to capture different wave patterns is assessed against several Riemann Problems with different wave patterns.
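
    The porosity-aware Generalized Riemann Problem above builds on standard Godunov/approximate-Riemann machinery for the plain 1D SWEs. As background only, here is a minimal first-order Godunov step with an HLL approximate Riemann flux for a frictionless, porosity-free 1D dam break; the grid size, time horizon, and initial depths are illustrative assumptions, not values from the paper.

```python
import numpy as np

g = 9.81
N, L, T = 200, 1.0, 0.02
dx = L / N
h = np.where(np.arange(N) * dx < 0.5, 2.0, 1.0)   # dam-break initial depth
hu = np.zeros(N)                                   # initial discharge

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def hll(hl, hul, hr, hur):
    # HLL approximate Riemann flux at one cell interface
    ul, ur = hul / hl, hur / hr
    cl, cr = np.sqrt(g * hl), np.sqrt(g * hr)
    sl = min(ul - cl, ur - cr)
    sr = max(ul + cl, ur + cr)
    Fl, Fr = flux(hl, hul), flux(hr, hur)
    if sl >= 0:
        return Fl
    if sr <= 0:
        return Fr
    dU = np.array([hr - hl, hur - hul])
    return (sr * Fl - sl * Fr + sl * sr * dU) / (sr - sl)

t = 0.0
while t < T:
    c = np.abs(hu / h) + np.sqrt(g * h)
    dt = min(0.4 * dx / c.max(), T - t)            # CFL-limited time step
    F = [hll(h[i], hu[i], h[i + 1], hu[i + 1]) for i in range(N - 1)]
    for i in range(1, N - 1):                      # boundary cells held fixed
        dU = (F[i] - F[i - 1]) * dt / dx
        h[i] -= dU[0]
        hu[i] -= dU[1]
    t += dt

mass = h.sum() * dx
```

    Because the update is conservative and the waves do not reach the fixed boundary cells within this short horizon, total water volume is preserved to machine precision.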

  12. [Testing a Model to Predict Problem Gambling in Speculative Game Users].

    PubMed

    Park, Hyangjin; Kim, Suk Sun

    2018-04-01

    The purpose of the study was to develop and test a model for predicting problem gambling in speculative game users based on Blaszczynski and Nower's pathways model of problem and pathological gambling. The participants were 262 speculative game users recruited from seven speculative gambling places located in Seoul, Gangwon, and Gyeonggi, Korea. They completed a structured self-report questionnaire comprising measures of problem gambling, negative emotions, attentional impulsivity, motor impulsivity, non-planning impulsivity, gambler's fallacy, and gambling self-efficacy. Structural Equation Modeling was used to test the hypothesized model and to examine the direct and indirect effects on problem gambling in speculative game users using SPSS 22.0 and AMOS 20.0 programs. The hypothetical research model provided a reasonable fit to the data. Negative emotions, motor impulsivity, gambler's fallacy, and gambling self-efficacy had direct effects on problem gambling in speculative game users, while indirect effects were reported for negative emotions, motor impulsivity, and gambler's fallacy. These predictors explained 75.2% of problem gambling in speculative game users. The findings suggest that intervention programs to reduce negative emotions, motor impulsivity, and gambler's fallacy, and to increase gambling self-efficacy in speculative game users, are needed to prevent their problem gambling. © 2018 Korean Society of Nursing Science.

  13. Network congestion control algorithm based on Actor-Critic reinforcement learning model

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen

    2018-04-01

    To address the network congestion control problem, a congestion control algorithm based on the Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion can be detected and prevented more effectively. A simulation experiment of the network congestion control algorithm is designed according to the Actor-Critic reinforcement learning model. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance, and the packet-dropping probability is adjusted adaptively so as to improve network performance and avoid congestion. Based on these findings, it is concluded that the congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid TCP network congestion.
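
    The general actor-critic idea behind such an AQM controller can be sketched in a few lines. The following is a toy tabular actor-critic that learns a packet-drop probability for a single-state queue; the queue dynamics, reward, target occupancy, and candidate drop levels are all invented for illustration and are not the paper's AQM model.

```python
import numpy as np

rng = np.random.default_rng(0)

TARGET_Q = 10                                        # assumed queue setpoint
DROP_LEVELS = np.array([0.0, 0.05, 0.1, 0.2, 0.4])   # candidate drop probabilities

theta = np.zeros(len(DROP_LEVELS))   # actor: preferences over drop levels
v = 0.0                              # critic: value of the (single) state
alpha, beta, gamma = 0.05, 0.1, 0.95

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

q = 0.0
for step in range(5000):
    probs = softmax(theta)
    a = rng.choice(len(DROP_LEVELS), p=probs)
    drop = DROP_LEVELS[a]
    # toy queue dynamics: Poisson arrivals thinned by the drop action, fixed service
    arrivals = rng.poisson(6) * (1 - drop)
    q = max(q + arrivals - 5, 0.0)
    reward = -abs(q - TARGET_Q)       # penalize deviation from the target occupancy
    td_error = reward + gamma * v - v
    v += beta * td_error              # critic update (TD(0))
    grad = -probs
    grad[a] += 1.0
    theta += alpha * td_error * grad  # actor update (policy-gradient step)

probs = softmax(theta)
```

    The critic's TD error serves as a learning signal that shifts the actor's preferences toward drop levels that keep the queue near its setpoint, which is the same feedback structure an RL-based AQM controller exploits.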

  14. The Development of Learning Model Based on Problem Solving to Construct High-Order Thinking Skill on the Learning Mathematics of 11th Grade in SMA/MA

    ERIC Educational Resources Information Center

    Syahputra, Edi; Surya, Edy

    2017-01-01

    This paper is a summary study of team Postgraduate on 11th grade. The objective of this study is to develop a learning model based on problem solving which can construct high-order thinking on the learning mathematics in SMA/MA. The subject of dissemination consists of Students of 11th grade in SMA/MA in 3 kabupaten/kota in North Sumatera, namely:…

  15. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
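
    The nested aleatory-epistemic sampling described above can be sketched as a two-loop Monte Carlo: an outer loop over epistemic parameters, an inner loop over aleatory ones, with per-epistemic-sample summaries collected into epistemic bounds. The model function, sample sizes, and parameter distributions below are placeholders, not the challenge-problem specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(e, a):
    # placeholder output quantity; stands in for the challenge-problem model
    return np.sin(e) + 0.5 * a**2

N_EPI, N_ALE = 200, 500
summaries = []
for _ in range(N_EPI):
    e = rng.uniform(0.0, 1.0)          # epistemic parameter: interval-valued
    a = rng.normal(0.0, 1.0, N_ALE)    # aleatory parameter: fixed distribution
    y = model(e, a)
    # the inner loop yields a distribution; record a summary per epistemic sample
    summaries.append((y.mean(), y.std()))

means = np.array([s[0] for s in summaries])
# the epistemic spread of the aleatory mean is a typical quantity of interest
lo, hi = means.min(), means.max()
```

    The interval [lo, hi] then expresses epistemic uncertainty about an aleatory statistic, which is the separation the nested scheme is designed to preserve.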

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Christopher

    In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].

  18. Sustainable High-Potential Career Development: A Resource-Based View.

    ERIC Educational Resources Information Center

    Iles, Paul

    1997-01-01

    In the current economic climate, fast-track career models pose problems for individuals and organizations. An alternative model uses a resource-based view of the company and principles of sustainable development borrowed from environmentalism. (SK)

  19. Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.

    PubMed

    Griffith, Daniel A; Peres-Neto, Pedro R

    2006-10-01

    Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to consider explicitly spatial predictors. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
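
    The topology-based (CB) variant can be sketched concretely: build a geographic connectivity matrix, double-center it, take its leading eigenvectors as spatial pattern predictors, and feed them into an ordinary regression. The lattice size, rook adjacency, number of retained eigenvectors, and synthetic response below are illustrative assumptions.

```python
import numpy as np

n = 5                                   # 5 x 5 lattice of sites
N = n * n
W = np.zeros((N, N))
for i in range(N):
    r, c = divmod(i, n)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < n and 0 <= cc < n:
            W[i, rr * n + cc] = 1.0     # rook-adjacency connectivity

# double-center W (the projector used in Moran eigenvector analysis)
M = np.eye(N) - np.ones((N, N)) / N
vals, vecs = np.linalg.eigh(M @ W @ M)
order = np.argsort(vals)[::-1]
spatial_filters = vecs[:, order[:4]]    # top eigenvectors = coarse spatial patterns

# use the filters as extra predictors in an ordinary least-squares regression
rng = np.random.default_rng(2)
y = spatial_filters[:, 0] * 2.0 + rng.normal(0, 0.1, N)
X = np.column_stack([np.ones(N), spatial_filters])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    Because the eigenvectors are mutually orthogonal and orthogonal to the constant vector, they slot into standard linear-model machinery without collinearity problems, which is the flexibility the abstract emphasizes.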

  20. General framework for dynamic large deformation contact problems based on phantom-node X-FEM

    NASA Astrophysics Data System (ADS)

    Broumand, P.; Khoei, A. R.

    2018-04-01

    This paper presents a general framework for modeling dynamic large deformation contact-impact problems based on the phantom-node extended finite element method. The large sliding penalty contact formulation is presented based on a master-slave approach which is implemented within the phantom-node X-FEM and an explicit central difference scheme is used to model the inertial effects. The method is compared with conventional contact X-FEM; advantages, limitations and implementational aspects are also addressed. Several numerical examples are presented to show the robustness and accuracy of the proposed method.

  1. Modeling Thermal Noise from Crystalline Coatings for Gravitational-Wave Detectors

    NASA Astrophysics Data System (ADS)

    Demos, Nicholas; Lovelace, Geoffrey; LSC Collaboration

    2016-03-01

    Current and future ground-based gravitational-wave detectors are limited in sensitivity, in part, by Brownian and thermoelastic noise in each detector's mirror substrate and coating. Crystalline mirror coatings could potentially reduce thermal noise, but thermal noise is challenging to model analytically in the case of crystalline materials. Thermal noise can be modeled using the fluctuation-dissipation theorem, which relates thermal noise to an auxiliary elastic problem. In this poster, I will present results from a new code that models thermal noise by numerically solving the auxiliary elastic problem for various types of crystalline mirror coatings. The code uses a finite element method with adaptive mesh refinement to model the auxiliary elastic problem, which is then related to thermal noise. I will present preliminary results for a crystal coating on a fused silica substrate of varying sizes and elastic properties. This and future work will help develop the next generation of ground-based gravitational-wave detectors.

  2. Introduction of a Population Balance Based Design Problem in a Particle Science and Technology Course for Chemical Engineers

    ERIC Educational Resources Information Center

    Ehrman, Sheryl H.; Castellanos, Patricia; Dwivedi, Vivek; Diemer, R. Bertrum

    2007-01-01

    A particle technology design problem incorporating population balance modeling was developed and assigned to senior and first-year graduate students in a Particle Science and Technology course. The problem focused on particle collection, with a pipeline agglomerator, cyclone, and baghouse comprising the collection system. The problem was developed…

  3. A Case Study in an Integrated Development and Problem Solving Environment

    ERIC Educational Resources Information Center

    Deek, Fadi P.; McHugh, James A.

    2003-01-01

    This article describes an integrated problem solving and program development environment, illustrating the application of the system with a detailed case study of a small-scale programming problem. The system, which is based on an explicit cognitive model, is intended to guide the novice programmer through the stages of problem solving and program…

  4. Pricing and location decisions in multi-objective facility location problem with M/M/m/k queuing systems

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin

    2017-01-01

    This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on a vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO in large-sized problems.
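
    Any multi-objective optimizer of this kind, MOVDO and the genetic-algorithm baselines alike, ranks candidate solutions by Pareto dominance. A minimal non-dominated filter, with invented two-objective values standing in for, say, total cost and mean waiting time, looks like this (this is not the MOVDO algorithm itself):

```python
def pareto_front(points):
    """Return the non-dominated subset, minimizing every objective."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# two objectives per solution, e.g. (total cost, mean waiting time)
solutions = [(1, 9), (2, 7), (3, 8), (4, 4), (5, 5), (6, 2)]
front = pareto_front(solutions)
```

    Dominance ranking like this is what lets the algorithm return a trade-off frontier rather than a single "best" facility configuration.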

  5. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting an initial model based on a power-law probability distribution will provide a more realistic solution. We present a new method that can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance, and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
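
    A common way to draw a power-law (fractal) initial profile with a prescribed Hurst coefficient is spectral synthesis: random phases combined with a power-law amplitude spectrum. The sketch below assumes the fBm-like relation β = 2H + 1 between the spectral exponent and the Hurst coefficient; it is an illustrative sampler, not the authors' exact scheme.

```python
import numpy as np

def fractal_series(n, hurst, seed=0):
    """Spectral synthesis: random phases, power-law amplitude spectrum."""
    rng = np.random.default_rng(seed)
    beta = 2.0 * hurst + 1.0               # fBm-like spectral exponent
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)   # amplitude = sqrt(power), zero-mean
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n=n)
    return (x - x.mean()) / x.std()        # normalized model profile

model0 = fractal_series(1024, hurst=0.7)
```

    Rescaling the normalized profile by the model-space mean and variance then yields candidate initial models for the global optimization loop.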

  6. Matched field localization based on CS-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng

    2016-04-01

    The problems caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, yielding a new, more concise sparse model in which both the scale of the localization problem and the noise level are reduced. The new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as is proved in this paper.
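
    The MUSIC half of the hybrid can be illustrated on its own: form a sample covariance, split signal and noise subspaces by eigendecomposition, and scan a steering-vector grid for pseudospectrum peaks. This sketch is plain MUSIC for a uniform linear array, without the compressive-sensing stage or the SVD-based signal-matrix reduction; the array geometry, source angle, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
M, d, wavelength = 8, 0.5, 1.0           # sensors, spacing in wavelengths
true_deg = 20.0
snapshots = 200

def steering(theta_deg):
    k = 2 * np.pi * d / wavelength
    return np.exp(1j * k * np.arange(M) * np.sin(np.radians(theta_deg)))

# simulate one narrowband source plus white noise
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
X = np.outer(steering(true_deg), s) + 0.1 * (
    rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))

R = X @ X.conj().T / snapshots           # sample covariance
w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = V[:, :-1]                           # noise subspace (all but the largest)

grid = np.arange(-90.0, 90.5, 0.5)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                 for t in grid])
estimate = grid[np.argmax(spec)]
```

    The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace; CS-MUSIC applies the same subspace test after the sparse reduction step.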

  7. Problem-based learning: effects on student’s scientific reasoning skills in science

    NASA Astrophysics Data System (ADS)

    Wulandari, F. E.; Shofiyah, N.

    2018-04-01

    This research aimed to develop an instructional package for problem-based learning to enhance students’ scientific reasoning from the concrete to the formal reasoning level. The instructional package was developed using the Dick and Carey model. The subject of this study was an instructional package for problem-based learning consisting of a lesson plan, handout, student worksheet, and scientific reasoning test. The instructional package was tried out on 4th-semester science education students of Universitas Muhammadiyah Sidoarjo using a one-group pre-test post-test design. The data on scientific reasoning skills were collected using the test. The findings showed that the developed instructional package reflecting problem-based learning was feasible to implement in the classroom. Furthermore, through applying problem-based learning, students could master formal scientific reasoning skills in terms of functional and proportional reasoning, control of variables, and theoretical reasoning.

  8. Effects of a Problem-based Structure of Physics Contents on Conceptual Learning and the Ability to Solve Problems

    NASA Astrophysics Data System (ADS)

    Becerra-Labra, Carlos; Gras-Martí, Albert; Martínez Torregrosa, Joaquín

    2012-05-01

    A model of teaching/learning is proposed based on a 'problem-based structure' of the contents of the course, in combination with a training in paper and pencil problem solving that emphasizes discussion and quantitative analysis, rather than formulae plug-in. The aim is to reverse the high failure and attrition rate among engineering undergraduates taking physics. A number of tests and questionnaires were administered to a group of students following a traditional lecture-based instruction, as well as to another group that was following an instruction scheme based on the proposed approach and the teaching materials developed ad hoc. The results show that students following the new method can develop scientific reasoning habits in problem-solving skills, and show gains in conceptual learning, attitudes and interests, and that the effects of this approach on learning are noticeable several months after the course is over.

  9. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    PubMed Central

    Rahim, Ruzairi Abdul; Chen, Leong Lai; San, Chan Kok; Rahiman, Mohd Hafiz Fazalul; Fea, Pang Jon

    2009-01-01

    This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image. PMID:22291523

  10. An assessment of RELAP5-3D using the Edwards-O'Brien Blowdown problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; Aumiller, D.L.

    1999-07-01

    The RELAP5-3D (version bt) computer code was used to assess the United States Nuclear Regulatory Commission's Standard Problem 1 (Edwards-O'Brien Blowdown Test). The RELAP5-3D standard installation problem based on the Edwards-O'Brien Blowdown Test was modified to model the appropriate initial conditions and to represent the proper location of the instruments present in the experiment. The results obtained using the modified model are significantly different from the original calculation indicating the need to model accurately the experimental conditions if an accurate assessment of the calculational model is to be obtained.

  11. Multibody dynamical modeling for spacecraft docking process with spring-damper buffering device: A new validation approach

    NASA Astrophysics Data System (ADS)

    Daneshjou, Kamran; Alibakhshi, Reza

    2018-01-01

    In the current manuscript, the process of spacecraft docking, one of the main risky operations in an on-orbit servicing mission, is modeled based on unconstrained multibody dynamics. A spring-damper buffering device is utilized here in the docking probe-cone system for micro-satellites. Because impact occurs inevitably during the docking process and the motion characteristics of multibody systems are remarkably affected by this phenomenon, a continuous contact force model needs to be considered. The spring-damper buffering device, which keeps the spacecraft stable in orbit when impact occurs, connects a base (cylinder) inserted in the chaser satellite to the end of the docking probe. Furthermore, by considering a revolute joint equipped with a torsional shock absorber between the base and the chaser satellite, the docking probe can experience both translational and rotational motions simultaneously. Although the spacecraft docking process with buffering mechanisms may be modeled by constrained multibody dynamics, this paper presents a simple and efficient formulation that eliminates the surplus generalized coordinates and solves the impact docking problem based on unconstrained Lagrangian mechanics. In an example problem, model verification is first accomplished by comparing the computed results with those recently reported in the literature. Second, the accuracy of the presented model is evaluated with a new alternative validation approach based on the constrained multibody problem. This proposed verification approach can be applied to indirectly solve constrained multibody problems with minimum effort. The time history of the impact force, the influence of system flexibility, and the physical interaction between the shock absorber and the penetration depth caused by impact are the issues followed in this paper. Third, the MATLAB/SIMULINK multibody dynamic analysis software is applied to build the impact docking model to validate the computed results and then investigate the trajectories of both satellites to achieve a successful capture process.

  12. Community-Based Rehabilitation (CBR): Problems and Possibilities.

    ERIC Educational Resources Information Center

    O'Toole, Brian

    1987-01-01

    The institution-based model for providing services to individuals with disabilities has limitations in both developing and developed countries. The community-based rehabilitation model was positively evaluated by the World Health Organization as an alternative approach, but the evaluation is questioned on methodological and philosophical grounds.…

  13. Constructing Self-Modeling Videos: Procedures and Technology

    ERIC Educational Resources Information Center

    Collier-Meek, Melissa A.; Fallon, Lindsay M.; Johnson, Austin H.; Sanetti, Lisa M. H.; Delcampo, Marisa A.

    2012-01-01

    Although widely recommended, evidence-based interventions are not regularly utilized by school practitioners. Video self-modeling is an effective and efficient evidence-based intervention for a variety of student problem behaviors. However, like many other evidence-based interventions, it is not frequently used in schools. As video creation…

  14. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
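
    The "tune rates to minimize a cost" idea can be shown on a toy analogue: a two-state channel C ⇌ O, with the cost comparing the model's stationary open probability and mean open dwell time against pseudo-experimental statistics. The target numbers, grid search, and summary statistics below are invented for illustration; the paper's actual cost compares full PDE-derived probability density functions.

```python
import numpy as np

# pseudo-experimental statistics: open probability and mean open dwell time
p_open_data, tau_open_data = 0.3, 5.0   # assumed targets (dwell time in ms)

def model_stats(k_co, k_oc):
    # two-state channel C <-> O: stationary open prob and mean open time
    return k_co / (k_co + k_oc), 1.0 / k_oc

def cost(k_co, k_oc):
    p, tau = model_stats(k_co, k_oc)
    return (p - p_open_data) ** 2 + ((tau - tau_open_data) / tau_open_data) ** 2

# crude grid search over the two rates (stand-in for a proper optimizer)
grid = np.linspace(0.01, 1.0, 200)
best = min(((cost(a, b), a, b) for a in grid for b in grid))
best_cost, k_co_fit, k_oc_fit = best
```

    For this two-state model both rates are identifiable from the two statistics; the paper's identifiability question is the analogous one for larger models and richer PDF data.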

  15. Research on NC laser combined cutting optimization model of sheet metal parts

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    The optimization problem for NC laser combined cutting of sheet metal parts was taken as the research object in this paper. The problem included two contents: combined packing optimization and combined cutting path optimization. In the problem of combined packing optimization, the method of “genetic algorithm + gravity center NFP + geometric transformation” was used to optimize the packing of sheet metal parts. In the problem of combined cutting path optimization, the mathematical model of cutting path optimization was established based on the parts cutting constraint rules of internal contour priority and cross cutting. The model played an important role in the optimization calculation of NC laser combined cutting.

  16. The Proposal of the Model for Developing Dispatch System for Nationwide One-Day Integrative Planning

    NASA Astrophysics Data System (ADS)

    Kim, Hyun Soo; Choi, Hyung Rim; Park, Byung Kwon; Jung, Jae Un; Lee, Jin Wook

    The problems of dispatch planning for container trucks are classified as pickup and delivery problems, which are highly complex because they must consider various real-world constraints. At present, however, planning is handled by a control system, so an automated planning system is required from the viewpoint of nationwide integrative planning. Therefore, the purpose of this study is to suggest a model for developing an automated dispatch system through a constraint satisfaction problem formulation and a meta-heuristic-based algorithm. In further study, a practical system will be developed and evaluated with respect to various results. This study suggests a model for research that addresses the increased complexity of the problem by considering various constraints not considered in earlier studies. However, a real-time monitoring function for vehicles and cargo, based on information technology, still needs to be added.

  17. The traveling salesman problem: a hierarchical model.

    PubMed

    Graham, S M; Joshi, A; Pizlo, Z

    2000-10-01

    Our review of prior literature on spatial information processing in perception, attention, and memory indicates that these cognitive functions involve similar mechanisms based on a hierarchical architecture. The present study extends the application of hierarchical models to the area of problem solving. First, we report results of an experiment in which human subjects were tested on a Euclidean traveling salesman problem (TSP) with 6 to 30 cities. The subjects' solutions were either optimal or near-optimal in length and were produced in a time that was, on average, a linear function of the number of cities. Next, the performance of the subjects is compared with that of five representative artificial intelligence and operations research algorithms that produce approximate solutions for Euclidean problems. None of these algorithms was found to be an adequate psychological model. Finally, we present a new algorithm for solving the TSP, which is based on a hierarchical pyramid architecture. The performance of this new algorithm is quite similar to the performance of the subjects.
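
    The authors' pyramid algorithm is not reproduced here, but the kind of operations-research baseline the study compares against can be sketched in a few lines: nearest-neighbor tour construction followed by 2-opt segment-reversal improvement on random Euclidean cities (the instance size and seed are arbitrary).

```python
import math
import random

random.seed(4)
cities = [(random.random(), random.random()) for _ in range(20)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# nearest-neighbor construction: always visit the closest unvisited city
unvisited = set(range(1, len(cities)))
tour = [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(cities[tour[-1]], cities[j]))
    tour.append(nxt)
    unvisited.remove(nxt)

# 2-opt improvement: reverse segments while doing so shortens the tour
improved = True
while improved:
    improved = False
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(cand) < tour_length(tour) - 1e-12:
                tour, improved = cand, True

length = tour_length(tour)
```

    Unlike the pyramid model, this local search has no hierarchical coarse-to-fine structure, which is one reason such algorithms fail as psychological models despite producing near-optimal tours.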

  18. Effects of Simulation With Problem-Based Learning Program on Metacognition, Team Efficacy, and Learning Attitude in Nursing Students: Nursing Care With Increased Intracranial Pressure Patient.

    PubMed

    Lee, Myung-Nam; Nam, Kyung-Dong; Kim, Hyeon-Young

    2017-03-01

    Nursing care for patients with central nervous system problems requires advanced professional knowledge and care skills. Nursing students are likely to have difficulty dealing with adult patients who have severe neurological problems in clinical practice. This study investigated the effects of an integrated simulation and problem-based learning program on the metacognition, team efficacy, and learning attitude of nursing students. A real scenario of a patient with increased intracranial pressure was simulated for the students. The results showed that this method was effective in improving the metacognitive ability of the students. Furthermore, we used this comprehensive model of simulation with problem-based learning to assess student satisfaction with the nursing major, interpersonal relationships, and the perceived importance of simulation-based education in relation to the effectiveness of the integrated program. The results can be used to improve the design of clinical practica and nursing education.

  19. Evaluation of Generation Alternation Models in Evolutionary Robotics

    NASA Astrophysics Data System (ADS)

    Oiso, Masashi; Matsumura, Yoshiyuki; Yasuda, Toshiyuki; Ohkura, Kazuhiro

    For efficient implementation of Evolutionary Algorithms (EAs) in a desktop grid computing environment, we propose a new generation alternation model called Grid-Oriented-Deletion (GOD), based on a comparison with conventional techniques. In previous research, generation alternation models have generally been evaluated using test functions; however, their exploration performance on real problems such as Evolutionary Robotics (ER) has not been made clear. We therefore investigate the relationship between the exploration performance of an EA on an ER problem and its generation alternation model. We applied four generation alternation models to Evolutionary Multi-Robotics (EMR), a package-pushing problem, to investigate their exploration performance. The results show that GOD is more effective than the other conventional models.

  20. Variational approach to direct and inverse problems of atmospheric pollution studies

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2016-04-01

    We present the development of a variational approach for solving interrelated problems of atmospheric hydrodynamics and chemistry concerning air pollution transport and transformations. The proposed approach allows us to carry out complex studies of different-scale physical and chemical processes using the methods of direct and inverse modeling [1-3]. We formulate the problems of risk/vulnerability and uncertainty assessment, sensitivity studies, variational data assimilation procedures [4], etc. A computational technology for constructing consistent mathematical models and methods of their numerical implementation is based on the variational principle in the weak constraint formulation, specifically designed to account for uncertainties in models and observations. Algorithms for direct and inverse modeling are designed with the use of global and local adjoint problems. Implementing the idea of adjoint integrating factors provides unconditionally monotone and stable discrete-analytic approximations for convection-diffusion-reaction problems [5,6]. The general framework is applied to the direct and inverse problems for the models of transport and transformation of pollutants in Siberian and Arctic regions. The work has been partially supported by the RFBR grant 14-01-00125 and RAS Presidium Program I.33P. References: 1. V. Penenko, A. Baklanov, E. Tsvetova and A. Mahura. Direct and inverse problems in a variational concept of environmental modeling // Pure and Applied Geophysics, 2012, v. 169, pp. 447-465. 2. V.V. Penenko, E.A. Tsvetova, and A.V. Penenko. Development of variational approach for direct and inverse problems of atmospheric hydrodynamics and chemistry // Izvestiya, Atmospheric and Oceanic Physics, 2015, Vol. 51, No. 3, pp. 311-319, DOI: 10.1134/S0001433815030093. 3. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Methods based on the joint use of models and observational data in the framework of variational approach to forecasting weather and atmospheric composition quality // Russian Meteorology and Hydrology, 2015, V. 40, Issue 6, pp. 365-373, DOI: 10.3103/S1068373915060023. 4. A.V. Penenko and V.V. Penenko. Direct data assimilation method for convection-diffusion models based on splitting scheme // Computational Technologies, 2014, 19(4), pp. 69-83. 5. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, 2014, V. 67, Issue 12, pp. 2240-2256, DOI: 10.1016/j.camwa.2014.04.004. 6. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220, DOI: 10.1134/S199542391303004X.

  1. Tracing for the problem-solving ability in advanced calculus class based on modification of SAVI model at Universitas Negeri Semarang

    NASA Astrophysics Data System (ADS)

    Pujiastuti, E.; Waluya, B.; Mulyono

    2018-03-01

    Experts have offered many ways of solving problems; the authors combine several of them as a form of novelty. Among the learning models expected to support the growth of problem-solving skills is SAVI. The purpose was to obtain trace results from the analysis of students' problem-solving ability in the Dual Integral material. The research method was a qualitative approach; its activities included tests filled with mathematical connections, observation, interviews, FGD, and triangulation. The results were: (1) some students still experienced difficulties in solving the problems; (2) the application of the modified SAVI learning model was effective in supporting the growth of problem-solving abilities; (3) regarding students' strength in problem solving, two students were in the excellent category, three in the good category, and one in the medium category.

  2. Beyond the Sparsity-Based Target Detector: A Hybrid Sparsity and Statistics Based Detector for Hyperspectral Images.

    PubMed

    Du, Bo; Zhang, Yuxiang; Zhang, Liangpei; Tao, Dacheng

    2016-08-18

    Hyperspectral images offer great potential for target detection; however, they also introduce new challenges, so hyperspectral target detection should be treated as a distinct problem and modeled differently. Many classical detectors have been proposed based on the linear mixing model and the sparsity model. However, the former cannot deal well with spectral variability given limited endmembers, and the latter usually treats target detection as a simple classification problem and pays little attention to the low target probability. Can we then find an efficient way to exploit both the high-dimensional features behind hyperspectral images and the limited target information to extract small targets? This paper proposes a novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which effectively deals with the above two problems. The proposed algorithm designs a hypothesis-specific dictionary based on the prior hypotheses for the test pixel, which avoids an imbalanced number of training samples for a class-specific dictionary. Then, a purification process is applied to the background training samples in order to construct an effective competition between the two hypotheses. Next, a sparse-representation-based binary hypothesis model with additive Gaussian noise is proposed to represent the image. Finally, a generalized likelihood ratio test is performed to obtain a more robust detection decision than reconstruction-residual-based detection methods. Extensive experimental results on three hyperspectral datasets confirm that the proposed HSSD algorithm clearly outperforms the state-of-the-art target detectors.
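
    A schematic sketch of the binary-hypothesis competition described above, with ordinary least squares standing in for the sparse solver and toy dictionaries (names and data are hypothetical, not the paper's HSSD):

```python
import numpy as np

def detector_statistic(x, D_bg, D_union):
    """Compare the representation residual of pixel x under a
    background-only dictionary (H0) with the residual under the
    background-plus-target dictionary (H1). Least squares stands in
    for the sparse solver used in the actual HSSD algorithm."""
    def residual(D):
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        return np.linalg.norm(x - D @ coef)
    r0, r1 = residual(D_bg), residual(D_union)
    return r0 - r1  # large when the target atoms explain x much better

# Toy 3-band example: two background atoms, one target atom.
D_bg = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
D_union = np.hstack([D_bg, np.array([[0.0], [0.0], [1.0]])])
```

    A target-like pixel yields a large statistic because only the union dictionary can represent it; a background pixel yields a statistic near zero because both hypotheses fit it equally well.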

  3. Conceptualizing a model: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-2.

    PubMed

    Roberts, Mark; Russell, Louise B; Paltiel, A David; Chambers, Michael; McEwan, Phil; Krahn, Murray

    2012-01-01

    The appropriate development of a model begins with understanding the problem that is being represented. The aim of this article is to provide a series of consensus-based best practices regarding the process of model conceptualization. For the purpose of this series of papers, the authors consider the development of models whose purpose is to inform medical decisions and health-related resource allocation questions. They specifically divide the conceptualization process into two distinct components: the conceptualization of the problem, which converts knowledge of the health care process or decision into a representation of the problem, followed by the conceptualization of the model itself, which matches the attributes and characteristics of a particular modeling type to the needs of the problem being represented. Recommendations are made regarding the structure of the modeling team, agreement on the statement of the problem, the structure, perspective, and target population of the model, and the interventions and outcomes represented. Best practices relating to the specific characteristics of model structure, and which characteristics of the problem might be most easily represented in a specific modeling method, are presented. Each section contains a number of recommendations that were iterated among the authors and the wider modeling task force jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making.

  4. Expected value based fuzzy programming approach to solve integrated supplier selection and inventory control problem with fuzzy demand

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Sunarsih; Kartono

    2018-01-01

    In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for an integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected-value-based fuzzy programming. Numerical examples are performed to evaluate the model. From the results, the optimal amount of each product to be purchased from each supplier in each time period and the optimal amount of each product to be stored in inventory in each time period were determined with minimum total cost, and the inventory level was sufficiently close to the reference level.
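
    As a hedged illustration of the expected-value step: the expected value of a triangular fuzzy number (a, b, c) is commonly taken as (a + 2b + c)/4, and the defuzzified demand can then feed a deterministic allocation. The greedy split below is a toy stand-in, not the paper's quadratic program, and all numbers are hypothetical.

```python
def triangular_expected_value(a, b, c):
    """Expected value of a triangular fuzzy number (a, b, c):
    the average of the lower and upper alpha-cut means, (a + 2b + c) / 4."""
    return (a + 2 * b + c) / 4.0

def cheapest_split(demand, unit_costs, capacities):
    """Illustrative only: satisfy the defuzzified demand by buying from
    the cheapest supplier first, up to each supplier's capacity."""
    order = sorted(range(len(unit_costs)), key=lambda i: unit_costs[i])
    remaining, purchase = demand, [0.0] * len(unit_costs)
    for i in order:
        qty = min(remaining, capacities[i])
        purchase[i] = qty
        remaining -= qty
    return purchase
```

    For fuzzy demand (80, 100, 140) the expected value is 105, which the greedy rule then splits across suppliers by unit cost.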

  5. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
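
    The propagation step that replaces Monte Carlo can be sketched in a few lines, assuming the sample locations and probabilities have already been chosen offline by moment matching (a generic illustration, not the paper's framework):

```python
def srom_mean(samples, weights, model):
    """Propagate uncertainty with a stochastic reduced order model:
    the input random variable is represented by a few samples x_k with
    probabilities p_k (found offline by moment matching), and the mean
    output is a weighted sum of independent deterministic model runs."""
    assert abs(sum(weights) - 1.0) < 1e-12  # probabilities must sum to 1
    return sum(p * model(x) for x, p in zip(samples, weights))
```

    Because each term is an independent call to a deterministic model, existing optimization and modeling tools can be reused unchanged, which is the computational advantage the abstract describes.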

  6. Teaching Psychosomatic Medicine Using Problem-Based Learning and Role-Playing

    ERIC Educational Resources Information Center

    Heru, Alison M.

    2011-01-01

    Objective: Problem-based learning (PBL) has been implemented in medical education world-wide. Despite its popularity, it has not been generally considered useful for residency programs. The author presents a model for the implementation of PBL in residency programs. Method: The author presents a description of a PBL curriculum for teaching…

  7. Scaffolding Teachers Integrate Social Media into a Problem-Based Learning Approach?

    ERIC Educational Resources Information Center

    Buus, Lillian

    2012-01-01

    At Aalborg University (AAU) we are known to work with problem-based learning (PBL) in a particular way designated "The Aalborg PBL model." In PBL the focus is on participant control, knowledge sharing, collaboration among participants, which makes it interesting to consider the integration of social media in the learning that takes…

  8. Students' Usability Evaluation of a Web-Based Tutorial Program for College Biology Problem Solving

    ERIC Educational Resources Information Center

    Kim, H. S.; Prevost, L.; Lemons, P. P.

    2015-01-01

    The understanding of core concepts and processes of science in solving problems is important to successful learning in biology. We have designed and developed a Web-based, self-directed tutorial program, "SOLVEIT," that provides various scaffolds (e.g., prompts, expert models, visual guidance) to help college students enhance their…

  9. Effects of Online Problem-Based Learning on Teachers' Technology Perceptions and Planning

    ERIC Educational Resources Information Center

    Nelson, Erik T.

    2007-01-01

    The purpose of this qualitative study was to examine the ways in which the experience of learning through an online problem-based learning (PBL) model affect teachers' perceptions of integrating technology. Participant reflections were collected and analyzed to identify the pros, cons, and challenges of learning technology integration through this…

  10. Measuring Teachers' Learning from a Problem-Based Learning Approach to Professional Development in Science Education

    ERIC Educational Resources Information Center

    Weizman, Ayelet; Covitt, Beth A.; Koehler, Matthew J.; Lundeberg, Mary A.; Oslund, Joy A.; Low, Mark R.; Eberhardt, Janet; Urban-Lurain, Mark

    2008-01-01

    In this study we measured changes in science teachers' conceptual science understanding (content knowledge) and pedagogical content knowledge (PCK) while participating in a problem-based learning (PBL) model of professional development. Teachers participated in a two-week long workshop followed by nine monthly meetings during one academic year…

  11. Zoology Students' Experiences of Collaborative Enquiry in Problem-Based Learning

    ERIC Educational Resources Information Center

    Harland, Tony

    2002-01-01

    This paper presents an action-research case study that focuses on experiences of collaboration in a problem-based learning (PBL) course in Zoology. Our PBL model was developed as a research activity in partnership with a commercial organisation. Consequently, learning was grounded in genuine situations of practice in which a high degree of…

  12. PBL-SEE: An Authentic Assessment Model for PBL-Based Software Engineering Education

    ERIC Educational Resources Information Center

    dos Santos, Simone C.

    2017-01-01

    The problem-based learning (PBL) approach has been successfully applied to teaching software engineering thanks to its principles of group work, learning by solving real problems, and learning environments that match the market realities. However, the lack of well-defined methodologies and processes for implementing the PBL approach represents a…

  13. Deep Learning towards Expertise Development in a Visualization-Based Learning Environment

    ERIC Educational Resources Information Center

    Yuan, Bei; Wang, Minhong; Kushniruk, Andre W.; Peng, Jun

    2017-01-01

    With limited problem-solving capability and practical experience, novices have difficulties developing expert-like performance. It is important to make the complex problem-solving process visible to learners and provide them with necessary help throughout the process. This study explores the design and effects of a model-based learning approach…

  14. From primary care to public health: using Problem-based Learning and the ecological model to teach public health to first year medical students.

    PubMed

    Hoover, Cora R; Wong, Candice C; Azzam, Amin

    2012-06-01

    We investigated whether a public health-oriented Problem-Based Learning case presented to first-year medical students conveyed 12 "Population Health Competencies for Medical Students," as recommended by the Association of American Medical Colleges and the Regional Medicine-Public Health Education Centers. A public health-oriented Problem-Based Learning case guided by the ecological model paradigm was developed and implemented among two groups of 8 students at the University of California, Berkeley-UCSF Joint Medical Program, in the Fall of 2010. Using directed content analysis, student-generated written reports were coded for the presence of the 12 population health content areas. Students generated a total of 29 reports, of which 20 (69%) contained information relevant to at least one of the 12 population health competencies. Each of the 12 content areas was addressed by at least one report. As physicians-in-training prepare to confront the challenges of integrating prevention and population health with clinical practice, Problem-Based Learning is a promising tool to enhance medical students' engagement with public health.

  15. Merging universal and indicated prevention programs: the Fast Track model. Conduct Problems Prevention Research Group.

    PubMed

    2000-01-01

    Fast Track is a multisite, multicomponent preventive intervention for young children at high risk for long-term antisocial behavior. Based on a comprehensive developmental model, this intervention includes a universal-level classroom program plus social-skill training, academic tutoring, parent training, and home visiting to improve competencies and reduce problems in a high-risk group of children selected in kindergarten. The theoretical principles and clinical strategies utilized in the Fast Track Project are described to illustrate the interplay between basic developmental research, the understanding of risk and protective factors, and a research-based model of preventive intervention that integrates universal and indicated models of prevention.

  16. A chaotic model of sustaining attention problem in attention deficit disorder

    NASA Astrophysics Data System (ADS)

    Baghdadi, G.; Jafari, S.; Sprott, J. C.; Towhidkhah, F.; Hashemi Golpayegani, M. R.

    2015-01-01

    The problem of keeping an attention level is one of the common symptoms of attention deficit disorder. Dopamine deficiency is introduced as one of the causes of this disorder. Based on some physiological facts about the attention control mechanism and chaos intermittency, a behavioral model is presented in this paper. This model represents the problem of undesired alternation of attention level, and can also suggest different valuable predictions about a possible cause of attention deficit disorder. The proposed model reveals that there is a possible interaction between different neurotransmitters which help the individual to adaptively inhibit the attention switching over time. The result of this study can be used to examine and develop a new practical and more appropriate treatment for the problem of sustaining attention.
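
    Chaos intermittency, the mechanism invoked above, can be illustrated with the standard type-I intermittency normal form (a generic sketch, not the authors' attention model): long laminar phases punctuated by fast bursts loosely resemble undesired alternations of an attention level.

```python
def intermittent_series(eps=1e-3, x0=-0.05, steps=2000):
    """Type-I intermittency normal form x -> x + x^2 + eps:
    the state creeps slowly through a narrow channel near x = 0
    (laminar phase), escapes in a fast burst, and is reinjected
    near the channel entrance."""
    xs = []
    x = x0
    for _ in range(steps):
        xs.append(x)
        x = x + x * x + eps
        if x > 1.0:  # burst finished: reinject near the channel
            x = x0
    return xs
```

    Smaller eps produces longer laminar phases, mimicking a parameter (such as a neurotransmitter level) that controls how long attention is sustained before switching.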

  17. Problem based learning with scaffolding technique on geometry

    NASA Astrophysics Data System (ADS)

    Bayuningsih, A. S.; Usodo, B.; Subanti, S.

    2018-05-01

    Geometry as one of the branches of mathematics has an important role in the study of mathematics. This research aims to explore the effectiveness of Problem Based Learning (PBL) with scaffolding technique viewed from self-regulation learning toward students’ achievement learning in mathematics. The research data obtained through mathematics learning achievement test and self-regulated learning (SRL) questionnaire. This research employed quasi-experimental research. The subjects of this research are students of the junior high school in Banyumas Central Java. The result of the research showed that problem-based learning model with scaffolding technique is more effective to generate students’ mathematics learning achievement than direct learning (DL). This is because in PBL model students are more able to think actively and creatively. The high SRL category student has better mathematic learning achievement than middle and low SRL categories, and then the middle SRL category has better than low SRL category. So, there are interactions between learning model with self-regulated learning in increasing mathematic learning achievement.

  18. Output-feedback control of combined sewer networks through receding horizon control with moving horizon estimation

    NASA Astrophysics Data System (ADS)

    Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela

    2015-10-01

    An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides a means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water level measurements, and the resulting control loop has been extensively simulated to assess the system performance under different measurement availability scenarios and rain events. All simulations have been carried out using a detailed, physically based model of a real case-study network as virtual reality.
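
    The receding-horizon idea (solve an open-loop problem over a finite horizon, apply only the first control, then re-solve from the new state) can be sketched on a toy scalar model; this is a generic illustration, not the authors' sewer-network formulation, and the model and limits are hypothetical.

```python
def receding_horizon(x0, setpoint, horizon=5, steps=20):
    """Receding-horizon control of a toy scalar volume x[k+1] = x[k] + u[k].

    At each step a tiny open-loop problem over `horizon` is solved in
    closed form (spread the remaining change evenly over the horizon,
    subject to a rate limit), only the first control is applied, and
    the problem is re-solved from the new state."""
    x = x0
    traj = [x]
    for _ in range(steps):
        u = max(-1.0, min(1.0, (setpoint - x) / horizon))
        x = x + u  # apply only the first move, then re-plan
        traj.append(x)
    return traj
```

    Because the plan is recomputed at every step from the latest state (or, in the full RHC/MHE loop, from the latest state estimate), the controller automatically corrects for disturbances such as unanticipated rain inflow.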

  19. Innovative mathematical modeling in environmental remediation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, Gour T.; National Central Univ.; Univ. of Central Florida

    2013-05-01

    There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been mainly used in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration; it showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment. The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation.

  20. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher-mode Rayleigh-wave group velocities.
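
    The alternating scheme described above can be illustrated on a toy consensus problem. The sketch below uses scalar quadratic component objectives and an ADMM-style augmented-Lagrangian update (a generic illustration, not the authors' seismic implementation):

```python
def consensus_admm(targets, rho=1.0, iters=100):
    """Consensus ADMM (an augmented-Lagrangian scheme) for
    min over m of sum_i (m_i - a_i)^2  subject to all m_i being equal.

    Each component problem is solved separately in closed form; the
    multipliers u_i steer the component solutions m_i toward a common
    model z that solves the full problem."""
    n = len(targets)
    m = [0.0] * n
    u = [0.0] * n
    z = 0.0
    for _ in range(iters):
        # component step: minimize (m_i - a_i)^2 + (rho/2)(m_i - z + u_i)^2
        m = [(2 * a + rho * (z - ui)) / (2 + rho) for a, ui in zip(targets, u)]
        # consensus step: the common model is the average of m_i + u_i
        z = sum(mi + ui for mi, ui in zip(m, u)) / n
        # multiplier update: accumulate the disagreement
        u = [ui + mi - z for ui, mi in zip(u, m)]
    return z
```

    With targets 1 and 3 the component problems pull toward different models, but the multiplier updates drive both toward the full-problem solution 2, mirroring how the body-wave and surface-wave subproblems are steered toward one Earth model.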

  1. A New Perspective on Modeling Groundwater-Driven Health Risk With Subjective Information

    NASA Astrophysics Data System (ADS)

    Ozbek, M. M.

    2003-12-01

    Fuzzy rule-based systems provide an efficient environment for the modeling of expert information in the context of risk management for groundwater contamination problems. In general, their use in the form of conditional pieces of knowledge has been either as a tool for synthesizing control laws from data (i.e., conjunction-based models), or in a knowledge representation and reasoning perspective in Artificial Intelligence (i.e., implication-based models), where only the latter may lead to coherence problems (e.g., input data that leads to logical inconsistency when added to the knowledge base). We implement a two-fold extension to an implication-based groundwater risk model (Ozbek and Pinder, 2002) including: 1) the implementation of sufficient conditions for a coherent knowledge base, and 2) the interpolation of expert statements to supplement gaps in knowledge. The original model assumes statements of public health professionals for the characterization of the exposed individual and the relation of dose and pattern of exposure to its carcinogenic effects. We demonstrate the utility of the extended model in that it: 1) identifies inconsistent statements and establishes coherence in the knowledge base, and 2) minimizes the burden of knowledge elicitation from the experts by utilizing existing knowledge in an optimal fashion.

  2. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization, where decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  3. Assessing the detail needed to capture rainfall-runoff dynamics with physics-based hydrologic response simulation

    USGS Publications Warehouse

    Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.

    2011-01-01

    Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. 
Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.

  4. Analysis of students’ creative thinking level in problem solving based on national council of teachers of mathematics

    NASA Astrophysics Data System (ADS)

    Hobri; Suharto; Rifqi Naja, Ahmad

    2018-04-01

    This research aims to determine students' creative thinking level in problem solving based on NCTM indicators in the subject of functions. The research type is descriptive with a qualitative approach. Data were collected through tests and interviews. The creative thinking level in problem solving based on NCTM indicators consists of: (1) make a mathematical model from a contextual problem and solve it; (2) solve the problem using various possible alternatives; (3) find new alternative(s) to solve the problem; (4) determine the most efficient and effective alternative for the problem; (5) review and correct mistake(s) in the problem-solving process. The results showed that 10 students were categorized at the very satisfying level, 23 at the satisfying level, and 1 at the less satisfying level. Students at the very satisfying level met all indicators; students at the satisfying level met the first, second, fourth, and fifth indicators; students at the less satisfying level met only the first and fifth indicators.

  5. Micromechanics based phenomenological damage modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muju, S.; Anderson, P.M.; Popelar, C.H.

    A model is developed for the study of process zone effects on dominant cracks. The model proposed here is intended to bridge the gap between micromechanics-based and phenomenological models for the class of problems involving microcracking, transforming inclusions, etc. It is based on the representation of localized eigenstrains using dislocation dipoles. The eigenstrain (fitting strain) is represented as the strength (Burgers vector) of the dipole, which obeys a certain phenomenological constitutive relation.

  6. Highway extraction from high resolution aerial photography using a geometric active contour model

    NASA Astrophysics Data System (ADS)

    Niu, Xutong

    Highway extraction and vehicle detection are two of the most important steps in traffic-flow analysis from multi-frame aerial photographs. The traditional method of deriving traffic flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming. This research presents a new framework for semi-automatic highway extraction. The basis of the new framework is an improved geometric active contour (GAC) model, which transforms the propagation of regular curves into the minimization of an objective function. The implementation of curve propagation is based on level set theory. By using an implicit representation of a two-dimensional curve, a level set approach can handle topological changes naturally, and the output is unaffected by different initial positions of the curve. However, the original GAC model, on which the new model is based, only incorporates boundary information into the curve propagation process. An error-producing phenomenon called leakage is inevitable wherever there is an uncertain weak edge. In this research, region-based information is added as a constraint to the original GAC model, thereby giving the proposed method the ability to integrate both boundary and region-based information during curve propagation. Adding the region-based constraint eliminates the leakage problem. This dissertation applies the proposed augmented GAC model to the problem of highway extraction from high-resolution aerial photography. First, an optimized stopping criterion is designed and used in the implementation of the GAC model, effectively saving processing time and computation. Second, a seed point propagation framework is designed and implemented. This framework incorporates highway extraction, tracking, and linking into one procedure.
A seed point is usually placed at an end node of highway segments close to the boundary of the image or at a position where possible blocking may occur, such as at an overpass bridge or near vehicle crowds. These seed points can be automatically propagated throughout the entire highway network. During the process, road center points are also extracted, which provides a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction from a large orthophoto mosaic. In the process, vehicles on the extracted highway were detected with an 83% success rate.

  7. Toward Theory-Based Instruction in Scientific Problem Solving.

    ERIC Educational Resources Information Center

    Heller, Joan I.; And Others

    Several empirical and theoretical analyses related to scientific problem-solving are reviewed, including: detailed studies of individuals at different levels of expertise, and computer models simulating some aspects of human information processing during problem solving. Analysis of these studies has revealed many facets about the nature of the…

  8. The Nature and Predictors of Undercontrolled and Internalizing Problem Trajectories across Early Childhood

    ERIC Educational Resources Information Center

    Mathiesen, Kristin S.; Sanson, Ann; Stoolmiller, Mike; Karevold, Evalill

    2009-01-01

    Using growth curve modeling, trajectories of undercontrolled (oppositional, irritable, inattentive and overactive behaviors) and internalizing (worried, sad and fearful) problems from 18 months to 4.5 years were studied in a population based sample of 921 Norwegian children. At the population level, undercontrolled problems decreased and…

  9. A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project

    NASA Astrophysics Data System (ADS)

    Polydorides, Nick; Lionheart, William R. B.

    2002-12-01

    The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al. in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist development, we have built and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.

  10. Children's Solution Processes in Elementary Arithmetic Problems: Analysis and Improvement. Report No. 19.

    ERIC Educational Resources Information Center

    De Corte, Erik; Verschaffel, Lieven

    Design and results of an investigation attempting to analyze and improve children's solution processes in elementary addition and subtraction problems are described. As background for the study, a conceptual model was developed based on previous research. One dimension of the model relates to the characteristics of the tasks (numerical versus word…

  11. Using a Problem-Solving/Decision-Making Model to Evaluate School Lunch Salad Bars

    ERIC Educational Resources Information Center

    Johnson, Carolyn C.; Spruance, Lori Andersen; O'Malley, Keelia; Begalieva, Maya; Myers, Leann

    2017-01-01

    Purpose/Objectives: Evaluation of school-based activities is a high priority for school personnel. Nutrition activities, such as salad bars (SBs) incorporated into school lunchrooms, may increase children's consumption of low-energy, high fiber diets. The purpose of this paper is to describe a problem-solving/decision-making model and demonstrate…

  12. Integrated Thermal Response Modeling System For Hypersonic Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, F. S.; Partridge, Harry (Technical Monitor)

    2000-01-01

    We describe an extension of the Markov decision process model in which a continuous time dimension is included in the state space. This allows for the representation and exact solution of a wide range of problems in which transitions or rewards vary over time. We examine problems based on route planning with public transportation and telescope observation scheduling.

  13. Determination of thermophysical characteristics of solid materials by electrical modelling of the solutions to the inverse problems in nonsteady heat conduction

    NASA Technical Reports Server (NTRS)

    Kozdoba, L. A.; Krivoshei, F. A.

    1985-01-01

    The solution of the inverse problem of nonsteady heat conduction is discussed, based on finding the heat conduction coefficient and the volumetric specific heat capacity. These coefficients are included in the equation used for the electrical model of this phenomenon.

  14. Agent-Based Modeling of Collaborative Problem Solving. Research Report. ETS RR-16-27

    ERIC Educational Resources Information Center

    Bergner, Yoav; Andrews, Jessica J.; Zhu, Mengxiao; Gonzales, Joseph E.

    2016-01-01

    Collaborative problem solving (CPS) is a critical competency in a variety of contexts, including the workplace, school, and home. However, only recently have assessment and curriculum reformers begun to focus to a greater extent on the acquisition and development of CPS skill. One of the major challenges in psychometric modeling of CPS is…

  15. Formative feedback and scaffolding for developing complex problem solving and modelling outcomes

    NASA Astrophysics Data System (ADS)

    Frank, Brian; Simper, Natalie; Kaupp, James

    2018-07-01

    This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated into: task and process-level feedback on scaffolded tasks in-class, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.

  16. Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving

    PubMed Central

    Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.

    2016-01-01

    We conducted a two-group longitudinal, partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem-solving skill. The intervention is based on the Circumplex Model and Social Problem Solving Theory. The Circumplex Model posits that balanced families, that is, those characterized by high cohesion, flexibility, and open communication, function best. Social Problem Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large-magnitude group effects for selected scales for youth and dyads, portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. PMID:26936844

  17. Stability of stationary solutions for inflow problem on the micropolar fluid model

    NASA Astrophysics Data System (ADS)

    Yin, Haiyan

    2017-04-01

    In this paper, we study the asymptotic behavior of solutions to the initial boundary value problem for the micropolar fluid model in a half-line R+:=(0,∞). We prove that the corresponding stationary solutions of small amplitude to the inflow problem for the micropolar fluid model are time asymptotically stable under small H1 perturbations in both the subsonic and degenerate cases. Compared with the Navier-Stokes equations, which lack the microrotation velocity, this quantity introduces additional difficulties. The proof of asymptotic stability is based on the basic energy method.

  18. Scenario-based, closed-loop model predictive control with application to emergency vehicle scheduling

    NASA Astrophysics Data System (ADS)

    Goodwin, Graham. C.; Medioli, Adrian. M.

    2013-08-01

    Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems. We motivate and illustrate the ideas via the problem of fluid deployment of ambulance resources.

  19. Optimization Model for Capacity Management and Bed Scheduling for Hospital

    NASA Astrophysics Data System (ADS)

    Sitepu, Suryati; Mawengkang, Herman; Husein, Ismail

    2018-01-01

    Hospitals are very important institutions providing health care for people, and it is not surprising that nowadays the demand for hospital services is increasing. However, due to the rising cost of healthcare services, hospitals need to consider efficiencies in order to overcome these two problems. This paper deals with an integrated strategy of staff capacity management and bed allocation planning to tackle these problems. Mathematically, the strategy can be modeled as an integer linear programming problem. We solve the model using a direct neighborhood search approach, based on the notion of superbasic variables.
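
    A direct neighborhood search of the kind described above can be sketched on a toy bed-allocation instance: start from a feasible integer allocation and repeatedly apply a bed move between wards that lowers cost. The demands, penalty weights, and holding cost below are invented, and this plain greedy descent is only a stand-in for the paper's superbasic-variable scheme.

```python
DEMAND = [12, 7, 9]        # expected patients per ward (invented)
COST = [1.0, 1.5, 2.0]     # penalty per unit of unmet demand per ward

def cost(alloc):
    """Shortage penalty plus a small holding cost per staffed bed."""
    shortage = sum(w * max(0, d - a) for w, d, a in zip(COST, DEMAND, alloc))
    return shortage + 0.1 * sum(alloc)

def neighborhood_search(alloc):
    """Greedy descent over moves of one bed between two wards."""
    alloc = list(alloc)
    improved = True
    while improved:
        improved = False
        base = cost(alloc)
        for i in range(len(alloc)):
            for j in range(len(alloc)):
                if i == j or alloc[i] == 0:
                    continue
                trial = list(alloc)
                trial[i] -= 1   # take one bed from ward i
                trial[j] += 1   # give it to ward j
                if cost(trial) < base:
                    alloc, base, improved = trial, cost(trial), True
    return alloc

best_alloc = neighborhood_search([8, 8, 8])
```

    Because every move conserves the total number of beds, a capacity constraint on the total is maintained automatically; the search only redistributes shortage toward the ward with the cheapest penalty.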

  20. Text Summarization Model based on Maximum Coverage Problem and its Variant

    NASA Astrophysics Data System (ADS)

    Takamura, Hiroya; Okumura, Manabu

    We discuss text summarization in terms of the maximum coverage problem and its variant. To solve the optimization problem, we applied several decoding algorithms, including some never before used in this summarization formulation, such as a greedy algorithm with a performance guarantee, a randomized algorithm, and a branch-and-bound method, and conducted comparative experiments. On the basis of the experimental results, we also augment the summarization model so that it takes into account relevance to the document cluster. Through experiments, we show that the augmented model is at least comparable to the best-performing method of DUC'04.
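
    The greedy algorithm with a performance guarantee mentioned above can be sketched for budgeted maximum coverage: repeatedly pick the sentence with the best ratio of newly covered concepts to length until the budget is exhausted. The toy sentences and budget below are invented for illustration.

```python
def greedy_summary(sentences, budget):
    """Pick sentences maximizing covered concepts under a length budget.

    sentences: list of (length, set_of_concepts) pairs.
    Returns the sorted indices of the chosen sentences.
    """
    covered, chosen, used = set(), [], 0
    while True:
        best, best_gain = None, 0.0
        for i, (length, concepts) in enumerate(sentences):
            if i in chosen or used + length > budget:
                continue
            gain = len(concepts - covered) / length  # new concepts per unit length
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        length, concepts = sentences[best]
        chosen.append(best)
        used += length
        covered |= concepts
    return sorted(chosen)

# toy "sentences": (length, set of concepts the sentence covers)
sents = [(3, {"a", "b"}), (2, {"b", "c"}), (4, {"a", "b", "c", "d"}), (1, {"d"})]
summary = greedy_summary(sents, budget=5)
```

    For the budgeted variant, this kind of cost-benefit greedy selection (combined with the best single sentence) is what carries the constant-factor approximation guarantee.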

  1. A bottom-up approach to the strong CP problem

    NASA Astrophysics Data System (ADS)

    Diaz-Cruz, J. L.; Hollik, W. G.; Saldana-Salazar, U. J.

    2018-05-01

    The strong CP problem is one of many puzzles in the theoretical description of elementary particle physics that still lacks an explanation. While top-down solutions to that problem usually comprise new symmetries or fields or both, we want to present a rather bottom-up perspective. The main problem seems to be how to achieve small CP violation in the strong interactions despite the large CP violation in weak interactions. In this paper, we show that with minimal assumptions on the structure of mass (Yukawa) matrices, they do not contribute to the strong CP problem and thus we can provide a pathway to a solution of the strong CP problem within the structures of the Standard Model and no extension at the electroweak scale is needed. However, to address the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored. Though we refrain from an explicit UV completion of the Standard Model, we provide a simple requirement for such models not to show a strong CP problem by construction.

  2. New Ideas on the Design of the Web-Based Learning System Oriented to Problem Solving from the Perspective of Question Chain and Learning Community

    ERIC Educational Resources Information Center

    Zhang, Yin; Chu, Samuel K. W.

    2016-01-01

    In recent years, a number of models concerning problem solving systems have been put forward. However, many of them stress technology and neglect research into problem solving itself, especially the learning mechanisms related to problem solving. In this paper, we analyze the learning mechanism of problem solving, and propose that when…

  3. Low frequency full waveform seismic inversion within a tree based Bayesian framework

    NASA Astrophysics Data System (ADS)

    Ray, Anandaroop; Kaplan, Sam; Washbourne, John; Albertin, Uwe

    2018-01-01

    Limited illumination, insufficient offset, noisy data and poor starting models can pose challenges for seismic full waveform inversion. We present an application of a tree based Bayesian inversion scheme which attempts to mitigate these problems by accounting for data uncertainty while using a mildly informative prior about subsurface structure. We sample the resulting posterior model distribution of compressional velocity using a trans-dimensional (trans-D) or Reversible Jump Markov chain Monte Carlo method in the wavelet transform domain of velocity. This allows us to attain rapid convergence to a stationary distribution of posterior models while requiring a limited number of wavelet coefficients to define a sampled model. Two synthetic, low frequency, noisy data examples are provided. The first example is a simple reflection + transmission inverse problem, and the second uses a scaled version of the Marmousi velocity model, dominated by reflections. Both examples are initially started from a semi-infinite half-space with incorrect background velocity. We find that the trans-D tree based approach together with parallel tempering for navigating rugged likelihood (i.e. misfit) topography provides a promising, easily generalized method for solving large-scale geophysical inverse problems which are difficult to optimize, but where the true model contains a hierarchy of features at multiple scales.

  4. Supervised guiding long-short term memory for image caption generation based on object classes

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan

    2018-03-01

    The present models of image caption generation have the problems of image visual semantic information attenuation and errors in guidance information. In order to solve these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance information, to guide the caption generation. To acquire the text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through the back-propagation of the guiding loss. Complementing guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model can reduce the impact of mismatched words on the caption generation. We test our model on the MSCOCO2014 dataset, and obtain better performance than the state-of-the-art models.

  5. Hypovigilance Detection for UCAV Operators Based on a Hidden Markov Model

    PubMed Central

    Kwon, Namyeon; Shin, Yongwook; Ryo, Chuh Yeop; Park, Jonghun

    2014-01-01

    With the advance of military technology, the number of unmanned combat aerial vehicles (UCAVs) has rapidly increased. However, it has been reported that the accident rate of UCAVs is much higher than that of manned combat aerial vehicles. One of the main reasons for the high accident rate of UCAVs is the hypovigilance problem, which refers to the decrease in vigilance levels of UCAV operators while maneuvering. In this paper, we propose hypovigilance detection models for UCAV operators based on EEG signals to minimize the number of occurrences of hypovigilance. To enable detection, we apply hidden Markov models (HMMs): two HMMs represent an operator's dual states, normal vigilance and hypovigilance, and for each operator the HMMs are trained as a detection model. To evaluate the efficacy and effectiveness of the proposed models, we conducted two experiments on real-world data obtained by using EEG-signal acquisition devices, and they yielded satisfactory results. By utilizing the proposed detection models, the problem of hypovigilance of UCAV operators and the problem of the high accident rate of UCAVs can be addressed. PMID:24963338
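
    The two-model detection idea above can be sketched with the forward algorithm for a discrete-emission HMM, which scores how likely an observed sequence of discretized EEG features is under one model. The states, symbols, and probabilities below are invented for illustration; the paper trains its HMMs on real EEG data.

```python
def forward(obs, start, trans, emit):
    """Return P(obs) for a discrete-emission hidden Markov model.

    start[s]    : initial probability of state s
    trans[s][t] : probability of a transition from state s to t
    emit[s][o]  : probability of emitting symbol o in state s
    """
    # initialize with the first observation, then propagate
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {t: sum(alpha[s] * trans[s][t] for s in alpha) * emit[t][o]
                 for t in start}
    return sum(alpha.values())

start = {"normal": 0.8, "hypo": 0.2}
trans = {"normal": {"normal": 0.9, "hypo": 0.1},
         "hypo":   {"normal": 0.3, "hypo": 0.7}}
# two discretized EEG symbols standing in for extracted features
emit = {"normal": {"alert": 0.85, "drowsy": 0.15},
        "hypo":   {"alert": 0.25, "drowsy": 0.75}}

p = forward(["alert", "drowsy", "drowsy"], start, trans, emit)
```

    In a detection setting, one such likelihood would be computed per trained model (normal vigilance vs hypovigilance), with the larger likelihood deciding the operator's current state.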

  6. The Proposal of an Evolutionary Strategy Generating the Data Structures Based on a Horizontal Tree for the Tests

    NASA Astrophysics Data System (ADS)

    Żukowicz, Marek; Markiewicz, Michał

    2016-09-01

    The aim of the article is to present a mathematical definition of the object model known in computer science as TreeList, and to show the application of this model to the design of an evolutionary algorithm whose purpose is to generate structures based on this object. The first chapter introduces the reader to the problem of presenting data using the TreeList object. The second chapter describes the problem of testing data structures based on TreeList. The third presents a mathematical model of the TreeList object and the parameters used to determine the utility of structures created through this model and in the evolutionary strategy that generates these structures for testing purposes. The last chapter provides a brief summary and plans for future research related to the algorithm presented in the article.

  7. An Ontology-Based Conceptual Model For Accumulating And Reusing Knowledge In A DMAIC Process

    NASA Astrophysics Data System (ADS)

    Nguyen, ThanhDat; Kifor, Claudiu Vasile

    2015-09-01

    DMAIC (Define, Measure, Analyze, Improve, and Control) is an important process used to enhance the quality of processes based on knowledge. However, it is difficult to access DMAIC knowledge. Conventional approaches meet a problem arising from structuring and reusing DMAIC knowledge. The main reason is that DMAIC knowledge is not represented and organized systematically. In this article, we overcome this problem with a conceptual model that combines the DMAIC process, knowledge management, and Ontology engineering. The main idea of our model is to utilize Ontologies to represent the knowledge generated by each of the DMAIC phases. We build five different knowledge bases for storing all knowledge of the DMAIC phases with the support of necessary tools and appropriate techniques from the Information Technology area. Consequently, these knowledge bases make knowledge available to experts, managers, and web users during or after DMAIC execution in order to share and reuse existing knowledge.

  8. Bayesian models based on test statistics for multiple hypothesis testing problems.

    PubMed

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
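
    The core idea above, model the test statistics directly as a mixture under null and alternative hypotheses, compute posterior null probabilities, and control a Bayesian FDR, can be sketched as follows. The normal mixture, its parameters, and the toy statistics are illustrative assumptions, not the authors' fitted model.

```python
import math

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def posterior_null(t, p0=0.9, mu1=3.0):
    """P(null | t) under a N(0,1) null and N(mu1,1) alternative mixture."""
    f0 = p0 * normal_pdf(t, 0.0, 1.0)
    f1 = (1.0 - p0) * normal_pdf(t, mu1, 1.0)
    return f0 / (f0 + f1)

def bayesian_fdr_reject(stats, alpha=0.10):
    """Reject the largest set whose mean posterior null probability <= alpha."""
    probs = sorted((posterior_null(t), i) for i, t in enumerate(stats))
    rejected, running = [], 0.0
    for k, (prob, i) in enumerate(probs, start=1):
        running += prob
        if running / k <= alpha:   # mean is nondecreasing over sorted probs
            rejected.append(i)
        else:
            break
    return sorted(rejected)

tstats = [0.2, -1.1, 4.2, 0.5, 3.6, 2.9]
rejected = bayesian_fdr_reject(tstats, alpha=0.10)
```

    Because the posterior null probabilities are sorted in increasing order, their running mean is nondecreasing, so the first violation of the threshold ends the rejection set.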

  9. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  10. Innovative mathematical modeling in environmental remediation.

    PubMed

    Yeh, Gour-Tsyh; Gwo, Jin-Ping; Siegel, Malcolm D; Li, Ming-Hsu; Fang, Yilin; Zhang, Fan; Luo, Wensui; Yabusaki, Steve B

    2013-05-01

    There are two different ways to model reactive transport: ad hoc and innovative reaction-based approaches. The former, such as the Kd simplification of adsorption, has been widely employed by practitioners, while the latter has been used mainly in scientific communities for elucidating mechanisms of biogeochemical transport processes. It is believed that innovative mechanistic-based models could serve as protocols for environmental remediation as well. This paper reviews the development of a mechanistically coupled fluid flow, thermal transport, hydrologic transport, and reactive biogeochemical model and example applications to environmental remediation problems. Theoretical bases are sufficiently described. Four example problems previously carried out are used to demonstrate how numerical experimentation can be used to evaluate the feasibility of different remediation approaches. The first involved the application of a 56-species uranium tailing problem to the Melton Branch Subwatershed at Oak Ridge National Laboratory (ORNL) using the parallel version of the model. Simulations were made to demonstrate the potential mobilization of uranium and other chelating agents in the proposed waste disposal site. The second problem simulated a laboratory-scale system to investigate the role of natural attenuation in potential off-site migration of uranium from uranium mill tailings after restoration. It showed the inadequacy of using a single Kd even for a homogeneous medium. The third example simulated laboratory experiments involving extremely high concentrations of uranium, technetium, aluminum, nitrate, and toxic metals (e.g., Ni, Cr, Co). The fourth example modeled microbially-mediated immobilization of uranium in an unconfined aquifer using acetate amendment in a field-scale experiment.
The purposes of these modeling studies were to simulate various mechanisms of mobilization and immobilization of radioactive wastes and to illustrate how to apply reactive transport models for environmental remediation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Problem Based Learning and the scientific process

    NASA Astrophysics Data System (ADS)

    Schuchardt, Daniel Shaner

    This research project was developed to inspire students to constructively use problem based learning and the scientific process to learn middle school science content. The student population in this study consisted of male and female seventh grade students. Students were presented with authentic problems connected to physical and chemical properties of matter. The intent of the study was to have students use the scientific process of examining existing knowledge, generating learning issues or questions about the problems, and then developing a course of action to research and design experiments to model resolutions to the authentic problems. It was expected that students would improve their ability to actively engage with others in a problem solving process to achieve a deeper understanding of Michigan's 7th Grade Level Content Expectations, the Next Generation Science Standards, and a scientific process. Problem based learning was statistically effective in students' learning of the scientific process, and students showed statistically significant improvement from pretest to posttest scores. The teaching method of Problem Based Learning was effective for seventh grade science students at Dowagiac Middle School.

  12. Discrete bacteria foraging optimization algorithm for graph based problems - a transition from continuous to discrete

    NASA Astrophysics Data System (ADS)

    Sur, Chiranjib; Shukla, Anupam

    2018-03-01

    The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based meta-heuristic search that depends on the social influence of the bacteria co-agents in the search space of the problem. The algorithm faces tremendous hindrance in its application to discrete and graph-based problems due to biased mathematical modelling and the dynamic structure of the algorithm. This has been the key motivation to revive and introduce a discrete form, called the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which outnumber the continuous-domain problems represented by mathematical and numerical equations in real life. In this work, we have mainly simulated a graph-based road multi-objective optimisation problem and have discussed the prospect of its utilisation in other similar optimisation problems and graph-based problems. The various solution representations that can be handled by this DBFO have also been discussed. The implications and dynamics of the various parameters used in the DBFO are illustrated from the point of view of the problems, and the search has been a combination of both exploration and exploitation. The results of DBFO have been compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes depending upon previous experience and covered-path analysis. This makes the algorithm better at combination generation for graph-based problems and NP-hard problems.

  13. Conditioning of high voltage radio frequency cavities by using fuzzy logic in connection with rule based programming

    NASA Astrophysics Data System (ADS)

    Perreard, S.; Wildner, E.

    1994-12-01

    Many processes are controlled by experts using some kind of mental model to decide on actions and make conclusions. This model, based on heuristic knowledge, can often be represented by rules and does not have to be particularly accurate. Such is the case for the problem of conditioning high voltage RF cavities; the expert has to decide, by observing some criteria, whether to increase or to decrease the voltage and by how much. A program has been implemented which can be applied to a class of similar problems. The kernel of the program is a small rule base, which is independent of the kind of cavity. To model a specific cavity, we use fuzzy logic which is implemented as a separate routine called by the rule base, to translate from numeric to symbolic information.
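
    The combination described above, fuzzy membership functions called from a small rule base to decide whether to raise or lower the voltage and by how much, can be sketched as follows. The variable names, membership shapes, and rule weights are invented for illustration and are not the actual conditioning rules.

```python
def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def voltage_step(pressure, breakdowns):
    """Fuzzy decision: positive -> raise the voltage, negative -> lower it."""
    p_low  = tri(pressure, -1.0, 0.0, 0.5)    # normalized vacuum pressure
    p_high = tri(pressure,  0.3, 1.0, 2.0)
    b_low  = tri(breakdowns, -1.0, 0.0, 0.5)  # normalized breakdown rate
    # rule 1: pressure high                    -> decrease (weight -1.0)
    # rule 2: pressure low AND few breakdowns  -> increase (weight +1.0)
    r1 = p_high
    r2 = min(p_low, b_low)
    # defuzzify with a weighted average of the rule activations
    total = r1 + r2
    return 0.0 if total == 0.0 else (-1.0 * r1 + 1.0 * r2) / total

step = voltage_step(pressure=0.1, breakdowns=0.1)
```

    Separating the symbolic rules from the numeric membership functions mirrors the paper's design: the small rule base stays cavity-independent, while the membership functions are the part tuned to model a specific cavity.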

  14. Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach

    NASA Astrophysics Data System (ADS)

    Wang, Li; Lu, Zhong-Rong

    2017-05-01

    This paper aims to identify the parameters of the Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure: identifying the model parameters is treated as an optimization problem with a nonlinear least-squares objective function. The enhanced response sensitivity approach, which has been shown to be convergent and well suited to this kind of problem, is then adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.
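
    A minimal version of the identification loop can be sketched as follows, substituting a coarse parameter scan for the paper's enhanced response sensitivity iteration; the model constants, the excitation and the single free parameter are all invented for the example.

```python
import math

def simulate_z(beta, x_series, dt=0.01, A=1.0, gamma=0.5, n=2.0):
    """Euler integration of the Bouc-Wen internal variable
    z' = A*x' - beta*|x'|*|z|**(n-1)*z - gamma*x'*|z|**n."""
    z, out, prev = 0.0, [], x_series[0]
    for x in x_series[1:]:
        xd = (x - prev) / dt
        z += dt * (A*xd - beta*abs(xd)*abs(z)**(n - 1)*z - gamma*xd*abs(z)**n)
        out.append(z)
        prev = x
    return out

# synthetic "measured" data from a sinusoidal displacement history
dt = 0.01
xs = [0.8 * math.sin(2 * math.pi * 0.5 * k * dt) for k in range(400)]
measured = simulate_z(1.5, xs, dt)        # true beta = 1.5

# nonlinear least squares via a coarse scan over beta (a crude stand-in
# for the gradient-based response-sensitivity iteration)
def sse(beta):
    return sum((a - b) ** 2 for a, b in zip(simulate_z(beta, xs, dt), measured))

beta_hat = min((0.5 + 0.05 * i for i in range(41)), key=sse)
```

    With noiseless synthetic data the scan recovers the generating parameter; the paper's method instead updates all parameters jointly from response sensitivities, with a trust-region safeguard.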

  15. A path-integral approach to the problem of time

    NASA Astrophysics Data System (ADS)

    Amaral, M. M.; Bojowald, Martin

    2018-01-01

    Quantum transition amplitudes are formulated for model systems with local internal time, using intuition from path integrals. The amplitudes are shown to be more regular near a turning point of internal time than could be expected based on existing canonical treatments. In particular, a successful transition through a turning point is provided in the model systems, together with a new definition of such a transition in general terms. Some of the results rely on a fruitful relation between the problem of time and general Gribov problems.

  16. Model predictive control of an air suspension system with damping multi-mode switching damper based on hybrid model

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Yuan, Chaochun; Cai, Yingfeng; Wang, Shaohua; Chen, Long

    2017-09-01

    This paper presents the hybrid modeling and model predictive control of an air suspension system with a multi-mode switching damper. Unlike traditional dampers with continuously adjustable damping, in this study a new damper with four discrete damping modes is applied to a semi-active vehicle air suspension. The new damper achieves its different damping modes simply by controlling the on-off statuses of two solenoid valves, which makes its damping adjustment more efficient and more reliable. However, since switching the damping mode induces different modes of operation, the air suspension system with the new damper poses a challenging hybrid control problem. To model both the continuous/discrete dynamics and the switching between damping modes, the framework of mixed logical dynamical (MLD) systems is used to establish the hybrid system model. Based on the resulting hybrid dynamical model, the control problem is recast as a model predictive control (MPC) problem, which allows us to optimize the switching sequence of the damping modes while taking the suspension performance requirements into account. Numerical simulation results demonstrate the efficacy of the proposed control method.
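
    The essence of predictive control over discrete modes can be conveyed by brute-force enumeration of mode sequences over a short horizon. This is only a sketch: the paper solves a proper MLD/MPC formulation, whereas the plant below is a drastically simplified one-degree-of-freedom system with invented numbers.

```python
from itertools import product

# four discrete damping coefficients, one per valve configuration (invented)
MODES = [200.0, 600.0, 1200.0, 2000.0]   # N*s/m
m, k, dt = 300.0, 15000.0, 0.01          # sprung mass, spring rate, time step

def step(x, v, c):
    """One Euler step of the simplified vertical dynamics."""
    a = (-k * x - c * v) / m
    return x + dt * v, v + dt * a

def mpc_mode(x, v, horizon=3):
    """Enumerate every mode sequence over the horizon (the mixed-integer
    program, solved here by brute force) and return the first mode of the
    cheapest sequence; the cost penalises displacement and velocity."""
    best_seq, best_cost = None, float("inf")
    for seq in product(MODES, repeat=horizon):
        xx, vv, cost = x, v, 0.0
        for c in seq:
            xx, vv = step(xx, vv, c)
            cost += xx * xx + 0.1 * vv * vv
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]

mode = mpc_mode(0.05, 0.0)   # receding horizon: apply this, then re-solve
```

    Enumeration scales exponentially in the horizon (4**N sequences), which is why MLD/MPC formulations are solved with mixed-integer solvers in practice.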

  17. What if ? On alternative conceptual models and the problem of their implementation

    NASA Astrophysics Data System (ADS)

    Neuberg, Jurgen

    2015-04-01

    Seismic and other monitoring techniques rely on a set of conceptual models on the basis of which data sets can be interpreted. To use them at an operational level in volcano observatories, these models need to be tested and ready for interpretation in a timely manner. Once a model is established, the scientists in charge of advising stakeholders and decision makers often stick firmly to it, to avoid confusing non-experts with alternative interpretations. This talk gives an overview of widely accepted conceptual models used to interpret seismic and deformation data, and highlights in a few case studies some of the problems that arise. Aspects covered include knowledge transfer between research institutions and observatories, data sharing, the difficulty of taking up advice, and some hidden problems which turn out to be much more critical in assessing volcanic hazard than the actual data interpretation.

  18. Analysis of the type II robotic mixed-model assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Çil, Zeynel Abidin; Mete, Süleyman; Ağpak, Kürşad

    2017-06-01

    In recent years, there has been an increasing trend towards using robots in production systems. Robots are used in different areas such as packaging, transportation, loading/unloading and especially assembly lines. One important step in taking advantage of robots on the assembly line is considering them while balancing the line. On the other hand, market conditions have increased the importance of mixed-model assembly lines. Therefore, in this article, the robotic mixed-model assembly line balancing problem is studied. The aim of this study is to develop a new efficient heuristic algorithm based on beam search in order to minimize the sum of cycle times over all models. In addition, mathematical models of the problem are presented for comparison. The proposed heuristic is tested on benchmark problems and compared with the optimal solutions. The results show that the algorithm is very competitive and is a promising tool for further research.
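
    The beam-search idea can be sketched on a toy single-model instance: tasks are assigned one by one to stations, and only the most promising partial assignments survive each level. Task times, the number of stations and the beam width are invented; the algorithm in the article additionally handles multiple models, robot assignment and precedence constraints.

```python
import heapq

def beam_balance(task_times, n_stations, beam_width=3):
    """Beam-search sketch: assign tasks in order to one of the stations;
    only the beam_width partial assignments with the smallest provisional
    cycle time (max station load) survive each level."""
    beam = [(0, [0.0] * n_stations)]          # (cycle_time, station loads)
    for t in task_times:
        nxt = []
        for _, loads in beam:
            for s in range(n_stations):       # branch: put task t on station s
                new = list(loads)
                new[s] += t
                nxt.append((max(new), new))
        beam = heapq.nsmallest(beam_width, nxt)   # prune to the beam width
    return beam[0][0]                          # best cycle time found

cycle = beam_balance([4, 3, 3, 2, 2, 2], n_stations=2)
```

    On this instance the beam reaches the perfect 8/8 split; in general beam search trades optimality for speed, which is why the article benchmarks it against exact mathematical models.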

  19. Executive functioning as a mediator of conduct problems prevention in children of homeless families residing in temporary supportive housing: a parallel process latent growth modeling approach.

    PubMed

    Piehler, Timothy F; Bloomquist, Michael L; August, Gerald J; Gewirtz, Abigail H; Lee, Susanne S; Lee, Wendy S C

    2014-01-01

    A culturally diverse sample of formerly homeless youth (ages 6-12) and their families (n = 223) participated in a cluster randomized controlled trial of the Early Risers conduct problems prevention program in a supportive housing setting. Parents provided four annual behaviorally based ratings of executive functioning (EF) and conduct problems: at baseline, over 2 years of intervention programming, and at a 1-year follow-up assessment. Using intent-to-treat analyses, a multilevel latent growth model revealed that the intervention group demonstrated reduced growth in conduct problems over the 4 assessment points. To examine mediation, a multilevel parallel process latent growth model was used to simultaneously model growth in EF and growth in conduct problems, with intervention status as a covariate. A significant mediational process emerged: participation in the intervention promoted growth in EF, which predicted negative growth in conduct problems. The model was consistent with changes in EF fully mediating intervention-related changes in youth conduct problems over the course of the study. These findings highlight the critical role that EF plays in behavioral change and lend further support to its importance as a target in preventive interventions with populations at risk for conduct problems.

  20. The inverse problem of using the information of historical data to estimate model errors

    NASA Astrophysics Data System (ADS)

    Wan, S.; He, W.

    2016-12-01

    The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in the field. In this study, we investigate such a problem using the classic Lorenz (1963) equations as the prediction model and the Lorenz equations with a periodic evolutionary function as an accurate representation of reality, used to generate "observational data." On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. We thereby propose a new EM-based approach to estimating model errors. Numerical tests demonstrate the ability of the new approach to correct model structural errors; in fact, it combines statistics and dynamics to a certain extent.

  1. Front-Stage Stars and Backstage Producers: The Role of Judges in Problem-Solving Courts

    PubMed Central

    Portillo, Shannon; Rudes, Danielle; Viglione, Jill; Nelson, Matthew; Taxman, Faye

    2012-01-01

    In problem-solving courts judges are no longer neutral arbitrators in adversarial justice processes. Instead, judges directly engage with court participants. The movement towards problem-solving court models emerges from a collaborative therapeutic jurisprudence framework. While most scholars argue judges are the central courtroom actors within problem-solving courts, we find judges are the stars front-stage, but play a more supporting role backstage. We use Goffman's front-stage-backstage framework to analyze 350 hours of ethnographic fieldwork within five problem-solving courts. Problem-solving courts are collaborative organizations with shifting leadership, based on forum. Understanding how the roles of courtroom workgroup actors adapt under the new court model is foundational for effective implementation of these justice processes. PMID:23397430

  2. Front-Stage Stars and Backstage Producers: The Role of Judges in Problem-Solving Courts.

    PubMed

    Portillo, Shannon; Rudes, Danielle; Viglione, Jill; Nelson, Matthew; Taxman, Faye

    2013-01-01

    In problem-solving courts judges are no longer neutral arbitrators in adversarial justice processes. Instead, judges directly engage with court participants. The movement towards problem-solving court models emerges from a collaborative therapeutic jurisprudence framework. While most scholars argue judges are the central courtroom actors within problem-solving courts, we find judges are the stars front-stage, but play a more supporting role backstage. We use Goffman's front-stage-backstage framework to analyze 350 hours of ethnographic fieldwork within five problem-solving courts. Problem-solving courts are collaborative organizations with shifting leadership, based on forum. Understanding how the roles of courtroom workgroup actors adapt under the new court model is foundational for effective implementation of these justice processes.

  3. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered, and we also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that these optimization tasks can be interpreted as a discrete optimization problem (a generalized travelling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. To solve the GTSP, we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and on dynamic programming.
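
    The travelling-salesman flavour of tool path planning can be illustrated with a nearest-neighbour ordering of pierce points, a greedy stand-in for the GTSP model the abstract cites; the coordinates below are invented.

```python
import math

def tour_length(order, pts):
    """Total rapid-traverse length of visiting the pierce points in order."""
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(order, order[1:]))

def nearest_neighbour(pts, start=0):
    """Greedy tour: always move the cutting head to the closest
    unvisited pierce point."""
    unvisited = set(range(len(pts))) - {start}
    order = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[order[-1]], pts[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# invented pierce points, one per contour to be cut
pierce_points = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
order = nearest_neighbour(pierce_points)
```

    The megalopolis model generalises this: each contour contributes a cluster of admissible entry points, and precedence constraints (inner contours before outer ones) restrict the visiting order, which is why dynamic programming rather than a simple greedy is used.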

  4. A Critical Survey of Optimization Models for Tactical and Strategic Aspects of Air Traffic Flow Management

    NASA Technical Reports Server (NTRS)

    Bertsimas, Dimitris; Odoni, Amedeo

    1997-01-01

    This document presents a critical review of the principal existing optimization models that have been applied to Air Traffic Flow Management (TFM). Emphasis will be placed on two problems, the Generalized Tactical Flow Management Problem (GTFMP) and the Ground Holding Problem (GHP), as well as on some of their variations. To perform this task, we have carried out an extensive literature review that has covered more than 40 references, most of them very recent. Based on the review of this emerging field, our objectives were to: (i) identify the best available models; (ii) describe typical contexts for applications of the models; (iii) provide illustrative model formulations; and (iv) identify the methodologies that can be used to solve the models. We shall begin our presentation below by providing a brief context for the models that we are reviewing. In Section 3 we shall offer a taxonomy and identify four classes of models for review. In Sections 4, 5, and 6 we shall then review, respectively, models for the Single-Airport Ground Holding Problem, the Generalized Tactical Flow Management Problem and the Multi-Airport Ground Holding Problem (for the definitions of these problems see Section 3 below). In each section, we identify the best available models and discuss briefly their computational performance and applications, if any, to date. Section 7 summarizes our conclusions about the state of the art.

  5. A Flux-Corrected Transport Based Hydrodynamic Model for the Plasmasphere Refilling Problem following Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Chatterjee, K.; Schunk, R. W.

    2017-12-01

    The refilling of the plasmasphere following a geomagnetic storm remains one of the longstanding problems in the area of ionosphere-magnetosphere coupling. Both diffusion and hydrodynamic approximations have been adopted for the modeling and solution of this problem. The diffusion approximation neglects the nonlinear inertial term in the momentum equation, so it is not rigorously valid immediately after the storm. Over the last few years, we have developed a hydrodynamic refilling model using the flux-corrected transport method, a numerical method that is extremely well suited to handling nonlinear problems with shocks and discontinuities. The plasma transport equations are solved along 1D closed magnetic field lines that connect conjugate ionospheres, and the model currently includes three ion (H+, O+, He+) and two neutral (O, H) species. In this work, each ion species under consideration is modeled as two separate streams emanating from the conjugate hemispheres, and the model correctly predicts supersonic ion speeds and the presence of high levels of helium during the early hours of refilling. The ultimate objective of this research is the development of a 3D model for the plasmasphere refilling problem; with additional development, the same methodology can potentially be applied to the study of other complex space plasma coupling problems in closed flux tube geometries. Index Terms: 2447 Modeling and forecasting [IONOSPHERE]; 2753 Numerical modeling [MAGNETOSPHERIC PHYSICS]; 7959 Models [SPACE WEATHER]

  6. Evidence-Informed, Individual Treatment of a Child with Sexual Behavior Problems: A Case Study.

    PubMed

    Allen, Brian; Berliner, Lucy

    2015-11-01

    Children with sexual behavior problems pose a significant challenge for community-based mental health clinicians. Very few clinical trials are available to guide intervention, and the interventions that are available are delivered in a group format. The current case study demonstrates the application of evidence-informed treatment techniques during the individual treatment of a 10-year-old boy displaying interpersonal sexual behavior problems. Specifically, the clinician adapts and implements a group-based model developed and tested by Bonner et al. (1999) for use with an individual child and his caregivers. Key points of the case study are discussed within the context of implementing evidence-informed treatments for children with sexual behavior problems.

  7. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    NASA Astrophysics Data System (ADS)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach tailored for the nonlinear model predictive control of wave energy converters (WECs). The advantage of this approach stems from its exploitation of the differential flatness of the WEC model, which reduces the dimension of the nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using a pseudospectral method. The resulting alleviation of the computational burden helps to promote an economical implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, all of which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of the approach.

  8. Application of different variants of the BEM in numerical modeling of bioheat transfer problems.

    PubMed

    Majchrzak, Ewa

    2013-09-01

    Heat transfer processes in living organisms are described by different mathematical models. In particular, the typical continuous model of bioheat transfer is based on the popular Pennes equation, but the Cattaneo-Vernotte equation and the dual-phase-lag equation are also used. It should be pointed out that vascular models are examined in parallel; in that case the energy equations are formulated separately for the large blood vessels and for the tissue domain. In the paper, different variants of the boundary element method (BEM) are discussed as tools for the numerical solution of bioheat transfer problems. For steady-state problems and the vascular models, the classical BEM algorithm and the multiple reciprocity BEM are presented. For transient problems connected with the heating of tissue, various tissue models are considered, to which the first scheme of the BEM, the BEM using discretization in time, and the general BEM are applied. Examples of computations illustrate the possibilities of practical application of the boundary element method to bioheat transfer problems.
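
    For reference, the Pennes equation mentioned above takes the standard form

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla\!\cdot\!\bigl(\lambda\,\nabla T\bigr)
  + c_{b}\, w_{b}\,\bigl(T_{B} - T\bigr)
  + Q_{\mathrm{met}},
```

    where T is the tissue temperature; rho, c and lambda are the tissue density, specific heat and thermal conductivity; c_b and w_b are the specific heat and perfusion rate of blood; T_B is the arterial blood temperature; and Q_met is the metabolic heat source. An additional external source term is included when artificial heating of tissue is modelled.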

  9. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles: the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization, and it can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
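
    The Pareto-acceptance rule at the heart of such techniques is compact enough to sketch. This is illustrative only (JuPOETs itself is a Julia package, and its API is not reproduced here): a candidate parameter set is kept if no other candidate beats it on every training objective simultaneously.

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective and
    strictly better in at least one (minimisation convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the nondominated objective vectors -- the acceptance
    rule applied to candidate parameter sets."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# invented objective vectors (error on objective 1, error on objective 2)
objs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.5, 2.5)]
front = pareto_front(objs)
```

    The front here is the three corner points; the two interior points are dominated. Simulated annealing then perturbs parameters and uses this rank (rather than a single scalar error) to decide acceptance.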

  10. A Problem-Based Approach to Elastic Wave Propagation: The Role of Constraints

    ERIC Educational Resources Information Center

    Fazio, Claudio; Guastella, Ivan; Tarantino, Giovanni

    2009-01-01

    A problem-based approach to the teaching of mechanical wave propagation, focused on observation and measurement of wave properties in solids and on modelling of these properties, is presented. In particular, some experimental results, originally aimed at measuring the propagation speed of sound waves in metallic rods, are used in order to deepen…

  11. Training Australian General Practitioners in Rural Public Health: Impact, Desirability and Adaptability of Hybrid Problem-Based Learning

    ERIC Educational Resources Information Center

    Gladman, Justin; Perkins, David

    2013-01-01

    Context and Objective: Australian rural general practitioners (GPs) require public health knowledge. This study explored the suitability of teaching complex public health issues related to Aboriginal health by way of a hybrid problem-based learning (PBL) model within an intensive training retreat for GP registrars, when numerous trainees have no…

  12. Motivational Influences of Using Peer Evaluation in Problem-Based Learning in Medical Education

    ERIC Educational Resources Information Center

    Abercrombie, Sara; Parkes, Jay; McCarty, Teresita

    2015-01-01

    This study investigates the ways in which medical students' achievement goal orientations (AGO) affect their perceptions of learning and actual learning from an online problem-based learning environment, Calibrated Peer Review™. First, the tenability of a four-factor model (Elliot & McGregor, 2001) of AGO was tested with data collected from…

  13. An Oral Language Based Reading Remedial Program for Special Education Children.

    ERIC Educational Resources Information Center

    Langdon, Tom

    A problem was addressed within the context of the action based research practicum model. The problem was junior high school special education students who read at or below the 10th percentile when compared to age appropriate peers on standardized achievement instruments; and who have had all manner of reading interventions and yet continue to fall…

  14. Integrating Problem- and Project-Based Learning Opportunities: Assessing Outcomes of a Field Course in Environment and Sustainability

    ERIC Educational Resources Information Center

    Kricsfalusy, Vladimir; George, Colleen; Reed, Maureen G.

    2018-01-01

    Improving student competencies to address sustainability challenges has been a subject of significant debate in higher education. Problem- and project-based learning have been widely celebrated as course models that support the development of sustainability competencies. This paper describes a course developed for a professional Master's program…

  15. Integrating Blended and Problem-Based Learning into an Architectural Housing Design Studio: A Case Study

    ERIC Educational Resources Information Center

    Bregger, Yasemin Alkiser

    2017-01-01

    This paper presents how a blended learning pedagogic model is integrated into an architectural design studio by adapting the problem-based learning process and housing issues in Istanbul Technical University (ITU), during fall 2015 and spring 2016 semesters for fourth and sixth level students. These studios collaborated with the "Introduction…

  16. Using a Modified Pyramidal Training Model to Teach Special Education Teachers to Conduct Trial-Based Functional Analyses

    ERIC Educational Resources Information Center

    Kunnavatana, S. Shanun; Bloom, Sarah E.; Samaha, Andrew L.; Lignugaris/Kraft, Benjamin; Dayton, Elizabeth; Harris, Shannon K.

    2013-01-01

    Functional behavioral assessments are commonly used in school settings to assess and develop interventions for problem behavior. The trial-based functional analysis is an approach that teachers can use in their classrooms to identify the function of problem behavior. The current study evaluates the effectiveness of a modified pyramidal training…

  17. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach makes it possible to remove posterior bias and to obtain a more realistic characterization of uncertainty.
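
    The dictionary-and-basis idea can be sketched in a few lines; the data structures and numbers below are invented, and the final projection/likelihood step of the method is only indicated in the usage note.

```python
def knn_error_basis(theta, dictionary, k=3):
    """Build a local model-error basis from the k dictionary entries whose
    parameters are closest to the current MCMC proposal theta.  Each entry
    is (params, detailed_output, approx_output); the basis vectors are the
    stored discrepancies detailed - approx."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    nearest = sorted(dictionary, key=lambda e: dist(e[0], theta))[:k]
    return [[d - a for d, a in zip(det, app)] for _, det, app in nearest]

# hypothetical dictionary: 1-D parameter vectors, 2-sample "data" vectors
dictionary = [
    ((0.0,), [1.0, 2.0], [0.8, 1.7]),
    ((1.0,), [1.5, 2.5], [1.2, 2.1]),
    ((5.0,), [3.0, 4.0], [2.0, 3.0]),
]
basis = knn_error_basis((0.2,), dictionary, k=2)
```

    In the full method, the residual between data and the fast approximate model is projected onto this local basis to strip out its model-error component before the likelihood of the proposal is evaluated, and occasional detailed runs keep growing the dictionary.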

  18. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 1. OVERVIEW

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four report volumes. Moreover, the tests are generally applicable to other model evaluation problem...

  19. Introduction of the Notion of Differential Equations by Modelling Based Teaching

    ERIC Educational Resources Information Center

    Budinski, Natalija; Takaci, Djurdjica

    2011-01-01

    This paper proposes modelling based learning as a tool for learning and teaching mathematics. The example of modelling real world problems leading to the exponential function as the solution of differential equations is described, as well as the observations about students' activities during the process. The students were acquainted with the…

  20. Using the Constructivist Tridimensional Design Model for Online Continuing Education for Health Care Clinical Faculty

    ERIC Educational Resources Information Center

    Seo, Kay Kyeong-Ju; Engelhard, Chalee

    2014-01-01

    This article presents a new paradigm for continuing education of Clinical Instructors (CIs): the Constructivist Tridimensional (CTD) model for the design of an online curriculum. Based on problem-based learning, self-regulated learning, and adult learning theory, the CTD model was designed to facilitate interactive, collaborative, and authentic…

  1. Approaches to eliminate waste and reduce cost for recycling glass.

    PubMed

    Chao, Chien-Wen; Liao, Ching-Jong

    2011-12-01

    In recent years, the issue of environmental protection has received considerable attention. This paper adds to the literature by investigating a scheduling problem in the manufacturing of a glass recycling factory in Taiwan. The objective is to minimize the sum of the total holding cost and loss cost. We first represent the problem as an integer programming (IP) model, and then develop two heuristics based on the IP model to find near-optimal solutions for the problem. To validate the proposed heuristics, comparisons between optimal solutions from the IP model and solutions from the current method are conducted. The comparisons involve two problem sizes, small and large, where the small problems range from 15 to 45 jobs, and the large problems from 50 to 100 jobs. Finally, a genetic algorithm is applied to evaluate the proposed heuristics. Computational experiments show that the proposed heuristics can find good solutions in a reasonable time for the considered problem. Copyright © 2011 Elsevier Ltd. All rights reserved.
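
    For the holding-cost part of such objectives a classical greedy exists: on a single machine, sequencing jobs by Smith's rule (processing time divided by holding weight) minimises the total weighted completion time. The job data below are invented, and the paper's IP model covers a richer cost structure than this sketch.

```python
def wspt_order(jobs):
    """Smith's rule (weighted shortest processing time): sort jobs by
    processing time / holding weight."""
    return sorted(jobs, key=lambda j: j["p"] / j["w"])

def total_holding_cost(seq):
    """Sum of holding_weight * completion_time over a job sequence."""
    t, cost = 0, 0
    for j in seq:
        t += j["p"]               # job finishes at time t
        cost += j["w"] * t
    return cost

# invented jobs: p = processing time, w = per-unit-time holding weight
jobs = [{"p": 4, "w": 1}, {"p": 1, "w": 3}, {"p": 2, "w": 2}]
seq = wspt_order(jobs)
```

    On this instance the rule cuts the holding cost from 33 (given order) to 16; heuristics like those in the paper embed such priority rules inside the IP-based search.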

  2. Direct and inverse problems of studying the properties of multilayer nanostructures based on a two-dimensional model of X-ray reflection and scattering

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2014-06-01

    A mathematical model of X-ray reflection and scattering by multilayered nanostructures in the quasi-optical approximation is proposed. X-ray propagation and the electric field distribution inside the multilayered structure are considered with allowance for refraction, which is taken into account via the second derivative with respect to the depth of the structure. This model is used to demonstrate the possibility of solving inverse problems in order to determine the characteristics of irregularities not only over the depth (as in the one-dimensional problem) but also over the length of the structure. An approximate combinatorial method for system decomposition and composition is proposed for solving the inverse problems.

  3. Statistical mechanics of competitive resource allocation using agent-based models

    NASA Astrophysics Data System (ADS)

    Chakraborti, Anirban; Challet, Damien; Chatterjee, Arnab; Marsili, Matteo; Zhang, Yi-Cheng; Chakrabarti, Bikas K.

    2015-01-01

    Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (the El Farol Bar problem, the Minority Game, the Kolkata Paise Restaurant problem, the Stable Marriage problem, the Parking Space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena, such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agents with non-linear interactions, they provide a prospective unifying paradigm for many scientific disciplines.
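
    The flavour of these models is easy to convey with a minimal Minority Game (a standard formulation; all sizes here are chosen arbitrarily): each agent holds a few fixed random strategies mapping the recent outcome history to a +1/-1 choice, always plays its best-scoring one, and the agents on the minority side win the round.

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2, rounds=200, seed=0):
    """Minimal Minority Game; returns the attendance (sum of +1/-1
    choices) per round."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    strategies = [[[rng.choice((-1, 1)) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    scores = [[0] * n_strategies for _ in range(n_agents)]
    history = 0                      # last `memory` outcomes encoded as bits
    attendance = []
    for _ in range(rounds):
        acts = []
        for i in range(n_agents):
            best = max(range(n_strategies), key=lambda s: scores[i][s])
            acts.append(strategies[i][best][history])
        total = sum(acts)            # > 0 means +1 was the majority choice
        minority = -1 if total > 0 else 1
        for i in range(n_agents):    # reward strategies that chose the minority
            for s in range(n_strategies):
                if strategies[i][s][history] == minority:
                    scores[i][s] += 1
        history = ((history << 1) | (1 if minority == 1 else 0)) % n_hist
        attendance.append(total)
    return attendance

att = minority_game()
```

    With an odd number of agents the attendance is never zero; the statistical-mechanics analyses the article reviews study how its fluctuations depend on the ratio between the number of histories and the number of agents.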

  4. Motion-based prediction is sufficient to solve the aperture problem

    PubMed Central

    Perrinet, Laurent U; Masson, Guillaume S

    2012-01-01

    In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated by the detection of motion in the aperture problem: since the luminance of an elongated line is symmetrical along its axis, its tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. In simulations we observe a progressive solution to the aperture problem similar to that seen in physiology and behavior, and we demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away, while coherent information diffuses progressively to the global scale. Most previous models included ad hoc mechanisms, such as end-stopped cells or a selection layer, to track specific luminance-based features as necessary conditions for solving the aperture problem. Here, we have shown that motion-based predictive coding, as implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction underlying a large class of sensory computations. PMID:22734489

  5. On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    1997-01-01

    In previous work, the determination of uncertainty models via minimum norm model validation is based on a single set of input and output measurement data. Since uncertainty bounds at each frequency are directionally dependent for multivariable systems, this will lead to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min-max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result will lead to less conservative and more realistic uncertainty models for use in robust control.

  6. Problem-Based Learning Model Used to Scientific Approach Based Worksheet for Physics to Develop Senior High School Students Characters

    NASA Astrophysics Data System (ADS)

    Yulianti, D.

    2017-04-01

    The purpose of this study is to explore the application of the Problem Based Learning (PBL) model aided with a scientific approach and character-integrated physics worksheets (LKS). Another purpose is to investigate the increase in cognitive and psychomotor learning outcomes and to examine the character development of students. The method used in this study was a quasi-experiment. The instruments were observation and a cognitive test. The worksheets can improve students' cognitive and psychomotor learning outcomes. Improvements in the cognitive learning results of students who learned using the worksheets are higher than those of students who received learning without worksheets. The LKS can also develop the students' character.

  7. Particle tracking acceleration via signed distance fields in direct-accelerated geometry Monte Carlo

    DOE PAGES

    Shriwise, Patrick C.; Davis, Andrew; Jacobson, Lucas J.; ...

    2017-08-26

    Computer-aided design (CAD)-based Monte Carlo radiation transport is of value to the nuclear engineering community for its ability to conduct transport on high-fidelity models of nuclear systems, but it is more computationally expensive than native geometry representations. This work describes the adaptation of a rendering data structure, the signed distance field, as a geometric query tool for accelerating CAD-based transport in the direct-accelerated geometry Monte Carlo toolkit. Demonstrations of its effectiveness are shown for several problems. The beginnings of a predictive model for the data structure's utilization based on various problem parameters are also introduced.
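
    A signed distance field supports exactly the kind of query this record exploits: the field value bounds how far a particle can advance before any surface test is needed. A minimal 2-D sketch with an analytic field (the toolkit's actual data structures are not reproduced here):

```python
import math

def sdf_circle(x, y, cx=0.0, cy=0.0, r=1.0):
    """Exact signed distance to a circle: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def safe_advance(x, y, dx, dy, sdf, eps=1e-6, max_steps=64):
    """Sphere tracing: step the particle along (dx, dy) by the
    distance-to-surface bound, so expensive geometry intersection tests
    are only needed once the particle is near the boundary."""
    t = 0.0
    for _ in range(max_steps):
        d = abs(sdf(x + t * dx, y + t * dy))
        if d < eps:
            return t  # reached the surface
        t += d  # the field guarantees no surface within distance d
    return t
```

In practice the field would be sampled from a precomputed grid rather than evaluated analytically, which is where the rendering-community machinery comes in.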

  8. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  9. The infinite sites model of genome evolution.

    PubMed

    Ma, Jian; Ratan, Aakrosh; Raney, Brian J; Suh, Bernard B; Miller, Webb; Haussler, David

    2008-09-23

    We formalize the problem of recovering the evolutionary history of a set of genomes that are related to an unseen common ancestor genome by operations of speciation, deletion, insertion, duplication, and rearrangement of segments of bases. The problem is examined in the limit as the number of bases in each genome goes to infinity. In this limit, the chromosomes are represented by continuous circles or line segments. For such an infinite-sites model, we present a polynomial-time algorithm to find the most parsimonious evolutionary history of any set of related present-day genomes.

  10. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  11. Computational method for analysis of polyethylene biodegradation

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

    2003-12-01

    In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into the analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks of cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in the modeling are indeed appropriate.
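
    The linear weight-class system this record describes can be sketched as follows; the rate values and the exact coupling are hypothetical stand-ins for the rates the authors determine from GPC data:

```python
def simulate_degradation(w0, consume, transfer, dt=0.01, t_end=5.0):
    """Forward-Euler integration of a linear weight-class model
    (hypothetical rates): class i loses mass by direct consumption
    (rate consume[i]) and by beta-oxidation transfer into class i-1
    (rate transfer[i]); class i gains the mass shed by class i+1."""
    n = len(w0)
    w = list(w0)
    t = 0.0
    while t < t_end:
        dw = [0.0] * n
        for i in range(n):
            dw[i] -= (consume[i] + transfer[i]) * w[i]
            if i + 1 < n:
                dw[i] += transfer[i + 1] * w[i + 1]  # shed by heavier class
        w = [max(0.0, w[i] + dt * dw[i]) for i in range(n)]
        t += dt
    return w
```

The inverse problem mentioned in the abstract would then amount to choosing `consume` and `transfer` so that the simulated weight distribution matches the before/after GPC patterns.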

  12. Is Team-Based Primary Care Associated with Less Access Problems and Self-Reported Unmet Need in Canada?

    PubMed

    Zygmunt, Austin; Asada, Yukiko; Burge, Frederick

    2017-10-01

    As in many jurisdictions, the delivery of primary care in Canada is being transformed from solo practice to team-based care. In Canada, team-based primary care involves general practitioners working with nurses or other health care providers, and it is expected to improve equity in access to care. This study examined whether team-based care is associated with fewer access problems and less unmet need and whether socioeconomic gradients in access problems and unmet need are smaller in team-based care than in non-team-based care. Data came from the 2008 Canadian Survey of Experiences with Primary Health Care (sample size: 10,858). We measured primary care type as team-based or non-team-based and socioeconomic status by income and education. We created four access problem variables and four unmet need variables (overall and three specific components). For each, we ran separate logistic regression models to examine their associations with primary care type. We examined socioeconomic gradients in access problems and unmet need stratified by primary care type. Primary care type had no statistically significant, independent associations with access problems or unmet need. Among those with non-team-based care, a statistically significant education gradient for overall access problems existed, whereas among those with team-based care, no statistically significant socioeconomic gradients existed.

  13. Applying an MVC Framework for The System Development Life Cycle with Waterfall Model Extended

    NASA Astrophysics Data System (ADS)

    Hardyanto, W.; Purwinarko, A.; Sujito, F.; Masturi; Alighiri, D.

    2017-04-01

    This paper describes an extension of the waterfall model using the MVC architectural pattern for software development. The waterfall model is one of the most widely used models in software development, yet it still has many problems. A common issue is that data changes cause delays in the process itself; software security is another major problem. This study uses the PHP programming language for the implementation, although the model can be implemented in other programming languages with the same concept. Because this study is based on the MVC architecture, it can improve the performance of both software development and maintenance, especially concerning security, validation, database access, and routing.

  14. A School-Based Mental Health Consultation Curriculum.

    ERIC Educational Resources Information Center

    Sandoval, Jonathan; Davis, John M.

    1984-01-01

    Presents one position on consultation that integrates a theoretical model, a process model, and a curriculum for training school-based mental health consultants. Elements of the proposed curriculum include: ethics, relationship building, maintaining rapport, defining problems, gathering data, sharing information, generating and supporting…

  15. Optimization for Service Routes of Pallet Service Center Based on the Pallet Pool Mode

    PubMed Central

    He, Shiwei; Song, Rui

    2016-01-01

    Service routes optimization (SRO) for a pallet service center should first meet customers' demand and then, through reasonable route organization, minimize the total vehicle driving distance. The routes optimization of a pallet service center is similar to the distribution problems of the vehicle routing problem (VRP) and the Chinese postman problem (CPP), but it has its own characteristics. Based on the relevant research results, the conditions determining the number of vehicles, one-way routes, loading constraints, and time windows are fully considered, and a chance-constrained programming model with stochastic constraints is constructed, taking the shortest total path of all vehicles for a delivering (recycling) operation as the objective. Given the characteristics of the model, a hybrid intelligent algorithm combining stochastic simulation, a neural network, and an immune clonal algorithm is designed to solve the model. Finally, the validity and rationality of the optimization model and algorithm are verified by a case study. PMID:27528865
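
    The stochastic-simulation component of such a hybrid algorithm can be illustrated by a Monte Carlo check of a single chance constraint; the normal demand model and all numbers below are assumptions for illustration, not the paper's model:

```python
import random

def chance_constraint_holds(demand_means, demand_stds, capacity,
                            alpha=0.95, n_samples=20000, seed=1):
    """Stochastic simulation of a chance constraint
    Pr(total demand on a route <= vehicle capacity) >= alpha,
    with customer demands modelled as independent normals (an assumption)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_samples):
        load = sum(rng.gauss(m, s) for m, s in zip(demand_means, demand_stds))
        if load <= capacity:
            ok += 1
    return ok / n_samples >= alpha
```

A metaheuristic would call such a check (or a trained surrogate, as with the neural network in the abstract) to accept or reject candidate routes.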

  16. Validation of the thermal challenge problem using Bayesian Belief Networks.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarland, John; Swiler, Laura Painton

    The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the model itself. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling is discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
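
    The Bayes' factor idea can be sketched as a ratio of Monte Carlo estimates of marginal likelihoods; this toy version with a Gaussian measurement model is illustrative only and is not the report's BBN:

```python
import math
import random

def log_marginal_likelihood(data, prior_sampler, sigma=1.0, n=5000, seed=0):
    """Monte Carlo estimate of log p(data | model): average the Gaussian
    measurement likelihood over draws of the model output from its prior."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        mu = prior_sampler(rng)
        ll = sum(-0.5 * ((y - mu) / sigma) ** 2
                 - math.log(sigma * math.sqrt(2 * math.pi)) for y in data)
        total += math.exp(ll)
    return math.log(total / n)

def bayes_factor(data, prior_a, prior_b, sigma=1.0):
    """Bayes factor comparing two prior predictive models for the same data;
    values well above 1 favour model A."""
    return math.exp(log_marginal_likelihood(data, prior_a, sigma)
                    - log_marginal_likelihood(data, prior_b, sigma))
```

In a validation setting, one of the two "models" would be the computational model's prior predictive distribution and the other a reference hypothesis.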

  17. Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model

    NASA Astrophysics Data System (ADS)

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-05-01

    Disease mapping comprises a set of statistical techniques that produce detailed maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measure the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area, which has the greatest uncertainty if the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using the WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method and can overcome the SMR problem when there are no observed bladder cancer cases in an area.
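
    The SMR, and a much-simplified log-normal-style shrinkage, can be sketched as follows; the shrinkage weights and the fixed prior variance are illustrative assumptions, not the WinBUGS model fitted in the study:

```python
import math

def smr(observed, expected):
    """Standardized Morbidity Ratio per area: observed / expected counts.
    Undefined (None here) when the expected count is not positive."""
    return [o / e if e > 0 else None for o, e in zip(observed, expected)]

def lognormal_smoothed(observed, expected, tau2=0.5):
    """Very simplified log-normal shrinkage (illustrative only): pull each
    log relative risk toward the global log-rate, with a fixed prior
    variance tau2; areas with zero observed cases still get a finite,
    positive estimate, unlike the raw SMR."""
    global_rate = sum(observed) / sum(expected)
    out = []
    for o, e in zip(observed, expected):
        var = 1.0 / (o + 0.5)  # crude variance proxy for a log count
        w = tau2 / (tau2 + var)
        local = math.log((o + 0.5) / e)
        out.append(math.exp(w * local + (1 - w) * math.log(global_rate)))
    return out
```

This is the mechanism behind the study's conclusion: the smoothed estimate stays finite and positive even where no cases were observed, whereas the raw SMR collapses to zero.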

  18. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures.

  19. An investigation of meaningful understanding and effectiveness of the implementation of Piagetian and Ausubelian theories in physics instruction

    NASA Astrophysics Data System (ADS)

    Williams, Karen Ann

    One section of college students (N = 25) enrolled in an algebra-based physics course was selected for a Piagetian-based learning cycle (LC) treatment while a second section (N = 25) studied in an Ausubelian-based meaningful verbal reception learning treatment (MVRL). This study examined the students' overall (concept + problem solving + mental model) meaningful understanding of force, density/Archimedes Principle, and heat. Also examined were students' meaningful understanding as measured by conceptual questions, problems, and mental models. In addition, students' learning orientations were examined. There were no significant posttest differences between the LC and MVRL groups for students' meaningful understanding or learning orientation. Piagetian and Ausubelian theories explain meaningful understanding for each treatment. Students from each treatment increased their meaningful understanding. However, neither group altered their learning orientation. The results of meaningful understanding as measured by conceptual questions, problem solving, and mental models were mixed. Differences were attributed to the weaknesses and strengths of each treatment. This research also examined four variables (treatment, reasoning ability, learning orientation, and prior knowledge) to find which best predicted students' overall meaningful understanding of physics concepts. None of these variables were significant predictors at the .05 level. However, when the same variables were used to predict students' specific understanding (i.e. concept, problem solving, or mental model understanding), the results were mixed. For forces and density/Archimedes Principle, prior knowledge and reasoning ability significantly predicted students' conceptual understanding. For heat, however, reasoning ability was the only significant predictor of concept understanding. Reasoning ability and treatment were significant predictors of students' problem solving for heat and forces.
For density/Archimedes Principle, treatment was the only significant predictor of students' problem solving. None of the variables were significant predictors of mental model understanding. This research suggested that Piaget and Ausubel used different terminology to describe learning yet these theories are similar. Further research is needed to validate this premise and validate the blending of the two theories.

  20. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    NASA Astrophysics Data System (ADS)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2018-03-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem in which each operation can be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search in promising regions of the search space and to improve the quality of the NGA final population. The efficiency of our approach is explained by the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and by applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.

  1. [In Process Citation].

    PubMed

    Akner, Gunnar; Järhult, Bengt

    2016-05-17

    The international trend »value-based care« (VbC) started with a book by Porter and Olmsted Teisberg in 2006 followed by Porter's 7-point proposal for value-based reform of health care in 2009. VbC may have relevance for delimited, procedure-related health problems with a foreseeable course of development. Most health problems in health care, however, do not involve such delimited problems. VbC is probably not suited as a steering model for chronic health conditions or for multiple health problems. VbC is being rapidly introduced to steer health care without scientific evidence.

  2. Gender and Grade-Level Comparisons in the Structure of Problem Behaviors among Adolescents

    ERIC Educational Resources Information Center

    Chun, Heejung; Mobley, Michael

    2010-01-01

    Based on Jessor's theory (1987) the comparability of a second-order problem behavior model (SPBM) was investigated across gender and grade-level among adolescents. In addition, gender and grade-level differences in problem behavior engagement were addressed examining latent mean differences. Using a sample of 6504 adolescents drawn from the…

  3. Measuring Problem Solving Skills in Plants vs. Zombies 2

    ERIC Educational Resources Information Center

    Shute, Valerie J.; Moore, Gregory R.; Wang, Lubin

    2015-01-01

    We are using stealth assessment, embedded in "Plants vs. Zombies 2," to measure middle-school students' problem solving skills. This project started by developing a problem solving competency model based on a thorough review of the literature. Next, we identified relevant in-game indicators that would provide evidence about students'…

  4. Predicting Correctness of Problem Solving from Low-Level Log Data in Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Cetintas, Suleyman; Si, Luo; Xin, Yan Ping; Hord, Casey

    2009-01-01

    This paper proposes a learning based method that can automatically determine how likely a student is to give a correct answer to a problem in an intelligent tutoring system. Only log files that record students' actions with the system are used to train the model, therefore the modeling process doesn't require expert knowledge for identifying…

  5. How Levels of Interactivity in Tutorials Affect Students' Learning of Modeling Transportation Problems in a Spreadsheet

    ERIC Educational Resources Information Center

    Seal, Kala Chand; Przasnyski, Zbigniew H.; Leon, Linda A.

    2010-01-01

    Do students learn to model OR/MS problems better by using computer-based interactive tutorials and, if so, does increased interactivity in the tutorials lead to better learning? In order to determine the effect of different levels of interactivity on student learning, we used screen capture technology to design interactive support materials for…

  6. Research on Bayes matting algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang

    2015-12-01

    Digital matting is a classical imaging problem: it aims at separating non-rectangular foreground objects from a background image and compositing them with a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. First, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added in order to suppress noise in the composited images. Finally, the result is further improved by refining the user's input. The algorithm is applied to matting tasks on classical images and the results are compared to the traditional Bayesian method. It is shown that our algorithm has better performance on details such as hair and eliminates noise well, and it is very effective for objects of interest with intricate boundaries.
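
    The core likelihood computation behind such an approach can be sketched in one dimension; real matting works with 3-D colour Gaussians and solves a MAP optimisation, so this conveys only the flavour of the method, not the paper's algorithm:

```python
import math

def gauss_pdf(x, mean, var):
    """Density of a 1-D Gaussian."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def gmm_pdf(x, weights, means, variances):
    """Density of a 1-D Gaussian mixture (grayscale stand-in for the
    colour mixtures used in matting)."""
    return sum(w * gauss_pdf(x, m, v)
               for w, m, v in zip(weights, means, variances))

def alpha_estimate(pixel, fg_gmm, bg_gmm):
    """Crude per-pixel alpha from the relative foreground/background
    mixture likelihoods (illustrative, not a full MAP solve)."""
    pf = gmm_pdf(pixel, *fg_gmm)
    pb = gmm_pdf(pixel, *bg_gmm)
    return pf / (pf + pb)
```

The mixture replaces the single Gaussian of classical Bayesian matting, which is what lets the model represent multi-modal foreground and background colour distributions.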

  7. Conceptualizing a model: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force--2.

    PubMed

    Roberts, Mark; Russell, Louise B; Paltiel, A David; Chambers, Michael; McEwan, Phil; Krahn, Murray

    2012-01-01

    The appropriate development of a model begins with understanding the problem that is being represented. The aim of this article was to provide a series of consensus-based best practices regarding the process of model conceptualization. For the purpose of this series of articles, we consider the development of models whose purpose is to inform medical decisions and health-related resource allocation questions. We specifically divide the conceptualization process into two distinct components: the conceptualization of the problem, which converts knowledge of the health care process or decision into a representation of the problem, followed by the conceptualization of the model itself, which matches the attributes and characteristics of a particular modeling type with the needs of the problem being represented. Recommendations are made regarding the structure of the modeling team, agreement on the statement of the problem, the structure, perspective, and target population of the model, and the interventions and outcomes represented. Best practices relating to the specific characteristics of model structure and which characteristics of the problem might be most easily represented in a specific modeling method are presented. Each section contains a number of recommendations that were iterated among the authors, as well as among the wider modeling taskforce, jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. Reliable Facility Location Problem with Facility Protection

    PubMed Central

    Tang, Luohao; Zhu, Cheng; Lin, Zaili; Shi, Jianmai; Zhang, Weiming

    2016-01-01

    This paper studies a reliable facility location problem with facility protection that aims to hedge against random facility disruptions by both strategically protecting some facilities and using backup facilities for the demands. An Integer Programming model is proposed for this problem, in which the failure probabilities of facilities are site-specific. A solution approach combining Lagrangian Relaxation and local search is proposed and is demonstrated to be both effective and efficient based on computational experiments on random numerical examples with 49, 88, 150 and 263 nodes in the network. A real case study for a 100-city network in Hunan province, China, is presented, based on which the properties of the model are discussed and some managerial insights are analyzed. PMID:27583542

  9. Developing learning material of introduction to operation research course based on problem-based learning

    NASA Astrophysics Data System (ADS)

    Yerizon; Jazwinarti; Yarman

    2018-01-01

    Students experience difficulties in the course Introduction to Operational Research (PRO). The purpose of this study is to analyze students' requirements in developing PRO lecture materials based on Problem-Based Learning that are valid, practical, and effective. The lecture materials are developed based on Plomp's model, whose development process consists of 3 phases: front-end analysis/preliminary research, a development/prototype phase, and an assessment phase. The preliminary analysis was obtained by observation and interviews. The research found that students need student worksheets (LKM) for several reasons: 1) no LKM is available, 2) the presentation of the subject is not yet based on real problems, and 3) students experience difficulties with the current learning sources.

  10. Three hybridization models based on local search scheme for job shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Balbi Fraga, Tatiana

    2015-05-01

    This work presents three different hybridization models based on the general scheme of local search heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches may already have been presented in the literature in other contexts, in this work these models are applied to analyze the solution of the job shop scheduling problem with the heuristics Tabu Search and Particle Swarm Optimization. Besides, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to obtain better results than those found by Tabu Search alone.
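
    In the abstract, the Hybrid Successive Application scheme amounts to cycling improvement heuristics over a shared incumbent solution. A generic sketch, in which the toy heuristics stand in for Tabu Search and Particle Swarm Optimization:

```python
def hybrid_successive(solution, heuristics, cost, rounds=3):
    """Hybrid Successive Application (sketch): repeatedly apply each
    improvement heuristic in turn, keeping the best solution so far.
    `heuristics` are callables mapping a solution to a candidate."""
    best, best_cost = solution, cost(solution)
    for _ in range(rounds):
        for h in heuristics:
            cand = h(best)
            c = cost(cand)
            if c < best_cost:
                best, best_cost = cand, c
    return best, best_cost
```

The other two models in the paper differ in where the second heuristic is invoked (inside the neighborhood exploration rather than after it), but the shared-incumbent pattern is the same.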

  11. An Advanced One-Dimensional Finite Element Model for Incompressible Thermally Expandable Flow

    DOE PAGES

    Hu, Rui

    2017-03-27

    Here, this paper provides an overview of a new one-dimensional finite element flow model for incompressible but thermally expandable flow. The flow model was developed for use in system analysis tools for whole-plant safety analysis of sodium fast reactors. Although the pressure-based formulation was implemented, the use of integral equations in the conservative form ensured the conservation laws of the fluid. A stabilization scheme based on streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin formulations is also introduced. The flow model and its implementation have been verified by many test problems, including density wave propagation, steep gradient problems, discharging between tanks, and the conjugate heat transfer in a heat exchanger.

  12. An Advanced One-Dimensional Finite Element Model for Incompressible Thermally Expandable Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui

    Here, this paper provides an overview of a new one-dimensional finite element flow model for incompressible but thermally expandable flow. The flow model was developed for use in system analysis tools for whole-plant safety analysis of sodium fast reactors. Although the pressure-based formulation was implemented, the use of integral equations in the conservative form ensured the conservation laws of the fluid. A stabilization scheme based on streamline-upwind/Petrov-Galerkin and pressure-stabilizing/Petrov-Galerkin formulations is also introduced. The flow model and its implementation have been verified by many test problems, including density wave propagation, steep gradient problems, discharging between tanks, and the conjugate heat transfer in a heat exchanger.

  13. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    Conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby transforming the optimization problem into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.
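
    The surrogate idea behind a Chebyshev-polynomial method can be sketched by interpolation at Chebyshev nodes, which replaces an expensive response evaluation with a cheap polynomial inside the optimization loop. A generic sketch, not the paper's CFM/CIM:

```python
import math

def chebyshev_fit(f, degree, a=-1.0, b=1.0):
    """Approximate f on [a, b] by interpolation at Chebyshev nodes,
    returning a cheap callable surrogate."""
    n = degree + 1
    nodes = [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
    xs = [0.5 * (b - a) * t + 0.5 * (b + a) for t in nodes]
    ys = [f(x) for x in xs]
    # Discrete cosine transform gives the Chebyshev coefficients.
    coeffs = []
    for j in range(n):
        s = sum(ys[k] * math.cos(j * (2 * k + 1) * math.pi / (2 * n))
                for k in range(n))
        coeffs.append((2.0 / n) * s)
    coeffs[0] /= 2.0

    def approx(x):
        t = (2 * x - (a + b)) / (b - a)  # map back to [-1, 1]
        t = max(-1.0, min(1.0, t))
        return sum(c * math.cos(j * math.acos(t))
                   for j, c in enumerate(coeffs))
    return approx
```

In the paper's setting, `f` would be the fuzzy system response at a given design point, and the surrogate is what makes the repeated inner-loop evaluations affordable.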

  14. Simulation of a class of hazardous situations in the ICS «INM RAS - Baltic Sea»

    NASA Astrophysics Data System (ADS)

    Zakharova, Natalia; Agoshkov, Valery; Aseev, Nikita; Parmuzin, Eugene; Sheloput, Tateana; Shutyaev, Victor

    2017-04-01

    Development of Informational Computational Systems (ICS) for data assimilation procedures is a multidisciplinary problem. To study and solve these problems, one needs to apply modern results from different disciplines and recent developments in mathematical modeling, the theory of adjoint equations and optimal control, inverse problems, numerical methods theory, numerical algebra, scientific computing and processing of satellite data. In this work, the results of the ICS development for the PC-ICS "INM RAS - Baltic Sea" are presented. We discuss practical problems studied by the ICS. The System includes a numerical model of the Baltic Sea thermodynamics, the new oil spill model describing the propagation of a slick at the Sea surface (Agoshkov, Aseev et al., 2014) and the optimal ship route calculating block (Agoshkov, Zayachkovsky et al., 2014). The ICS is based on the INMOM numerical model of the Baltic Sea thermodynamics (Zalesny et al., 2013). It is possible to calculate the main hydrodynamic parameters (temperature, salinity, velocities, sea level) using the user-friendly interface of the ICS. The System includes data assimilation procedures (Agoshkov, 2003; Parmuzin, Agoshkov, 2012), and one can use the block of variational assimilation of the sea surface temperature in order to obtain the main hydrodynamic parameters. The main capabilities of the ICS and several numerical experiments are presented in the work. The problem of risk control means determining the optimal quantity of resources necessary to decrease the risk to an acceptable value. The mass of the oil slick is chosen as the control function. For each realization of the random variable, a quadratic "functional of cost" is introduced. It comprises cleaning costs and the deviation of the oil-pollution damage from its acceptable value. The problem of minimization of this functional is solved using the methods of optimal control and the theory of adjoint equations.
The solution of this problem is found explicitly. The study was supported by the Russian Foundation for Basic Research (project 16-31-00510) and by the Russian Science Foundation (project No. 14-11-00609). References: V. I. Agoshkov, Methods of Optimal Control and Adjoint Equations in Problems of Mathematical Physics. INM RAS, Moscow, 2003 (in Russian). V. B. Zalesny, A. V. Gusev, V. O. Ivchenko, R. Tamsalu, and R. Aps, Numerical model of the Baltic Sea circulation. Russ. J. Numer. Anal. Math. Modelling 28 (2013), No. 1, 85-100. V. I. Agoshkov, A. O. Zayachkovskiy, R. Aps, P. Kujala, and J. Rytkönen, Risk theory based solution to the problem of optimal vessel route. Russ. J. Numer. Anal. Math. Modelling 29 (2014), No. 2, 69-78. V. Agoshkov, N. Aseev, R. Aps, P. Kujala, J. Rytkönen, and V. Zalesny, The problem of control of oil pollution risk in the Baltic Sea. Russ. J. Numer. Anal. Math. Modelling 29 (2014), No. 2, 93-105. E. I. Parmuzin and V. I. Agoshkov, Numerical solution of the variational assimilation problem for sea surface temperature in the model of the Black Sea dynamics. Russ. J. Numer. Anal. Math. Modelling 27 (2012), No. 1, 69-94. O. Liungman and J. Mattsson, Scientific Documentation of Seatrack Web: physical processes, algorithms and references, 2011.
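
    The closed-form character of the minimization can be seen in a toy version of the quadratic "functional of cost". Assuming, purely for illustration, that damage decreases linearly with the cleaning effort r, the functional J(r) = c*r^2 + (d0 - k*r - d_acc)^2 has an explicit minimizer obtained from dJ/dr = 0; the helper name and the linear damage model are hypothetical:

```python
def optimal_cleanup_effort(d0, d_acc, k, c):
    # Toy quadratic "functional of cost": J(r) = c*r**2 + (d0 - k*r - d_acc)**2,
    # where r is the cleaning effort, c weighs cleaning costs, and the
    # pollution damage d0 - k*r is compared with the acceptable value d_acc.
    # Setting dJ/dr = 0 gives r*(c + k*k) = k*(d0 - d_acc), hence:
    return k * (d0 - d_acc) / (c + k * k)
```

    The minimizer is linear in the damage excess d0 - d_acc, which is what makes the solution of the full problem expressible in closed form.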

  15. Criteria for assessing problem solving and decision making in complex environments

    NASA Technical Reports Server (NTRS)

    Orasanu, Judith

    1993-01-01

    Training crews to cope with unanticipated problems in high-risk, high-stress environments requires models of effective problem solving and decision making. Existing decision theories use the criteria of logical consistency and mathematical optimality to evaluate decision quality. While these approaches are useful under some circumstances, the assumptions underlying these models frequently are not met in dynamic time-pressured operational environments. Also, applying formal decision models is both labor and time intensive, a luxury often lacking in operational environments. Alternate approaches and criteria are needed. Given that operational problem solving and decision making are embedded in ongoing tasks, evaluation criteria must address the relation between those activities and satisfaction of broader task goals. Effectiveness and efficiency become relevant for judging reasoning performance in operational environments. New questions must be addressed: What is the relation between the quality of decisions and overall performance by crews engaged in critical high risk tasks? Are different strategies most effective for different types of decisions? How can various decision types be characterized? A preliminary model of decision types found in air transport environments will be described along with a preliminary performance model based on an analysis of 30 flight crews. The performance analysis examined behaviors that distinguish more and less effective crews (based on performance errors). Implications for training and system design will be discussed.

  16. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
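
    The offline surrogate-construction step can be sketched in one dimension. The paper uses Christoffel least squares with a generalized polynomial chaos basis; the minimal stand-in below fits plain monomials by ordinary least squares at points covering the prior's support, and all names are illustrative:

```python
import math

def fit_polynomial_surrogate(forward, samples, degree=3):
    # Offline surrogate construction (a sketch): least-squares fit of a
    # polynomial to forward-model evaluations at points drawn from the
    # prior's support; the resulting cheap surrogate replaces the forward
    # model inside MCMC.
    m = degree + 1
    y = [forward(x) for x in samples]
    # normal equations A c = b for the monomial basis 1, x, ..., x**degree
    A = [[sum(x ** (i + j) for x in samples) for j in range(m)] for i in range(m)]
    b = [sum(yk * x ** i for x, yk in zip(samples, y)) for i in range(m)]
    # Gaussian elimination with partial pivoting (tiny dense system)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for k in range(col, m + 1):
                M[r][k] -= f * M[col][k]
    c = [0.0] * m
    for r in range(m - 1, -1, -1):
        c[r] = (M[r][m] - sum(M[r][j] * c[j] for j in range(r + 1, m))) / M[r][r]
    return lambda x: sum(cj * x ** j for j, cj in enumerate(c))
```

    A cubic fitted to sin over [-1, 1] already approximates the forward map to a few parts in a thousand, which is the accuracy regime where the paper's "twice as fast" KL convergence result becomes relevant.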

  17. A Self-Organizing State-Space-Model Approach for Parameter Estimation in Hodgkin-Huxley-Type Models of Single Neurons

    PubMed Central

    Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng

    2012-01-01

    Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent), in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and the model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply to compartmental models and multiple data sets. 
Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632
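
    A minimal sketch of the augmented-state idea, under a hypothetical scalar model x[t+1] = a*x[t] + noise observed with noise: each particle carries the unknown parameter a alongside the hidden state, so a bootstrap particle filter infers both jointly. The model, noise levels, and function name are assumptions for illustration, not the Hodgkin-Huxley setting of the paper:

```python
import math
import random

def estimate_parameter(ys, n_particles=3000, seed=1):
    # Sketch of the self-organizing state-space idea (after Kitagawa):
    # augment the hidden state x with the unknown parameter a, so the
    # bootstrap particle filter infers both jointly. Assumed model:
    #   x[t+1] = a*x[t] + N(0, 0.1),   y[t] = x[t] + N(0, 0.2),  x[0] = 5.
    rng = random.Random(seed)
    parts = [(rng.uniform(0.0, 1.0), 5.0) for _ in range(n_particles)]  # (a, x)
    for y in ys:
        # propagate each particle through the state equation
        parts = [(a, a * x + rng.gauss(0.0, 0.1)) for a, x in parts]
        # weight by the observation likelihood and resample
        weights = [math.exp(-0.5 * ((y - x) / 0.2) ** 2) for _, x in parts]
        parts = rng.choices(parts, weights=weights, k=n_particles)
    # parameter estimate = posterior mean of the augmented component
    return sum(a for a, _ in parts) / n_particles
```

    Particles whose a disagrees with the observed trajectory are killed off in the first few resampling steps, so the surviving cloud concentrates around the true parameter without any explicit cost function.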

  18. Semicompeting risks in aging research: methods, issues and needs

    PubMed Central

    Varadhan, Ravi; Xue, Qian-Li; Bandeen-Roche, Karen

    2015-01-01

    A semicompeting risks problem involves two types of events: a nonterminal and a terminal event (death). Typically, the nonterminal event is the focus of the study, but the terminal event can preclude the occurrence of the nonterminal event. Semicompeting risks are ubiquitous in studies of aging. Examples of semicompeting risk dyads include: dementia and death, frailty syndrome and death, disability and death, and nursing home placement and death. Semicompeting risk models can be divided into two broad classes: models based only on observable quantities (class O) and those based on potential (latent) failure times (class L). The classical illness-death model belongs to class O. This model is a special case of multistate models, which have been an active area of methodology development. During the past decade and a half, there has also been a flurry of methodological activity on semicompeting risks based on latent failure times (class L models). These advances notwithstanding, the semicompeting risks methodology has not penetrated biomedical research, in general, and gerontological research, in particular. Some possible reasons for this lack of uptake are: the relative novelty and sophistication of the methods, the difficulty of overcoming conceptual problems associated with potential failure time models, the paucity of expository articles aimed at educating practitioners, and the non-availability of readily usable software. The main goals of this review article are: (i) to describe the major types of semicompeting risks problems arising in aging research, (ii) to provide a brief survey of the semicompeting risks methods, (iii) to suggest appropriate methods for addressing the problems in aging research, (iv) to highlight areas where more work is needed, and (v) to suggest ways to facilitate the uptake of the semicompeting risks methodology by the broader biomedical research community. PMID:24729136
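
    The way the terminal event preempts the nonterminal one can be checked with a short simulation. This is a deliberately simplified constant-hazard illness-death sketch, not a method from the review; the function name and hazard values are illustrative:

```python
import random

def p_nonterminal_first(lam01, lam02, n=40000, seed=0):
    # Classical illness-death (class O) sketch with constant hazards:
    # healthy -> ill at rate lam01, healthy -> dead at rate lam02.
    # With competing exponential clocks, the chance that the nonterminal
    # event is observed before death is lam01 / (lam01 + lam02);
    # the Monte Carlo estimate below checks that identity.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.expovariate(lam01) < rng.expovariate(lam02))
    return hits / n
```

    With equal hazards half the subjects die before the nonterminal event, which is exactly the truncation that makes naive analyses of the nonterminal endpoint biased.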

  19. Optimal allocation model of construction land based on two-level system optimization theory

    NASA Astrophysics Data System (ADS)

    Liu, Min; Liu, Yanfang; Xia, Yuping; Lei, Qihong

    2007-06-01

    The allocation of construction land is an important task in land-use planning. Whether the implementation of planning decisions succeeds usually depends on a reasonable and scientific distribution method. Given the structure of the land-use planning system and planning process in China, its essence is a multi-level, multi-objective decision problem. Planning quantity decomposition is a two-level system optimization problem: an optimal resource allocation decision problem between a decision-maker at the upper level and a number of parallel decision-makers at the lower level. According to the characteristics of the decision-making process of a two-level decision-making system, this paper develops an optimal allocation model of construction land based on two-level linear programming. In order to verify the rationality and the validity of our model, Baoan district of Shenzhen City has been taken as a test case. With the assistance of the allocation model, construction land is allocated to ten townships of Baoan district. The result obtained from our model is compared to that of the traditional method, and the results show that our model is reasonable and usable. Finally, the paper points out the shortcomings of the model and directions for further research.
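
    A deliberately simplified stand-in for the two-level structure: the upper level fixes the total construction-land quota, and for a linear benefit objective the lower-level optimum is to serve townships in order of unit benefit until the quota is exhausted. The function and data are hypothetical, not the paper's model:

```python
def allocate(total, demands, benefits):
    # Upper level: a fixed total quota. Lower level: townships with
    # individual demand caps and unit benefits. For a linear objective this
    # greedy fill in decreasing-benefit order is the LP optimum.
    order = sorted(range(len(demands)), key=lambda i: -benefits[i])
    alloc = [0.0] * len(demands)
    left = total
    for i in order:
        take = min(demands[i], left)
        alloc[i] = take
        left -= take
    return alloc
```

    A genuine two-level program would let the upper level anticipate the lower-level reactions; this sketch only shows the single lower-level response for one quota.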

  20. Internal Model-Based Robust Tracking Control Design for the MEMS Electromagnetic Micromirror.

    PubMed

    Tan, Jiazheng; Sun, Weijie; Yeow, John T W

    2017-05-26

    The micromirror based on micro-electro-mechanical systems (MEMS) technology is widely employed in different areas, such as scanning, imaging and optical switching. This paper studies the MEMS electromagnetic micromirror for scanning or imaging application. In these application scenarios, the micromirror is required to track the command sinusoidal signal, which can be converted to an output regulation problem theoretically. In this paper, based on the internal model principle, the output regulation problem is solved by designing a robust controller that is able to force the micromirror to track the command signal accurately. The proposed controller relies little on the accuracy of the model. Further, the proposed controller is implemented, and its effectiveness is examined by experiments. The experimental results demonstrate that the performance of the proposed controller is satisfying.

  1. Internal Model-Based Robust Tracking Control Design for the MEMS Electromagnetic Micromirror

    PubMed Central

    Tan, Jiazheng; Sun, Weijie; Yeow, John T. W.

    2017-01-01

    The micromirror based on micro-electro-mechanical systems (MEMS) technology is widely employed in different areas, such as scanning, imaging and optical switching. This paper studies the MEMS electromagnetic micromirror for scanning or imaging application. In these application scenarios, the micromirror is required to track the command sinusoidal signal, which can be converted to an output regulation problem theoretically. In this paper, based on the internal model principle, the output regulation problem is solved by designing a robust controller that is able to force the micromirror to track the command signal accurately. The proposed controller relies little on the accuracy of the model. Further, the proposed controller is implemented, and its effectiveness is examined by experiments. The experimental results demonstrate that the performance of the proposed controller is satisfying. PMID:28587105

  2. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule-based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.
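
    The learning element can be sketched independently of the fuzzy controller. The minimal real-coded GA below (elitist truncation selection, blend crossover, Gaussian mutation) minimizes an arbitrary scalar cost; in the adaptive-control setting that cost would score an FLC parameter's closed-loop response. The function name and operator choices are illustrative, not the Bureau of Mines implementation:

```python
import random

def genetic_search(fitness, bounds, pop=30, gens=40, seed=0):
    # Minimal real-coded GA over a single tunable parameter: keep the
    # better half of the population (elitism), breed children as blends of
    # two elites plus Gaussian mutation, and return the best individual.
    rng = random.Random(seed)
    lo, hi = bounds
    popn = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(popn, key=fitness)[:pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        popn = elite + children
    return min(popn, key=fitness)
```

    Replacing the toy cost with a simulation of the pH loop under a candidate membership-function parameter recovers the GA-augmented-FLC pattern described above.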

  3. Bi-Objective Modelling for Hazardous Materials Road–Rail Multimodal Routing Problem with Railway Schedule-Based Space–Time Constraints

    PubMed Central

    Sun, Yan; Lang, Maoxiang; Wang, Danzhu

    2016-01-01

    The transportation of hazardous materials is always accompanied by considerable risk that will impact public and environmental security. As an efficient and reliable transportation organization, a multimodal service should participate in the transportation of hazardous materials. In this study, we focus on transporting hazardous materials through the multimodal service network and explore the hazardous materials multimodal routing problem from the operational level of network planning. To formulate this problem more practicably, the optimization objectives are set as minimizing the total generalized costs of transporting the hazardous materials and the social risk along the planned routes. Meanwhile, the following formulation characteristics will be comprehensively modelled: (1) specific customer demands; (2) multiple hazardous material flows; (3) capacitated schedule-based rail service and uncapacitated time-flexible road service; and (4) environmental risk constraint. A bi-objective mixed integer nonlinear programming model is first built to formulate the routing problem that combines the formulation characteristics above. Then linear reformulations are developed to linearize and improve the initial model so that it can be effectively solved by exact solution algorithms on standard mathematical programming software. By utilizing the normalized weighted sum method, we can generate the Pareto solutions to the bi-objective optimization problem for a specific case. Finally, a large-scale empirical case study from the Beijing–Tianjin–Hebei Region in China is presented to demonstrate the feasibility of the proposed methods in dealing with the practical problem. Various scenarios are also discussed in the case study. PMID:27483294
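
    The normalized weighted sum method mentioned above can be sketched for a finite candidate set: scale each objective to [0, 1] over the candidates, scalarize with a sweep of weights, and collect the minimizers as (weakly) Pareto-optimal solutions. A toy sketch with hypothetical route costs, not the paper's MINLP model:

```python
def weighted_sum_pareto(solutions, weights_list):
    # Normalized weighted-sum scalarization of a bi-objective minimization:
    # each objective is rescaled to [0, 1] over the candidate set, then the
    # scalarized cost is minimized for every weight; the set of minimizers
    # forms a (weakly) Pareto-optimal front.
    f1 = [s[0] for s in solutions]
    f2 = [s[1] for s in solutions]

    def norm(v, vs):
        lo, hi = min(vs), max(vs)
        return (v - lo) / (hi - lo) if hi > lo else 0.0

    front = set()
    for w in weights_list:
        best = min(solutions,
                   key=lambda s: w * norm(s[0], f1) + (1 - w) * norm(s[1], f2))
        front.add(best)
    return sorted(front)
```

    Dominated candidates (worse in both generalized cost and social risk) are never selected for any weight, so they drop out of the front automatically.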

  4. An annealed chaotic maximum neural network for bipartite subgraph problem.

    PubMed

    Wang, Jiahai; Tang, Zheng; Wang, Ronglong

    2004-04-01

    In this paper, based on the maximum neural network, we propose a new parallel algorithm that includes a transient chaotic neurodynamics to help the network escape from local minima in the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is an NP-complete problem, is to remove the minimum number of edges in a given graph such that the remaining graph is a bipartite graph. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without the burden of parameter tuning. However, the model has a tendency to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is then fundamentally governed by the gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and the chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem that are superior to those of the best existing parallel algorithms.
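
    A loose sketch of the winner-take-all update with a decaying chaotic perturbation (not the authors' exact annealed network): each vertex joins the side containing fewer of its neighbours, and a logistic-map bias lets early sweeps escape poor configurations before the dynamics settles into plain descent. Names and constants are illustrative:

```python
def bipartite_subgraph(edges, n, steps=50):
    # Winner-take-all ("maximum neuron") sweep for the 2-colouring behind
    # the bipartite subgraph problem: a vertex joins the side holding fewer
    # of its neighbours. A logistic-map bias with decaying amplitude stands
    # in for the transient chaotic self-feedback; as it dies out, the
    # update reduces to pure greedy descent. Edges left monochromatic are
    # the ones that must be removed.
    side = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    z = 0.7
    for t in range(steps):
        z = 3.9 * z * (1.0 - z)          # chaotic logistic map in (0, 1)
        bias = 0.5 * (0.9 ** t) * (z - 0.5)  # decaying perturbation
        for u in range(n):
            same = [sum(1 for w in adj[u] if side[w] == s) for s in (0, 1)]
            side[u] = 0 if same[0] + bias <= same[1] else 1
    removed = sum(1 for u, v in edges if side[u] == side[v])
    return side, removed
```

    On a triangle, any 2-colouring leaves exactly one monochromatic edge, and the sweep finds such an optimal partition.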

  5. Dynamic Emulation Modelling (DEMo) of large physically-based environmental models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2012-12-01

    In environmental modelling, large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, at a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be solved satisfactorily. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications; however, most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between the input and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships while preserving the state-space representation typical of process-based models, which is particularly effective in some applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the credibility of the model to stakeholders and decision-makers. Numerical results from the application of the approach to the reduction of 3D coupled hydrodynamic-ecological models in several real world case studies, including Marina Reservoir (Singapore) and Googong Reservoir (Australia), are illustrated.
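
    The state-space-preserving idea can be sketched at its smallest scale: identify a one-state linear emulator xhat[t+1] = a*xhat[t] + b*u[t] from trajectories of the original model by least squares, rather than fitting a static input-output map. The helper below is a hypothetical illustration assuming scalar data, not the 3D reduction used in the case studies:

```python
def fit_dynamic_emulator(xs, us):
    # Identify a one-state linear emulator  xhat[t+1] = a*xhat[t] + b*u[t]
    # from a trajectory of the original model (states xs, inputs us),
    # preserving the state-space form of the emulator. Closed-form 2x2
    # least squares on the normal equations.
    Sxx = sum(x * x for x in xs[:-1])
    Suu = sum(u * u for u in us)
    Sxu = sum(x * u for x, u in zip(xs[:-1], us))
    Sxy = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
    Suy = sum(u * y for u, y in zip(us, xs[1:]))
    det = Sxx * Suu - Sxu * Sxu
    a = (Sxy * Suu - Suy * Sxu) / det
    b = (Suy * Sxx - Sxy * Sxu) / det
    return a, b
```

    Because the emulator keeps a state, it can be rolled forward inside an optimization or assimilation loop exactly like the original model, only much faster.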

  6. Location-allocation models and new solution methodologies in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Dinu, S.; Ciucur, V.

    2016-08-01

    When designing a telecommunications network topology, three types of interdependent decisions are combined: location, allocation and routing, which are expressed by the following design considerations: how many interconnection devices (consolidation points/concentrators) should be used and where they should be located; how to allocate terminal nodes to concentrators; and how the voice, video or data traffic should be routed and what transmission links (capacitated or not) should be built into the network. Including these three components of the decision in a single model generates a problem whose complexity makes it difficult to solve. A first method to address the overall problem is the sequential one, whereby the first step deals with the location-allocation problem and, based on this solution, the subsequent sub-problem (routing the network traffic) is solved. The issue of location and allocation in a telecommunications network, called the capacitated concentrator location-allocation (CCLA) problem, is based on one of the general location models on a network in which clients/demand nodes are the terminals and facilities are the concentrators. As in a location model, each client node has a demand traffic, which must be served, and the facilities can serve these demands within their capacity limit. In this study, the CCLA problem is modeled as a single-source capacitated location-allocation model whose optimization objective is to determine the minimum network cost consisting of fixed costs for establishing the locations of concentrators, costs for operating concentrators and costs for allocating terminals to concentrators. The problem is known as a difficult combinatorial optimization problem for which powerful algorithms are required. Our approach proposes a Fuzzy Genetic Algorithm combined with a local search procedure to calculate the optimal values of the location and allocation variables.
To confirm the efficiency of the proposed algorithm with respect to the quality of solutions, significant size test problems were considered: up to 100 terminal nodes and 50 concentrators on a 100 × 100 square grid. The performance of this hybrid intelligent algorithm was evaluated by measuring the quality of its solutions with respect to the following statistics: the standard deviation and the ratio of the best solution obtained.
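
    A greedy construction often used to seed such metaheuristics can be sketched as follows: assign each terminal to the cheapest concentrator that still has capacity. This is an illustrative baseline with hypothetical names, not the paper's fuzzy genetic algorithm:

```python
def assign_terminals(terminals, concentrators, cap, cost):
    # Greedy single-source assignment for a CCLA-style instance: each
    # terminal goes to the cheapest concentrator with remaining capacity
    # (capacity counted in terminals here, and assumed sufficient overall).
    # A GA or local search would then refine this construction.
    load = {c: 0 for c in concentrators}
    assign = {}
    for t in terminals:
        best = min((c for c in concentrators if load[c] < cap),
                   key=lambda c: cost(t, c))
        assign[t] = best
        load[best] += 1
    return assign
```

    With positions on a line and distance as the cost, terminals cluster around their nearest concentrator until its capacity is reached.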

  7. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was conducted with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
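
    The graph-based implementation can be illustrated for the simplest LTL fragment: an invariant G p fails exactly when some reachable state violates p, so checking it reduces to breadth-first reachability with counterexample recovery. Full LTL needs the cross-product with an automaton for the property, which this sketch omits; the function name is illustrative:

```python
from collections import deque

def check_invariant(trans, init, holds):
    # Safety checking as reachability: the invariant "G holds" fails iff
    # some reachable state violates `holds`. Returns a shortest
    # counterexample path from `init`, or None if the invariant holds.
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if not holds(s):
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in trans.get(s, []):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None
```

    The returned path is exactly the kind of counterexample trace a model checker reports for a violated safety property.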

  8. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems solved under the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  9. Promoting students’ mathematical problem-solving skills through 7e learning cycle and hypnoteaching model

    NASA Astrophysics Data System (ADS)

    Saleh, H.; Suryadi, D.; Dahlan, J. A.

    2018-01-01

    The aim of this research was to find out whether the 7E learning cycle under the hypnoteaching model can enhance students' mathematical problem-solving skill. This research was a quasi-experimental study with a pretest-posttest control group design. Two groups of samples were used in the study. The experimental group was taught with the 7E learning cycle under the hypnoteaching model, while the control group was taught with the conventional model. The population of this study was the students of a mathematics education program at one university in Tangerang. The statistical analyses used to test the hypotheses of this study were the t-test and the Mann-Whitney U test. The results of this study show that: (1) the mathematical problem-solving achievement of the students taught with the 7E learning cycle under the hypnoteaching model is higher than that of the students taught with the conventional model; (2) there are differences in the students' enhancement of mathematical problem-solving skill based on their prior mathematical knowledge (PMK) category (high, middle, and low).

  10. Effect of Internet-Based Cognitive Apprenticeship Model (i-CAM) on Statistics Learning among Postgraduate Students.

    PubMed

    Saadati, Farzaneh; Ahmad Tarmizi, Rohani; Mohd Ayub, Ahmad Fauzi; Abu Bakar, Kamariah

    2015-01-01

    Because students' ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding the pedagogical characteristics of learning within an e-learning system is 'value added' because it facilitates the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM could significantly promote students' problem-solving performance at the end of each phase. In addition, the differences in students' test scores were statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirm the considerable value of i-CAM in the improvement of statistics learning for non-specialized postgraduate students.

  11. Algorithms for sum-of-squares-based stability analysis and control design of uncertain nonlinear systems

    NASA Astrophysics Data System (ADS)

    Ataei-Esfahani, Armin

    In this dissertation, we present algorithmic procedures for sum-of-squares based stability analysis and control design for uncertain nonlinear systems. In particular, we consider the case of robust aircraft control design for a hypersonic aircraft model subject to parametric uncertainties in its aerodynamic coefficients. In recent years, the Sum-of-Squares (SOS) method has attracted increasing interest as a new approach for stability analysis and controller design of nonlinear dynamic systems. Through the application of the SOS method, one can describe a stability analysis or control design problem as a convex optimization problem, which can efficiently be solved using Semidefinite Programming (SDP) solvers. For nominal systems, the SOS method can provide a reliable and fast approach for stability analysis and control design for low-order systems defined over the space of relatively low-degree polynomials. However, the SOS method is not well suited to control problems involving uncertain systems, especially those with a relatively high number of uncertainties or those with a non-affine uncertainty structure. In order to avoid issues relating to the increased complexity of SOS problems for uncertain systems, we present an algorithm that can be used to transform an SOS problem with uncertainties into an LMI problem with uncertainties. A new Probabilistic Ellipsoid Algorithm (PEA) is given to solve the robust LMI problem, which can guarantee the feasibility of a given solution candidate with an a priori fixed probability of violation and with a fixed confidence level. We also introduce two approaches to approximate the robust region of attraction (RROA) for uncertain nonlinear systems with non-affine dependence on uncertainties. 
The first approach is based on a combination of PEA and SOS method and searches for a common Lyapunov function, while the second approach is based on the generalized Polynomial Chaos (gPC) expansion theorem combined with the SOS method and searches for parameter-dependent Lyapunov functions. The control design problem is investigated through a case study of a hypersonic aircraft model with parametric uncertainties. Through time-scale decomposition and a series of function approximations, the complexity of the aircraft model is reduced to fall within the capability of SDP solvers. The control design problem is then formulated as a convex problem using the dual of the Lyapunov theorem. A nonlinear robust controller is searched using the combined PEA/SOS method. The response of the uncertain aircraft model is evaluated for two sets of pilot commands. As the simulation results show, the aircraft remains stable under up to 50% uncertainty in aerodynamic coefficients and can follow the pilot commands.
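
    The SOS-to-SDP reduction mentioned above rests on a simple fact: a polynomial p(x) is a sum of squares if and only if p(x) = z(x)^T Q z(x) for some positive semidefinite Gram matrix Q over a monomial basis z(x), and an SDP solver searches over the affine family of admissible Q. A minimal numpy sketch (illustrative only, not the dissertation's algorithm) verifies a hand-constructed Gram certificate for p(x) = x^4 + 2x^2 + 1:

```python
import numpy as np

# Monomial basis z(x) = [1, x, x^2]; p(x) = x^4 + 2x^2 + 1.
# p is SOS iff p(x) = z(x)^T Q z(x) for some symmetric PSD Q.
# Coefficient matching gives the affine constraints
#   Q[0,0] = 1,  2*Q[0,1] = 0,  2*Q[0,2] + Q[1,1] = 2,
#   2*Q[1,2] = 0,  Q[2,2] = 1;
# an SDP solver would search this affine family for a PSD member.
# One feasible choice (Q[0,2] = 1, Q[1,1] = 0):
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# PSD certificate: all eigenvalues non-negative (up to round-off).
eigvals = np.linalg.eigvalsh(Q)
assert eigvals.min() > -1e-12, "Q is not PSD"

# Sanity check: z^T Q z reproduces p on sample points.
for x in np.linspace(-2.0, 2.0, 9):
    z = np.array([1.0, x, x**2])
    assert abs(z @ Q @ z - (x**4 + 2*x**2 + 1)) < 1e-9

print("p(x) = x^4 + 2x^2 + 1 certified SOS; min eigenvalue:", eigvals.min())
```

    Here the PSD member was written down by hand; in practice the search over Q (subject to the same linear constraints) is exactly the SDP that SOS tools pass to a solver.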

  12. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  13. A trust-based sensor allocation algorithm in cooperative space search problems

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2011-06-01

    Sensor allocation is an important and challenging problem within the field of multi-agent systems. The sensor allocation problem involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the used sensor resource, and the POS is the target tracking performance. Usually, POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search and tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. Then we model the sensor allocation optimization problem as a trust-in-loop negotiation game and solve it using a sub-game perfect equilibrium. Numerical simulations are performed to demonstrate the trust-based sensor allocation algorithm in cooperative space situation awareness (SSA) search problems.
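
    Two ingredients of such a scheme can be sketched generically (illustrative only; the paper solves a negotiation game rather than a greedy assignment, and all names and parameters below are hypothetical): a trust value updated from observed tracking outcomes, and an assignment that weighs each sensor's POS by the trust placed in it.

```python
# Minimal sketch of two ingredients of trust-based allocation
# (illustrative only -- the paper solves a negotiation game, not a
# greedy assignment): (1) an agent's trust in a sensor is updated
# from observed tracking outcomes; (2) each target is assigned to
# the sensor with the best trust-weighted utility (trust*POS - cost).

def update_trust(trust, outcome, alpha=0.3):
    """Exponential smoothing of subjective trust with a new outcome
    in [0, 1] (1 = tracking succeeded as promised)."""
    return (1 - alpha) * trust + alpha * outcome

def allocate(targets, sensors):
    """sensors: {name: (trust, pos, cost)}; assign each target to the
    sensor maximizing trust-weighted expected utility."""
    assignment = {}
    for t in targets:
        assignment[t] = max(
            sensors,
            key=lambda s: sensors[s][0] * sensors[s][1] - sensors[s][2])
    return assignment

sensors = {"radar": (0.9, 0.8, 0.30),   # trusted, capable, costly
           "eo":    (0.6, 0.9, 0.10)}   # cheap but less trusted
print(allocate(["sat-1"], sensors))      # {'sat-1': 'eo'}
print(round(update_trust(0.6, 1.0), 2))  # 0.72
```

    In the paper's mechanism the trust term enters the negotiation itself, discouraging agents from over-promising POS to win allocations.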

  14. Energy-economic policy modeling

    NASA Astrophysics Data System (ADS)

    Sanstad, Alan H.

    2018-01-01

    Computational models based on economic principles and methods are powerful tools for understanding and analyzing problems in energy and the environment and for designing policies to address them. Among their other features, some current models of this type incorporate information on sustainable energy technologies and can be used to examine their potential role in addressing the problem of global climate change. The underlying principles and the characteristics of the models are summarized, and examples of this class of model and their applications are presented. Modeling epistemology and related issues are discussed, as well as critiques of the models. The paper concludes with remarks on the evolution of the models and possibilities for their continued development.

  15. An adjoint-based method for a linear mechanically-coupled tumor model: application to estimate the spatial variation of murine glioma growth based on diffusion weighted magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.

    2018-06-01

    We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.
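
    The adjoint idea can be illustrated on a much simpler stand-in model (a hypothetical scalar logistic growth law, not the paper's mechanically coupled tumor model): one forward simulation plus one backward adjoint sweep yields the exact gradient of the data-misfit objective with respect to the growth parameter, at roughly the cost of two simulations regardless of how many parameters there are.

```python
import numpy as np

# Adjoint-based gradient for a stand-in growth model (hypothetical
# scalar logistic law, not the paper's mechanically coupled model):
#   c_{t+1} = c_t + dt * k * c_t * (1 - c_t),   J = 0.5 * (c_N - obs)^2

def forward(k, c0=0.1, dt=0.1, n=50):
    c = np.empty(n + 1)
    c[0] = c0
    for t in range(n):
        c[t + 1] = c[t] + dt * k * c[t] * (1 - c[t])
    return c

def adjoint_gradient(k, obs, c0=0.1, dt=0.1, n=50):
    """One forward + one backward sweep gives dJ/dk exactly."""
    c = forward(k, c0, dt, n)
    lam = c[n] - obs                    # terminal adjoint dJ/dc_N
    grad = 0.0
    for t in range(n - 1, -1, -1):
        grad += lam * dt * c[t] * (1 - c[t])     # d c_{t+1} / dk
        lam *= 1 + dt * k * (1 - 2 * c[t])       # d c_{t+1} / dc_t
    return grad

obs = forward(1.0)[-1]                  # synthetic "measured" density
k = 0.8
g_adj = adjoint_gradient(k, obs)

# Verify against a central finite difference.
h = 1e-6
J = lambda k_: 0.5 * (forward(k_)[-1] - obs) ** 2
g_fd = (J(k + h) - J(k - h)) / (2 * h)
assert abs(g_adj - g_fd) < 1e-6 * max(1.0, abs(g_fd))
print("adjoint gradient:", g_adj)
```

    For a spatially varying growth field, the same two-sweep structure applies field-wise, which is what makes the adjoint approach attractive for the imaging-driven inverse problem above.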

  16. Model of Rescue Units Control in Event of Potential Emergency

    NASA Astrophysics Data System (ADS)

    Kalach, A. V.; Kravchenko, A. S.; Soloviev, A. S.; Nesterov, A. A.

    2018-05-01

    The problem of organizing and improving the efficiency of the system that controls the rescue units of the Ministry of Civil Defense and Emergency Response of the Russian Federation is considered using the example of a potential hydrological emergency, and a model of a system for controlling rescue units in the event of a potential hydrological emergency is proposed. The problem solution is based on mathematical models of operational control of rescue units and of assessment of the hydrological situation of area flooding.

  17. Fundamental Mechanisms of NeuroInformation Processing: Inverse Problems and Spike Processing

    DTIC Science & Technology

    2016-08-04

    platform called Neurokernel for collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and their execution...example. We investigated the following nonlinear identification problem: given both the input signal u and the time sequence (t_k)_{k∈Z} at the output of...from a time sequence is to be contrasted with existing methods for rate-based models in neuroscience. In such models the output of the system is taken

  18. Image-Based Models for Specularity Propagation in Diminished Reality.

    PubMed

    Said, Souheil Hadj; Tamaazousti, Mohamed; Bartoli, Adrien

    2018-07-01

    The aim of Diminished Reality (DR) is to remove a target object in a live video stream seamlessly. In our approach, the area of the target object is replaced with new texture that blends with the rest of the image. The result is then propagated to the next frames of the video. One of the important stages of this technique is to update the target region with respect to the illumination change. This is a complex and recurrent problem when the viewpoint changes. We show that the state-of-the-art in DR fails in solving this problem, even under simple scenarios. We then use local illumination models to address this problem. According to these models, the variation in illumination only affects the specular component of the image. In the context of DR, the problem is therefore solved by propagating the specularities in the target area. We list a set of structural properties of specularities which we incorporate in two new models for specularity propagation. Our first model includes the same property as the previous approaches, which is the smoothness of illumination variation, but has a different estimation method based on the Thin-Plate Spline. Our second model incorporates more properties of the specularity's shape on planar surfaces. Experimental results on synthetic and real data show that our strategy substantially improves the rendering quality compared to the state-of-the-art in DR.
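
    The Thin-Plate Spline estimation underlying the first model can be sketched with scipy's generic `RBFInterpolator` (a standard TPS interpolator, not the paper's implementation; the scene values below are hypothetical): specular intensity sampled around the boundary of the target region is smoothly propagated into the region's interior.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sketch: propagate a smoothly varying specular intensity, sampled on
# the boundary of a (hypothetical) target region, into its interior
# with a Thin-Plate Spline -- the smoothness assumption used by the
# first propagation model. Not the paper's implementation.

rng = np.random.default_rng(0)

# "Known" specular intensity outside the target region: a smooth ramp.
def intensity(xy):
    return 0.3 * xy[:, 0] + 0.5 * xy[:, 1]

# Samples on a ring around the target region (radius 1, centered at 0).
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
ring = np.column_stack([np.cos(theta), np.sin(theta)])
tps = RBFInterpolator(ring, intensity(ring), kernel="thin_plate_spline")

# Propagate into interior pixels of the target region.
interior = rng.uniform(-0.5, 0.5, size=(50, 2))
predicted = tps(interior)
err = np.abs(predicted - intensity(interior)).max()
print("max interior error:", err)  # ~0 (a TPS reproduces affine fields)
assert err < 1e-8
```

    The paper's second model additionally constrains the interpolated specularity with shape properties on planar surfaces; the sketch above captures only the shared smoothness assumption.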

  19. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and to automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by numerical optimization. PMID:27330230
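
    The two-layer structure of the T-criterion can be made concrete with a classical vertex-exchange sketch (a Fedorov–Wynn-style method, not the paper's SIP algorithm) for the textbook case of discriminating a known quadratic "true" model from a rival straight line on [−1, 1]; the inner layer is just a weighted least-squares fit.

```python
import numpy as np

# Vertex-exchange sketch of T-optimal design (classical illustrative
# method, not the paper's SIP algorithm). True model: eta1(x) = 1 + x
# + x^2 with known parameters; rival model: eta2(x; a, b) = a + b*x.
# T-criterion: Delta(w) = min_{a,b} sum_i w_i (eta1(x_i) - eta2(x_i))^2.

grid = np.linspace(-1.0, 1.0, 41)
eta1 = 1.0 + grid + grid**2
A = np.column_stack([np.ones_like(grid), grid])   # rival design matrix

def inner_fit(w):
    """Inner layer: weighted least-squares fit of the rival model,
    returning squared residuals on the grid."""
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(A * sw[:, None], eta1 * sw, rcond=None)
    return (eta1 - A @ theta) ** 2

def criterion(w):
    return float(w @ inner_fit(w))

w = np.full(grid.size, 1.0 / grid.size)           # uniform start
delta0 = criterion(w)
for t in range(400):                               # outer exchange loop
    r2 = inner_fit(w)
    j = int(np.argmax(r2))                         # most-violated point
    alpha = 2.0 / (t + 3.0)                        # diminishing step
    w *= 1.0 - alpha
    w[j] += alpha

print(round(delta0, 3), round(criterion(w), 3))
# mass concentrates near x = -1, 0, 1, the known T-optimal support
```

    The SIP approach of the paper replaces this heuristic exchange with a semi-infinite program whose outer and inner bounds are iterated to certified convergence.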

  20. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C

    2015-03-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and to automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by numerical optimization.

  1. Overcoming rule-based rigidity and connectionist limitations through massively-parallel case-based reasoning

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    Symbol manipulation as used in traditional Artificial Intelligence has been criticized by neural net researchers for being excessively inflexible and sequential. On the other hand, the application of neural net techniques to the types of high-level cognitive processing studied in traditional artificial intelligence presents major problems as well. A promising way out of this impasse is to build neural net models that accomplish massively parallel case-based reasoning. Case-based reasoning, which has received much attention recently, is essentially the same as analogy-based reasoning, and avoids many of the criticisms leveled at traditional artificial intelligence. Further problems are avoided by pursuing many strands of case-based reasoning in parallel, and by implementing the whole system as a neural net. In addition, such a system provides an approach to some aspects of the problems of noise, uncertainty and novelty in reasoning systems. The current neural net system (Conposit), which performs standard rule-based reasoning, is being modified into a massively parallel case-based reasoning version.

  2. Wrinkle-free design of thin membrane structures using stress-based topology optimization

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-05-01

    Thin membrane structures experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and the linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.

  3. Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem

    DTIC Science & Technology

    1999-12-01

    solution. The non-linear least squares model is defined as Y = f(θ, t), where θ is an M-element parameter vector, Y is an N-element vector of all data, and t ... NAVAL POSTGRADUATE SCHOOL, Monterey, California. Master's Thesis: Exploratory Model Analysis of the Space Based Infrared System (SBIRS) Low Global Scheduler Problem, December 1999.

  4. A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.

    PubMed

    Richter, Mathis; Lins, Jonas; Schöner, Gregor

    2017-01-01

    Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases, all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition. Copyright © 2017 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
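
    The dynamic neural fields at the core of such models follow Amari-type dynamics; a minimal 1-D sketch (a generic field equation with hypothetical parameters, not the paper's architecture) shows a localized input driving a self-stabilizing activation peak:

```python
import numpy as np

# Minimal 1-D Amari-type dynamic neural field (generic illustration
# with hypothetical parameters, not the paper's architecture):
#   tau * du/dt = -u + h + (w * f(u)) + S
# with a lateral excitation/inhibition kernel w and sigmoid rate f.

n, dx, dt, tau, h = 101, 0.1, 0.05, 1.0, -1.0
x = np.arange(n) * dx
d = x - x[n // 2]
kernel = (1.0 * np.exp(-d**2 / (2 * 0.5**2))     # local excitation
          - 0.5 * np.exp(-d**2 / (2 * 2.0**2)))  # broad inhibition
f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * u))     # sigmoid rate

stim = 2.0 * np.exp(-(x - 5.0)**2 / (2 * 0.3**2))  # localized input
u = np.full(n, h, dtype=float)                     # resting level
for _ in range(400):
    lateral = np.convolve(f(u), kernel, mode="same") * dx
    u += dt / tau * (-u + h + lateral + stim)

peak = x[np.argmax(u)]
print("activation peak at x =", peak)  # near the stimulus at x = 5.0
assert abs(peak - 5.0) < 0.5
assert np.all(np.isfinite(u))
```

    The peak is the field-level analogue of a "decision": a stable, localized attractor state selected by the input, which downstream discrete nodes can then read out.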

  5. The Architecture of Recovery: Two Kinds of Housing Assistance for Chronic Homeless Persons with Substance Use Disorders.

    PubMed

    Wittman, Friedner D; Polcin, Douglas L; Sheridan, Dave

    2017-01-01

    Roughly half a million persons in the United States are homeless on any given night and over a third of those individuals have significant alcohol/other drug (AOD) problems. Many are chronically homeless and in need of assistance for a variety of problems. However, the literature on housing services for this population has paid limited attention to comparative analyses contrasting different approaches. We examined the literature on housing models for homeless persons with AOD problems and critically analyzed how service settings and operations aligned with service goals. We found two predominant housing models that reflect different service goals: Sober Living Houses (SLHs) and Housing First (HF). SLHs are communally based living arrangements that draw on the principles of Alcoholics Anonymous. They emphasize a living environment that promotes abstinence and peer support for recovery. HF is based on the premise that many homeless persons with substance abuse problems will reject abstinence as a goal. Therefore, the HF focus is providing subsidized or free housing and optional professional services for substance abuse, psychiatric disorders and other problems. If homeless service providers are to develop comprehensive systems for homeless persons with AOD problems, they need to consider important contrasts in housing models, including definitions of "recovery," roles of peer support, facility management, roles for professional service, and the architectural designs that support the mission of each type of housing. This paper is the first to consider distinct consumer choices within homeless service systems and provide recommendations to improve each based upon an integrated analysis that considers how architecture and operations align with service goals.

  6. Predicting Successful Treatment Outcome of Web-Based Self-help for Problem Drinkers: Secondary Analysis From a Randomized Controlled Trial

    PubMed Central

    Kramer, Jeannet; Keuken, Max; Smit, Filip; Schippers, Gerard; Cuijpers, Pim

    2008-01-01

    Background Web-based self-help interventions for problem drinking are coming of age. They have shown promising results in terms of cost-effectiveness, and they offer opportunities to reach out on a broad scale to problem drinkers. The question now is whether certain groups of problem drinkers benefit more from such Web-based interventions than others. Objective We sought to identify baseline, client-related predictors of the effectiveness of Drinking Less, a 24/7, free-access, interactive, Web-based self-help intervention without therapist guidance for problem drinkers who want to reduce their alcohol consumption. The intervention is based on cognitive-behavioral and self-control principles. Methods We conducted secondary analysis of data from a pragmatic randomized trial with follow-up at 6 and 12 months. Participants (N = 261) were adult problem drinkers in the Dutch general population with a weekly alcohol consumption above 210 g of ethanol for men or 140 g for women, or consumption of at least 60 g (men) or 40 g (women) one or more days a week over the past 3 months. Six baseline participant characteristics were designated as putative predictors of treatment response: (1) gender, (2) education, (3) Internet use competence (sociodemographics), (4) mean weekly alcohol consumption, (5) prior professional help for alcohol problems (level of problem drinking), and (6) participants’ expectancies of Web-based interventions for problem drinking. Intention-to-treat (ITT) analyses, using last-observation-carried-forward (LOCF) data, and regression imputation (RI) were performed to deal with loss to follow-up. Statistical tests for interaction terms were conducted and linear regression analysis was performed to investigate whether the participants’ characteristics as measured at baseline predicted positive treatment responses at 6- and 12-month follow-ups. 
Results At 6 months, prior help for alcohol problems predicted a small, marginally significant positive treatment outcome in the RI model only (beta = .18, P = .05, R2 = .11). At 12 months, females displayed modest predictive power in both imputation models (LOCF: beta = .22, P = .045, R2 = .02; regression: beta = .27, P = .01, R2 = .03). Those with higher levels of education exhibited modest predictive power in the LOCF model only (beta = .33, P = .01, R2 = .03). Conclusions Although female and more highly educated users appeared slightly more likely to derive benefit from the Drinking Less intervention, none of the baseline characteristics we studied persuasively predicted a favorable treatment outcome. The Web-based intervention therefore seems well suited for a heterogeneous group of problem drinkers and could hence be offered as a first-step treatment in a stepped-care approach directed at problem drinkers in the general population. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 47285230; http://www.controlled-trials.com/isrctn47285230 (Archived by WebCite at http://www.webcitation.org/5cSR2sMkp). PMID:19033150
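
    The LOCF imputation used in the intention-to-treat analysis above simply carries each participant's last observed value forward over missed follow-ups; a minimal generic sketch (not the study's analysis code, with hypothetical values):

```python
# Last-observation-carried-forward (LOCF): carry each participant's
# most recent observed value over missing follow-ups. Generic sketch,
# not the study's analysis code.

def locf(observations):
    """Replace None entries with the last non-missing value seen."""
    filled, last = [], None
    for v in observations:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Hypothetical weekly-consumption series: baseline, 6 and 12 months;
# missed follow-ups are None.
print(locf([24, None, None]))   # [24, 24, 24]
print(locf([30, 18, None]))     # [30, 18, 18]
```

    LOCF is conservative here because dropouts are assumed not to have improved beyond their last measurement, which is why the study reports the regression-imputation results alongside it.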

  7. Predicting successful treatment outcome of web-based self-help for problem drinkers: secondary analysis from a randomized controlled trial.

    PubMed

    Riper, Heleen; Kramer, Jeannet; Keuken, Max; Smit, Filip; Schippers, Gerard; Cuijpers, Pim

    2008-11-22

    Web-based self-help interventions for problem drinking are coming of age. They have shown promising results in terms of cost-effectiveness, and they offer opportunities to reach out on a broad scale to problem drinkers. The question now is whether certain groups of problem drinkers benefit more from such Web-based interventions than others. We sought to identify baseline, client-related predictors of the effectiveness of Drinking Less, a 24/7, free-access, interactive, Web-based self-help intervention without therapist guidance for problem drinkers who want to reduce their alcohol consumption. The intervention is based on cognitive-behavioral and self-control principles. We conducted secondary analysis of data from a pragmatic randomized trial with follow-up at 6 and 12 months. Participants (N = 261) were adult problem drinkers in the Dutch general population with a weekly alcohol consumption above 210 g of ethanol for men or 140 g for women, or consumption of at least 60 g (men) or 40 g (women) one or more days a week over the past 3 months. Six baseline participant characteristics were designated as putative predictors of treatment response: (1) gender, (2) education, (3) Internet use competence (sociodemographics), (4) mean weekly alcohol consumption, (5) prior professional help for alcohol problems (level of problem drinking), and (6) participants' expectancies of Web-based interventions for problem drinking. Intention-to-treat (ITT) analyses, using last-observation-carried-forward (LOCF) data, and regression imputation (RI) were performed to deal with loss to follow-up. Statistical tests for interaction terms were conducted and linear regression analysis was performed to investigate whether the participants' characteristics as measured at baseline predicted positive treatment responses at 6- and 12-month follow-ups. 
    At 6 months, prior help for alcohol problems predicted a small, marginally significant positive treatment outcome in the RI model only (beta = .18, P = .05, R2 = .11). At 12 months, females displayed modest predictive power in both imputation models (LOCF: beta = .22, P = .045, R2 = .02; regression: beta = .27, P = .01, R2 = .03). Those with higher levels of education exhibited modest predictive power in the LOCF model only (beta = .33, P = .01, R2 = .03). Although female and more highly educated users appeared slightly more likely to derive benefit from the Drinking Less intervention, none of the baseline characteristics we studied persuasively predicted a favorable treatment outcome. The Web-based intervention therefore seems well suited for a heterogeneous group of problem drinkers and could hence be offered as a first-step treatment in a stepped-care approach directed at problem drinkers in the general population. International Standard Randomized Controlled Trial Number (ISRCTN): 47285230; http://www.controlled-trials.com/isrctn47285230 (Archived by WebCite at http://www.webcitation.org/5cSR2sMkp).

  8. The effect of visual representation style in problem-solving: a perspective from cognitive processes.

    PubMed

    Nyamsuren, Enkhbold; Taatgen, Niels A

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving.

  9. The Effect of Visual Representation Style in Problem-Solving: A Perspective from Cognitive Processes

    PubMed Central

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving. PMID:24260415

  10. A new modal-based approach for modelling the bump foil structure in the simultaneous solution of foil-air bearing rotor dynamic problems

    NASA Astrophysics Data System (ADS)

    Bin Hassan, M. F.; Bonello, P.

    2017-05-01

    Recently-proposed techniques for the simultaneous solution of foil-air bearing (FAB) rotor dynamic problems have been limited to a simple bump foil model in which the individual bumps were modelled as independent spring-damper (ISD) subsystems. The present paper addresses this limitation by introducing a modal model of the bump foil structure into the simultaneous solution scheme. The dynamics of the corrugated bump foil structure are first studied using the finite element (FE) technique. This study is experimentally validated using a purpose-made corrugated foil structure. Based on the findings of this study, it is proposed that the dynamics of the full foil structure, including bump interaction and foil inertia, can be represented by a modal model comprising a limited number of modes. This full foil structure modal model (FFSMM) is then adapted into the rotordynamic FAB problem solution scheme, instead of the ISD model. Preliminary results using the FFSMM under static and unbalance excitation conditions are proven to be reliable by comparison against the corresponding ISD foil model results and by cross-correlating different methods for computing the deflection of the full foil structure. The rotor-bearing model is also validated against experimental and theoretical results in the literature.

  11. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions. However, how to exploit this regularity in designing multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution, and the centroid of the probability distribution is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
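
    The nondominated-sorting selection that MMEA-RA shares with NSGA-II can be sketched generically; a simple O(MN^2) sort of a population into Pareto fronts (minimization, illustrative implementation):

```python
# Generic nondominated sorting into Pareto fronts (minimization),
# the selection ingredient MMEA-RA shares with NSGA-II. Simple
# O(M * N^2) illustrative implementation.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_sort(points):
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 4), (2, 3), (3, 2), (4, 1), (2, 4), (3, 3)]
print(nondominated_sort(pts))  # [[0, 1, 2, 3], [4, 5]]
```

    Selection then fills the next generation front by front, which is how the regularity-based sampling model and the dominance-based survival pressure are combined.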

  12. Effect of Face-to-face Education, Problem-based Learning, and Goldstein Systematic Training Model on Quality of Life and Fatigue among Caregivers of Patients with Diabetes.

    PubMed

    Masoudi, Reza; Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Baraz, Shahram; Hakim, Ashrafalsadat; Chan, Yiong H

    2017-01-01

    Education is a fundamental component of achieving good glycemic control for patients with diabetes, and selecting the appropriate method of education is one of the most effective factors in quality of life. The present study aimed to evaluate the effect of face-to-face education, problem-based learning, and the Goldstein systematic training model on the quality of life (QOL) and fatigue of caregivers of patients with diabetes. This randomized clinical trial was conducted in Hajar Hospital (Shahrekord, Iran) in 2012. The subjects were 105 family caregivers of patients with diabetes, randomly assigned to three intervention groups (35 caregivers in each group). For each group, 5-h training sessions were held separately. QOL and fatigue were evaluated immediately before and after the intervention, and 1, 2, 3, and 4 months after it. There was a significant increase in QOL in all three groups; both problem-based learning and the Goldstein method showed desirable QOL improvement over time. The most effective intervention for fatigue reduction during the 4-month post-intervention period was the Goldstein method, and a significant reduction in fatigue was observed in all three groups after the intervention (P < 0.001). The results illustrate that problem-based learning and the Goldstein systematic training model improve the QOL of caregivers of patients with diabetes, and that the Goldstein systematic training model had the greatest effect on the reduction of fatigue within 4 months of the intervention.

  13. Teacher in a Problem-Based Learning Environment--Jack of All Trades?

    ERIC Educational Resources Information Center

    Dahms, Mona Lisa; Spliid, Claus Monrad; Nielsen, Jens Frederik Dalsgaard

    2017-01-01

    Problem-based learning (PBL) is one among several approaches to active learning. Being a teacher in a PBL environment can, however, be a challenge because of the need to support students' learning within a broad "landscape of learning". In this article we will analyse the landscape of learning by use of the study activity model (SAM)…

  14. Modeling the Problem-Based Learning Preferences of McMaster University Undergraduate Medical Students Using a Discrete Choice Conjoint Experiment

    ERIC Educational Resources Information Center

    Cunningham, Charles E.; Deal, Ken; Neville, Alan; Rimas, Heather; Lohfeld, Lynne

    2006-01-01

    Objectives: To use methods from the field of marketing research to involve students in the redesign of McMaster University's small group, problem-based undergraduate medical education program. Methods: We used themes from a focus group conducted in an electronic decision support lab to compose 14 four-level educational attributes. Undergraduate…

  15. Integrated and Contextual Basic Science Instruction in Preclinical Education: Problem-Based Learning Experience Enriched with Brain/Mind Learning Principles

    ERIC Educational Resources Information Center

    Gülpinar, Mehmet Ali; Isoglu-Alkaç, Ümmühan; Yegen, Berrak Çaglayan

    2015-01-01

    Recently, integrated and contextual learning models such as problem-based learning (PBL) and brain/mind learning (BML) have become prominent. The present study aimed to develop and evaluate a PBL program enriched with BML principles. In this study, participants were 295 first-year medical students. The study used both quantitative and qualitative…

  16. Prevent-Teach-Reinforce: The School-Based Model of Individualized Positive Behavior Support

    ERIC Educational Resources Information Center

    Dunlap, Glen; Iovannone, Rose; Kincaid, Donald; Wilson, Kelly; Christiansen, Kathy; Strain, Phillip; English, Carie

    2010-01-01

    Solve serious behavior challenges in K-8 classrooms with this easy-to-use book, the first practical guide to the research-proven Prevent-Teach-Reinforce (PTR) model. Developed by some of the most respected authorities on positive behavior support, this innovative model gives school-based teams a five-step plan for reducing problems unresolved by…

  17. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 3. PROGRAM USER'S GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  18. EVALUATION OF THE REAL-TIME AIR-QUALITY MODEL USING THE RAPS (REGIONAL AIR POLLUTION STUDY) DATA BASE. VOLUME 4. EVALUATION GUIDE

    EPA Science Inventory

    The theory and programming of statistical tests for evaluating the Real-Time Air-Quality Model (RAM) using the Regional Air Pollution Study (RAPS) data base are fully documented in four volumes. Moreover, the tests are generally applicable to other model evaluation problems. Volu...

  19. Spatial Rule-Based Modeling: A Method and Its Application to the Human Mitotic Kinetochore

    PubMed Central

    Ibrahim, Bashar; Henze, Richard; Gruenert, Gerd; Egbert, Matthew; Huwald, Jan; Dittrich, Peter

    2013-01-01

    A common problem in the analysis of biological systems is the combinatorial explosion that emerges from the complexity of multi-protein assemblies. Conventional formalisms, like differential equations, Boolean networks and Bayesian networks, are unsuitable for dealing with the combinatorial explosion, because they are designed for a restricted state space with fixed dimensionality. To overcome this problem, the rule-based modeling language, BioNetGen, and the spatial extension, SRSim, have been developed. Here, we describe how to apply rule-based modeling to integrate experimental data from different sources into a single spatial simulation model and how to analyze the output of that model. The starting point for this approach can be a combination of molecular interaction data, reaction network data, proximities, binding and diffusion kinetics and molecular geometries at different levels of detail. We describe the technique and then use it to construct a model of the human mitotic inner and outer kinetochore, including the spindle assembly checkpoint signaling pathway. This allows us to demonstrate the utility of the procedure, show how a novel perspective for understanding such complex systems becomes accessible and elaborate on challenges that arise in the formulation, simulation and analysis of spatial rule-based models. PMID:24709796

  20. On the Relationship between Variational Level Set-Based and SOM-Based Active Contours

    PubMed Central

    Abdelsamea, Mohammed M.; Gnecco, Giorgio; Gaber, Mohamed Medhat; Elyan, Eyad

    2015-01-01

    Most Active Contour Models (ACMs) treat image segmentation as a functional optimization problem, dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build active contours with the aim of modeling arbitrarily complex shapes; moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly for modeling an active contour by utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have generally been proposed with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology-preservation property, and of overcoming some drawbacks of other ACMs, such as being trapped in local minima of the image energy functional to be minimized. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs and their relationship, and comprehensively review the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses. PMID:25960736

  1. Topology optimization of unsteady flow problems using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Nørgaard, Sebastian; Sigmund, Ole; Lazarov, Boyan

    2016-02-01

    This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems. The optimization problem is solved with a gradient based method, and the design sensitivities are computed by solving the discrete adjoint problem. For moderate Reynolds number flows, it is demonstrated that topology optimization can successfully account for unsteady effects such as vortex shedding and time-varying boundary conditions. Such effects are relevant in several engineering applications, i.e. fluid pumps and control valves.

  2. Cloud-based large-scale air traffic flow optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yi

    The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. First, a new aggregate model, called the Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is taken to be the mode of the distribution of historical flight records, estimated using kernel density estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Second, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM, aiming to alleviate traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem so that the subproblems become solvable. However, the whole problem is still computationally expensive to solve, since each subproblem is a smaller integer programming problem that pursues integer solutions, and solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to a linear increase in runtime as the problem size grows. To address this, a parallel computing framework is designed that accommodates concurrent execution via multithreaded programming; the multithreaded version is compared with its monolithic version to show the decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used both for offline historical traffic data analysis and for online traffic flow optimization, providing an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
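
    The route-traversal-time estimate described above, the mode of the historical distribution found by kernel density estimation, can be sketched as follows (a pure-Python illustration with a hypothetical Gaussian kernel, bandwidth, and sample data):

```python
import math

def kde_mode(samples, bandwidth, grid_steps=200):
    """Estimate the mode of a 1-D sample: build a Gaussian kernel density
    estimate and take the maximizer over a uniform evaluation grid."""
    lo, hi = min(samples), max(samples)
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)

    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / norm

    grid = [lo + (hi - lo) * i / grid_steps for i in range(grid_steps + 1)]
    return max(grid, key=density)

# hypothetical traversal times (minutes) for one origin-destination route
times = [42, 43, 43, 44, 44, 44, 45, 45, 52, 60]
print(round(kde_mode(times, bandwidth=1.0)))  # the most typical time, not the mean
```

    Unlike the mean, the mode is insensitive to the long delay tail (the 52- and 60-minute flights), which is why it is a natural summary of a "typical" traversal.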

  3. Learning Petri net models of non-linear gene interactions.

    PubMed

    Mayo, Michael

    2005-10-01

    Understanding how an individual's genetic make-up influences their risk of disease is a problem of paramount importance. Although machine-learning techniques are able to uncover the relationships between genotype and disease, the problem of automatically building the best biochemical model or "explanation" of the relationship has received less attention. In this paper, I describe a method based on random hill climbing that automatically builds Petri net models of non-linear (or multi-factorial) disease-causing gene-gene interactions. Petri nets are a suitable formalism for this problem, because they are used to model concurrent, dynamic processes analogous to biochemical reaction networks. I show that this method is routinely able to identify perfect Petri net models for three disease-causing gene-gene interactions recently reported in the literature.
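
    The random hill-climbing search described in the abstract can be sketched generically; the bit-string "model" and scoring function below are hypothetical stand-ins for the Petri-net structures and fitness measure used in the paper:

```python
import random

def hill_climb(initial, mutate, score, iterations=1000, seed=0):
    """Random hill climbing: propose a random mutation of the current
    model and accept it only if it does not decrease the score."""
    rng = random.Random(seed)
    best, best_score = initial, score(initial)
    for _ in range(iterations):
        candidate = mutate(best, rng)
        s = score(candidate)
        if s >= best_score:
            best, best_score = candidate, s
    return best, best_score

def flip_one(model, rng):
    """Mutation operator: flip one randomly chosen bit."""
    j = rng.randrange(len(model))
    return [b ^ 1 if i == j else b for i, b in enumerate(model)]

# toy stand-in: fit a bit-string "model" to a target pattern
target = [1, 0, 1, 1, 0, 1, 0, 0]
score = lambda m: sum(a == b for a, b in zip(m, target))
model, s = hill_climb([0] * 8, flip_one, score)
print(s)  # → 8 (perfect fit)
```

    In the paper's setting, the mutation operator would instead add, remove, or rewire Petri-net places, transitions, and arcs, and the score would measure agreement with the genotype-disease data.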

  4. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The Particle Swarm Optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the choice of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic data and on real Audio-Magnetotelluric (AMT) and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
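
    The core PSO update can be sketched as follows (a minimal, generic box-constrained implementation minimizing a toy misfit function; the inertia and acceleration coefficients are conventional defaults, not the authors' settings):

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer for a box-constrained minimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity: inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # move and clamp to the box constraints
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy misfit: a sphere function standing in for a data-misfit objective
best, val = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(round(val, 6))
```

    In the geophysical setting, `objective` would be the data misfit between observed and forward-modeled soundings, optionally augmented with Occam-like regularization terms.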

  5. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.

  6. Nonlinearity measure and internal model control based linearization in anti-windup design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perev, Kamen

    2013-12-18

    This paper considers the problem of internal model control based linearization in anti-windup design. The nonlinearity measure concept is used for quantifying the control system degree of nonlinearity. The linearizing effect of a modified internal model control structure is presented by comparing the nonlinearity measures of the open-loop and closed-loop systems. It is shown that the linearization properties are improved by increasing the control system local feedback gain. However, it is emphasized that at the same time the stability of the system deteriorates. The conflicting goals of stability and linearization are resolved by solving the design problem in different frequency ranges.

  7. Research on precise modeling of buildings based on multi-source data fusion of air to ground

    NASA Astrophysics Data System (ADS)

    Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong

    2016-03-01

    Aiming at the accuracy problem of precise building modeling, an experimental study was conducted based on multi-source data for buildings in the same test area, including rooftop data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After the top and bottom outlines of the building clusters were accurately extracted, a series of qualitative and quantitative analyses was carried out on the 2D interval between the outlines. The results provide reliable accuracy support for precise building modeling from air-ground multi-source data fusion; at the same time, solutions to key technical problems are discussed.

  8. Polarimetric SAR image classification based on discriminative dictionary learning model

    NASA Astrophysics Data System (ADS)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model that enhances the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.

  9. Moving alcohol prevention research forward-Part II: new directions grounded in community-based system dynamics modeling.

    PubMed

    Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller

    2018-02-01

    Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties-stocks, flows and feedbacks-capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time-graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
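
    The stocks, flows, and feedbacks described above can be illustrated with a minimal simulation (a hypothetical two-stock model with invented parameters, not a calibrated alcohol-misuse model):

```python
def simulate(steps=120, dt=0.25):
    """Euler integration of a toy stock-and-flow model: a 'misusers' stock
    fed by an initiation flow (reinforced by social exposure, a feedback
    loop) and drained by a recovery flow."""
    contact_rate, infectivity, recovery_time = 2.0, 0.1, 10.0  # invented
    non_misusers, misusers = 950.0, 50.0                       # initial stocks
    total = non_misusers + misusers
    history = []
    for _ in range(steps):
        # reinforcing feedback: initiation grows with the prevalence of misuse
        initiation = contact_rate * infectivity * non_misusers * misusers / total
        # balancing feedback: recovery drains the stock over recovery_time
        recovery = misusers / recovery_time
        non_misusers += dt * (recovery - initiation)
        misusers += dt * (initiation - recovery)
        history.append(misusers)
    return history

traj = simulate()
print(round(traj[-1]))  # misusers stock after 30 time units
```

    Even this toy version exhibits the signature system dynamics behavior, S-shaped growth toward an equilibrium set by the balance of the two feedback loops, which is the kind of structure stakeholders refine during group model-building.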

  10. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187

  11. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  12. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  13. Robust BMPM training based on second-order cone programming and its application in medical diagnosis.

    PubMed

    Peng, Xiang; King, Irwin

    2008-01-01

    The Biased Minimax Probability Machine (BMPM) constructs a classifier for imbalanced learning tasks. It provides a worst-case bound on the probability of misclassification of future data points, based on reliable estimates of the means and covariance matrices of the classes from the training data samples, and achieves promising performance. In this paper, we develop a novel and critical extension of the BMPM training algorithm based on Second-Order Cone Programming (SOCP), and we apply the biased classification model to medical diagnosis problems to demonstrate its usefulness. By removing some crucial assumptions in the original solution to this model, we make the new method more accurate and robust. We outline the theoretical derivation of the biased classification model and reformulate it into an SOCP problem that can be solved efficiently with a global-optimum guarantee. We evaluate the proposed SOCP-based BMPM (BMPMSOCP) against traditional solutions on medical diagnosis tasks where the objective is to improve sensitivity (the accuracy of the more important class, say "ill" samples) rather than the overall classification accuracy. Empirical results show that our method handles imbalanced classification problems more effectively and robustly than traditional classification approaches and than the original Fractional Programming-based BMPM (BMPMFP).

  14. A Model for settlement of health insurance organizations’ debt to health service delivery institutions

    PubMed Central

    Abolhallaj, Masood; Hosseini, Seyed Mohammadreza; Jafari, Mehdi; Alaei, Fatemeh

    2017-01-01

    Background: Sukuk is a type of financial instrument backed by balance sheets and physical assets. This applied, descriptive study aimed at providing solutions to the problems faced by insurance companies in the health sector. Methods: We derived operational models by reviewing the nature and issuance mechanism of each of the securities and combining them. Results: The model presented in this study, designed around asset-backed securities, addresses 2 problems: settling past debts and avoiding future debts. Conclusion: Using financing instruments such as Sukuk and creating investment funds, the study approached the problem from 2 aspects: (1) models that settle the organization's old debts, and (2) models that prevent debts in the future.

  15. Supersonic reacting internal flowfields

    NASA Astrophysics Data System (ADS)

    Drummond, J. P.

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flowfields, and the application of these techniques to an increasingly difficult set of combustion problems are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion is then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  16. Supersonic reacting internal flow fields

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1989-01-01

    The national program to develop a trans-atmospheric vehicle has kindled a renewed interest in the modeling of supersonic reacting flows. A supersonic combustion ramjet, or scramjet, has been proposed to provide the propulsion system for this vehicle. The development of computational techniques for modeling supersonic reacting flow fields, and the application of these techniques to an increasingly difficult set of combustion problems are studied. Since the scramjet problem has been largely responsible for motivating this computational work, a brief history is given of hypersonic vehicles and their propulsion systems. A discussion is also given of some early modeling efforts applied to high speed reacting flows. Current activities to develop accurate and efficient algorithms and improved physical models for modeling supersonic combustion is then discussed. Some new problems where computer codes based on these algorithms and models are being applied are described.

  17. The modeler's influence on calculated solubilities for performance assessments at the Aspo Hard-rock Laboratory

    USGS Publications Warehouse

    Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.

    1999-01-01

    Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.

  18. Optimizing decentralized production-distribution planning problem in a multi-period supply chain network under uncertainty

    NASA Astrophysics Data System (ADS)

    Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi

    2017-09-01

    Decentralized supply chain management is found to be significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model under uncertainty. The imprecision of uncertain parameters, such as demand and the price of the final product, is represented with stochastic and fuzzy numbers. We formulate the problem mathematically as a bi-level mixed integer linear programming model. Due to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach, and a chance-constraint approach. Finally, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model, and a sensitivity analysis is made.

  19. Sequential Inverse Problems Bayesian Principles and the Logistic Map Example

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Farmer, Chris L.; Moroz, Irene M.

    2010-09-01

    Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
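
    The perfect-model-scenario filtering described above can be sketched with a simple grid-based Bayesian filter for the noisy logistic map (the parameters, noise levels, and grid resolution here are illustrative assumptions, not the paper's experimental setup):

```python
import math
import random

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def grid_filter(observations, obs_sigma, proc_sigma=0.01, n_cells=400, r=4.0):
    """Sequential Bayesian state estimation on a uniform grid over [0, 1]:
    forecast by pushing probability mass through the (perfect) logistic
    model with a little process noise, then reweight each cell by the
    Gaussian observation likelihood and renormalize."""
    grid = [(i + 0.5) / n_cells for i in range(n_cells)]
    prob = [1.0 / n_cells] * n_cells           # uniform prior
    half = int(3 * proc_sigma * n_cells)       # process-noise kernel half-width
    kernel = [math.exp(-0.5 * (k / (proc_sigma * n_cells)) ** 2)
              for k in range(-half, half + 1)]
    kernel = [k / sum(kernel) for k in kernel]
    estimates = []
    for y in observations:
        forecast = [0.0] * n_cells
        for p, x in zip(prob, grid):           # forecast step
            j = min(int(logistic(x, r) * n_cells), n_cells - 1)
            for off, w in enumerate(kernel, start=-half):
                forecast[min(max(j + off, 0), n_cells - 1)] += p * w
        weights = [f * math.exp(-0.5 * ((g - y) / obs_sigma) ** 2)
                   for f, g in zip(forecast, grid)]
        total = sum(weights)                   # analysis step
        prob = [w / total for w in weights]
        estimates.append(sum(p * x for p, x in zip(prob, grid)))
    return estimates

rng = random.Random(42)
truth, x = [], 0.3
for _ in range(20):
    x = logistic(x)
    truth.append(x)
obs = [t + rng.gauss(0.0, 0.05) for t in truth]
est = grid_filter(obs, obs_sigma=0.05)
err = sum(abs(e - t) for e, t in zip(est, truth)) / len(truth)
print(round(err, 3))
```

    With the forward model perfect (the PMS), the posterior re-localizes around the true state at each assimilation step despite the chaos of the map; under an imperfect model, the same assimilation statistics degrade, which is what lets the two scenarios be distinguished.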

  20. Two-dimensional finite element heat transfer model of softwood. Part III, Effect of moisture content on thermal conductivity

    Treesearch

    Hongmei Gu; John F. Hunt

    2007-01-01

    The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require analyses be based on fundamental material properties of the wood structure. Most heat transfer models for softwood use average thermal properties across either the radial or tangential direction and do not differentiate the effects of cellular alignment or...
