Science.gov

Sample records for pedagogical methodics adaption

  1. Making Pedagogical Adaptability Less Obvious

    ERIC Educational Resources Information Center

    Vagle, Mark D.

    2016-01-01

    In this article, I try to make pedagogical adaptability a bit less obvious. In particular, I use some post-structural philosophical ideas and some concepts at the intersections of social class and race to re-interpret Dylan Wiliam's conception of formative assessment. I suggest that this interpretation can provide opportunities to resist the urge…

  2. Adaptation of Technological Pedagogical Content Knowledge Scale to Turkish

    ERIC Educational Resources Information Center

    Kaya, Zehra; Kaya, Osman Nafiz; Emre, Irfan

    2013-01-01

    The purpose of this study was to adapt the "Survey of Pre-service Teachers' Knowledge of Teaching and Technology" into Turkish in order to assess pre-service primary teachers' Technological Pedagogical Content Knowledge (TPACK). A total of 407 pre-service primary teachers (227 female and 180 male) in their final semester in Education Faculties…

  3. Development and Evaluation of an E-Learning Course for Deaf and Hard of Hearing Based on the Advanced Adapted Pedagogical Index Method

    ERIC Educational Resources Information Center

    Debevc, Matjaž; Stjepanovic, Zoran; Holzinger, Andreas

    2014-01-01

    Web-based and adapted e-learning materials provide alternative methods of learning to those used in a traditional classroom. Within the study described in this article, deaf and hard of hearing people used an adaptive e-learning environment to improve their computer literacy. This environment included streaming video with sign language interpreter…

  4. Pedagogical Content Knowledge of Experienced Teachers in Physical Education: Functional Analysis of Adaptations

    ERIC Educational Resources Information Center

    Ayvazo, Shiri; Ward, Phillip

    2011-01-01

    Pedagogical content knowledge (PCK) is the teacher's ability to pedagogically adapt content to students of diverse abilities. In this study, we investigated how teachers' adaptations of instruction for individual students differed when teaching stronger and weaker instructional units. We used functional analysis (Hanley, Iwata, & McCord, 2003) of…

  5. A Context-Aware Self-Adaptive Fractal Based Generalized Pedagogical Agent Framework for Mobile Learning

    ERIC Educational Resources Information Center

    Boulehouache, Soufiane; Maamri, Ramdane; Sahnoun, Zaidi

    2015-01-01

    The Pedagogical Agents (PAs) for Mobile Learning (m-learning) must be able not only to adapt the teaching to the learner knowledge level and profile but also to ensure the pedagogical efficiency within unpredictable changing runtime contexts. Therefore, to deal with this issue, this paper proposes a Context-aware Self-Adaptive Fractal Component…

  6. Nodal Analysis Optimization Based on the Use of Virtual Current Sources: A Powerful New Pedagogical Method

    ERIC Educational Resources Information Center

    Chatzarakis, G. E.

    2009-01-01

    This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…
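
    The virtual-current-source formulation described in the abstract is not reproduced below; as a point of reference, the following sketch performs conventional nodal analysis on a small two-node resistive circuit (all component values are illustrative).

```python
import numpy as np

# Minimal conventional nodal analysis (not the paper's virtual-current-source
# variant): solve G @ v = i for the node voltages of a small resistive circuit.
#
# Circuit (illustrative values): a 1 A source injects current into node 1;
# R1 = 2 ohm from node 1 to ground, R2 = 4 ohm between nodes 1 and 2,
# R3 = 8 ohm from node 2 to ground.
R1, R2, R3 = 2.0, 4.0, 8.0
G = np.array([
    [1/R1 + 1/R2, -1/R2],          # KCL at node 1
    [-1/R2,        1/R2 + 1/R3],   # KCL at node 2
])
i = np.array([1.0, 0.0])           # independent current injections (A)

v = np.linalg.solve(G, i)          # node voltages (V)
print("node voltages:", v)
```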

  7. Transdisciplinary Pedagogical Templates and Their Potential for Adaptive Reuse

    ERIC Educational Resources Information Center

    Dobozy, Eva; Dalziel, James

    2016-01-01

    This article explores the use and usefulness of carefully designed transdisciplinary pedagogical templates (TPTs) aligned to different learning theories. The TPTs are based on the Learning Design Framework outlined in the Larnaca Declaration (Dalziel et al. in this collection). The generation of pedagogical plans or templates is not new. However,…

  8. The Patriarchal Scenes and Narratives of Orson Welles' "Othello" and the Pedagogical Uses of Adapted Films.

    ERIC Educational Resources Information Center

    Lee, Yong-eun

    2001-01-01

    Focuses on the patriarchal aspect of Welles' "Othello" and the pedagogical uses of adapted films. Focuses on differences between the narratives of Othello and those of Desdemona. Studies pedagogical uses and effects of adapted films and discusses "Shakespeare in Love" as an effective tool for instruction. (Author/VWL)

  9. Psychological and Pedagogical Support for Students' Adaptation to Learning Activity in High Science School

    ERIC Educational Resources Information Center

    Zeleeva, Vera P.; Bykova, Svetlana S.; Varbanova, Silvia

    2016-01-01

    The relevance of the study is due to the importance of psychological and pedagogical support for students in university that would prevent difficulties in learning activities and increase adaptive capacity through the development of relevant personal traits. Therefore, this article is aimed at solving the problem of arranging psychological and…

  10. Critically Adaptive Pedagogical Relations: The Relevance for Educational Policy and Practice

    ERIC Educational Resources Information Center

    Griffiths, Morwenna

    2013-01-01

    In this article Morwenna Griffiths argues that teacher education policies should be predicated on a proper and full understanding of pedagogical relations as contingent, responsive, and adaptive over the course of a career. Griffiths uses the example of the recent report on teacher education in Scotland, by Graham Donaldson, to argue that for all…

  11. Turkish Adaptation of Technological Pedagogical Content Knowledge Survey for Elementary Teachers

    ERIC Educational Resources Information Center

    Kaya, Sibel; Dag, Funda

    2013-01-01

    The purpose of this study was to adapt the Technological Pedagogical Content Knowledge (TPACK) Survey developed by Schmidt and colleagues into Turkish and investigate its factor structure through exploratory and confirmatory factor analysis. The participants were 352 elementary pre-service teachers from three large universities in northwestern…

  12. Pedagogical content knowledge of experienced teachers in physical education: functional analysis of adaptations.

    PubMed

    Ayvazo, Shiri; Ward, Phillip

    2011-12-01

    Pedagogical content knowledge (PCK) is the teacher's ability to pedagogically adapt content to students of diverse abilities. In this study, we investigated how teachers' adaptations of instruction for individual students differed when teaching stronger and weaker instructional units. We used functional analysis (Hanley, Iwata, & McCord, 2003) of the instructional interaction to examine PCK. We observed and measured student-teacher interactions and their appropriateness. Participants were 2 experienced elementary physical educators who taught stronger and weaker units. Primarily, the appropriateness data indicated PCK differences between the stronger and weaker units. Results show that functional analysis of instructional adaptations is an effective strategy for examining PCK and that teachers were better able to meet students' needs in the stronger unit. PMID:22276409

  13. Learner Language Analytic Methods and Pedagogical Implications

    ERIC Educational Resources Information Center

    Dyson, Bronwen

    2010-01-01

    Methods for analysing interlanguage have long aimed to capture learner language in its own right. By surveying the cognitive methods of Error Analysis, Obligatory Occasion Analysis and Frequency Analysis, this paper traces reformulations to attain this goal. The paper then focuses on Emergence Analysis, which fine-tunes learner language analysis…

  14. Adaptive Teaching: An Invaluable Pedagogic Practice in Social Studies Education

    ERIC Educational Resources Information Center

    Ikwumelu, S. N.; Oyibe, Ogene A.; Oketa, E. C.

    2015-01-01

    The paper delved into the issue of learner/teacher centredness in Social Studies and held that whom Social Studies teaching is centred around should be determined by the individual differences of the learners. Adaptive teaching was explained as an approach aimed at achieving a common instructional goal with learners considering…

  15. Teaching physiology by combined passive (pedagogical) and active (andragogical) methods.

    PubMed

    Richardson, D; Birge, B

    1995-06-01

    Pedagogy and andragogy are models of education based, respectively, on passive and active learning. This project compared two balanced sections of an undergraduate course in physiology. Both sections used the pedagogical method of didactic lectures to present basic material. Students in section 01 were given multiple-choice examinations, a pedagogical procedure, over the lecture content for the purpose of performance evaluation. In section 02 the lectures were used as an information source, which students combined with other information researched in the library to draft essays on assigned topics, i.e., an andragogical approach. Grading of the essays constituted 75% of a student's performance evaluation, with participation in class discussions making up the remaining 25%. There was no significant difference in overall performance outcome between the two sections (P > 0.47). Students from both sections valued the lectures, even though they served a different purpose in each section. However, overall the student rating of section 02 was significantly higher than that of section 01 (P ≤ 0.05). This reflected different teaching methods rather than different teachers, because the ratings of the two instructors were virtually identical (P > 0.98). These results suggest that a combined pedagogical and andragogical approach is an acceptable model for teaching introductory physiology. PMID:7598176

  16. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.
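
    A minimal sketch of the adaptive idea summarized above: relaxing on the homogeneous system A x = 0 from a random initial guess exposes the algebraically smooth (near-null-space) error that a classical setup would otherwise have to assume. The 1-D Poisson matrix and weighted-Jacobi smoother are illustrative stand-ins, not the paper's full adaptive multigrid construction.

```python
import numpy as np

# Sketch of the adaptive-setup idea: relax on A x = 0 from a random guess.
# Whatever survives the smoother is algebraically smooth error -- exactly the
# near-null-space information an adaptive AMG setup measures instead of assumes.
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Poisson matrix

def weighted_jacobi(A, x, nsweeps=50, omega=2.0 / 3.0):
    D_inv = 1.0 / np.diag(A)
    for _ in range(nsweeps):
        x = x - omega * D_inv * (A @ x)    # relax on A x = 0
    return x

rng = np.random.default_rng(0)
candidate = weighted_jacobi(A, rng.standard_normal(n))
candidate /= np.linalg.norm(candidate)
# 'candidate' is now a smooth prototype vector; an adaptive method would use it
# to build (or improve) the interpolation operator of the multigrid hierarchy.
print("Rayleigh quotient of candidate:", candidate @ A @ candidate)
```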

  17. Accelerated adaptive integration method.

    PubMed

    Kaus, Joseph W; Arrar, Mehrnoosh; McCammon, J Andrew

    2014-05-15

    Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083

  18. Accelerated Adaptive Integration Method

    PubMed Central

    2015-01-01

    Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083

  19. CEPIAH: A Method for the Design and Evaluation of Pedagogical Hypermedia

    ERIC Educational Resources Information Center

    Trigano, Philippe C.; Giacomini-Pacurar, Ecaterina

    2004-01-01

    CEPIAH is a method and a proposal for a Web-based system to be used to assist teachers in designing multimedia documents and in evaluating their prototypes. The proposed tool integrates two modules: one for the Evaluation of Multimedia Pedagogical and Interactive software (EMPI), and the other, a method for designing pedagogical hypermedia…

  20. Learning as Researchers and Teachers: The Development of a Pedagogical Culture for Social Science Research Methods?

    ERIC Educational Resources Information Center

    Kilburn, Daniel; Nind, Melanie; Wiles, Rose

    2014-01-01

    In light of calls to improve the capacity for social science research within UK higher education, this article explores the possibilities for an emerging pedagogy for research methods. A lack of pedagogical culture in this field has been identified by previous studies. In response, we examine pedagogical literature surrounding approaches for…

  1. Learning to Critique and Adapt Science Curriculum Materials: Examining the Development of Preservice Elementary Teachers' Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Beyer, Carrie J.; Davis, Elizabeth A.

    2012-01-01

    Teachers often engage in curricular planning by critiquing and adapting existing curriculum materials to contextualize lessons and compensate for their deficiencies. Designing instruction for students is shaped by teachers' ability to apply a variety of personal resources, including their pedagogical content knowledge (PCK). This study…

  2. Teachers' Instructional Planning for Computer-Supported Collaborative Learning: Macro-Scripts as a Pedagogical Method to Facilitate Collaborative Learning

    ERIC Educational Resources Information Center

    Hamalainen, Raija; Hakkinen, Paivi

    2010-01-01

    Technological tools challenge teachers' pedagogical activities. The use of information and communication technologies (ICT) in education should help teachers integrate new pedagogical methods into their work. This study explores macro-level computer-supported collaborative learning scripts as a pedagogical method to facilitate collaboration.…

  3. Method For Model-Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Relatively simple method of model-reference adaptive control (MRAC) developed from two prior classes of MRAC techniques: the signal-synthesis method and the parameter-adaptation method. Both are incorporated into a unified theory, which yields a more general adaptation scheme.

  4. The Correlation between Rigor and Relevance Using Pedagogical or Andragogical Instructional Methods in American Business Schools

    ERIC Educational Resources Information Center

    Roldan, Alberto

    2010-01-01

    The purpose of this study was to examine and document whether there is a correlation between relevance (applicability) focused courses and rigor (scholarly research) focused courses with pedagogical instructional methods or andragogical instructional methods in undergraduate business schools, and how it affects learning behavior and final course…

  5. An adaptive level set method

    SciTech Connect

    Milne, R.B.

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  6. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  7. Searching for Pedagogical Adaptations by Exploring Teacher's Tacit Knowledge and Interactional Co-Regulation in the Education of Pupils with Autism

    ERIC Educational Resources Information Center

    Rama, Irene; Kontu, Elina

    2012-01-01

    The purpose of this article is to introduce a research design, which aims to find useful pedagogical adaptations for teaching pupils with autism. Autism is a behavioural syndrome characterised by disabilities and dysfunctions in interaction and communication, which is why it is interesting to explore educational processes particularly from an…

  8. Impact of pedagogical method on Brazilian dental students' waste management practice.

    PubMed

    Victorelli, Gabriela; Flório, Flávia Martão; Ramacciato, Juliana Cama; Motta, Rogério Heládio Lopes; de Souza Fonseca Silva, Almenara

    2014-11-01

    The purpose of this study was to conduct a qualitative analysis of waste management practices among a group of Brazilian dental students (n=64) before and after implementing two different pedagogical methods: 1) the students attended a two-hour lecture based on World Health Organization standards; and 2) the students applied the lessons learned in an organized group setting aimed toward raising their awareness about socioenvironmental issues related to waste. All eligible students participated, and the students' learning was evaluated through their answers to a series of essay questions, which were quantitatively measured. Afterwards, the impact of the pedagogical approaches was compared by means of qualitative categorization of wastes generated in clinical activities. Waste categorization was performed for a period of eight consecutive days, both before and thirty days after the pedagogical strategies. In the written evaluation, 80 to 90 percent of the students' answers were correct. The qualitative assessment revealed a high frequency of incorrect waste disposal with a significant increase of incorrect disposal inside general and infectious waste containers (p<0.05). Although the students' theoretical learning improved, it was not enough to change behaviors established by cultural values or to encourage the students to adequately segregate and package waste material. PMID:25362694

  9. Simple method for model reference adaptive control

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1989-01-01

    A simple method is presented for combined signal synthesis and parameter adaptation within the framework of model reference adaptive control theory. The results are obtained using a simple derivation based on an improved Liapunov function.
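
    Seraji's derivation and improved Liapunov function are not reproduced here; the sketch below shows the generic Lyapunov-rule MRAC structure the abstract refers to (adaptive feedback and feedforward gains driving a first-order plant to follow a reference model), with illustrative plant and model parameters.

```python
import numpy as np

# Textbook Lyapunov-rule MRAC for a first-order plant (illustrative values;
# this is the generic structure, not Seraji's specific derivation).
# Plant:            x_dot  = a*x + b*u          (a, b unknown to the controller)
# Reference model:  xm_dot = am*xm + bm*r,  am < 0
# Control law:      u = kx*x + kr*r
# Adaptation:       kx_dot = -gamma*e*x*sign(b), kr_dot = -gamma*e*r*sign(b)
a, b = 1.0, 3.0            # unstable plant, unknown to the adaptive law
am, bm = -4.0, 4.0         # desired closed-loop dynamics
gamma, dt, T = 2.0, 1e-3, 20.0

x = xm = 0.0
kx = kr = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0      # square-wave reference
    u = kx * x + kr * r
    e = x - xm                                 # tracking error
    # Euler integration of plant, reference model and adaptive gains
    x  += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    kx += dt * (-gamma * e * x * np.sign(b))
    kr += dt * (-gamma * e * r * np.sign(b))

print("final gains kx, kr:", kx, kr, "(ideal:", (am - a) / b, bm / b, ")")
```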

  10. The Jakobsonian One-Stem Analysis of the Russian Verb: Adaptations and Pedagogical Strategies.

    ERIC Educational Resources Information Center

    Gaines, Billie D.

    1982-01-01

    The evolution of one-stem verb theory since Roman Jakobson's 1948 study of Russian conjugation is outlined, and adaptations of his one-stem conjugation methodology for current classroom use are discussed and compared. (MSE)

  11. The State of the Art of Teaching Research Methods in the Social Sciences: Towards a Pedagogical Culture

    ERIC Educational Resources Information Center

    Wagner, Claire; Garner, Mark; Kawulich, Barbara

    2011-01-01

    No formal pedagogical culture for research methods in the social sciences seems to exist and, as part of the authors' endeavour to establish such a culture, this article reviews current literature about teaching research methods and identifies the gaps in the research. Articles in academic journals spanning a 10-year period were collected by…

  12. A new orientation-adaptive interpolation method.

    PubMed

    Wang, Qing; Ward, Rabab Kreidieh

    2007-04-01

    We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free. PMID:17405424

  13. The Method of Adaptive Comparative Judgement

    ERIC Educational Resources Information Center

    Pollitt, Alastair

    2012-01-01

    Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…
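
    A rough illustration of the adaptive scoring idea, assuming a simulated judge: pair the two pieces of work whose current estimates are closest, record the judgement, and nudge the estimates with an online Bradley-Terry (Rasch-like) step. Real ACJ fits the Rasch model to the full judgement history; the simulated judge and all constants here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_quality = rng.normal(size=12)        # hidden quality of 12 pieces of work
theta = np.zeros_like(true_quality)       # current estimated parameters
lr = 0.3

def pick_adaptive_pair(theta, rng):
    """Pair items whose current estimates are closest (the adaptive step)."""
    order = np.argsort(theta + 1e-3 * rng.standard_normal(theta.size))
    k = rng.integers(len(order) - 1)
    return order[k], order[k + 1]

for _ in range(400):                      # 400 paired judgements
    i, j = pick_adaptive_pair(theta, rng)
    # Simulated judge: prefers i with a probability set by the true qualities.
    p_true = 1.0 / (1.0 + np.exp(-(true_quality[i] - true_quality[j])))
    i_wins = rng.random() < p_true
    # Online Bradley-Terry / Rasch-style gradient update of the estimates.
    p_hat = 1.0 / (1.0 + np.exp(-(theta[i] - theta[j])))
    grad = (1.0 if i_wins else 0.0) - p_hat
    theta[i] += lr * grad
    theta[j] -= lr * grad

print("rank correlation:",
      np.corrcoef(np.argsort(np.argsort(theta)),
                  np.argsort(np.argsort(true_quality)))[0, 1])
```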

  14. Method Sections of Management Research Articles: A Pedagogically Motivated Qualitative Study

    ERIC Educational Resources Information Center

    Lim, Jason Miin Hwa

    2006-01-01

    Notwithstanding the voluminous literature devoted to research genres, more investigation needs to be conducted to demonstrate the pedagogical significance of studying linguistic features in relation to communicative functions. Motivated by a concern for the pedagogical applicability of genre analysis, this paper investigates the extent to which…

  15. The Development of Thai Pre-Service Chemistry Teachers' Pedagogical Content Knowledge: From a Methods Course to Field Experience

    ERIC Educational Resources Information Center

    Faikhamta, Chatree; Coll, Richard K.; Roadrangka, Vantipa

    2009-01-01

    This study investigated the journey of four Thai pre-service chemistry teachers as they sought to develop their Pedagogical Content Knowledge (PCK) throughout a PCK-based chemistry methods course and field experience. In an interpretive case study approach we drew upon classroom observations, semi-structured interviews, chemistry content knowledge…

  16. The Transnational and National Dimensions of Pedagogical Ideas: The Case of the Project Method, 1918-1939

    ERIC Educational Resources Information Center

    Del Mar Del Pozo Andres, Maria

    2009-01-01

    The goal of this article is to assess the national and transnational forms of the spread and reception of pedagogical ideas through a very concrete example, namely, the study of the project method. There are several good reasons for choosing this subject. In the first place, it was quite important that the theoretical construct of the New…

  17. Variational method for adaptive grid generation

    SciTech Connect

    Brackbill, J.U.

    1983-01-01

    A variational method for generating adaptive meshes is described. Functionals measuring smoothness, skewness, orientation, and the Jacobian are minimized to generate a mapping from a rectilinear domain in natural coordinates to an arbitrary domain in physical coordinates. From the mapping, a mesh is easily constructed. In using the method to adaptively zone computational problems, as few as one third the number of mesh points are required in each coordinate direction compared with a uniformly zoned mesh.

  18. The Dissemination of Pedagogical Patterns

    ERIC Educational Resources Information Center

    Bennedsen, Jens

    2006-01-01

    Pedagogical patterns have been around since 1995, but several authors claim their impact is limited. However, these claims are based on authors' own observations and not on methodical evaluations of the use and dissemination of pedagogical patterns. This claim is in contrast to the vision of the creators of pedagogical patterns--they think…

  19. Adaptive Finite Element Methods in Geodynamics

    NASA Astrophysics Data System (ADS)

    Davies, R.; Davies, H.; Hassan, O.; Morgan, K.; Nithiarasu, P.

    2006-12-01

    Adaptive finite element methods are presented for improving the quality of solutions to two-dimensional (2D) and three-dimensional (3D) convection dominated problems in geodynamics. The methods demonstrate the application of existing technology in the engineering community to problems within the 'solid' Earth sciences. Two-Dimensional 'Adaptive Remeshing': The 'remeshing' strategy introduced in 2D adapts the mesh automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. The approach requires the coupling of an automatic mesh generator, a finite element flow solver and an error estimator. In this study, the procedure is implemented in conjunction with the well-known geodynamical finite element code 'ConMan'. An unstructured quadrilateral mesh generator is utilised, with mesh adaptation accomplished through regeneration. This regeneration employs information provided by an interpolation based local error estimator, obtained from the computed solution on an existing mesh. The technique is validated by solving thermal and thermo-chemical problems with known benchmark solutions. In a purely thermal context, results illustrate that the method is highly successful, improving solution accuracy whilst increasing computational efficiency. For thermo-chemical simulations the same conclusions can be drawn. However, results also demonstrate that the grid based methods employed for simulating the compositional field are not competitive with the other methods (tracer particle and marker chain) currently employed in this field, even at the higher spatial resolutions allowed by the adaptive grid strategies. Three-Dimensional Adaptive Multigrid: We extend the ideas from our 2D work into the 3D realm in the context of a pre-existing 3D-spherical mantle dynamics code, 'TERRA'. In its original format, 'TERRA' is computationally highly efficient since it employs a multigrid solver that depends upon a grid utilizing a clever…

  20. A New Adaptive Image Denoising Method

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    In this paper, a new adaptive image denoising method is proposed that follows the soft-thresholding technique. In our method, a new threshold function is also proposed, which is determined by taking various combinations of the noise level, noise-free signal variance, subband size, and decomposition level. It is simple and adaptive as it depends on data-driven parameter estimation in each subband. The state-of-the-art denoising methods, viz. VisuShrink, SureShrink, BayesShrink, WIDNTF and IDTVWT, are not able to modify the coefficients efficiently enough to provide good image quality. Our method removes the noise from the noisy image significantly and provides better visual quality.
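
    The paper's specific threshold function is not reproduced here; the sketch below shows the generic wavelet soft-thresholding pipeline it builds on, using the classical VisuShrink universal threshold and assuming the PyWavelets package is available.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def soft_threshold_denoise(noisy, wavelet="db8", level=3):
    """Generic wavelet soft-thresholding (VisuShrink-style universal threshold);
    the paper's adaptive, subband-dependent threshold is not reproduced here."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
    denoised = [coeffs[0]]  # keep the approximation subband untouched
    for cH, cV, cD in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(c, thr, mode="soft")
                              for c in (cH, cV, cD)))
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(256), np.hanning(256))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
print("noise std before/after:",
      np.std(noisy - clean), np.std(soft_threshold_denoise(noisy) - clean))
```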

  1. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences of data distributions widely exist among datasets, i.e., domains. For many pattern recognition, natural language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for domains with large-scale samples and is easy to extend for multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.

  2. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively time-consuming, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provided a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error region to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and the last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration…
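
    A one-dimensional sketch of the redistribution step described above: a weight function built from the solution gradient is equidistributed by inverting its cumulative integral, which is the essence of the algebraic (equidistribution-law) approach. The weight definition and the test function are illustrative assumptions.

```python
import numpy as np

def equidistribute(x, u, alpha=5.0):
    """Redistribute the 1-D grid x so that the weight w = 1 + alpha*|du/dx|
    is equidistributed (the algebraic adaptation step; the weight choice is
    illustrative)."""
    w = 1.0 + alpha * np.abs(np.gradient(u, x))
    # Cumulative monitor integral, normalised to [0, 1].
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    W /= W[-1]
    # New grid: invert W(x) at equally spaced values of the monitor.
    return np.interp(np.linspace(0.0, 1.0, x.size), W, x)

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(40.0 * (x - 0.5))            # solution with a sharp internal layer
x_new = equidistribute(x, u)
print("smallest cell near the layer:", np.min(np.diff(x_new)))
```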

  3. PAR for the Course: A Congruent Pedagogical Approach for a PAR Methods Class

    ERIC Educational Resources Information Center

    Hammond, Joyce D.; Hicks, Maria; Kalman, Rowenn; Miller, Jason

    2005-01-01

    In the past two years, three graduate students and a senior faculty member have co-taught a participatory action research (PAR) course to undergraduate and graduate students. In this article the co-teachers advocate a set of pedagogical principles and practices in a PAR-oriented classroom that establishes congruency with community PAR projects in…

  4. Electric Conduction in Semiconductors: A Pedagogical Model Based on the Monte Carlo Method

    ERIC Educational Resources Information Center

    Capizzo, M. C.; Sperandeo-Mineo, R. M.; Zarcone, M.

    2008-01-01

    We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier…

  5. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  6. An adaptive selective frequency damping method

    NASA Astrophysics Data System (ADS)

    Jordi, Bastien; Cotter, Colin; Sherwin, Spencer

    2015-03-01

    The selective frequency damping (SFD) method is used to obtain unstable steady-state solutions of dynamical systems. The stability of this method is governed by two parameters that are the control coefficient and the filter width. Convergence is not guaranteed for arbitrary choice of these parameters. Even when the method does converge, the time necessary to reach a steady-state solution may be very long. We present an adaptive SFD method. We show that by modifying the control coefficient and the filter width all along the solver execution, we can reach an optimum convergence rate. This method is based on successive approximations of the dominant eigenvalue of the flow studied. We design a one-dimensional model to select SFD parameters that enable us to control the evolution of the least stable eigenvalue of the system. These parameters are then used for the application of the SFD method to the multi-dimensional flow problem. We apply this adaptive method to a set of classical test cases of computational fluid dynamics and show that the steady-state solutions obtained are similar to what can be found in the literature. Then we apply it to a specific vortex dominated flow (of interest for the automotive industry) whose stability had never been studied before. Seventh Framework Programme of the European Commission - ANADE project under Grant Contract PITN-GA-289428.
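
    The adaptive tuning of the control coefficient and filter width is the paper's contribution and is not reproduced here; the sketch below applies the underlying fixed-parameter SFD formulation to a toy oscillatory instability, with hand-picked parameter values.

```python
import numpy as np

# Fixed-parameter selective frequency damping (the paper's adaptive tuning of
# chi and Delta is not reproduced).  Toy problem: a Hopf-type oscillator whose
# steady state q = 0 is unstable (growth rate sigma, frequency omega).
sigma, omega = 0.1, 1.0
chi, Delta = 0.5, 2.0          # control coefficient and filter width (hand-picked)
dt, nsteps = 1e-2, 20000

def f(q):
    x, y = q
    r2 = x * x + y * y
    return np.array([sigma * x - omega * y - x * r2,
                     omega * x + sigma * y - y * r2])

q = np.array([0.5, 0.0])       # start away from the unstable steady state
q_bar = q.copy()               # low-pass filtered copy of the state
for _ in range(nsteps):
    # SFD: damp the difference between the state and its low-pass filtered copy.
    q_dot = f(q) - chi * (q - q_bar)
    q_bar_dot = (q - q_bar) / Delta
    q, q_bar = q + dt * q_dot, q_bar + dt * q_bar_dot

print("distance from the unstable steady state:", np.linalg.norm(q))
```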

  7. The dissemination of pedagogical patterns

    NASA Astrophysics Data System (ADS)

    Bennedsen, Jens

    2006-06-01

    Pedagogical patterns have been around since 1995, but several authors claim their impact is limited. However, these claims are based on authors' own observations and not on methodical evaluations of the use and dissemination of pedagogical patterns. This claim is in contrast to the vision of the creators of pedagogical patterns—they think pedagogical patterns can be used for capturing and transferring knowledge of teaching within the community. In this article, I analyse the results of a questionnaire and try to answer whether the creators of the pedagogical patterns have reached their vision among computer science teachers at universities around the world. The results indicate a rather high familiarity with pedagogical patterns. The attitude towards pedagogical patterns is positive among both teachers with knowledge and teachers without knowledge of pedagogical patterns. The attitude towards pedagogical patterns as a tool for further development of teaching is to some extent positive. The results show that the group with knowledge of pedagogical patterns, as well as the group with a positive attitude towards pedagogical patterns, is difficult to describe.

  8. The influence of alternative pedagogical methods in postsecondary biology education: How do students experience a multimedia case-study environment?

    NASA Astrophysics Data System (ADS)

    Wolter, Bjorn Hugo Karl

    …that allowed students to fit all the pieces of their previous academic instruction together into a single, comprehensive picture, and to place themselves within that picture. Students enjoyed the autonomy and personal connections that using case studies and multimedia content offered, and found the material more engaging and relevant. By involving students in real-world situations, Case It! demonstrated the application and effect of theoretical knowledge and stimulated students' curiosity. Case It! motivates students by making material relevant and personal, thus creating enduring links between students and content which can result in better performance and higher retention rates. It is an effective pedagogical tool that, unlike many other such tools, is not instructor dependent, and is adaptable to fit various learner types, settings, and levels.

  9. An Adaptive VOF Method on Unstructured Grid

    NASA Astrophysics Data System (ADS)

    Wu, L. L.; Huang, M.; Chen, B.

    2011-09-01

    In order to improve the accuracy of interface capturing while maintaining computational efficiency, an adaptive VOF method on unstructured grids is proposed in this paper. The volume fraction in each cell is regarded as the criterion to locally refine the interface cell. With the movement of the interface, new interface cells (0 ≤ f ≤ 1) are subdivided into child cells, while those child cells that no longer contain the interface are merged back into the original parent cell. In order to avoid the complicated redistribution of volume fraction during the subdivision and amalgamation procedure, a predictor-corrector algorithm is proposed to implement the subdivision and amalgamation procedures only in empty or full cells (f = 0 or 1). Thus the volume fraction in the new cell can take the value from the original cell directly, and interpolation of the interface is avoided. The advantage of this method is that re-generation of the whole grid system is not necessary, so its implementation is very efficient. Moreover, an advection flow test of a hollow square was performed, and the relative shape error of the result obtained with the adaptive mesh is smaller than that obtained with the non-refined grid, which verifies the validity of our method.

  10. Ensemble transform sensitivity method for adaptive observations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan

    2016-01-01

    The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.

  11. Adaptive characterization method for desktop color printers

    NASA Astrophysics Data System (ADS)

    Shen, Hui-Liang; Zheng, Zhi-Huan; Jin, Chong-Chao; Du, Xin; Shao, Si-Jie; Xin, John H.

    2013-04-01

    With the rapid development of multispectral imaging technique, it is desired that the spectral color can be accurately reproduced using desktop color printers. However, due to the specific spectral gamuts determined by printer inks, it is almost impossible to exactly replicate the reflectance spectra in other media. In addition, as ink densities can not be individually controlled, desktop printers can only be regarded as red-green-blue devices, making physical models unfeasible. We propose a locally adaptive method, which consists of both forward and inverse models, for desktop printer characterization. In the forward model, we establish the adaptive transform between control values and reflectance spectrum on individual cellular subsets by using weighted polynomial regression. In the inverse model, we first determine the candidate space of the control values based on global inverse regression and then compute the optimal control values by minimizing the color difference between the actual spectrum and the predicted spectrum via forward transform. Experimental results show that the proposed method can reproduce colors accurately for different media under multiple illuminants.

  12. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  13. Adaptive Accommodation Control Method for Complex Assembly

    NASA Astrophysics Data System (ADS)

    Kang, Sungchul; Kim, Munsang; Park, Shinsuk

    Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, forces and torque sensation, and tactile contact clues. By examining the human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target approachable motion that leads the object to move closer to a desired target position, while contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.

  14. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

    When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g. larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and as the number of idle processors increases, the effective speed improvement gained by using a parallel processor decreases.

  15. Linearly-Constrained Adaptive Signal Processing Methods

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.

    1988-01-01

    In adaptive least-squares estimation problems, a desired signal d(n) is estimated using a linear combination of L observation samples x1(n), x2(n), …, xL(n), denoted by the vector X(n). The estimate is formed as the inner product of this vector with a corresponding L-dimensional weight vector W. One particular weight vector of interest is Wopt, which minimizes the mean-square difference between d(n) and the estimate. In this context, the term 'mean-square difference' is a quadratic measure such as statistical expectation or time average. The specific value of W which achieves the minimum is given by the product of the inverse data covariance matrix and the cross-correlation between the data vector and the desired signal. The latter is often referred to as the P-vector. For those cases in which time samples of both the desired and data vector signals are available, a variety of adaptive methods have been proposed which will guarantee that an iterative weight vector Wa(n) converges (in some sense) to the optimal solution. Two which have been extensively studied are the recursive least-squares (RLS) method and the LMS gradient approximation approach. There are several problems of interest in the communication and radar environment in which the optimal least-squares weight set is of interest and in which time samples of the desired signal are not available. Examples can be found in array processing, in which only the direction of arrival of the desired signal is known, and in single-channel filtering, where the spectrum of the desired response is known a priori. One approach to these problems which has been suggested is the P-vector algorithm, which is an LMS-like approximate gradient method. Although it is easy to derive the mean and variance of the weights which result with this algorithm, there has never been an identification of the corresponding underlying error surface which the procedure searches. The purpose of this paper is to suggest an alternative…
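
    A small sketch of the P-vector algorithm as the abstract characterizes it: an LMS-like stochastic-gradient update that uses an a-priori cross-correlation vector P in place of samples of the desired signal. The signal model and step size are illustrative assumptions.

```python
import numpy as np

# P-vector algorithm as characterised above: an LMS-like stochastic-gradient
# update that uses an a-priori cross-correlation vector P instead of samples
# of the desired signal d(n):
#     W(n+1) = W(n) + mu * (P - X(n) X(n)^T W(n))
rng = np.random.default_rng(0)
L, mu, nsteps = 4, 0.01, 20000
w_true = np.array([1.0, -0.5, 0.25, 0.1])          # generates the desired signal

X_all = rng.standard_normal((nsteps, L))
d_all = X_all @ w_true + 0.1 * rng.standard_normal(nsteps)
P = (X_all * d_all[:, None]).mean(axis=0)          # assumed known a priori
R = X_all.T @ X_all / nsteps                       # data covariance (reference only)

W = np.zeros(L)
for X in X_all:
    W = W + mu * (P - np.outer(X, X) @ W)          # never touches d(n) itself

print("P-vector LMS weights:", W)
print("optimal R^-1 P      :", np.linalg.solve(R, P))
```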

  16. An adaptive SPH method for strong shocks

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Trujillo, Leonardo

    2009-09-01

    We propose an alternative SPH scheme to usual SPH Godunov-type methods for simulating supersonic compressible flows with sharp discontinuities. The method relies on an adaptive density kernel estimation (ADKE) algorithm, which allows the width of the kernel interpolant to vary locally in space and time so that the minimum necessary smoothing is applied in regions of low density. We have performed a von Neumann stability analysis of the SPH equations for an ideal gas and derived the corresponding dispersion relation in terms of the local width of the kernel. Solution of the dispersion relation in the short wavelength limit shows that stability is achieved for a wide range of the ADKE parameters. Application of the method to high Mach number shocks confirms the predictions of the linear analysis. Examples of the resolving power of the method are given for a set of difficult problems, involving the collision of two strong shocks, the strong shock-tube test, and the interaction of two blast waves.

  17. Adaptive wavelet methods - Matrix-vector multiplication

    NASA Astrophysics Data System (ADS)

    Černá, Dana; Finěk, Václav

    2012-12-01

    The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into the well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally derivation of its finite-dimensional version which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we briefly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by an off-diagonal decay of the entries of the wavelet stiffness matrix. We propose here a new approach which better utilizes the actual decay of the matrix entries.

  18. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  19. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  20. Combining advanced networked technology and pedagogical methods to improve collaborative distance learning.

    PubMed

    Staccini, Pascal; Dufour, Jean-Charles; Raps, Hervé; Fieschi, Marius

    2005-01-01

    Making educational material available on a network cannot be reduced to merely implementing hypermedia and interactive resources on a server. A pedagogical schema has to be defined to guide students in their learning and to provide teachers with guidelines to prepare valuable and upgradeable resources. Components of a learning environment, as well as interactions between students and other roles such as author, tutor and manager, can be deduced from cognitive foundations of learning, such as the constructivist approach. Scripting the way a student will navigate among information nodes and interact with tools to build his/her own knowledge can be a good way of deducing the features of the graphic interface related to the management of the objects. We defined a typology of pedagogical resources, their data model and their logic of use. We implemented a generic and web-based authoring and publishing platform (called J@LON for Join And Learn On the Net) within an object-oriented and open-source programming environment (called Zope) embedding a content management system (called Plone). Workflow features have been used to mark the progress of students and to trace the life cycle of resources shared by the teaching staff. The platform integrated advanced online authoring features to create interactive exercises and to support the delivery of live courses. The platform engine has been generalized to the whole curriculum of medical studies in our faculty; it also supports an international master's degree in risk management in health care and will be extended to all other continuing education diplomas. PMID:16160271

  1. Online Adaptive Replanning Method for Prostate Radiotherapy

    SciTech Connect

    Ahunbay, Ergun E.; Peng Cheng; Holmes, Shannon; Godley, Andrew; Lawton, Colleen; Li, X. Allen

    2010-08-01

    Purpose: To report the application of an adaptive replanning technique for prostate cancer radiotherapy (RT), consisting of two steps: (1) segment aperture morphing (SAM), and (2) segment weight optimization (SWO), to account for interfraction variations. Methods and Materials: The new 'SAM+SWO' scheme was retroactively applied to the daily CT images acquired for 10 prostate cancer patients on a linear accelerator and CT-on-Rails combination during the course of RT. Doses generated by the SAM+SWO scheme based on the daily CT images were compared with doses generated after patient repositioning using the current planning target volume (PTV) margin (5 mm, 3 mm toward rectum) and a reduced margin (2 mm), along with full reoptimization scans based on the daily CT images to evaluate dosimetry benefits. Results: For all cases studied, the online replanning method provided significantly better target coverage when compared with repositioning with reduced PTV (13% increase in minimum prostate dose) and improved organ sparing when compared with repositioning with regular PTV (13% decrease in the generalized equivalent uniform dose of rectum). The time required to complete the online replanning process was 6 ± 2 minutes. Conclusion: The proposed online replanning method can be used to account for interfraction variations for prostate RT with a practically acceptable time frame (5-10 min) and with significant dosimetric benefits. On the basis of this study, the developed online replanning scheme is being implemented in the clinic for prostate RT.

  2. Case study method and problem-based learning: utilizing the pedagogical model of progressive complexity in nursing education.

    PubMed

    McMahon, Michelle A; Christopher, Kimberly A

    2011-01-01

    As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students. PMID:22718667

  3. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Cololla, P.

    1995-07-01

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
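
    As a minimal illustration of the flag-and-refine idea described above (not Berger's AMR implementation), the sketch below marks cells whose gradient-based error indicator exceeds a tolerance and overlays extra points only in those cells. The indicator, tolerance, and refinement ratio are illustrative choices.

```python
import numpy as np

# Illustrative sketch: cells are flagged by a generalized error indicator
# (here simply a scaled gradient) and only flagged cells receive finer points.
def flag_cells(x, u, tol):
    """Return indices of cells whose gradient-based indicator exceeds tol."""
    indicator = np.abs(np.gradient(u, x))
    return np.where(indicator > tol)[0]

def refine_flagged(x, flagged, ratio=2):
    """Insert (ratio - 1) extra points inside every flagged cell."""
    new_points = []
    for i in flagged:
        if i + 1 < len(x):
            new_points.extend(np.linspace(x[i], x[i + 1], ratio + 1)[1:-1])
    return np.sort(np.concatenate([x, np.asarray(new_points)]))

x = np.linspace(0.0, 1.0, 51)
u = np.tanh(50.0 * (x - 0.5))                 # steep front near x = 0.5
fine_x = refine_flagged(x, flag_cells(x, u, tol=5.0))
```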

  4. Principles and Methods of Adapted Physical Education.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    Programs in adapted physical education are presented preceded by a background of services for the handicapped, by the psychosocial implications of disability, and by the growth and development of the handicapped. Elements of conducting programs discussed are organization and administration, class organization, facilities, exercise programs…

  5. QUEST - A Bayesian adaptive psychometric method

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Pelli, D. G.

    1983-01-01

    An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
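
    A minimal sketch of the QUEST idea follows: maintain a posterior over the log threshold, place each trial at the current posterior mode, and update the posterior with a psychometric-function likelihood. The Weibull parameters, grid, and simulated observer below are illustrative assumptions, not the published procedure's exact settings.

```python
import numpy as np

# Minimal QUEST-style sketch (assumptions: Weibull psychometric function in
# log intensity with fixed slope, guess and lapse rates; not the authors' code).
def weibull_p(log_x, log_thresh, beta=3.5, gamma=0.5, delta=0.01):
    """Probability of a correct response at log intensity log_x."""
    p = 1.0 - np.exp(-10.0 ** (beta * (log_x - log_thresh)))
    return delta * gamma + (1.0 - delta) * (gamma + (1.0 - gamma) * p)

def run_quest(respond, n_trials=40, grid=np.linspace(-3.0, 1.0, 400)):
    """Place each trial at the current most probable threshold estimate."""
    log_post = np.zeros_like(grid)              # flat prior over log threshold
    for _ in range(n_trials):
        test_level = grid[np.argmax(log_post)]  # posterior mode
        correct = respond(test_level)           # query the observer
        p = weibull_p(test_level, grid)
        log_post += np.log(p if correct else 1.0 - p)
    return grid[np.argmax(log_post)]

# Usage with a simulated observer whose true log threshold is -1.0:
rng = np.random.default_rng(0)
observer = lambda lx: rng.random() < weibull_p(lx, -1.0)
print(run_quest(observer))
```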

  6. Adaptive method of realizing natural gradient learning for multilayer perceptrons.

    PubMed

    Amari, S; Park, H; Fukumizu, K

    2000-06-01

    The natural gradient learning method is known to have ideal performance for on-line training of multilayer perceptrons. It avoids plateaus, which give rise to slow convergence of the backpropagation method. It is Fisher efficient, whereas the conventional method is not. However, implementing the method requires calculating the Fisher information matrix and its inverse, which is practically very difficult. This article proposes an adaptive method of directly obtaining the inverse of the Fisher information matrix. It generalizes the adaptive Gauss-Newton algorithms and provides a solid theoretical justification for them. Simulations show that the proposed adaptive method works very well for realizing natural gradient learning. PMID:10935719
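
    The sketch below illustrates the kind of adaptive update the abstract describes: the inverse Fisher information estimate is maintained directly by a rank-one recursion and used for a natural-gradient parameter step. The step sizes and the toy quadratic loss are assumptions for illustration; consult the paper for the exact recursion and its convergence conditions.

```python
import numpy as np

# Hedged sketch of an adaptive natural-gradient step: the inverse Fisher
# matrix is tracked directly with a rank-one update instead of being
# recomputed and inverted each step (step sizes are illustrative).
def adaptive_natural_gradient_step(theta, G_inv, grad, eta=0.01, eps=0.001):
    """One online update of the parameters and the running inverse Fisher estimate."""
    g = grad.reshape(-1, 1)
    # Rank-one recursion for the inverse Fisher information estimate.
    G_inv = (1.0 + eps) * G_inv - eps * (G_inv @ g) @ (g.T @ G_inv)
    # Natural-gradient parameter update.
    theta = theta - eta * (G_inv @ g).ravel()
    return theta, G_inv

# Usage on a toy quadratic loss 0.5 * ||A theta - b||^2:
rng = np.random.default_rng(1)
A, b = rng.normal(size=(5, 3)), rng.normal(size=5)
theta, G_inv = np.zeros(3), np.eye(3)
for _ in range(200):
    grad = A.T @ (A @ theta - b)
    theta, G_inv = adaptive_natural_gradient_step(theta, G_inv, grad)
```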

  7. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  8. Adaptive method for electron bunch profile prediction

    NASA Astrophysics Data System (ADS)

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using MATLAB and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET.
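
    A hedged sketch of bounded-update extremum-seeking tuning in the same spirit: each parameter is dithered at its own frequency and drifts toward settings that minimize a measured cost, with a per-step change that stays bounded regardless of the cost's (unknown) analytic form. The gains, frequencies, and quadratic mock cost are illustrative, not the experiment's settings.

```python
import numpy as np

# Hedged sketch of bounded-update extremum seeking on an analytically unknown
# cost: the per-step change of each component is at most sqrt(alpha*omega)*dt.
def extremum_seeking(cost, x0, n_steps=2000, dt=0.05, k=2.0, alpha=0.5):
    x = np.array(x0, dtype=float)
    omegas = 10.0 * (1.0 + np.arange(x.size))      # distinct dither frequencies
    for i in range(n_steps):
        c = cost(x)                                 # e.g. spectrum mismatch
        x += dt * np.sqrt(alpha * omegas) * np.cos(omegas * i * dt + k * c)
    return x

# Usage: tune two mock "rf phase" knobs to minimize a quadratic mismatch.
target = np.array([0.3, -0.7])
cost = lambda x: float(np.sum((x - target) ** 2))
print(extremum_seeking(cost, [0.0, 0.0]))
```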

  9. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using MATLAB and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.

  10. Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].

    ERIC Educational Resources Information Center

    Eseryel, Deniz; Spector, J. Michael

    ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…

  11. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. They do not always give good image quality, however, because the threshold prevents them from modifying and removing many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
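
    The sketch below shows a generic neighbourhood-based wavelet shrinkage of the NeighShrink family, in which each detail coefficient is scaled by a factor computed from the energy of its neighbouring coefficients. The 3x3 window, universal threshold, and wavelet choice are assumptions for illustration and do not reproduce the adaptive threshold proposed in the paper.

```python
import numpy as np
import pywt                                # PyWavelets
from scipy.ndimage import uniform_filter

# Hedged sketch of neighbourhood-based wavelet shrinkage: every detail
# coefficient is scaled by max(1 - lambda^2 / S^2, 0), where S^2 is the energy
# of a small window of neighbouring coefficients (window and threshold are
# illustrative choices, not the paper's exact rule).
def neigh_shrink(image, wavelet="db1", level=2, win=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # noise estimate
    lam2 = 2.0 * sigma ** 2 * np.log(image.size)            # universal threshold^2
    out = [coeffs[0]]
    for details in coeffs[1:]:
        shrunk = []
        for d in details:
            s2 = uniform_filter(d ** 2, size=win) * win ** 2  # window energy
            factor = np.clip(1.0 - lam2 / np.maximum(s2, 1e-12), 0.0, None)
            shrunk.append(d * factor)
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)

# Usage on a synthetic noisy 2-D array:
rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))
denoised = neigh_shrink(noisy)
```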

  12. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  13. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.

  14. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.

  15. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  16. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  17. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  18. Pedagogical effectiveness of innovative teaching methods initiated at the Department of Physiology, Government Medical College, Chandigarh.

    PubMed

    Nageswari, K Sri; Malhotra, Anita S; Kapoor, Nandini; Kaur, Gurjit

    2004-12-01

    Modern teaching trends in medical education exhibit a paradigm shift from the conventional classroom teaching methods of the past to nonconventional teaching aids that encourage interactive forms of learning through active participation and integrative reasoning; in the process, the relationship between the teacher and the taught has undergone tremendous transformation. The nonconventional teaching methods adopted at our department promote active student participation through computer-assisted learning (CD-ROMs), web-based learning (undergraduate projects), virtual laboratories, seminars, audiovisual aids (video-based demonstrations), and a "physioquiz." PMID:15149960

  19. Pedagogical Innovation and Music Education in Spain: Introducing the Dalcroze Method in Catalonia

    ERIC Educational Resources Information Center

    Comas Rubí, Francesca; Motilla-Salas, Xavier; Sureda-Garcia, Bernat

    2014-01-01

    The aim of this paper is to analyse how the Dalcroze method was introduced to Spain and became known there, more specifically in the Catalonia of the "Noucentisme" movement, and why it made the greatest impact and was more widely disseminated in this particular region of Spain. Following a summary of Dalcroze's contributions to…

  20. Signature Pedagogies and Legal Education in Universities: Epistemological and Pedagogical Concerns with Langdellian Case Method

    ERIC Educational Resources Information Center

    Hyland, Aine; Kilcommins, Shane

    2009-01-01

    This paper offers an analysis of Lee S. Shulman's concept of "signature pedagogies" as it relates to legal education. In law, the signature pedagogy identified by Shulman is the Langdellian case method. Though the concept of signature pedagogies provides an excellent infrastructure for the exchange of teaching ideas, Shulman has a tendency to…

  1. Adaptive sequential methods for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Walker, Ernest

    2013-06-01

    In this paper, we propose new sequential methods for detecting port-scan attackers, which routinely perform random "portscans" of IP addresses to find vulnerable servers to compromise. In addition to rigorously controlling the probability of falsely implicating benign remote hosts as malicious, our method performs significantly faster than other current solutions. Moreover, our method guarantees that the maximum amount of observational time is bounded. In contrast to the previously most effective method, the Threshold Random Walk algorithm, which is explicit and analytical in nature, our proposed algorithm involves parameters to be determined by numerical methods. We have introduced computational techniques such as iterative minimax optimization for quick determination of the parameters of the new detection algorithm. A multi-valued decision framework for detecting portscanners and DoS attacks is also proposed.
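
    As background for the sequential approach described above, the sketch below classifies a remote host with a sequential probability ratio test over its connection outcomes, stopping early at either threshold and capping the number of observations. The failure probabilities, thresholds, and observation cap are illustrative assumptions; the paper's algorithm instead determines its parameters numerically (e.g., by iterative minimax optimization).

```python
import math

# Hedged sketch of sequential port-scan detection in the spirit of a
# sequential probability ratio test. Each observation is 1 if a probed
# connection failed, 0 if it succeeded; scanners fail far more often than
# benign hosts. All numeric settings here are illustrative assumptions.
def classify_host(outcomes, p_fail_benign=0.2, p_fail_scanner=0.8,
                  upper=math.log(99.0), lower=math.log(1.0 / 99.0),
                  max_obs=50):
    llr = 0.0                                  # running log-likelihood ratio
    for i, failed in enumerate(outcomes):
        if i >= max_obs:                       # bounded observation time
            break
        p1 = p_fail_scanner if failed else 1.0 - p_fail_scanner
        p0 = p_fail_benign if failed else 1.0 - p_fail_benign
        llr += math.log(p1 / p0)
        if llr >= upper:
            return "scanner"
        if llr <= lower:
            return "benign"
    return "undecided"                         # forced-decision point reached

print(classify_host([1, 1, 0, 1, 1, 1, 1]))
```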

  2. Adaptive finite-element method for diffraction gratings

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Chen, Zhiming; Wu, Haijun

    2005-06-01

    A second-order finite-element adaptive strategy with error control for one-dimensional grating problems is developed. The unbounded computational domain is truncated to a bounded one by a perfectly-matched-layer (PML) technique. The PML parameters, such as the thickness of the layer and the medium properties, are determined through sharp a posteriori error estimates. The adaptive finite-element method is expected to increase significantly the accuracy and efficiency of the discretization as well as reduce the computation cost. Numerical experiments are included to illustrate the competitiveness of the proposed adaptive method.

  3. Adaptive multiscale method for two-dimensional nanoscale adhesive contacts

    NASA Astrophysics Data System (ADS)

    Tong, Ruiting; Liu, Geng; Liu, Lan; Wu, Liyan

    2013-05-01

    There are two separate traditional approaches to modeling contact problems: continuum and atomistic theory. Continuum theory is used successfully in many domains, but when the scale of the model reaches the nanometer level, the continuum approximation meets challenges. Atomistic theory can capture the detailed behavior of individual atoms using molecular dynamics (MD) or quantum mechanics; although accurate, it is usually time-consuming. A multiscale method coupling MD and finite elements (FE) is presented. To mesh the FE region automatically, an adaptive method based on the strain energy gradient is introduced into the multiscale method to constitute an adaptive multiscale method. Using the proposed method, adhesive contacts between a rigid cylinder and an elastic substrate are studied, and the results are compared with full MD simulations. The FE mesh refinement process shows that the adaptive multiscale method makes FE mesh generation more flexible. Comparison of the displacements of boundary atoms in the overlap region with the results from full MD simulations indicates that the adaptive multiscale method transfers displacements effectively. Displacements of atoms and FE nodes on the center line of the multiscale model agree well with those of atoms in full MD simulations, which shows continuity in the overlap region. Furthermore, the von Mises stress contours and contact force distributions in the contact region are almost the same as in full MD simulations. The presented method combines a multiscale method with an adaptive technique, and provides a more effective approach to multiscale modeling and to the investigation of nanoscale contact problems.

  4. Fast adaptive composite grid methods on distributed parallel architectures

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Quinlan, Daniel

    1992-01-01

    The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment is a property of the algorithm and not dependent on peculiarities of any machine.

  5. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to a heterogeneous medium and to an actual field case in South America.

  6. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
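
    To illustrate the adaptive time-stepping ingredient mentioned above, the sketch below uses the gap between an explicit predictor and a trapezoidal corrector as a local error estimate and grows or shrinks the time step to meet a tolerance. The tolerances, step-size controller, and toy ODE are assumptions; the paper applies this idea within an elasto-viscoplastic damage analysis, not to the toy problem below.

```python
import numpy as np

# Hedged sketch of adaptive time-stepping with a predictor-corrector pair:
# the difference between the explicit Euler predictor and the trapezoidal
# corrector serves as a local error estimate that drives the step size.
def adaptive_heun(f, y0, t0, t_end, dt=1e-2, tol=1e-5):
    t, y, history = t0, np.asarray(y0, float), []
    while t < t_end:
        dt = min(dt, t_end - t)
        pred = y + dt * f(t, y)                            # predictor step
        corr = y + 0.5 * dt * (f(t, y) + f(t + dt, pred))  # corrector step
        err = np.max(np.abs(corr - pred))                  # local error estimate
        if err <= tol:                                     # accept the step
            t, y = t + dt, corr
            history.append((t, y.copy()))
        # Grow or shrink the step, keeping the change within [0.2, 2.0].
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return history

# Usage: moderately stiff exponential decay.
sol = adaptive_heun(lambda t, y: -50.0 * y, [1.0], 0.0, 1.0)
```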

  7. [Problem-based learning, description of a pedagogical method leading to evidence-based medicine].

    PubMed

    Chalon, P; Delvenne, C; Pasleau, F

    2000-04-01

    Problem-Based Learning is an educational method which uses health care scenarios to provide a context for learning and to elaborate knowledge through discussion. Additional expectations are to stimulate critical thinking and problem-solving skills, and to develop clinical reasoning taking into account the patient's psychosocial environment and preferences, the economic requirements as well as the best evidence from biomedical research. Appearing at the end of the 60's, it has been adopted by 10% of medical schools world-wide. PBL follows the same rules as Evidence-Based Medicine but is student-centered and provides the information-seeking skills necessary for self-directed life long learning. In this short article, we review the theoretical basis and process of PBL, emphasizing the teacher-student relationship and discussing the suggested advantages and disadvantages of this curriculum. Students in PBL programs make greater use of self-selected references and online searching. From this point of view, PBL strengthens the role of health libraries in medical education, and prepares the future physician for Evidence-Based Medicine. PMID:10909306

  8. An auto-adaptive background subtraction method for Raman spectra.

    PubMed

    Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun

    2016-05-15

    Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual parameter tuning of the background subtraction method is necessary for efficient removal of the background, which makes the quality of the spectrum empirically dependent. In order to avoid artificial bias, we propose an auto-adaptive background subtraction method without parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of the background subtraction. Both simulated data and Raman spectra have been used to evaluate the proposed method. By comparison with the backgrounds obtained from three widely applied methods (the polynomial, Baek's, and airPLS methods), the auto-adaptive method meets the demands of practical applications in terms of efficiency and accuracy. PMID:26950502
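
    A minimal sketch of the procedure outlined above: select local minima of the spectrum, interpolate through them to estimate the background, and iterate while clipping the working spectrum to the current estimate so that peaks are progressively excluded. The iteration count, minimum-search order, and linear interpolation are illustrative choices, not necessarily the authors' exact scheme.

```python
import numpy as np
from scipy.signal import argrelmin
from scipy.interpolate import interp1d

# Hedged sketch of local-minima-based background estimation with iteration.
def estimate_background(wavenumber, intensity, n_iter=10, order=5):
    work = intensity.astype(float).copy()
    background = work.copy()
    for _ in range(n_iter):
        idx = argrelmin(work, order=order)[0]                 # local minima
        idx = np.unique(np.concatenate(([0], idx, [len(work) - 1])))
        baseline = interp1d(wavenumber[idx], work[idx], kind="linear",
                            fill_value="extrapolate")
        background = baseline(wavenumber)
        work = np.minimum(work, background)                   # suppress peaks
    return background

# Usage on a synthetic spectrum: a sloped background plus two peaks.
w = np.linspace(400.0, 1800.0, 1400)
spec = 1e-3 * (w - 400.0) + 50.0 * np.exp(-((w - 1000.0) / 8.0) ** 2) \
       + 30.0 * np.exp(-((w - 1450.0) / 10.0) ** 2)
corrected = spec - estimate_background(w, spec)
```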

  9. An auto-adaptive background subtraction method for Raman spectra

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun

    2016-05-01

    Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual parameter tuning of the background subtraction method is necessary for efficient removal of the background, which makes the quality of the spectrum empirically dependent. In order to avoid artificial bias, we propose an auto-adaptive background subtraction method without parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of the background subtraction. Both simulated data and Raman spectra have been used to evaluate the proposed method. By comparison with the backgrounds obtained from three widely applied methods (the polynomial, Baek's, and airPLS methods), the auto-adaptive method meets the demands of practical applications in terms of efficiency and accuracy.

  10. Track and vertex reconstruction: From classical to adaptive methods

    SciTech Connect

    Strandlie, Are; Fruehwirth, Rudolf

    2010-04-15

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.

  11. Introduction to Adaptive Methods for Differential Equations

    NASA Astrophysics Data System (ADS)

    Eriksson, Kenneth; Estep, Don; Hansbo, Peter; Johnson, Claes

    Knowing thus the Algorithm of this calculus, which I call Differential Calculus, all differential equations can be solved by a common method (Gottfried Wilhelm von Leibniz, 1646-1719).When, several years ago, I saw for the first time an instrument which, when carried, automatically records the number of steps taken by a pedestrian, it occurred to me at once that the entire arithmetic could be subjected to a similar kind of machinery so that not only addition and subtraction, but also multiplication and division, could be accomplished by a suitably arranged machine easily, promptly and with sure results. For it is unworthy of excellent men to lose hours like slaves in the labour of calculations, which could safely be left to anyone else if the machine was used. And now that we may give final praise to the machine, we may say that it will be desirable to all who are engaged in computations which, as is well known, are the managers of financial affairs, the administrators of others estates, merchants, surveyors, navigators, astronomers, and those connected with any of the crafts that use mathematics (Leibniz).

  12. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.

  13. Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods

    NASA Astrophysics Data System (ADS)

    Chung, Eric; Efendiev, Yalchin; Hou, Thomas Y.

    2016-09-01

    In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows performing local model reduction in the presence of high contrast and no scale separation.

  14. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  15. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  16. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  17. Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.

    2010-12-01

    Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistent with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity and a reacceleration region.

  18. Adaptive computational methods for SSME internal flow analysis

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.

  19. Making Teachers' Pedagogical Capital Visible and Useful

    ERIC Educational Resources Information Center

    Henningsson-Yousif, Anna; Aasen, Solveig Fredriksen

    2015-01-01

    Purpose: The purpose of this paper is to compare methods of working with pedagogical capital in teacher and mentor education. The author makes an account of the development of the concept of pedagogical capital and relates it to the theoretical context of practice theory. Empirical data will substantiate the theoretical discussion of teachers'…

  20. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, we propose two methods that adaptively change the size of the local window according to local information, and analyze their characteristics. The number of edge pixels in the local window of the binarized variance image is used to adaptively change the local window size. Finally, the superiority of the proposed method over other methods, such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level set evolution, is demonstrated. Experiments validate that the proposed method keeps more detail and achieves a much more satisfactory area overlap measure than the other conventional methods.
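
    As a building block for the method above, the sketch below implements plain Otsu thresholding and applies it inside a local window around each pixel. The fixed window size is only a stand-in for the adaptive, edge-count-driven window selection described in the abstract, and the loop is written for clarity rather than speed.

```python
import numpy as np

# Hedged sketch: plain Otsu thresholding applied in a fixed local window.
def otsu_threshold(values, bins=256):
    """Return the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                           # class-0 pixel counts
    w1 = w0[-1] - w0                               # class-1 pixel counts
    cum_mass = np.cumsum(hist * centers)
    m0 = cum_mass / np.maximum(w0, 1)              # class-0 mean
    m1 = (cum_mass[-1] - cum_mass) / np.maximum(w1, 1)   # class-1 mean
    between_var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
    return centers[np.argmax(between_var)]

def windowed_otsu(image, half_window=15):
    """Binarize each pixel against an Otsu threshold of its local window."""
    out = np.zeros(image.shape, dtype=bool)
    rows, cols = image.shape
    for r in range(rows):                          # written for clarity, not speed
        for c in range(cols):
            r0, r1 = max(0, r - half_window), min(rows, r + half_window + 1)
            c0, c1 = max(0, c - half_window), min(cols, c + half_window + 1)
            out[r, c] = image[r, c] > otsu_threshold(image[r0:r1, c0:c1].ravel())
    return out

# Usage: mask = windowed_otsu(gray_image) for a 2-D float array gray_image.
```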

  1. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
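
    The sketch below shows the connection between Kalman filtering and likelihood methods: a scalar Kalman filter accumulates the innovation log-likelihood, and the noise variances are then chosen to maximize that likelihood. The random-walk state model and the grid search over (q, r) are illustrative assumptions, not the report's specific development.

```python
import numpy as np

# Hedged sketch: a scalar Kalman filter that also accumulates the innovation
# log-likelihood, which likelihood-based adaptive filtering maximizes over the
# noise variances q (state) and r (observation).
def kalman_filter_loglik(y, q, r, x0=0.0, p0=1.0):
    x, p, loglik, states = x0, p0, 0.0, []
    for obs in y:
        p = p + q                                   # predict (random-walk state)
        s = p + r                                   # innovation variance
        k = p / s                                   # Kalman gain
        innov = obs - x
        loglik += -0.5 * (np.log(2.0 * np.pi * s) + innov ** 2 / s)
        x = x + k * innov                           # update state estimate
        p = (1.0 - k) * p                           # update state variance
        states.append(x)
    return np.array(states), loglik

# Adaptive use: pick (q, r) maximizing the log-likelihood over a small grid.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=100)) + rng.normal(scale=2.0, size=100)
best = max(((q, r) for q in (0.5, 1.0, 2.0) for r in (1.0, 4.0, 9.0)),
           key=lambda qr: kalman_filter_loglik(y, *qr)[1])
```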

  2. A Conditional Exposure Control Method for Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.

    2009-01-01

    In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…

  3. Adaptive frequency estimation by MUSIC (Multiple Signal Classification) method

    NASA Astrophysics Data System (ADS)

    Karhunen, Juha; Nieminen, Esko; Joutsensalo, Jyrki

    In recent years, the eigenvector-based method called MUSIC has become very popular for estimating the frequencies of sinusoids in additive white noise. Adaptive realizations of the MUSIC method are studied using simulated data. Several of the adaptive realizations seem to give results in practice that are as good as those of the nonadaptive standard realization. The only exceptions are instantaneous gradient type algorithms, which need considerably more samples to achieve comparable performance. A new method is proposed for constructing initial estimates of the signal subspace. The method often dramatically improves the performance of instantaneous gradient type algorithms. The new signal subspace estimate can also be used to define a frequency estimator directly or to simplify eigenvector computation.
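
    For reference, the sketch below implements the non-adaptive MUSIC estimator that the adaptive realizations approximate: form a sample correlation matrix from snapshots, split off the noise subspace, and scan a frequency grid for peaks of the pseudospectrum. The snapshot length, grid, and simulated signal are illustrative choices.

```python
import numpy as np

# Hedged sketch of (non-adaptive) MUSIC frequency estimation.
def music_spectrum(x, n_sinusoids, m=32, freqs=np.linspace(0.0, 0.5, 1000)):
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m)])
    R = snapshots.conj().T @ snapshots / snapshots.shape[0]   # sample correlation
    eigval, eigvec = np.linalg.eigh(R)                        # ascending order
    noise = eigvec[:, : m - 2 * n_sinusoids]    # 2 eigenvectors per real sinusoid
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(m))             # steering vector
        spectrum.append(1.0 / np.real(np.linalg.norm(noise.conj().T @ a) ** 2))
    return freqs, np.array(spectrum)

# Usage: two sinusoids at normalized frequencies 0.10 and 0.13 in white noise.
rng = np.random.default_rng(0)
n = np.arange(512)
x = np.sin(2 * np.pi * 0.10 * n) + np.sin(2 * np.pi * 0.13 * n) \
    + 0.5 * rng.normal(size=n.size)
f, P = music_spectrum(x, n_sinusoids=2)
```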

  4. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  5. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGESBeta

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  6. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  7. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden et al. The term `hp` refers to the method of spatial refinement (h), in conjunction with the order of polynomials used as part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  8. Solving Chemical Master Equations by an Adaptive Wavelet Method

    SciTech Connect

    Jahnke, Tobias; Galan, Steffen

    2008-09-01

    Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.

  9. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  10. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
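
    The sketch below illustrates the combination (direction-optimizing) BFS that the paper accelerates: each level runs top-down while the frontier is small and bottom-up once it grows past a switching point. Here the switch point is a fixed fraction of the vertex count, which is only a stand-in for the regression-based runtime predictor proposed in the paper.

```python
# Hedged sketch of a hybrid top-down / bottom-up BFS with a naive switch rule.
def hybrid_bfs(adj, source, switch_fraction=0.05):
    n = len(adj)
    dist = [-1] * n
    dist[source] = 0
    frontier, level = {source}, 0
    while frontier:
        next_frontier = set()
        if len(frontier) < switch_fraction * n:          # top-down step
            for u in frontier:
                for v in adj[u]:
                    if dist[v] == -1:
                        dist[v] = level + 1
                        next_frontier.add(v)
        else:                                            # bottom-up step
            for v in range(n):
                if dist[v] == -1 and any(u in frontier for u in adj[v]):
                    dist[v] = level + 1
                    next_frontier.add(v)
        frontier, level = next_frontier, level + 1
    return dist

# Usage on a small undirected graph given as adjacency lists.
adj = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]
print(hybrid_bfs(adj, 0))
```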

  11. An adaptive over/under data combination method

    NASA Astrophysics Data System (ADS)

    He, Jian-Wei; Lu, Wen-Kai; Li, Zhong-Xiao

    2013-12-01

    The traditional "dephase and sum" algorithms for over/under data combination estimate the ghost operator by assuming a calm sea surface. However, the real sea surface is typically rough, which invalidates the calm sea surface assumption. Hence, the traditional "dephase and sum" algorithms might produce poor-quality results in rough sea conditions. We propose an adaptive over/under data combination method, which adaptively estimates the amplitude spectrum of the ghost operator from the over/under data, and then over/under data combinations are implemented using the estimated ghost operators. A synthetic single shot gather is used to verify the performance of the proposed method in rough sea surface conditions and a real triple over/under dataset demonstrates the method performance.

  12. An Adaptive Derivative-based Method for Function Approximation

    SciTech Connect

    Tong, C

    2008-10-22

    To alleviate the high computational cost of large-scale multi-physics simulations to study the relationships between the model parameters and the outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessian with respect to the input parameters under study. When higher order terms (third and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.

  13. Development of a dynamically adaptive grid method for multidimensional problems

    NASA Astrophysics Data System (ADS)

    Holcomb, J. E.; Hindman, R. G.

    1984-06-01

    An approach to solution adaptive grid generation for use with finite difference techniques, previously demonstrated on model problems in one space dimension, has been extended to multidimensional problems. The method is based on the popular elliptic steady grid generators, but is 'dynamically' adaptive in the sense that a grid is maintained at all times satisfying the steady grid law driven by a solution-dependent source term. Testing has been carried out on Burgers' equation in one and two space dimensions. Results appear encouraging both for inviscid wave propagation cases and viscous boundary layer cases, suggesting that application to practical flow problems is now possible. In the course of the work, obstacles relating to grid correction, smoothing of the solution, and elliptic equation solvers have been largely overcome. Concern remains, however, about grid skewness, boundary layer resolution and the need for implicit integration methods. Also, the method in 3-D is expected to be very demanding of computer resources.

  14. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  15. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to the mesh generation and mesh adaptation, where best properties of various mesh generation methods are combined to build efficiently simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high-quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It improves significantly the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  16. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  17. The Phenomenology of Pedagogic Observation.

    ERIC Educational Resources Information Center

    Van Manen, Max

    1979-01-01

    The intent of this paper is to begin a reflective discussion of the phenomenology of pedagogic observation. In doing this it borrows extensively from Beets and draws attention to one aspect of phenomenological method: the function of "example" in phenomenological inquiry. (Author/SJL)

  18. Student Empowerment: Niza's Pedagogical Model.

    ERIC Educational Resources Information Center

    Grave-Resendes, Lydia

    1991-01-01

    Niza's nontraditional pedagogical principles are observed at work in a Portuguese classroom of three- to six-year olds. All learning is thought to follow the scientific method of discovery. Classes are heterogeneous, reflecting natural society. Schoolwork is organized, developed, and implemented by both teachers and students interacting…

  19. Psychological and Pedagogic Support of Children with Health Limitations

    ERIC Educational Resources Information Center

    Ezhovkina, Elena Vasilyevna; Ryabova, Natalia Vladimirovna

    2015-01-01

    The article presents a theoretical analysis of the literature on the problem of psychological and pedagogic support of disabled children. It defines the following terms: a successfully adapting disabled child, a model, interaction of specialists, and psychological and pedagogic support. The article also determines the key components of a successfully…

  20. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Kallinderis, Y.

    1995-10-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  1. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  2. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  3. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  4. Adaptive θ-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
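    For illustration, here is a minimal sketch (not the authors' code) of a linearly implicit θ-method for a single-asset American put, in which a simple explicit penalty term stands in for the paper's specific continuous term; the grid sizes, penalty strength lam and option parameters are assumptions.

        import numpy as np
        from scipy.sparse import diags, identity
        from scipy.sparse.linalg import spsolve

        K, r, sigma, T = 100.0, 0.05, 0.2, 1.0      # assumed option parameters
        Smax, M, N, theta = 300.0, 300, 200, 0.5    # assumed grid and theta (Crank-Nicolson)
        lam = 100.0                                 # assumed penalty strength (kept so that dt*lam <= 1)

        S = np.linspace(0.0, Smax, M + 1)
        dS, dt = S[1] - S[0], T / N
        payoff = np.maximum(K - S, 0.0)

        i = np.arange(1, M)                         # interior nodes
        a = 0.5 * sigma**2 * S[i]**2 / dS**2 - 0.5 * r * S[i] / dS
        b = -sigma**2 * S[i]**2 / dS**2 - r
        c = 0.5 * sigma**2 * S[i]**2 / dS**2 + 0.5 * r * S[i] / dS
        L = diags([a[1:], b, c[:-1]], offsets=[-1, 0, 1], format="csc")
        I = identity(M - 1, format="csc")
        A_impl = I - theta * dt * L                 # same linear solve every step, no Newton iteration
        A_expl = I + (1.0 - theta) * dt * L

        V = payoff.copy()
        for n in range(N):                          # march forward in time-to-maturity
            rhs = A_expl @ V[1:M]
            # explicit penalty nudges the solution back toward the payoff where it dips below it
            rhs += dt * lam * np.maximum(payoff[1:M] - V[1:M], 0.0)
            rhs[0] += dt * a[0] * K                 # Dirichlet boundary V(0, t) = K
            V[1:M] = spsolve(A_impl, rhs)
            V[0], V[-1] = K, 0.0

        print("American put value at S = K:", np.interp(K, S, V))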

  5. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen locally adaptive such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost. PMID:19840984

  6. Robust flicker evaluation method for low power adaptive dimming LCDs

    NASA Astrophysics Data System (ADS)

    Kim, Seul-Ki; Song, Seok-Jeong; Nam, Hyoungsik

    2015-05-01

    This paper describes a robust dimming flicker evaluation method for adaptive dimming algorithms in low power liquid crystal displays (LCDs). While previous methods use sum of squared difference (SSD) values without excluding the image sequence information, the proposed modified SSD (mSSD) values capture only the dimming flicker effects by making use of differential images. The proposed scheme is verified for eight dimming configurations of two dimming level selection methods and four temporal filters over three test videos. Furthermore, a new figure of merit is introduced to cover the dimming flicker as well as image quality and power consumption.
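    As a rough sketch of one plausible reading of the metric (not necessarily the paper's exact definition), the dimming-induced error of each frame is isolated first and the SSD is then taken between consecutive error images, so that scene content largely cancels; the toy frames and the global dimming gain are assumptions.

        import numpy as np

        def mssd(originals, dimmed):
            """Flicker score from temporal differences of the dimming-induced error images."""
            errors = [d.astype(float) - o.astype(float) for o, d in zip(originals, dimmed)]
            diffs = [e1 - e0 for e0, e1 in zip(errors[:-1], errors[1:])]
            return np.mean([np.sum(d ** 2) for d in diffs])

        # toy usage: random frames with a hypothetical frame-to-frame dimming gain jitter
        rng = np.random.default_rng(0)
        frames = [rng.random((64, 64)) for _ in range(10)]
        gains = 0.8 + 0.05 * rng.standard_normal(10)
        dimmed = [g * f for g, f in zip(gains, frames)]
        print("mSSD flicker score:", mssd(frames, dimmed))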

  7. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed, which is based on classical or fast projection algorithms, which estimates the background proceeding from median filtering or the method of bilateral spatial contrast.

  8. Adaptive domain decomposition methods for advection-diffusion problems

    SciTech Connect

    Carlenzoli, C.; Quarteroni, A.

    1995-12-31

    Domain decomposition methods can perform poorly on advection-diffusion equations if diffusion is dominated by advection. Indeed, the hyperbolic part of the equations can affect the behavior of iterative schemes among subdomains, dramatically slowing down their rate of convergence. Taking into account the direction of the characteristic lines, we introduce suitable adaptive algorithms which are stable with respect to the magnitude of the convective field in the equations and very effective on boundary value problems.

  9. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Domingues, Margarete O.; Gomes, Anna Karina F.; Mendes, Odim; Schneider, Kai

    2013-10-01

    We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and numerical divergences of the magnetic field are reported and the accuracy of the adaptive computations is assessed by comparing with the available exact solution. This work was supported by the contract SiCoMHD (ANR-Blanc 2011-045).

  10. An adaptive unsupervised hyperspectral classification method based on Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa

    2014-11-01

    In order to achieve adaptive unsupervised clustering with high precision, a method using Gaussian distributions to fit the similarity of the inter-class and the noise distribution is proposed in this paper, and the automatic segmentation threshold is then determined by the fitting result. First, in accordance with the similarity measure of the spectral curve, the method assumes that the target and the background both follow Gaussian distributions; the distribution characteristics are obtained by fitting the similarity measure of minimum related windows and center pixels with a Gaussian function, and the adaptive threshold is then obtained. Second, the pixel minimum related windows are used to merge adjacent similar pixels into picture blocks, which completes the dimensionality reduction and realizes the unsupervised classification. AVIRIS data and a set of hyperspectral data we captured are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm not only realizes adaptive clustering but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition and robustness.
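    A toy sketch of the thresholding step under the stated Gaussian assumption: a Gaussian is fitted to the similarity values and the segmentation threshold is read off the fitted parameters; the synthetic similarities and the mu + k*sigma rule are assumptions, not the paper's exact procedure.

        import numpy as np
        from scipy.stats import norm

        def adaptive_threshold(similarities, k=2.0):
            """Fit a Gaussian to similarity values and return mu + k*sigma as the threshold."""
            mu, sigma = norm.fit(similarities)
            return mu + k * sigma

        # toy usage: background similarities cluster near 0.2, targets near 0.8
        rng = np.random.default_rng(1)
        sims = np.concatenate([rng.normal(0.2, 0.05, 950), rng.normal(0.8, 0.05, 50)])
        t = adaptive_threshold(sims)
        print("threshold:", round(t, 3), "pixels flagged:", int((sims > t).sum()))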

  11. A New Online Calibration Method for Multidimensional Computerized Adaptive Testing.

    PubMed

    Chen, Ping; Wang, Chun

    2016-09-01

    Multidimensional-Method A (M-Method A) has been proposed as an efficient and effective online calibration method for multidimensional computerized adaptive testing (MCAT) (Chen & Xin, Paper presented at the 78th Meeting of the Psychometric Society, Arnhem, The Netherlands, 2013). However, a key assumption of M-Method A is that it treats person parameter estimates as their true values, thus this method might yield erroneous item calibration when person parameter estimates contain non-ignorable measurement errors. To improve the performance of M-Method A, this paper proposes a new MCAT online calibration method, namely, the full functional MLE-M-Method A (FFMLE-M-Method A). This new method combines the full functional MLE (Jones & Jin in Psychometrika 59:59-75, 1994; Stefanski & Carroll in Annals of Statistics 13:1335-1351, 1985) with the original M-Method A in an effort to correct for the estimation error of ability vector that might otherwise adversely affect the precision of item calibration. Two correction schemes are also proposed when implementing the new method. A simulation study was conducted to show that the new method generated more accurate item parameter estimation than the original M-Method A in almost all conditions. PMID:26608960

  12. A novel adaptive force control method for IPMC manipulation

    NASA Astrophysics Data System (ADS)

    Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao

    2012-07-01

    IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can operate in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots and bio-manipulation. Until now, most existing methods for IPMC manipulation have used displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to hold cells. Like most EAPs, IPMC exhibits a creep phenomenon, in which the generated force changes with time, and the creep model is influenced by changes in water content or other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control, i.e., adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained using the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs the POF and IPOF controllers to compare their test results. Simulation and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.

  13. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.

    1979-01-01

    The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
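    A compact sketch of the probability-weighting idea behind multiple model adaptive control, using a bank of scalar Kalman filters; the plant, candidate models, gains and noise levels below are invented for illustration and are not the F-8C design.

        import numpy as np

        # hypothetical scalar plant x_{k+1} = a*x_k + b*u_k + w, measured as z_k = x_k + v
        A_TRUE, B_TRUE, Q, R = 0.9, 0.5, 0.01, 0.04
        models = [(0.7, 0.5), (0.9, 0.5), (1.1, 0.5)]     # candidate (a, b) pairs (assumed)
        gains = [-0.4, -0.8, -1.2]                        # per-model feedback gains (assumed)

        rng = np.random.default_rng(2)
        x, x_hat = 1.0, np.ones(len(models))              # true state and per-model estimates
        P = np.ones(len(models))                          # per-model error covariances
        prob = np.full(len(models), 1.0 / len(models))    # model probabilities

        for k in range(50):
            u = float(np.dot(prob, [g * xh for g, xh in zip(gains, x_hat)]))  # probability-blended control
            x = A_TRUE * x + B_TRUE * u + rng.normal(0, np.sqrt(Q))
            z = x + rng.normal(0, np.sqrt(R))
            for i, (a, b) in enumerate(models):
                # per-model Kalman predict/update and residual likelihood
                xp = a * x_hat[i] + b * u
                Pp = a * P[i] * a + Q
                S = Pp + R
                resid = z - xp
                Kk = Pp / S
                x_hat[i] = xp + Kk * resid
                P[i] = (1 - Kk) * Pp
                prob[i] *= np.exp(-0.5 * resid**2 / S) / np.sqrt(2 * np.pi * S)
            prob /= prob.sum()

        print("posterior model probabilities:", np.round(prob, 3))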

  14. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  15. The Views of Pre-Service Teachers Who Take Special Teaching Course within the Context of Pedagogical Formation Certificate Program about Micro-Teaching Method and a Physics Lesson Plan

    ERIC Educational Resources Information Center

    Gurbuz, Fatih

    2015-01-01

    The purpose of this study is to determine the views of the pre-service teachers who received training on pedagogical formation certificate program about micro-teaching method. The study was carried out with a case study method. Semi-structured interviews were used in the study as a data collection tool to gather pre-service teachers' views about…

  16. Adaptive methods for nonlinear structural dynamics and crashworthiness analysis

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1993-01-01

    The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.

  17. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  18. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. The tough operating conditions of heavy duty and intensive impact load may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths and weak feature extraction. One of them is how to effectively discover the weak characteristics from noisy signals of faulty components in planetary gearboxes. To address this issue in fault diagnosis of planetary gearboxes, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, and therefore the faults can be diagnosed accurately. A planetary gearbox test rig is established and experiments with sun gear faults, including a chipped tooth and a missing tooth, are conducted, and the vibration signals are collected under loaded conditions at various motor speeds. The proposed method is used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.
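    A simplified sketch of the idea: a weak periodic "fault" component buried in noise is passed through the bistable system dx/dt = a*x - b*x**3 + s(t), and the system parameter is tuned to maximise the output SNR at the fault frequency. A plain grid search stands in for the paper's ant colony optimisation, and all signal parameters are assumptions.

        import numpy as np

        fs, f_fault, T = 2000.0, 20.0, 4.0        # assumed sampling rate, fault frequency, duration
        t = np.arange(0, T, 1.0 / fs)
        rng = np.random.default_rng(3)
        signal = 0.1 * np.sin(2 * np.pi * f_fault * t) + 0.8 * rng.standard_normal(t.size)

        def bistable_output(s, a, b, dt):
            """Euler integration of the bistable system dx/dt = a*x - b*x**3 + s(t)."""
            x, out = 0.0, np.empty_like(s)
            for i, si in enumerate(s):
                x += dt * (a * x - b * x**3 + si)
                out[i] = x
            return out

        def snr_at(x, f0, fs):
            """Ratio of spectral power at f0 to the median power of nearby bins."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
            k = int(np.argmin(np.abs(freqs - f0)))
            return spec[k] / np.median(spec[max(k - 20, 1):k + 20])

        # adaptive step: search the barrier parameter a (b fixed) for maximum output SNR
        best = max((snr_at(bistable_output(signal, a, 1.0, 1.0 / fs), f_fault, fs), a)
                   for a in np.linspace(0.1, 2.0, 20))
        print("best SNR %.1f at a = %.2f" % best)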

  19. Spatially-Anisotropic Parallel Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Brown-Dymkoski, Eric

    2015-11-01

    Despite the latest advancements in the development of robust wavelet-based adaptive numerical methodologies to solve partial differential equations, they all suffer from two major "curses": 1) the reliance on a rectangular domain and 2) the "curse of anisotropy" (i.e. homogeneous wavelet refinement and the inability to have a spatially varying aspect ratio of the mesh elements). The new method addresses both of these challenges by utilizing an adaptive anisotropic wavelet transform on curvilinear meshes that can be either algebraically prescribed or calculated on the fly using PDE-based mesh generation. In order to ensure accurate representation of spatial operators in physical space, an additional adaptation on spatial physical coordinates is also performed. It is important to note that when new nodes are added in computational space, the physical coordinates can be approximated by interpolation of the existing solution and additional local iterations to ensure that the solution of the coordinate mapping PDEs is converged on the new mesh. In contrast to traditional mesh generation approaches, the cost of adding additional nodes is minimal, mainly due to the localized nature of the iterative mesh generation PDE solver, which requires local iterations only in the vicinity of newly introduced points. This work was supported by ONR MURI under grant N00014-11-1-069.

  20. The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2016-04-01

    Earthquake declustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation. Various methods have been developed by other researchers to address this issue, ranging in complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted. It uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in

  1. An adaptive pseudo-spectral method for reaction diffusion problems

    NASA Technical Reports Server (NTRS)

    Bayliss, A.; Gottlieb, D.; Matkowsky, B. J.; Minkoff, M.

    1987-01-01

    The spectral interpolation error was considered for both the Chebyshev pseudo-spectral and Galerkin approximations. A family of functionals I_r(u) was developed, with the property that the maximum norm of the error is bounded by I_r(u)/J^r, where r is an integer and J is the degree of the polynomial approximation. These functionals are used in the adaptive procedure whereby the problem is dynamically transformed to minimize I_r(u). The number of collocation points is then chosen to maintain a prescribed error bound. The method is illustrated by various examples from combustion problems in one and two dimensions.

  2. A multilevel adaptive projection method for unsteady incompressible flow

    NASA Technical Reports Server (NTRS)

    Howell, Louis H.

    1993-01-01

    There are two main requirements for practical simulation of unsteady flow at high Reynolds number: the algorithm must accurately propagate discontinuous flow fields without excessive artificial viscosity, and it must have some adaptive capability to concentrate computational effort where it is most needed. We satisfy the first of these requirements with a second-order Godunov method similar to those used for high-speed flows with shocks, and the second with a grid-based refinement scheme which avoids some of the drawbacks associated with unstructured meshes. These two features of our algorithm place certain constraints on the projection method used to enforce incompressibility. Velocities are cell-based, leading to a Laplacian stencil for the projection which decouples adjacent grid points. We discuss features of the multigrid and multilevel iteration schemes required for solution of the resulting decoupled problem. Variable-density flows require use of a modified projection operator--we have found a multigrid method for this modified projection that successfully handles density jumps of thousands to one. Numerical results are shown for the 2D adaptive and 3D variable-density algorithms.

  3. A parallel adaptive method for pseudo-arclength continuation

    NASA Astrophysics Data System (ADS)

    Aruliah, D. A.; van Veen, L.; Dubitski, A.

    2012-10-01

    Pseudo-arclength continuation is a well-established method for constructing a numerical curve comprising solutions of a system of nonlinear equations. In many complicated high-dimensional systems, the corrector steps within pseudo-arclength continuation are extremely costly to compute; as a result, the step-length of the preceding prediction step must be adapted carefully to avoid prohibitively many failed steps. We describe the essence of a parallel method for adapting the step-length of pseudo-arclength continuation. Our method employs several predictor-corrector sequences with differing step-lengths running concurrently on distinct processors. Our parallel framework permits intermediate results of correction sequences that have not yet converged to seed new predictor-corrector sequences with various step-lengths; the goal is to amortize the cost of corrector steps to make further progress along the underlying numerical curve. Results from numerical experiments suggest a three-fold speedup is attainable when the continuation curve sought has great topological complexity and the corrector steps require significant processor time.
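    For context, here is a serial sketch of the predictor-corrector iteration that the paper parallelises, run on the toy curve F(x, lambda) = x^2 + lambda^2 - 1 = 0; the step-length, tolerances and the toy problem itself are assumptions.

        import numpy as np

        def F(u):                                  # u = (x, lam); the solution curve is the unit circle
            x, lam = u
            return np.array([x**2 + lam**2 - 1.0])

        def jac(u):
            x, lam = u
            return np.array([[2.0 * x, 2.0 * lam]])   # 1 x 2 Jacobian [dF/dx, dF/dlam]

        u = np.array([1.0, 0.0])
        tangent = np.array([0.0, 1.0])             # initial tangent along the curve
        ds = 0.1                                   # pseudo-arclength step (assumed)
        points = [u.copy()]
        for _ in range(30):
            v = u + ds * tangent                   # predictor step along the tangent
            for _ in range(20):                    # Newton corrector on the augmented system
                res = np.concatenate([F(v), [tangent @ (v - u) - ds]])
                if np.linalg.norm(res) < 1e-12:
                    break
                J = np.vstack([jac(v), tangent])
                v = v - np.linalg.solve(J, res)
            tangent = (v - u) / np.linalg.norm(v - u)   # secant approximation of the new tangent
            u = v
            points.append(u.copy())

        print("max |F| along the computed curve:", max(abs(F(p)[0]) for p in points))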

  4. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation

  5. Turbulence profiling methods applied to ESO's adaptive optics facility

    NASA Astrophysics Data System (ADS)

    Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.

    2014-07-01

    Two algorithms were recently studied for C_n^2 profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). They both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from various wavefront sensors. The first algorithm estimates the C_n^2 profile by applying the truncated least-squares inverse of a matrix modeling the response of slope covariances to various turbulent layer heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO's Adaptive Optics Facility (AOF), a high-order multiple laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of the measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The important influence of the outer scale on the results leads to the development of a new step for outer scale fitting included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.

  6. Anadolu University, Open Education Faculty, Turkish Language and Literature Department Graduated Students' Views towards Pedagogical Formation Training Certificate, Special Teaching Methods Courses and Turkish Language and Literature Education from: Sample of Turkey

    ERIC Educational Resources Information Center

    Bulut, Mesut

    2016-01-01

    The aim of this study is to find out Anadolu University Open Education Faculty Turkish Language and Literature graduated students' views towards Pedagogical Formation Training certificate and their opinions about special teaching methods. This study has been done in one of the universities of East Karadeniz in Turkey in which the 20 Turkish…

  7. An adaptive PCA fusion method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Guo, Qing; Li, An; Zhang, Hongqun; Feng, Zhongkui

    2014-10-01

    The principal component analysis (PCA) method is a popular fusion method used for its efficiency and high spatial resolution improvement. However, spectral distortion is often found in PCA. In this paper, we propose an adaptive PCA method to enhance the spectral quality of the fused image. The amount of spatial detail of the panchromatic (PAN) image injected into each band of the multi-spectral (MS) image is appropriately determined by a weighting matrix, which is defined by the edges of the PAN image, the edges of the MS image and the proportions between MS bands. In order to prove the effectiveness of the proposed method, qualitative visual and quantitative analyses are introduced. The correlation coefficient (CC), the spectral discrepancy (SPD), and the spectral angle mapper (SAM) are used to measure the spectral quality of each fused band image. The Q index is calculated to evaluate the global spectral quality of all the fused bands as a whole. The spatial quality is evaluated by the average gradient (AG) and the standard deviation (STD). Experimental results show that the proposed method improves the spectral quality considerably compared with the original PCA method while maintaining the high spatial quality of the original PCA.
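    A condensed sketch of PCA pan-sharpening with an edge-driven weighting of the injected detail; the gradient-based weight matrix used here is only one possible choice and, like the toy images, is an assumption rather than the paper's exact definition.

        import numpy as np

        def adaptive_pca_fusion(ms, pan):
            """ms: (H, W, B) multispectral cube; pan: (H, W) panchromatic band on the same grid."""
            H, W, B = ms.shape
            X = ms.reshape(-1, B).astype(float)
            mean = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)   # PCA axes
            pcs = (X - mean) @ Vt.T
            pc1 = pcs[:, 0].reshape(H, W)
            # histogram-match (mean/variance) the PAN band to the first principal component
            pan_m = (pan - pan.mean()) / (pan.std() + 1e-12) * pc1.std() + pc1.mean()
            # weighting matrix from the normalised PAN gradient magnitude (assumed choice)
            gy, gx = np.gradient(pan.astype(float))
            w = np.sqrt(gx**2 + gy**2)
            w /= w.max() + 1e-12
            pcs[:, 0] = (pc1 + w * (pan_m - pc1)).ravel()             # weighted detail injection
            return (pcs @ Vt + mean).reshape(H, W, B)

        # toy usage with random imagery
        rng = np.random.default_rng(4)
        ms = rng.random((32, 32, 4))
        pan = rng.random((32, 32))
        print(adaptive_pca_fusion(ms, pan).shape)   # (32, 32, 4)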

  8. Reduction in redundancy of multichannel telemetric information by the method of adaptive discretization with associative sorting

    NASA Technical Reports Server (NTRS)

    Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.

    1974-01-01

    The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.

  9. The Formative Method for Adapting Psychotherapy (FMAP): A community-based developmental approach to culturally adapting therapy

    PubMed Central

    Hwang, Wei-Chin

    2010-01-01

    How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications. These phases include: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but it can also be readily applied to modify therapy for other ethnic groups. PMID:20625458

  10. A Spectral Adaptive Mesh Refinement Method for the Burgers equation

    NASA Astrophysics Data System (ADS)

    Nasr Azadani, Leila; Staples, Anne

    2013-03-01

    Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales which vary with time and space. In order to resolve all the scales numerically, high grid resolutions are required. The smaller the scales, the higher the resolutions should be. However, small scales are usually formed in a small portion of the domain or during a particular period of time. AMR is an efficient method to solve these types of problems, allowing high grid resolutions where and when they are needed and minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudo-spectral methods, we applied AMR in Fourier space. The spectral AMR (SAMR) method we present here is applied to the Burgers equation and the results are compared with the results obtained using standard solution methods performed using a fine mesh.

  11. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of the point sets is considered, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.

  12. A pedagogical framework for developing innovative science teachers with ICT

    NASA Astrophysics Data System (ADS)

    Rogers, Laurence; Twidle, John

    2013-11-01

    Background: The authors have conducted a number of research projects into the use of ICT in science teaching and most recently have collaborated with five European partners in teacher education to develop resources to assist teacher trainers in delivering courses for the professional development of science teachers. Purpose: 1. To describe the main aspects of pedagogy which are relevant to the use of ICT tools which serve practical science teaching. 2. To discuss approaches to teacher education which aim to emphasise the pedagogical aspects of using those ICT tools. Sources of evidence: 1. A review of the research literature on the effectiveness of using ICT in education with a particular focus on pedagogical knowledge and its interaction with associated technical knowledge. 2. Authors' experience as teacher trainers and as researchers in methods of employing ICT in science education. 3. Studies conducted by partners in the ICT for Innovative Science Teachers Project and training materials developed by the project. Main argument: Starting from the premise that it is the pedagogical actions of the teacher which determine successful learning outcomes of using ICT in science lessons, the paper describes the main components of pedagogical knowledge and understanding required by teachers. It examines the role of an understanding of affordances in helping teachers to deploy software tools appropriately and defines some of the skills for exploiting them to benefit learning. Innovation is successful when ICT activities are incorporated in ways that complement non-ICT activities and serve science learning objectives. When teachers are alert to adapt their pedagogical skills, they evolve new ways of working and interacting with students. Training courses need to provide means of helping teachers to examine the professional beliefs which underpin their pedagogical approaches. This is most effectively achieved when a course blends personal hands-on experience with discourse

  13. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  14. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
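    A stripped-down sketch of the idea, using isotropic but locally adaptive bandwidths in place of the full steering kernels of the paper: each hard data point gets its own kernel width from the distance to its k-th nearest neighbour, and the facies at unsampled locations are assigned by kernel-weighted voting. All data here are synthetic and the k-NN bandwidth rule is an assumption.

        import numpy as np

        def adaptive_kernel_classify(coords, facies, query, k=3):
            """coords: (N, 2) data locations; facies: (N,) integer labels; query: (M, 2) locations."""
            d_data = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            h = np.sort(d_data, axis=1)[:, k]            # per-point bandwidth: k-th nearest neighbour
            d_q = np.linalg.norm(query[:, None, :] - coords[None, :, :], axis=-1)   # (M, N)
            w = np.exp(-0.5 * (d_q / h[None, :]) ** 2)   # Gaussian kernel weights
            labels = np.unique(facies)
            votes = np.stack([w[:, facies == c].sum(axis=1) for c in labels], axis=1)
            return labels[np.argmax(votes, axis=1)]

        # synthetic example: two facies separated along x, sparse irregular sampling
        rng = np.random.default_rng(5)
        coords = rng.random((40, 2))
        facies = (coords[:, 0] > 0.5).astype(int)
        grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)), -1).reshape(-1, 2)
        pred = adaptive_kernel_classify(coords, facies, grid)
        print("facies proportions on the grid:", np.bincount(pred) / pred.size)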

  15. A forward method for optimal stochastic nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1988-01-01

    A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.

  16. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  17. Evaluation of Adaptive Subdivision Method on Mobile Device

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila

    2013-06-01

    Recently, there have been significant improvements in the capabilities of mobile devices, but rendering large 3D objects is still tedious because of the resource constraints of mobile devices. To reduce storage requirements, a 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore, a method to smooth selected areas of a curvature is implemented. One of the popular methods is the adaptive subdivision method. Experiments are performed using two data sets, with results based on processing time, rendering speed and the appearance of the object on the devices. The result shows a downfall in frame rate performance due to the increase in the number of triangles with each level of iteration, while the processing time of generating the new mesh also increases significantly. Since there is a difference in screen size between the devices, the surface on the iPhone appears to have more triangles and be more compact than the surface displayed on the iPad.

  18. A Study of Two Methods for Adapting Self-Instructional Materials to Individual Differences. Final Report.

    ERIC Educational Resources Information Center

    Melaragno, Ralph J.

    The two-phase study compared two methods of adapting self-instructional materials to individual differences among learners. The methods were compared with each other and with a control condition involving only minimal adaptation. The first adaptation procedure was based on subjects' performances on a learning task in Phase I of the study; the…

  19. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  20. Pedagogical training of medicine professors.

    PubMed

    da Silva Campos Costa, Nilce Maria

    2010-01-01

    This study examines the pedagogical training process of medical professors at a Brazilian university, the meanings attributed to it, and the positive and negative aspects identified in it. This is a descriptive-exploratory study, using a qualitative approach with a questionnaire utilizing open-ended and closed questions and a semi-structured interview. The majority of queried individuals had no formal teacher training and learned to be teachers through a process of socialization that was in part intuitive or by modeling those considered to be good teachers; they received pedagogical training mainly in post-graduate courses. Positive aspects of this training were the possibility of refresher courses in pedagogical methods and increased knowledge in their educational area. Negative factors were a lack of practical activities and a dichotomy between theoretical content and practical teaching. The skills acquired through professional experience formed the basis for teaching competence and pointed to the need for continuing education projects at the institutional level, including these skills themselves as a source of professional knowledge. PMID:20428704

  1. An Adaptive De-Aliasing Strategy for Discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Beck, Andrea; Flad, David; Frank, Hannes; Munz, Claus-Dieter

    2015-11-01

    Discontinuous Galerkin methods combine the accuracy of a local polynomial representation with the geometrical flexibility of an element-based discretization. In combination with their excellent parallel scalability, these methods are currently of great interest for DNS and LES. For high order schemes, the dissipation error approaches a cut-off behavior, which allows an efficient wave resolution per degree of freedom, but also reduces robustness against numerical errors. One important source of numerical error is the inconsistent discretization of the non-linear convective terms, which results in aliasing of kinetic energy and solver instability. Consistent evaluation of the inner products prevents this form of error, but is computationally very expensive. In this talk, we discuss the need for a consistent de-aliasing to achieve a neutrally stable scheme, and present a novel strategy for recovering a part of the incurred computational costs. By implementing the de-aliasing operation through a cell-local projection filter, we can perform adaptive de-aliasing in space and time, based on physically motivated indicators. We will present results for a homogeneous isotropic turbulence and the Taylor-Green vortex flow, and discuss implementation details, accuracy and efficiency.

  2. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, J.T.

    1998-04-28

    A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A). 3 figs.
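
    As a rough illustration of the gain-matrix modification quoted in this record (and in the duplicate filing below), the following NumPy sketch builds G' = (I - X(X^T X)^{-1} X^T) G (I - A). The matrices G, X, and A stand for the nominal reconstructor gain, the tilt-mode matrix, and the actuator-side tilt term; their shapes and values here are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def modified_gain(G, X, A):
    """Tilt-removal modification G' = (I - X (X^T X)^{-1} X^T) G (I - A).

    G : nominal gain matrix, shape (n, m)             -- placeholder
    X : columns spanning the tilt modes, shape (n, k) -- placeholder
    A : square (m, m) matrix from the tilt term       -- placeholder
    All shapes and contents are illustrative assumptions.
    """
    n, m = G.shape
    # Projector onto the complement of the column space of X (removes tilt modes)
    P = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
    return P @ G @ (np.eye(m) - A)

# Example with random placeholder matrices (shapes chosen arbitrarily)
rng = np.random.default_rng(0)
G_prime = modified_gain(rng.normal(size=(6, 4)),
                        rng.normal(size=(6, 2)),
                        0.1 * np.eye(4))
```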

  3. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, Joseph Thaddeus

    1998-01-01

    A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G' = (I - X(X^T X)^{-1} X^T) G (I - A).

  4. Adapted G-mode Clustering Method applied to Asteroid Taxonomy

    NASA Astrophysics Data System (ADS)

    Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.

    2013-11-01

    The original G-mode is a clustering method developed by A. I. Gavrishin in the late 1960s for the geochemical classification of rocks; it has also been applied to asteroid photometry, cosmic rays, lunar samples, and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, reimplemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
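
    The two library calls named in the abstract can be illustrated with a short sketch: numpy.histogramdd locates a dense region to serve as an initial seed, and scipy.spatial.distance.mahalanobis measures how far every sample lies from that seed. This is only a minimal sketch of those two steps, not the authors' G-mode implementation; the function names and the seeding rule below are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

def initial_seed(data, bins=10):
    """Return the points in the densest cell of a multidimensional histogram,
    as a candidate seed from which a cluster could grow (illustrative only)."""
    hist, edges = np.histogramdd(data, bins=bins)
    densest = np.unravel_index(np.argmax(hist), hist.shape)
    mask = np.ones(len(data), dtype=bool)
    for dim, i in enumerate(densest):
        lo, hi = edges[dim][i], edges[dim][i + 1]
        mask &= (data[:, dim] >= lo) & (data[:, dim] <= hi)
    return data[mask]

def distances_to_seed(data, seed):
    """Mahalanobis distance of every sample to the seed mean, using the seed's
    own covariance as the metric (assumes the seed has enough points)."""
    mu = seed.mean(axis=0)
    vi = np.linalg.inv(np.cov(seed, rowvar=False))
    return np.array([mahalanobis(x, mu, vi) for x in data])
```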

  5. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  6. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is the power to which the filtering function is raised; depending on its value, areas are filtered strongly or weakly. Several variants have been developed to determine alpha adaptively using indicators such as the coherence and the phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over-filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in practice, and the optimal model for accurately determining the functional relationship between the indicators and alpha is also unclear. As a result, the filter tends to under- or over-filter and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration step is merged into the filtering procedure to suppress the high noise over incoherent areas. Experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance compared to existing approaches.
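
    For readers unfamiliar with the filter itself, the sketch below shows the core patch operation of a Goldstein-style filter (the spectrum scaled by its magnitude raised to the power alpha), together with one simple adaptive rule for alpha. It deliberately omits the spectral smoothing and patch overlapping of a production implementation, and the alpha rule shown is a generic illustration, not the nonlinear model proposed in the paper.

```python
import numpy as np

def goldstein_patch(patch, alpha):
    """Filter one complex interferogram patch by scaling its spectrum with
    the normalised spectral magnitude raised to the power alpha."""
    Z = np.fft.fft2(patch)
    S = np.abs(Z)
    m = S.max()
    if m > 0:
        S = S / m
    return np.fft.ifft2(Z * S**alpha)

def alpha_from_coherence(coherence):
    """A simple adaptive rule (illustrative only): alpha = 1 - mean coherence,
    so noisier, less coherent patches are filtered more strongly."""
    return 1.0 - float(np.clip(np.mean(coherence), 0.0, 1.0))
```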

  7. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  8. The Influence of Alternative Pedagogical Methods in Postsecondary Biology Education: How Do Students Experience a Multimedia Case-Study Environment?

    ERIC Educational Resources Information Center

    Wolter, Bjorn Hugo Karl

    2010-01-01

    The purpose of this study was to better understand how an online, multimedia case study method influenced students' motivation, performance, and perceptions of science in collegiate level biology classes. It utilized a mixed-methods design including data from pre- and post-tests, student surveys, and focus group interviews to answer one primary…

  9. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an

  10. Categorizing Pedagogical Patterns by Teaching Activities and Pedagogical Values

    ERIC Educational Resources Information Center

    Bennedsen, Jens; Eriksen, Ole

    2006-01-01

    The main contribution of this paper is a proposal for a universal pedagogical pattern categorization based on teaching values and activities. This categorization would be more sustainable than the arbitrary categorization implied by pedagogical pattern language themes. Pedagogical patterns from two central patterns languages are analyzed and…

  11. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  12. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  13. Exploring Teachers' Technological Pedagogical Content Knowledge (TPACK) in an Online Course: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Varguez, Ricardo

    2012-01-01

    The constant expansion of Web 2.0 applications available on the World Wide Web and expansion of technology resources has prompted the need to better prepare current and future educators to make more effective use of such resources in their classrooms. The purpose of this embedded mixed methods case study was to describe the experiences and changes…

  14. Change of Iranian EFL Teachers' Traditional Pedagogical Methods through Using "Pronunciation Power" Software in the Instruction of English Pronunciation

    ERIC Educational Resources Information Center

    Gilakjani, Abbas Pourhosein; Sabouri, Narjes Banou

    2014-01-01

    The use of computer technology in learning and teaching has been studied by many studies but less research has been conducted for understanding users' feeling toward it and how this technology helps teachers develop their teaching methods. One of the computer technologies for the instruction of English pronunciation is "Pronunciation…

  15. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.

  16. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  17. The Questionable Benefits of Pedagogical Agents: Response to Veletsianos

    ERIC Educational Resources Information Center

    Clark, Richard E.; Choi, Sunhee

    2007-01-01

    The point of Choi and Clark (2006) was that after many well-designed studies, they have no evidence for either the learning or the motivational benefits of pedagogical agents. In their study, they found compelling evidence that when agents are found to enhance learning, a less expensive and less distracting pedagogical method has equal or greater…

  18. Measuring Teachers' Pedagogical Content Knowledge in Primary Technology Education

    ERIC Educational Resources Information Center

    Rohaan, Ellen J.; Taconis, Ruurd; Jochems, Wim M. G.

    2009-01-01

    Pedagogical content knowledge is found to be a crucial part of the knowledge base for teaching. Studies in the field of primary technology education showed that this domain of teacher knowledge is related to pupils' increased learning, motivation, and interest. The common methods to investigate teachers' pedagogical content knowledge are often…

  19. Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai

    Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…

  20. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  1. Systems and Methods for Derivative-Free Adaptive Control

    NASA Technical Reports Server (NTRS)

    Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.

  2. Pedagogical Content Knowledge Taxonomies.

    ERIC Educational Resources Information Center

    Veal, William R.; MaKinster, James G.

    1999-01-01

    Presents two taxonomies that offer a relatively comprehensive categorization scheme for future studies of pedagogical content knowledge (PCK) development in teacher education. "The General Taxonomy of PCK" addresses distinctions within and between the knowledge bases of various disciplines, science subjects, and science topics. "The Taxonomy of…

  3. Small Business Pedagogic Practices

    ERIC Educational Resources Information Center

    Billett, Stephen; Hernon-Tinning, Barnie; Ehrich, Lisa

    2003-01-01

    Understanding how learning for small businesses should best proceed constitutes a worthwhile, yet challenging, pedagogic project. In order to maintain their viability, small businesses need to be able to respond to new practices and tasks. Yet small businesses seem neither attracted to nor to value the kinds of taught courses that are the standard…

  4. MALL: The Pedagogical Challenges

    ERIC Educational Resources Information Center

    Burston, Jack

    2014-01-01

    In this paper the development of mobile-assisted language learning (MALL) over the past 20 years is reviewed with a particular focus on the pedagogical challenges facing its exploitation. Following a consideration of the definition of mobile learning, the paper describes the dominant mobile technologies upon which MALL applications have been…

  5. Ecological, Pedagogical, Public Rhetoric

    ERIC Educational Resources Information Center

    Rivers, Nathaniel A.; Weber, Ryan P.

    2011-01-01

    Public rhetoric pedagogy can benefit from an ecological perspective that sees change as advocated not through a single document but through multiple mundane and monumental texts. This article summarizes various approaches to rhetorical ecology, offers an ecological read of the Montgomery bus boycotts, and concludes with pedagogical insights on a…

  6. Teaching virtue: pedagogical implications of moral psychology.

    PubMed

    Frey, William J

    2010-09-01

    Moral exemplar studies of computer and engineering professionals have led ethics teachers to expand their pedagogical aims beyond moral reasoning to include the skills of moral expertise. This paper frames this expanded moral curriculum in a psychologically informed virtue ethics. Moral psychology provides a description of character distributed across personality traits, integration of moral value into the self system, and moral skill sets. All of these elements play out on the stage of a social surround called a moral ecology. Expanding the practical and professional curriculum to cover the skills and competencies of moral expertise converts the classroom into a laboratory where students practice moral expertise under the guidance of their teachers. The good news is that this expanded pedagogical approach can be realized without revolutionizing existing methods of teaching ethics. What is required, instead, is a redeployment of existing pedagogical tools such as cases, professional codes, decision-making frameworks, and ethics tests. This essay begins with a summary of virtue ethics and informs this with recent research in moral psychology. After identifying pedagogical means for teaching ethics, it shows how these can be redeployed to meet a broader, skills based agenda. Finally, short module profiles offer concrete examples of the shape this redeployed pedagogical agenda would take in the practical and professional ethics classroom. PMID:19728163

  7. Examining the Impact of an Integrative Method of Using Technology on Students' Achievement and Efficiency of Computer Usage and on Pedagogical Procedure in Geometry

    ERIC Educational Resources Information Center

    Gurevich, Irina; Gurev, Dvora

    2012-01-01

    In the current study we follow the development of the pedagogical procedure for the course "Constructions in Geometry" that resulted from using dynamic geometry software (DGS), where the computer became an integral part of the educational process. Furthermore, we examine the influence of integrating DGS into the course on students' achievement and…

  8. A New Method to Cancel RFI---The Adaptive Filter

    NASA Astrophysics Data System (ADS)

    Bradley, R.; Barnbaum, C.

    1996-12-01

    An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The reference antenna signal is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
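
    The two-receiver structure described above is the classic adaptive noise canceller; a minimal time-domain LMS sketch of that structure is shown below. It illustrates the general principle only; the tap count, step size, and sample-by-sample update are assumptions for illustration, not the design of the prototype receiver.

```python
import numpy as np

def lms_canceller(primary, reference, n_taps=32, mu=1e-3):
    """Textbook LMS adaptive noise canceller: adaptively filter the reference
    (RFI-only) channel and subtract it from the primary (signal + RFI) channel.
    The error signal doubles as the cleaned output."""
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples first
        rfi_estimate = w @ x                # adaptive filter output
        e = primary[n] - rfi_estimate       # cleaned output sample
        w += 2.0 * mu * e * x               # LMS update, minimising output power
        out[n] = e
    return out
```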

  9. An Analysis of Social Studies Teachers' Perception Levels Regarding Web Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Yesiltas, Erkan

    2016-01-01

    Web pedagogical content knowledge generally takes pedagogical knowledge, content knowledge, and Web knowledge as its basis. It is a structure emerging through the interaction of these three components. Content knowledge refers to knowledge of subjects to be taught. Pedagogical knowledge involves knowledge of process, implementation, learning methods,…

  10. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and by using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, the spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy from this hybrid method outside of the subdomain will be investigated.

  11. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  12. Evaluation of an adaptive beamforming method for hearing aids.

    PubMed

    Greenberg, J E; Zurek, P M

    1992-03-01

    In this paper evaluations of a two-microphone adaptive beamforming system for hearing aids are presented. The system, based on the constrained adaptive beamformer described by Griffiths and Jim [IEEE Trans. Antennas Propag. AP-30, 27-34 (1982)], adapts to preserve target signals from straight ahead and to minimize jammer signals arriving from other directions. Modifications of the basic Griffiths-Jim algorithm are proposed to alleviate problems of target cancellation and misadjustment that arise in the presence of strong target signals. The evaluations employ both computer simulations and a real-time hardware implementation and are restricted to the case of a single jammer. Performance is measured by the spectrally weighted gain in the target-to-jammer ratio in the steady state. Results show that in environments with relatively little reverberation: (1) the modifications allow good performance even with misaligned arrays and high input target-to-jammer ratios; and (2) performance is better with a broadside array with 7-cm spacing between microphones than with a 26-cm broadside or a 7-cm endfire configuration. Performance degrades in reverberant environments; at the critical distance of a room, improvement with a practical system is limited to a few dB. PMID:1564202
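
    As background to the evaluation, the unmodified two-microphone Griffiths-Jim structure it builds on can be sketched in a few lines: a fixed sum beam preserves the straight-ahead target, a difference (blocking) channel passes mainly the jammer, and an LMS filter cancels the jammer from the fixed beam. The sketch below shows only this baseline structure under textbook assumptions; it does not include the modifications the paper proposes for strong target signals, and the tap count and step size are illustrative.

```python
import numpy as np

def griffiths_jim_two_mic(left, right, n_taps=64, mu=1e-4):
    """Baseline two-microphone Griffiths-Jim beamformer (illustrative sketch)."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    fixed = 0.5 * (left + right)        # target from straight ahead adds coherently
    blocked = left - right              # target (ideally) cancels; jammer leaks through
    w = np.zeros(n_taps)
    out = np.zeros(len(fixed))
    for n in range(n_taps, len(fixed)):
        x = blocked[n - n_taps:n][::-1]
        e = fixed[n] - w @ x            # subtract the adaptive jammer estimate
        w += 2.0 * mu * e * x           # LMS weight update
        out[n] = e
    return out
```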

  13. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time control employs adaptive force/position control by use of feedforward and feedback controllers; the feedforward controller is the inverse of the linearized model of robot dynamics and contains only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture: for force control it achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.

  14. Online "Chat" Facilities as Pedagogic Tools

    ERIC Educational Resources Information Center

    Kirkpatrick, Graeme

    2005-01-01

    This article assesses the pedagogic value of the "chat" facility in the Blackboard integrated learning platform. It draws on a case study carried out by the author in the 2001-2 academic session. A level three class in research methods involved students in group working away from class and student feedback indicated that more support was needed to…

  15. Is International Accounting Education Delivering Pedagogical Value?

    ERIC Educational Resources Information Center

    Patel, Chris; Millanta, Brian; Tweedie, Dale

    2016-01-01

    This paper examines whether universities are delivering pedagogical value to international accounting students commensurate with the costs of studying abroad. The paper uses survey and interview methods to explore the extent to which Chinese Learners (CLs) in an Australian postgraduate accounting subject have distinct learning needs. The paper…

  16. Multiscale Simulation of Microcrack Based on a New Adaptive Finite Element Method

    NASA Astrophysics Data System (ADS)

    Xu, Yun; Chen, Jun; Chen, Dong Quan; Sun, Jin Shan

    In this paper, a new adaptive finite element (FE) framework based on the variational multiscale method is proposed and applied to simulate the dynamic behavior of metal under loading. First, the extended bridging scale method is used to couple molecular dynamics and FE. Then, the macroscopic damage evolution arising from micro defects is simulated by the adaptive FE method. Some auxiliary strategies, such as conservative mesh remapping, the failure mechanism and the mesh splitting technique, are also included in the adaptive FE computation. The efficiency of our method is validated by numerical experiments.

  17. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.

  18. Robustness of an adaptive beamforming method for hearing aids.

    PubMed

    Peterson, P M; Wei, S M; Rabinowitz, W M; Zurek, P M

    1990-01-01

    We describe the results of computer simulations of a multimicrophone adaptive-beamforming system as a noise reduction device for hearing aids. Of particular concern was the system's sensitivity to violations of the underlying assumption that the target signal is identical at the microphones. Two- and four-microphone versions of the system were tested in simulated anechoic and modestly-reverberant environments with one and two jammers, and with deviations from the assumed straight-ahead target direction. Also examined were the effects of input target-to-jammer ratio and adaptive-filter length. Generally, although the noise-reduction performance of the system is degraded by target misalignment and modest reverberation, the system still provides positive advantage at input target-to-jammer ratios up to about 0 dB. This is in contrast to the degrading target-cancellation effect that the system can have when the equal-target assumption is violated and the input target-to-jammer ratio is greater than zero. PMID:2356741

  19. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  20. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool, nonlinear mode decomposition (NMD), which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques (which, together with the adaptive choice of their parameters, make it extremely noise robust) and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download. PMID:26465549

  1. Investigating Item Exposure Control Methods in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Ozturk, Nagihan Boztunc; Dogan, Nuri

    2015-01-01

    This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…

  2. A massively parallel adaptive finite element method with dynamic load balancing

    SciTech Connect

    Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.

    1993-05-01

    We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.

  3. An examination of an adapter method for measuring the vibration transmitted to the human arms

    PubMed Central

    Xu, Xueyan S.; Dong, Ren G.; Welcome, Daniel E.; Warren, Christopher; McDowell, Thomas W.

    2016-01-01

    The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system. PMID:26834309

  4. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The application was investigated of control theoretic ideas to the design of flight control systems for the F-8 aircraft. The design of an adaptive control system based upon the so-called multiple model adaptive control (MMAC) method is considered. Progress is reported.

  5. The older person has a stroke: Learning to adapt using the Feldenkrais® Method.

    PubMed

    Jackson-Wyatt, O

    1995-01-01

    The older person with a stroke requires adapted therapeutic interventions to take into account normal age-related changes. The Feldenkrais® Method presents a model for learning to promote adaptability that addresses key functional changes seen with normal aging. Clinical examples related to specific functional tasks are discussed to highlight major treatment modifications and neuromuscular, psychological, emotional, and sensory considerations. PMID:27619899

  6. An adaptive filter method for spacecraft using gravity assist

    NASA Astrophysics Data System (ADS)

    Ning, Xiaolin; Huang, Panpan; Fang, Jiancheng; Liu, Gang; Ge, Shuzhi Sam

    2015-04-01

    Celestial navigation (CeleNav) has been successfully used during gravity assist (GA) flyby for orbit determination in many deep space missions. Due to spacecraft attitude errors, ephemeris errors, the camera center-finding bias, and the frequency of the images before and after the GA flyby, the statistics of measurement noise cannot be accurately determined, and yet have time-varying characteristics, which may introduce large estimation error and even cause filter divergence. In this paper, an unscented Kalman filter (UKF) with adaptive measurement noise covariance, called ARUKF, is proposed to deal with this problem. ARUKF scales the measurement noise covariance according to the changes in innovation and residual sequences. Simulations demonstrate that ARUKF is robust to the inaccurate initial measurement noise covariance matrix and time-varying measurement noise. The impact factors in the ARUKF are also investigated.
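
    The abstract does not give the exact ARUKF update, but the kind of innovation-based scaling it refers to can be sketched generically: estimate the innovation covariance over a sliding window and subtract the predicted part to obtain a measurement-noise estimate. Everything below (the window handling, the fallback rule, the variable names) is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

def innovation_based_R(innovations, HPHt, R_nominal, eps=1e-9):
    """Generic innovation-based estimate of the measurement-noise covariance:
    R_hat ~ (sample covariance of recent innovations) - H P H^T."""
    d = np.atleast_2d(np.asarray(innovations, dtype=float))   # (window, n_meas)
    C = d.T @ d / d.shape[0]                                  # sample innovation covariance
    R_hat = C - HPHt
    R_hat = 0.5 * (R_hat + R_hat.T)                           # enforce symmetry
    # Fall back to the nominal R if the estimate loses positive definiteness
    if np.any(np.linalg.eigvalsh(R_hat) <= eps):
        return R_nominal
    return R_hat
```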

  7. Pedagogical Descriptions of Language: Lexis.

    ERIC Educational Resources Information Center

    Cowie, A. P.

    1989-01-01

    An examination is made of the advances, trends, and future developments in pedagogical lexicography with specific discussions concerning lexical research projects and language learners' dictionaries. (46 references) (GLR)

  8. New methods and astrophysical applications of adaptive mesh fluid simulations

    NASA Astrophysics Data System (ADS)

    Wang, Peng

    The formation of stars, galaxies and supermassive black holes are among the most interesting unsolved problems in astrophysics. Those problems are highly nonlinear and involve enormous dynamical ranges. Thus numerical simulations with spatial adaptivity are crucial in understanding those processes. In this thesis, we discuss the development and application of adaptive mesh refinement (AMR) multi-physics fluid codes to simulate those nonlinear structure formation problems. To simulate the formation of star clusters, we have developed an AMR magnetohydrodynamics (MHD) code, coupled with radiative cooling. We have also developed novel algorithms for sink particle creation, accretion, merging and outflows, all of which are coupled with the fluid algorithms using operator splitting. With this code, we have been able to perform the first AMR-MHD simulation of star cluster formation for several dynamical times, including sink particle and protostellar outflow feedbacks. The results demonstrated that protostellar outflows can drive supersonic turbulence in dense clumps and explain the observed slow and inefficient star formation. We also suggest that global collapse rate is the most important factor in controlling massive star accretion rate. In the topics of galaxy formation, we discuss the results of three projects. In the first project, using cosmological AMR hydrodynamics simulations, we found that isolated massive star still forms in cosmic string wakes even though the mega-parsec scale structure has been perturbed significantly by the cosmic strings. In the second project, we calculated the dynamical heating rate in galaxy formation. We found that by balancing our heating rate with the atomic cooling rate, it gives a critical halo mass which agrees with the result of numerical simulations. This demonstrates that the effect of dynamical heating should be put into semi-analytical works in the future. In the third project, using our AMR-MHD code coupled with radiative

  9. Teacher Pedagogical Constructions: A Reconfiguration of Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Hashweh, Maher Z.

    2005-01-01

    A brief review of the history of pedagogical content knowledge reveals various definitions and conceptualizations of the construct, as well as some conceptual problems. A new conceptualization--teacher pedagogical constructions--is offered to address some of the problems associated with PCK. Seven assertions that comprise the new conceptualization…

  10. Pedagogical Authority and Pedagogical Love--Connected or Incompatible?

    ERIC Educational Resources Information Center

    Maatta, Kaarina; Uusiautti, Satu

    2012-01-01

    The core questions in the modern school are: What is a good teacher like? And, how do we educate good teachers? Different eras, theories, ideologies, and conceptions of human beings influence how people can become the best kind of teacher. The fundamental idea in this article is that pedagogical love and pedagogical authority form a salient part…

  11. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one to one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be a regular global structure to the grids constructed, there will be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  12. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.

  13. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
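
    A compact sketch of the modified SMI weight computation described above: form the sample covariance from snapshots, subtract a fraction F of the noise power from its diagonal, and solve for the weights. Estimating the noise power from the smallest eigenvalue and the final normalisation are illustrative assumptions, not necessarily the choices analyzed in the report.

```python
import numpy as np

def modified_smi_weights(snapshots, steering, F):
    """Modified SMI in outline: sample covariance, minus F times the noise
    power on the diagonal, then w ~ R_mod^{-1} s (normalised)."""
    X = np.atleast_2d(snapshots)                   # shape (n_snapshots, n_elements)
    R = X.conj().T @ X / X.shape[0]                # sample covariance matrix
    noise_power = np.linalg.eigvalsh(R)[0].real    # smallest eigenvalue as noise power
    R_mod = R - F * noise_power * np.eye(R.shape[0])
    w = np.linalg.solve(R_mod, steering)
    return w / (steering.conj() @ w)               # unit response in the desired direction
```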

  14. Medical pedagogical resources management.

    PubMed

    Pouliquen, Bruno; Le Duff, Franck; Delamarre, Denis; Cuggia, Marc; Mougin, Fleur; Le Beux, Pierre

    2003-01-01

    The main objective of this work is to support the management of training resources for students using a pedagogical network available at the Medical School of Rennes. With the increase in the number of connections and in the number of medical documents available on this network, managing new content requires considerable effort from the webmaster. In order to improve the management of the resources, we implemented an automatic web engine that helps teachers manage links to the resources most relevant to their practice. PMID:14664034

  15. Pedagogical Approaches for Technology-Integrated Science Teaching

    ERIC Educational Resources Information Center

    Hennessy, Sara; Wishart, Jocelyn; Whitelock, Denise; Deaney, Rosemary; Brawn, Richard; la Velle, Linda; McFarlane, Angela; Ruthven, Kenneth; Winterbottom, Mark

    2007-01-01

    The two separate projects described have examined how teachers exploit computer-based technologies in supporting learning of science at secondary level. This paper examines how pedagogical approaches associated with these technological tools are adapted to both the cognitive and structuring resources available in the classroom setting. Four…

  16. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
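
    A minimal sketch of the restoration step, assuming a standard smoothed total-variation model with the fidelity weight tied to the measured speckle variance, is shown below. Setting the weight directly to the measured variance and using a fixed step size are illustrative choices; the paper's adaptive parameter selection is not reproduced here.

```python
import numpy as np

def tv_restore(img, noise_var, n_iter=200, tau=0.1, eps=1e-6):
    """Gradient descent on TV(u) + ||u - img||^2 / (2 * lam), with lam tied to
    the measured noise variance (illustrative choice)."""
    lam = max(float(noise_var), 1e-12)
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                    # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)                 # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += tau * (div - (u - img) / lam)                 # descend the combined energy
    return u
```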

  17. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
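
    To make the predictor-corrector structure concrete, the hedged sketch below performs Newton-like corrector iterations for F(x, λ) = 0 in which each step is constrained to be orthogonal to an approximate tangent by solving a bordered linear system. A dense solve is used here for clarity; in the setting discussed above a Krylov method such as preconditioned GMRES would take its place. The test curve and all names are illustrative.

```python
import numpy as np

def corrector_step(F, J, x, lam, tangent, tol=1e-10, max_iter=20):
    """One Newton-like corrector for path following of F(x, lam) = 0.
    The underdetermined Newton equation is closed by requiring the step to be
    orthogonal to an approximate tangent (dense bordered solve for clarity).
    tangent has length n + 1: (t_x, t_lam)."""
    n = x.size
    for _ in range(max_iter):
        r = F(x, lam)
        if np.linalg.norm(r) < tol:
            break
        Jx, Jlam = J(x, lam)                      # dF/dx (n x n) and dF/dlam (length n)
        # Bordered system: [Jx Jlam; t_x^T t_lam] [dx; dlam] = [-r; 0]
        A = np.zeros((n + 1, n + 1))
        A[:n, :n], A[:n, n] = Jx, Jlam
        A[n, :] = tangent
        rhs = np.concatenate([-r, [0.0]])
        step = np.linalg.solve(A, rhs)
        x, lam = x + step[:n], lam + step[n]
    return x, lam

# Illustrative curve: x^2 + lam^2 - 1 = 0 (unit circle), corrected near (1, 0).
F = lambda x, lam: np.array([x[0]**2 + lam**2 - 1.0])
J = lambda x, lam: (np.array([[2.0 * x[0]]]), np.array([2.0 * lam]))
tangent = np.array([0.0, 1.0])                    # approximate tangent at (1, 0)
print(corrector_step(F, J, np.array([0.98]), 0.2, tangent))
```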

  18. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.

  19. Adapting Western Research Methods to Indigenous Ways of Knowing

    PubMed Central

    Christopher, Suzanne

    2013-01-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid. PMID:23678897

  20. Solving delay differential equations in S-ADAPT by method of steps.

    PubMed

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT solutions of the DDE problems agreed with both the explicit solutions and the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. PMID:23810514
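
    The sketch below illustrates the method of steps for a scalar linear DDE y'(t) = a·y(t) + b·y(t − τ) with a constant history, integrating one delay interval at a time with SciPy's solve_ivp and feeding each segment's dense output back in as the delayed term. The parameter values are arbitrary, and this is not the S-ADAPT implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dde_method_of_steps(a, b, tau, history, t_end):
    """Solve y'(t) = a*y(t) + b*y(t - tau), with y(t) = history(t) for t <= 0,
    by integrating one delay interval at a time (method of steps sketch)."""
    segments = [history]                       # callable y(t) on each past interval
    t0, y0 = 0.0, history(0.0)
    while t0 < t_end:
        t1 = min(t0 + tau, t_end)
        delayed = segments[-1]                 # y(t - tau) lives on the previous segment
        rhs = lambda t, y: a * y + b * delayed(t - tau)
        sol = solve_ivp(rhs, (t0, t1), [y0], dense_output=True, rtol=1e-8, atol=1e-10)
        segments.append(lambda t, s=sol: float(s.sol(t)[0]))
        t0, y0 = t1, sol.y[0, -1]
    return y0

# Illustrative parameters (not from the paper): decay with delayed feedback.
print(dde_method_of_steps(a=-1.0, b=0.5, tau=1.0, history=lambda t: 1.0, t_end=5.0))
```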

  1. Automatic multirate methods for ordinary differential equations. [Adaptive time steps

    SciTech Connect

    Gear, C.W.

    1980-01-01

    A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.

  2. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics that can be used in an ensemble Kalman filtering framework. The new method modifies Belanger's recursive method to avoid the expensive cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to lag one are used, the computational cost is comparable to the recently proposed Berry-Sauer scheme. However, our method is more flexible since it allows using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the Lorenz-96 example.
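
    For orientation only, the toy sketch below shows the simplest lag-zero version of innovation-based noise estimation for a linear Kalman filter: since Cov(innovation) = H P⁻ Hᵀ + R, the observation noise covariance R can be estimated by subtracting the predicted part from the sample covariance of the innovations. This is merely the underlying idea, not the Belanger or Berry-Sauer schemes compared in the paper.

```python
import numpy as np

def estimate_R_from_innovations(innovations, H, P_prior):
    """Lag-zero innovation-based estimate of the observation noise covariance:
    Cov(d) = H P^- H^T + R  =>  R_hat = sample_cov(d) - H P^- H^T (sketch)."""
    d = np.asarray(innovations)                     # shape (num_steps, obs_dim)
    sample_cov = d.T @ d / d.shape[0]
    return sample_cov - H @ P_prior @ H.T

# Hypothetical 1-D example: y_k = x_k + v_k with an assumed steady-state prior covariance.
rng = np.random.default_rng(1)
H = np.array([[1.0]])
P_prior = np.array([[0.3]])
true_R = 0.5
innov = rng.normal(scale=np.sqrt(P_prior[0, 0] + true_R), size=(5000, 1))
print(estimate_R_from_innovations(innov, H, P_prior))   # close to 0.5
```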

  3. Platform Support for Pedagogical Scenarios

    ERIC Educational Resources Information Center

    Peter, Yvan; Vantroys, Thomas

    2005-01-01

    This article deals with providing support for the execution of pedagogical scenarios in Learning Management Systems. It takes an engineering point of view to identify actors and the design and use processes. Next it defines the necessary capabilities of a platform so that actors can manage or use pedagogical scenarios. The second part of the article is…

  4. A massively parallel adaptive finite element method with dynamic load balancing

    SciTech Connect

    Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.

    1993-12-31

    The authors construct massively parallel adaptive finite element methods for the solution of hyperbolic conservation laws. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. They demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. They present results using adaptive p-refinement to reduce the computational cost of the method, and tiling, a dynamic, element-based data migration system that maintains global load balance of the adaptive method by overlapping neighborhoods of processors that each perform local balancing.

  5. Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua; Huebner, Alan

    2011-01-01

    This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…

  6. Weighted Structural Regression: A Broad Class of Adaptive Methods for Improving Linear Prediction.

    ERIC Educational Resources Information Center

    Pruzek, Robert M.; Lepak, Greg M.

    1992-01-01

    Adaptive forms of weighted structural regression are developed and discussed. Bootstrapping studies indicate that the new methods have potential to recover known population regression weights and predict criterion score values routinely better than do ordinary least squares methods. The new methods are scale free and simple to compute. (SLD)

  7. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  8. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
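
    The sketch below conveys the flavor of residual tuning on a scalar random-walk filter: the measurement residuals are monitored, and the assumed measurement noise variance is nudged until the observed residual variance matches the variance the filter predicts. The model, window, and gain are illustrative choices, not the algorithm used for the WIRE star tracker and gyro.

```python
import numpy as np

def residual_tuned_kf(z, q, r0, window=50, gain=0.1):
    """Scalar random-walk Kalman filter whose measurement noise variance r is
    nudged so predicted and observed residual variances agree (sketch)."""
    x, p, r = z[0], 1.0, r0
    residuals, estimates = [], []
    for zk in z:
        p += q                                  # time update (random-walk process noise)
        s = p + r                               # predicted residual variance
        nu = zk - x                             # measurement residual
        k = p / s
        x += k * nu
        p *= (1.0 - k)
        residuals.append(nu)
        estimates.append(x)
        if len(residuals) >= window:            # residual tuning step
            observed = np.var(residuals[-window:])
            r = max(1e-8, r + gain * (observed - s))
    return np.array(estimates), r

# Hypothetical data: constant state observed with true noise variance 2.0,
# filter initialized with a mistuned r0 = 0.2.
rng = np.random.default_rng(2)
z = 1.0 + rng.normal(scale=np.sqrt(2.0), size=1000)
est, r_final = residual_tuned_kf(z, q=1e-4, r0=0.2)
print(r_final)    # drifts toward roughly 2.0
```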

  9. Adapting and using quality management methods to improve health promotion.

    PubMed

    Becker, Craig M; Glascoff, Mary A; Felts, William Michael; Kent, Christopher

    2015-01-01

    Although the western world is the most technologically advanced civilization to date, its adult population is also the most addicted, obese, medicated, and indebted in history. Experts had predicted that the 21st century would be a time of better health and prosperity. Although wealth has increased, our quest to quell health problems using a pathogenic approach without understanding the interconnectedness of everyone and everything has damaged personal and planetary health. While current efforts help identify and eliminate causes of problems, they do not facilitate the creation of health and well-being as would be done with a salutogenic approach. Sociologist Aaron Antonovsky coined the term salutogenesis in 1979. It is derived from salus, which is Latin for health, and genesis, meaning to give birth. Salutogenesis, the study of the origins and creation of health, provides a method to identify an interconnected way to enhance well-being. Salutogenesis provides a framework for a method of practice to improve health promotion efforts. This article illustrates how quality management methods can be used to guide health promotion efforts focused on improving health beyond the absence of disease. PMID:25777291

  10. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  11. Adaptive Discrete Equation Method for injection of stochastic cavitating flows

    NASA Astrophysics Data System (ADS)

    Geraci, Gianluca; Rodio, Maria Giovanna; Iaccarino, Gianluca; Abgrall, Remi; Congedo, Pietro

    2014-11-01

    This work aims at improving the prediction and control of biofuel injection for combustion. Common injectors should be optimized according to the specific physical/chemical properties of biofuels. To this end, an optimized model for reproducing the injection of several biofuel blends will be considered. The originality of this approach is twofold: i) the use of cavitating two-phase compressible models, known as Baer & Nunziato models, in order to reproduce the injection, and ii) the design of a global scheme for directly taking experimental measurement uncertainties into account in the simulation. In particular, intrusive stochastic methods display a high efficiency when dealing with discontinuities in unsteady compressible flows. We have recently formulated a new scheme for simulating stochastic multiphase flows relying on the Discrete Equation Method (DEM) for describing multiphase effects. The set-up of the intrusive stochastic method for multiphase unsteady compressible flows in a quasi-1D configuration will be presented. The target test case is a multiphase unsteady nozzle for the injection of biofuels, described by complex thermodynamic models, for which experimental data and associated uncertainties are available.

  12. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    PubMed

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675

  13. The Pilates method and cardiorespiratory adaptation to training.

    PubMed

    Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen

    2016-01-01

    Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities. PMID:27357919

  14. Error estimation and adaptive order nodal method for solving multidimensional transport problems

    SciTech Connect

    Zamonsky, O.M.; Gho, C.J.; Azmy, Y.Y.

    1998-01-01

    The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby they solve each node and each direction using different expansion order. With this feature and a previously proposed a posteriori error estimator they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method, the error estimator and the adaptive order scheme into a discrete-ordinates code for solving monoenergetic, fixed source, isotropic scattering problems in two-dimensional Cartesian geometry. They solve two test problems with large homogeneous regions to test the adaptive order scheme. The results show that using the adaptive process the storage requirements are reduced while preserving the accuracy of the results.
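
    A generic sketch of the adaptation loop implied above is given below: given per-cell a posteriori error estimates, the local expansion order is raised where the estimate exceeds a tolerance and lowered where it is far below, within prescribed bounds. The solver and estimator calls are stand-in names; only the order-control logic is illustrated, not the authors' scheme.

```python
import numpy as np

def adapt_orders(orders, error_estimates, tol, min_order=0, max_order=8, drop_factor=0.1):
    """Raise the local expansion order where the a posteriori error estimate
    exceeds tol, and lower it where the estimate is well below tol (sketch)."""
    orders = np.array(orders, dtype=int)
    raise_mask = error_estimates > tol
    lower_mask = error_estimates < drop_factor * tol
    orders[raise_mask] = np.minimum(orders[raise_mask] + 1, max_order)
    orders[lower_mask] = np.maximum(orders[lower_mask] - 1, min_order)
    return orders

# Hypothetical driver: re-solve until every cell meets the tolerance or the orders stop changing.
# orders = np.full(num_cells, 2)
# while True:
#     solution = solve_transport(orders)            # stand-in for the nodal solver
#     eta = estimate_errors(solution, orders)       # stand-in a posteriori estimator
#     new_orders = adapt_orders(orders, eta, tol=1e-4)
#     if np.array_equal(new_orders, orders):
#         break
#     orders = new_orders
print(adapt_orders([2, 2, 2], np.array([1e-3, 1e-5, 1e-6]), tol=1e-4))   # -> [3 2 1]
```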

  15. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  16. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  17. Impedance adaptation methods of the piezoelectric energy harvesting

    NASA Astrophysics Data System (ADS)

    Kim, Hyeoungwoo

    In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing the mechanical impedance, such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled by a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, employed to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an electrical system, using the analogy between the two systems, in order to simplify the total mechanical impedance. Secondly, the transduction rate of mechanical energy to electrical energy was improved by using a PZT material which has a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. The high-g material (g33 = 40 × 10⁻³ Vm/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer was found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because it has an effective strain coefficient almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic to sustain AC load along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and high electromechanical coupling.

  18. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
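
    In one dimension the spring analogy reduces to a small relaxation loop, sketched below: intervals where the solution gradient is large are given stiffer springs, so the relaxed grid clusters points there while a user-specified minimum spacing is enforced, echoing the spacing controls mentioned above. The weight function and relaxation parameters are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def spring_adapt_1d(x, weights, n_sweeps=200, relax=0.5, min_dx=1e-3):
    """Redistribute interior points of a 1-D grid by relaxing a spring system
    whose per-interval stiffness is the supplied weight; spacing is kept above
    a user minimum (illustrative sketch)."""
    x = x.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(1, len(x) - 1):
            kl, kr = weights[i - 1], weights[i]        # spring stiffness left/right of point i
            target = (kl * x[i - 1] + kr * x[i + 1]) / (kl + kr)
            x[i] += relax * (target - x[i])
            x[i] = min(max(x[i], x[i - 1] + min_dx), x[i + 1] - min_dx)
    return x

# Hypothetical example: cluster points near a steep tanh front at x = 0.5.
x = np.linspace(0.0, 1.0, 21)
u = np.tanh(40.0 * (x - 0.5))
grad = np.abs(np.diff(u) / np.diff(x))
weights = 1.0 + grad                                   # stiffer springs across the front
print(spring_adapt_1d(x, weights))
```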

  19. Research on adaptive segmentation and activity classification method of filamentous fungi image in microbe fermentation

    NASA Astrophysics Data System (ADS)

    Cai, Xiaochun; Hu, Yihua; Wang, Peng; Sun, Dujuan; Hu, Guilan

    2009-10-01

    The paper presents an adaptive segmentation and activity classification method for filamentous fungi images. First, an adaptive structuring element (SE) construction algorithm is proposed for image background suppression. Based on the watershed transform, a color-labeled segmentation of the fungi image is obtained. Second, the feature space of the fungi elements is described and the feature set for fungi hyphae activity classification is extracted. The growth rate of the fungi hyphae is evaluated using an SVM classifier. Experimental results demonstrate that the proposed method is effective for filamentous fungi image processing.

  20. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations, or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis of our non-intrusive method forms a stable multiscale splitting, and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  1. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems at and beyond the limits of what traditional ALE methods can solve by focusing computational resources where they are required through dynamic adaptation. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  2. Adaptation of the TCLP and SW-846 methods to radioactive mixed waste

    SciTech Connect

    Griest, W.H.; Schenley, R.L.; Caton, J.E.; Wolfe, P.F.

    1994-07-01

    Modifications of conventional sample preparation and analytical methods are necessary to provide radiation protection and to meet sensitivity requirements for regulated constituents when working with radioactive samples. Adaptations of regulatory methods for determining "total" Toxicity Characteristic Leaching Procedure (TCLP) volatile and semivolatile organics and pesticides, and for conducting aqueous leaching are presented.

  3. Five Methods to Score the Teacher Observation of Classroom Adaptation Checklist and to Examine Group Differences

    ERIC Educational Resources Information Center

    Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy

    2015-01-01

    This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…

  4. An adaptive, formally second order accurate version of the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.

    2007-04-01

    Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves

  5. An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations

    NASA Astrophysics Data System (ADS)

    Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.

    2016-08-01

    In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.

  6. Estimating the Importance of Private Adaptation to Climate Change in Agriculture: A Review of Empirical Methods

    NASA Astrophysics Data System (ADS)

    Moore, F.; Burke, M.

    2015-12-01

    A wide range of studies using a variety of methods strongly suggest that climate change will have a negative impact on agricultural production in many areas. Farmers though should be able to learn about a changing climate and to adjust what they grow and how they grow it in order to reduce these negative impacts. However, it remains unclear how effective these private (autonomous) adaptations will be, or how quickly they will be adopted. Constraining the uncertainty on this adaptation is important for understanding the impacts of climate change on agriculture. Here we review a number of empirical methods that have been proposed for understanding the rate and effectiveness of private adaptation to climate change. We compare these methods using data on agricultural yields in the United States and western Europe.

  7. The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping

    PubMed Central

    Mhaidat, Fatin

    2016-01-01

    This study aimed at identifying the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods that were used to cope with these problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate who came to Jordan due to the war conditions in their home country. The study used a scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on the behavioral adjustment methods for dealing with the problem of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems and that they used positive adjustment methods more often than negative ones. PMID:27175098

  8. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  9. A new adaptive exponential smoothing method for non-stationary time series with level shifts

    NASA Astrophysics Data System (ADS)

    Monfared, Mohammad Ali Saniee; Ghandali, Razieh; Esmaeili, Maryam

    2014-07-01

    Simple exponential smoothing (SES) methods are the most commonly used methods in forecasting and time series analysis. However, they are generally insensitive to non-stationary structural events such as level shifts, ramp shifts, and spikes or impulses. Similar to that of outliers in stationary time series, these non-stationary events will lead to increased level of errors in the forecasting process. This paper generalizes the SES method into a new adaptive method called revised simple exponential smoothing (RSES), as an alternative method to recognize non-stationary level shifts in the time series. We show that the new method improves the accuracy of the forecasting process. This is done by controlling the number of observations and the smoothing parameter in an adaptive approach, and in accordance with the laws of statistical control limits and the Bayes rule of conditioning. We use a numerical example to show how the new RSES method outperforms its traditional counterpart, SES.
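
    In the same spirit, the sketch below adds a crude level-shift reaction to ordinary SES: when the latest one-step-ahead error falls outside a control limit built from recent errors, the smoothing parameter is temporarily increased so the level catches up. The control-limit rule and parameter values are illustrative, and this is not the RSES formulation itself.

```python
import numpy as np

def adaptive_ses(y, alpha=0.2, alpha_shift=0.8, window=20, k=3.0):
    """Simple exponential smoothing with a crude level-shift reaction:
    if the latest error exceeds k sigma of recent errors, smooth aggressively
    for one step so the level catches up (illustrative sketch)."""
    level = y[0]
    errors, fitted = [], []
    for obs in y:
        fitted.append(level)
        e = obs - level
        recent = errors[-window:]
        sigma = np.std(recent) if len(recent) >= window else None
        a = alpha_shift if (sigma is not None and abs(e) > k * sigma) else alpha
        level += a * e
        errors.append(e)
    return np.array(fitted)

# Hypothetical series with a level shift at t = 100.
rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(10, 1, 100), rng.normal(16, 1, 100)])
fit = adaptive_ses(y)
print(fit[98:105].round(2))   # the one-step-ahead forecast jumps quickly after the shift
```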

  10. Pedagogical Implications of Contrastive Studies

    ERIC Educational Resources Information Center

    Marton, Waldemar

    1972-01-01

    Pessimism regarding pedagogical applications of contrastive studies, and reasons therefore, are described. Several misunderstandings believed to contribute to this pessimism, and several areas of controversy concerning uses of contrastive studies, are discussed. See FL 508 197 for availability. (RM)

  11. Software for the parallel adaptive solution of conservation laws by discontinuous Galerkin methods.

    SciTech Connect

    Flaherty, J. E.; Loy, R. M.; Shephard, M. S.; Teresco, J. D.

    1999-08-17

    The authors develop software tools for the solution of conservation laws using parallel adaptive discontinuous Galerkin methods. In particular, the Rensselaer Partition Model (RPM) provides parallel mesh structures within an adaptive framework to solve the Euler equations of compressible flow by a discontinuous Galerkin method (LOCO). Results are presented for a Rayleigh-Taylor flow instability for computations performed on 128 processors of an IBM SP computer. In addition to managing the distributed data and maintaining a load balance, RPM provides information about the parallel environment that can be used to tailor partitions to a specific computational environment.

  12. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  13. An Adaptive Altitude Information Fusion Method for Autonomous Landing Processes of Small Unmanned Aerial Rotorcraft

    PubMed Central

    Lei, Xusheng; Li, Jingjing

    2012-01-01

    This paper presents an adaptive information fusion method to improve the accuracy and reliability of the altitude measurement information for small unmanned aerial rotorcraft during the landing process. Focusing on the low measurement performance of sensors mounted on small unmanned aerial rotorcraft, a wavelet filter is applied as a pre-filter to attenuate the high frequency noises in the sensor output. Furthermore, to improve altitude information, an adaptive extended Kalman filter based on a maximum a posteriori criterion is proposed to estimate measurement noise covariance matrix in real time. Finally, the effectiveness of the proposed method is proved by static tests, hovering flight and autonomous landing flight tests. PMID:23201993

  14. Adaptive spatial carrier frequency method for fast monitoring optical properties of fibres

    NASA Astrophysics Data System (ADS)

    Sokkar, T. Z. N.; El-Farahaty, K. A.; El-Bakary, M. A.; Omar, E. Z.; Agour, M.; Hamza, A. A.

    2016-05-01

    We present an extension of the adaptive spatial carrier frequency method, proposed for fast measurement of the optical properties of fibrous materials. The method consists of two complementary steps. In the first step, the support of the adaptive filter is defined. In the second step, the angle between the sample under test and the interference fringe system generated by the interferometer is determined, and the support of the optical filter associated with the adaptive spatial carrier frequency method is rotated accordingly. The method is experimentally verified by measuring the optical properties of a polypropylene (PP) fibre with the help of a Mach-Zehnder interferometer. The results show that errors resulting from rotating the fibre with respect to the interference fringes of the interferometer are reduced compared with the traditional band-pass filter method. This conclusion was drawn by comparing results for the mean refractive index of drawn PP fibre in the parallel polarization direction obtained from the new adaptive spatial carrier frequency method.

  15. Differences in Pedagogical Understanding among Student-Teachers in a Four-Year Initial Teacher Education Programme

    ERIC Educational Resources Information Center

    Cheng, May M. H.; Tang, Sylvia Y. F.; Cheng, Annie Y. N.

    2014-01-01

    As teacher educators, preparing student-teachers who are able to address diverse student needs is our main concern. It has been suggested in the literature that teachers who are adaptive to students' needs are those who possess adequate pedagogical content knowledge or pedagogical understanding. However, it is not uncommon for teacher…

  16. A modified implicit Monte Carlo method for time-dependent radiative transfer with adaptive material coupling

    SciTech Connect

    McClarren, Ryan G. Urbatsch, Todd J.

    2009-09-01

    In this paper we develop a robust implicit Monte Carlo (IMC) algorithm based on more accurately updating the linearized equilibrium radiation energy density. The method does not introduce oscillations in the solution and has the same limit as Δt → ∞ as the standard Fleck and Cummings IMC method. Moreover, the approach we introduce can be trivially added to current implementations of IMC by changing the definition of the Fleck factor. Using this new method we develop an adaptive scheme that uses either standard IMC or the modified method basing the adaptation on a zero-dimensional problem solved in each cell. Numerical results demonstrate that the new method can avoid the nonphysical overheating that occurs in standard IMC when the time step is large. The method also leads to decreased noise in the material temperature at the cost of a potential increase in the radiation temperature noise.

  17. Effects of light curing method and resin composite composition on composite adaptation to the cavity wall.

    PubMed

    Yoshikawa, Takako; Morigami, Makoto; Sadr, Alireza; Tagami, Junji

    2014-01-01

    This study aimed to evaluate the effects of the light curing method and resin composite composition on marginal sealing and resin composite adaptation to the cavity wall. Cylindrical cavities were prepared on the buccal or lingual cervical regions. The teeth were restored using the Clearfil Liner Bond 2V adhesive system and filled with Clearfil Photo Bright or Palfique Estelite resin composite. The resins were cured using the conventional or slow-start light curing method. After thermal cycling, the specimens were subjected to a dye penetration test. The slow-start curing method showed better resin composite adaptation to the cavity wall for both composites. Furthermore, the slow-start curing method resulted in significantly improved dentin marginal sealing compared with the conventional method for Clearfil Photo Bright. The light-cured resin composite that exhibited increased contrast ratios during polymerization appears to provide high compensation for polymerization contraction stress when the slow-start curing method is used. PMID:24988883

  18. A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES

    SciTech Connect

    Druckmueller, M.

    2013-08-15

    A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.

  19. A density-based adaptive quantum mechanical/molecular mechanical method.

    PubMed

    Waller, Mark P; Kumbhar, Sadhana; Yang, Jack

    2014-10-20

    We present a density-based adaptive quantum mechanical/molecular mechanical (DBA-QM/MM) method, whereby molecules can switch layers from the QM to the MM region and vice versa. The adaptive partitioning of the molecular system ensures that the layer assignment can change during the optimization procedure, that is, on the fly. A QM molecule is switched to an MM molecule if it has no noncovalent interactions with any atom of the QM core region. The presence/absence of noncovalent interactions is determined by analysis of the reduced density gradient. Therefore, the location of the QM/MM boundary is based on physical arguments, and this neatly removes some empiricism inherent in previous adaptive QM/MM partitioning schemes. The DBA-QM/MM method is validated by using a water-in-water setup and an explicitly solvated L-alanyl-L-alanine dipeptide. PMID:24954803

  20. A GPU-accelerated adaptive discontinuous Galerkin method for level set equation

    NASA Astrophysics Data System (ADS)

    Karakus, A.; Warburton, T.; Aksel, M. H.; Sert, C.

    2016-01-01

    This paper presents a GPU-accelerated nodal discontinuous Galerkin method for the solution of two- and three-dimensional level set (LS) equation on unstructured adaptive meshes. Using adaptive mesh refinement, computations are localised mostly near the interface location to reduce the computational cost. Small global time step size resulting from the local adaptivity is avoided by local time-stepping based on a multi-rate Adams-Bashforth scheme. Platform independence of the solver is achieved with an extensible multi-threading programming API that allows runtime selection of different computing devices (GPU and CPU) and different threading interfaces (CUDA, OpenCL and OpenMP). Overall, a highly scalable, accurate and mass conservative numerical scheme that preserves the simplicity of LS formulation is obtained. Efficiency, performance and local high-order accuracy of the method are demonstrated through distinct numerical test cases.

  1. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system 5, as well as a nonlinear amplifier 6.
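
    To illustrate the kind of optimization being described, the sketch below fits a small Wiener-style filter (a linear FIR stage followed by a static tanh nonlinearity) with a damped Gauss-Newton loop and a simple adaptive learning rate that shrinks until a step reduces the cost and then cautiously re-expands. It is a generic sketch with a finite-difference Jacobian, not the patented adaptive modified Gauss-Newton algorithm.

```python
import numpy as np

def fir_tanh(w, x, taps=4):
    """Wiener-style model: FIR filter (w[:taps]) followed by tanh scaled by w[taps]."""
    h, gain = w[:taps], w[taps]
    lin = np.convolve(x, h, mode="full")[: len(x)]
    return gain * np.tanh(lin)

def gauss_newton_fit(x, y, taps=4, iters=30, lr=1.0):
    """Damped Gauss-Newton with a crude adaptive learning rate (sketch)."""
    w = 0.1 * np.ones(taps + 1)
    cost = lambda w: np.sum((y - fir_tanh(w, x, taps)) ** 2)
    for _ in range(iters):
        r = y - fir_tanh(w, x, taps)
        J = np.empty((len(x), w.size))
        eps = 1e-6
        for j in range(w.size):                         # finite-difference Jacobian of the model
            wp = w.copy()
            wp[j] += eps
            J[:, j] = (fir_tanh(wp, x, taps) - fir_tanh(w, x, taps)) / eps
        step = np.linalg.solve(J.T @ J + 1e-8 * np.eye(w.size), J.T @ r)
        while cost(w + lr * step) > cost(w) and lr > 1e-6:
            lr *= 0.5                                   # shrink until the step helps
        w = w + lr * step
        lr = min(2.0 * lr, 1.0)                         # cautiously re-expand
    return w

# Hypothetical identification task with known "true" parameters.
rng = np.random.default_rng(4)
x = rng.standard_normal(500)
w_true = np.array([0.5, -0.3, 0.2, 0.1, 1.5])
y = fir_tanh(w_true, x) + 0.01 * rng.standard_normal(500)
print(gauss_newton_fit(x, y).round(3))
```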

  2. An adaptive mesh finite volume method for the Euler equations of gas dynamics

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2016-06-01

    The Euler equations have been used to model gas dynamics for decades. They consist of mathematical equations for the conservation of mass, momentum, and energy of the gas. For a large time value, the solution may contain discontinuities, even when the initial condition is smooth. A standard finite volume numerical method is not able to give accurate solutions to the Euler equations around discontinuities. Therefore we solve the Euler equations using an adaptive mesh finite volume method. In this paper, we present a new construction of the adaptive mesh finite volume method with an efficient computation of the refinement indicator. The adaptive method takes action automatically at around places having inaccurate solutions. Inaccurate solutions are reconstructed to reduce the error by refining the mesh locally up to a certain level. On the other hand, if the solution is already accurate, then the mesh is coarsened up to another certain level to minimize computational efforts. We implement the numerical entropy production as the mesh refinement indicator. As a test problem, we take the Sod shock tube problem. Numerical results show that the adaptive method is more promising than the standard one in solving the Euler equations of gas dynamics.
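
    The sketch below spells out one common way to compute such a numerical entropy production indicator for a 1-D gas-dynamics state: the residual of the entropy inequality over one time step, evaluated with a simple central entropy flux and periodic wrapping at the ends. Cells where the indicator is large would be flagged for refinement and cells where it is tiny for coarsening. The exact indicator definition and thresholds in the paper may differ.

```python
import numpy as np

def entropy_production_indicator(rho0, u0, p0, rho1, u1, p1, dt, dx, gamma=1.4):
    """Cell-wise numerical entropy production over one time step, usable as a
    refinement indicator: residual of the entropy inequality with a simple
    central entropy flux and periodic ends (illustrative definition)."""
    eta = lambda rho, p: -rho * np.log(p * rho ** (-gamma))   # entropy function U = -rho*s
    q = lambda rho, u, p: u * eta(rho, p)                     # entropy flux F = -rho*u*s
    q0 = q(rho0, u0, p0)
    dqdx = (np.roll(q0, -1) - np.roll(q0, 1)) / (2.0 * dx)    # central difference
    residual = (eta(rho1, p1) - eta(rho0, p0)) / dt + dqdx
    return np.abs(residual)

# Hypothetical use inside an adaptation loop:
# indicator = entropy_production_indicator(rho_n, u_n, p_n, rho_np1, u_np1, p_np1, dt, dx)
# refine_mask = indicator > 1e-3       # illustrative threshold
# coarsen_mask = indicator < 1e-7
```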

  3. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  4. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations

    SciTech Connect

    Anderson, R W; Elliott, N S; Pember, R B

    2003-02-14

    A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.

  5. Applications of automatic mesh generation and adaptive methods in computational medicine

    SciTech Connect

    Schmidt, J.A.; Macleod, R.S.; Johnson, C.R.; Eason, J.C.

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  6. Development and evaluation of a method of calibrating medical displays based on fixed adaptation

    SciTech Connect

    Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus

    2015-04-15

    Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical for a modern radiology department. Methods: Low contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two alternative forced choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns compared to the contrast sensitivity at the adaptation luminance were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than for the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically
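
    As background on the kind of quantity the observer study rests on, the standard conversion from fraction correct in a two-alternative forced-choice experiment to a detectability index d′ can be sketched as follows; this is generic psychophysics, not the authors' code, and the calibration procedure itself is not reproduced.

        from statistics import NormalDist

        def detectability_index(fraction_correct):
            """Standard 2AFC relation d' = sqrt(2) * z(Pc), where z is the inverse of the
            standard normal CDF; illustrates how a detectability index can be obtained
            from the fraction of correct responses."""
            return 2.0 ** 0.5 * NormalDist().inv_cdf(fraction_correct)

        print(detectability_index(0.90))   # roughly 1.81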

  7. Adaptive non-local means method for speckle reduction in ultrasound images

    NASA Astrophysics Data System (ADS)

    Ai, Ling; Ding, Mingyue; Zhang, Xuming

    2016-03-01

    Noise removal is a crucial step to enhance the quality of ultrasound images. However, some existing despeckling methods cannot ensure satisfactory restoration performance. In this paper, an adaptive non-local means (ANLM) filter is proposed for speckle noise reduction in ultrasound images. The distinctive property of the proposed method lies in that the decay parameter will not take the fixed value for the whole image but adapt itself to the variation of the local features in the ultrasound images. In the proposed method, the pre-filtered image will be obtained using the traditional NLM method. Based on the pre-filtered result, the local gradient will be computed and it will be utilized to determine the decay parameter adaptively for each image pixel. The final restored image will be produced by the ANLM method using the obtained decay parameters. Simulations on the synthetic image show that the proposed method can deliver sufficient speckle reduction while preserving image details very well and it outperforms the state-of-the-art despeckling filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Experiments on the clinical ultrasound image further demonstrate the practicality and advantage of the proposed method over the compared filtering methods.
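
    A small sketch of how a per-pixel decay parameter could be derived from the local gradient of a pre-filtered image, as the abstract describes. The exact mapping used by the authors is not given here, so the linear rescaling, the parameter names and the bounds below are assumptions.

        import numpy as np

        def adaptive_decay(prefiltered, h_min=0.05, h_max=0.30):
            """Map the local gradient magnitude of the NLM pre-filtered image to a per-pixel
            decay parameter: strong edges get a small h (detail preservation), flat regions
            a large h (stronger smoothing)."""
            gy, gx = np.gradient(prefiltered.astype(float))
            g = np.hypot(gx, gy)
            g = (g - g.min()) / (g.max() - g.min() + 1e-12)   # normalise gradient to [0, 1]
            return h_max - (h_max - h_min) * g

        # The per-pixel h would then enter the non-local means weights,
        # e.g. w = exp(-patch_distance / h[i, j] ** 2), instead of a single global value.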

  8. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    Both the propagation simulation method and the choice of mesh grid are very important for obtaining correct results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed for use with the introduced propagation method. With it, appropriate mesh grids on the target board can be calculated to obtain satisfying results, and for a complex initial wave field or propagation through inhomogeneous media the mesh grid can likewise be calculated and set rationally. Finally, comparison with theoretical results shows that the simulation results of the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
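
    For reference, the classical angular spectrum propagation step that the proposed method builds on can be sketched as follows. The alterable target-board mesh and the wave-characteristics-based grid selection of the paper are not reproduced; function and parameter names are illustrative.

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate a sampled complex field u0 (square grid, spacing dx) over distance z
            with the classical angular spectrum method; evanescent components are discarded."""
            n = u0.shape[0]
            k = 2.0 * np.pi / wavelength
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = k * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)   # transfer function in the spectral domain
            return np.fft.ifft2(np.fft.fft2(u0) * H)

        # Example: propagate a plane wave through a circular aperture by 5 cm at 633 nm.
        n, dx = 256, 10e-6
        x = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(x, x)
        u0 = (X ** 2 + Y ** 2 < (0.4e-3) ** 2).astype(complex)
        u1 = angular_spectrum_propagate(u0, 633e-9, dx, 0.05)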

  9. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite volume based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  10. New cardiac MRI gating method using event-synchronous adaptive digital filter.

    PubMed

    Park, Hodong; Park, Youngcheol; Cho, Sungpil; Jang, Bongryoel; Lee, Kyoungjoung

    2009-11-01

    When imaging the heart using MRI, an artefact-free electrocardiograph (ECG) signal is not only important for monitoring the patient's heart activity but also essential for cardiac gating to reduce noise in MR images induced by moving organs. The fundamental problem in conventional ECG is the distortion induced by electromagnetic interference. Here, we propose an adaptive algorithm for the suppression of MR gradient artefacts (MRGAs) in ECG leads of a cardiac MRI gating system. We have modeled MRGAs by assuming a source of strong pulses used for dephasing the MR signal. The modeled MRGAs are rectangular pulse-like signals. We used an event-synchronous adaptive digital filter whose reference signal is synchronous to the gradient peaks of MRI. The event detection processor for the event-synchronous adaptive digital filter was implemented using the phase space method (a sort of topology mapping method) and a least-squares acceleration filter. For evaluating the efficiency of the proposed method, the filter was tested using simulated and actual data. The proposed method requires a simple experimental setup that does not require extra hardware connections to obtain the reference signals of the adaptive digital filter. The proposed algorithm was more effective than the multichannel approach. PMID:19644754
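
    The adaptive-filter part of the scheme is, in essence, adaptive noise cancellation with a reference synchronous to the detected gradient events; a generic LMS canceller illustrating that structure is sketched below. The event detection via the phase space method and the least-squares acceleration filter are not shown, and the filter length and step size are assumptions.

        import numpy as np

        def lms_cancel(primary, reference, taps=32, mu=0.01):
            """Generic LMS adaptive noise cancellation: 'primary' is the artefact-corrupted
            ECG and 'reference' is a signal correlated only with the artefact (here it would
            be the pulse train synchronous to the MR gradient peaks). The filter output
            estimates the artefact; the error signal is the cleaned ECG."""
            primary = np.asarray(primary, dtype=float)
            reference = np.asarray(reference, dtype=float)
            w = np.zeros(taps)
            cleaned = np.zeros(len(primary))
            for n in range(taps, len(primary)):
                x = reference[n - taps:n][::-1]
                artefact_estimate = w @ x
                e = primary[n] - artefact_estimate
                w += 2.0 * mu * e * x                # LMS weight update
                cleaned[n] = e
            return cleaned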

  11. Item Pocket Method to Allow Response Review and Change in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2013-01-01

    Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses because it could seriously deteriorate the efficiency of measurement and make tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…

  12. Method for reducing the drag of blunt-based vehicles by adaptively increasing forebody roughness

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A. (Inventor); Saltzman, Edwin J. (Inventor); Moes, Timothy R. (Inventor); Iliff, Kenneth W. (Inventor)

    2005-01-01

    A method for reducing drag upon a blunt-based vehicle by adaptively increasing forebody roughness to increase drag at the roughened area of the forebody, which results in a decrease in drag at the base of the vehicle, and in total vehicle drag.

  13. [Correction of autonomic reaction parameters in the cosmonaut's organism with the adaptive biocontrol method]

    NASA Technical Reports Server (NTRS)

    Kornilova, L. N.; Cowings, P. S.; Toscano, W. B.; Arlashchenko, N. I.; Korneev, D. Iu; Ponomarenko, A. V.; Salagovich, S. V.; Sarantseva, A. V.; Kozlovskaia, I. B.

    2000-01-01

    Presented are results of testing the method of adaptive biocontrol during preflight training of cosmonauts. Within the MIR-25 crew, a high level of controllability of the autonomic reactions was characteristic of Flight Commanders MIR-23 and MIR-25 and Flight Engineer MIR-23, while Flight Engineer MIR-25 displayed a weak, intricate dependence of these reactions on the depth of relaxation or strain.

  14. Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy

    2006-01-01

    This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of the Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR, for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.

  15. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  16. Non-orthogonal spin-adaptation of coupled cluster methods: A new implementation of methods including quadruple excitations

    SciTech Connect

    Matthews, Devin A.; Stanton, John F.

    2015-02-14

    The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).

  17. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    NASA Astrophysics Data System (ADS)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicles (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To get a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established considering the fuzzy factors of the system such that a proper compromise trajectory can be acquired. In addition, the NSGA-II is tested to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible in terms of dealing with the multi-objective skip trajectory optimization for the SMV.

  18. An Adaptive Instability Suppression Controls Method for Aircraft Gas Turbine Engine Combustors

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; DeLaat, John C.; Chang, Clarence T.

    2008-01-01

    An adaptive controls method for instability suppression in gas turbine engine combustors has been developed and successfully tested with a realistic aircraft engine combustor rig. This testing was part of a program that demonstrated, for the first time, successful active combustor instability control in an aircraft gas turbine engine-like environment. The controls method is called Adaptive Sliding Phasor Averaged Control. Testing of the control method has been conducted in an experimental rig with different configurations designed to simulate combustors with instabilities of about 530 and 315 Hz. Results demonstrate the effectiveness of this method in suppressing combustor instabilities. In addition, a dramatic improvement in suppression of the instability was achieved by focusing control on the second harmonic of the instability. This is believed to be due to a phenomenon discovered and reported earlier, the so-called Intra-Harmonic Coupling. These results may have implications for future research in combustor instability control.

  19. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients.

    PubMed

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2012-02-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface technique based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L(∞) and L(2) errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356

  20. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients

    PubMed Central

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2011-01-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface technique based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
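
    The monitor-function idea behind the mesh redistribution in the two records above can be illustrated in one dimension by de Boor-style equidistribution, which clusters nodes where a gradient-based monitor is large. The actual method solves a mesh transformation PDE in higher dimensions and enforces the MIB interface conditions, none of which is shown here; this is only the underlying principle, with made-up parameters.

        import numpy as np

        def equidistribute(x_uniform, monitor):
            """Place new nodes so that each cell carries an equal share of the integral of
            the monitor function (1D illustration of monitor-driven mesh redistribution)."""
            cumulative = np.concatenate(([0.0],
                np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x_uniform))))
            targets = np.linspace(0.0, cumulative[-1], len(x_uniform))
            return np.interp(targets, cumulative, x_uniform)

        x = np.linspace(0.0, 1.0, 41)
        u = np.tanh(50.0 * (x - 0.5))                      # sharp internal layer (interface-like)
        monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)    # gradient-based monitor function
        x_new = equidistribute(x, monitor)                 # nodes cluster near x = 0.5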

  1. Adaptation strategies for high order discontinuous Galerkin methods based on Tau-estimation

    NASA Astrophysics Data System (ADS)

    Kompenhans, Moritz; Rubio, Gonzalo; Ferrer, Esteban; Valero, Eusebio

    2016-02-01

    In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time converged solutions, the last two rely on non-converged solutions, which lead to faster computations. In addition, the high order method permits the spatial decoupling for the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.

  2. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, followed by differentiation of this polynomial and finally evaluation of the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid and this grid is refined locally based on wavelet analysis.
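
    The construction described in the opening sentences, interpolate a polynomial through the data, differentiate it, and evaluate at the point of interest, can be sketched directly with an algebraic polynomial on a Chebyshev grid. The wavelet-based grid and order adaptation of the paper are not reproduced; the node count and test function below are arbitrary.

        import numpy as np

        def chebyshev_nodes(n, a=-1.0, b=1.0):
            """Chebyshev points mapped to [a, b]."""
            k = np.arange(n)
            x = np.cos(np.pi * k / (n - 1))
            return 0.5 * (a + b) + 0.5 * (b - a) * x[::-1]

        def derivative_by_interpolation(x, f_vals, x_eval):
            """Differencing operator built exactly as described: fit a polynomial through
            the data, differentiate it, evaluate at the requested point."""
            coeffs = np.polyfit(x, f_vals, len(x) - 1)
            return np.polyval(np.polyder(coeffs), x_eval)

        x = chebyshev_nodes(12)
        print(derivative_by_interpolation(x, np.sin(x), 0.3), np.cos(0.3))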

  3. An h-adaptive finite element method for turbulent heat transfer

    SciTech Connect

    Carrington, David B

    2009-01-01

    A two-equation turbulence closure model (k-ω) using an h-adaptive grid technique and finite element method (FEM) has been developed to simulate low Mach number flow and heat transfer. These flows are relevant to many problems in engineering and the environmental sciences. Of particular interest in the engineering modeling area are combustion, solidification, and heat exchanger design. Flows for indoor air quality modeling and atmospheric pollution transport are typical types of environmental flows modeled with this method. The numerical method is based on a hybrid finite element model using an equal-order projection process. The model includes thermal and species transport, localized mesh refinement (h-adaptive) and Petrov-Galerkin weighting for stabilizing the advection. This work develops the continuum model of a two-equation turbulence closure method. The fractional step solution method is stated along with the h-adaptive grid method (Carrington and Pepper, 2002). Solutions are presented for 2D flow over a backward-facing step.

  4. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method

    PubMed Central

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-01-01

    In this work we investigate the possibility of applying the adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. Through comparing the gyroscope working conditions with the reference model, the adaptive control method can provide online estimation of the key parameters and the proper control strategy for the system. The digital second-order oscillators in the reference model are substituted for two phase locked loops (PLLs) to achieve a more steady amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficient. The rotation mode of the gyroscope system is considered in our work and a rotation elimination section is added to the digitalized system. Before implementing the algorithm in the hardware platform, different simulations are conducted to ensure the algorithm can meet the requirement of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed respectively and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified in a digitalized gyroscope system; the control system is realized in the digital domain with the application of a Field Programmable Gate Array (FPGA). Key structure parameters are measured and compared with the estimation results, which validates that the algorithm is feasible in the setup. Extra gyroscopes are used in repeated experiments to prove the commonality of the algorithm. PMID:26959019

  5. A Digitalized Gyroscope System Based on a Modified Adaptive Control Method.

    PubMed

    Xia, Dunzhu; Hu, Yiwei; Ni, Peizhen

    2016-01-01

    In this work we investigate the possibility of applying the adaptive control algorithm to Micro-Electro-Mechanical System (MEMS) gyroscopes. Through comparing the gyroscope working conditions with the reference model, the adaptive control method can provide online estimation of the key parameters and the proper control strategy for the system. The digital second-order oscillators in the reference model are substituted for two phase locked loops (PLLs) to achieve a more steady amplitude and frequency control. The adaptive law is modified to satisfy the condition of unequal coupling stiffness and coupling damping coefficient. The rotation mode of the gyroscope system is considered in our work and a rotation elimination section is added to the digitalized system. Before implementing the algorithm in the hardware platform, different simulations are conducted to ensure the algorithm can meet the requirement of the angular rate sensor, and some of the key adaptive law coefficients are optimized. The coupling components are detected and suppressed respectively and the Lyapunov criterion is applied to prove the stability of the system. The modified adaptive control algorithm is verified in a digitalized gyroscope system; the control system is realized in the digital domain with the application of a Field Programmable Gate Array (FPGA). Key structure parameters are measured and compared with the estimation results, which validates that the algorithm is feasible in the setup. Extra gyroscopes are used in repeated experiments to prove the commonality of the algorithm. PMID:26959019

  6. Scale-adaptive tensor algebra for local many-body methods of electronic structure theory

    SciTech Connect

    Liakh, Dmitry I

    2014-01-01

    While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).

  7. An adaptive subspace trust-region method for frequency-domain seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Huan; Li, Xiaofan; Song, Hanjie; Liu, Shaolin

    2015-05-01

    Full waveform inversion is currently considered a promising seismic imaging method for obtaining high-resolution and quantitative images of the subsurface. It is a nonlinear, ill-posed inverse problem, and the main difficulty that prevents full waveform inversion from being widely applied to real data is its sensitivity to incorrect initial models and noisy data. Local optimization methods, including Newton's method and the gradient method, can easily converge to local minima, while global optimization algorithms such as simulated annealing are computationally costly. To confront this issue, in this paper we investigate the possibility of applying the trust-region method to the full waveform inversion problem. Different from line search methods, trust-region methods force the new trial step to lie within a certain neighborhood of the current iterate. Theoretically, trust-region methods are reliable and robust, and they have very strong convergence properties. The capability of this inversion technique is tested with the synthetic Marmousi velocity model and the SEG/EAGE Salt model. Numerical examples demonstrate that the adaptive subspace trust-region method can provide solutions closer to the global minima compared to the conventional Approximate Hessian approach and the L-BFGS method, with a higher convergence rate. In addition, the match between the inverted model and the true model is still excellent even when the initial model deviates far from the true model. Inversion results with noisy data also exhibit the remarkable capability of the adaptive subspace trust-region method for low signal-to-noise data inversions. The promising numerical results suggest this adaptive subspace trust-region method is suitable for full waveform inversion, as it has stronger convergence properties and a higher convergence rate.

  8. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.

    PubMed

    Li, Zhilin; Song, Peng

    2013-06-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763

  9. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  10. Development of the Adaptive Collision Source (ACS) method for discrete ordinates

    SciTech Connect

    Walters, W.; Haghighat, A.

    2013-07-01

    We have developed a new collision source method to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This allows for an optimal use of processing power, by using a high order quadrature for the first few iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and we call it the adaptive collision source method (ACS). The ACS methodology has been implemented in the TITAN discrete ordinates code, and has shown a relative speedup of 1.5-2.5 on a test problem, for the same desired level of accuracy. (authors)

  11. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaptation cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.

  12. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  13. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.

  14. A method for online verification of adapted fields using an independent dose monitor

    SciTech Connect

    Chang, Jina; Norrlinger, Bernhard D.; Heaton, Robert K.; Jaffray, David A.; Cho, Young-Bin; Islam, Mohammad K.; Mahon, Robert

    2013-07-15

    Purpose: Clinical implementation of online adaptive radiotherapy requires generation of modified fields and a method of dosimetric verification in a short time. We present a method of treatment field modification to account for patient setup error, and an online method of verification using an independent monitoring system. Methods: The fields are modified by translating each multileaf collimator (MLC) defined aperture in the direction of the patient setup error, and magnifying to account for distance variation to the marked isocentre. A modified version of a previously reported online beam monitoring system, the integral quality monitoring (IQM) system, was investigated for validation of adapted fields. The system consists of a large area ion-chamber with a spatial gradient in electrode separation to provide a spatially sensitive signal for each beam segment, mounted below the MLC, and a calculation algorithm to predict the signal. IMRT plans of ten prostate patients have been modified in response to six randomly chosen setup errors in three orthogonal directions. Results: A total of approximately 49 beams for the modified fields were verified by the IQM system, of which 97% of measured IQM signals agree with the predicted value to within 2%. Conclusions: The modified IQM system was found to be suitable for online verification of adapted treatment fields.
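
    A toy illustration of the field-modification step described in the Methods (translate each MLC-defined aperture by the setup error and magnify for the change in distance to the marked isocentre); the geometry, the parameter names and the magnification model below are simplifications and assumptions, not the authors' implementation.

        import numpy as np

        def adapt_aperture(leaf_edges, shift_mm, sad=1000.0, delta_sid=0.0):
            """Translate each MLC-defined leaf edge by the in-plane component of the setup
            error and magnify by the change in source-to-isocentre distance. 'leaf_edges' is
            an (n_pairs, 2) array of left/right edges in mm at isocentre; all values are
            illustrative."""
            magnification = (sad + delta_sid) / sad
            return (np.asarray(leaf_edges) + shift_mm) * magnification

        aperture = np.array([[-20.0, 15.0], [-18.0, 17.0], [-15.0, 20.0]])
        print(adapt_aperture(aperture, shift_mm=2.5, delta_sid=10.0))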

  15. Vivid Motor Imagery as an Adaptation Method for Head Turns on a Short-Arm Centrifuge

    NASA Technical Reports Server (NTRS)

    Newby, N. J.; Mast, F. W.; Natapoff, A.; Paloski, W. H.

    2006-01-01

    from one another. For the perceived duration of sensations, the CG group again exhibited the least amount of adaptation. However, the rates of adaptation of the PA and the MA groups were indistinguishable, suggesting that the imagined pseudostimulus appeared to be just as effective a means of adaptation as the actual stimulus. The MA group's rate of adaptation to motion sickness symptoms was also comparable to the PA group. The use of vivid motor imagery may be an effective method for adapting to the illusory sensations and motion sickness symptoms produced by cross-coupled stimuli. For space-based AG applications, this technique may prove quite useful in retaining astronauts considered highly susceptible to motion sickness as it reduces the number of actual CCS required to attain adaptation.

  16. Eulerian Lagrangian Adaptive Fup Collocation Method for solving the conservative solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Gotovac, Hrvoje; Srzic, Veljko

    2014-05-01

    Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales and unphysical numerical oscillations (e.g., Herrera et al, 2009; Bosso et al., 2012). In this work we will present the Eulerian Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation and explicit stabilized Runge-Kutta-Chebyshev temporal integration (public domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines so that they are also compactly supported basis functions; they exactly describe algebraic polynomials and enable a multiresolution adaptive analysis (MRA). MRA is here performed via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy and a near minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones. According to our recent achievements there is no need for solving the large

  17. Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals

    SciTech Connect

    Ul'yanov, S S; Laskavyi, V N; Glova, Alina B; Polyanina, T I; Ul'yanova, O V; Fedorova, V A; Ul'yanov, A S

    2012-05-31

    The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of Balb/c inbred line after inoculation of cells of syngeneic myeloma cell line Sp.2/0 Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated; its advantages in comparison with the sLASCA method are demonstrated. It is found that the most informative characteristic, indicating the presence of a tumour, is the fractal dimension of LASCA images.

  18. Adaptation of LASCA method for diagnostics of malignant tumours in laboratory animals

    NASA Astrophysics Data System (ADS)

    Ul'yanov, S. S.; Laskavyi, V. N.; Glova, Alina B.; Polyanina, T. I.; Ul'yanova, O. V.; Fedorova, V. A.; Ul'yanov, A. S.

    2012-05-01

    The LASCA method is adapted for diagnostics of malignant neoplasms in laboratory animals. Tumours are studied in mice of Balb/c inbred line after inoculation of cells of syngeneic myeloma cell line Sp.2/0 — Ag.8. The appropriateness of using the tLASCA method in tumour investigations is substantiated; its advantages in comparison with the sLASCA method are demonstrated. It is found that the most informative characteristic, indicating the presence of a tumour, is the fractal dimension of LASCA images.

  19. A novel timestamp based adaptive clock method for circuit emulation service over packet network

    NASA Astrophysics Data System (ADS)

    Dai, Jin-you; Yu, Shao-hua

    2007-11-01

    It is necessary to transport TDM (time division multiplexing) over packet networks such as IP and Ethernet, and synchronization is a problem when carrying TDM over a packet network. Clock methods for TDM over packet networks are introduced, and a new adaptive clock method is presented. The method is a kind of timestamp-based adaptive method, but no timestamp needs to be transported over the packet network. By using the local oscillator and a counter, the timestamp information (local timestamp) related to the service clocks of the remote PE (provider edge) and the near PE can be obtained. By using a D-EWMA filter algorithm, the noise caused by the packet network can be filtered out and the useful timestamp extracted. With the timestamp and a voltage-controlled oscillator, the clock frequency of the near PE can be adjusted to match the clock frequency of the remote PE. A simulation device was designed and a test network topology set up to test and verify the method. The experimental results show that the overall performance of the new method is better than that of the ordinary buffer-based method and the ordinary timestamp-based method.
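
    A very small sketch of the adaptive clock recovery loop implied by the abstract: filter the noisy locally derived timestamp offsets and steer the local oscillator frequency from the filtered value. A plain EWMA stands in for the D-EWMA filter of the paper, and the gains, units and sample values are illustrative assumptions.

        def adaptive_clock_step(offset_samples, alpha=0.05, gain=1e-3, f_nominal=1.0):
            """Filter remote-vs-local timestamp offsets and nudge the local frequency in
            proportion to the filtered offset; returns (filtered offset, adjusted frequency)."""
            filtered = offset_samples[0]
            freq = f_nominal
            for s in offset_samples[1:]:
                filtered = alpha * s + (1.0 - alpha) * filtered   # suppress packet-delay jitter
                freq = f_nominal + gain * filtered                # steer toward the remote clock
            return filtered, freq

        print(adaptive_clock_step([0.8, 1.1, 0.9, 1.0, 1.2, 0.95]))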

  20. The Complexity of Chinese Pedagogic Discourse

    ERIC Educational Resources Information Center

    Cheng, Liang; Xu, Nan

    2011-01-01

    This is one of the commentaries on Wu's "Interpretation, autonomy, and transformation: Chinese pedagogic discourse in a cross-cultural perspective" ("JCS", 43(5), 569-590). It highlights the paper's demystification of Western pedagogic discourse and recovery of the meaning of Chinese traditional pedagogic discourse as a response to the…

  1. Coherent Vortex Simulation of weakly compressible turbulent mixing layers using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Roussel, Olivier; Schneider, Kai

    2010-03-01

    An adaptive multiresolution method based on a second-order finite volume discretization is presented for solving the three-dimensional compressible Navier-Stokes equations in Cartesian geometry. The explicit time discretization is of second order and for flux evaluation a 2-4 MacCormack scheme is used. Coherent Vortex Simulations (CVS) are performed by decomposing the flow variables into coherent and incoherent contributions. The coherent part is computed deterministically on a locally refined grid using the adaptive multiresolution method, while the influence of the incoherent part is neglected to model turbulent dissipation. The computational efficiency of this approach in terms of memory and CPU time compression is illustrated for turbulent mixing layers in the weakly compressible regime and for Reynolds numbers based on the mixing layer thickness between 50 and 200. Comparisons with direct numerical simulations allow the precision and efficiency of CVS to be assessed.

  2. H∞ Adaptive tracking control for switched systems based on an average dwell-time method

    NASA Astrophysics Data System (ADS)

    Wu, Caiyun; Zhao, Jun

    2015-10-01

    This paper investigates the H∞ state tracking model reference adaptive control (MRAC) problem for a class of switched systems using an average dwell-time method. First, a stability criterion is established for a switched reference model. Then, an adaptive controller is designed and the state tracking control problem is converted into a stability analysis. The global practical stability of the error switched system can be guaranteed under a class of switching signals characterised by an average dwell time. Consequently, sufficient conditions for the solvability of the H∞ state tracking MRAC problem are derived. An example of a highly manoeuvrable aircraft technology vehicle is given to demonstrate the feasibility and effectiveness of the proposed design method.

  3. An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods.

    PubMed

    Li, Zhilin; Song, Peng

    2012-01-01

    An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube of |φ(x,y)|≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. The AMR method obtains solutions with accuracy similar to that on a uniform fine grid while distributing the mesh more economically, thereby reducing the size of the linear system of equations. Numerical examples presented show the efficiency of the grid refinement strategy. PMID:22670155
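
    The refinement criterion itself is simple enough to sketch: cells inside the narrow tube |φ(x, y)| ≤ δ around the interface are flagged for finer Cartesian meshes. The immersed interface corrections and the multigrid solver are not shown, and the grid size and δ below are arbitrary.

        import numpy as np

        def refine_flags(phi, delta):
            """Flag grid cells for local refinement inside the tube |phi| <= delta around
            the zero level set."""
            return np.abs(phi) <= delta

        x = np.linspace(-1.0, 1.0, 65)
        X, Y = np.meshgrid(x, x)
        phi = np.sqrt(X ** 2 + Y ** 2) - 0.5      # zero level set: circle of radius 0.5
        flags = refine_flags(phi, delta=0.08)
        print(flags.sum(), "of", flags.size, "cells flagged for finer Cartesian patches")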

  4. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    PubMed Central

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and individual neighboring pixels and the spatial similarity between central pixels and their neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120

  5. An adaptive tau-leaping method for stochastic simulations of reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Padgett, Jill M. A.; Ilie, Silvana

    2016-03-01

    Stochastic modelling is critical for studying many biochemical processes in a cell, in particular when some reacting species have low population numbers. For many such cellular processes the spatial distribution of the molecular species plays a key role. The evolution of spatially heterogeneous biochemical systems with some species in low amounts is accurately described by the mesoscopic model of the Reaction-Diffusion Master Equation. The Inhomogeneous Stochastic Simulation Algorithm provides an exact strategy to numerically solve this model, but it is computationally very expensive on realistic applications. We propose a novel adaptive time-stepping scheme for the tau-leaping method for approximating the solution of the Reaction-Diffusion Master Equation. This technique combines effective strategies for variable time-stepping with path preservation to reduce the computational cost, while maintaining the desired accuracy. The numerical tests on various examples arising in applications show the improved efficiency achieved by the new adaptive method.
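
    For background, the generic tau-selection bound commonly used by adaptive tau-leaping (limit the expected relative change of every species to a fraction ε per leap, after Cao, Gillespie and Petzold) can be sketched as follows; the paper's reaction-diffusion-specific time-stepping and path-preservation strategy go beyond this and are not reproduced, and the example system is made up.

        import numpy as np

        def select_tau(x, v, propensities, eps=0.03):
            """Pick the largest tau for which no species is expected to change, in mean or
            standard deviation, by more than a fraction eps of its population."""
            x = np.asarray(x, dtype=float)
            mu = v @ propensities                  # expected net change rate per species
            sigma2 = (v ** 2) @ propensities       # variance rate per species
            bound = np.maximum(eps * x, 1.0)
            tau = np.minimum(bound / (np.abs(mu) + 1e-300), bound ** 2 / (sigma2 + 1e-300))
            return float(tau.min())

        # Two species, two reactions: A -> B and B -> A (columns are state-change vectors).
        v = np.array([[-1,  1],
                      [ 1, -1]])
        print(select_tau(x=[100, 40], v=v, propensities=np.array([5.0, 2.0])))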

  6. Advanced adaptive computational methods for Navier-Stokes simulations in rotorcraft aerodynamics

    NASA Technical Reports Server (NTRS)

    Stowers, S. T.; Bass, J. M.; Oden, J. T.

    1993-01-01

    A phase 2 research and development effort was conducted in the area of transonic, compressible, inviscid flows, with the ultimate goal of numerically modeling the complex flows inherent in advanced helicopter blade designs. The algorithms and methodologies are therefore classified as adaptive methods, which use error estimation techniques to approximate the local numerical error and automatically refine or unrefine the mesh so as to deliver a given level of accuracy. The result is a scheme which attempts to produce the best possible results with the least number of grid points, degrees of freedom, and operations. These types of schemes automatically locate and resolve shocks, shear layers, and other flow details to an accuracy level specified by the user of the code. The phase 1 work involved a feasibility study of h-adaptive methods for steady viscous flows, with emphasis on accurate simulation of vortex initiation, migration, and interaction. The phase 2 effort focused on extending these algorithms and methodologies to a three-dimensional topology.

  7. Research on a pulmonary nodule segmentation method combining fast self-adaptive FCM and classification.

    PubMed

    Liu, Hui; Zhang, Cai-Ming; Su, Zhi-Yuan; Wang, Kai; Deng, Kai

    2015-01-01

    The key problem of computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues fast and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and individual neighboring pixels and the spatial similarity between central pixels and their neighborhood, and it effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms. PMID:25945120

  8. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    SciTech Connect

    Paganelli, Chiara; Peroni, Marta

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  9. Pedagogic Research into Singularities: Case-Studies, Probes, and Curriculum Innovations.

    ERIC Educational Resources Information Center

    Bassey, Michael

    1983-01-01

    Research using educational data includes both disciplinary and pedagogic research. Concentrating on pedagogic research into singular cases, rather than into generalities, would improve the quality of education. Such research would mean more emphasis on case studies, probes (an account of a method for analyzing practice to improve it), and curriculum…

  10. A Comparative Evaluation of E-Learning and Traditional Pedagogical Process Elements

    ERIC Educational Resources Information Center

    Vavpotic, Damjan; Zvanut, Bostjan; Trobec, Irena

    2013-01-01

    In modern pedagogical processes various teaching methods and approaches (elements of the pedagogical process -- EPPs) are used ranging from traditional ones (e.g., lectures, books) to more recent ones (e.g., e-discussion boards, e-quizzes). Different models for evaluation of the appropriateness of EPPs have been proposed in the past. However, the…

  11. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2008-10-07

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.

  12. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2011-10-04

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.

  13. A Lagrangian-Eulerian finite element method with adaptive gridding for advection-dispersion problems

    SciTech Connect

    Ijiri, Y.; Karasaki, K.

    1994-02-01

    In the present paper, a Lagrangian-Eulerian finite element method with adaptive gridding for solving advection-dispersion equations is described. The code creates new grid points in the vicinity of sharp fronts at every time step in order to reduce numerical dispersion. The code yields quite accurate solutions for a wide range of mesh Peclet numbers and for mesh Courant numbers well in excess of 1.
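
    As a hedged illustration of the adaptive-gridding idea (not the paper's scheme), the sketch below bisects every cell of a 1D grid whose local concentration gradient exceeds a threshold, which is the simplest way to add points in the vicinity of a sharp front.

```python
# Minimal 1D sketch of front-adaptive gridding: insert extra grid points
# wherever the local concentration gradient exceeds a threshold, so that
# sharp fronts are better resolved. The threshold and one-level refinement
# are illustrative choices, not the scheme used in the paper.
import numpy as np

def refine_near_fronts(x, c, grad_threshold):
    """Return a refined grid that bisects every cell with a steep gradient."""
    grad = np.abs(np.diff(c) / np.diff(x))
    new_points = [0.5 * (x[i] + x[i + 1]) for i in np.where(grad > grad_threshold)[0]]
    return np.sort(np.concatenate([x, new_points]))

# usage:
# x = np.linspace(0.0, 1.0, 41)
# c = 0.5 * (1 - np.tanh((x - 0.5) / 0.02))   # sharp advected front
# x_refined = refine_near_fronts(x, c, grad_threshold=5.0)
```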

  14. Laying the Groundwork for NCLEX Success: An Exploration of Adaptive Quizzing as an Examination Preparation Method.

    PubMed

    Cox-Davenport, Rebecca A; Phelan, Julia C

    2015-05-01

    First-time NCLEX-RN pass rates are an important indicator of nursing school success and quality. Nursing schools use different methods to anticipate NCLEX outcomes and help prevent student failure and possible threat to accreditation. This study evaluated the impact of a shift in NCLEX preparation policy at a BSN program in the southeast United States. The policy shifted from the use of predictor score thresholds to determine graduation eligibility to a more proactive remediation strategy involving adaptive quizzing. A descriptive correlational design evaluated the impact of an adaptive quizzing system designed to give students ongoing active practice and feedback and explored the relationship between predictor examinations and NCLEX success. Data from student usage of the system as well as scores on predictor tests were collected for three student cohorts. Results revealed a positive correlation between adaptive quizzing system usage and content mastery. Two of the 69 students in the sample did not pass the NCLEX. With so few students failing the NCLEX, predictability of any course variables could not be determined. The power of predictor examinations to predict NCLEX failure could also not be supported. The most consistent factor among students, however, was their content mastery level within the adaptive quizzing system. Implications of these findings are discussed. PMID:25851560

  15. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

    Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate the early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails, where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore, a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and needs only one parameter (α), which depends only slightly on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5.
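
    The sketch below contrasts, under illustrative assumptions, a fixed-bandwidth (global) Gaussian KDE with an Abramson-style locally adaptive KDE whose bandwidth grows where the pilot density is low; the sensitivity exponent used here is a stand-in and is not the paper's α parameter.

```python
# Hedged sketch of the two KDE flavours discussed above. The Abramson-style
# local bandwidth and the `sensitivity` exponent are illustrative stand-ins
# for the paper's combined estimator, not its actual algorithm.
import numpy as np

def global_kde(t_grid, arrivals, h):
    """Fixed-bandwidth Gaussian KDE of particle arrival times."""
    z = (t_grid[:, None] - arrivals[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (arrivals.size * h * np.sqrt(2 * np.pi))

def adaptive_kde(t_grid, arrivals, h0, sensitivity=0.5):
    """Locally adaptive KDE: bandwidth grows where the pilot density is low."""
    pilot = global_kde(arrivals, arrivals, h0) + 1e-300
    geo_mean = np.exp(np.mean(np.log(pilot)))
    h_i = h0 * (pilot / geo_mean) ** (-sensitivity)   # Abramson-type local bandwidths
    z = (t_grid[:, None] - arrivals[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * z**2) / (h_i[None, :] * np.sqrt(2 * np.pi))
    return k.sum(axis=1) / arrivals.size

# usage: btc_tail = adaptive_kde(np.logspace(-1, 3, 400), arrival_times, h0=0.1)
```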

  16. Adaptability and stability of genotypes of sweet sorghum by GGEBiplot and Toler methods.

    PubMed

    de Figueiredo, U J; Nunes, J A R; da C Parrella, R A; Souza, E D; da Silva, A R; Emygdio, B M; Machado, J R A; Tardin, F D

    2015-01-01

    Sweet sorghum has considerable potential for ethanol and energy production. The crop is adaptable and can be grown under a wide range of cultivation conditions in marginal areas; however, studies of phenotypic stability are lacking under tropical conditions. Various methods can be used to assess the stability of the crop. Some of these methods generate the same basic information, whereas others provide additional information on genotype x environment (G x E) interactions and/or a description of the genotypes and environments. In this study, we evaluated the complementarity of two methods, GGEBiplot and Toler, with the aim of achieving more detailed information on G x E interactions and their implications for selection of sweet sorghum genotypes. We used data from 25 sorghum genotypes grown in different environments and evaluated the following traits: flowering (FLOW), green mass yield (GMY), total soluble solids (TSS), and tons of Brix per hectare (TBH). Significant G x E interactions were found for all traits. The most stable genotypes identified with the GGEBiplot method were CMSXS643 for FLOW, CMSXS644 and CMSXS647 for GMY, CMSXS646 and CMSXS637 for TSS, and BRS511 and CMSXS647 for TBH. Especially for TBH, the genotype BRS511 was classified as doubly desirable by the Toler method; however, unlike the result of the GGEBiplot method, the genotype CMSXS647 was found to be doubly undesirable. The two analytical methods were complementary and enabled a more reliable identification of adapted and stable genotypes. PMID:26400352

  17. Adaptive non-uniformity correction method based on temperature for infrared detector array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijie; Yue, Song; Hong, Pu; Jia, Guowei; Lei, Bo

    2013-09-01

    The existence of non-uniformities in the responsivity of the element array is a severe problem typical of common infrared detectors. These non-uniformities result in a "curtain"-like fixed pattern noise (FPN) that appears in the image. Some random noise can be suppressed by equalization-type methods, but fixed pattern noise can only be removed by a non-uniformity correction method. The non-uniformities of a detector array arise from the combined effects of the infrared detector array, the readout circuit, semiconductor device performance, the amplifier circuit, and the optical system. Conventional linear correction techniques require costly recalibration due to detector drift or changes in temperature. Therefore, an adaptive non-uniformity correction method is needed to solve this problem. Many factors, including detector characteristics and varying environmental conditions, are considered in analyzing the causes of detector drift, and several experiments are designed to verify this analysis. Based on these experiments, an adaptive non-uniformity correction method is put forward in this paper. The strength of this method lies in its simplicity and low computational complexity. Extensive experimental results demonstrate that the proposed scheme overcomes the disadvantages of traditional non-uniformity correction methods.
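
    A hedged sketch of one generic temperature-indexed correction scheme is shown below: per-pixel gain/offset maps calibrated at a few reference temperatures are interpolated to the current focal-plane temperature at run time. This is a plausible illustration only, not the algorithm proposed in the paper.

```python
# Hedged sketch of a temperature-indexed non-uniformity correction (NUC).
# Per-pixel gain/offset maps are assumed to have been calibrated at a few
# reference temperatures; at run time the maps are interpolated to the
# current FPA temperature. Generic scheme, not the paper's exact algorithm.
import numpy as np

class TemperatureNUC:
    def __init__(self, cal_temps, gains, offsets):
        # cal_temps: (K,), gains/offsets: (K, H, W) calibration maps
        order = np.argsort(cal_temps)
        self.t = np.asarray(cal_temps, float)[order]
        self.g = np.asarray(gains, float)[order]
        self.o = np.asarray(offsets, float)[order]

    def correct(self, raw_frame, fpa_temp):
        # linear interpolation of gain/offset maps in temperature
        i = np.clip(np.searchsorted(self.t, fpa_temp), 1, len(self.t) - 1)
        w = (fpa_temp - self.t[i - 1]) / (self.t[i] - self.t[i - 1])
        gain = (1 - w) * self.g[i - 1] + w * self.g[i]
        offset = (1 - w) * self.o[i - 1] + w * self.o[i]
        return gain * raw_frame + offset

# usage: corrected = TemperatureNUC(temps, gain_maps, offset_maps).correct(frame, 34.7)
```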

  18. Shack-Hartmann wavefront sensor with large dynamic range by adaptive spot search method.

    PubMed

    Shinto, Hironobu; Saita, Yusuke; Nomura, Takanori

    2016-07-10

    A Shack-Hartmann wavefront sensor (SHWFS) that consists of a microlens array and an image sensor has been used to measure the wavefront aberrations of human eyes. However, a conventional SHWFS has a finite dynamic range that depends on the diameter of each microlens, and the dynamic range cannot be easily expanded without a decrease in spatial resolution. In this study, an adaptive spot search method to expand the dynamic range of an SHWFS is proposed. In the proposed method, spots are searched for with the help of their approximate displacements, measured with low spatial resolution and large dynamic range. With the proposed method, a wavefront can be correctly measured even if a spot moves beyond its nominal detection area. The adaptive spot search method is realized by using a special microlens array that generates both spots and discriminable patterns. The proposed method enables expanding the dynamic range of an SHWFS with a single shot and short processing time. The performance of the proposed method is compared with that of a conventional SHWFS by optical experiments. Furthermore, the dynamic range of the proposed method is quantitatively evaluated by numerical simulations. PMID:27409319
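
    The following sketch illustrates a coarse-to-fine spot search of the general kind described above: an approximate displacement shifts each lenslet's search window, and the spot is refined by an intensity-weighted centroid. The window size and background handling are assumptions, and the discriminable-pattern decoding is not reproduced.

```python
# Illustrative coarse-to-fine spot search (not the authors' exact method):
# a coarse displacement estimate shifts each lenslet's search window, and
# the spot position is refined by an intensity-weighted centroid inside it.
import numpy as np

def refine_spot(image, ref_center, coarse_shift, half_win=8):
    """Centroid of the spot near ref_center + coarse_shift (row, col)."""
    cy, cx = np.round(np.asarray(ref_center) + np.asarray(coarse_shift)).astype(int)
    y0, y1 = max(cy - half_win, 0), min(cy + half_win + 1, image.shape[0])
    x0, x1 = max(cx - half_win, 0), min(cx + half_win + 1, image.shape[1])
    win = image[y0:y1, x0:x1].astype(float)
    win -= win.min()                      # crude background removal
    total = win.sum()
    if total == 0:
        return float(cy), float(cx)
    yy, xx = np.mgrid[y0:y1, x0:x1]
    return (yy * win).sum() / total, (xx * win).sum() / total

# usage: spot = refine_spot(shwfs_frame, ref_center=(120, 260), coarse_shift=(14, -3))
```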

  19. The influence of an intensive in-service workshop on pedagogical content knowledge growth among novice chemical demonstrators

    NASA Astrophysics Data System (ADS)

    Clermont, Christian P.; Krajcik, Joseph S.; Borko, Hilda

    This study examined the influence of an intensive chemical demonstration workshop on fostering pedagogical content knowledge growth among science teachers identified as novice chemical demonstrators. The two-week summer workshop was designed around four training elements considered important to effective teacher in-servicing: theory, modeling, practice, and feedback. Clinical interviews served to probe various aspects of novice demonstrators' pedagogical content knowledge prior to and after the workshop. The interview protocols were analyzed using the methods of taxonomic, componential, and theme analysis. Differences in pre- and postworkshop clinical interview responses suggested growth in novices' representational and adaptational repertoires for demonstrating fundamental topics in chemistry. This growth was reflected in the increased number of chemical demonstrations and demonstration variations on each of the target chemical concepts that the novice demonstrators discussed after the in-service intervention. Their interview responses also suggested an increased awareness of the complexity of several chemical demonstrations, how these complexities could interfere with learning, and how simplified variations of the chemical demonstrations could promote science concept understanding. The research findings suggest that science teachers' pedagogical content knowledge in chemistry can be enhanced through intensive, short-term in-service programs.

  20. Discerning Pedagogical Quality in Preschool

    ERIC Educational Resources Information Center

    Sheridan, Sonja

    2009-01-01

    The aim of this article is to initiate a change of view on quality that goes beyond assumed dichotomies of subjectivity and objectivity. In the view presented here, pedagogical quality is seen as an educational phenomenon of "sustainable dynamism," that is a phenomenon that has structural characteristics and is culturally sensitive. The underlying…

  1. Dyslexia, Learning, and Pedagogical Neuroscience

    ERIC Educational Resources Information Center

    Fawcett, Angela J; Nicolson, Roderick I

    2007-01-01

    The explosion in neuroscientific knowledge has profound implications for education, and we advocate the establishment of the new discipline of "pedagogical neuroscience" designed to combine psychological, medical, and educational perspectives. We propose that specific learning disabilities provide the crucible in which the discipline may be…

  2. Adaptive f-k deghosting method based on non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Lu, Wenkai

    2016-04-01

    For conventional horizontal towed streamer data, the f-k deghosting method is widely used to remove receiver ghosts. In the traditional f-k deghosting method, the depth of the streamer and the sea-surface reflection coefficient are two key ghost parameters. In general, for one seismic line, these two parameters are fixed for all shot gathers and given by the user. In practice, these two parameters often vary during acquisition because of rough sea conditions. This paper proposes an automatic method to adaptively obtain these two ghost parameters for every shot gather. Since the proposed method is based on the non-Gaussianity of the deghosting result, it is important to choose a proper non-Gaussian criterion to ensure high accuracy of the parameter estimation. We evaluate six non-Gaussian criteria in a synthetic experiment. The conclusion of our experiment is expected to provide a reference for choosing the most appropriate criterion. We apply the proposed method to a 2D real field example. Experimental results show that the optimal parameters vary among shot gathers and validate the effectiveness of the parameter estimation process. Moreover, although this method ignores parameter variation within one shot, the adaptive deghosting results show improvements when compared with the deghosting results obtained using constant parameters for the whole line.
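
    As a hedged illustration of the parameter-estimation idea, the sketch below deghosts a single trace with a 1D vertical-incidence ghost operator and grid-searches the streamer depth and reflection coefficient that maximize excess kurtosis (one possible non-Gaussianity criterion) of the result; the full f-k treatment and the other criteria are not reproduced.

```python
# Hedged sketch: pick the two ghost parameters (streamer depth d, surface
# reflection coefficient r) by maximizing the excess kurtosis of the
# deghosted trace. A 1D vertical-incidence ghost model and a simple grid
# search stand in for the paper's full f-k formulation.
import numpy as np
from itertools import product

def deghost_trace(trace, dt, depth, r, v_water=1500.0, eps=0.05):
    tau = 2.0 * depth / v_water                      # receiver ghost delay (s)
    freqs = np.fft.rfftfreq(trace.size, dt)
    ghost_op = 1.0 + r * np.exp(-2j * np.pi * freqs * tau)
    spec = np.fft.rfft(trace) * np.conj(ghost_op) / (np.abs(ghost_op) ** 2 + eps)
    return np.fft.irfft(spec, n=trace.size)

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-30) - 3.0

def estimate_ghost_params(trace, dt, depths, refl_coeffs):
    """Grid search for (depth, r) making the deghosted trace most non-Gaussian."""
    return max(product(depths, refl_coeffs),
               key=lambda p: kurtosis(deghost_trace(trace, dt, *p)))

# usage: d_hat, r_hat = estimate_ghost_params(shot_trace, dt=0.004,
#                                             depths=np.arange(6, 15, 0.5),
#                                             refl_coeffs=np.arange(-1.0, -0.7, 0.05))
```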

  3. A novel adaptive compression method for hyperspectral images by using EDT and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Ghamisi, Pedram; Kumar, Lalit

    2012-01-01

    Hyperspectral sensors generate useful information about climate and the earth's surface in numerous contiguous narrow spectral bands, and are widely used in resource management, agriculture, environmental monitoring, etc. Compression of hyperspectral data helps in long-term storage and transmission systems. Lossless compression is preferred for high-detail data such as hyperspectral data. Because of the high redundancy in neighboring spectral bands and the potential for higher compression ratios, adaptive coding methods seem well suited for hyperspectral data. This paper introduces two new compression methods. One of these methods is adaptive and powerful for the compression of hyperspectral data; it is based on separating bands with different characteristics using the histogram and Binary Particle Swarm Optimization (BPSO), and compressing each group in a different manner. The proposed methods improve on the compression ratio of the JPEG standards and save storage space and transmission bandwidth. The proposed methods are applied to different test cases, and the results are evaluated and compared with some other compression methods, such as lossless JPEG and JPEG2000.

  4. Hybrid numerical method with adaptive overlapping meshes for solving nonstationary problems in continuum mechanics

    NASA Astrophysics Data System (ADS)

    Burago, N. G.; Nikitin, I. S.; Yakushev, V. L.

    2016-06-01

    Techniques that improve the accuracy of numerical solutions and reduce their computational costs are discussed as applied to continuum mechanics problems with complex time-varying geometry. The approach combines shock-capturing computations with the following methods: (1) overlapping meshes for specifying complex geometry; (2) elastic arbitrarily moving adaptive meshes for minimizing the approximation errors near shock waves, boundary layers, contact discontinuities, and moving boundaries; (3) matrix-free implementation of efficient iterative and explicit-implicit finite element schemes; (4) balancing viscosity (version of the stabilized Petrov-Galerkin method); (5) exponential adjustment of physical viscosity coefficients; and (6) stepwise correction of solutions for providing their monotonicity and conservativeness.

  5. An adaptive finite element method for convective heat transfer with variable fluid properties

    NASA Astrophysics Data System (ADS)

    Pelletier, Dominique; Ilinca, Florin; Hetu, Jean-Francois

    1993-07-01

    This paper presents an adaptive finite element method based on remeshing to solve incompressible viscous flow problems for which fluid properties present a strong temperature dependence. Solutions are obtained in primitive variables using a highly accurate finite element approximation on unstructured grids. Two general purpose error estimators, that take into account fluid properties variations, are presented. The methodology is applied to a problem of practical interest: the thermal convection of corn syrup in an enclosure with localized heating. Predictions are in good agreement with experimental measurements. The method leads to improved accuracy and reliability of finite element predictions.

  6. An adaptive mesh method for phase-field simulation of alloy solidification in three dimensions

    NASA Astrophysics Data System (ADS)

    Bollada, P. C.; Jimack, P. K.; Mullis, A. M.

    2015-06-01

    We present our computational method for binary alloy solidification, which takes advantage of high performance computing where up to 1024 cores are employed. Much of the simulation is possible at a sufficiently fine resolution on a modern 12-core PC; the 1024-core simulations are only necessary for very mature dendrites and for convergence testing, where high resolution puts extreme demands on memory. In outline, the method uses implicit time stepping in conjunction with an iterative solver, adaptive meshing and a scheme for dividing the work load across processors. We include three-dimensional results for a Lewis number of 100 and a snapshot of a mature dendrite for a Lewis number of 40.

  7. Development of a Godunov method for Maxwell's equations with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Barbas, Alfonso; Velarde, Pedro

    2015-11-01

    In this paper we present a second-order 3D method for Maxwell's equations based on a Godunov scheme with Adaptive Mesh Refinement (AMR). In order to achieve this, we apply a limiter, based on a decomposition into characteristic fields, that better preserves extrema and boundary conditions. Although the resulting method is more complex, simplifications in the boundary conditions make it competitive with FDTD in computation time and accuracy. AMR allows us to simulate systems with a sharp step in material properties with negligible rebounds, and also large domains with accuracy at small wavelengths.

  8. Model reference adaptive control in fractional order systems using discrete-time approximation methods

    NASA Astrophysics Data System (ADS)

    Abedini, Mohammad; Nojoumian, Mohammad Ali; Salarieh, Hassan; Meghdari, Ali

    2015-08-01

    In this paper, model reference control of a fractional order system has been discussed. In order to control the fractional order plant, discrete-time approximation methods have been applied. The plant and reference model are discretized by the Grünwald-Letnikov definition of the fractional order derivative using the "Short Memory Principle". Unknown parameters of the fractional order system appear in the discrete-time approximate model as combinations of parameters of the main system. The discrete-time MRAC via RLS identification is modified to estimate the parameters and control the fractional order plant. Numerical results show the effectiveness of the proposed method of model reference adaptive control.
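
    The sketch below shows only the discretization ingredient named above: a Grünwald-Letnikov approximation of a fractional derivative truncated by the short memory principle. The memory length is an illustrative choice, and the MRAC/RLS layer is not reproduced.

```python
# Sketch of a Grünwald-Letnikov (GL) fractional derivative approximation with
# the "short memory principle": only the most recent `memory_len` samples
# contribute. Illustrative only; the paper's control layer is not shown.
import numpy as np

def gl_weights(alpha, length):
    """Recursive GL binomial weights w_k = (-1)^k * C(alpha, k)."""
    w = np.empty(length)
    w[0] = 1.0
    for k in range(1, length):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(x, alpha, dt, memory_len=200):
    """Approximate the alpha-order derivative of a sampled signal x."""
    w = gl_weights(alpha, memory_len)
    d = np.zeros_like(x, dtype=float)
    for n in range(x.size):
        L = min(n + 1, memory_len)           # short memory truncation
        d[n] = np.dot(w[:L], x[n::-1][:L]) / dt**alpha
    return d

# usage: d_half = gl_derivative(np.sin(np.linspace(0, 10, 1000)), alpha=0.5, dt=0.01)
```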

  9. Teaching Analytical Method Development in an Undergraduate Instrumental Analysis Course

    ERIC Educational Resources Information Center

    Lanigan, Katherine C.

    2008-01-01

    Method development and assessment, central components of carrying out chemical research, require problem-solving skills. This article describes a pedagogical approach for teaching these skills through the adaptation of published experiments and application of group-meeting style discussions to the curriculum of an undergraduate instrumental…

  10. An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments

    PubMed Central

    Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui

    2016-01-01

    As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is easily influenced with frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, PLLs’ tracking performance can be improved. However, for harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has some limitations to improve the tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time has been proposed. Through theoretical analysis, the relation between INS-aided PLL phase tracking error and carrier to noise density ratio (C/N0), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time has been built. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and integrated GNSS/INS navigation performance. For harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50% and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods. PMID:26805853

  11. An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.

    PubMed

    Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui

    2016-01-01

    As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is easily influenced with frequent cycle slips and loss of lock as a result of higher vehicle dynamics and lower signal-to-noise ratios. With inertial navigation system (INS) aid, PLLs' tracking performance can be improved. However, for harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has some limitations to improve the tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time has been proposed. Through theoretical analysis, the relation between INS-aided PLL phase tracking error and carrier to noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time has been built. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and integrated GNSS/INS navigation performance. For harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50% and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods. PMID:26805853
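
    A hedged sketch of the parameter-selection idea is given below: a textbook PLL error budget (thermal noise plus dynamic stress for a second-order loop) is minimized over candidate noise bandwidths and coherent integration times. The error model and constants come from standard GNSS references, not from the paper's own derivation.

```python
# Hedged sketch: choose PLL noise bandwidth Bn and coherent integration time T
# by minimizing a textbook 1-sigma tracking-error budget. Formulas follow
# standard GNSS receiver texts (2nd-order loop), not the paper's relations.
import numpy as np
from itertools import product

def pll_error_deg(bn_hz, t_s, cn0_dbhz, doppler_rate_hz_s):
    cn0 = 10.0 ** (cn0_dbhz / 10.0)
    # thermal noise jitter (degrees)
    sigma_t = (360.0 / (2 * np.pi)) * np.sqrt(bn_hz / cn0 * (1 + 1 / (2 * t_s * cn0)))
    omega0 = bn_hz / 0.53                              # 2nd-order loop natural frequency
    theta_e = 360.0 * doppler_rate_hz_s / omega0**2    # dynamic stress error (degrees)
    return sigma_t + theta_e / 3.0                     # rule-of-thumb 1-sigma budget

def choose_loop_params(cn0_dbhz, doppler_rate_hz_s,
                       bandwidths=np.arange(2.0, 20.0, 0.5),
                       integ_times=(0.001, 0.002, 0.005, 0.010, 0.020)):
    """Return (Bn, T) minimizing the 1-sigma tracking error for the given conditions."""
    return min(product(bandwidths, integ_times),
               key=lambda p: pll_error_deg(p[0], p[1], cn0_dbhz, doppler_rate_hz_s))

# usage: bn, t = choose_loop_params(cn0_dbhz=30.0, doppler_rate_hz_s=50.0)
```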

  12. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems

    NASA Astrophysics Data System (ADS)

    Lee, W. H.; Kim, T.-S.; Cho, M. H.; Ahn, Y. B.; Lee, S. Y.

    2006-12-01

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.

  13. Coherent Vortex Simulation (CVS) of compressible turbulent mixing layers using adaptive multiresolution methods

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Roussel, Olivier; Farge, Marie

    2007-11-01

    Coherent Vortex Simulation is based on the wavelet decomposition of the flow into coherent and incoherent components. An adaptive multiresolution method using second-order finite volumes with explicit time discretization, a 2-4 MacCormack scheme, allows an efficient computation of the coherent flow on a dynamically adapted grid. Neglecting the influence of the incoherent background models turbulent dissipation. We present CVS computations of a three-dimensional compressible time-developing mixing layer. We show the speed-up in CPU time with respect to DNS and the memory reduction obtained thanks to dynamical octree data structures. The impact of different filtering strategies is discussed, and it is found that isotropic wavelet thresholding of the Favre-averaged gradient of the momentum yields the most effective results.

  14. Encoding and simulation of daily rainfall records via adaptations of the fractal multifractal method

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Puente, C. E.; Sivakumar, B.; Cortis, A.

    2015-12-01

    A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted to encode and simulate daily rainfall records exhibiting noticeable intermittency. Using data sets gathered at Laikakota in Bolivia and Tinkham in Washington State, USA, it is demonstrated that the adapted FM approach can, within the limits of accuracy of the measured sets and using only a few geometric parameters, encode and simulate the erratic rainfall records reasonably well. The FM procedure not only preserves the statistical attributes of the records, such as the histogram, entropy function and distribution of zeroes, but also captures the overall texture inherent in the rather complex intermittent sets. As such, the FM deterministic representations may be used to supplement stochastic frameworks for data coding and simulation.

  15. Pulse front adaptive optics: a new method for control of ultrashort laser pulses.

    PubMed

    Sun, Bangshan; Salter, Patrick S; Booth, Martin J

    2015-07-27

    Ultrafast lasers enable a wide range of physics research, and the manipulation of short pulses is a critical part of the ultrafast tool kit. Current methods of laser pulse shaping are usually considered separately in either the spatial or the temporal domain, but laser pulses are complex entities existing in four dimensions, so full freedom of manipulation requires advanced forms of spatiotemporal control. We demonstrate that, through a combination of adaptable diffractive and reflective optical elements - a liquid crystal spatial light modulator (SLM) and a deformable mirror (DM) - decoupled spatial control over the pulse front (temporal group delay) and the phase front of an ultrashort pulse can be achieved. Pulse front modulation was confirmed through autocorrelation measurements. This new adaptive optics technique, enabling in principle arbitrary shaping of the pulse front for the first time, promises to offer a further level of control for ultrafast lasers. PMID:26367595

  16. Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method

    NASA Astrophysics Data System (ADS)

    Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.

    2014-09-01

    SPH simulations are usually performed with a uniform particle distribution. New techniques have recently been proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. In addition, this new procedure allows higher resolutions in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as often encountered in adaptive mesh refinement techniques in mesh-based methods.

  17. New Adaptive Method for IQ Imbalance Compensation of Quadrature Modulators in Predistortion Systems

    NASA Astrophysics Data System (ADS)

    Zareian, Hassan; Vakili, Vahid Tabataba

    2009-12-01

    Imperfections in quadrature modulators (QMs), such as inphase and quadrature (IQ) imbalance, can severely impact the performance of power amplifier (PA) linearization systems, in particular in adaptive digital predistorters (PDs). In this paper, we first analyze the effect of IQ imbalance on the performance of a memory orthogonal polynomials predistorter (MOP PD), and then we propose a new adaptive algorithm to estimate and compensate the unknown IQ imbalance in QM. Unlike previous compensation techniques, the proposed method was capable of online IQ imbalance compensation with faster convergence, and no special calibration or training signals were needed. The effectiveness of the proposed IQ imbalance compensator was validated by simulations. The results clearly show the performance of the MOP PD to be enhanced significantly by adding the proposed IQ imbalance compensator.

  18. The stochastic control of the F-8C aircraft using the Multiple Model Adaptive Control (MMAC) method

    NASA Technical Reports Server (NTRS)

    Athans, M.; Dunn, K. P.; Greene, E. S.; Lee, W. H.; Sandel, N. R., Jr.

    1975-01-01

    The purpose of this paper is to summarize results obtained for the adaptive control of the F-8C aircraft using the so-called Multiple Model Adaptive Control method. The discussion includes the selection of the performance criteria for both the lateral and the longitudinal dynamics, the design of the Kalman filters for different flight conditions, the 'identification' aspects of the design using hypothesis testing ideas, and the performance of the closed loop adaptive system.

  19. The construction process of pedagogical knowledge among nursing professors.

    PubMed

    Backes, Vânia Marli Schubert; Moyá, Jose Luis Medina; do Prado, Marta Lenise

    2011-01-01

    Didactic knowledge about contents is constructed through an idiosyncratic synthesis between knowledge about the subject area, students' general pedagogical knowledge and the teacher's biography. This study aimed to understand the construction process and the sources of Pedagogical Content Knowledge, as well as to analyze its manifestations and variations in interactive teaching by teachers whom the students considered competent. Data collection involved teachers from an undergraduate nursing program in the South of Brazil, through non-participant observation and semi-structured interviews. Data were analyzed using the constant comparison method. The results disclose the need for initial education to cover pedagogical aspects for nurses; to treat permanent education as fundamental in view of the complexity of contents and teaching; and to use mentoring/monitoring and to value learning with experienced teachers, with a view to developing quality teaching. PMID:21584391

  20. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689
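
    The sketch below gives one hedged reading of the approach: an Abramson-style variable-bandwidth KDE of head-capsule widths, with the local minima between modes taken as candidate instar boundaries. The bandwidth rule and pilot estimate are illustrative, not the authors' selector.

```python
# Hedged sketch of separating instars with a variable-bandwidth KDE: build an
# Abramson-style adaptive density of the head-capsule widths, then take the
# valleys between modes as candidate instar boundaries. Illustrative only.
import numpy as np
from scipy.signal import argrelmin

def adaptive_density(grid, samples, h0):
    pilot = np.exp(-0.5 * ((samples[:, None] - samples[None, :]) / h0) ** 2)
    pilot = pilot.sum(axis=1) / (samples.size * h0 * np.sqrt(2 * np.pi)) + 1e-300
    h_i = h0 * np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)   # Abramson local bandwidths
    z = (grid[:, None] - samples[None, :]) / h_i[None, :]
    k = np.exp(-0.5 * z**2) / (h_i[None, :] * np.sqrt(2 * np.pi))
    return k.sum(axis=1) / samples.size

def instar_boundaries(head_widths, h0=0.02, n_grid=512):
    samples = np.asarray(head_widths, float)
    grid = np.linspace(samples.min(), samples.max(), n_grid)
    dens = adaptive_density(grid, samples, h0)
    return grid[argrelmin(dens)[0]]     # valley positions = candidate instar splits

# usage: splits = instar_boundaries(head_capsule_widths_mm)
```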

  1. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGESBeta

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the newmore » technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.« less

  2. Locomotor adaptation to a powered ankle-foot orthosis depends on control method

    PubMed Central

    Cain, Stephen M; Gordon, Keith E; Ferris, Daniel P

    2007-01-01

    Background We studied human locomotor adaptation to powered ankle-foot orthoses with the intent of identifying differences between two different orthosis control methods. The first orthosis control method used a footswitch to provide bang-bang control (a kinematic control) and the second orthosis control method used a proportional myoelectric signal from the soleus (a physiological control). Both controllers activated an artificial pneumatic muscle providing plantar flexion torque. Methods Subjects walked on a treadmill for two thirty-minute sessions spaced three days apart under either footswitch control (n = 6) or myoelectric control (n = 6). We recorded lower limb electromyography (EMG), joint kinematics, and orthosis kinetics. We compared stance phase EMG amplitudes, correlation of joint angle patterns, and mechanical work performed by the powered orthosis between the two controllers over time. Results During steady state at the end of the second session, subjects using proportional myoelectric control had much lower soleus and gastrocnemius activation than the subjects using footswitch control. The substantial decrease in triceps surae recruitment allowed the proportional myoelectric control subjects to walk with ankle kinematics close to normal and reduce negative work performed by the orthosis. The footswitch control subjects walked with substantially perturbed ankle kinematics and performed more negative work with the orthosis. Conclusion These results provide evidence that the choice of orthosis control method can greatly alter how humans adapt to powered orthosis assistance during walking. Specifically, proportional myoelectric control results in larger reductions in muscle activation and gait kinematics more similar to normal compared to footswitch control. PMID:18154649

  3. Adaptive circle-ellipse fitting method for estimating tree diameter based on single terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Bu, Guochao; Wang, Pei

    2016-04-01

    Terrestrial laser scanning (TLS) has been used to extract accurate forest biophysical parameters for inventory purposes. The diameter at breast height (DBH) is a key parameter for individual trees because it has the potential for modeling the height, volume, biomass, and carbon sequestration potential of the tree based on empirical allometric scaling equations. In order to extract the DBH from single-scan TLS data automatically and accurately within a certain range, we propose an adaptive circle-ellipse fitting method based on a point cloud transect. This proposed method can correct the error caused by simple circle fitting when a tree is slanted. A slanted tree is detected by the circle-ellipse fitting analysis, and the corresponding slant angle is then found from the ellipse fitting result. With this information, the DBH of the tree can be recalculated by reslicing the point cloud data at breast height. Artificial stem data simulated by a cylindrical model of leaning trees and scanning data acquired with the RIEGL VZ-400 were used to test the proposed adaptive fitting method. The results show that the proposed method can detect the trees and accurately estimate the DBH of leaning trees.
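
    A hedged sketch of the circle-versus-ellipse idea follows: an ellipse is fitted (here crudely, via PCA of the breast-height slice), a lean is declared when the axis ratio departs from 1, the slant angle is recovered from the minor-to-major ratio, and the minor axis is taken as the corrected diameter. The threshold and the PCA-based fit are assumptions, not the paper's fitting procedure.

```python
# Illustrative sketch of correcting DBH for a leaning stem: a horizontal slice
# through a tilted cylinder is an ellipse whose minor axis equals the true
# diameter and whose axis ratio encodes the lean angle. The crude PCA "fit"
# and the eccentricity threshold are assumptions.
import numpy as np

def fit_ellipse_axes(xy):
    """Crude ellipse fit via PCA of the 2D slice; returns (major, minor) diameters."""
    centered = xy - xy.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    # for a uniform ring of radius R, the variance along each axis is R^2/2
    return 2.0 * np.sqrt(2.0 * eigvals[0]), 2.0 * np.sqrt(2.0 * eigvals[1])

def estimate_dbh(slice_xy, ratio_threshold=1.05):
    major, minor = fit_ellipse_axes(np.asarray(slice_xy, float))
    if major / minor <= ratio_threshold:           # nearly circular: tree ~ vertical
        return 0.5 * (major + minor), 0.0
    slant = np.degrees(np.arccos(minor / major))   # lean angle from axis ratio
    return minor, slant                            # minor axis ~ true diameter

# usage: dbh, lean_deg = estimate_dbh(points_at_breast_height[:, :2])
```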

  4. Modeling flow through inline tube bundles using an adaptive immersed boundary method

    NASA Astrophysics Data System (ADS)

    Liang, Chunlei; Luo, Xiaoyu; Griffith, Boyce

    2007-11-01

    Fluid flow and the forces it exerts on tube bundle cylinders are important in designing mechanical/nuclear heat exchanger facilities. In this paper, we study the vortex structure of the flow around the tube bundle for different tube spacings. An adaptive, formally second-order immersed boundary (IB) method is used to simulate the flow. One advantage of the IB method is its great flexibility and ease in positioning solid bodies in the fluid domain. Our IB approach uses a six-point regularized delta function and is a type of continuous forcing approach. Validation results obtained using the IB method for two in-tandem cylinders compare well with those obtained using finite volume or spectral element methods on unstructured grids. Subsequently, we simulated flow through six-row inline tube bundles with pitch-to-diameter ratios of 2.1, 3.2, and 4, respectively, on structured, adaptively refined Cartesian grids. The IB method enables us to study the critical tube spacing at which the flow regime switches from the vortex reattachment pattern to alternating individual vortex shedding.

  5. Adaptive Projection Subspace Dimension for the Thick-Restart Lanczos Method

    SciTech Connect

    Yamazaki, Ichitaro; Bai, Zhaojun; Simon, Horst; Wang, Lin-Wang; Wu, K.

    2008-10-01

    The Thick-Restart Lanczos (TRLan) method is an effective method for solving large-scale Hermitian eigenvalue problems. However, its performance strongly depends on the dimension of the projection subspace. In this paper, we propose an objective function to quantify the effectiveness of a chosen subspace dimension, and then introduce an adaptive scheme to dynamically adjust the dimension at each restart. An open-source software package, nu-TRLan, which implements the TRLan method with this adaptive projection subspace dimension is available in the public domain. The numerical results of synthetic eigenvalue problems are presented to demonstrate that nu-TRLan achieves speedups of between 0.9 and 5.1 over the static method using a default subspace dimension. To demonstrate the effectiveness of nu-TRLan in a real application, we apply it to the electronic structure calculations of quantum dots. We show that nu-TRLan can achieve speedups of greater than 1.69 over the state-of-the-art eigensolver for this application, which is based on the Conjugate Gradient method with a powerful preconditioner.

  6. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V

    2014-03-01

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  7. Feasibility of an online adaptive replanning method for cranial frameless intensity-modulated radiosurgery

    SciTech Connect

    Calvo, Juan Francisco; San José, Sol; Garrido, LLuís; Puertas, Enrique; Moragues, Sandra; Pozo, Miquel; Casals, Joan

    2013-10-01

    To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery, when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using an intensity-modulated technique. Patients are immobilized using a thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT_setup). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto the actual CBCT_setup, followed by a reoptimization of the beam fluences ("6D plan") to achieve a dosage similar to that originally intended, while the patient is lying on the linac couch and the original beam arrangement is kept. The goodness of the proposed online adaptive method was retrospectively analyzed for 16 patients with 35 targets treated with the CBCT-based frameless intensity-modulated technique. A simulation of the reference plan onto the actual CBCT_setup, according to the 4 degrees of freedom supported by the linac couch, was also generated for each case (4D plan). Target coverage (D99%) and conformity index values of the 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always assure target coverage (D99% between 72% and 103%), the proposed online adaptive method gave perfect coverage in all cases analyzed, as well as conformity index values similar to those planned. The dose-guided radiosurgery approach is effective in assuring the dose coverage and conformity of an intracranial target volume, avoiding resetting the patient inside the mask in a "trial and error" way so as to remove the pitch and roll errors when a robotic table is not available.

  8. Self-adaptive method for high frequency multi-channel analysis of surface wave method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    When the high frequency multi-channel analysis of surface waves (MASW) method is conducted to explore soil properties in the vadose zone, existing rules for selecting the near offset and spread lengths cannot satisfy the requirements of planar dominant Rayleigh waves for all frequencies of interest ...

  9. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracy in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than in many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is first and primarily to focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then to briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  10. Adaptive finite volume methods for time-dependent P.D.E.S.

    SciTech Connect

    Ware, J.; Berzins, M.

    1995-12-31

    The aim of adaptive methods for time-dependent p.d.e.s is to control the numerical error so that it is less than a user-specified tolerance. This error depends on the spatial discretization method, the spatial mesh, the method of time integration and the timestep. The spatial discretization method and positioning of the spatial mesh points should attempt to ensure that the spatial error is controlled to meet the user's requirements. It is then desirable to integrate the o.d.e. system in time with sufficient accuracy so that the temporal error does not corrupt the spatial accuracy or the reliability of the spatial error estimates. This paper is concerned with the development of a prototype algorithm of this type, based on a cell-centered triangular finite volume scheme, for two space dimensional convection-dominated problems.

  11. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    NASA Technical Reports Server (NTRS)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  12. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs, with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.

  13. Validation of an Adaptive Combustion Instability Control Method for Gas-Turbine Engines

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; DeLaat, John C.; Chang, Clarence T.

    2004-01-01

    This paper describes ongoing testing of an adaptive control method to suppress high-frequency thermo-acoustic instabilities like those found in lean-burning, low-emission combustors that are being developed for future aircraft gas turbine engines. The method, called Adaptive Sliding Phasor Averaged Control, was previously tested in an experimental rig designed to simulate a combustor with an instability of about 530 Hz. Results published earlier, and briefly presented here, demonstrated that this method was effective in suppressing the instability. Because this test rig did not exhibit a well-pronounced instability, a question remained regarding the effectiveness of the control methodology when applied to a more coherent instability. To answer this question, a modified combustor rig was assembled at the NASA Glenn Research Center in Cleveland, Ohio. The modified rig exhibited a more coherent, higher-amplitude instability, but at a lower frequency of about 315 Hz. Test results show that this control method successfully reduced the instability pressure of the lower-frequency test rig. In addition, due to a phenomenon discovered and reported earlier, so-called intra-harmonic coupling, a dramatic suppression of the instability was achieved by focusing control on the second harmonic of the instability. These results and their implications are discussed, as well as a hypothesis describing the mechanism of intra-harmonic coupling.

  14. Adaptive method for quantifying uncertainty in discharge measurements using velocity-area method.

    NASA Astrophysics Data System (ADS)

    Despax, Aurélien; Favre, Anne-Catherine; Belleville, Arnaud

    2015-04-01

    Streamflow information provided by hydrometric services such as EDF-DTG allows real-time monitoring of rivers, streamflow forecasting, paramount hydrological studies and engineering design. In open channels, the traditional approach to measuring flow uses a rating curve, which is an indirect method to estimate the discharge in rivers based on water level and punctual discharge measurements. A large proportion of these discharge measurements are performed using the velocity-area method; it consists of integrating flow velocities and depths through the cross-section [1]. The velocity field is estimated by choosing a number m of verticals, distributed across the river, on which the vertical velocity profile is sampled by a current-meter at n_i different depths. Uncertainties coming from several sources are related to the measurement process. To date, the framework for assessing uncertainty in velocity-area discharge measurements is the method presented in the ISO 748 standard [2], which follows the GUM [3] approach. The combined uncertainty in the measured discharge u(Q), at the 68% level of confidence, proposed by the ISO 748 standard is expressed as
    u^2(Q) = \frac{\sum_{i=1}^{m} q_i^2 \left[ u^2(B_i) + u^2(D_i) + u_p^2(V_i) + \frac{1}{n_i}\left( u_c^2(V_i) + u_{\exp}^2(V_i) \right) \right]}{\left( \sum_{i=1}^{m} q_i \right)^2}
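
    A small sketch implementing the reconstructed ISO 748-style combination is given below; the argument names and the per-vertical layout are illustrative.

```python
# Sketch implementing the combined relative uncertainty of a velocity-area
# gauging in the spirit of the ISO 748 expression reconstructed above. All
# inputs are relative (fractional) standard uncertainties per vertical; the
# variable names are illustrative.
import numpy as np

def combined_discharge_uncertainty(q, u_B, u_D, u_p, u_c, u_exp, n):
    """Relative combined uncertainty u(Q)/Q (68% level) for m verticals.

    q     : partial discharge of each vertical
    u_B   : relative uncertainty on the width of each segment
    u_D   : relative uncertainty on the depth
    u_p   : uncertainty due to the limited number of sampling points
    u_c   : current-meter calibration uncertainty
    u_exp : uncertainty due to limited exposure time
    n     : number of sampled depths on each vertical
    """
    q, n = np.asarray(q, float), np.asarray(n, float)
    per_vertical = (np.asarray(u_B)**2 + np.asarray(u_D)**2 + np.asarray(u_p)**2
                    + (np.asarray(u_c)**2 + np.asarray(u_exp)**2) / n)
    return np.sqrt(np.sum(q**2 * per_vertical) / np.sum(q)**2)

# usage (10 identical verticals, 2 points each, uncertainties as fractions):
# uq = combined_discharge_uncertainty(q=np.full(10, 1.0), u_B=0.01, u_D=0.01,
#                                     u_p=0.03, u_c=0.01, u_exp=0.02, n=2)
```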

  15. Radiation hydrodynamics including irradiation and adaptive mesh refinement with AZEuS. I. Methods

    NASA Astrophysics Data System (ADS)

    Ramsey, J. P.; Dullemond, C. P.

    2015-02-01

    Aims: The importance of radiation to the physical structure of protoplanetary disks cannot be understated. However, protoplanetary disks evolve with time, and so to understand disk evolution and by association, disk structure, one should solve the combined and time-dependent equations of radiation hydrodynamics. Methods: We implement a new implicit radiation solver in the AZEuS adaptive mesh refinement magnetohydrodynamics fluid code. Based on a hybrid approach that combines frequency-dependent ray-tracing for stellar irradiation with non-equilibrium flux limited diffusion, we solve the equations of radiation hydrodynamics while preserving the directionality of the stellar irradiation. The implementation permits simulations in Cartesian, cylindrical, and spherical coordinates, on both uniform and adaptive grids. Results: We present several hydrostatic and hydrodynamic radiation tests which validate our implementation on uniform and adaptive grids as appropriate, including benchmarks specifically designed for protoplanetary disks. Our results demonstrate that the combination of a hybrid radiation algorithm with AZEuS is an effective tool for radiation hydrodynamics studies, and produces results which are competitive with other astrophysical radiation hydrodynamics codes.

  16. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.

    PubMed

    Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J

    2015-09-01

    Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation of this technique. Aberrations induced by the optical setup, especially refractive index mismatch, and by the biological sample itself distort the point spread function and further reduce the number of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module corrects for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of the major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both the signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples. PMID:25940062

  17. Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.

    PubMed

    Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung

    2010-08-01

    This study examined the adaptation accuracy of acrylic denture bases processed using a fluid-resin technique (PERform), injection-molding techniques (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was assessed primarily by measuring the posterior border gap at the mid-palatal area using a microscope, and subsequently by weighing the impression material placed between the denture base and the master cast, using hand-mixed and automixed silicone. The correlation between the data measured using these two test methods was examined. The PERform and Mak Press groups produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to that of the compression-molded dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression material was affected by either the material or the mixing variables. PMID:20675954

  18. Adaptive correction method for an OCXO and investigation of analytical cumulative time error upper bound.

    PubMed

    Zhou, Hui; Kunz, Thomas; Schwartz, Howard

    2011-01-01

    Traditional oscillators used in the timing modules of CDMA and WiMAX base stations are large and expensive. Applying cheaper and smaller, albeit less accurate, oscillators in timing modules is an interesting research challenge. An adaptive control algorithm is presented to enhance such oscillators so that they meet the requirements of base stations during holdover mode. An oscillator frequency stability model is developed for the adaptive control algorithm. This model takes into account the control loop which creates the correction signal when the timing module is in locked mode. A recursive prediction error method is used to identify the system model parameters. Simulation results show that an oscillator enhanced by our adaptive control algorithm improves oscillator performance significantly compared with uncorrected oscillators. Our results also show the benefit of explicitly modeling the control loop. Finally, the cumulative time error upper bound of such enhanced oscillators is investigated analytically, and comparisons between the analytical and simulated upper bounds are provided. The results show that the analytical upper bound can serve as a practical guide for system designers. PMID:21244973
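
    The record above identifies the oscillator model parameters with a recursive prediction error method. The Python sketch below is not the authors' algorithm; it is a generic recursive least squares identifier (a common recursive prediction error variant) applied to a made-up AR(2) drift model, with all signal names and parameter values invented for illustration.

        import numpy as np

        def rls_identify(y, regressors, lam=0.999, delta=1e3):
            """Recursive least squares with exponential forgetting.

            y          : (T,) observed frequency error of the oscillator
            regressors : (T, p) regressor rows, e.g. [y[t-1], y[t-2]]
            lam        : forgetting factor (1.0 = ordinary RLS)
            delta      : initial covariance scale
            Returns the parameter-estimate trajectory, shape (T, p).
            """
            T, p = regressors.shape
            theta = np.zeros(p)               # parameter estimate
            P = delta * np.eye(p)             # covariance of the estimate
            history = np.zeros((T, p))
            for t in range(T):
                phi = regressors[t]
                e = y[t] - phi @ theta                        # prediction error
                k = P @ phi / (lam + phi @ P @ phi)           # gain vector
                theta = theta + k * e                         # parameter update
                P = (P - np.outer(k, phi @ P)) / lam          # covariance update
                history[t] = theta
            return history

        # Toy usage: identify an AR(2) model of simulated oscillator frequency drift.
        rng = np.random.default_rng(0)
        T = 2000
        y = np.zeros(T)
        for t in range(2, T):
            y[t] = 1.6 * y[t - 1] - 0.64 * y[t - 2] + 0.01 * rng.standard_normal()
        regressors = np.column_stack([y[1:-1], y[:-2]])       # [y[t-1], y[t-2]]
        estimates = rls_identify(y[2:], regressors)
        print("final AR estimate:", estimates[-1])            # expected near [1.6, -0.64]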

  19. Pedagogical content knowledge and preparation of high school physics teachers

    NASA Astrophysics Data System (ADS)

    Etkina, Eugenia

    2010-07-01

    This paper contains a scholarly description of pedagogical practices of the Rutgers Physics/Physical Science Teacher Preparation program. The program focuses on three aspects of teacher preparation: knowledge of physics, knowledge of pedagogy, and knowledge of how to teach physics (pedagogical content knowledge—PCK). The program has been in place for 7 years and has a steady production rate of an average of six teachers per year who remain in the profession. The main purpose of the paper is to provide information about a possible structure, organization, and individual elements of a program that prepares physics teachers. The philosophy of the program and the coursework can be implemented either in a physics department or in a school of education. The paper provides details about the program course work and teaching experiences and suggests ways to adapt it to other local conditions.

  20. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2003-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.
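
    The abstract describes removing, from the signal seen by the adaptation law, the effect of plant or actuator characteristics the controller should not adapt to. The sketch below is a minimal, hypothetical illustration of that hedging idea for a scalar plant with actuator saturation; it is not the patented system, and the plant, gains, and adaptive law are invented for the example.

        import numpy as np

        # Hypothetical sketch of "hedging" an adaptive controller against actuator
        # saturation, a characteristic the adaptive law should not try to learn.
        # Scalar plant: x_dot = a*x + b*sat(u), with a unknown to the controller.

        dt, steps = 0.01, 2000
        a_true, b = -0.5, 1.0
        a_hat = 0.0                    # adaptive estimate of the unknown dynamics
        gamma, u_max = 5.0, 1.0        # adaptation gain, actuator limit

        x, x_ref = 0.0, 0.0            # plant state and hedged reference-model state
        for k in range(steps):
            r = 1.0                                    # command
            e = x - x_ref                              # tracking error seen by adaptation
            x_ref_dot_des = -2.0 * (x_ref - r)         # desired reference dynamics
            u_cmd = (x_ref_dot_des - a_hat * x - 2.0 * e) / b
            u_act = np.clip(u_cmd, -u_max, u_max)      # what the actuator really delivers
            nu_h = b * (u_cmd - u_act)                 # hedge signal: commanded minus achievable
            # hedged reference model: remove the response the actuator cannot deliver,
            # so saturation does not show up as tracking error for the adaptive law
            x_ref += dt * (x_ref_dot_des - nu_h)
            x += dt * (a_true * x + b * u_act)         # true plant with saturation
            a_hat += dt * (gamma * e * x)              # adaptation, unaware of the limit

        print("final state x = %.3f, adapted estimate a_hat = %.3f" % (x, a_hat))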

  1. Adaptive control system having hedge unit and related apparatus and methods

    NASA Technical Reports Server (NTRS)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2007-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  2. Describing Preservice Instrumental Music Educators' Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Millican, J. Si

    2016-01-01

    In this descriptive study, I investigated the pedagogical content knowledge of 206 undergraduate music education students by presenting video recordings of beginning band students playing excerpts from their class method books. I asked these preservice educators to identify performance problems and offer potential solutions to the causes of those…

  3. New Pedagogical Literacy Requirement Resulting from Technological Literacy in Education

    ERIC Educational Resources Information Center

    Adigüzel, Abdullah

    2014-01-01

    The aim of this study was to determine the recent pedagogical literacy requirements in technologically supported lessons. The case study approach, one of the qualitative research methods, was used. The participants of the study included 12 voluntary classroom teachers who were in service in three different private primary schools…

  4. A Pedagogical Experiment in Crowdsourcing and Enumerative Bibliography

    ERIC Educational Resources Information Center

    Pionke, A. D.

    2013-01-01

    Faced with increasing marginalization within English studies by the explosion of literary criticism in the 1970s, professional bibliographers began to defend their subdiscipline on pedagogical grounds. More recently, the digital revolution in the academic humanities has prompted a further revaluation of methods and outcomes in training graduate…

  5. The Pedagogic Beliefs of Indonesian Teachers in Inclusive Schools

    ERIC Educational Resources Information Center

    Sheehy, Kieron; Budiyanto

    2015-01-01

    This research explores, for the first time, the pedagogical orientations of Indonesian teachers in the context of inclusive education. A mixed-method approach was used for an analysis of questionnaire data from 140 teachers and qualitative interviews from 20 teachers in four inclusive schools. The findings suggest that, in general, the implicit…

  6. Live Case Analysis: Pedagogical Problems and Prospects in Management Education

    ERIC Educational Resources Information Center

    Roth, Kevin J.; Smith, Chad

    2009-01-01

    The selection of an appropriate and effective pedagogy has been a central theme in management education for decades. There currently exists a wide range of pedagogical options designed to match course content with the most appropriate technique(s) for effective learning outcomes. Most recently, a variety of experiential learning methods have been…

  7. On Improving the Experiment Methodology in Pedagogical Research

    ERIC Educational Resources Information Center

    Horakova, Tereza; Houska, Milan

    2014-01-01

    The paper shows how the methodology of a pedagogical experiment can be improved by including a pre-research stage. If the experiment takes the form of a test procedure, the methodology can be improved using, for example, methods of statistical and didactic analysis of tests which are traditionally used in other areas, i.e.…

  8. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
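
    The interpolation scheme above builds on the moving least squares method. As background, the sketch below implements the classical moving least squares estimate (Gaussian weights, local linear basis) for scattered data in Python; the paper's data-adapted modification and 3-D image-specific machinery are not reproduced, and the bandwidth and test function are arbitrary choices.

        import numpy as np

        def mls_evaluate(x_eval, x_data, f_data, h=0.1):
            """Classical moving least squares estimate at the points x_eval.

            x_eval : (M, d) evaluation points
            x_data : (N, d) scattered sample locations
            f_data : (N,)   sample values
            h      : Gaussian weight bandwidth
            Uses a local linear basis centred at each evaluation point, so the
            constant coefficient of the weighted fit is the estimate.
            """
            x_eval = np.atleast_2d(x_eval)
            out = np.empty(len(x_eval))
            for m, x0 in enumerate(x_eval):
                d2 = np.sum((x_data - x0) ** 2, axis=1)
                w = np.exp(-d2 / h**2)                             # locality weights
                P = np.hstack([np.ones((len(x_data), 1)), x_data - x0])
                A = P.T @ (w[:, None] * P)                         # weighted normal equations
                b = P.T @ (w * f_data)
                out[m] = np.linalg.solve(A, b)[0]
            return out

        # Usage on scattered 2-D data sampled from a smooth test function.
        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 1, size=(400, 2))
        vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]
        query = np.array([[0.3, 0.7], [0.8, 0.2]])
        print(mls_evaluate(query, pts, vals, h=0.08))
        print(np.sin(2 * np.pi * query[:, 0]) * query[:, 1])       # reference values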

  9. System and method for adaptively deskewing parallel data signals relative to a clock

    DOEpatents

    Jenkins, Philip Nord; Cornett, Frank N.

    2006-04-18

    A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.

  10. Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map

    SciTech Connect

    Frankie Li, Shiu Fai

    2014-06-01

    IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high-energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine also includes a simulator that generates diffraction images from an input microstructure.

  11. Wavefront detection method of a single-sensor based adaptive optics system.

    PubMed

    Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li

    2015-08-10

    In adaptive optics systems (AOSs) for optical telescopes, the commonly reported wavefront sensing strategy consists of two parts: a dedicated sensor for tip-tilt (TT) detection and another wavefront sensor for detecting the other distortions. Thus, part of the incident light has to be used for TT detection, which decreases the light energy available to the wavefront sensor and ultimately reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented for measuring both large-amplitude TT and the other distortions. Experiments were performed to test the presented method and to validate the wavefront detection and correction ability of the single-sensor-based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Equipped on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and ultimately improves the detection and imaging capability of the AOS. PMID:26367988
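
    One way a single Shack-Hartmann sensor can serve both purposes is that a global tip-tilt shifts every subaperture spot by the same amount, so the mean centroid slope estimates TT while the residual slopes drive the higher-order correction. The snippet below illustrates only that separation step on synthetic slopes; it is a generic illustration, not the algorithm of the cited paper.

        import numpy as np

        def split_tip_tilt(slopes_x, slopes_y):
            """Separate global tip-tilt from higher-order content in SH slopes.

            slopes_x, slopes_y : per-subaperture centroid slopes. A global tilt
            moves every spot by the same amount, so the mean slope estimates
            tip/tilt and the residual drives the higher-order correction.
            """
            tip, tilt = slopes_x.mean(), slopes_y.mean()
            return tip, tilt, slopes_x - tip, slopes_y - tilt

        # Toy example: a large global tilt plus a small higher-order pattern.
        n = 8
        yy, xx = np.mgrid[0:n, 0:n]
        xx = (xx - xx.mean()) / n
        yy = (yy - yy.mean()) / n
        sx = 0.5 + 0.2 * xx                 # x-slopes: global 0.5 plus local variation
        sy = -0.3 - 0.2 * yy
        tip, tilt, rx, ry = split_tip_tilt(sx.ravel(), sy.ravel())
        print("tip, tilt:", round(tip, 3), round(tilt, 3))
        print("residual slope RMS:", round(float(np.sqrt(np.mean(rx**2 + ry**2))), 3))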

  12. Removal of Cardiopulmonary Resuscitation Artifacts with an Enhanced Adaptive Filtering Method: An Experimental Trial

    PubMed Central

    Gong, Yushun; Yu, Tao; Chen, Bihua; He, Mi; Li, Yongqin

    2014-01-01

    Current automated external defibrillators mandate interruptions of chest compression to avoid the effect of artifacts produced by CPR and to ensure reliable rhythm analysis. However, even seconds of interruption of chest compression during CPR adversely affect the rate of restoration of spontaneous circulation and survival. Numerous digital signal processing techniques have been developed to remove the artifacts or to interpret the corrupted ECG, with promising results, but the performance is still inadequate, especially for nonshockable rhythms. In the present study, we suppressed the CPR artifacts with an enhanced adaptive filtering method. The performance of the method was evaluated by comparing the sensitivity and specificity for shockable rhythm detection before and after filtering the CPR-corrupted ECG signals. The dataset comprised 283 segments of shockable and 280 segments of nonshockable ECG signals during CPR, recorded from 22 adult pigs that experienced prolonged cardiac arrest. For the unfiltered signals, the sensitivity and specificity were 99.3% and 46.8%, respectively. After filtering, a sensitivity of 93.3% and a specificity of 96.0% were achieved. This animal trial demonstrated that the enhanced adaptive filtering method could significantly improve the detection of nonshockable rhythms without compromising the ability to detect a shockable rhythm during uninterrupted CPR. PMID:24795878
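
    The paper's enhanced adaptive filter is not described in enough detail here to reproduce; as a baseline illustration of CPR artifact suppression by adaptive filtering, the sketch below runs a normalized LMS noise canceller that uses a reference channel correlated with the compressions (e.g., compression depth) to estimate and subtract the artifact. Signals, filter order, and step size are all invented for the demo.

        import numpy as np

        def nlms_cancel(primary, reference, order=32, mu=0.5, eps=1e-6):
            """Normalized LMS adaptive noise canceller.

            primary   : ECG corrupted by the CPR artifact
            reference : channel correlated with the artifact only
                        (e.g. compression depth or thoracic impedance)
            Returns (artifact estimate, cleaned ECG).
            """
            w = np.zeros(order)
            artifact_hat = np.zeros(len(primary))
            for i in range(order, len(primary)):
                x = reference[i - order:i][::-1]          # most recent samples first
                y = w @ x                                 # current artifact estimate
                e = primary[i] - y                        # error = cleaned ECG sample
                w += mu * e * x / (eps + x @ x)           # normalized LMS update
                artifact_hat[i] = y
            return artifact_hat, primary - artifact_hat

        # Synthetic demo: a 2 Hz compression artifact added to a clean "ECG".
        t = np.arange(0, 10, 1 / 250)                             # 250 Hz sampling
        ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t)                   # stand-in for the ECG
        compressions = np.sin(2 * np.pi * 2.0 * t)                # reference channel
        artifact = 0.8 * np.sin(2 * np.pi * 2.0 * t + 0.7)        # artifact in the ECG lead
        _, cleaned = nlms_cancel(ecg + artifact, compressions)
        print("residual artifact power:", np.mean((cleaned[500:] - ecg[500:]) ** 2))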

  13. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth-order WENO scheme or a second-order central differencing scheme, depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth-order WENO scheme. This selective usage of the fifth-order WENO and second-order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which has a form similar to the conventional reinitialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  14. A Newton method with adaptive finite elements for solving phase-change problems with natural convection

    NASA Astrophysics Data System (ADS)

    Danaila, Ionut; Moglan, Raluca; Hecht, Frédéric; Le Masson, Stéphane

    2014-10-01

    We present a new numerical system using finite elements with mesh adaptivity for the simulation of solid-liquid phase change systems. In the liquid phase, the natural convection flow is simulated by solving the incompressible Navier-Stokes equations with the Boussinesq approximation. A variable viscosity model allows the velocity to progressively vanish in the solid phase, through an intermediate mushy region. The phase change is modeled by introducing an implicit enthalpy source term in the heat equation. The final system of equations describing the liquid-solid system by a single-domain approach is solved using a Newton iterative algorithm. The space discretization is based on P2-P1 Taylor-Hood finite elements, and mesh adaptivity by metric control is used to accurately track the solid-liquid interface or the density inversion interface for water flows. The numerical method is validated against classical benchmarks that progressively add strong non-linearities to the system of equations: natural convection of air, natural convection of water, melting of a phase-change material, and water freezing. Very good agreement with experimental data is obtained for each test case, proving the capability of the method to deal with both melting and solidification problems with convection. The presented numerical method is easy to implement in the FreeFem++ software, using a syntax close to the mathematical formulation.

  15. FALCON: A method for flexible adaptation of local coordinates of nuclei.

    PubMed

    König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H; Christiansen, Ove

    2016-02-21

    We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin. PMID:26896977

  16. Adaptive explicit and implicit finite element methods for transient thermal analysis

    NASA Technical Reports Server (NTRS)

    Probert, E. J.; Hassan, O.; Morgan, K.; Peraire, J.

    1992-01-01

    The application of adaptive finite element methods to the solution of transient heat conduction problems in two dimensions is investigated. The computational domain is represented by an unstructured assembly of linear triangular elements and the mesh adaptation is achieved by local regeneration of the grid, using an error estimation procedure coupled to an automatic triangular mesh generator. Two alternative solution procedures are considered. In the first procedure, the solution is advanced by explicit timestepping, with domain decomposition being used to improve the computational efficiency of the method. In the second procedure, an algorithm for constructing continuous lines which pass only once through each node of the mesh is employed. The lines are used as the basis of a fully implicit method, in which the equation system is solved by line relaxation using a block tridiagonal equation solver. The numerical performance of the two procedures is compared for the analysis of a problem involving a moving heat source applied to a convectively cooled cylindrical leading edge.

  17. Efficient reconstruction method for ground layer adaptive optics with mixed natural and laser guide stars.

    PubMed

    Wagner, Roland; Helin, Tapio; Obereder, Andreas; Ramlau, Ronny

    2016-02-20

    The imaging quality of modern ground-based telescopes such as the planned European Extremely Large Telescope is affected by atmospheric turbulence. In consequence, they heavily depend on stable and high-performance adaptive optics (AO) systems. Using measurements of incoming light from guide stars, an AO system compensates for the effects of turbulence by adjusting so-called deformable mirror(s) (DMs) in real time. In this paper, we introduce a novel reconstruction method for ground layer adaptive optics. In the literature, a common approach to this problem is to use Bayesian inference in order to model the specific noise structure appearing due to spot elongation. This approach leads to large coupled systems with high computational effort. Recently, fast solvers of linear order, i.e., with computational complexity O(n), where n is the number of DM actuators, have emerged. However, the quality of such methods typically degrades in low flux conditions. Our key contribution is to achieve the high quality of the standard Bayesian approach while at the same time maintaining the linear order speed of the recent solvers. Our method is based on performing a separate preprocessing step before applying the cumulative reconstructor (CuReD). The efficiency and performance of the new reconstructor are demonstrated using the OCTOPUS, the official end-to-end simulation environment of the ESO for extremely large telescopes. For more specific simulations we also use the MOST toolbox. PMID:26906596

  18. A CD adaptive monitoring and compensation method based on the average of the autocorrelation matrix eigenvalue

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Lei, Jianming; Guo, Junhui; Zou, Xuecheng; Li, Bin; Lu, Li

    2014-02-01

    A new digital signal processing (DSP) method for adaptive chromatic dispersion (CD) monitoring and compensation, based on the eigenvalues of the autocorrelation matrix, is proposed. It employs the average of the autocorrelation matrix eigenvalues, instead of the eigenvalue spread, as the scanning metric. Averaging is effective in relieving the performance degradation caused by fluctuations of the autocorrelation matrix eigenvalues. Compared with the eigenvalue-spread scanning algorithm, this method reduces the monitoring error from more than 200 ps/nm to below 10 ps/nm without increasing computational complexity. Simulation results show that in a 100 Gbit/s polarization division multiplexing (PDM) quadrature phase shift keying (QPSK) coherent optical transmission system, this method improves the bit error rate (BER) performance and the system robustness against amplified spontaneous emission noise.

  19. Dynamics of the adaptive natural gradient descent method for soft committee machines

    NASA Astrophysics Data System (ADS)

    Inoue, Masato; Park, Hyeyoung; Okada, Masato

    2004-05-01

    The adaptive natural gradient descent (ANGD) method realizes natural gradient descent (NGD) without needing to know the input distribution of the learning data and reduces the calculation cost from cubic order to quadratic order. However, no performance analysis of ANGD has been done. We have developed a statistical-mechanical theory of a simplified version of ANGD dynamics for soft committee machines in on-line learning; this theory provides deterministic learning dynamics expressed through a few order parameters, even though ANGD intrinsically holds a large approximated Fisher information matrix. Numerical results obtained using this theory were consistent with those of a simulation, with respect not only to the learning curve but also to learning failure. Utilizing this theory, we numerically evaluated ANGD efficiency and found that ANGD generally performs as well as NGD. We also revealed the key condition affecting the learning plateau in ANGD.
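
    A hedged sketch of the adaptive natural gradient idea follows: instead of inverting the Fisher information matrix, a running estimate of its inverse is refined by a rank-one update at each example (quadratic cost per step) and used to precondition the gradient. The update form, the toy soft committee machine, and all rates below are assumptions chosen for illustration, not the exact scheme analyzed in the paper.

        import numpy as np
        from scipy.special import erf

        rng = np.random.default_rng(0)
        N, K = 10, 2                               # input dimension, hidden units
        J_true = rng.standard_normal((K, N))       # teacher soft committee machine
        J = 0.1 * rng.standard_normal((K, N))      # student weights

        def model(J, x):
            return erf(J @ x / np.sqrt(2)).sum()

        def output_grad(J, x):
            """Gradient of the network output w.r.t. the flattened weights."""
            u = J @ x
            gprime = np.sqrt(2 / np.pi) * np.exp(-u**2 / 2)
            return (gprime[:, None] * x[None, :]).ravel()

        P = K * N
        Ginv = np.eye(P)                 # running estimate of the inverse Fisher matrix
        eta, eps = 0.01, 1e-3            # learning rate, inverse-Fisher adaptation rate

        for t in range(20000):
            x = rng.standard_normal(N)
            y = model(J_true, x)
            df = output_grad(J, x)                       # direction used for the Fisher estimate
            loss_grad = (model(J, x) - y) * df           # gradient of 0.5*(f - y)^2
            # adaptive inverse-Fisher update (rank-one, O(P^2) per example):
            #   Ginv <- (1 + eps) Ginv - eps (Ginv df)(Ginv df)^T
            v = Ginv @ df
            Ginv = (1 + eps) * Ginv - eps * np.outer(v, v)
            # natural gradient parameter update
            J -= (eta * (Ginv @ loss_grad)).reshape(K, N)

        xs = rng.standard_normal((500, N))
        mse = np.mean([(model(J, x) - model(J_true, x)) ** 2 for x in xs])
        print("generalization mse after training:", mse)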

  20. Directionally adaptive finite element method for multidimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Tan, Zhiqiang; Varghese, Philip L.

    1993-01-01

    A directionally adaptive finite element method for multidimensional compressible flows is presented. Quadrilateral and hexahedral elements are used because they have several advantages over triangular and tetrahedral elements. Unlike traditional methods that use quadrilateral/hexahedral elements, our method allows an element to be divided in each of the three directions in 3D and two directions in 2D. Some restrictions on mesh structure are found to be necessary, especially in 3D. The refining and coarsening procedures, and the treatment of constraints are given. A new implementation of upwind schemes in the constrained finite element system is presented. Some example problems, including a Mach 10 shock interaction with the walls of a 2D channel, a 2D viscous compression corner flow, and inviscid and viscous 3D flows in square channels, are also shown.

  1. An adaptive distance-based group contribution method for thermodynamic property prediction.

    PubMed

    He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing

    2016-09-14

    In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory. PMID:27522953
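
    The core of the distance-based group contribution idea, as described above, is to weight each group-group interaction by an exponential decay in the number of bonds separating the groups and then fit contributions by regression. The sketch below builds such features for a toy molecule; the decay constant, group typing, and molecule encoding are made-up illustrations, not the authors' parameterization.

        import numpy as np
        from collections import deque

        def bond_distances(adjacency):
            """All-pairs bond counts (graph distances) via BFS from every atom."""
            n = len(adjacency)
            dist = np.full((n, n), np.inf)
            for s in range(n):
                dist[s, s] = 0
                queue = deque([s])
                while queue:
                    u = queue.popleft()
                    for v in adjacency[u]:
                        if dist[s, v] == np.inf:
                            dist[s, v] = dist[s, u] + 1
                            queue.append(v)
            return dist

        def dbgc_features(adjacency, group_of_atom, group_names, alpha=1.0):
            """Group counts plus distance-decayed pair-interaction features.

            Each group pair contributes exp(-alpha * d), where d is the number of
            bonds between the two groups; alpha is a made-up decay constant here.
            """
            d = bond_distances(adjacency)
            counts = [float(sum(g == name for g in group_of_atom)) for name in group_names]
            pair_feats = []
            for a in range(len(group_names)):
                for b in range(a, len(group_names)):
                    w = 0.0
                    for i in range(len(group_of_atom)):
                        for j in range(i + 1, len(group_of_atom)):
                            if {group_of_atom[i], group_of_atom[j]} == {group_names[a], group_names[b]}:
                                w += np.exp(-alpha * d[i, j])
                    pair_feats.append(w)
            return np.array(counts + pair_feats)

        # Example: n-butane represented as a chain of CH3-CH2-CH2-CH3 groups.
        adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        group_of_atom = ["CH3", "CH2", "CH2", "CH3"]
        x = dbgc_features(adjacency, group_of_atom, group_names=["CH3", "CH2"])
        print(x)
        # Stacking such feature vectors for a training set gives a matrix X; the
        # group and interaction contributions can then be fitted to enthalpies by
        # multiple linear regression, e.g. np.linalg.lstsq(X, Hf, rcond=None).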

  2. Application of Symmetry Adapted Function Method for Three-Dimensional Reconstruction of Octahedral Biological Macromolecules

    PubMed Central

    Zeng, Songjun; Liu, Hongrong; Yang, Qibin

    2010-01-01

    A method for the three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry adapted function (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein DegP24 and red-cell L ferritin, were used as examples to implement reconstruction by the OSAF method. The simulation procedure was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise, i.e., signal-to-noise ratios (S/N) of 0.1, 0.5, and 0.8, were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even for high noise levels. These results indicate that the OSAF method is a feasible and efficient approach for reconstructing the structures of macromolecules and has the ability to suppress the influence of noise. PMID:20150955

  3. Using Mixed-Methods Research to Adapt and Evaluate a Family Strengthening Intervention in Rwanda

    PubMed Central

    Betancourt, Theresa S.; Meyers-Ohki, Sarah E.; Stevenson, Anne; Ingabire, Charles; Kanyanganzi, Fredrick; Munyana, Morris; Mushashi, Christina; Teta, Sharon; Fayida, Ildephonse; Cyamatare, Felix Rwabukwisi; Stulac, Sara; Beardslee, William R.

    2013-01-01

    Introduction Research in several international settings indicates that children and adolescents affected by HIV and other compounded adversities are at increased risk for a range of mental health problems including depression, anxiety, and social withdrawal. More intervention research is needed to develop valid measurement and intervention tools to address child mental health in such settings. Objective This article presents a collaborative mixed-methods approach to designing and evaluating a mental health intervention to assist families facing multiple adversities in Rwanda. Methods Qualitative methods were used to gain knowledge of culturally-relevant mental health problems in children and adolescents, individual, family and community resources, and contextual dynamics among HIV-affected families. These data were used to guide the selection and adaptation of mental health measures to assess intervention outcomes. Measures were subjected to a quantitative validation exercise. Qualitative data and community advisory board input also informed the selection and adaptation of a family-based preventive intervention to reduce the risk for mental health problems among children in families affected by HIV. Community-based participatory methods were used to ensure that the intervention targeted relevant problems manifest in Rwandan children and families and built on local strengths. Results Qualitative data on culturally-appropriate practices for building resilience in vulnerable families have enriched the development of a Family-Strengthening Intervention (FSI). Input from community partners has also contributed to creating a feasible and culturally-relevant intervention. Mental health measures demonstrate strong performance in this population. Conclusion The mixed-methods model discussed represents a refined, multi-phase protocol for incorporating qualitative data and community input in the development and evaluation of feasible, culturally-sound quantitative assessments

  4. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
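
    The spatial feature indicator named above, difference curvature, compares the second derivative along the gradient direction with the one across it, so it responds to edges but not to isolated noise or flat areas. The snippet below computes that indicator and a derived edge/flat weight map under the usual difference-curvature definition; it is only the classification step, not the full SATV reconstruction.

        import numpy as np

        def difference_curvature(u, eps=1e-8):
            """Edge indicator D = | |u_nn| - |u_ee| | per pixel.

            u_nn is the second derivative along the gradient direction and u_ee
            the one across it; D is large at edges but small both in flat areas
            and at isolated noise, which is what makes it a convenient switch
            between TV-like and Tikhonov-like regularization.
            """
            ux, uy = np.gradient(u)
            uxx = np.gradient(ux, axis=0)
            uxy = np.gradient(ux, axis=1)
            uyy = np.gradient(uy, axis=1)
            g2 = ux**2 + uy**2 + eps
            u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
            u_ee = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
            return np.abs(np.abs(u_nn) - np.abs(u_ee))

        # Spatially varying weight in [0, 1]: ~1 near edges (favor the TV term),
        # ~0 in flat regions (favor first-order Tikhonov smoothing).
        img = np.zeros((64, 64))
        img[:, 32:] = 1.0
        img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)
        D = difference_curvature(img)
        edge_weight = D / (D.max() + 1e-12)
        print("mean weight near the edge:   ", edge_weight[:, 31:33].mean())
        print("mean weight in a flat region:", edge_weight[:, 5:15].mean())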

  5. Building Adaptive Capacity with the Delphi Method and Mediated Modeling for Water Quality and Climate Change Adaptation in Lake Champlain Basin

    NASA Astrophysics Data System (ADS)

    Coleman, S.; Hurley, S.; Koliba, C.; Zia, A.; Exler, S.

    2014-12-01

    Eutrophication and nutrient pollution of surface waters occur within complex governance, social, hydrologic and biophysical basin contexts. The pervasive and perennial nutrient pollution in Lake Champlain Basin, despite decades of efforts, exemplifies problems found across the world's surface waters. Stakeholders with diverse values, interests, and forms of explicit and tacit knowledge determine water quality impacts through land use, agricultural and water resource decisions. Uncertainty, ambiguity and dynamic feedback further complicate the ability to promote the continual provision of water quality and ecosystem services. Adaptive management of water resources and land use requires mechanisms to allow for learning and integration of new information over time. The transdisciplinary Research on Adaptation to Climate Change (RACC) team is working to build regional adaptive capacity in Lake Champlain Basin while studying and integrating governance, land use, hydrological, and biophysical systems to evaluate implications for adaptive management. The RACC team has engaged stakeholders through mediated modeling workshops, online forums, surveys, focus groups and interviews. In March 2014, CSS2CC.org, an interactive online forum to source and identify adaptive interventions from a group of stakeholders across sectors was launched. The forum, based on the Delphi Method, brings forward the collective wisdom of stakeholders and experts to identify potential interventions and governance designs in response to scientific uncertainty and ambiguity surrounding the effectiveness of any strategy, climate change impacts, and the social and natural systems governing water quality and eutrophication. A Mediated Modeling Workshop followed the forum in May 2014, where participants refined and identified plausible interventions under different governance, policy and resource scenarios. Results from the online forum and workshop can identify emerging consensus across scales and sectors

  6. Developing Pre-service Elementary Teachers' Pedagogical Practices While Planning Using the Learning Cycle

    NASA Astrophysics Data System (ADS)

    Ross, Danielle K.; Cartier, Jennifer L.

    2015-10-01

    Without the science content knowledge required to teach this discipline effectively, many elementary teachers struggle in the absence of supportive curriculum materials. Curriculum materials are often the main means by which science practices and canonical knowledge are incorporated into lessons. As part of a 5-year longitudinal research and design project at a large university in the USA, faculty developed an elementary science methods course for pre-service elementary teachers. As a result, the pre-service elementary teachers come to understand the Learning Cycle framework as a support mechanism for science instruction. This study examined pre-service elementary teachers' use of curriculum materials in lesson planning by identifying the types of instructional tools used during the Learning Cycle. The findings highlight the importance of providing pre-service elementary teachers with supportive frameworks and opportunities to learn to critique and adapt curriculum materials in order to begin developing their pedagogical design capacity for Learning Cycle lessons.

  7. The Adaptive Biasing Force Method: Everything You Always Wanted To Know but Were Afraid To Ask

    PubMed Central

    2014-01-01

    In the host of numerical schemes devised to calculate free energy differences by way of geometric transformations, the adaptive biasing force algorithm has emerged as a promising route to map complex free-energy landscapes. It relies upon the simple concept that as a simulation progresses, a continuously updated biasing force is added to the equations of motion, such that in the long-time limit it yields a Hamiltonian devoid of an average force acting along the transition coordinate of interest. This means that sampling proceeds uniformly on a flat free-energy surface, thus providing reliable free-energy estimates. Much of the appeal of the algorithm to the practitioner is in its physically intuitive underlying ideas and the absence of any requirements for prior knowledge about free-energy landscapes. Since its inception in 2001, the adaptive biasing force scheme has been the subject of considerable attention, from in-depth mathematical analysis of convergence properties to novel developments and extensions. The method has also been successfully applied to many challenging problems in chemistry and biology. In this contribution, the method is presented in a comprehensive, self-contained fashion, discussing with a critical eye its properties, applicability, and inherent limitations, as well as introducing novel extensions. Through free-energy calculations of prototypical molecular systems, many methodological aspects are examined, from stratification strategies to overcoming the so-called hidden barriers in orthogonal space, relevant not only to the adaptive biasing force algorithm but also to other importance-sampling schemes. On the basis of the discussions in this paper, a number of good practices for improving the efficiency and reliability of the computed free-energy differences are proposed. PMID:25247823
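
    As a concrete illustration of the concept summarized above, the sketch below runs adaptive biasing force on a one-dimensional double-well potential with overdamped Langevin dynamics: the instantaneous force along the coordinate is accumulated in bins, its running average is applied with opposite sign, and the free energy is recovered by integrating the mean force. The potential, ramp, and parameters are toy choices, not a reproduction of any production implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        beta, dt, gamma = 1.0, 1e-3, 1.0
        U = lambda x: 4.0 * (x**2 - 1.0) ** 2          # double well, barrier ~ 4 kT
        F = lambda x: -16.0 * x * (x**2 - 1.0)         # -dU/dx

        edges = np.linspace(-1.8, 1.8, 73)             # bins along the coordinate
        force_sum = np.zeros(len(edges) - 1)
        count = np.zeros(len(edges) - 1)

        x = -1.0
        for step in range(1_000_000):
            b = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(count) - 1))
            f_inst = F(x)
            force_sum[b] += f_inst                      # accumulate the instantaneous force
            count[b] += 1
            # biasing force = minus the running mean force in this bin, ramped in
            # so that early, noisy estimates are not applied at full strength
            bias = -(force_sum[b] / count[b]) * min(1.0, count[b] / 500.0)
            # overdamped Langevin step with the biased force
            noise = np.sqrt(2.0 * dt / (beta * gamma)) * rng.standard_normal()
            x += dt * (f_inst + bias) / gamma + noise

        # Free energy estimate: integrate minus the mean force over the bins.
        centers = 0.5 * (edges[1:] + edges[:-1])
        mean_force = np.where(count > 0, force_sum / np.maximum(count, 1), 0.0)
        A = -np.cumsum(mean_force) * (centers[1] - centers[0])
        A -= A.min()
        print("estimated barrier height (kT):", round(A[np.abs(centers).argmin()], 2))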

  8. Modeling, mesh generation, and adaptive numerical methods for partial differential equations

    SciTech Connect

    Babuska, I.; Henshaw, W.D.; Oliger, J.E.; Flaherty, J.E.; Hopcroft, J.E.; Tezduyar, T.

    1995-12-31

    Mesh generation is one of the most time consuming aspects of computational solutions of problems involving partial differential equations. It is, furthermore, no longer acceptable to compute solutions without proper verification that specified accuracy criteria are being satisfied. Mesh generation must be related to the solution through computable estimates of discretization errors. Thus, an iterative process of alternate mesh and solution generation evolves in an adaptive manner with the end result that the solution is computed to prescribed specifications in an optimal, or at least efficient, manner. While mesh generation and adaptive strategies are becoming available, major computational challenges remain. One, in particular, involves moving boundaries and interfaces, such as free-surface flows and fluid-structure interactions. A 3-week program was held from July 5 to July 23, 1993 with 173 participants and 66 keynote, invited, and contributed presentations. This volume represents written versions of 21 of these lectures. These proceedings are organized roughly in order of their presentation at the workshop. Thus, the initial papers are concerned with geometry and mesh generation and discuss the representation of physical objects and surfaces on a computer and techniques to use this data to generate, principally, unstructured meshes of tetrahedral or hexahedral elements. The remainder of the papers cover adaptive strategies, error estimation, and applications. Several submissions deal with high-order p- and hp-refinement methods where mesh refinement/coarsening (h-refinement) is combined with local variation of method order (p-refinement). Combinations of mathematically verified and physically motivated approaches to error estimation are represented. Applications center on fluid mechanics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  9. Practical Method of Adaptive Radiotherapy for Prostate Cancer Using Real-Time Electromagnetic Tracking

    SciTech Connect

    Olsen, Jeffrey R.; Noel, Camille E.; Baker, Kenneth; Santanam, Lakshmi; Michalski, Jeff M.; Parikh, Parag J.

    2012-04-01

    Purpose: We have created an automated process using real-time tracking data to evaluate the adequacy of planning target volume (PTV) margins in prostate cancer, allowing a process of adaptive radiotherapy with minimal physician workload. We present an analysis of PTV adequacy and a proposed adaptive process. Methods and Materials: Tracking data were analyzed for 15 patients who underwent step-and-shoot multi-leaf collimation (SMLC) intensity-modulated radiation therapy (IMRT) with uniform 5-mm PTV margins for prostate cancer using the Calypso® Localization System. Additional plans were generated with 0- and 3-mm margins. A custom software application using the planned dose distribution and structure location from computed tomography (CT) simulation was developed to evaluate the dosimetric impact to the target due to motion. The dose delivered to the prostate was calculated for the initial three, five, and 10 fractions, and for the entire treatment. Treatment was accepted as adequate if the minimum delivered prostate dose (Dmin) was at least 98% of the planned Dmin. Results: For 0-, 3-, and 5-mm PTV margins, adequate treatment was obtained in 3 of 15, 12 of 15, and 15 of 15 patients, and the delivered Dmin ranged from 78% to 99%, 96% to 100%, and 99% to 100% of the planned Dmin. Changes in Dmin did not correlate with magnitude of prostate motion. Treatment adequacy during the first 10 fractions predicted sufficient dose delivery for the entire treatment for all patients and margins. Conclusions: Our adaptive process successfully used real-time tracking data to predict the need for PTV modifications, without the added burden of physician contouring and image analysis. Our methods are applicable to other uses of real-time tracking, including hypofractionated treatment.
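
    A simplified version of the evaluation step described above can be sketched as follows: sample the static planned dose distribution at target points displaced by the tracked per-fraction motion, accumulate the dose, and flag the treatment as adequate if the delivered minimum dose stays at or above 98% of the planned minimum. The arrays, dose model, and motion statistics below are hypothetical stand-ins, not the authors' software or clinical data.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def delivered_dmin(dose, grid, prostate_pts, shifts_mm):
            """Approximate delivered minimum prostate dose under rigid motion.

            dose         : planned 3-D dose array (Gy) on the planning grid
            grid         : (x, y, z) 1-D coordinate vectors of the grid (mm)
            prostate_pts : (P, 3) sample points of the prostate (mm)
            shifts_mm    : (F, 3) per-fraction mean target displacement

            The plan is assumed to be delivered in equal fractions; each fraction
            samples the static dose distribution at the displaced target points.
            """
            interp = RegularGridInterpolator(grid, dose, bounds_error=False, fill_value=0.0)
            total = np.zeros(len(prostate_pts))
            for s in shifts_mm:
                total += interp(prostate_pts + s)      # dose "seen" by the moved target
            return total.min() / len(shifts_mm)

        # Hypothetical usage with made-up geometry, dose, and motion statistics.
        x = y = z = np.linspace(-50, 50, 101)
        X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
        dose = 78.0 * np.exp(-((X / 40) ** 2 + (Y / 40) ** 2 + (Z / 40) ** 2) ** 4)
        pts = np.random.default_rng(0).uniform(-20, 20, size=(500, 3))   # stand-in prostate
        shifts = np.random.default_rng(1).normal(0, 3, size=(39, 3))     # 39 fractions, 3 mm SD
        planned = delivered_dmin(dose, (x, y, z), pts, np.zeros((39, 3)))
        delivered = delivered_dmin(dose, (x, y, z), pts, shifts)
        print("delivered/planned Dmin = %.3f -> adequate: %s"
              % (delivered / planned, delivered >= 0.98 * planned))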

  10. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate the characterization, development, and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. PMID:25463325
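
    To illustrate the leveling step, the sketch below removes a planar trend with an iteratively reweighted least squares fit using Tukey biweights, so bright features and outliers barely influence the fitted background. A plane is a simpler trend model than the paper's local regression, and the test image is synthetic; this is an illustrative stand-in rather than the published procedure.

        import numpy as np

        def robust_level(image, n_iter=10, c=4.685):
            """Remove a planar trend fitted by iteratively reweighted least squares.

            Pixels that deviate strongly from the current trend (features, outliers,
            scan-line glitches) receive Tukey biweight weights near zero, so they
            barely influence the fitted background plane.
            """
            ny, nx = image.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            A = np.column_stack([np.ones(image.size), xx.ravel(), yy.ravel()])
            z = image.ravel().astype(float)
            w = np.ones_like(z)
            for _ in range(n_iter):
                sw = np.sqrt(w)
                coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
                r = z - A @ coef
                s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale (MAD)
                u = r / (c * s)
                w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)          # Tukey biweight
            return (z - A @ coef).reshape(ny, nx)

        # Demo: tilted background plus a bright feature that must not bias the fit.
        rng = np.random.default_rng(0)
        ny, nx = 128, 128
        yy, xx = np.mgrid[0:ny, 0:nx]
        img = 0.01 * xx + 0.02 * yy + 0.05 * rng.standard_normal((ny, nx))
        img[40:60, 40:60] += 3.0
        leveled = robust_level(img)
        print("background std after leveling:", round(float(leveled[80:, 80:].std()), 3))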

  11. An adaptive multifluid interface-capturing method for compressible flow in complex geometries

    SciTech Connect

    Greenough, J.A.; Beckner, V.; Pember, R.B.; Crutchfield, W.Y.; Bell, J.B.; Colella, P.

    1995-04-01

    We present a numerical method for solving the multifluid equations of gas dynamics using an operator-split second-order Godunov method for flow in complex geometries in two and three dimensions. The multifluid system treats the fluid components as thermodynamically distinct entities and correctly models fluids with different compressibilities. This treatment allows a general equation-of-state (EOS) specification and the method is implemented so that the EOS references are minimized. The current method is complementary to volume-of-fluid (VOF) methods in the sense that a VOF representation is used, but no interface reconstruction is performed. The Godunov integrator captures the interface during the solution process. The basic multifluid integrator is coupled to a Cartesian grid algorithm that also uses a VOF representation of the fluid-body interface. This representation of the fluid-body interface allows the algorithm to easily accommodate arbitrarily complex geometries. The resulting single grid multifluid-Cartesian grid integration scheme is coupled to a local adaptive mesh refinement algorithm that dynamically refines selected regions of the computational grid to achieve a desired level of accuracy. The overall method is fully conservative with respect to the total mixture. The method will be used for a simple nozzle problem in two-dimensional axisymmetric coordinates.

  12. Compact integration factor methods for complex domains and adaptive mesh refinement

    PubMed Central

    Liu, Xinfeng; Nie, Qing

    2010-01-01

    Implicit integration factor (IIF) method, a class of efficient semi-implicit temporal scheme, was introduced recently for stiff reaction-diffusion equations. To reduce cost of IIF, compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinate, due to the compact representation for the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF for other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has similar computational efficiency and stability properties as the cIIF in Cartesian coordinate. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition for cIIF. Because the second order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply those methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed. PMID:20543883
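
    The sketch below illustrates the basic (non-compact) second-order implicit integration factor step for a one-dimensional reaction-diffusion equation: the stiff diffusion operator is handled exactly through a matrix exponential, and only the local reaction term is implicit, so the nonlinear solve is pointwise. The compact storage of exponentials, curvilinear coordinates, and adaptive mesh refinement discussed in the paper are not reproduced, and the equation and parameters are arbitrary.

        import numpy as np
        from scipy.linalg import expm

        # u_t = D u_xx + f(u) on (0, 1) with homogeneous Dirichlet boundaries.
        N, L, D, dt = 100, 1.0, 0.01, 0.05
        dx = L / (N + 1)
        x = np.linspace(dx, L - dx, N)

        # discrete diffusion operator and its integration factor (computed once)
        A = D / dx**2 * (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
                         + np.diag(np.ones(N - 1), -1))
        E = expm(A * dt)

        f = lambda u: u * (1.0 - u)             # logistic (Fisher-KPP type) reaction
        fprime = lambda u: 1.0 - 2.0 * u

        u = np.exp(-100.0 * (x - 0.5) ** 2)     # initial bump
        for step in range(200):
            # IIF2: u_new - (dt/2) f(u_new) = E (u_old + (dt/2) f(u_old))
            rhs = E @ (u + 0.5 * dt * f(u))
            u_new = rhs.copy()
            for _ in range(5):                  # pointwise Newton for the local reaction
                g = u_new - 0.5 * dt * f(u_new) - rhs
                u_new -= g / (1.0 - 0.5 * dt * fprime(u_new))
            u = u_new

        print("solution mass at t = %.1f: %.4f" % (200 * dt, u.sum() * dx))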

  13. An adaptively refined phase-space element method for cosmological simulations and collisionless dynamics

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Angulo, Raul E.

    2016-01-01

    N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refineable `Lagrangian phase-space elements'. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase-space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: (i) explicit tracking of the fine-grained distribution function, (ii) natural representation of caustics, (iii) intrinsically smooth gravitational potential fields, thus (iv) eliminating the need for any type of ad hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows us to explicitly follow the fine-grained evolution in six-dimensional phase-space.

  14. Blended particle methods with adaptive subspaces for filtering turbulent dynamical systems

    NASA Astrophysics Data System (ADS)

    Qi, Di; Majda, Andrew J.

    2015-04-01

    It is a major challenge throughout science and engineering to improve uncertain model predictions by utilizing noisy data sets from nature. Hybrid methods combining the advantages of traditional particle filters and the Kalman filter offer a promising direction for filtering or data assimilation in high dimensional turbulent dynamical systems. In this paper, blended particle filtering methods that exploit the physical structure of turbulent dynamical systems are developed. Non-Gaussian features of the dynamical system are captured adaptively in an evolving-in-time low dimensional subspace through particle methods, while at the same time statistics in the remaining portion of the phase space are amended by conditional Gaussian mixtures interacting with the particles. The importance of both using the adaptively evolving subspace and introducing conditional Gaussian statistics in the orthogonal part is illustrated here by simple examples. For practical implementation of the algorithms, finding the most probable distributions that characterize the statistics in the phase space as well as effective resampling strategies is discussed to handle realizability and stability issues. To test the performance of the blended algorithms, the forty dimensional Lorenz 96 system is utilized with a five dimensional subspace to run particles. The filters are tested extensively in various turbulent regimes with distinct statistics and with changing observation time frequency and both dense and sparse spatial observations. In real applications perfect dynamical models are always inaccessible considering the complexities in both modeling and computation of high dimensional turbulent system. The effects of model errors from imperfect modeling of the systems are also checked for these methods. The blended methods show uniformly high skill in both capturing non-Gaussian statistics and achieving accurate filtering results in various dynamical regimes with and without model errors.

  15. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of the P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals from the standard Common Standards for Quantitative Electrocardiography database (CSEDB). On the CSEDB, the standard deviation (SD) of the measured errors satisfies the given criteria at each point, and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89%, despite an overlap of the spectral components of the QRS complex, P wave, and power line noise. The algorithm shows great performance in suppressing J-point elevation and reached a low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. The T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of the T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  16. An adaptive compromise programming method for multi-objective path optimization

    NASA Astrophysics Data System (ADS)

    Li, Rongrong; Leung, Yee; Lin, Hui; Huang, Bo

    2013-04-01

    Network routing problems generally involve multiple objectives which may conflict one another. An effective way to solve such problems is to generate a set of Pareto-optimal solutions that is small enough to be handled by a decision maker and large enough to give an overview of all possible trade-offs among the conflicting objectives. To accomplish this, the present paper proposes an adaptive method based on compromise programming to assist decision makers in identifying Pareto-optimal paths, particularly for non-convex problems. This method can provide an unbiased approximation of the Pareto-optimal alternatives by adaptively changing the origin and direction of search in the objective space via the dynamic updating of the largest unexplored region till an appropriately structured Pareto front is captured. To demonstrate the efficacy of the proposed methodology, a case study is carried out for the transportation of dangerous goods in the road network of Hong Kong with the support of geographic information system. The experimental results confirm the effectiveness of the approach.
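
    The scalarization at the heart of compromise programming can be illustrated on a toy network: each candidate path gets an objective vector, and the path minimizing the weighted Chebyshev distance to the ideal point is selected; sweeping the weights (or, in the paper, adaptively re-anchoring the search region) traces out Pareto-optimal paths, including ones on non-convex parts of the front. The graph, objectives, and weights below are invented, and the adaptive updating of the search origin is not reproduced.

        import numpy as np
        from itertools import permutations

        edges = {                      # (travel_time, risk) per directed edge; made up
            ("s", "a"): (2, 5), ("s", "b"): (4, 1), ("a", "t"): (2, 4),
            ("b", "t"): (5, 1), ("a", "b"): (1, 1), ("b", "a"): (1, 1),
        }
        intermediates = ["a", "b"]

        def path_cost(path):
            costs = [edges[leg] for leg in zip(path[:-1], path[1:])]
            return np.sum(costs, axis=0)

        # enumerate the simple s-t paths of this tiny network
        paths = []
        for r in range(len(intermediates) + 1):
            for mid in permutations(intermediates, r):
                p = ("s",) + mid + ("t",)
                if all(leg in edges for leg in zip(p[:-1], p[1:])):
                    paths.append(p)

        costs = np.array([path_cost(p) for p in paths], dtype=float)
        ideal = costs.min(axis=0)                     # best achievable value per objective
        for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
            # weighted Chebyshev distance to the ideal point (compromise programming)
            cheby = np.max(np.array(w) * (costs - ideal), axis=1)
            best = int(np.argmin(cheby))
            print("weights", w, "->", " -> ".join(paths[best]), "objectives", costs[best])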

  17. An Adaptive Fast Multipole Boundary Element Method for Poisson-Boltzmann Electrostatics

    SciTech Connect

    Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, Jonathan

    2009-01-01

    The numerical solution of the Poisson Boltzmann (PB) equation is a useful but a computationally demanding tool for studying electrostatic solvation effects in chemical and biomolecular systems. Recently, we have described a boundary integral equation-based PB solver accelerated by a new version of the fast multipole method (FMM). The overall algorithm shows an order N complexity in both the computational cost and memory usage. Here, we present an updated version of the solver by using an adaptive FMM for accelerating the convolution type matrix-vector multiplications. The adaptive algorithm, when compared to our previous nonadaptive one, not only significantly improves the performance of the overall memory usage but also remarkably speeds the calculation because of an improved load balancing between the local- and far-field calculations. We have also implemented a node-patch discretization scheme that leads to a reduction of unknowns by a factor of 2 relative to the constant element method without sacrificing accuracy. As a result of these improvements, the new solver makes the PB calculation truly feasible for large-scale biomolecular systems such as a 30S ribosome molecule even on a typical 2008 desktop computer.

  18. Adaptive Controls Method Demonstrated for the Active Suppression of Instabilities in Engine Combustors

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2004-01-01

    An adaptive feedback control method was demonstrated that suppresses thermoacoustic instabilities in a liquid-fueled combustor of a type used in aircraft engines. Extensive research has been done to develop lean-burning (low fuel-to-air ratio) combustors that can reduce emissions throughout the mission cycle to reduce the environmental impact of aerospace propulsion systems. However, these lean-burning combustors are susceptible to thermoacoustic instabilities (high-frequency pressure waves), which can fatigue combustor components and even the downstream turbine blades. This can significantly decrease the safe operating lives of the combustor and turbine. Thus, suppressing the thermoacoustic combustor instabilities is an enabling technology for lean, low-emissions combustors under NASA's Propulsion and Power Program. This control methodology has been developed and tested in a partnership of the NASA Glenn Research Center, Pratt & Whitney, United Technologies Research Center, and the Georgia Institute of Technology. Initial combustor rig testing of the controls algorithm was completed during 2002. Subsequently, the test results were analyzed and improvements to the method were incorporated in 2003, which culminated in the final status of this controls algorithm. This control methodology is based on adaptive phase shifting. The combustor pressure oscillations are sensed and phase shifted, and a high-frequency fuel valve is actuated to put pressure oscillations into the combustor to cancel pressure oscillations produced by the instability.
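
    The sense-shift-actuate signal path behind phase-shift control can be sketched as follows. This is only a toy stand-in for the adaptive controller above: the sampling rate, gain, 180 degree shift and synthetic pressure signal are assumptions, and the adaptive gain/phase update law from the rig tests is not shown.

    ```python
    # Minimal sketch of the sense -> phase-shift -> actuate signal path for instability control.
    import numpy as np

    def phase_shift_command(pressure, fs, gain, phase_deg):
        """Build a fuel-valve command by phase-shifting the dominant pressure tone."""
        n = len(pressure)
        spectrum = np.fft.rfft(pressure * np.hanning(n))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        k = np.argmax(np.abs(spectrum[1:])) + 1          # dominant (instability) bin, skipping DC
        f_dom = freqs[k]
        # Apply the requested phase shift as a time delay at the dominant frequency.
        delay_samples = int(round((phase_deg / 360.0) / f_dom * fs))
        shifted = np.roll(pressure, delay_samples)
        return gain * shifted, f_dom

    # Example: a 525 Hz tone buried in noise, sampled at 10 kHz.
    fs = 10_000
    t = np.arange(0, 0.2, 1 / fs)
    pressure = 0.8 * np.sin(2 * np.pi * 525 * t) + 0.1 * np.random.randn(t.size)
    cmd, f_dom = phase_shift_command(pressure, fs, gain=0.05, phase_deg=180.0)
    print(f"dominant frequency ~ {f_dom:.0f} Hz, command rms = {cmd.std():.3f}")
    ```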

  19. Numerical simulation of diffusion MRI signals using an adaptive time-stepping method.

    PubMed

    Li, Jing-Rebecca; Calhoun, Donna; Poupon, Cyril; Le Bihan, Denis

    2014-01-20

    The effect on the MRI signal of water diffusion in biological tissues in the presence of applied magnetic field gradient pulses can be modelled by a multiple compartment Bloch-Torrey partial differential equation. We present a method for the numerical solution of this equation by coupling a standard Cartesian spatial discretization with an adaptive time discretization. The time discretization is done using the explicit Runge-Kutta-Chebyshev method, which is more efficient than the forward Euler time discretization for diffusive-type problems. We use this approach to simulate the diffusion MRI signal from the extra-cylindrical compartment in a tissue model of the brain gray matter consisting of cylindrical and spherical cells and illustrate the effect of cell membrane permeability. PMID:24351275
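
    A minimal first-order Runge-Kutta-Chebyshev step for a 1D diffusion toy problem is sketched below to illustrate why the stabilized explicit scheme beats forward Euler for diffusive problems (the stage count grows only like the square root of dt times the spectral radius). It is not the authors' Bloch-Torrey solver; the damping parameter, grid and step size are illustrative.

    ```python
    # Minimal first-order Runge-Kutta-Chebyshev (stabilized explicit) sketch for 1D diffusion.
    import numpy as np

    def heat_rhs(u, dx, D=1.0):
        """Second-order central-difference Laplacian with zero-flux ends."""
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]
        lap[-1] = u[-2] - u[-1]
        return D * lap / dx**2

    def rkc1_step(u, dt, rhs, rho, eps=0.05):
        """One first-order RKC step; rho bounds the spectral radius of the RHS Jacobian."""
        s = max(2, int(np.ceil(np.sqrt(dt * rho / (2.0 - 4.0 * eps / 3.0)))) + 1)
        w0 = 1.0 + eps / s**2
        # Chebyshev values T_j(w0) and the derivative T_s'(w0) via three-term recursions.
        T = np.zeros(s + 1); dT = np.zeros(s + 1)
        T[0], T[1], dT[0], dT[1] = 1.0, w0, 0.0, 1.0
        for j in range(2, s + 1):
            T[j] = 2 * w0 * T[j - 1] - T[j - 2]
            dT[j] = 2 * T[j - 1] + 2 * w0 * dT[j - 1] - dT[j - 2]
        w1 = T[s] / dT[s]
        W_prev2, W_prev1 = u, u + (w1 / w0) * dt * rhs(u)      # stages W_0 and W_1
        for j in range(2, s + 1):
            mu, nu = 2 * w0 * T[j - 1] / T[j], -T[j - 2] / T[j]
            mu_t = 2 * w1 * T[j - 1] / T[j]
            W_prev2, W_prev1 = W_prev1, mu * W_prev1 + nu * W_prev2 + mu_t * dt * rhs(W_prev1)
        return W_prev1

    dx = 0.01
    x = np.arange(0.0, 1.0 + dx, dx)
    u = np.exp(-200.0 * (x - 0.5) ** 2)            # initial Gaussian "magnetization"
    for _ in range(20):
        u = rkc1_step(u, dt=5e-4, rhs=lambda v: heat_rhs(v, dx), rho=4.0 / dx**2)
    print("total mass (conserved by the zero-flux discretization):", u.sum() * dx)
    ```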

  20. Numerical simulation of diffusion MRI signals using an adaptive time-stepping method

    NASA Astrophysics Data System (ADS)

    Li, Jing-Rebecca; Calhoun, Donna; Poupon, Cyril; Le Bihan, Denis

    2014-01-01

    The effect on the MRI signal of water diffusion in biological tissues in the presence of applied magnetic field gradient pulses can be modelled by a multiple compartment Bloch-Torrey partial differential equation. We present a method for the numerical solution of this equation by coupling a standard Cartesian spatial discretization with an adaptive time discretization. The time discretization is done using the explicit Runge-Kutta-Chebyshev method, which is more efficient than the forward Euler time discretization for diffusive-type problems. We use this approach to simulate the diffusion MRI signal from the extra-cylindrical compartment in a tissue model of the brain gray matter consisting of cylindrical and spherical cells and illustrate the effect of cell membrane permeability.

  1. Adaptive neural network nonlinear control for BTT missile based on the differential geometry method

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Wang, Yongji; Xu, Jiangsheng

    2007-11-01

    A new nonlinear control strategy incorporating the differential geometry method with adaptive neural networks is presented for the nonlinear coupled system of a Bank-to-Turn missile in the reentry phase. The basic control law is designed using the differential geometry feedback linearization method, and online learning neural networks are used to compensate for system errors due to aerodynamic parameter errors and external disturbances, in view of the arbitrary nonlinear mapping and rapid online learning ability of multi-layer neural networks. The online weight and threshold tuning rules are deduced from the tracking error performance functions by the Levenberg-Marquardt algorithm, which makes the learning process faster and more stable. The six-degree-of-freedom simulation results show that the attitude angles can track the desired trajectory precisely. This means that the proposed strategy effectively enhances the stability, tracking performance and robustness of the control system.

  2. Grid coupling mechanism in the semi-implicit adaptive Multi-Level Multi-Domain method

    NASA Astrophysics Data System (ADS)

    Innocenti, M. E.; Tronci, C.; Markidis, S.; Lapenta, G.

    2016-05-01

    The Multi-Level Multi-Domain (MLMD) method is a semi-implicit adaptive method for Particle-In-Cell plasma simulations. It has been demonstrated in the past in simulations of Maxwellian plasmas, electrostatic and electromagnetic instabilities, plasma expansion in vacuum, and magnetic reconnection [1, 2, 3]. On multiple occasions, the coupling between the coarse and the refined grid solutions has been commented on. The coupling mechanism itself, however, has never been explored in depth. Here, we investigate the theoretical bases of grid coupling in the MLMD system. We obtain an evolution law for the electric field solution in the overlap area of the MLMD system which highlights a dependence on the densities and currents from both the coarse and the refined grid, rather than from the coarse grid alone: grid coupling is obtained via densities and currents.

  3. Numerical Relativistic Magnetohydrodynamics with ADER Discontinuous Galerkin methods on adaptively refined meshes.

    NASA Astrophysics Data System (ADS)

    Zanotti, O.; Dumbser, M.; Fambri, F.

    2016-05-01

    We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime Discontinuous Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.

  4. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.

  5. Adaptive ultrasonic imaging with the total focusing method for inspection of complex components immersed in water

    NASA Astrophysics Data System (ADS)

    Le Jeune, L.; Robert, S.; Dumas, P.; Membre, A.; Prada, C.

    2015-03-01

    In this paper, we propose an ultrasonic adaptive imaging method based on phased-array technology and the synthetic focusing algorithm known as the Total Focusing Method (TFM). The general principle is to image the surface by applying the TFM algorithm in a semi-infinite water medium. Then, the reconstructed surface is taken into account to make a second TFM image inside the component. In the surface reconstruction step, the TFM algorithm has been optimized to decrease computation time and to limit noise in water. In the second step, the ultrasonic paths through the reconstructed surface are calculated using Fermat's principle and an iterative algorithm, and the classical TFM is applied to obtain an image inside the component. This paper presents several results of TFM imaging in components of different geometries, and a result obtained with a new probe technology equipped with a flexible water-filled wedge (manufactured by Imasonic).
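
    The underlying delay-and-sum step of the Total Focusing Method, for a single homogeneous medium, can be sketched as below. The two-step surface-adaptive reconstruction and the Fermat-path computation through the reconstructed surface are not reproduced; the array geometry, sampling rate and synthetic point scatterer are assumptions.

    ```python
    # Minimal sketch of Total Focusing Method imaging (delay-and-sum over all tx/rx pairs).
    import numpy as np

    def tfm_image(fmc, element_x, grid_x, grid_z, c, fs):
        """fmc[tx, rx, t]: full matrix capture; returns an image over (grid_z, grid_x)."""
        n_el, _, n_t = fmc.shape
        image = np.zeros((grid_z.size, grid_x.size))
        zz, xx = np.meshgrid(grid_z, grid_x, indexing="ij")
        for tx in range(n_el):
            d_tx = np.hypot(xx - element_x[tx], zz)          # path transmitter -> pixel
            for rx in range(n_el):
                d_rx = np.hypot(xx - element_x[rx], zz)      # path pixel -> receiver
                idx = np.clip(np.round((d_tx + d_rx) / c * fs).astype(int), 0, n_t - 1)
                image += fmc[tx, rx, idx]
        return np.abs(image)

    # Tiny synthetic example: 8 elements, one point scatterer at (0, 20 mm) in water.
    c, fs = 1480.0, 50e6
    element_x = np.linspace(-3.5e-3, 3.5e-3, 8)
    n_t = 4000
    fmc = np.zeros((8, 8, n_t))
    for tx in range(8):
        for rx in range(8):
            tof = (np.hypot(element_x[tx], 20e-3) + np.hypot(element_x[rx], 20e-3)) / c
            fmc[tx, rx, int(tof * fs)] = 1.0
    img = tfm_image(fmc, element_x, np.linspace(-5e-3, 5e-3, 41), np.linspace(15e-3, 25e-3, 41), c, fs)
    print("peak pixel:", np.unravel_index(img.argmax(), img.shape))
    ```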

  6. Identifying minefields and verifying clearance: adapting statistical methods for UXO target detection

    NASA Astrophysics Data System (ADS)

    Gilbert, Richard O.; O'Brien, Robert F.; Wilson, John E.; Pulsipher, Brent A.; McKinstry, Craig A.

    2003-09-01

    It may not be feasible to completely survey large tracts of land suspected of containing minefields. It is desirable to develop a characterization protocol that will confidently identify minefields within these large land tracts if they exist. Naturally, surveying areas of greatest concern and most likely locations would be necessary but will not provide the needed confidence that an unknown minefield had not eluded detection. Once minefields are detected, methods are needed to bound the area that will require detailed mine detection surveys. The US Department of Defense Strategic Environmental Research and Development Program (SERDP) is sponsoring the development of statistical survey methods and tools for detecting potential UXO targets. These methods may be directly applicable to demining efforts. Statistical methods are employed to determine the optimal geophysical survey transect spacing to have confidence of detecting target areas of a critical size, shape, and anomaly density. Other methods under development determine the proportion of a land area that must be surveyed to confidently conclude that there are no UXO present. Adaptive sampling schemes are also being developed as an approach for bounding the target areas. These methods and tools will be presented and the status of relevant research in this area will be discussed.

  7. A Star Recognition Method Based on the Adaptive Ant Colony Algorithm for Star Sensors

    PubMed Central

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is solved as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98% while the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms. PMID:22294908

  8. An adaptive lattice Boltzmann method for predicting turbulent wake fields in wind parks

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2014-11-01

    Wind turbines create large-scale wake structures that can affect downstream turbines considerably. Numerical simulation of the turbulent flow field is a viable approach for obtaining a better understanding of these interactions and optimizing the turbine placement in wind parks. Yet, the development of effective computational methods for predictive wind farm simulation is challenging. As an alternative approach to presently employed vortex and actuator-based methods, we are currently developing a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that shows good potential for effective wind turbine wake prediction. Since the method is formulated in an Eulerian frame of reference and on a dynamically changing nonuniform Cartesian grid, even moving boundaries can be considered rather easily. The presentation will describe all crucial components of the numerical method and discuss first verification computations. Among other configurations, simulations of the wake fields created by multiple Vestas V27 turbines will be shown.
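
    The basic collide-and-stream structure of a lattice Boltzmann solver (here a single-block D2Q9 BGK sketch with periodic streaming) is shown below for orientation; the adaptive mesh refinement, large eddy closure and moving turbine boundaries of the method above are not reproduced, and the relaxation time and flow setup are illustrative.

    ```python
    # Minimal single-block D2Q9 lattice Boltzmann (BGK) sketch with periodic streaming.
    import numpy as np

    nx, ny, tau = 200, 80, 0.6
    # D2Q9 velocity set and weights.
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, ux, uy):
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    rho = np.ones((nx, ny))
    ux = np.full((nx, ny), 0.05)       # small uniform inflow-like velocity
    uy = np.zeros((nx, ny))
    f = equilibrium(rho, ux, uy)

    for step in range(100):
        # Collision: BGK relaxation toward the local equilibrium.
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += (equilibrium(rho, ux, uy) - f) / tau
        # Streaming: shift each population along its lattice velocity (periodic for brevity).
        for k in range(9):
            f[k] = np.roll(np.roll(f[k], c[k, 0], axis=0), c[k, 1], axis=1)

    print("mean density:", rho.mean(), " mean ux:", ux.mean())
    ```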

  9. The geometry of r-adaptive meshes generated using optimal transport methods

    NASA Astrophysics Data System (ADS)

    Budd, C. J.; Russell, R. D.; Walsh, E.

    2015-02-01

    The principles of mesh equidistribution and alignment play a fundamental role in the design of adaptive methods, and a metric tensor and mesh metric are useful theoretical tools for understanding a method's level of mesh alignment, or anisotropy. We consider a mesh redistribution method based on the Monge-Ampère equation which combines equidistribution of a given scalar density function with optimal transport. It does not involve explicit use of a metric tensor, although such a tensor must exist for the method, and an interesting question to ask is whether or not the alignment produced by the metric gives an anisotropic mesh. For model problems with a linear feature and with a radially symmetric feature, we derive the exact form of the metric, which involves expressions for its eigenvalues and eigenvectors. The eigenvectors are shown to be orthogonal and tangential to the feature, and the ratio of the eigenvalues (corresponding to the level of anisotropy) is shown to depend, both locally and globally, on the value of the density function and the amount of curvature. We thereby demonstrate how the optimal transport method produces an anisotropic mesh along a given feature while equidistributing a suitably chosen scalar density function. Numerical results are given to verify these results and to demonstrate how the analysis is useful for problems involving more complex features, including a non-trivial time-dependent nonlinear PDE which evolves narrow and curved reaction fronts.
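
    The equidistribution principle itself is easy to illustrate in one dimension: map a uniform computational mesh through the inverse of the cumulative monitor density so that every cell carries the same "mass". The sketch below does only this (de Boor-style); the optimal-transport/Monge-Ampère construction and anisotropy analysis of the paper are not reproduced, and the monitor function is hypothetical.

    ```python
    # Minimal 1D equidistribution sketch: each cell of the adapted mesh carries equal monitor mass.
    import numpy as np

    def equidistribute(x_uniform, density):
        """Map a uniform computational mesh to a physical mesh equidistributing `density`."""
        rho = density(x_uniform)
        cumulative = np.concatenate(
            ([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x_uniform))))
        cumulative /= cumulative[-1]                     # normalized "mass" coordinate in [0, 1]
        xi = np.linspace(0, 1, x_uniform.size)           # equally spaced mass fractions
        return np.interp(xi, cumulative, x_uniform)      # invert the cumulative map

    # Example: cluster points near a sharp feature at x = 0.3.
    density = lambda x: 1.0 + 50.0 * np.exp(-((x - 0.3) / 0.02) ** 2)
    x = np.linspace(0.0, 1.0, 41)
    x_adapted = equidistribute(x, density)
    print("smallest cell:", np.diff(x_adapted).min(), " largest cell:", np.diff(x_adapted).max())
    ```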

  10. Eulerian adaptive finite-difference method for high-velocity impact and penetration problems

    SciTech Connect

    Barton, Philip T.; Deiterding, Ralf; Meiron, Daniel I.; Pullin, Dale I

    2013-01-01

    Owing to the complex processes involved, faithful prediction of high-velocity impact events demands a simulation method delivering efficient calculations based on comprehensively formulated constitutive models. Such an approach is presented herein, employing a weighted essentially non-oscillatory (WENO) method within an adaptive mesh refinement (AMR) framework for the numerical solution of hyperbolic partial differential equations. Applied widely in computational fluid dynamics, these methods are well suited to the involved locally non-smooth finite deformations, circumventing any requirement for artificial viscosity functions for shock capturing. Application of the methods is facilitated through using a model of solid dynamics based upon hyper-elastic theory comprising kinematic evolution equations for the elastic distortion tensor. The model for finite inelastic deformations is phenomenologically equivalent to Maxwell's model of tangential stress relaxation. Closure relations tailored to the expected high-pressure states are proposed and calibrated for the materials of interest. Sharp interface resolution is achieved by employing level-set functions to track boundary motion, along with a ghost material method to capture the necessary internal boundary conditions for material interactions and stress-free surfaces. The approach is demonstrated for the simulation of high velocity impacts of steel projectiles on aluminium target plates in two and three dimensions.

  11. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is solved as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98% while the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms. PMID:22294908

  12. An adaptive cut-cell method for animal-locomotion fluid mechanics

    NASA Astrophysics Data System (ADS)

    Pederzani, Jean-Noel; Haj-Hariri, H.

    2011-11-01

    In this work we present a numerical method for solving the incompressible Navier-Stokes equation for biomimetic fluid-structure interaction problems. The method is designed to study the flow generated by interaction with arbitrarily complex motion of a self-propelling animal. We consider the specific case of a manta ray. The method combines the embedded-boundary (or cut-cell) method for complex geometry with moving boundaries, and block-structured adaptive mesh refinement (AMR). The control volumes are formed by the intersection of the irregular boundary with Cartesian grid cells. These control volumes fit naturally within parallelizable, disjoint-block data structures, and permit dynamic AMR coarsening and refinement as the simulation progresses. We present two- and three-dimensional results to illustrate the accuracy of the method. Results are compared with experimental results for a flapping elliptical fin that mimics the natural motion of a manta ray. In particular the hydrodynamic signature of the vortex structure behind the fin is studied for its effect on swimming performance.

  13. The image adaptive method for solder paste 3D measurement system

    NASA Astrophysics Data System (ADS)

    Xiaohui, Li; Changku, Sun; Peng, Wang

    2015-03-01

    The extensive application of Surface Mount Technology (SMT) requires various measurement methods to evaluate the circuit board. A solder paste 3D measurement system utilizing laser light projected on the printed circuit board (PCB) surface is one of the critical methods. Local oversaturation, arising from the inconsistent reflectivity of the PCB surface, leads to inaccurate measurements. The paper reports a novel adaptive optical imaging method for remedying local oversaturation in solder paste measurement. A liquid crystal on silicon (LCoS) device and an image sensor (CCD or CMOS) are combined as the high dynamic range image (HDRI) acquisition system. The significant characteristic of the new method is that the image after adjustment is captured by the specially designed HDRI acquisition system programmed by the LCoS mask. The formation of the LCoS mask, depending on an HDRI combined with the image fusion algorithm, is based on separating the laser light from the locally oversaturated region. Experimental results demonstrate that the method can significantly improve the accuracy of the solder paste 3D measurement system under local oversaturation.

  14. Simulation of traffic flow and control using conventional, fuzzy, and adaptive methods

    SciTech Connect

    Bisset, K.R.; Kelsey, R.L.

    1992-01-01

    This paper describes the graphical simulation of a traffic environment. The environment includes streets leading to an intersection, the intersection, vehicle traffic, and signal lights in the intersection controlled by different methods. The simulation allows for the study of parameters affecting traffic environments and the study of different control strategies for traffic signal lights, including conventional, fuzzy, and adaptive control methods. Realistic traffic environments are simulated, including a cross intersection with one or more lanes of traffic in each direction, with and without turn lanes. Vehicle traffic patterns are a mixture of cars going straight and making right or left turns. The free velocities of vehicles follow a normal distribution with a mean of the "posted" speed limit. Actual velocities depend on such factors as the proximity and velocity of surrounding traffic, approaches to intersections, and human response time. The simulation proves to be a useful tool for evaluating controller methods. Preliminary results show that larger quantities of traffic are "handled" by fuzzy control methods than by conventional control methods. Also, the average time spent waiting in traffic decreases with the use of fuzzy control versus conventional control.
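
    A minimal fuzzy rule for extending the green phase, of the general kind compared in the study, is sketched below: triangular memberships, two Mamdani-style rules and centroid defuzzification. The membership ranges, rules and universe of discourse are illustrative assumptions, not the paper's controller.

    ```python
    # Minimal sketch of a fuzzy green-time extension rule (illustrative only).
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership with peak at b (a < b < c)."""
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def green_extension(queue_len, arrival_rate):
        """Map queue length [veh] and arrival rate [veh/s] to a green-time extension [s]."""
        # Input memberships (shoulders approximated by placing the peak at the range edge).
        q_low, q_high = tri(queue_len, -1, 0, 10), tri(queue_len, 5, 15, 16)
        a_low, a_high = tri(arrival_rate, -0.1, 0.0, 0.5), tri(arrival_rate, 0.2, 1.0, 1.1)
        # Two rules: heavy traffic -> long extension, light traffic -> short extension.
        fire_long = max(q_high, a_high)
        fire_short = min(q_low, a_low)
        # Aggregate the clipped output sets on a 0-20 s universe, then defuzzify by centroid.
        ext = np.linspace(0.0, 20.0, 201)
        short_set = np.array([min(fire_short, tri(e, -1, 0, 8)) for e in ext])
        long_set = np.array([min(fire_long, tri(e, 5, 20, 21)) for e in ext])
        agg = np.maximum(short_set, long_set)
        return float((ext * agg).sum() / (agg.sum() + 1e-12))

    print("extension for queue=12, arrivals=0.8 veh/s:", round(green_extension(12, 0.8), 1), "s")
    ```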

  15. Simulation of traffic flow and control using conventional, fuzzy, and adaptive methods

    SciTech Connect

    Bisset, K.R.; Kelsey, R.L.

    1992-06-01

    This paper describes the graphical simulation of a traffic environment. The environment includes streets leading to an intersection, the intersection, vehicle traffic, and signal lights in the intersection controlled by different methods. The simulation allows for the study of parameters affecting traffic environments and the study of different control strategies for traffic signal lights, including conventional, fuzzy, and adaptive control methods. Realistic traffic environments are simulated, including a cross intersection with one or more lanes of traffic in each direction, with and without turn lanes. Vehicle traffic patterns are a mixture of cars going straight and making right or left turns. The free velocities of vehicles follow a normal distribution with a mean of the "posted" speed limit. Actual velocities depend on such factors as the proximity and velocity of surrounding traffic, approaches to intersections, and human response time. The simulation proves to be a useful tool for evaluating controller methods. Preliminary results show that larger quantities of traffic are "handled" by fuzzy control methods than by conventional control methods. Also, the average time spent waiting in traffic decreases with the use of fuzzy control versus conventional control.

  16. TU-C-17A-07: FusionARC Treatment with Adaptive Beam Selection Method

    SciTech Connect

    Kim, H; Li, R; Xing, L; Lee, R

    2014-06-15

    Purpose: Recently, a new treatment scheme, FusionARC, has been introduced to compensate for the pitfalls of single-arc VMAT planning. It basically allows for static field treatment in selected locations, while the remainder is treated by single-rotational arc delivery. The important issue is how to choose the directions for static field treatment. This study presents an adaptive beam selection method to formulate the FusionARC treatment scheme. Methods: The optimal plan for single-rotational arc treatment is obtained from a two-step approach based on reweighted total-variation (TV) minimization. To choose the directions for static field treatment with extra segments, a value of our proposed cost function at each field is computed on the new fluence-map, which adds an extra segment to the designated field location only. The cost function is defined as a summation of the equivalent uniform dose (EUD) of all structures with the fluence-map, while assuming that a lower cost function value implies enhanced plan quality. Finally, the extra segments for static field treatment would be added to the selected directions with low cost function values. Data from a prostate patient were used and evaluated with three different plans: conventional VMAT, FusionARC, and static IMRT. Results: The 7 field locations corresponding to the lowest cost function values are chosen for the insertion of extra segments for step-and-shoot dose delivery. Our proposed FusionARC plan with the selected angles improves dose sparing of the critical organs relative to the static IMRT and conventional VMAT plans. The dose conformity to the target is significantly enhanced at a small expense of treatment time compared with the VMAT plan. Its estimated treatment time, however, is still much shorter than that of IMRT. Conclusion: The FusionARC treatment with the adaptive beam selection method could improve plan quality with a negligible increase in treatment time relative to conventional VMAT.
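
    The equivalent uniform dose and a beam-scoring cost of the kind described (a signed sum of per-structure EUDs, lower being better) can be sketched as below. The volume-effect parameters, the sign treatment of the target term and the sampled dose distributions are illustrative assumptions, not the study's clinical values.

    ```python
    # Minimal sketch of a generalized-EUD-based beam-scoring cost (illustrative parameters).
    import numpy as np

    def eud(dose_voxels, a):
        """Generalized EUD: (mean of d^a)^(1/a). Large positive a behaves like max dose
        (serial organs), a = 1 gives mean dose, negative a behaves like min dose (targets)."""
        d = np.asarray(dose_voxels, dtype=float)
        return (np.mean(d ** a)) ** (1.0 / a)

    def beam_cost(dose_by_structure, a_by_structure):
        """Lower is better: add critical-organ EUDs, subtract the target EUD so coverage helps."""
        cost = 0.0
        for name, dose in dose_by_structure.items():
            sign = -1.0 if name == "target" else 1.0
            cost += sign * eud(dose, a_by_structure[name])
        return cost

    # Hypothetical per-structure dose samples (Gy) for one candidate static-field direction.
    doses = {"target": np.random.normal(78, 1.5, 1000),
             "rectum": np.abs(np.random.normal(35, 10, 1000)),
             "bladder": np.abs(np.random.normal(30, 12, 1000))}
    a = {"target": -10, "rectum": 8, "bladder": 6}
    print("cost for this field direction:", round(beam_cost(doses, a), 2))
    ```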

  17. Adaptive spacetime method using Riemann jump conditions for coupled atomistic-continuum dynamics

    NASA Astrophysics Data System (ADS)

    Kraczek, B.; Miller, S. T.; Haber, R. B.; Johnson, D. D.

    2010-03-01

    We combine the Spacetime Discontinuous Galerkin (SDG) method for elastodynamics with the mathematically consistent Atomistic Discontinuous Galerkin (ADG) method in a new scheme that concurrently couples continuum and atomistic models of dynamic response in solids. The formulation couples non-overlapping continuum and atomistic models across sharp interfaces by weakly enforcing jump conditions, for both momentum balance and kinematic compatibility, using Riemann values to preserve the characteristic structure of the underlying hyperbolic system. Momentum balances to within machine-precision accuracy over every element, on each atom, and over the coupled system, with small, controllable energy dissipation in the continuum region that ensures numerical stability. When implemented on suitable unstructured spacetime grids, the continuum SDG model offers linear computational complexity in the number of elements and powerful adaptive analysis capabilities that readily bridge between atomic and continuum scales in both space and time. A special trace operator for the atomic velocities and an associated atomistic traction field enter the jump conditions at the coupling interface. The trace operator depends on parameters that specify, at the scale of the atomic spacing, the position of the coupling interface relative to the atoms. In a key finding, we demonstrate that optimizing these parameters suppresses spurious reflections at the coupling interface without the use of non-physical damping or special boundary conditions. We formulate the implicit SDG-ADG coupling scheme in up to three spatial dimensions, and describe an efficient iterative solution scheme that outperforms common explicit schemes, such as the Velocity Verlet integrator. Numerical examples, in 1d×time and employing both linear and nonlinear potentials, demonstrate the performance of the SDG-ADG method and show how adaptive spacetime meshing reconciles disparate time steps and resolves atomic-scale signals

  18. An Adaptive Finite Difference Method for Hyperbolic Systems in One Space Dimension

    SciTech Connect

    Bolstad, John H.

    1982-06-01

    Many problems of physical interest have solutions which are generally quite smooth in a large portion of the region of interest, but have local phenomena such as shocks, discontinuities or large gradients which require much more accurate approximations or finer grids for reasonable accuracy. Examples are atmospheric fronts, ocean currents, and geological discontinuities. In this thesis we develop and partially analyze an adaptive finite difference mesh refinement algorithm for the initial boundary value problem for hyperbolic systems in one space dimension. The method uses clusters of uniform grids which can ''move'' along with pulses or steep gradients appearing in the calculation, and which are superimposed over a uniform coarse grid. Such refinements are created, destroyed, merged, separated, recursively nested or moved based on estimates of the local truncation error. We use a four-way linked tree and sequentially allocated deques (double-ended queues) to perform these operations efficiently. The local truncation error in the interior of the region is estimated using a three-step Richardson extrapolation procedure, which can also be considered a deferred correction method. At the boundaries we employ differences to estimate the error. Our algorithm was implemented using a portable, extensible Fortran preprocessor, to which we added records and pointers. The method is applied to three model problems: the first order wave equation, the second order wave equation, and the inviscid Burgers equation. For the first two model problems our algorithm is shown to be three to five times more efficient (in computing time) than the use of a uniform coarse mesh, for the same accuracy. Furthermore, to our knowledge, our algorithm is the only one which adaptively treats time-dependent boundary conditions for hyperbolic systems.

  19. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
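
    The space-filling-curve idea behind the on-the-fly partitioner can be sketched with a Morton (Z-order) key: order the cells along the curve, then cut the one-dimensional ordering into equal-work chunks. The solver's actual curve construction and work weights are not reproduced; the 16x16 cell block and four-way split below are illustrative.

    ```python
    # Minimal sketch of space-filling-curve (Z-order) domain decomposition.
    import numpy as np

    def morton_key(i, j, bits=16):
        """Interleave the bits of integer cell coordinates (i, j) into a Z-order key."""
        key = 0
        for b in range(bits):
            key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
        return key

    def partition_cells(cells, n_parts):
        """cells: list of (i, j) index pairs -> list of per-processor cell lists."""
        ordered = sorted(cells, key=lambda ij: morton_key(*ij))
        chunks = np.array_split(np.arange(len(ordered)), n_parts)
        return [[ordered[k] for k in chunk] for chunk in chunks]

    # Example: a 16x16 block of cells split among 4 processors.
    cells = [(i, j) for i in range(16) for j in range(16)]
    parts = partition_cells(cells, 4)
    print([len(p) for p in parts], "cells per processor; first cell of part 1:", parts[1][0])
    ```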

  20. Stochastic nonlinear aeroelastic analysis of a supersonic lifting surface using an adaptive spectral method

    NASA Astrophysics Data System (ADS)

    Chassaing, J.-C.; Lucor, D.; Trégon, J.

    2012-01-01

    An adaptive stochastic spectral projection method is deployed for the uncertainty quantification in limit-cycle oscillations of an elastically mounted two-dimensional lifting surface in a supersonic flow field. Variabilities in the structural parameters are propagated in the aeroelastic system which accounts for nonlinear restoring force and moment by means of hardening cubic springs. The physical nonlinearities promote sharp and sudden flutter onset for small change of the reduced velocity. In a stochastic context, this behavior translates to steep solution gradients developing in the parametric space. A remedy is to expand the stochastic response of the airfoil on a piecewise generalized polynomial chaos basis. Accurate approximation and affordable computational costs are obtained using sensitivity-based adaptivity for various types of supersonic stochastic responses depending on the selected values of the Mach number on the bifurcation map. Sensitivity analysis via Sobol' indices shows how the probability density function of the peak pitch amplitude responds to combined uncertainties: e.g. the elastic axis location, torsional stiffness and flap angle. We believe that this work demonstrates the capability and flexibility of the approach for more reliable predictions of realistic aeroelastic systems subject to a moderate number of uncertainties.
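
    Non-intrusive spectral projection onto a polynomial chaos basis can be sketched for a single Gaussian parameter as below (probabilists' Hermite polynomials and Gauss-Hermite quadrature). The adaptive piecewise (multi-element) basis and the Sobol sensitivity analysis of the study are not reproduced, and the toy response function is an assumption.

    ```python
    # Minimal sketch of non-intrusive polynomial chaos projection for one Gaussian parameter.
    import numpy as np
    from numpy.polynomial import hermite_e as H
    from math import factorial, sqrt, pi

    def pce_coefficients(model, order, n_quad=40):
        """Project model(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_0..He_order."""
        x, w = H.hermegauss(n_quad)             # Gauss-Hermite nodes/weights, weight exp(-x^2/2)
        fx = model(x)
        coeffs = []
        for n in range(order + 1):
            basis = np.zeros(n + 1); basis[n] = 1.0
            He_n = H.hermeval(x, basis)
            # c_n = E[f(X) He_n(X)] / n!, with E[.] approximated by the quadrature rule.
            coeffs.append(np.sum(w * fx * He_n) / (sqrt(2 * pi) * factorial(n)))
        return np.array(coeffs)

    # Toy nonlinear "response" of one uncertain stiffness-like parameter.
    model = lambda xi: np.tanh(1.0 + 0.4 * xi) ** 3
    c = pce_coefficients(model, order=6)
    mean = c[0]
    variance = sum(factorial(n) * c[n] ** 2 for n in range(1, 7))
    print("PCE mean:", round(mean, 4), " PCE variance:", round(variance, 5))
    ```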

  1. Adaptive model-based control systems and methods for controlling a gas turbine

    NASA Technical Reports Server (NTRS)

    Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)

    2004-01-01

    Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all the information about the engine condition, and state and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).

  2. An adaptive 6-DOF tracking method by hybrid sensing for ultrasonic endoscopes.

    PubMed

    Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin

    2014-01-01

    In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is also developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer, and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine the orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to gain an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. Subsequently, the average position error is under 0.3 cm with reasonable settings, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be controlled to less than 3.5°. PMID:24915179

  3. An Adaptive 6-DOF Tracking Method by Hybrid Sensing for Ultrasonic Endoscopes

    PubMed Central

    Du, Chengyang; Chen, Xiaodong; Wang, Yi; Li, Junwei; Yu, Daoyin

    2014-01-01

    In this paper, a novel hybrid sensing method for tracking an ultrasonic endoscope within the gastrointestinal (GI) tract is presented, and a prototype of the tracking system is also developed. We implement 6-DOF localization by sensing integration and information fusion. On the hardware level, a tri-axis gyroscope and accelerometer, and a magnetic angular rate and gravity (MARG) sensor array are attached at the end of the endoscope, and three symmetric cylindrical coils are placed around the patient's abdomen. On the algorithm level, an adaptive fast quaternion convergence (AFQC) algorithm is introduced to determine the orientation by fusing inertial/magnetic measurements, in which the effects of magnetic disturbance and acceleration are estimated to gain an adaptive convergence output. A simplified electro-magnetic tracking (SEMT) algorithm for three-dimensional position is also implemented, which can easily integrate the AFQC's results and magnetic measurements. Subsequently, the average position error is under 0.3 cm with reasonable settings, and the average orientation error is 1° without noise. If magnetic disturbance or acceleration exists, the average orientation error can be controlled to less than 3.5°. PMID:24915179

  4. A Fast Variational Method for the Construction of Resolution Adaptive C(2)-Smooth Molecular Surfaces.

    PubMed

    Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin

    2009-05-01

    We present a variational approach to smooth molecular (proteins, nucleic acids) surface constructions, starting from atomic coordinates, as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations traditionally used in understanding protein and nucleic-acid folding processes are based on molecular force fields, and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general form second order geometric partial differential equation in the level-set formulation, by minimizing a first order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C(2) interpolation algorithm is also utilized. A narrow band, tri-cubic B-spline level-set method is then used to provide C(2) smooth and resolution adaptive molecular surfaces. PMID:19802355

  5. Q-Learning: A Data Analysis Method for Constructing Adaptive Interventions

    ERIC Educational Resources Information Center

    Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Gregory A.; Waxmonsky, James G.; Yu, Jihnhee; Murphy, Susan A.

    2012-01-01

    Increasing interest in individualizing and adapting intervention services over time has led to the development of adaptive interventions. Adaptive interventions operationalize the individualization of a sequence of intervention options over time via the use of decision rules that input participant information and output intervention…
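
    The Q-learning update itself is compact enough to sketch on a toy two-stage decision problem, as below. The simulated environment, state/action sizes and learning parameters are illustrative assumptions; the article estimates Q-functions from trial data rather than from a simulator.

    ```python
    # Minimal tabular Q-learning sketch on a toy decision problem (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 3, 2            # e.g. 3 response categories, 2 intervention options
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.9, 0.2

    def simulate(state, action):
        """Hypothetical environment: action 1 works better in state 2, action 0 elsewhere."""
        reward = 1.0 if (action == 1) == (state == 2) else 0.2
        next_state = rng.integers(n_states)
        return reward + 0.1 * rng.standard_normal(), next_state

    state = 0
    for step in range(5000):
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        reward, next_state = simulate(state, action)
        # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a').
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

    print("greedy action per state:", Q.argmax(axis=1))   # expect [0, 0, 1] for this toy rule
    ```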

  6. Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions

    ERIC Educational Resources Information Center

    Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Gregory A.; Waxmonsky, James G.; Yu, Jihnhee; Murphy, Susan A.

    2012-01-01

    In recent years, research in the area of intervention development has been shifting from the traditional fixed-intervention approach to "adaptive interventions," which allow greater individualization and adaptation of intervention options (i.e., intervention type and/or dosage) over time. Adaptive interventions are operationalized via a sequence…

  7. Comparing Methods of Assessing Differential Item Functioning in a Computerized Adaptive Testing Environment

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Chen, Shu-Ying; Yu, Lan

    2006-01-01

    Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional…

  8. Pre-Service Chemistry Teachers' Beliefs about Teaching and Their Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Oskay, Ozge Ozyalcin; Erdem, Emine; Yilmaz, Ayhan

    2009-01-01

    In this study the relationship between pre-service chemistry teachers' beliefs about teaching and their pedagogical content knowledge were investigated. The sample of the study consists of 99 pre-service chemistry teachers attending Hacettepe University, Faculty of Education. As data collection tools the adapted form of "Beliefs About Teaching…

  9. Examining Chemistry Teachers' Use of Curriculum Materials: In View of Teachers' Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Chen, Bo; Wei, Bing

    2015-01-01

    This paper aimed to explore how pedagogical content knowledge (PCK) of teachers influenced their adaptations of the curriculum materials of the new senior secondary chemistry curriculum, a standards-based science curriculum, in China. This study was based on the premise that the interaction of the teacher with the curriculum materials determines…

  10. New Literacies: A Pedagogical Framework for Reading Virtual Worlds--A Journey into "Barbiegirls.com"

    ERIC Educational Resources Information Center

    Connelly, Jan

    2011-01-01

    As the tectonic plates of technology shift across human networks, dedicated and determined educators understand that the integration of digital mediated texts and the new literacies competencies they engender, amount to little without pedagogical ingenuity, innovative adaptation, and creative application. This article is a response to the rapidly…

  11. Pedagogical Applications of Social Media in Business Education: Student and Faculty Perspectives

    ERIC Educational Resources Information Center

    Piotrowski, Chris

    2015-01-01

    There has been wide academic and research interest in the application of social media modalities, as pedagogical tools, in higher education. Recent research indicates that business-related topics are a major focus of study on this emerging educational issue. Yet a systematic review of outcome studies regarding instructional Web 2.0 adaptations in…

  12. Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice

    SciTech Connect

    Martin, Rachael; Pan, Tinsu; Rubinstein, Ashley; Court, Laurence; Ahmad, Moiz

    2015-01-15

    Purpose: 4D CT imaging in mice is important in a variety of areas including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans, however only a few have been utilized in a preclinical setting and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and make adaptions of several successful methods developed for humans for an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods compared to a manual method considered to be ground truth were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0

  13. An efficient contents-adaptive backlight control method for mobile devices

    NASA Astrophysics Data System (ADS)

    Chen, Qiao Song; Yan, Ya Xing; Zhang, Xiao Mou; Cai, Hua; Deng, Xin; Wang, Jin

    2015-03-01

    For most mobile devices with a large screen, image quality and power consumption are both major factors affecting consumers' preference. A contents-adaptive backlight control (CABC) method can be utilized to adjust the backlight and improve the performance of mobile devices. Unlike previous works, which mostly focus on the reduction of power consumption, both image quality and power consumption are taken into account in the proposed method. Firstly, a region of interest (ROI) is detected to divide the image into two parts: ROI and non-ROI. Then, three attributes including entropy, luminance, and saturation information in the ROI are calculated. To achieve high perceived image quality on mobile devices, the optimal value of the backlight can be calculated by a linear combination of the aforementioned attributes. Coefficients of the linear combination are determined by applying linear regression to the subjective scores of human visual experiments and the objective values of the attributes. Based on the optimal value of the backlight, the displayed image data are brightened and the backlight is dimmed to reduce the power consumption of the backlight. Here, the ratios of increasing the image data and decreasing the backlight depend on the luminance information of the displayed image. The proposed method is also implemented in hardware. Experimental results indicate that the proposed technique exhibits better performance compared to conventional methods.
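
    The attribute-scoring and compensation steps of a contents-adaptive backlight scheme can be sketched as below. The linear-combination coefficients, the clipping range and the synthetic frame are illustrative assumptions; the ROI weighting and regression-fitted coefficients of the proposed method are not reproduced.

    ```python
    # Minimal sketch of contents-adaptive backlight scaling with pixel compensation.
    import numpy as np

    def frame_attributes(rgb):
        """rgb: HxWx3 float image in [0, 1] -> (entropy, mean luminance, mean saturation)."""
        luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        hist, _ = np.histogram(luma, bins=64, range=(0, 1))
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0])) / 6.0    # normalized to [0, 1]
        sat = rgb.max(axis=-1) - rgb.min(axis=-1)
        return entropy, luma.mean(), sat.mean()

    def adapt_backlight(rgb, coeffs=(0.3, 0.5, 0.2)):
        """Return (compensated image, backlight level in [0, 1])."""
        e, l, s = frame_attributes(rgb)
        backlight = np.clip(coeffs[0] * e + coeffs[1] * l + coeffs[2] * s + 0.2, 0.3, 1.0)
        compensated = np.clip(rgb / backlight, 0.0, 1.0)         # brighten data, dim backlight
        return compensated, float(backlight)

    # Example on a synthetic dark frame.
    frame = np.clip(0.25 + 0.05 * np.random.randn(120, 160, 3), 0, 1)
    out, bl = adapt_backlight(frame)
    print("backlight level:", round(bl, 2), " mean output value:", round(out.mean(), 2))
    ```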

  14. New adaptive methods for sensing of chemical components and biological agents

    NASA Astrophysics Data System (ADS)

    Yatsenko, Vitaliy A.; Chiarini, Bruno H.; Pardalos, Panos M.

    2004-02-01

    It is known that leaf reflectance spectra can be used to estimate the contents of chemical components in vegetation. Recent novel applications include the detection of harmful biological agents that can originate from agricultural bioterrorism attacks. Such attacks have been identified as a major threat to the United States' agriculture. Nevertheless, the usefulness of such an approach is currently limited by distorting factors, in particular soil reflectance. The quantitative analysis of the spectral curves from the reflection of plant leaves may be the basis for the development of new methods for interpreting the data obtained by the remote measurement of plants. We consider the problem of characterizing the chemical composition from noisy spectral data using an experimental optical method. Using our experience in signal processing and optimization of complex systems, we propose a new mathematical model for sensing of chemical components in vegetation. Estimates are defined as minimizers of penalized cost functionals with sequential quadratic programming (SQP) methods. A deviation measure used in risk analysis is also considered. This framework is demonstrated for different agricultural plants using adaptive filtration, principal components analysis, and optimization techniques for classification of the spectral curves of chemical components. Various estimation problems will be considered to illustrate the computational aspects of the proposed method.

  15. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online. PMID:25494507

  16. An Adaptive and Implicit Immersed Boundary Method for Cardiovascular Device Modeling

    NASA Astrophysics Data System (ADS)

    Bhalla, Amneet Pal S.; Griffith, Boyce E.

    2015-11-01

    Computer models and numerical simulations are playing an increasingly important role in understanding the mechanics of fluid-structure interactions (FSI) in cardiovascular devices. To model cardiac devices realistically, there is a need to solve the classical fluid-structure interaction equations efficiently. Peskin's explicit immersed boundary method is one such approach to model FSI equations for elastic structures efficiently. However, in the presence of rigid structures the IB method faces a severe timestep restriction. To overcome this limitation, we are developing an implicit version of immersed boundary method on adaptive Cartesian grids. Higher grid resolution is employed in spatial regions occupying the structure while relatively coarser discretization is used elsewhere. The resulting discrete system is solved using geometric multigrid solver for the combined Stokes and elasticity operators. We use a rediscretization approach for standard finite difference approximations to the divergence, gradient, and viscous stress. In contrast, coarse grid versions of the Eulerian elasticity operator are constructed via a Galerkin approach. The implicit IB method is tested for a pulse duplicator cardiac device system that consists of both rigid mountings and elastic membrane.

  17. Higher-order schemes with CIP method and adaptive Soroban grid towards mesh-free scheme

    NASA Astrophysics Data System (ADS)

    Yabe, Takashi; Mizoe, Hiroki; Takizawa, Kenji; Moriki, Hiroshi; Im, Hyo-Nam; Ogata, Youichi

    2004-02-01

    A new class of body-fitted grid system that can keep third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and grid points moving along these lines like an abacus - Soroban in Japanese. The length of each line and the number of grid points in each line can be different. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on locally refined meshes are easily performed. Mesh generation and the search for upstream departure points are very simple, and an almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.

  18. A Space-Time Adaptive Method for Simulating Complex Cardiac Dynamics

    NASA Astrophysics Data System (ADS)

    Cherry, E. M.; Greenside, H. S.; Henriquez, C. S.

    2000-03-01

    A new space-time adaptive mesh refinement algorithm (AMRA) is presented and analyzed which, by automatically adding and deleting local patches of higher-resolution Cartesian meshes, can simulate quantitatively accurate models of cardiac electrical dynamics efficiently in large domains. We find in two space dimensions that the AMRA is able to achieve a factor of 5 speedup and a factor of 5 reduction in memory while achieving the same accuracy compared to a code based on a uniform space-time mesh at the highest resolution of the AMRA method. We summarize applications of the code to the Luo-Rudy 1 cardiac model in large two- and three-dimensional domains and discuss the implications of our results for understanding the initiation of arrhythmias.

  19. Practical improvements of multi-grid iteration for adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Miyashita, Hisashi; Yamada, Yoshiyuki

    2005-03-01

    Adaptive mesh refinement (AMR) is a powerful tool for efficiently solving multi-scale problems. However, the vanilla AMR method has a well-known critical drawback: it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for non-local problems such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique in AMR to realistic problems under complicated mesh layouts, because it does not converge or requires too many iterations even if it does converge. To cope with this problem, when updating the next approximation in the MGI process, we calculate total corrections that are accurate relative to the current residual by introducing a new iteration for this total correction. This procedure greatly accelerates MGI convergence, especially under complicated mesh layouts.
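
    For reference, a minimal 1D Poisson V-cycle (weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation) is sketched below; the MLMD/AMR-specific correction iteration discussed above is not shown, and the model problem and cycle counts are illustrative.

    ```python
    # Minimal 1D Poisson multigrid V-cycle sketch (single uniform grid hierarchy).
    import numpy as np

    def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
        """Weighted-Jacobi smoothing for u'' = f with homogeneous Dirichlet ends."""
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
        return r

    def v_cycle(u, f, h):
        u = smooth(u, f, h)
        if u.size <= 3:
            return u
        r = residual(u, f, h)
        rc = np.zeros((u.size + 1) // 2)                           # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
        e = np.zeros_like(u)                                       # prolongation: copy + interpolate
        e[::2] = ec
        e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f, h)

    # Model problem: u'' = f on [0, 1], u(0) = u(1) = 0, with f chosen so u = sin(pi x).
    n = 129
    x = np.linspace(0.0, 1.0, n)
    f = -np.pi**2 * np.sin(np.pi * x)
    u = np.zeros(n)
    for _ in range(15):
        u = v_cycle(u, f, 1.0 / (n - 1))
    print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
    ```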

  20. Adaptive Forward Modeling Method for Analysis and Reconstructions of Orientation Image Map

    Energy Science and Technology Software Center (ESTSC)

    2014-06-01

    IceNine is an MPI-parallel orientation reconstruction and microstructure analysis code. Its primary purpose is to reconstruct a spatially resolved orientation map given a set of diffraction images from a high energy x-ray diffraction microscopy (HEDM) experiment (1). In particular, IceNine implements the adaptive version of the forward modeling method (2, 3). Part of IceNine is a library used for combined analysis of the microstructure with the experimentally measured diffraction signal. The library is also designed for rapid prototyping of new reconstruction and analysis algorithms. IceNine is also built with a simulator of diffraction images from an input microstructure.

  1. Innovative Adaptive Control Method Demonstrated for Active Suppression of Instabilities in Engine Combustors

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    2005-01-01

    This year, an improved adaptive-feedback control method was demonstrated that suppresses thermoacoustic instabilities in a liquid-fueled combustor of a type used in aircraft engines. Extensive research has been done to develop lean-burning (low fuel-to-air ratio) combustors that can reduce emissions throughout the mission cycle to reduce the environmental impact of aerospace propulsion systems. However, these lean-burning combustors are susceptible to thermoacoustic instabilities (high-frequency pressure waves), which can fatigue combustor components and even downstream turbine blades. This can significantly decrease the safe operating life of the combustor and turbine. Thus, suppressing the thermoacoustic combustor instabilities is an enabling technology for meeting the low-emission goals of the NASA Ultra-Efficient Engine Technology (UEET) Project.

  2. A Review on Effectiveness and Adaptability of the Design-Build Method

    NASA Astrophysics Data System (ADS)

    Kudo, Masataka; Miyatake, Ichiro; Baba, Kazuhito; Yokoi, Hiroyuki; Fueta, Toshiharu

    In the Ministry of Land, Infrastructure, Transport and Tourism (MLIT), various approaches have been taken for efficient implementation of public works projects, one of which is the ongoing use of the design-build method on a trial basis, as a means to utilize the technical skills and knowledge of private companies. In 2005, MLIT further introduced the advanced technical proposal type, a kind of comprehensive evaluation method, as part of its efforts to improve tendering and contracting systems. Meanwhile, although the positive effect of the design-build method has been reported, it has not been widely published, which may be one of the reasons that the number of MLIT projects using the design-build method is declining year by year. In this context, this paper discusses the results of a study concerning the extent of flexibility allowed for the process and design (proposal) of public works projects, and follow-up surveys of actual test case projects, conducted as basic research to examine measures to expand and promote the use of the design-build method. The study objects were selected from tunnel construction projects using the shield tunneling method to develop common utility ducts, and bridge construction projects ordering construction of superstructure and substructure work in a single contract. In presenting the results of the studies, the structures and the temporary installations were examined separately, and the effectiveness and adaptability of the design-build method was discussed for each.

  3. What Is Technological Pedagogical Content Knowledge (TPACK)?

    ERIC Educational Resources Information Center

    Koehler, Mathew J.; Mishra, Punya; Cain, William

    2013-01-01

    This paper describes TPACK, technological pedagogical content knowledge (originally TPCK), a teacher knowledge framework for technology integration that builds on Lee S. Shulman's construct of pedagogical content knowledge to include technology knowledge. The paper begins with a brief introduction to the complex, ill-structured nature of teaching.…

  4. The District Social-Pedagogical Complex.

    ERIC Educational Resources Information Center

    Lebedev, O. E.; Fedorets, N. A.

    1992-01-01

    Explores the role of the regional organizational-pedagogical system in the former Soviet Union. Suggests that the organization of social interaction can be addressed on the district level. Discusses a project in the Dnestrovskii district in which a social pedagogical complex has been organized to strengthen schools' material base, develop…

  5. Rx for Pedagogical Correctness: Professional Correctness.

    ERIC Educational Resources Information Center

    Lasley, Thomas J.

    1993-01-01

    Describes the difficulties caused by educators holding to a view of teaching that assumes that there is one "pedagogically correct" way of running a classroom. Provides three examples of harmful pedagogical correctness ("untracked" classes, cooperative learning, and testing and test-wiseness). Argues that such dogmatic views of education limit…

  6. What Is Technological Pedagogical Content Knowledge?

    ERIC Educational Resources Information Center

    Koehler, Matthew J.; Mishra, Punya

    2009-01-01

    This paper describes a framework for teacher knowledge for technology integration called technological pedagogical content knowledge (originally TPCK, now known as TPACK, or technology, pedagogy, and content knowledge). This framework builds on Lee Shulman's construct of pedagogical content knowledge (PCK) to include technology knowledge. The…

  7. Pedagogical Formation Education via Distance Education

    ERIC Educational Resources Information Center

    Ozcan, Deniz; Genc, Zeynep

    2016-01-01

    The purpose of this research is to identify the perceptions of the efficacy of curriculum development on the part of pedagogical formation students, their views regarding their professional attitudes, and their attitudes towards the pedagogical formation education they receive via distance education. The study sample includes 438 Near East…

  8. A Review of Technological Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Chai, Ching Sing; Koh, Joyce Hwee Ling; Tsai, Chin-Chung

    2013-01-01

    This paper reviews 74 journal papers that investigate ICT integration from the framework of technological pedagogical content knowledge (TPACK). The TPACK framework is an extension of the pedagogical content knowledge (Shulman, 1986). TPACK is the type of integrative and transformative knowledge teachers need for effective use of ICT in…

  9. Art's Pedagogical Paradox

    ERIC Educational Resources Information Center

    Kalin, Nadine M.

    2014-01-01

    This article contributes to conversations concerning art education futures through engaging alternative relations between art, education, and democracy that mobilize education as art projects associated with the "pedagogical turn" as sites of liminality and paradox. An analysis of the art project, Pedagogical Factory, is used to outline…

  10. Disciplinary Literacy and Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Carney, Michelle; Indrisano, Roselmina

    2013-01-01

    This review reports selected literature on theory, research, and practice in disciplinary literacy, primarily reading. The authors consider the ways this literature can be viewed through the lens of Lee S. Shulman's theory of Pedagogical Content Knowledge, which includes: subject matter content knowledge, pedagogical content knowledge, and…

  11. Pedagogical Plans as Communication Oriented Objects

    ERIC Educational Resources Information Center

    Olimpo, G.; Bottino, R. M.; Earp, J.; Ott, M.; Pozzi, F.; Tavella, M.

    2010-01-01

    This paper focuses on pedagogical plans intended as objects to support human communication. Its purpose is to describe a structural model for pedagogical plans which can assist both authors and users. The model helps authors to engage in the design of a plan as a communication project and helps users in the process of understanding, customizing,…

  12. Philosophy for Children: Towards Pedagogical Transformation

    ERIC Educational Resources Information Center

    Scholl, Rosie; Nichols, Kim; Burgh, Gilbert

    2009-01-01

    Philosophical inquiry (Lipman, Sharp & Oscanyan, 1980) has the capacity to push boundaries in teaching and learning interactions with students and improve teachers' pedagogical experiences (Scholl, Nichols, & Burgh, 2008). This paper focuses on the potential for Philosophy to foster pedagogical transformation. Two groups of primary school teachers,…

  13. Adaptive internal state space construction method for reinforcement learning of a real-world agent.

    PubMed

    Samejima, K; Omori, T

    1999-10-01

    One of the difficulties encountered in applying reinforcement learning to real-world problems is the construction of a discrete state space from a continuous sensory input signal. In the absence of a priori knowledge about the task, a straightforward approach to this problem is to discretize the input space into a grid and use a lookup table. However, this method suffers from the curse of dimensionality. Some studies use continuous function approximators such as neural networks instead of lookup tables. However, when global basis functions such as sigmoid functions are used, convergence cannot be guaranteed. To overcome this problem, we propose a method in which local basis functions are incrementally assigned depending on the task requirement. Initially, only one basis function is allocated over the entire space. The basis function is divided according to the statistical properties of the locally weighted temporal difference error (TD error) of the value function. We applied this method to an autonomous robot collision avoidance problem and evaluated the validity of the algorithm in simulation. The proposed algorithm, which we call the adaptive basis division (ABD) algorithm, achieved the task using a smaller number of basis functions than conventional methods. Moreover, we applied the method to a goal-directed navigation problem of a real mobile robot. The action strategy was learned using a database of sensor data, and it was then used for navigation of a real machine. The robot reached the goal using a smaller number of internal states than with the conventional methods. PMID:12662650
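
    The allocation-on-demand idea described above can be sketched as follows; this is a loose, illustrative approximation (incrementally splitting Gaussian bases where the running TD-error statistics are poor), not the authors' ABD implementation, and all thresholds and widths are made up.

      import numpy as np

      class AdaptiveRBFValue:
          """Value-function approximator that adds local Gaussian bases where the
          running TD-error statistics are poor (illustrative sketch only)."""

          def __init__(self, width=0.5, err_threshold=0.5, lr=0.1):
              self.centers = [np.zeros(2)]          # start with a single basis
              self.widths = [width]
              self.w = [0.0]
              self.err_sum = [0.0]; self.err_n = [0]
              self.err_threshold, self.lr = err_threshold, lr

          def _phi(self, s):
              return np.array([np.exp(-np.sum((s - c)**2) / (2 * h**2))
                               for c, h in zip(self.centers, self.widths)])

          def value(self, s):
              return float(np.dot(self.w, self._phi(s)))

          def update(self, s, td_error):
              phi = self._phi(s)
              self.w = list(np.asarray(self.w) + self.lr * td_error * phi)
              k = int(np.argmax(phi))               # basis most responsible for s
              self.err_sum[k] += td_error**2; self.err_n[k] += 1
              if self.err_n[k] > 20 and self.err_sum[k] / self.err_n[k] > self.err_threshold:
                  # split: allocate a narrower basis centred on the offending state
                  self.centers.append(np.array(s, dtype=float))
                  self.widths.append(self.widths[k] / 2.0)
                  self.w.append(self.w[k])
                  self.err_sum.append(0.0); self.err_n.append(0)
                  self.err_sum[k] = 0.0; self.err_n[k] = 0

      # usage inside a TD-learning loop (illustrative):
      V = AdaptiveRBFValue()
      s, s_next, reward, gamma = np.array([0.1, 0.2]), np.array([0.15, 0.25]), 0.0, 0.95
      td = reward + gamma * V.value(s_next) - V.value(s)
      V.update(s, td)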

  14. FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions

    NASA Astrophysics Data System (ADS)

    Huang, Jingfang; Jia, Jun; Zhang, Bo

    2009-11-01

    A Fortran program package is introduced for the rapid evaluation of the screened Coulomb interactions of N particles in three dimensions. The method utilizes an adaptive oct-tree structure, and is based on the new version of fast multipole method in which the exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related packages are also available at http://www.fastmultipole.org/. This paper is a brief review of the program and its performance. Catalogue identifier: AEEQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL 2.0 No. of lines in distributed program, including test data, etc.: 12 385 No. of bytes in distributed program, including test data, etc.: 79 222 Distribution format: tar.gz Programming language: Fortran77 and Fortran90 Computer: Any Operating system: Any RAM: Depends on the number of particles, their distribution, and the adaptive tree structure Classification: 4.8, 4.12 Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution type integral where the Green's function is the fundamental solution of the modified Helmholtz equation. Solution method: An adaptive oct-tree is generated, and a new version of fast multipole method is applied in which the "multipole-to-local" translation operator is diagonalized. Restrictions: Only three and six significant digits accuracy options are provided in this version. Unusual features: Most of the codes are written in
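
    For context, the quantity the package evaluates is the screened Coulomb (Yukawa) interaction; a direct O(N²) evaluation, useful only as a small-N correctness reference against which an FMM implementation can be checked, looks like the following Python sketch.

      import numpy as np

      def yukawa_direct(positions, charges, beta):
          """Direct O(N^2) evaluation of the screened Coulomb (Yukawa) potential
          phi_i = sum_{j != i} q_j * exp(-beta * r_ij) / r_ij.
          A small-N reference only; the FMM exists to avoid this cost."""
          diff = positions[:, None, :] - positions[None, :, :]
          r = np.sqrt(np.sum(diff**2, axis=-1))
          np.fill_diagonal(r, np.inf)               # exclude self-interaction
          return np.sum(charges[None, :] * np.exp(-beta * r) / r, axis=1)

      rng = np.random.default_rng(0)
      pos = rng.random((500, 3))
      q = rng.standard_normal(500)
      phi = yukawa_direct(pos, q, beta=2.0)
      print(phi[:3])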

  15. Automated endmember determination and adaptive spectral mixture analysis using kernel methods

    NASA Astrophysics Data System (ADS)

    Rand, Robert S.; Banerjee, Amit; Broadwater, Joshua

    2013-09-01

    Various phenomena occur in geographic regions that cause pixels of a scene to contain spectrally mixed pixels. The mixtures may be linear or nonlinear. It could simply be that the pixel size of a sensor is too large so many pixels contain patches of different materials within them (linear), or there could be microscopic mixtures and multiple scattering occurring within pixels (non-linear). Often enough, scenes may contain cases of both linear and non-linear mixing on a pixel-by-pixel basis. Furthermore, appropriate endmembers in a scene are not always easy to determine. A reference spectral library of materials may or may not be available, yet, even if a library is available, using it directly for spectral unmixing may not always be fruitful. This study investigates a generalized kernel-based method for spectral unmixing that attempts to determine if each pixel in a scene is linear or non-linear, and adapts to compute a mixture model at each pixel accordingly. The effort also investigates a kernel-based support vector method for determining spectral endmembers in a scene. Two scenes of hyperspectral imagery calibrated to reflectance are used to validate the methods. We test the approaches using a HyMAP scene collected over the Waimanalo Bay region in Oahu, Hawaii, as well as an AVIRIS scene collected over the oil spill region in the Gulf of Mexico during the Deepwater Horizon oil incident.

  16. Adaptive region of interest method for analytical micro-CT reconstruction.

    PubMed

    Yang, Wanneng; Xu, Xiaochun; Bi, Kun; Zeng, Shaoqun; Liu, Qian; Chen, Shangbin

    2011-01-01

    Real-time imaging is important in automatic successive inspection with micro-computerized tomography (micro-CT). Generally, the size of the detector is chosen according to the most probable size of the measured object so that all the projection data are acquired. Given a sufficiently large imaging area and resolution of the X-ray detector, the detector is larger than the specimen projection area, which results in redundant data in the sinogram. The process of real-time micro-CT is computation-intensive because of the large amounts of source and destination data, and the speed of the reconstruction algorithm cannot always meet the requirements of real-time applications. A preprocessing method called adaptive region of interest (AROI), which detects the object's boundaries automatically to focus on the active sinogram regions, is introduced into the analytical reconstruction algorithm in this paper. The AROI method reduces the volume of the reconstructed data and thus directly accelerates the reconstruction process. It has been further shown that image quality is not compromised when applying AROI, while the reconstruction speed is increased as the square of the ratio of the sizes of the detector and the specimen slice. In practice, a conch reconstruction experiment indicated that the process is accelerated by 5.2 times with AROI and the imaging quality is not degraded. Therefore, the AROI method improves the speed of analytical micro-CT reconstruction significantly. PMID:21422587
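
    A minimal NumPy illustration of the region-of-interest idea (detect the detector channels that actually see the object and crop the sinogram before reconstruction) is given below; the threshold, margin, and array layout are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def adaptive_roi(sinogram, background=0.0, tol=1e-3, margin=4):
          """Find the detector channels that see the object and crop the sinogram
          to that band (rows: projection angles, columns: detector channels)."""
          active = np.any(np.abs(sinogram - background) > tol, axis=0)
          cols = np.flatnonzero(active)             # assumes the object is visible
          lo = max(cols[0] - margin, 0)
          hi = min(cols[-1] + margin + 1, sinogram.shape[1])
          return sinogram[:, lo:hi], (lo, hi)

      # usage: a synthetic sinogram where only the middle band is nonzero
      sino = np.zeros((180, 512))
      sino[:, 170:340] = 1.0
      cropped, (lo, hi) = adaptive_roi(sino)
      print(cropped.shape, lo, hi)   # reconstruction then runs on the smaller array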

  17. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and to remove the components with high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  18. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    NASA Astrophysics Data System (ADS)

    Ma, Xiang; Zabaras, Nicholas

    2009-03-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.
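
    The role the sparse-grid interpolant plays in the sampling step can be illustrated with a much cruder surrogate: in the Python sketch below a tabulated forward model replaces the expensive solver inside a random-walk Metropolis loop. The forward model, priors, grids, and the use of a regular-grid interpolant in place of the ASGC surrogate are all illustrative stand-ins, not the authors' method.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Stand-in "expensive" forward model: signal at 3 sensors from a 1D source
      # parameterized by location and strength (theta = (loc, strength)).
      sensors = np.array([0.2, 0.5, 0.8])
      def forward(loc, strength):
          return strength * np.exp(-(loc - sensors)**2 / 0.02)

      # Tabulate the forward model once; the interpolant then plays the role of
      # the surrogate inside the likelihood.
      locs = np.linspace(0.0, 1.0, 51)
      strengths = np.linspace(0.5, 2.0, 31)
      table = np.array([[forward(a, b) for b in strengths] for a in locs])
      surrogates = [RegularGridInterpolator((locs, strengths), table[:, :, k])
                    for k in range(sensors.size)]
      def surrogate_forward(theta):
          return np.array([float(s([theta])[0]) for s in surrogates])

      rng = np.random.default_rng(1)
      data = forward(0.35, 1.2) + 0.01 * rng.standard_normal(sensors.size)
      sigma = 0.01

      def log_post(theta):
          if not (0.0 <= theta[0] <= 1.0 and 0.5 <= theta[1] <= 2.0):
              return -np.inf                       # uniform prior support
          r = data - surrogate_forward(theta)
          return -0.5 * np.sum(r**2) / sigma**2

      # Random-walk Metropolis that only ever touches the cheap surrogate
      theta = np.array([0.5, 1.0]); lp = log_post(theta); samples = []
      for _ in range(20000):
          prop = theta + 0.02 * rng.standard_normal(2)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta.copy())
      print("posterior mean:", np.mean(samples[5000:], axis=0))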

  19. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal, and to remove the components with high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  20. An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Wen, Ning; Gordon, James J.; Elshaikh, Mohamed A.; Movsas, Benjamin; Chetty, Indrin J.

    2015-04-01

    Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm-3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. In summary, the developed adaptive FEM method preserves the prostate volume

  1. Using preconditioned adaptive step size Runge-Kutta methods for solving the time-dependent Schroedinger equation

    SciTech Connect

    Tremblay, Jean Christophe; Carrington, Tucker Jr.

    2004-12-15

    If the Hamiltonian is time dependent it is common to solve the time-dependent Schroedinger equation by dividing the propagation interval into slices and using an (e.g., split operator, Chebyshev, Lanczos) approximate matrix exponential within each slice. We show that a preconditioned adaptive step size Runge-Kutta method can be much more efficient. For a chirped laser pulse designed to favor the dissociation of HF the preconditioned adaptive step size Runge-Kutta method is about an order of magnitude more efficient than the time sliced method.
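
    A minimal illustration of the adaptive-step idea, with SciPy's off-the-shelf RK45 standing in for the preconditioned scheme of the paper (which is not reproduced here), is to hand the semidiscrete TDSE to an adaptive integrator and let it choose the step size; the two-level Hamiltonian and chirped field below are toy stand-ins.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Two-level model: H(t) = H0 + E(t) * V, with a chirped pulse E(t).
      H0 = np.diag([0.0, 1.0])
      V = np.array([[0.0, 0.1], [0.1, 0.0]])
      def field(t):
          return np.cos((1.0 + 0.002 * t) * t) * np.exp(-((t - 50.0) / 20.0)**2)

      def rhs(t, psi):
          # i dpsi/dt = H(t) psi  ->  dpsi/dt = -i H(t) psi
          return -1j * (H0 + field(t) * V) @ psi

      psi0 = np.array([1.0 + 0.0j, 0.0 + 0.0j])
      sol = solve_ivp(rhs, (0.0, 100.0), psi0, method="RK45",
                      rtol=1e-8, atol=1e-10)
      print("adaptive steps taken:", len(sol.t))
      print("final populations:", np.abs(sol.y[:, -1])**2)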

  2. An adaptive Newton continuation strategy for the fully implicit finite element immersed boundary method

    NASA Astrophysics Data System (ADS)

    Hoppe, R. H. W.; Linsenmann, C.

    2012-05-01

    The immersed boundary method (IB) is known as a powerful technique for the numerical solution of fluid-structure interaction problems such as the motion and deformation of viscoelastic bodies immersed in an external flow. It is based on the treatment of the flow equations within an Eulerian framework and of the equations of motion of the immersed bodies with respect to a Lagrangian coordinate system, with interaction equations providing the transfer between both frames. The classical IB method uses finite differences, but it can also be set up within a finite element approach in the spatial variables (FE-IB). The discretization in time usually relies on the Backward Euler (BE) method for the semidiscretized flow equations and the Forward Euler (FE) method for the equations of motion of the immersed bodies. The BE/FE FE-IB is subject to a CFL-type condition, whereas the fully implicit BE/BE FE-IB is unconditionally stable. The latter can be solved numerically by Newton-type methods whose convergence properties are dictated by an appropriate choice of the time step size, in particular if one is faced with sudden changes in the total energy of the system. In this paper, taking advantage of the well-developed affine covariant convergence theory for Newton-type methods, we study a predictor-corrector continuation strategy in time with an adaptive choice of the continuation step length. The feasibility of the approach and its superiority to the BE/FE FE-IB are illustrated by two representative numerical examples.

  3. The Opinions of the Teacher Candidates Who Attended the Pedagogical Formation Certificate Programme for the Professional Competency of Instructors

    ERIC Educational Resources Information Center

    Sahin, Mehmet

    2013-01-01

    The main purpose of this study was to determine the opinions of the teacher candidates who attend the pedagogical formation certificate programme about the professional competency of instructors. The research method is a descriptive survey model. The study group included the teacher candidates who attend the pedagogical formation…

  4. Increasing the Pedagogical Sophistication of Parents: The Basis for Improving the Education of School Pupils in the Family Setting

    ERIC Educational Resources Information Center

    Grebennikov, I. V.

    1978-01-01

    Maintains that lack of pedagogical training of parents in the Soviet Union leads to errors in the education of children in the family setting. Methods to increase parents' pedagogical sophistication include media presentations about social education, community education clubs for young families, parent teacher conferences, and training sessions…

  5. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging

    NASA Astrophysics Data System (ADS)

    Prato, M.; La Camera, A.; Bonettini, S.; Bertero, M.

    2013-06-01

    In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback-Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson-Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets is

  6. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while the upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers is that they have a dynamic adaptive control gain that establishes a sliding mode right at the beginning of the process. The gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  7. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1995-01-01

    The described and improved multi-arm invention of this application presents three strategies for adaptive control of cooperative multi-arm robots which coordinate control over a common load. In the position-position control strategy, the adaptive controllers ensure that the end-effector positions of both arms track desired trajectories in Cartesian space despite unknown time-varying interaction forces exerted through a load. In the position-hybrid control strategy, the adaptive controller of one arm controls end-effector motions in the free directions and applied forces in the constraint directions; while the adaptive controller of the other arm ensures that the end-effector tracks desired position trajectories. In the hybrid-hybrid control strategy, the adaptive controllers ensure that both end-effectors track reference position trajectories while simultaneously applying desired forces on the load. In all three control strategies, the cross-coupling effects between the arms are treated as disturbances which are compensated for by the adaptive controllers while following desired commands in a common frame of reference. The adaptive controllers do not require the complex mathematical model of the arm dynamics or any knowledge of the arm dynamic parameters or the load parameters such as mass and stiffness. Circuits in the adaptive feedback and feedforward controllers are varied by novel adaptation laws.

  8. Adaptive optimization of reference intensity for optical coherence imaging using galvanometric mirror tilting method

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2015-09-01

    Integration time and reference intensity are important factors for achieving high signal-to-noise ratio (SNR) and sensitivity in optical coherence tomography (OCT). In this context, we present an adaptive optimization method for the reference intensity of an OCT setup. The reference intensity is automatically controlled by tilting the beam position using a galvanometric scanning mirror system. Before sample scanning, the OCT system acquires a two-dimensional intensity map with normalized intensity and variables in color spaces using false-color mapping. The system then increases or decreases the reference intensity following the map data to optimize it with a given algorithm. In our experiments, the proposed method successfully corrected the reference intensity while maintaining the spectral shape, made it possible to change the integration time without manual recalibration of the reference intensity, and prevented image degradation due to over-saturation or insufficient reference intensity. Also, SNR and sensitivity could be improved by increasing the integration time with automatic adjustment of the reference intensity. We believe that our findings can significantly aid in the optimization of SNR and sensitivity for optical coherence tomography systems.

  9. The Sequential Empirical Bayes Method: An Adaptive Constrained-Curve Fitting Algorithm for Lattice QCD

    SciTech Connect

    Ying Chen; Shao-Jing Dong; Terrence Draper; Ivan Horvath; Keh-Fei Liu; Nilmani Mathur; Sonali Tamhankar; Cidambi Srinivasan; Frank X. Lee; Jianbo Zhang

    2004-05-01

    We introduce the ''Sequential Empirical Bayes Method'', an adaptive constrained-curve fitting procedure for extracting reliable priors. These are then used in standard augmented-χ² fits on separate data. This better stabilizes fits to lattice QCD overlap-fermion data at very low quark mass where a priori values are not otherwise known. Lessons learned (including caveats limiting the scope of the method) from studying artificial data are presented. As an illustration, from local-local two-point correlation functions, we obtain masses and spectral weights for ground and first-excited states of the pion, give preliminary fits for the a₀ where ghost states (a quenched artifact) must be dealt with, and elaborate on the details of fits of the Roper resonance and S₁₁(N^(1/2−)) previously presented elsewhere. The data are from overlap fermions on a quenched 16³ × 28 lattice with spatial size La = 3.2 fm and pion mass as low as ≈180 MeV.
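
    The "augmented-χ²" ingredient mentioned here (priors entering the fit as extra residual terms) can be sketched generically as below; the two-exponential toy model, data, priors, and widths are all invented for illustration and this is not the Sequential Empirical Bayes procedure itself.

      import numpy as np
      from scipy.optimize import least_squares

      # Toy two-state correlator C(t) = A0 exp(-m0 t) + A1 exp(-m1 t)
      t = np.arange(1, 20)
      true = dict(A0=1.2, m0=0.30, A1=0.8, m1=0.90)
      rng = np.random.default_rng(3)
      data = (true["A0"] * np.exp(-true["m0"] * t)
              + true["A1"] * np.exp(-true["m1"] * t)) * (1 + 0.02 * rng.standard_normal(t.size))
      sigma = 0.02 * data

      # "Priors" (e.g. extracted beforehand from an independent subset of the data)
      prior = np.array([1.0, 0.35, 1.0, 1.0])          # A0, m0, A1, m1
      prior_width = np.array([0.5, 0.10, 0.5, 0.40])

      def residuals(p):
          A0, m0, A1, m1 = p
          model = A0 * np.exp(-m0 * t) + A1 * np.exp(-m1 * t)
          data_res = (data - model) / sigma
          prior_res = (p - prior) / prior_width        # augmented-chi^2 term
          return np.concatenate([data_res, prior_res])

      fit = least_squares(residuals, x0=prior)
      print("fitted (A0, m0, A1, m1):", fit.x)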

  10. Adaptive multi-grid method for a periodic heterogeneous medium in 1-D

    SciTech Connect

    Fish, J.; Belsky, V.

    1995-12-31

    A multi-grid method for a periodic heterogeneous medium in 1-D is presented. Based on homogenization theory, special intergrid connection operators have been developed to imitate the low frequency response of the differential equations with oscillatory coefficients. The proposed multi-grid method has been proved to have a fast rate of convergence governed by the ratio q/(4-q), where 0 < q < 4. On this basis, an adaptive multiscale computational scheme is developed. By this technique a computational model entirely constructed on the scale of material heterogeneity is used only where it is necessary to do so, as indicated by so-called Microscale Reduction Error (MRE) indicators, while in the remaining portion of the problem domain the medium is treated as homogeneous with effective properties. Such a posteriori MRE indicators and estimators are developed on the basis of assessing the validity of the two-scale asymptotic expansion.

  11. A new method for beam-damage-diagnosis using adaptive fuzzy neural structure and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Nguyen, Sy Dzung; Ngo, Kieu Nhi; Tran, Quang Thinh; Choi, Seung-Bok

    2013-08-01

    In this work, we present a new beam-damage-locating (BDL) method based on an algorithm which combines an adaptive fuzzy neural structure (AFNS) and an average quantity solution to the wavelet transform coefficients (AQWTC) of the beam vibration signal. The AFNS is used for remembering the undamaged-beam dynamic properties, while the AQWTC is used for signal analysis. Firstly, the beam is divided into elements and excited so that it vibrates. The vibration signal at each element, which is displacement in this work, is measured, filtered and transformed into a wavelet signal with a chosen scale sheet to calculate the corresponding difference in AQWTC between two cases: the undamaged status and the status at the checked time. A database of these differences is then used to find the elements with anomalous features in the wavelet quantitative analysis, which directly indicate signs of beam damage. The effectiveness of the proposed approach, which combines fuzzy neural structure and wavelet transform methods, is demonstrated by experiments on measured data sets from a vibrated beam-type steel frame structure.

  12. Adapting phase-switch Monte Carlo method for flexible organic molecules

    NASA Astrophysics Data System (ADS)

    Bridgwater, Sally; Quigley, David

    2014-03-01

    The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter which divides the displacement space of the N molecules into regions energetically favourable for each polymorph; this space is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse-grained model of cholesterol and the resulting free energies will be discussed.

  13. Global adaptive rank truncated product method for gene-set analysis in association studies.

    PubMed

    Vilor-Tejedor, Natalia; Calle, M Luz

    2014-08-01

    Gene set analysis (GSA) aims to assess the overall association of a set of genetic variants with a phenotype and has the potential to detect subtle effects of variants in a gene or a pathway that might be missed when assessed individually. We present a new implementation of the Adaptive Rank Truncated Product method (ARTP) for analyzing the association of a set of Single Nucleotide Polymorphisms (SNPs) in a gene or pathway. The new implementation, referred to as globalARTP, improves the original one by allowing the different SNPs in the set to have different modes of inheritance. We perform a simulation study for exploring the power of the proposed methodology in a set of scenarios with different numbers of causal SNPs with different effect sizes. Moreover, we show the advantage of using the gene set approach in the context of an Alzheimer's disease case-control study where we explore the endocytosis pathway. The new method is implemented in the R function globalARTP of the globalGSA package available at http://cran.r-project.org. PMID:25082012
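
    The core statistic is easy to sketch: for each of several truncation points take the product of that many smallest p-values, turn each into a permutation p-value, and calibrate the minimum over truncation points with the same permutations. The toy code below illustrates only this two-layer idea; it assumes independent, uniform-under-the-null p-values (the real method permutes phenotypes to respect correlation between SNPs) and it is not the globalARTP implementation.

      import numpy as np

      def artp_pvalue(pvals, ks=(1, 2, 5, 10), n_perm=2000, rng=None):
          """Adaptive rank truncated product (illustrative sketch)."""
          if rng is None:
              rng = np.random.default_rng(0)
          pvals = np.asarray(pvals)
          ks = [k for k in ks if k <= pvals.size]

          def stats(p):                   # -log product of the K smallest p-values
              s = np.sort(p)
              return np.array([-np.log(s[:k]).sum() for k in ks])

          obs = stats(pvals)
          # null permutations: i.i.d. uniform p-values (independence assumption)
          perm = np.array([stats(rng.uniform(size=pvals.size)) for _ in range(n_perm)])
          # per-K permutation p-values for the observed data and each permutation
          p_obs = (1 + np.sum(perm >= obs, axis=0)) / (n_perm + 1)
          ranks = np.argsort(np.argsort(-perm, axis=0), axis=0) + 1
          p_perm = ranks / (n_perm + 1)
          # adaptive step: minimum over K, calibrated against the permutation minima
          return (1 + np.sum(p_perm.min(axis=1) <= p_obs.min())) / (n_perm + 1)

      print(artp_pvalue([0.001, 0.02, 0.3, 0.6, 0.8, 0.9]))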

  14. Bayesian adaptive estimation of the contrast sensitivity function: the quick CSF method.

    PubMed

    Lesmes, Luis Andres; Lu, Zhong-Lin; Baek, Jongsoo; Albright, Thomas D

    2010-01-01

    The contrast sensitivity function (CSF) predicts functional vision better than acuity, but long testing times prevent its psychophysical assessment in clinical and practical applications. This study presents the quick CSF (qCSF) method, a Bayesian adaptive procedure that applies a strategy developed to estimate multiple parameters of the psychometric function (A. B. Cobo-Lewis, 1996; L. L. Kontsevich & C. W. Tyler, 1999). Before each trial, a one-step-ahead search finds the grating stimulus (defined by frequency and contrast) that maximizes the expected information gain (J. V. Kujala & T. J. Lukka, 2006; L. A. Lesmes et al., 2006) about four CSF parameters. By directly estimating CSF parameters, data collected at one spatial frequency improve sensitivity estimates across all frequencies. A psychophysical study validated that CSFs obtained with 100 qCSF trials (approximately 10 min) exhibited good precision across spatial frequencies (SD < 2-3 dB) and excellent agreement with CSFs obtained independently (mean RMSE = 0.86 dB). To estimate the broad sensitivity metric provided by the area under the log CSF (AULCSF), only 25 trials were needed to achieve a coefficient of variation of 15-20%. The current study demonstrates the method's value for basic and clinical investigations. Further studies, applying the qCSF to measure wider ranges of normal and abnormal vision, will determine how its efficiency translates to clinical assessment. PMID:20377294
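
    A heavily stripped-down version of the one-step-ahead idea, reduced to a single threshold parameter of a fixed psychometric function (the real qCSF works over four CSF parameters and a frequency-by-contrast stimulus space), might look like the following; the psychometric form, grids, and simulated observer are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      log_c = np.linspace(-3.0, 0.0, 61)          # candidate log10-contrast stimuli
      theta = np.linspace(-3.0, 0.0, 121)         # hypotheses for the log-threshold
      posterior = np.full(theta.size, 1.0 / theta.size)

      def p_correct(lc, th, slope=3.0, guess=0.5, lapse=0.02):
          """Weibull-style psychometric function on a log-contrast axis."""
          p = 1.0 - np.exp(-10.0 ** (slope * (lc - th)))
          return guess + (1.0 - guess - lapse) * p

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      true_threshold = -1.4                        # simulated observer
      for trial in range(100):
          # one-step-ahead search: expected posterior entropy for each stimulus
          expected_H = []
          for lc in log_c:
              pc = p_correct(lc, theta)
              p_yes = np.sum(posterior * pc)
              H_yes = entropy(posterior * pc / p_yes)
              H_no = entropy(posterior * (1.0 - pc) / (1.0 - p_yes))
              expected_H.append(p_yes * H_yes + (1.0 - p_yes) * H_no)
          best = log_c[int(np.argmin(expected_H))]           # most informative trial
          correct = rng.random() < p_correct(best, true_threshold)
          like = p_correct(best, theta) if correct else 1.0 - p_correct(best, theta)
          posterior = posterior * like
          posterior /= posterior.sum()

      print("estimated log-threshold:", round(float(np.sum(posterior * theta)), 3))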

  15. Adaptive multiple feature method (AMFM) for early detection of parenchymal pathology in a smoking population

    NASA Astrophysics Data System (ADS)

    Uppaluri, Renuka; McLennan, Geoffrey; Enright, Paul; Standen, James; Boyer-Pfersdorf, Pamela; Hoffman, Eric A.

    1998-07-01

    Application of the Adaptive Multiple Feature Method (AMFM) to identify early changes in a smoking population is discussed. This method was specifically applied to determine if differences in CT images of smokers (with normal lung function) and non-smokers (with normal lung function) could be found through computerized texture analysis. Results demonstrated that these groups could be differentiated with over 80.0% accuracy. Further, differences on CT images between normal appearing lung from non-smokers (with normal lung function) and normal appearing lung from smokers (with abnormal lung function) were also investigated. These groups were differentiated with over 89.5% accuracy. In analyzing the whole lung region by region, the AMFM characterized 38.6% of a smoker lung (with normal lung function) as mild emphysema. We can conclude that the AMFM detects parenchymal patterns in the lungs of smokers which are different from normal patterns occurring in healthy non-smokers. These patterns could perhaps indicate early smoking-related changes.

  16. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    USGS Publications Warehouse

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves a high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
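
    The simplest ingredient behind such moving-mesh methods, equidistributing grid points with respect to an arc-length monitor function so that nodes cluster at a sharp front, can be sketched in a few lines (a single static de Boor-style pass, not the coupled MMPDE solver of the paper; the profile below is an invented stand-in for a sharp infiltration front).

      import numpy as np

      def equidistribute(x, u, n_new=None):
          """Relocate mesh points so that the arc-length monitor
          M(x) = sqrt(1 + (du/dx)^2) is equidistributed over the cells."""
          n_new = n_new or x.size
          dudx = np.gradient(u, x)
          M = np.sqrt(1.0 + dudx**2)
          # cumulative integral of the monitor function (trapezoid rule)
          I = np.concatenate([[0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))])
          targets = np.linspace(0.0, I[-1], n_new)
          return np.interp(targets, I, x)     # new mesh: equal monitor per cell

      # usage: a steep front (e.g. a sharp moisture/concentration profile)
      x = np.linspace(0.0, 1.0, 101)
      u = np.tanh(80.0 * (x - 0.6))
      x_new = equidistribute(x, u)
      print("smallest cell:", np.diff(x_new).min(), " largest cell:", np.diff(x_new).max())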

  17. Detection of Anthropogenic Particles in Fish Stomachs: An Isolation Method Adapted to Identification by Raman Spectroscopy.

    PubMed

    Collard, France; Gilbert, Bernard; Eppe, Gauthier; Parmentier, Eric; Das, Krishna

    2015-10-01

    Microplastic particles (MP) contaminate oceans and affect marine organisms in several ways. Ingestion combined with food intake is generally reported. However, data interpretation is often hampered by the difficulty of separating MP from bulk samples. Visual examination is often used as one step, or the only step, to sort these particles. However, color, size, and shape are insufficient and often unreliable criteria. We present an extraction method based on hypochlorite digestion and isolation of MP from the membrane by sonication. The protocol is especially well adapted to a subsequent analysis by Raman spectroscopy. The method avoids fluorescence problems, allowing better identification of anthropogenic particles (AP) from stomach contents of fish by Raman spectroscopy. It was developed with commercial samples of microplastics and cotton along with stomach contents from three different Clupeiformes fishes: Clupea harengus, Sardina pilchardus, and Engraulis encrasicolus. The optimized digestion and isolation protocol showed no visible impact on microplastic and cotton particles, while the Raman spectroscopic spectrum allowed the precise identification of microplastics and textile fibers. Thirty-five particles were isolated from nine fish stomach contents. Raman analysis confirmed 11 microplastics and 13 fibers mainly made of cellulose or lignin. Some particles were not completely identified but contained artificial dyes. The novel approach developed in this manuscript should help to assess the presence, quantity, and composition of AP in planktivorous fish stomachs. PMID:26289815

  18. Adaptive meshless local maximum-entropy finite element method for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Wu, C. T.; Young, D. L.; Hong, H. K.

    2014-01-01

    In this paper, a meshless local maximum-entropy finite element method (LME-FEM) is proposed to solve the 1D Poisson equation and steady-state convection-diffusion problems at various Peclet numbers in both 1D and 2D. By using the local maximum-entropy (LME) approximation scheme to construct the element shape functions in the finite element method (FEM) formulation, additional nodes can be introduced within an element, without any mesh refinement, to increase the accuracy of the numerical approximation of the unknown function; the procedure is similar to conventional p-refinement but does not increase the element connectivity, thereby avoiding ill-conditioned matrices. The resulting LME-FEM preserves several significant characteristics of conventional FEM, such as the Kronecker-delta property at element vertices, partition of unity of the shape functions, and exact reproduction of constant and linear functions. Furthermore, according to the essential properties of the LME approximation scheme, nodes can be introduced in an arbitrary way while the continuity of the shape function along element edges is preserved. No transition element is needed to connect elements of different orders. The property of arbitrary local refinement makes LME-FEM a numerical method that can adaptively solve various problems for which troublesome local mesh refinement would otherwise be necessary to obtain reasonable solutions. Several numerical examples with dramatically varying solutions are presented to test the capability of the current method. The numerical results show that LME-FEM can obtain much better and more stable solutions than conventional FEM with linear elements.
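
    In 1D, LME shape functions reduce to a small Newton iteration enforcing first-order consistency; the sketch below is an illustrative implementation of that ingredient only (the node set, locality parameter β, and evaluation point are arbitrary choices, and boundary behaviour is not treated).

      import numpy as np

      def lme_shape_functions(x, nodes, beta, tol=1e-12, max_iter=50):
          """1D local maximum-entropy shape functions at evaluation point x:
          p_a(x) ~ exp(-beta*(x - x_a)^2 + lam*(x_a - x)), with lam chosen by a
          scalar Newton iteration so that sum_a p_a * (x_a - x) = 0."""
          dx = nodes - x
          lam = 0.0
          for _ in range(max_iter):
              w = np.exp(-beta * dx**2 + lam * dx)
              p = w / w.sum()
              r = np.dot(p, dx)                 # first-order consistency residual
              if abs(r) < tol:
                  break
              J = np.dot(p, dx**2) - r**2       # dr/dlam (always > 0)
              lam -= r / J
          return p

      nodes = np.linspace(0.0, 1.0, 6)
      p = lme_shape_functions(0.37, nodes, beta=40.0)
      print("partition of unity:", p.sum())              # ~1
      print("linear reproduction:", np.dot(p, nodes))    # ~0.37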

  19. The morphing method as a flexible tool for adaptive local/non-local simulation of static fracture

    NASA Astrophysics Data System (ADS)

    Azdoud, Yan; Han, Fei; Lubineau, Gilles

    2014-09-01

    We introduce a framework that adapts local and non-local continuum models to simulate static fracture problems. Non-local models based on the peridynamic theory are promising for the simulation of fracture, as they allow discontinuities in the displacement field. However, they remain computationally expensive. As an alternative, we develop an adaptive coupling technique based on the morphing method to restrict the non-local model adaptively during the evolution of the fracture. The rest of the structure is described by local continuum mechanics. We conduct all simulations in three dimensions, using the relevant discretization scheme in each domain, i.e., the discontinuous Galerkin finite element method in the peridynamic domain and the continuous finite element method in the local continuum mechanics domain.

  20. Science Teacher Education in the Twenty-First Century: a Pedagogical Framework for Technology-Integrated Social Constructivism

    NASA Astrophysics Data System (ADS)

    Barak, Miri

    2016-01-01

    Changes in our global world have shifted the skill demands from acquisition of structured knowledge to mastery of skills, often referred to as twenty-first century competencies. Given these changes, a sequential explanatory mixed methods study was undertaken to (a) examine predominant instructional methods and technologies used by teacher educators, (b) identify attributes for learning and teaching in the twenty-first century, and (c) develop a pedagogical framework for promoting meaningful usage of advanced technologies. Quantitative and qualitative data were collected via an online survey, personal interviews, and written reflections with science teacher educators and student teachers. Findings indicated that teacher educators do not provide sufficient models for the promotion of reform-based practice via web 2.0 environments, such as Wikis, blogs, social networks, or other cloud technologies. Findings also indicated four attributes for teaching and learning in the twenty-first century: (a) adapting to frequent changes and uncertain situations, (b) collaborating and communicating in decentralized environments, (c) generating data and managing information, and (d) releasing control by encouraging exploration. Guided by social constructivist paradigms and twenty-first century teaching attributes, this study suggests a pedagogical framework for fostering meaningful usage of advanced technologies in science teacher education courses.

  1. A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement

    SciTech Connect

    Guzik, S; McCorquodale, P; Colella, P

    2011-12-16

    A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.

  2. AP-Cloud: Adaptive particle-in-cloud method for optimal solutions to Vlasov–Poisson equation

    DOE PAGES

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-04-19

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov–Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.

  3. AP-Cloud: Adaptive Particle-in-Cloud method for optimal solutions to Vlasov-Poisson equation

    NASA Astrophysics Data System (ADS)

    Wang, Xingyu; Samulyak, Roman; Jiao, Xiangmin; Yu, Kwangmin

    2016-07-01

    We propose a new adaptive Particle-in-Cloud (AP-Cloud) method for obtaining optimal numerical solutions to the Vlasov-Poisson equation. Unlike the traditional particle-in-cell (PIC) method, which is commonly used for solving this problem, the AP-Cloud adaptively selects computational nodes or particles to deliver higher accuracy and efficiency when the particle distribution is highly non-uniform. Unlike other adaptive techniques for PIC, our method balances the errors in PDE discretization and Monte Carlo integration, and discretizes the differential operators using a generalized finite difference (GFD) method based on a weighted least square formulation. As a result, AP-Cloud is independent of the geometric shapes of computational domains and is free of artificial parameters. Efficient and robust implementation is achieved through an octree data structure with 2:1 balance. We analyze the accuracy and convergence order of AP-Cloud theoretically, and verify the method using an electrostatic problem of a particle beam with halo. Simulation results show that the AP-Cloud method is substantially more accurate and faster than the traditional PIC, and it is free of artificial forces that are typical for some adaptive PIC techniques.
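
    The GFD ingredient mentioned in this record, estimating differential operators on scattered nodes by a weighted least-squares fit of a local Taylor expansion, can be illustrated as follows; the Gaussian weight and neighbour selection are arbitrary illustrative choices, not the AP-Cloud implementation.

      import numpy as np

      def gfd_laplacian(center, neighbors, f_center, f_neighbors):
          """Estimate the 2D Laplacian at `center` from scattered neighbors by a
          weighted least-squares fit of a 2nd-order Taylor expansion."""
          d = neighbors - center                           # (n, 2) offsets
          h = np.sqrt(np.sum(d**2, axis=1))
          w = np.exp(-(h / h.max())**2)                    # distance weights
          # unknowns: [fx, fy, fxx, fyy, fxy]
          A = np.column_stack([d[:, 0], d[:, 1],
                               0.5 * d[:, 0]**2, 0.5 * d[:, 1]**2,
                               d[:, 0] * d[:, 1]])
          b = f_neighbors - f_center
          coef, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
          return coef[2] + coef[3]                         # fxx + fyy

      # usage: f(x, y) = x^2 + y^2 has Laplacian 4 everywhere
      rng = np.random.default_rng(5)
      center = np.array([0.3, 0.4])
      nbrs = center + 0.05 * rng.standard_normal((12, 2))
      f = lambda p: p[..., 0]**2 + p[..., 1]**2
      print(gfd_laplacian(center, nbrs, f(center), f(nbrs)))   # ~4.0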

  4. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits provided by the ASIR method, with respect to pure FBP, for image quality and dose reduction, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS curve shape were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit; 40% ASIR was observed to be the best trade-off between noise reduction and the clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
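
    An "x% ASIR" setting is commonly described as a weighted blend of the iteratively reconstructed image with the FBP image; the toy sketch below illustrates only that blending notion and its effect on noise in a uniform region, with made-up images and noise levels, and is in no way GE's algorithm.

      import numpy as np

      def blend(fbp_img, ir_img, asir_percent):
          """Blend an FBP image with an iterative reconstruction the way an
          'x% ASIR' setting is usually described (illustrative only)."""
          w = asir_percent / 100.0
          return (1.0 - w) * fbp_img + w * ir_img

      # toy: same underlying image, FBP carries more noise than the IR result
      rng = np.random.default_rng(6)
      truth = np.zeros((128, 128)); truth[40:90, 40:90] = 100.0
      fbp = truth + rng.normal(0.0, 20.0, truth.shape)
      ir = truth + rng.normal(0.0, 8.0, truth.shape)
      for pct in (0, 40, 100):
          img = blend(fbp, ir, pct)
          print(pct, "% ->  SD in uniform region:", round(float(img[50:80, 50:80].std()), 1))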

  5. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.

  6. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.

  7. An adaptive total variation image reconstruction method for speckles through disordered media

    NASA Astrophysics Data System (ADS)

    Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei

    2013-09-01

    Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with an image reconstruction method. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, images restored by common reconstruction algorithms such as Tikhonov regularization have a relatively low signal-to-noise ratio (SNR) due to experimental noise and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory and statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. They also indicate that, compared with the image formed directly by a `clean' system, the reconstructed results can overcome the diffraction limit of the `clean' system, and are therefore conducive to the observation of cells, protein molecules in biological tissues and other structures at the micro/nano scale.
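
    The paper relies on TVAL3, an augmented Lagrangian / alternating direction solver. The sketch below conveys the underlying idea with a plainer tool: gradient descent on a data-fidelity term plus a smoothed total-variation penalty, applied to a toy "speckle" measurement produced by a random transmission matrix. Image size, noise level, regularization weight and step size are all illustrative assumptions; this is not the TVAL3 algorithm itself.

        import numpy as np

        rng = np.random.default_rng(0)

        def grad_img(x):
            """Forward-difference image gradient with Neumann boundaries."""
            gx = np.zeros_like(x)
            gy = np.zeros_like(x)
            gx[:-1, :] = x[1:, :] - x[:-1, :]
            gy[:, :-1] = x[:, 1:] - x[:, :-1]
            return gx, gy

        def div_field(px, py):
            """Discrete divergence, the negative adjoint of grad_img."""
            dx = np.zeros_like(px)
            dy = np.zeros_like(py)
            dx[0, :] = px[0, :]
            dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
            dx[-1, :] = -px[-2, :]
            dy[:, 0] = py[:, 0]
            dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
            dy[:, -1] = -py[:, -2]
            return dx + dy

        def tv_reconstruct(A, y, shape, lam=0.05, eps=0.01, step=0.02, iters=1500):
            """Gradient descent on 0.5 * ||A x - y||^2 + lam * smoothed-TV(x)."""
            x = np.zeros(shape)
            for _ in range(iters):
                gx, gy = grad_img(x)
                mag = np.sqrt(gx**2 + gy**2 + eps**2)
                tv_grad = -div_field(gx / mag, gy / mag)
                data_grad = (A.T @ (A @ x.ravel() - y)).reshape(shape)
                x -= step * (data_grad + lam * tv_grad)
            return x

        # Toy "speckle" measurement: a random transmission matrix acting on a blocky object.
        n = 16
        truth = np.zeros((n, n))
        truth[4:12, 5:10] = 1.0
        A = rng.normal(size=(3 * n * n, n * n)) / np.sqrt(n * n)
        y = A @ truth.ravel() + 0.01 * rng.normal(size=3 * n * n)
        rec = tv_reconstruct(A, y, (n, n))
        print("relative reconstruction error:", np.linalg.norm(rec - truth) / np.linalg.norm(truth))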

  8. An experimental study of concurrent methods for adaptively controlling vertical tail buffet in high performance aircraft

    NASA Astrophysics Data System (ADS)

    Roberts, Patrick J.

    High performance twin-tail aircraft, like the F-15 and F/A-18, encounter a condition known as tail buffet. At high angles of attack, vortices are generated at the wing-fuselage interface (shoulder) or other leading edge extensions. These vortices are directed toward the twin vertical tails. When the flow interacts with the vertical tail it creates pressure variations that can oscillate the vertical tail assembly. This results in fatigue cracks in the vertical tail assembly that can decrease the fatigue life and increase maintenance costs. Recently, an offset piezoceramic stack actuator was used on an F-15 wind tunnel model to control buffet-induced vibrations at high angles of attack. The controller was based on acceleration feedback control methods. In this thesis a procedure for designing the offset piezoceramic stack actuators is developed. This design procedure includes determining the quantity and type of piezoceramic stacks used in these actuators. The changes in vertical-tail stresses caused by these actuators during active control are investigated. In many cases, linear controllers are very effective in reducing vibrations. However, during flight, the natural frequencies of the vertical tail structural system change as the airspeed increases. This, in turn, reduces the effectiveness of a linear controller. Other causes, such as unmodeled dynamics and nonlinear effects due to debonds, also reduce the effectiveness of linear controllers. In this thesis, an adaptive neural network is used to augment the linear controller to correct these effects.

  9. Adaptive Multilevel Splitting Method for Molecular Dynamics Calculation of Benzamidine-Trypsin Dissociation Time.

    PubMed

    Teo, Ivan; Mayne, Christopher G; Schulten, Klaus; Lelièvre, Tony

    2016-06-14

    Adaptive multilevel splitting (AMS) is a rare event sampling method that requires minimal parameter tuning and allows unbiased sampling of transition pathways of a given rare event. Previous simulation studies have verified the efficiency and accuracy of AMS in the calculation of transition times for simple systems in both Monte Carlo and molecular dynamics (MD) simulations. Now, AMS is applied for the first time to an MD simulation of protein-ligand dissociation, representing a leap in complexity from the previous test cases. Of interest is the dissociation rate, which is typically too low to be accessible to conventional MD. The present study joins other recent efforts to develop advanced sampling techniques in MD to calculate dissociation rates, which are gaining importance in the pharmaceutical field as indicators of drug efficacy. The system investigated here, benzamidine bound to trypsin, is an example common to many of these efforts. The AMS estimate of the dissociation rate was found to be (2.6 ± 2.4) × 10² s⁻¹, which compares well with the experimental value. PMID:27159059
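
    For readers unfamiliar with the algorithm, the sketch below runs adaptive multilevel splitting on a toy one-dimensional overdamped Langevin double well, estimating the small probability of crossing from well A to well B before falling back: at each iteration the replica with the lowest maximum of the reaction coordinate is killed and rebranched from a survivor, and the estimate is multiplied by the surviving fraction. The potential, reaction coordinate and all numerical parameters are illustrative; the benzamidine-trypsin MD setup of the paper is far beyond this sketch.

        import numpy as np

        rng = np.random.default_rng(1)

        # Overdamped Langevin dynamics in the double well V(x) = (x^2 - 1)^2.
        beta, dt = 4.0, 1e-3
        force = lambda x: -4.0 * x * (x**2 - 1.0)
        zA, zB, x0 = -0.8, 0.9, -0.7        # "back in A" level, "reached B" level, start point

        def run_path(x_start):
            """Integrate until the walker falls back below zA or climbs above zB."""
            xs = [x_start]
            x = x_start
            while zA < x < zB:
                x += force(x) * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
                xs.append(x)
            return np.array(xs)

        def ams_probability(n_rep=50):
            """Adaptive multilevel splitting estimate of P(reach zB before zA | start at x0)."""
            paths = [run_path(x0) for _ in range(n_rep)]
            p = 1.0
            while True:
                levels = np.array([path.max() for path in paths])
                z_kill = levels.min()
                if z_kill >= zB:
                    return p
                killed = np.flatnonzero(levels <= z_kill)
                survivors = [i for i in range(n_rep) if levels[i] > z_kill]
                if not survivors:                       # extinction of the replica population
                    return 0.0
                p *= (n_rep - len(killed)) / n_rep      # level-crossing survival factor
                for i in killed:
                    donor = paths[rng.choice(survivors)]
                    cut = int(np.argmax(donor > z_kill))    # first crossing of the killed level
                    paths[i] = np.concatenate([donor[:cut], run_path(donor[cut])])

        print("AMS estimate of the A-to-B crossing probability:", ams_probability())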

  10. A Domain-Decomposed Multi-Level Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)

    1998-01-01

    This work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines; it is applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000; computing time has been secured on all these platforms. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not previously been used by the CFD community. Since the technique offers on-the-fly partitioning, it offers a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic, it permits the job to be repartitioned without global communication in cases where additional processors become available after the simulation has begun, or in cases where dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines, which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.
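
    The abstract notes only that the partitioner grows out of N-body techniques; in that family the usual ingredient is a space-filling-curve ordering of cells, so the sketch below shows on-the-fly partitioning by Morton (Z-order) keys as a plausible illustration, not as the paper's exact scheme. Cell coordinates, bit depth and the equal-size splitting rule are assumptions.

        import numpy as np

        def morton_key(i, j, k, bits=10):
            """Interleave the bits of integer cell coordinates (i, j, k) into a Morton key."""
            key = 0
            for b in range(bits):
                key |= ((i >> b) & 1) << (3 * b)
                key |= ((j >> b) & 1) << (3 * b + 1)
                key |= ((k >> b) & 1) << (3 * b + 2)
            return key

        def partition_cells(cells, n_parts):
            """Sort cells along the Morton curve and cut the curve into equal-size chunks."""
            keys = np.array([morton_key(i, j, k) for (i, j, k) in cells])
            order = np.argsort(keys)
            return np.array_split(order, n_parts)   # lists of cell indices, one per processor

        # Example: a small 8x8x8 block of Cartesian cells split among 4 "processors".
        cells = [(i, j, k) for i in range(8) for j in range(8) for k in range(8)]
        parts = partition_cells(cells, 4)
        print([len(p) for p in parts])   # 128 cells per partition

    Because the partition is just a sort plus an even split of the curve, it can be recomputed cheaply whenever the processor count or the adapted mesh changes, which is the kind of flexibility the abstract emphasizes.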

  11. An Adaptive Sensor Data Segments Selection Method for Wearable Health Care Services.

    PubMed

    Chen, Shih-Yeh; Lai, Chin-Feng; Hwang, Ren-Hung; Lai, Ying-Hsun; Wang, Ming-Shi

    2015-12-01

    As cloud computing and wearable device technologies mature, related services have grown more and more popular in recent years. Healthcare is one of the most popular of these services, adopting wearable devices to sense signals of adverse physiological events and to notify users. The development and implementation of long-term healthcare monitoring that can prevent, or quickly respond to, the occurrence of disease and accidents presents an interesting challenge given limits on computing power and energy. This study proposes an adaptive sensor data segments selection method for wearable health care services that considers the sensing frequency of the various signals from the human body as well as the data transmission among devices. The healthcare service regulates the sensing frequency of the devices by considering the overall cloud computing environment and the sensing variations of the wearable health care service. The experimental results show that the proposed service can effectively transmit the sensing data and prolong the overall lifetime of health care services. PMID:26490152
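
    The abstract does not give the regulation rule, so the toy below merely illustrates the general idea of segment-based adaptation: a wearable lengthens its sensing interval while a vital-sign segment is quiet and snaps back to fast sampling when the segment becomes volatile. The thresholds, adjustment factors and heart-rate data are invented for illustration.

        def adapt_interval(segment, interval, quiet_spread=2.0, busy_spread=10.0,
                           min_interval=1.0, max_interval=30.0):
            """Adjust the sensing interval (seconds) from the spread of the latest segment.

            A quiet segment lets the device double its interval and save energy; a
            volatile segment forces it back toward fast sampling.
            """
            spread = max(segment) - min(segment)
            if spread < quiet_spread:
                return min(interval * 2.0, max_interval)
            if spread > busy_spread:
                return max(interval / 4.0, min_interval)
            return interval

        # Example: heart-rate segments (beats per minute) streaming from a wearable.
        interval = 5.0
        for segment in ([72, 72, 73], [74, 73, 74], [75, 90, 104], [102, 101, 102]):
            interval = adapt_interval(segment, interval)
            print(f"spread {max(segment) - min(segment):5.1f} bpm -> next interval {interval:4.1f} s")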

  12. Bayesian adaptive estimation of the contrast sensitivity function: The quick CSF method

    PubMed Central

    Lesmes, Luis Andres; Lu, Zhong-Lin; Baek, Jongsoo; Albright, Thomas D.

    2015-01-01

    The contrast sensitivity function (CSF) predicts functional vision better than acuity, but long testing times prevent its psychophysical assessment in clinical and practical applications. This study presents the quick CSF (qCSF) method, a Bayesian adaptive procedure that applies a strategy developed to estimate multiple parameters of the psychometric function (A. B. Cobo-Lewis, 1996; L. L. Kontsevich & C. W. Tyler, 1999). Before each trial, a one-step-ahead search finds the grating stimulus (defined by frequency and contrast) that maximizes the expected information gain about four CSF parameters (J. V. Kujala & T. J. Lukka, 2006; L. A. Lesmes et al., 2006). By directly estimating CSF parameters, data collected at one spatial frequency improve sensitivity estimates across all frequencies. A psychophysical study validated that CSFs obtained with 100 qCSF trials (~10 min) exhibited good precision across spatial frequencies (SD < 2–3 dB) and excellent agreement with CSFs obtained independently (mean RMSE = 0.86 dB). To estimate the broad sensitivity metric provided by the area under the log CSF (AULCSF), only 25 trials were needed to achieve a coefficient of variation of 15–20%. The current study demonstrates the method’s value for basic and clinical investigations. Further studies, applying the qCSF to measure wider ranges of normal and abnormal vision, will determine how its efficiency translates to clinical assessment. PMID:20377294
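
    The sketch below illustrates the one-step-ahead information-gain search on a deliberately reduced problem: a two-parameter CSF (peak gain and peak frequency, fixed bandwidth) on a coarse grid, a simple 2AFC psychometric function with a small lapse allowance, and a simulated observer. The real qCSF estimates four CSF parameters and uses the published psychometric model; everything here (grids, functions, parameter values) is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(2)

        # Reduced 2-parameter CSF grid (peak gain, peak frequency); the real qCSF fits four.
        gains = np.logspace(1.0, 2.5, 20)           # candidate peak sensitivities
        peaks = np.logspace(-0.3, 1.0, 20)          # candidate peak frequencies (cycles/deg)
        G, F = np.meshgrid(gains, peaks, indexing="ij")
        bandwidth = 1.5                             # fixed log2 bandwidth (assumption)

        freqs = np.logspace(-0.3, 1.3, 12)          # candidate grating frequencies
        contrasts = np.logspace(-3.0, 0.0, 20)      # candidate grating contrasts

        def sensitivity(gain, peak, f):
            return gain * np.exp(-np.log2(f / peak) ** 2 / (2.0 * bandwidth ** 2))

        def p_correct(gain, peak, f, c):
            """2AFC psychometric function; the 0.48 factor keeps probabilities below 0.98."""
            return 0.5 + 0.48 * (1.0 - np.exp(-(c * sensitivity(gain, peak, f)) ** 3))

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        prior = np.full(G.shape, 1.0 / G.size)
        true_gain, true_peak = 80.0, 3.0            # simulated observer

        for trial in range(50):
            best_stim, best_eig = None, -1.0
            for f in freqs:                         # one-step-ahead expected information gain
                for c in contrasts:
                    pc = p_correct(G, F, f, c)      # per-hypothesis probability of "correct"
                    m = float(np.sum(prior * pc))   # predictive probability of "correct"
                    post_c = prior * pc / m
                    post_i = prior * (1.0 - pc) / (1.0 - m)
                    eig = entropy(prior) - (m * entropy(post_c) + (1.0 - m) * entropy(post_i))
                    if eig > best_eig:
                        best_eig, best_stim = eig, (f, c)
            f, c = best_stim
            correct = rng.random() < p_correct(true_gain, true_peak, f, c)
            likelihood = p_correct(G, F, f, c) if correct else 1.0 - p_correct(G, F, f, c)
            prior = prior * likelihood
            prior /= prior.sum()

        i, j = np.unravel_index(np.argmax(prior), prior.shape)
        print(f"MAP estimate after 50 trials: peak gain ~ {G[i, j]:.0f}, peak frequency ~ {F[i, j]:.1f} cpd")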

  13. Predictive simulation of wind turbine wake interaction with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2015-11-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that considerably affect the power output of downwind turbines. The computational prediction of this phenomenon is challenging, as efficient low-dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed the first version of a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that meets these requirements rather naturally and enables first-principles simulations of wake-turbine interaction phenomena at reasonable computational cost. The presentation will describe the employed algorithms and present relevant verification and validation computations. For instance, power and thrust coefficients of a Vestas V27 turbine are predicted within 5% of the manufacturer's specifications. Simulations of three Vestas V27 225 kW turbines in a triangular arrangement analyze the reduction in power production due to upstream wake generation for different inflow conditions.
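
    The paper's solver is a parallel, adaptive lattice Boltzmann LES with embedded moving turbine geometries; none of that fits in a few lines, but the basic stream-and-collide structure does. The sketch below is a minimal single-block D2Q9 BGK kernel on a periodic box, checked against the analytical viscous decay of a shear wave; grid size, relaxation time and initial condition are arbitrary choices, not the paper's configuration.

        import numpy as np

        # Minimal D2Q9 BGK lattice Boltzmann kernel on a periodic box (decaying shear wave).
        nx = ny = 64
        tau = 0.8                                   # relaxation time; nu = (tau - 0.5) / 3
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)
        e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

        def equilibrium(rho, ux, uy):
            feq = np.empty((9, nx, ny))
            usq = ux**2 + uy**2
            for i in range(9):
                eu = e[i, 0] * ux + e[i, 1] * uy
                feq[i] = w[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
            return feq

        # Initial condition: a single shear wave u_x(y) = u0 sin(2*pi*y/ny).
        x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
        u0, k = 0.05, 2 * np.pi / ny
        rho = np.ones((nx, ny))
        f = equilibrium(rho, u0 * np.sin(k * y), np.zeros((nx, ny)))

        nu = (tau - 0.5) / 3.0
        for step in range(1, 1201):
            # Streaming: shift each population along its lattice velocity (periodic box).
            for i in range(9):
                f[i] = np.roll(np.roll(f[i], e[i, 0], axis=0), e[i, 1], axis=1)
            # Collision: BGK relaxation toward the local equilibrium.
            rho = f.sum(axis=0)
            ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
            uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
            f += (equilibrium(rho, ux, uy) - f) / tau
            if step % 400 == 0:
                print(f"step {step}: max u_x = {np.abs(ux).max():.5f},",
                      f"theory = {u0 * np.exp(-nu * k**2 * step):.5f}")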

  14. Real-Time Reconfigurable Adaptive Speech Recognition Command and Control Apparatus and Method

    NASA Technical Reports Server (NTRS)

    Salazar, George A. (Inventor); Haynes, Dena S. (Inventor); Sommers, Marc J. (Inventor)

    1998-01-01

    An adaptive speech recognition and control system, and a method for controlling various mechanisms and systems in response to spoken instructions, are discussed; spoken commands direct the system into appropriate memory nodes and to the appropriate memory templates corresponding to the voiced command. Spoken commands from any of a group of operators for which the system is trained may be identified, and voice templates are updated as required in response to changes in pronunciation and voice characteristics of any of those operators over time. Provisions are made both for near-real-time retraining of the system with respect to individual terms which are determined not to be positively identified, and for an overall system training and updating process in which recognition of each command and vocabulary term is checked, and in which the memory templates are retrained if necessary for respective commands or vocabulary terms with respect to the operator currently using the system. In one embodiment, the system includes input circuitry connected to a microphone and including signal processing and control sections for sensing the level of vocabulary recognition over a given period and, if recognition performance falls below a given level, processing audio-derived signals to enhance the recognition performance of the system.

  15. System and method for the adaptive mapping of matrix data to sets of polygons

    NASA Technical Reports Server (NTRS)

    Burdon, David (Inventor)

    2003-01-01

    A system and method for converting bitmapped data, for example, weather data or thermal imaging data, to polygons is disclosed. The conversion of the data into polygons creates smaller data files. The invention is adaptive in that it allows for a variable degree of fidelity of the polygons. Matrix data is obtained. A color value is obtained; the color value is a variable used in the creation of the polygons. A list of cells to check is determined based on the color value. The list of cells to check is examined in order to determine a boundary list. The boundary list is then examined to determine vertices; the determination of the vertices is based on a prescribed maximum distance. When drawn, the ordered list of vertices creates polygons which depict the cell data. The data files which include the vertices for the polygons are much smaller than the corresponding cell data files. The fidelity of the polygon representation can be adjusted by repeating the logic with varying fidelity values to achieve a given maximum file size or a maximum number of vertices per polygon.
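
    The vertex-determination step driven by a prescribed maximum distance can be illustrated in isolation. The sketch below greedily thins an already ordered boundary polyline: a segment is extended until some skipped boundary cell deviates from it by more than the allowed distance, at which point a vertex is emitted. The boundary-tracing step that produces the ordered list, and the staircase example, are outside the patent text and are assumptions for illustration.

        import math

        def point_segment_distance(p, a, b):
            """Perpendicular distance from point p to segment a-b."""
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == 0 and dy == 0:
                return math.hypot(px - ax, py - ay)
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
            return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

        def reduce_vertices(boundary, max_distance):
            """Greedy thinning of an ordered boundary polyline: extend the current segment
            until some skipped point deviates by more than max_distance, then emit a vertex."""
            vertices = [boundary[0]]
            anchor = 0
            i = 2
            while i < len(boundary):
                deviation = max(point_segment_distance(boundary[j], boundary[anchor], boundary[i])
                                for j in range(anchor + 1, i))
                if deviation > max_distance:
                    vertices.append(boundary[i - 1])
                    anchor = i - 1
                i += 1
            vertices.append(boundary[-1])
            return vertices

        # Example: a jagged staircase boundary collapses to a two-vertex segment.
        stairs = [(x, x // 2) for x in range(20)]
        print(reduce_vertices(stairs, max_distance=0.6))

    Tightening max_distance keeps more vertices and hence more fidelity, which is the file-size versus fidelity trade-off the patent describes.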

  16. High-order solution-adaptive central essentially non-oscillatory (CENO) method for viscous flows

    NASA Astrophysics Data System (ADS)

    Ivan, Lucian; Groth, Clinton P. T.

    2014-01-01

    A high-order, central, essentially non-oscillatory (CENO), finite-volume scheme in combination with a block-based adaptive mesh refinement (AMR) algorithm is proposed for solution of the Navier-Stokes equations on body-fitted multi-block mesh. In contrast to other ENO schemes which require reconstruction on multiple stencils, the proposed CENO method uses a hybrid reconstruction approach based on a fixed central stencil. This feature is crucial to avoiding the complexities associated with multiple stencils of ENO schemes, providing high-order accuracy at relatively lower computational cost as well as being very well suited for extension to unstructured meshes. The spatial discretization of the inviscid (hyperbolic) fluxes combines an unlimited high-order k-exact least-squares reconstruction technique following from the optimal central stencil with a monotonicity-preserving, limited, linear, reconstruction algorithm. This hybrid reconstruction procedure retains the unlimited high-order k-exact reconstruction for cells in which the solution is fully resolved and reverts to the limited lower-order counterpart for cells with under-resolved/discontinuous solution content. Switching in the hybrid procedure is determined by a smoothness indicator. The high-order viscous (elliptic) fluxes are computed to the same order of accuracy as the hyperbolic fluxes based on a k-order accurate cell interface gradient derived from the unlimited, cell-centred, reconstruction. A somewhat novel h-refinement criterion based on the solution smoothness indicator is used to direct the steady and unsteady mesh adaptation. The proposed numerical procedure is thoroughly analyzed for advection-diffusion problems characterized by the full range of Péclet numbers, and its predictive capabilities are also demonstrated for several inviscid and laminar flows. The ability of the scheme to accurately represent solutions with smooth extrema and yet robustly handle under-resolved and/or non
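
    A one-dimensional toy can illustrate the hybrid switching idea: cells whose central polynomial reconstruction reproduces the surrounding data well (as measured by a smoothness indicator of the form alpha/(1-alpha)) keep the unlimited quadratic, while the rest fall back to a minmod-limited linear slope. The indicator, threshold and the treatment of cell averages as point values below are simplifications for illustration, not the paper's exact formulation.

        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def hybrid_ceno_1d(u, dx, s_thresh=1000.0):
            """For each interior cell, return (slope, curvature, limited_flag).

            Cells whose central quadratic fit reproduces the 5-cell neighbourhood well
            (large smoothness indicator) keep the unlimited quadratic; the rest fall back
            to a minmod-limited linear reconstruction. Cell averages are treated as point
            values here to keep the sketch short.
            """
            n = len(u)
            xi = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
            V = np.vander(xi, 3, increasing=True)          # columns: 1, xi, xi^2
            out = []
            for i in range(2, n - 2):
                stencil = u[i - 2:i + 3]
                coef, res, *_ = np.linalg.lstsq(V, stencil, rcond=None)
                ss_res = res[0] if res.size else 0.0
                ss_tot = np.sum((stencil - stencil.mean())**2) + 1e-14
                alpha = 1.0 - ss_res / ss_tot
                indicator = alpha / max(1.0 - alpha, 1e-14)
                if indicator > s_thresh:                   # resolved: unlimited quadratic
                    out.append((coef[1] / dx, 2.0 * coef[2] / dx**2, False))
                else:                                      # under-resolved: limited linear
                    slope = minmod(u[i] - u[i - 1], u[i + 1] - u[i]) / dx
                    out.append((float(slope), 0.0, True))
            return out

        # Smooth wave on the left, a step on the right: only cells near the step get limited.
        x = np.linspace(0.0, 1.0, 101)[:-1] + 0.005
        u = np.where(x < 0.5, np.sin(2 * np.pi * x), 2.0)
        flags = [lim for _, _, lim in hybrid_ceno_1d(u, dx=0.01)]
        print("limited cells:", [i + 2 for i, lim in enumerate(flags) if lim])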

  17. A resilience perspective to water risk management: case-study application of the adaptation tipping point method

    NASA Astrophysics Data System (ADS)

    Gersonius, Berry; Ashley, Richard; Jeuken, Ad; Nasruddin, Fauzy; Pathirana, Assela; Zevenbergen, Chris

    2010-05-01

    In a context of high uncertainty about hydrological variables due to climate change and other factors, the development of updated risk management approaches is as important as—if not more important than—the provision of improved data and forecasts of the future. Traditional approaches to adaptation attempt to manage future water risks to cities with the use of the predict-then-adapt method. This method uses hydrological change projections as the starting point to identify adaptive strategies, which is followed by analysing the cause-effect chain based on some sort of Pressures-State-Impact-Response (PSIR) scheme. The predict-then-adapt method presumes that it is possible to define a singular (optimal) adaptive strategy according to a most likely or average projection of future change. A key shortcoming of the method is, however, that the planning of water management structures is typically decoupled from forecast uncertainties and is, as such, inherently inflexible. This means that there is an increased risk of under- or over-adaptation, resulting in either mal-functioning or unnecessary costs. Rather than taking a traditional approach, responsible water risk management requires an alternative approach to adaptation that recognises and cultivates resiliency for change. The concept of resiliency relates to the capability of complex socio-technical systems to make aspirational levels of functioning attainable despite the occurrence of possible changes. Focusing on resiliency does not attempt to reduce uncertainty associated with future change, but rather to develop better ways of managing it. This makes it a particularly relevant perspective for adaptation to long-term hydrological change. Although resiliency is becoming more refined as a theory, the application of the concept to water risk management is still in an initial phase. Different methods are used in practice to support the implementation of a resilience-focused approach. Typically these approaches

  18. Three-dimensional multi bioluminescent sources reconstruction based on adaptive finite element method

    NASA Astrophysics Data System (ADS)

    Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong

    2011-03-01

    Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) has found increasingly wide application in tumor detection and in the evaluation of pharmacodynamics, toxicity and pharmacokinetics because of its noninvasive molecular- and cellular-level detection ability, high sensitivity and low cost in comparison with other imaging technologies. However, BLI cannot accurately present the location and intensity of internal bioluminescence sources, such as those in the bone, liver or lung. Bioluminescent tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Given this deficiency of two-dimensional imaging, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measurement data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm was used for localizing and quantifying multiple bioluminescence sources. Optical and anatomical information of the tissues is incorporated as a priori knowledge in this method, which can reduce the ill-posedness of BLT. The data were acquired with the dual-modality BLT and micro-CT prototype system that we developed. Through temperature control and absolute intensity calibration, a relatively accurate intensity can be calculated. The location of the OC accumulation was reconstructed, which was consistent with the principle of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.
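
    The role of a priori information in taming the ill-posed source problem can be shown on a generic linear toy: far fewer boundary measurements than unknown source voxels, with Tikhonov regularization and a nonnegativity constraint standing in for the optical/anatomical priors. The random system matrix and projected-gradient solver below are purely illustrative and are not the paper's multilevel adaptive FEM reconstruction.

        import numpy as np

        rng = np.random.default_rng(3)

        # Ill-posed toy source problem: boundary measurements y = W q, with far fewer
        # measurements than unknown source voxels (W here is random and illustrative).
        n_meas, n_src = 40, 200
        W = rng.normal(size=(n_meas, n_src))
        q_true = np.zeros(n_src)
        q_true[[50, 51, 140]] = [1.0, 0.8, 0.6]      # a few localized sources
        y = W @ q_true + 0.01 * rng.normal(size=n_meas)

        def recover(W, y, lam=0.1, iters=5000, step=None):
            """Projected gradient descent on ||W q - y||^2 + lam ||q||^2 with q >= 0.

            The Tikhonov term and the nonnegativity projection stand in for the a priori
            knowledge that regularizes the reconstruction."""
            if step is None:
                step = 1.0 / (np.linalg.norm(W, 2)**2 + lam)
            q = np.zeros(W.shape[1])
            for _ in range(iters):
                grad = W.T @ (W @ q - y) + lam * q
                q = np.maximum(q - step * grad, 0.0)
            return q

        q_rec = recover(W, y)
        print("indices of the largest recovered entries:", np.argsort(q_rec)[-3:][::-1])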

  19. A self-adaptive oriented particles Level-Set method for tracking interfaces

    NASA Astrophysics Data System (ADS)

    Ianniello, S.; Di Mascio, A.

    2010-02-01

    A new method for tracking evolving interfaces by Lagrangian particles in conjunction with a Level-Set approach is introduced. This numerical technique is based on the use of time evolution equations for fundamental vector and tensor quantities defined on the front and represents a new and convenient way to couple the advantages of the Eulerian description given by a Level-Set function ϕ with the use of Lagrangian massless particles. The term oriented points out that the information advected by the particles concerns not only the spatial location, but also the local (outward) normal vector n to the interface Γ and the second fundamental tensor (the shape operator) ∇n. The particles are located exactly upon Γ and provide, on their own, all the information required for tracking the interface. In addition, a self-adaptive mechanism suitably modifies, at each time step, the marker distribution in the numerical domain: each particle behaves both as a potential seeder of new markers on Γ (so as to guarantee an accurate reconstruction of the interface) and as a de-seeder (to avoid any useless gathering of markers and to limit the computational effort). The algorithm is conceived to avoid any transport equation for ϕ and to confine the Level-Set function to the role of a mere post-processing tool; thus, all the numerical diffusion problems usually affecting the Level-Set methodology are removed. The method has been tested on both 2D and 3D configurations; it carries out a fast reconstruction of the interface and its accuracy is limited only by the spatial resolution of the mesh.
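
    The seeding/de-seeding mechanism can be illustrated with a minimal 2D toy: markers on a closed front are advected along their local normals, midpoints are inserted where neighbours drift too far apart, and markers are dropped where they bunch up. Unlike the paper's method, the normals here are recomputed from neighbouring markers rather than evolved by transport equations, and the spacing bounds and velocity field are arbitrary choices.

        import numpy as np

        def normals(points):
            """Outward unit normals from centred tangents on a closed, counterclockwise marker loop."""
            t = np.roll(points, -1, axis=0) - np.roll(points, 1, axis=0)
            n = np.stack([t[:, 1], -t[:, 0]], axis=1)
            return n / np.linalg.norm(n, axis=1, keepdims=True)

        def reseed(points, dmin=0.02, dmax=0.06):
            """Insert markers where neighbours drift apart, drop markers where they bunch up."""
            out = [points[0]]
            for p in points[1:]:
                gap = np.linalg.norm(p - out[-1])
                if gap < dmin:
                    continue                                  # de-seed
                if gap > dmax:
                    k = int(np.ceil(gap / dmax))
                    base = out[-1]
                    for j in range(1, k):                     # seed intermediate markers
                        out.append(base + (j / k) * (p - base))
                out.append(p)
            return np.array(out)

        # Markers on a circle expanding under a radial (normal-direction) velocity field.
        theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
        front = 0.3 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
        speed, dt = 0.3, 0.05
        for step in range(40):
            front = front + dt * speed * normals(front)       # advect along the local normal
            front = reseed(front)
        print("markers after expansion:", len(front), " mean radius:",
              round(float(np.linalg.norm(front, axis=1).mean()), 3))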

  20. A gradient-free adaptation method for nonlinear active noise control

    NASA Astrophysics Data System (ADS)

    Spiriti, Emanuele; Morici, Simone; Piroddi, Luigi

    2014-01-01

    Active Noise Control (ANC) problems are often affected by nonlinear effects, such as saturation and distortion of microphones and loudspeakers. Nonlinear models and specific adaptation algorithms must be employed to properly account for these effects. The nonlinear structure of the problem complicates the application of gradient-based Least Mean Squares (LMS) algorithms, because exact gradient calculation requires executing nonlinear recursive filtering operations, which pose computational and stability issues. One favored solution to this problem consists in neglecting recursive terms in the gradient calculation, an approximation that is not always without consequences for convergence performance. Besides, an efficient application of nonlinear models cannot avoid some form of model structure selection, both to limit the well-known effects of overparametrization and to reduce the on-line computational load. Unfortunately, the standard ANC setting constitutes an indirect identification problem, due to the presence of the secondary path in the control loop. In the nonlinear case, this destroys the linear regression structure of the problem even if the control filter is linear-in-the-parameters, thereby making it impossible to apply the many existing model selection methods for linear regression problems. A simple and computationally undemanding approach to parameter estimation and model structure selection is proposed here that provides an answer to these issues. The proposed method avoids the use of the error gradient altogether and relies on direct cost function evaluations. A virtualization scheme is used to assess the accuracy improvements when the model is subject to parametric or structural modifications, without directly affecting the control performance. Several simulation examples are discussed to show the effectiveness of the proposed algorithms.
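
    The core idea of adaptation by direct cost evaluations, with no gradient propagated through the nonlinear loop, can be sketched with a simple accept-if-better random search on the weights of a small nonlinear controller; the measured block-averaged residual power is the only feedback used. The reference signal, primary and secondary paths, controller structure and step-size rule are all invented for illustration and are not the paper's algorithm or its virtualization scheme.

        import numpy as np

        rng = np.random.default_rng(4)

        n_block = 400
        t = np.arange(n_block)
        x = np.sin(0.05 * np.pi * t) + 0.3 * np.sin(0.15 * np.pi * t)   # reference signal
        d = 0.9 * np.roll(x, 3) + 0.2 * np.roll(x, 3)**2                # primary noise (nonlinear)

        def secondary_path(y):
            """Unknown-to-the-controller loudspeaker/room path with mild saturation."""
            lin = np.convolve(y, [0.0, 0.8, 0.3])[:len(y)]
            return np.tanh(lin)

        def residual_power(w):
            """Measured cost: mean squared error-microphone signal for controller weights w."""
            y = w[0] * x + w[1] * np.roll(x, 1) + w[2] * x**2           # small nonlinear controller
            e = d + secondary_path(y)
            return float(np.mean(e**2))

        # Gradient-free adaptation: random perturbation, kept only if the measured
        # residual power drops (no gradient of the nonlinear loop is ever formed).
        w = np.zeros(3)
        sigma = 0.2
        best = residual_power(w)
        for it in range(600):
            trial = w + sigma * rng.normal(size=3)
            cost = residual_power(trial)
            if cost < best:
                w, best = trial, cost
                sigma *= 1.05            # expand the search while improving
            else:
                sigma *= 0.99            # contract it otherwise
        print("residual power without control:", round(float(np.mean(d**2)), 4))
        print("residual power after adaptation:", round(best, 4))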