Sample records for simple addition problems

  1. No Generalization of Practice for Nonzero Simple Addition

    ERIC Educational Resources Information Center

    Campbell, Jamie I. D.; Beech, Leah C.

    2014-01-01

    Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…

  2. Operator priming and generalization of practice in adults' simple arithmetic.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2016-04-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training (engineering and computer science students) would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again presented generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved.

  3. "Compacted" procedures for adults' simple addition: A review and critique of the evidence.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2018-04-01

    We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting); a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: These items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤4. Furthermore, we show that a well-known associative retrieval model of addition facts-the network interference theory (Campbell, 1995)-predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy.
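
    To make the sum-counting prediction discussed above concrete, a compact statement (our notation, not the original authors') is the following: for very-small problems with operands m ≠ n, both ≤ 4, the counting account predicts a response time that is linear in the sum,

      \mathrm{RT}(m + n) \;\approx\; a + b\,(m + n),

    with a single counting rate b, whereas an associative account such as the network interference theory ties RT to item-specific associative strength and interference rather than to the sum.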

  4. Tour of a Simple Trigonometry Problem

    ERIC Educational Resources Information Center

    Poon, Kin-Keung

    2012-01-01

    This article focuses on a simple trigonometric problem that generates a strange phenomenon when different methods are applied to tackling it. A series of problem-solving activities are discussed, so that students can be alerted that the precision of diagrams is important when solving geometric problems. In addition, the problem-solving plan was…

  5. Operator Priming and Generalization of Practice in Adults' Simple Arithmetic

    ERIC Educational Resources Information Center

    Chen, Yalin; Campbell, Jamie I. D.

    2016-01-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication,…

  6. Interoperation transfer in Chinese-English bilinguals' arithmetic.

    PubMed

    Campbell, Jamie I D; Dowd, Roxanne R

    2012-10-01

    We examined interoperation transfer of practice in adult Chinese-English bilinguals' memory for simple multiplication (6 × 8 = 48) and addition (6 + 8 = 14) facts. The purpose was to determine whether they possessed distinct number-fact representations in both Chinese (L1) and English (L2). Participants repeatedly practiced multiplication problems (e.g., 4 × 5 = ?), answering a subset in L1 and another subset in L2. Then separate groups answered corresponding addition problems (4 + 5 = ?) and control addition problems in either L1 (N = 24) or L2 (N = 24). The results demonstrated language-specific negative transfer of multiplication practice to corresponding addition problems. Specifically, large simple addition problems (sum > 10) presented a significant response time cost (i.e., retrieval-induced forgetting) after their multiplication counterparts were practiced in the same language, relative to practice in the other language. The results indicate that our Chinese-English bilinguals had multiplication and addition facts represented in distinct language-specific memory stores.

  7. Tour of a simple trigonometry problem

    NASA Astrophysics Data System (ADS)

    Poon, Kin-Keung

    2012-06-01

    This article focuses on a simple trigonometric problem that generates a strange phenomenon when different methods are applied to tackling it. A series of problem-solving activities are discussed, so that students can be alerted that the precision of diagrams is important when solving geometric problems. In addition, the problem-solving plan was implemented in a high school and the results indicated that students are relatively weak in problem-solving abilities but they understand and appreciate the thinking process in different stages and steps of the activities.

  8. Strategies of Pre-Service Primary School Teachers for Solving Addition Problems with Negative Numbers

    ERIC Educational Resources Information Center

    Almeida, Rut; Bruno, Alicia

    2014-01-01

    This paper analyses the strategies used by pre-service primary school teachers for solving simple addition problems involving negative numbers. The findings reveal six different strategies that depend on the difficulty of the problem and, in particular, on the unknown quantity. We note that students use negative numbers in those problems they find…

  9. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

    Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly alleviate these scalability problems.
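
    To fix ideas about what a "simple genetic algorithm" is in this context, the sketch below is a minimal, generic implementation on the one-max toy problem (a hypothetical Python illustration; it is not the GA variants, parameters, or test problems analyzed in the abstract above).

      import random

      def one_max(bits):
          # Fitness of a bit string: the number of ones (a classic GA toy problem).
          return sum(bits)

      def simple_ga(n_bits=40, pop_size=60, generations=100, p_cross=0.9, seed=0):
          # Canonical simple GA: tournament selection, one-point crossover,
          # bit-flip mutation, full generational replacement.
          rng = random.Random(seed)
          p_mut = 1.0 / n_bits
          pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          best = max(pop, key=one_max)

          def tournament():
              a, b = rng.sample(pop, 2)
              return a if one_max(a) >= one_max(b) else b

          for _ in range(generations):
              children = []
              while len(children) < pop_size:
                  p1, p2 = tournament(), tournament()
                  if rng.random() < p_cross:
                      cut = rng.randrange(1, n_bits)
                      c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  else:
                      c1, c2 = p1[:], p2[:]
                  for child in (c1, c2):
                      for i in range(n_bits):
                          if rng.random() < p_mut:
                              child[i] ^= 1
                      children.append(child)
              pop = children[:pop_size]
              best = max(pop + [best], key=one_max)
          return best

      print(one_max(simple_ga()))  # at or near n_bits for this easy problem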

  10. The Use of Procedural Knowledge in Simple Addition and Subtraction Problems

    ERIC Educational Resources Information Center

    Fayol, Michel; Thevenot, Catherine

    2012-01-01

    In a first experiment, adults were asked to solve one-digit additions, subtractions and multiplications. When the sign appeared 150 ms before the operands, addition and subtraction were solved faster than when the sign and the operands appeared simultaneously on screen. This priming effect was not observed for multiplication problems. A second…

  11. Cognitive Addition: Comparison of Learning Disabled and Academically Normal Children.

    ERIC Educational Resources Information Center

    Geary, David C.; And Others

    To isolate the process deficits underlying a specific learning disability in mathematics achievement, 77 academically normal and 46 learning disabled (LD) students in second, fourth or sixth grade were presented 140 simple addition problems using a true-false reaction time verification paradigm. (The problems were on a video screen controlled by…

  12. Simple mental addition in children with and without mild mental retardation.

    PubMed

    Janssen, R; De Boeck, P; Viaene, M; Vallaeys, L

    1999-11-01

    The speeded performance on simple mental addition problems of 6- and 7-year-old children with and without mild mental retardation is modeled from a person perspective and an item perspective. On the person side, it was found that a single cognitive dimension spanned the performance differences between the two ability groups. However, a discontinuity, or "jump," was observed in the performance of the normal ability group on the easier items. On the item side, the addition problems were almost perfectly ordered in difficulty according to their problem size. Differences in difficulty were explained by factors related to the difficulty of executing nonretrieval strategies. All findings were interpreted within the framework of Siegler's (e.g., R. S. Siegler & C. Shipley, 1995) model of children's strategy choices in arithmetic. Models from item response theory were used to test the hypotheses. Copyright 1999 Academic Press.

  13. Additive schemes for certain operator-differential equations

    NASA Astrophysics Data System (ADS)

    Vabishchevich, P. N.

    2010-12-01

    Unconditionally stable finite difference schemes for the time approximation of first-order operator-differential systems with self-adjoint operators are constructed. Such systems arise in many applied problems, for example, in connection with nonstationary problems for the system of Stokes (Navier-Stokes) equations. Stability conditions in the corresponding Hilbert spaces for two-level weighted operator-difference schemes are obtained. Additive (splitting) schemes are proposed that involve the solution of simple problems at each time step. The results are used to construct splitting schemes with respect to spatial variables for nonstationary Navier-Stokes equations for incompressible fluid. The capabilities of additive schemes are illustrated using a two-dimensional model problem as an example.
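
    As a rough illustration of the kind of splitting referred to above (a generic two-component factorized scheme of the type analyzed in this literature, not necessarily the particular schemes constructed in the paper), consider

      \frac{du}{dt} + A u = f(t), \qquad A = A_1 + A_2, \qquad A_\alpha = A_\alpha^{*} \ge 0 .

    A factorized weighted two-level scheme then advances the solution via

      \bigl(I + \sigma\tau A_1\bigr)\bigl(I + \sigma\tau A_2\bigr)\,\frac{y^{n+1} - y^{n}}{\tau} + A\,y^{n} = f^{n},

    so that each time step only requires solving simpler problems involving the individual operators A_1 and A_2; unconditional stability can be established in suitable Hilbert-space norms under standard conditions on the weight (typically \sigma \ge 1/2).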

  14. Posing Problems to Understand Children's Learning of Fractions

    ERIC Educational Resources Information Center

    Cheng, Lu Pien

    2013-01-01

    In this study, ways in which problem posing activities aid our understanding of children's learning of addition of unlike fractions and product of proper fractions were examined. In particular, how a simple problem posing activity helps teachers take a second, deeper look at children's understanding of fraction concepts will be discussed. The…

  15. Pictorial Representations of Simple Arithmetic Problems Are Not Always Helpful: A Cognitive Load Perspective

    ERIC Educational Resources Information Center

    van Lieshout, Ernest C. D. M.; Xenidou-Dervou, Iro

    2018-01-01

    At the start of mathematics education children are often presented with addition and subtraction problems in the form of pictures. They are asked to solve the problems by filling in corresponding number sentences. One type of problem concerns the representation of an increase or a decrease in a depicted amount. A decrease is, however, more…

  16. On the Problem-Size Effect in Small Additions: Can We Really Discard Any Counting-Based Account?

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Thevenot, Catherine

    2013-01-01

    The problem-size effect in simple additions, that is the increase in response times (RTs) and error rates with the size of the operands, is one of the most robust effects in cognitive arithmetic. Current accounts focus on factors that could affect speed of retrieval of the answers from long-term memory such as the occurrence of interference in a…

  17. Adults' strategies for simple addition and multiplication: verbal self-reports and the operand recognition paradigm.

    PubMed

    Metcalfe, Arron W S; Campbell, Jamie I D

    2011-05-01

    Accurate measurement of cognitive strategies is important in diverse areas of psychological research. Strategy self-reports are a common measure, but C. Thevenot, M. Fanget, and M. Fayol (2007) proposed a more objective method to distinguish different strategies in the context of mental arithmetic. In their operand recognition paradigm, speed of recognition memory for problem operands after solving a problem indexes strategy (e.g., direct memory retrieval vs. a procedural strategy). Here, in 2 experiments, operand recognition time was the same following simple addition or multiplication, but, consistent with a wide variety of previous research, strategy reports indicated much greater use of procedures (e.g., counting) for addition than multiplication. Operation, problem size (e.g., 2 + 3 vs. 8 + 9), and operand format (digits vs. words) had interactive effects on reported procedure use that were not reflected in recognition performance. Regression analyses suggested that recognition time was influenced at least as much by the relative difficulty of the preceding problem as by the strategy used. The findings indicate that the operand recognition paradigm is not a reliable substitute for strategy reports and highlight the potential impact of difficulty-related carryover effects in sequential cognitive tasks.

  18. The calculating hemispheres: studies of a split-brain patient.

    PubMed

    Funnell, Margaret G; Colvin, Mary K; Gazzaniga, Michael S

    2007-06-11

    The purpose of the study was to investigate simple calculation in the two cerebral hemispheres of a split-brain patient. In a series of four experiments, the left hemisphere was superior to the right in simple calculation, confirming the previously reported left hemisphere specialization for calculation. In two different recognition paradigms, right hemisphere performance was at chance for all arithmetic operations, with the exception of subtraction in a two-alternative forced choice paradigm (performance was at chance when the lure differed from the correct answer by a magnitude of 1 but above chance when the magnitude difference was 4). In a recall paradigm, the right hemisphere performed above chance for both addition and subtraction, but performed at chance levels for multiplication and division. The error patterns in that experiment suggested that for subtraction and addition, the right hemisphere does have some capacity for approximating the solution even when it is unable to generate the exact solution. Furthermore, right hemisphere accuracy in addition and subtraction was higher for problems with small operands than with large operands. An additional experiment assessed approximate and exact addition in the two hemispheres for problems with small and large operands. The left hemisphere was equally accurate in both tasks but the right hemisphere was more accurate in approximate addition than in exact addition. In exact addition, right hemisphere accuracy was higher for problems with small operands than large, but the opposite pattern was found for approximate addition.

  19. The Development from Effortful to Automatic Processing in Mathematical Cognition.

    ERIC Educational Resources Information Center

    Kaye, Daniel B.; And Others

    This investigation capitalizes upon the information processing models that depend upon measurement of latency of response to a mathematical problem and the decomposition of reaction time (RT). Simple two term addition problems were presented with possible solutions for true-false verification, and accuracy and RT to response were recorded. Total…

  20. Effects of Numerical Surface Form in Arithmetic Word Problems

    ERIC Educational Resources Information Center

    Orrantia, Josetxu; Múñez, David; San Romualdo, Sara; Verschaffel, Lieven

    2015-01-01

    Adults' simple arithmetic performance is more efficient when operands are presented in Arabic digit (3 + 5) than in number word (three + five) formats. An explanation provided is that visual familiarity with digits is higher than with number words. However, most studies have been limited to single-digit addition and multiplication problems. In…

  1. Overview and extensions of a system for routing directed graphs on SIMD architectures

    NASA Technical Reports Server (NTRS)

    Tomboulian, Sherryl

    1988-01-01

    Many problems can be described in terms of directed graphs that contain a large number of vertices where simple computations occur using data from adjacent vertices. A method is given for parallelizing such problems on an SIMD machine model that uses only nearest neighbor connections for communication, and has no facility for local indirect addressing. Each vertex of the graph will be assigned to a processor in the machine. Rules for a labeling are introduced that support the use of a simple algorithm for movement of data along the edges of the graph. Additional algorithms are defined for addition and deletion of edges. Modifying or adding a new edge takes the same time as parallel traversal. This combination of architecture and algorithms defines a system that is relatively simple to build and can do fast graph processing. All edges can be traversed in parallel in time O(T), where T is empirically proportional to the average path length in the embedding times the average degree of the graph. Additionally, researchers present an extension to the above method which allows for enhanced performance by allowing some broadcasting capabilities.

  2. Attentional bias induced by solving simple and complex addition and subtraction problems.

    PubMed

    Masson, Nicolas; Pesenti, Mauro

    2014-01-01

    The processing of numbers has been shown to induce shifts of spatial attention in simple probe detection tasks, with small numbers orienting attention to the left and large numbers to the right side of space. Recently, the investigation of this spatial-numerical association has been extended to mental arithmetic with the hypothesis that solving addition or subtraction problems may induce attentional displacements (to the right and to the left, respectively) along a mental number line onto which the magnitude of the numbers would range from left to right, from small to large numbers. Here we investigated such attentional shifts using a target detection task primed by arithmetic problems in healthy participants. The constituents of the addition and subtraction problems (first operand; operator; second operand) were flashed sequentially in the centre of a screen, then followed by a target on the left or the right side of the screen, which the participants had to detect. This paradigm was employed with arithmetic facts (Experiment 1) and with more complex arithmetic problems (Experiment 2) in order to assess the effects of the operation, the magnitude of the operands, the magnitude of the results, and the presence or absence of a requirement for the participants to carry or borrow numbers. The results showed that arithmetic operations induce some spatial shifts of attention, possibly through a semantic link between the operation and space.

  3. Production system chunking in SOAR: Case studies in automated learning

    NASA Technical Reports Server (NTRS)

    Allen, Robert

    1989-01-01

    A preliminary study of SOAR, a general intelligent architecture for automated problem solving and learning, is presented. The underlying principles of universal subgoaling and chunking were applied to a simple, yet representative, problem in artificial intelligence. A number of problem space representations were examined and compared. It is concluded that learning is an inherent and beneficial aspect of problem solving. Additional studies are suggested in domains relevant to mission planning and to SOAR itself.

  4. Brain Hyper-Connectivity and Operation-Specific Deficits during Arithmetic Problem Solving in Children with Developmental Dyscalculia

    ERIC Educational Resources Information Center

    Rosenberg-Lee, Miriam; Ashkenazi, Sarit; Chen, Tianwen; Young, Christina B.; Geary, David C.; Menon, Vinod

    2015-01-01

    Developmental dyscalculia (DD) is marked by specific deficits in processing numerical and mathematical information despite normal intelligence (IQ) and reading ability. We examined how brain circuits used by young children with DD to solve simple addition and subtraction problems differ from those used by typically developing (TD) children who…

  5. Formal language constrained path problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language, they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
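
    One standard way to obtain the polynomial-time result for regular-language constraints is a product construction: run Dijkstra's algorithm on the product of the labeled graph with a deterministic automaton for the language. The sketch below is a hypothetical Python illustration of that idea, not the authors' improved-space-and-time algorithm.

      import heapq

      def constrained_shortest_path(graph, dfa_delta, dfa_start, dfa_accept, source, target):
          # graph: {u: [(v, label, weight), ...]}  -- labeled, weighted digraph
          # dfa_delta: {(state, label): next_state} -- DFA for the allowed label language
          # Dijkstra on product states (vertex, dfa_state); a path is feasible iff
          # its label sequence drives the DFA from dfa_start into an accepting state.
          start = (source, dfa_start)
          dist = {start: 0.0}
          heap = [(0.0, start)]
          while heap:
              d, (u, q) = heapq.heappop(heap)
              if d > dist.get((u, q), float("inf")):
                  continue  # stale heap entry
              if u == target and q in dfa_accept:
                  return d
              for v, label, w in graph.get(u, []):
                  q2 = dfa_delta.get((q, label))
                  if q2 is None:
                      continue  # this label is not allowed from the current DFA state
                  nd = d + w
                  if nd < dist.get((v, q2), float("inf")):
                      dist[(v, q2)] = nd
                      heapq.heappush(heap, (nd, (v, q2)))
          return None  # no feasible path

      # Tiny example: only paths whose label sequence matches a*b* are allowed.
      g = {"s": [("m", "a", 1.0), ("t", "b", 5.0)], "m": [("t", "b", 1.0)]}
      delta = {(0, "a"): 0, (0, "b"): 1, (1, "b"): 1}
      print(constrained_shortest_path(g, delta, 0, {1}, "s", "t"))  # -> 2.0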

  6. A More Intuitive Version of the Lorentz Velocity Addition Formula

    ERIC Educational Resources Information Center

    Devlin, John F.

    2009-01-01

    The Lorentz velocity addition formula for one-dimensional motion presents a number of problems for beginning students of special relativity. In this paper we suggest a simple rewrite of the formula that is easier for students to memorize and manipulate, and furthermore is more intuitive in understanding the correction necessary when adding…
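
    For reference, the standard one-dimensional composition law under discussion (the article's proposed rewrite is not reproduced here) is

      w \;=\; \frac{u + v}{1 + \dfrac{uv}{c^{2}}} ,

    and one familiar way to expose its structure is to note that rapidities add: with \phi = \operatorname{artanh}(v/c), the composed rapidity is simply \phi_w = \phi_u + \phi_v.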

  7. Primary Numberplay: "InterActivities" for the Discovery of Mathematics Concepts. User's Guide.

    ERIC Educational Resources Information Center

    Sullivan, W. Edward

    This document plus diskette product provides nine interactive puzzles and games that both teach and provide practice with simple addition and subtraction concepts. The activities address these skills through carrying in addition and regrouping in subtraction. The activities address cognitive skills such as problem solving, planning, visual pattern…

  8. Rocket Engine Oscillation Diagnostics

    NASA Technical Reports Server (NTRS)

    Nesman, Tom; Turner, James E. (Technical Monitor)

    2002-01-01

    Rocket engine oscillating data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.

  9. The relation between language and arithmetic in bilinguals: insights from different stages of language acquisition

    PubMed Central

    Van Rinsveld, Amandine; Brunner, Martin; Landerl, Karin; Schiltz, Christine; Ugen, Sonja

    2015-01-01

    Solving arithmetic problems is a cognitive task that heavily relies on language processing. One might thus wonder whether this language-reliance leads to qualitative differences (e.g., greater difficulties, error types, etc.) in arithmetic for bilingual individuals who frequently have to solve arithmetic problems in more than one language. The present study investigated how proficiency in two languages interacts with arithmetic problem solving throughout language acquisition in adolescents and young adults. Additionally, we examined whether the number word structure that is specific to a given language plays a role in number processing over and above bilingual proficiency. We addressed these issues in a German–French educational bilingual setting, where there is a progressive transition from German to French as teaching language. Importantly, German and French number naming structures differ clearly, as two-digit number names follow a unit-ten order in German, but a ten-unit order in French. We implemented a transversal developmental design in which bilingual pupils from grades 7, 8, 10, 11, and young adults were asked to solve simple and complex additions in both languages. The results confirmed that language proficiency is crucial especially for complex addition computation. Simple additions in contrast can be retrieved equally well in both languages after extended language practice. Additional analyses revealed that over and above language proficiency, language-specific number word structures (e.g., unit-ten vs. ten-unit) also induced significant modulations of bilinguals' arithmetic performances. Taken together, these findings support the view of a strong relation between language and arithmetic in bilinguals. PMID:25821442

  10. Babies and Math: A Meta-Analysis of Infants' Simple Arithmetic Competence

    ERIC Educational Resources Information Center

    Christodoulou, Joan; Lac, Andrew; Moore, David S.

    2017-01-01

    Wynn's (1992) seminal research reported that infants looked longer at stimuli representing "incorrect" versus "correct" solutions of basic addition and subtraction problems and concluded that infants have innate arithmetical abilities. Since then, infancy researchers have attempted to replicate this effect, yielding mixed…

  11. Vecksler-Macmillan phase stability for neutral atoms accelerated by a laser beam

    NASA Astrophysics Data System (ADS)

    Mel'nikov, I. V.; Haus, J. W.; Kazansky, P. G.

    2003-05-01

    We use a Fokker-Planck equation to study the phenomenon of accelerating a neutral atom bunch by a chirped optical beam. This method enables us to obtain a semi-analytical solution to the problem in which a wide range of parameters can be studied. In addition, it provides a simple physical interpretation where the problem is reduced to an analogous problem in charged-particle accelerators, that is, the Vecksler-Macmillan principle of phase stability. A possible experimental scenario is suggested, which uses a photonic crystal fiber as the guiding medium.

  12. Phase retrieval by constrained power inflation and signum flipping

    NASA Astrophysics Data System (ADS)

    Laganà, A. R.; Morabito, A. F.; Isernia, T.

    2016-12-01

    In this paper we consider the problem of retrieving a signal from the modulus of its Fourier transform (or other suitable transformations) and some additional information, which is also known as "Phase Retrieval" problem. The problem arises in many areas of applied Sciences such as optics, electron microscopy, antennas, and crystallography. In particular, we introduce a new approach, based on power inflation and tunneling, allowing an increased robustness with respect to the possible occurrence of false solutions. Preliminary results are presented for the simple yet relevant case of one-dimensional arrays and noisy data.
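
    For background on this class of problems, the sketch below shows the classical error-reduction (alternating-projection) baseline for one-dimensional phase retrieval with a support constraint; it is a generic Python illustration and not the power-inflation/signum-flipping approach introduced in the paper.

      import numpy as np

      def error_reduction(fourier_modulus, support, n_iter=500, seed=0):
          # Classical alternating projections: repeatedly impose the measured
          # Fourier modulus, then the known support of the signal.
          rng = np.random.default_rng(seed)
          x = rng.standard_normal(fourier_modulus.shape) * support
          for _ in range(n_iter):
              X = np.fft.fft(x)
              X = fourier_modulus * np.exp(1j * np.angle(X))  # keep measured modulus
              x = np.real(np.fft.ifft(X)) * support           # re-impose support
          return x

      # Tiny demo: the modulus mismatch shrinks, although 1-D problems are prone
      # to ambiguous/false solutions -- the issue motivating the paper above.
      true = np.zeros(32)
      true[:6] = [1.0, 2.0, 0.5, -1.0, 0.3, 1.5]
      supp = (np.arange(32) < 6).astype(float)
      rec = error_reduction(np.abs(np.fft.fft(true)), supp)
      print(np.linalg.norm(np.abs(np.fft.fft(rec)) - np.abs(np.fft.fft(true))))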

  13. Letting Your Students "Fly" in the Classroom.

    ERIC Educational Resources Information Center

    Adams, Thomas

    1997-01-01

    Students investigate the concept of motion by making simple paper airplanes and flying them in the classroom. Students are introduced to conversion factors to calculate various speeds. Additional activities include rounding decimal numbers, estimating, finding averages, making bar graphs, and solving problems. Offers ideas for extension such as…

  14. Copyright and CAI.

    ERIC Educational Resources Information Center

    Kearsley, G.P.; Hunka, S.

    The application of copyright laws to Computer Assisted Instruction (CAI) is not a simple matter of extending traditional literary practices because of the legal complications introduced by the use of computers to store and reproduce materials. In addition, CAI courseware poses some new problems for the definitions of educational usage. Some…

  15. Small Oscillations via Conservation of Energy

    ERIC Educational Resources Information Center

    Troy, Tia; Reiner, Megan; Haugen, Andrew J.; Moore, Nathan T.

    2017-01-01

    The work describes an analogy-based small oscillations analysis of a standard static equilibrium lab problem. In addition to force analysis, a potential energy function for the system is developed, and by drawing out mathematical similarities to the simple harmonic oscillator, we are able to describe (and experimentally verify) the period of small…

  16. Volume change and energy exchange: How they affect symmetry in the Noh problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vachal, Pavel; Wendroff, Burton

    The edge viscosity of Caramana, Shashkov and Whalen is known to fail on the Noh problem in an initially rectangular grid. In this paper, we present a simple change that significantly improves the behavior in that case. We also show that added energy exchange between cells improves the symmetry of both edge viscosity and the tensor viscosity of Campbell and Shashkov. Finally, as suggested by Noh, this addition also reduces the wall heating effect.

  17. Volume change and energy exchange: How they affect symmetry in the Noh problem

    DOE PAGES

    Vachal, Pavel; Wendroff, Burton

    2018-03-14

    The edge viscosity of Caramana, Shashkov and Whalen is known to fail on the Noh problem in an initially rectangular grid. In this paper, we present a simple change that significantly improves the behavior in that case. We also show that added energy exchange between cells improves the symmetry of both edge viscosity and the tensor viscosity of Campbell and Shashkov. Finally, as suggested by Noh, this addition also reduces the wall heating effect.

  18. Inertial Confinement fusion targets

    NASA Technical Reports Server (NTRS)

    Hendricks, C. D.

    1982-01-01

    Inertial confinement fusion (ICF) targets are made as simple flat discs, as hollow shells or as complicated multilayer structures. Many techniques were devised for producing the targets. Glass and metal shells are made by using drop and bubble techniques. Solid hydrogen shells are also produced by adapting old methods to the solution of modern problems. Some of these techniques, problems, and solutions are discussed. In addition, the applications of many of the techniques to fabrication of ICF targets is presented.

  19. Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications.

    PubMed

    Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres

    2016-01-01

    We present the results of a gamified mobile device arithmetic application which allowed us to collect a vast amount of data in simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (i.e. 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted for by the fact that it is easier to perform fewer additions of the largest number (i.e. 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (i.e. "seis por cuatro es veinticuatro") are performed faster. Above and beyond these results, our study bears an important practical conclusion, as proof of concept, that participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format.
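
    To make the two order effects concrete, the toy functions below encode the step counts implied by the accounts described above; the exact mapping from operand order to operation counts is an assumption made here for illustration, not the authors' model code.

      def addition_steps(first, second):
          # Translation account: start from the first addend and move a
          # distance given by the second addend.
          return second

      def repeated_addition_count(first, second):
          # Repeated-addition account, assuming the first operand is the one
          # added repeatedly and the second gives the number of copies,
          # e.g. 8 x 3 computed as 8 + 8 + 8 needs two additions.
          return second - 1

      print(addition_steps(7, 4), addition_steps(4, 7))                    # 4 vs 7
      print(repeated_addition_count(8, 3), repeated_addition_count(3, 8))  # 2 vs 7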

  20. Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications

    PubMed Central

    Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A.; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres

    2016-01-01

    We present the results of a gamified mobile device arithmetic application which allowed us to collect a vast amount of data in simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (i.e. 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted for by the fact that it is easier to perform fewer additions of the largest number (i.e. 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (i.e. "seis por cuatro es veinticuatro") are performed faster. Above and beyond these results, our study bears an important practical conclusion, as proof of concept, that participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format. PMID:28033357

  1. Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.

    PubMed

    Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. In the literature, a large number of techniques of this kind can be found. Anyway, there are many recently proposed techniques, such as the artificial bee colony and imperialist competitive algorithm. This paper is focused on one recently published technique, the one called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now, it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems, which are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with the ones got by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

  2. Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems

    PubMed Central

    Osaba, E.; Diaz, F.; Carballedo, R.; Onieva, E.; Perallos, A.

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. In the literature, a large number of techniques of this kind can be found. Anyway, there are many recently proposed techniques, such as the artificial bee colony and imperialist competitive algorithm. This paper is focused on one recently published technique, the one called Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now, it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems, which are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with the ones got by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results. PMID:25165742

  3. Eating breakfast enhances the efficiency of neural networks engaged during mental arithmetic in school-aged children

    USDA-ARS?s Scientific Manuscript database

    To determine the influence of a morning meal on complex mental functions in children (8-11 y), time-frequency analyses were applied to electroencephalographic (EEG) activity recorded while children solved simple addition problems after an overnight fast and again after having either eaten or skipped...

  4. Self-Regulated Learning of Basic Arithmetic Skills: A Longitudinal Study

    ERIC Educational Resources Information Center

    Throndsen, Inger

    2011-01-01

    Background: Several studies have examined young primary school children's use of strategies when solving simple addition and subtraction problems. Most of these studies have investigated students' strategy use as if they were isolated processes. To date, we have little knowledge about how math strategies in young students are related to other…

  5. Response Functions for Neutron Skyshine Analyses

    NASA Astrophysics Data System (ADS)

    Gui, Ah Auu

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the internal line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and for the first time, give dose equivalent responses which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.

  6. Passive hand movements disrupt adults' counting strategies.

    PubMed

    Imbo, Ineke; Vandierendonck, André; Fias, Wim

    2011-01-01

    In the present study, we experimentally tested the role of hand motor circuits in simple-arithmetic strategies. Educated adults solved simple additions (e.g., 8 + 3) or simple subtractions (e.g., 11 - 3) while they were required to retrieve the answer from long-term memory (e.g., knowing that 8 + 3 = 11), to transform the problem by making an intermediate step (e.g., 8 + 3 = 8 + 2 + 1 = 10 + 1 = 11) or to count one-by-one (e.g., 8 + 3 = 8…9…10…11). During the process of solving the arithmetic problems, the experimenter did or did not move the participants' hand on a four-point matrix. The results show that passive hand movements disrupted the counting strategy while leaving the other strategies unaffected. This pattern of results is in agreement with a procedural account, showing that the involvement of hand motor circuits in adults' mathematical abilities is reminiscent of finger counting during childhood.

  7. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations: A New Approach Based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Himansu, Ananda; Loh, Ching-Yuen; Wang, Xiao-Yen; Yu, Shang-Tao

    2003-01-01

    This paper reports on a significant advance in the area of non-reflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  8. Symmetries of relativistic world lines

    NASA Astrophysics Data System (ADS)

    Koch, Benjamin; Muñoz, Enrique; Reyes, Ignacio A.

    2017-10-01

    Symmetries are essential for a consistent formulation of many quantum systems. In this paper we discuss a fundamental symmetry, which is present for any Lagrangian term that involves ẋ². As a basic model that incorporates the fundamental symmetries of quantum gravity and string theory, we consider the Lagrangian action of the relativistic point particle. A path integral quantization for this seemingly simple system has long presented notorious problems. Here we show that those problems are overcome by taking into account the additional symmetry, leading directly to the exact Klein-Gordon propagator.
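
    For orientation, the Lagrangian action referred to above is the standard proper-time action of the relativistic point particle (written here with signature (-,+,+,+); the additional symmetry discussed in the paper is not reproduced):

      S \;=\; -\,m c \int ds \;=\; -\,m c \int \sqrt{-\,\eta_{\mu\nu}\,\dot{x}^{\mu}\dot{x}^{\nu}}\; d\tau ,

    which is invariant under reparametrizations \tau \to \tilde{\tau}(\tau) of the worldline, the gauge freedom that complicates a naive path-integral quantization.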

  9. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    NASA Astrophysics Data System (ADS)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency component of the possible steady states. Groebner basis methods are used for computing all solutions of polynomial systems. This approach makes it possible to reduce the complete system to a unique polynomial equation in one variable driving all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and recover the solution of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the retained number of harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.
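
    A minimal single-harmonic example of the reduction described above (a textbook Duffing oscillator, not the cyclic system studied in the paper): substituting a one-term ansatz and balancing the fundamental harmonic turns the differential equation into a polynomial in the amplitude,

      \ddot{x} + x + \varepsilon x^{3} = F\cos(\omega t), \qquad x(t) \approx A\cos(\omega t)
      \;\Longrightarrow\; \bigl(1 - \omega^{2}\bigr)A + \tfrac{3}{4}\,\varepsilon A^{3} = F .

    With more harmonics and more degrees of freedom one obtains systems of polynomial equations in several amplitudes, which is where Groebner-basis elimination becomes useful.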

  10. Weak task-related modulation and stimulus representations during arithmetic problem solving in children with developmental dyscalculia

    PubMed Central

    Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Tenison, Caitlin; Menon, Vinod

    2015-01-01

    Developmental dyscalculia (DD) is a disability that impacts math learning and skill acquisition in school-age children. Here we investigate arithmetic problem solving deficits in young children with DD using univariate and multivariate analysis of fMRI data. During fMRI scanning, 17 children with DD (ages 7–9, grades 2 and 3) and 17 IQ- and reading ability-matched typically developing (TD) children performed complex and simple addition problems which differed only in arithmetic complexity. While the TD group showed strong modulation of brain responses with increasing arithmetic complexity, children with DD failed to show such modulation. Children with DD showed significantly reduced activation compared to TD children in the intraparietal sulcus, superior parietal lobule, supramarginal gyrus and bilateral dorsolateral prefrontal cortex in relation to arithmetic complexity. Critically, multivariate representational similarity revealed that brain response patterns to complex and simple problems were less differentiated in the DD group in bilateral anterior IPS, independent of overall differences in signal level. Taken together, these results show that children with DD not only under-activate key brain regions implicated in mathematical cognition, but they also fail to generate distinct neural responses and representations for different arithmetic problems. Our findings provide novel insights into the neural basis of DD. PMID:22682904

  11. Weak task-related modulation and stimulus representations during arithmetic problem solving in children with developmental dyscalculia.

    PubMed

    Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Tenison, Caitlin; Menon, Vinod

    2012-02-15

    Developmental dyscalculia (DD) is a disability that impacts math learning and skill acquisition in school-age children. Here we investigate arithmetic problem solving deficits in young children with DD using univariate and multivariate analysis of fMRI data. During fMRI scanning, 17 children with DD (ages 7-9, grades 2 and 3) and 17 IQ- and reading ability-matched typically developing (TD) children performed complex and simple addition problems which differed only in arithmetic complexity. While the TD group showed strong modulation of brain responses with increasing arithmetic complexity, children with DD failed to show such modulation. Children with DD showed significantly reduced activation compared to TD children in the intraparietal sulcus, superior parietal lobule, supramarginal gyrus and bilateral dorsolateral prefrontal cortex in relation to arithmetic complexity. Critically, multivariate representational similarity revealed that brain response patterns to complex and simple problems were less differentiated in the DD group in bilateral anterior IPS, independent of overall differences in signal level. Taken together, these results show that children with DD not only under-activate key brain regions implicated in mathematical cognition, but they also fail to generate distinct neural responses and representations for different arithmetic problems. Our findings provide novel insights into the neural basis of DD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Lattice gas methods for computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.

    1995-01-01

    This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.

  13. Coherent state coding approaches the capacity of non-Gaussian bosonic channels

    NASA Astrophysics Data System (ADS)

    Huber, Stefan; König, Robert

    2018-05-01

    The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are attenuators with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.

  14. Handle with Care! Mid-Atlantic Marine Animals That Demand Your Respect. Educational Series No. 26. Third Printing.

    ERIC Educational Resources Information Center

    Lucy, Jon

    Generally speaking, marine organisms found along middle Atlantic shores are not considered threatening to people. However, some of these animals can cause problems, either upon simple contact with the skin, as in the case of some jellyfish, or through careless handling. In addition, larger inhabitants of coastal waters (such as sharks) must always…

  15. Ethical Information Transparency and Sexually Transmitted Infections.

    PubMed

    Feltz, Adam

    2015-01-01

    Shared decision making is intended to help protect patient autonomy while satisfying the demands of beneficence. In shared decision making, information is shared between health care professional and patient. The sharing of information presents new and practical problems about how much information to share and how transparent that information should be. Sharing information also allows for subtle paternalistic strategies to be employed to "nudge" the patient in a desired direction. These problems are illustrated in two experiments. Experiment 1 (N = 146) suggested that positively framed messages increased the strength of judgments about whether a patient with HIV should designate a surrogate compared to a negatively framed message. A simple decision aid did not reliably reduce this effect. Experiment 2 (N = 492) replicated these effects. In addition, Experiment 2 suggested that providing some additional information (e.g., about surrogate decision making accuracy) can reduce tendencies to think that one with AIDS should designate a surrogate. These results indicate that in some circumstances, nudges (e.g., framing) influence judgments in ways that non-nudging interventions (e.g., simple graphs) do not. While non-nudging interventions are generally preferable, careful thought is required for determining the relative benefits and costs associated with information transparency and persuasion.

  16. Brain hyper-connectivity and operation-specific deficits during arithmetic problem solving in children with developmental dyscalculia

    PubMed Central

    Rosenberg-Lee, Miriam; Ashkenazi, Sarit; Chen, Tianwen; Young, Christina B.; Geary, David C.; Menon, Vinod

    2014-01-01

    Developmental dyscalculia (DD) is marked by specific deficits in processing numerical and mathematical information despite normal intelligence (IQ) and reading ability. We examined how brain circuits used by young children with DD to solve simple addition and subtraction problems differ from those used by typically developing (TD) children who were matched on age, IQ, reading ability, and working memory. Children with DD were slower and less accurate during problem solving than TD children, and were especially impaired on their ability to solve subtraction problems. Children with DD showed significantly greater activity in multiple parietal, occipito-temporal and prefrontal cortex regions while solving addition and subtraction problems. Despite poorer performance during subtraction, children with DD showed greater activity in multiple intra-parietal sulcus (IPS) and superior parietal lobule subdivisions in the dorsal posterior parietal cortex as well as fusiform gyrus in the ventral occipito-temporal cortex. Critically, effective connectivity analyses revealed hyper-connectivity, rather than reduced connectivity, between the IPS and multiple brain systems including the lateral fronto-parietal and default mode networks in children with DD during both addition and subtraction. These findings suggest the IPS and its functional circuits are a major locus of dysfunction during both addition and subtraction problem solving in DD, and that inappropriate task modulation and hyper-connectivity, rather than under-engagement and under-connectivity, are the neural mechanisms underlying problem solving difficulties in children with DD. We discuss our findings in the broader context of multiple levels of analysis and performance issues inherent in neuroimaging studies of typical and atypical development. PMID:25098903

  17. Brain hyper-connectivity and operation-specific deficits during arithmetic problem solving in children with developmental dyscalculia.

    PubMed

    Rosenberg-Lee, Miriam; Ashkenazi, Sarit; Chen, Tianwen; Young, Christina B; Geary, David C; Menon, Vinod

    2015-05-01

    Developmental dyscalculia (DD) is marked by specific deficits in processing numerical and mathematical information despite normal intelligence (IQ) and reading ability. We examined how brain circuits used by young children with DD to solve simple addition and subtraction problems differ from those used by typically developing (TD) children who were matched on age, IQ, reading ability, and working memory. Children with DD were slower and less accurate during problem solving than TD children, and were especially impaired on their ability to solve subtraction problems. Children with DD showed significantly greater activity in multiple parietal, occipito-temporal and prefrontal cortex regions while solving addition and subtraction problems. Despite poorer performance during subtraction, children with DD showed greater activity in multiple intra-parietal sulcus (IPS) and superior parietal lobule subdivisions in the dorsal posterior parietal cortex as well as fusiform gyrus in the ventral occipito-temporal cortex. Critically, effective connectivity analyses revealed hyper-connectivity, rather than reduced connectivity, between the IPS and multiple brain systems including the lateral fronto-parietal and default mode networks in children with DD during both addition and subtraction. These findings suggest the IPS and its functional circuits are a major locus of dysfunction during both addition and subtraction problem solving in DD, and that inappropriate task modulation and hyper-connectivity, rather than under-engagement and under-connectivity, are the neural mechanisms underlying problem solving difficulties in children with DD. We discuss our findings in the broader context of multiple levels of analysis and performance issues inherent in neuroimaging studies of typical and atypical development. © 2014 John Wiley & Sons Ltd.

  18. Rarity-weighted richness: a simple and reliable alternative to integer programming and heuristic algorithms for minimum set and maximum coverage problems in conservation planning.

    PubMed

    Albuquerque, Fabio; Beier, Paul

    2015-01-01

    Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest sites (minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to the number of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km². On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
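
    The RWR score itself is not spelled out in the abstract; a common formulation weights each species by the reciprocal of the number of sites it occupies and sums those weights over the species present at each site. A minimal Python sketch under that assumption (the presence/absence matrix is invented for illustration):

```python
import numpy as np

# Hypothetical presence/absence matrix: rows = sites, columns = species.
occ = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
])

# Rarity of each species: reciprocal of the number of sites it occupies.
rarity = 1.0 / occ.sum(axis=0)

# Rarity-weighted richness of a site: sum of the rarities of its species.
rwr = occ @ rarity

# Prioritize sites in descending order of RWR.
priority = np.argsort(-rwr)
print("RWR per site:", rwr)
print("site priority order:", priority)
```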

  19. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods, based on two different trial functions but using the same simple linear test function, were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. The methods were then applied to various beam vibration problems and to problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as it is simple, accurate, and robust.

  20. Gelled-electrolyte batteries for electric vehicles

    NASA Astrophysics Data System (ADS)

    Tuphorn, Hans

    Increasing air-pollution problems have accelerated electric vehicle projects worldwide, and despite efforts to develop new high-energy-density battery systems, lead/acid batteries are today almost the only system ready for technical use in this application. Valve-regulated lead/acid batteries with gelled electrolyte have the advantage that no maintenance is required, and because the gel system does not suffer from electrolyte stratification, no additional equipment for central filling or acid addition is needed, which keeps the system simple. These batteries, with high-density active masses, show high endurance, and field tests with 40 VW-CityStromers equipped with 96 V/160 A h gel batteries with thermal management have shown good results over four years. In addition, gelled lead/acid batteries possess superior high-rate performance compared with conventional lead/acid batteries, which guarantees good acceleration of the car and makes the system well suited to application in electric vehicles.

  1. A simple technique to increase profits in wood products marketing

    Treesearch

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...
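
    The note's algorithm is meant for pencil and paper; as a loose modern analogue rather than Harpole's method, a fixed-capacity product-mix problem can be posed as a small linear program. The products, prices, capacities and market limits below are invented for illustration.

```python
from scipy.optimize import linprog

# Hypothetical two-product mix for a fixed-capacity sawmill.
profit = [40.0, 55.0]            # profit per unit of each product (made up)
c = [-p for p in profit]         # linprog minimizes, so negate the profit

A_ub = [[2.0, 3.0]]              # machine-hours consumed per unit of each product
b_ub = [480.0]                   # machine-hours available in the operating period

bounds = [(0, 150), (0, 120)]    # forecast market limits on each product

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("units to produce:", res.x)
print("maximum profit:", -res.fun)
```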

  2. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
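
    The dictionary-update details are not given in the abstract; as a rough sketch of the underlying SRC step only (sparse-code a test trial against a dictionary whose columns are labeled training trials, then assign the class with the smallest reconstruction residual), with synthetic data standing in for EEG features:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG feature vectors: columns of D are training trials.
n_features, n_per_class = 32, 10
D = np.hstack([
    rng.normal(0.0, 1.0, (n_features, n_per_class)),   # class 0 trials
    rng.normal(0.5, 1.0, (n_features, n_per_class)),   # class 1 trials
])
labels = np.array([0] * n_per_class + [1] * n_per_class)

def src_classify(y, D, labels, alpha=0.05):
    """Sparse-code y over D, then pick the class with the smallest residual."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D, y)
    x = coder.coef_
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

test_trial = rng.normal(0.5, 1.0, n_features)   # drawn like a class-1 trial
print("predicted class:", src_classify(test_trial, D, labels))
```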

  3. A Coupling Strategy of FEM and BEM for the Solution of a 3D Industrial Crack Problem

    NASA Astrophysics Data System (ADS)

    Kouitat Njiwa, Richard; Taha Niane, Ngadia; Frey, Jeremy; Schwartz, Martin; Bristiel, Philippe

    2015-03-01

    Analyzing crack stability in an industrial context is challenging due to the geometry of the structure. The finite element method is effective for defect-free problems. The boundary element method is effective for problems in simple geometries with singularities. We present a strategy that takes advantage of both approaches. Within the iterative solution procedure, the FEM solves a defect-free problem over the structure while the BEM solves the crack problem over a fictitious domain with simple geometry. The effectiveness of the approach is demonstrated on some simple examples which allow comparison with literature results and on an industrial problem.

  4. Robust and Simple Non-Reflecting Boundary Conditions for the Euler Equations - A New Approach based on the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Himansu, A.; Loh, C.-Y.; Wang, X.-Y.; Yu, S.-T.J.

    2005-01-01

    This paper reports on a significant advance in the area of nonreflecting boundary conditions (NRBCs) for unsteady flow computations. As a part of the development of the space-time conservation element and solution element (CE/SE) method, sets of NRBCs for 1D Euler problems are developed without using any characteristics-based techniques. These conditions are much simpler than those commonly reported in the literature, yet so robust that they are applicable to subsonic, transonic and supersonic flows even in the presence of discontinuities. In addition, the straightforward multidimensional extensions of the present 1D NRBCs have been shown numerically to be equally simple and robust. The paper details the theoretical underpinning of these NRBCs, and explains their unique robustness and accuracy in terms of the conservation of space-time fluxes. Some numerical results for an extended Sod's shock-tube problem, illustrating the effectiveness of the present NRBCs, are included, together with an associated simple Fortran computer program. As a preliminary to the present development, a review of the basic CE/SE schemes is also included.

  5. Angular aberration in the problem of power beaming to geostationary satellites through the atmosphere.

    PubMed

    Baryshnikov, F F

    1995-10-20

    The influence on laser power beaming of the angular aberration of radiation that results from the difference between the speed of a geostationary satellite and the speed of the Earth's surface is considered. Angular aberration makes it impossible to direct the energy to the satellite without an additional beam rotation. Because the Earth's rotation may degrade phase restoration, a serious problem arises: how to transfer incoherent radiation to remote satellites. Within the framework of the Kolmogorov turbulence model, simple conditions for energy transfer are derived and discussed.

  6. New graphic methods for determining the depth and thickness of strata and the projection of dip

    USGS Publications Warehouse

    Palmer, Harold S.

    1919-01-01

    Geologists, both in the field and in the office, frequently encounter trigonometric problems the solution of which, though simple enough, is somewhat laborious by the use of trigonometric and logarithmic tables. Charts, tables, and diagrams of various types for facilitating the computations have been published, and a new method may seem to be a superfluous addition to the literature.
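
    The charts in this 1919 paper replaced hand trigonometry; the underlying relations are short. As an illustration of the kind of computation involved (not Palmer's graphical method, and assuming flat ground with distances measured perpendicular to strike):

```python
import math

def depth_to_bed(horizontal_distance, dip_degrees):
    """Vertical depth to a planar bed from a point a given horizontal
    distance down-dip of its outcrop (flat ground assumed)."""
    return horizontal_distance * math.tan(math.radians(dip_degrees))

def true_thickness(outcrop_width, dip_degrees):
    """True (perpendicular) thickness of a bed from its horizontal outcrop
    width measured perpendicular to strike on flat ground."""
    return outcrop_width * math.sin(math.radians(dip_degrees))

print(depth_to_bed(100.0, 30.0))    # ~57.7 distance units
print(true_thickness(80.0, 30.0))   # 40.0 distance units
```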

  7. Bio-charcoal production from municipal organic solid wastes

    NASA Astrophysics Data System (ADS)

    AlKhayat, Z. Q.

    2017-08-01

    The economic and environmental problems of handling the increasingly large amounts of urban and suburban organic municipal solid waste (MSW), from collection to final disposal, together with large fluctuations in the cost of power and other forms of energy for civilian needs, are studied for Baghdad, the ancient and glamorous capital of Iraq. A simple control device is proposed, built and tested that carbonizes the dried organic waste in a simple, environmentally friendly bio-reactor to produce economical, low-pollution, locally made charcoal capsules that can be used for heating, cooking and other municipal purposes. This also helps to solve the solid waste management problem, which consumes large human and financial resources and causes many serious health and environmental problems. Leftovers from residential compounds of different social levels were collected, sorted for organic materials and then dried before being fed into the bio-reactor, where they were burnt and then mixed with small amounts of sucrose extracted from Iraqi-grown sugar cane to produce well-shaped charcoal capsules. The burning process is smoke free because the closed burner's exhaust pipe is buried in a hole 1 m underground, so that the subsurface soil acts as a natural gas filter. The process proved capable of handling about 120 kg/day of sorted MSW, producing about 80-100 kg of charcoal capsules with a 200 l reactor volume.

  8. The Development of Student’s Activity Sheets (SAS) Based on Multiple Intelligences and Problem-Solving Skills Using Simple Science Tools

    NASA Astrophysics Data System (ADS)

    Wardani, D. S.; Kirana, T.; Ibrahim, M.

    2018-01-01

    The aim of this research is to produce SAS based on MI and problem-solving skills using simple science tools that are suitable for elementary school students. The feasibility of the SAS is evaluated in terms of validity, practicality, and effectiveness. The completeness of Lesson Plan (LP) implementation and student activity are the indicators of SAS practicality; effectiveness is measured by increases in learning outcomes and problem-solving skills. The development of the SAS follows the 4-D (define, design, develop, and disseminate) model, although this study covered only the first three stages (through develop). The written SAS was validated through evaluation by two science experts before being tested with the target students. The try-out used a one-group pre-test/post-test design. The results show that the SAS is valid, in the "good" category. In addition, the SAS is considered practical, as seen from the increase in student activity at each meeting and in LP implementation, and effective, given the significant difference between pre-test and post-test results for both learning outcomes and the problem-solving skills test. Therefore, the SAS is feasible for use in learning.

  9. A 1.375-approximation algorithm for sorting by transpositions.

    PubMed

    Elias, Isaac; Hartman, Tzvika

    2006-01-01

    Sorting permutations by transpositions is an important problem in genome rearrangements. A transposition is a rearrangement operation in which a segment is cut out of the permutation and pasted in a different location. The complexity of this problem is still open and it has been a 10-year-old open problem to improve the best known 1.5-approximation algorithm. In this paper, we provide a 1.375-approximation algorithm for sorting by transpositions. The algorithm is based on a new upper bound on the diameter of 3-permutations. In addition, we present some new results regarding the transposition diameter: we improve the lower bound for the transposition diameter of the symmetric group and determine the exact transposition diameter of simple permutations.
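
    The 1.375-approximation itself is intricate; the elementary operation it manipulates is not. A small sketch of applying one transposition (cut the segment at positions [i, j) and paste it so that it precedes the element originally at position k):

```python
def apply_transposition(perm, i, j, k):
    """Cut the block perm[i:j] and reinsert it just before the element that
    originally sat at index k (requires 0 <= i < j <= k <= len(perm))."""
    assert 0 <= i < j <= k <= len(perm)
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

# Move the block (3, 1) so it lands in front of the element originally at index 5.
p = [4, 3, 1, 5, 2, 6]
print(apply_transposition(p, 1, 3, 5))   # [4, 5, 2, 3, 1, 6]
```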

  10. Sturm-Liouville eigenproblems with an interior pole

    NASA Technical Reports Server (NTRS)

    Boyd, J. P.

    1981-01-01

    The eigenvalues and eigenfunctions of self-adjoint Sturm-Liouville problems with a simple pole on the interior of an interval are investigated. Three general theorems are proved, and it is shown that as n approaches infinity, the eigenfunctions more and more closely resemble those of an ordinary Sturm-Liouville problem. The low-order modes differ significantly from those of a nonsingular eigenproblem in that both eigenvalues and eigenfunctions are complex, and the eigenvalues for all small n may cluster about a common value, in contrast to the widely separated eigenvalues of the corresponding nonsingular problem. In addition, the WKB approximation is shown to be accurate for all n, and all eigenvalues of a normal one-dimensional Sturm-Liouville equation with nonperiodic boundary conditions are well separated.
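
    For orientation, the generic self-adjoint Sturm-Liouville eigenproblem has the form below; the case studied here has a coefficient with a simple pole at an interior point x0 (a schematic rendering only; the paper's exact equation may differ).

```latex
\[
  -\frac{d}{dx}\!\left( p(x)\,\frac{dy}{dx} \right) + q(x)\,y \;=\; \lambda\, w(x)\, y,
  \qquad a < x < b,
\]
\[
  q(x) \sim \frac{c}{x - x_0} \quad \text{as } x \to x_0, \qquad a < x_0 < b .
\]
```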

  11. Two-way ANOVA Problems with Simple Numbers.

    ERIC Educational Resources Information Center

    Read, K. L. Q.; Shihab, L. H.

    1998-01-01

    Describes how to construct simple numerical examples in two-way ANOVAs, specifically randomized blocks, balanced two-way layouts, and Latin squares. Indicates that working through simple numerical problems is helpful to students meeting a technique for the first time and should be followed by computer-based analysis of larger, real datasets when…
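
    As a complement to such hand-worked numbers, a balanced two-way layout with deliberately simple values can be checked in a few lines (a generic illustration, not the authors' examples); the data below are invented:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# A tiny balanced 2 x 3 two-way layout with two replicates per cell.
df = pd.DataFrame({
    "A": ["a1"] * 6 + ["a2"] * 6,
    "B": ["b1", "b1", "b2", "b2", "b3", "b3"] * 2,
    "y": [10, 12, 14, 16, 18, 20,
          11, 13, 17, 19, 23, 25],
})

# Two-way ANOVA with interaction.
model = ols("y ~ C(A) + C(B) + C(A):C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```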

  12. Control of H2S emissions using an ozone oxidation process: Preliminary results

    NASA Technical Reports Server (NTRS)

    Defaveri, D.; Ferrando, B.; Ferraiolo, G.

    1986-01-01

    The problem of eliminating industrial emission odors does not have a simple solution, and consequently has not been researched extensively. Therefore, an experimental research program regarding oxidation of H2S through ozone was undertaken to verify the applicable limits of the procedure and, in addition, was designed to supply a useful analytical means of rationalizing the design of reactors employed in the sector.

  13. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics

    PubMed Central

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.

    2014-01-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672

  14. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics.

    PubMed

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P; Nordsletten, David A

    2014-06-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii-Newton-Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics.

  15. BOOK REVIEW: The Quantum Mechanics Solver: How to Apply Quantum Theory to Modern Physics, 2nd edition

    NASA Astrophysics Data System (ADS)

    Robbin, J. M.

    2007-07-01

    The hallmark of a good book of problems is that it allows you to become acquainted with an unfamiliar topic quickly and efficiently. The Quantum Mechanics Solver fits this description admirably. The book contains 27 problems based mainly on recent experimental developments, including neutrino oscillations, tests of Bell's inequality, Bose-Einstein condensates, and laser cooling and trapping of atoms, to name a few. Unlike many collections, in which problems are designed around a particular mathematical method, here each problem is devoted to a small group of phenomena or experiments. Most problems contain experimental data from the literature, and readers are asked to estimate parameters from the data, or compare theory to experiment, or both. Standard techniques (e.g., degenerate perturbation theory, addition of angular momentum, asymptotics of special functions) are introduced only as they are needed. The style is closer to a non-specialist seminar than to an undergraduate lecture. The physical models are kept simple; the emphasis is on cultivating conceptual and qualitative understanding (although in many of the problems, the simple models fit the data quite well). Some less familiar theoretical techniques are introduced, e.g. a variational method for lower (not upper) bounds on ground-state energies for many-body systems with two-body interactions, which is then used to derive a surprisingly accurate relation between baryon and meson masses. The exposition is succinct but clear; the solutions can be read as worked examples if you don't want to do the problems yourself. Many problems have additional discussion on limitations and extensions of the theory, or further applications outside physics (e.g., the accuracy of GPS positioning in connection with atomic clocks; proton and ion tumor therapies in connection with the Bethe-Bloch formula for charged particles in solids). The problems use mainly non-relativistic quantum mechanics and are organised into three sections: Elementary Particles, Nuclei and Atoms; Quantum Entanglement and Measurement; and Complex Systems. The coverage is not comprehensive; there is little on scattering theory, for example, and some areas of recent interest, such as topological aspects of quantum mechanics and semiclassics, are not included. The problems are based on examination questions given at the École Polytechnique in the last 15 years. The book is accessible to undergraduates, but working physicists should find it a delight.

  16. Adaptive triangular mesh generation

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Eiseman, P. R.

    1984-01-01

    A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.

  17. Artificial neural networks using complex numbers and phase encoded weights.

    PubMed

    Michel, Howard E; Awwal, Abdul Ahad S

    2010-04-01

    The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
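
    The aggregation, activation and learning rules of the CVN are derived in the paper and are not reproduced here. Purely as a toy illustration of the phase-encoding idea (binary inputs mapped to points on the unit circle, complex weights, a threshold on the magnitude of the weighted sum), one might write the following; with unit weights this single toy neuron computes XOR, which no single real-valued perceptron can do:

```python
import numpy as np

def phase_encode(bits, phase=np.pi / 2):
    """Map binary inputs {0, 1} to unit-magnitude complex numbers:
    0 -> exp(i*0) = 1 and 1 -> exp(i*phase) = i."""
    return np.exp(1j * phase * np.asarray(bits, dtype=float))

def toy_complex_neuron(bits, weights, threshold=1.7):
    """Toy phase-encoded neuron (not the paper's CVN rules): take a complex
    weighted sum, then threshold its magnitude. Equal input phases add
    constructively; differing phases give a smaller magnitude."""
    z = np.dot(weights, phase_encode(bits))
    return int(abs(z) < threshold)

w = np.array([1.0 + 0.0j, 1.0 + 0.0j])
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, toy_complex_neuron(x, w))   # reproduces XOR
```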

  18. Information retrieval from holographic interferograms: Fundamentals and problems

    NASA Technical Reports Server (NTRS)

    Vest, Charles M.

    1987-01-01

    Holographic interferograms can contain large amounts of information about flow and temperature fields. Their information content can be very high because they can be viewed from many different directions. This multidirectionality and fringe localization add to the information contained in the fringe pattern when diffuse illumination is used. Additional information and increased accuracy can be obtained through the use of dual-reference-wave holography to add reference fringes or to implement discrete phase-shift or heterodyne interferometry. Automated analysis of fringes is possible if interferograms are of simple structure and good quality. However, in practice a large number of problems can arise, resulting in a difficult image-processing task.

  19. A simple homogeneous model for regular and irregular metallic wire media samples

    NASA Astrophysics Data System (ADS)

    Kosulnikov, S. Y.; Mirmoosa, M. S.; Simovski, C. R.

    2018-02-01

    To simplify the solution of electromagnetic problems with wire media samples, it is reasonable to treat them as the samples of a homogeneous material without spatial dispersion. The account of spatial dispersion implies additional boundary conditions and makes the solution of boundary problems difficult especially if the sample is not an infinitely extended layer. Moreover, for a novel type of wire media - arrays of randomly tilted wires - a spatially dispersive model has not been developed. Here, we introduce a simplistic heuristic model of wire media samples shaped as bricks. Our model covers WM of both regularly and irregularly stretched wires.

  20. Children's mathematical performance: five cognitive tasks across five grades.

    PubMed

    Moore, Alex M; Ashcraft, Mark H

    2015-07-01

    Children in elementary school, along with college adults, were tested on a battery of basic mathematical tasks, including digit naming, number comparison, dot enumeration, and simple addition or subtraction. Beyond cataloguing performance to these standard tasks in Grades 1 to 5, we also examined relationships among the tasks, including previously reported results on a number line estimation task. Accuracy and latency improved across grades for all tasks, and classic interaction patterns were found, for example, a speed-up of subitizing and counting, increasingly shallow slopes in number comparison, and progressive speeding of responses especially to larger addition and subtraction problems. Surprisingly, digit naming was faster than subitizing at all ages, arguing against a pre-attentive processing explanation for subitizing. Estimation accuracy and speed were strong predictors of children's addition and subtraction performance. Children who gave exponential responses on the number line estimation task were slower at counting in the dot enumeration task and had longer latencies on addition and subtraction problems. The results provided further support for the importance of estimation as an indicator of children's current and future mathematical expertise. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Language-specific memory for everyday arithmetic facts in Chinese-English bilinguals.

    PubMed

    Chen, Yalin; Yanke, Jill; Campbell, Jamie I D

    2016-04-01

    The role of language in memory for arithmetic facts remains controversial. Here, we examined transfer of memory training for evidence that bilinguals may acquire language-specific memory stores for everyday arithmetic facts. Chinese-English bilingual adults (n = 32) were trained on different subsets of simple addition and multiplication problems. Each operation was trained in one language or the other. The subsequent test phase included all problems with addition and multiplication alternating across trials in two blocks, one in each language. Averaging over training language, the response time (RT) gains for trained problems relative to untrained problems were greater in the trained language than in the untrained language. Subsequent analysis showed that English training produced larger RT gains for trained problems relative to untrained problems in English at test relative to the untrained Chinese language. In contrast, there was no evidence with Chinese training that problem-specific RT gains differed between Chinese and the untrained English language. We propose that training in Chinese promoted a translation strategy for English arithmetic (particularly multiplication) that produced strong cross-language generalization of practice, whereas training in English strengthened relatively weak, English-language arithmetic memories and produced little generalization to Chinese (i.e., English training did not induce an English translation strategy for Chinese language trials). The results support the existence of language-specific strengthening of memory for everyday arithmetic facts.

  2. Stakeholder perspectives on barriers for healthy living for low-income african american families.

    PubMed

    Jones, Veronnie Faye; Rowland, Michael L; Young, Linda; Atwood, Katherine; Thompson, Kirsten; Sterrett, Emma; Honaker, Sarah Morsbach; Williams, Joel E; Johnson, Knowlton; Davis, Deborah Winders

    2014-01-01

    Childhood obesity is a growing problem in the United States, especially for children from low-income, African American families. The purpose of this qualitative study was to understand facilitators of and barriers to engaging in healthy lifestyles faced by low-income African American children and their families. This qualitative study used semi-structured focus group interviews with eight African American children clinically identified as overweight or obese (BMI ≥ 85th percentile) and their parents. An expert panel provided insights for developing culturally appropriate intervention strategies. Child and parent focus group analysis revealed 11 barriers and no definitive facilitators for healthy eating and lifestyles. Parents reported confusion regarding what constitutes nutritional eating, varying needs of family members in terms of issues with weight, and difficulty in engaging the family in appropriate and safe physical activities, to name a few themes. Community experts independently suggested that nutritional information is confusing and often contradictory. Additionally, they recommended simple messaging and practical interventions such as helping with shopping lists, meal planning, and identifying simple and inexpensive physical activities. Childhood obesity in the context of low-resource families is a complex problem with no simple solutions. Culturally sensitive and family-informed interventions are needed to support low-income African American families in dealing with childhood obesity.

  3. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

    preferred to") be a bina-zy relation on the set • of simple probability measures or ’gambles’ defined on a set T of consequences. Throughout this study it...simplifying independence assumptions. Although there are several approaches to this problem, the21 present study will focus on approximations of u... study will elicit additional interest in the topic. 2. REMARKS ON APPROXIMATION THEORY This section outlines a few basic ideas of approximation theory

  4. Application of a dual unscented Kalman filter for simultaneous state and parameter estimation in problems of surface-atmosphere exchange

    Treesearch

    J.H. Gove; D.Y. Hollinger; D.Y. Hollinger

    2006-01-01

    A dual unscented Kalman filter (UKF) was used to assimilate net CO2 exchange (NEE) data measured over a spruce-hemlock forest at the Howland AmeriFlux site in Maine, USA, into a simple physiological model for the purpose of filling gaps in an eddy flux time series. In addition to filling gaps in the measurement record, the UKF approach provides continuous estimates of...

  5. Misconceptions of Mexican Teachers in the Solution of Simple Pendulum

    ERIC Educational Resources Information Center

    Garcia Trujillo, Luis Antonio; Ramirez Díaz, Mario H.; Rodriguez Castillo, Mario

    2013-01-01

    Solving for the position of a simple pendulum at any time is apparently one of the simplest and most basic problems in high school and college physics courses. However, because of this apparent simplicity, teachers and physics texts often assume that the solution is immediate without pausing to reflect on the problem formulation or verifying…
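
    For reference, the textbook small-angle solution at issue (release from rest at angle θ0) is

```latex
\[
  \ddot{\theta} + \frac{g}{L}\sin\theta = 0
  \;\;\xrightarrow{\;\theta \ll 1\;}\;\;
  \theta(t) = \theta_0 \cos\!\left(\sqrt{\frac{g}{L}}\, t\right),
  \qquad T = 2\pi\sqrt{\frac{L}{g}} ,
\]
```

    and the article's point is precisely that reaching even this expression requires care with the problem formulation and the small-angle assumption.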

  6. TRUMP. Transient & S-State Temperature Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1992-03-03

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.

  7. Menstrual problems in university students: an electronic mail survey.

    PubMed

    Anastasakis, E; Kingman, C E; Lee, C A; Economides, D L; Kadir, R A

    2008-01-01

    To establish the prevalence of menstrual-related problems among university students. A questionnaire regarding gynecological, bleeding and family history was sent by electronic mail (e-mail) to all female students attending University College London (UCL). A total of 767 students aged 18-39 years replied; 71% had a regular menstrual cycle. One in three (n = 264) had received some treatment for their menstrual periods (such as the combined oral contraceptive pill or simple analgesia). Those with heavy or painful periods were more likely to feel that their menstrual problems had a substantial impact on their academic and social life; however, even among those with light periods, one in every four females felt that their life was considerably affected. A considerable prevalence of menstrual-related problems was demonstrated among this young healthy population. Additionally, the use of e-mail could present potential benefits as a research medium for this kind of study.

  9. Perspectives on scaling and multiscaling in passive scalar turbulence

    NASA Astrophysics Data System (ADS)

    Banerjee, Tirthankar; Basu, Abhik

    2018-05-01

    We revisit the well-known problem of multiscaling in substances passively advected by homogeneous and isotropic turbulent flows, or passive scalar turbulence. To that end we propose a two-parameter continuum hydrodynamic model for an advected substance concentration θ, parametrized jointly by y and ȳ, which characterize the spatial scaling behavior of the variances of the advecting stochastic velocity and the stochastic additive driving force, respectively. We analyze it within a one-loop dynamic renormalization group method to calculate the multiscaling exponents of the equal-time structure functions of θ. We show how the interplay between the advective velocity and the additive force may lead to simple scaling or multiscaling. In one limit, our results reduce to the well-known results from the Kraichnan model for a passive scalar. Our framework of analysis should be of help for analytical approaches to the still intractable problem of fluid turbulence itself.
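
    Schematically (the paper's precise model and parametrization may differ in detail), passive scalar turbulence concerns a concentration field θ advected by a prescribed random velocity u and stirred by a random force f, and multiscaling refers to the scaling of its equal-time structure functions:

```latex
\[
  \partial_t \theta + \mathbf{u}\cdot\nabla\theta = \kappa\,\nabla^2\theta + f,
  \qquad
  S_n(r) \equiv \big\langle [\theta(\mathbf{x}+\mathbf{r}) - \theta(\mathbf{x})]^n \big\rangle \sim r^{\zeta_n},
\]
```

    with simple scaling corresponding to ζn linear in n and multiscaling to a nonlinear (anomalous) dependence on n.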

  10. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

    In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN) in which the capability of each sensor is kept relatively limited so that large-scale WSNs can be constructed at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. Based on these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326

  11. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    PubMed

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
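
    MAP-DP itself is specified in the paper; as a quick, swapped-in illustration of the same general idea of letting a Dirichlet process mixture infer the number of clusters, here is scikit-learn's variational BayesianGaussianMixture (which is not MAP-DP) on synthetic data:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)

# Synthetic data: three Gaussian blobs of very different sizes.
X = np.vstack([
    rng.normal([0.0, 0.0], 0.5, (200, 2)),
    rng.normal([5.0, 5.0], 0.5, (50, 2)),
    rng.normal([0.0, 5.0], 0.5, (20, 2)),
])

# Dirichlet-process mixture: K is only bounded above and is effectively inferred.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
print("clusters actually used:", np.unique(labels).size)
```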

  12. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm

    PubMed Central

    Baig, Fahd; Little, Max A.

    2016-01-01

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525

  13. Overweight and obesity vs. simple carbohydrates consumption by elderly people suffering from diseases of the cardiovascular system.

    PubMed

    Skop-Lewandowska, Agata; Zając, Joanna; Kolarzyk, Emilia

    2017-12-23

    Overweight and obesity are among the alarming and constantly increasing problems of the 21st century in all age groups. One of the major factors contributing to these problems is the simple carbohydrates commonly found in popular sweet drinks. The aim of the study was to assess the nutritional patterns of elderly people with diagnosed cardiovascular diseases and to analyse the relationship between consumption of simple carbohydrates and the prevalence of overweight and obesity. From 233 individuals hospitalized in the Clinic of Cardiology and Hypertension in Krakow, Poland, a group of 128 elderly people was selected (66 women and 62 men). Actual food consumption for each individual was assessed using a 24-hour nutrition recall, and BMI was calculated to assess nutritional status. Statistical analysis was performed on two groups: one with BMI < 25 kg/m² and the other with BMI ≥ 25 kg/m². Overweight was found in 33.8% of women and 50% of men, and obesity in 27.7% of women and 17.7% of men. The results indicated that consumption of products rich in sucrose was associated with overweight and obesity: overweight and obese participants ate sweet products significantly more often than those of normal weight (46.2 g vs. 33.8 g). Because the growing worldwide epidemic of overweight and obesity is one of the main priorities of preventive medicine, changing eating patterns remains essential. As observed in this study, each additional spoon of sugar (5 g) consumed daily increases the risk of being overweight or obese by about 14%. Overweight and obesity were found in 60% of the examined elderly people.

  14. Simple arithmetic: not so simple for highly math anxious individuals.

    PubMed

    Chang, Hyesang; Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-12-01

    Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low-compared to high-math anxious individuals perform better when they activate this network less-a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. © The Author (2017). Published by Oxford University Press.

  15. Simple arithmetic: not so simple for highly math anxious individuals

    PubMed Central

    Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-01-01

    Abstract Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low—compared to high—math anxious individuals perform better when they activate this network less—a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. PMID:29140499

  16. A simple derivation for amplitude and time period of charged particles in an electrostatic bathtub potential

    NASA Astrophysics Data System (ADS)

    Prathap Reddy, K.

    2016-11-01

    An ‘electrostatic bathtub potential’ is defined and analytical expressions for the time period and amplitude of charged particles in this potential are obtained and compared with simulations. These kinds of potentials are encountered in linear electrostatic ion traps, where the potential along the axis appears like a bathtub. Ion traps are used in basic physics research and mass spectrometry to store ions; these stored ions make oscillatory motion within the confined volume of the trap. Usually these traps are designed and studied using ion optical software, but in this work the bathtub potential is reproduced by making two simple modifications to the harmonic oscillator potential. The addition of a linear ‘k1|x|’ potential makes the simple harmonic potential curve steeper with a sharper turn at the origin, while the introduction of a finite-length zero potential region at the centre reproduces the flat region of the bathtub curve. This whole exercise of modelling a practical experimental situation in terms of a well-known simple physics problem may generate interest among readers.
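
    Putting the two modifications described above into a formula (a schematic reading of the abstract, with 2a the length of the flat central region):

```latex
\[
  V(x) =
  \begin{cases}
    0, & |x| \le a, \\[4pt]
    \tfrac{1}{2}\,k\,(|x|-a)^2 + k_1\,(|x|-a), & |x| > a .
  \end{cases}
\]
```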

  17. Class and Home Problems: Humidification, a True "Home" Problem for the Chemical Engineer

    ERIC Educational Resources Information Center

    Condoret, Jean-Stephane

    2012-01-01

    The problem of maintaining hygrothermal comfort in a house is addressed using the chemical engineer's toolbox. A simple dynamic model proved to give a good description of the humidification of the house in winter, using a domestic humidifier. Parameters of the model were identified from a simple experiment. Surprising results, especially…

  18. Feedback laws for fuel minimization for transport aircraft

    NASA Technical Reports Server (NTRS)

    Price, D. B.; Gracey, C.

    1984-01-01

    One of the long-range goals of the Theoretical Mechanics Branch is to solve real-time trajectory optimization problems on board an aircraft. This is a generic problem with applications to all aspects of aviation, from general aviation through commercial to military. The overall interest is in the generic problem, but specific problems are examined to achieve concrete results. The problem is to develop control laws that generate approximately optimal trajectories with respect to criteria such as minimum time, minimum fuel, or some combination of the two. These laws must be simple enough to be implemented on a computer flown on board an aircraft, which implies a major simplification of the two-point boundary value problem generated by a standard trajectory optimization formulation. In addition, the control laws must allow for changes in end conditions during the flight and for changes in weather along a planned flight path. Therefore, a feedback control law that generates commands based on the current state, rather than a precomputed open-loop control law, is desired. This requirement, along with the need for order reduction, argues for the application of singular perturbation techniques.

  19. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater.
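
    In the purely linear special case (no nonlinearity, so the incremental quadratic constraint drops out), the observer-design LMI reduces to the familiar Lyapunov condition below; this is a schematic reduction, not the paper's full set of inequalities.

```latex
\[
  P = P^{\top} \succ 0, \qquad
  A^{\top}P + PA - C^{\top}Y^{\top} - YC \prec 0, \qquad
  L = P^{-1}Y ,
\]
```

    where L is the observer gain and the error dynamics $\dot{e} = (A - LC)\,e$ are then exponentially stable.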

  20. Solutions to problems of weathering in Antarctic eucrites

    NASA Technical Reports Server (NTRS)

    Strait, Melissa M.

    1990-01-01

    Neutron activation analysis was performed for major and trace elements on a suite of eucrites from both Antarctic and non-Antarctic sources. The chemistry was examined to see whether there was an easy way to distinguish Antarctic eucrites whose trace element systematics had been disturbed from those with normal abundances relative to non-Antarctic eucrites. No simple correlation was found, and identifying the disturbed meteorites remains a problem. In addition, a set of mineral separates from one eucrite was analyzed. The results showed no abnormalities in the chemistry and provide a possible way to use disturbed Antarctic eucrites in modelling of the eucrite parent body.

  1. Fast realization of nonrecursive digital filters with limits on signal delay

    NASA Astrophysics Data System (ADS)

    Titov, M. A.; Bondarenko, N. N.

    1983-07-01

    Attention is given to the problem of achieving a fast realization of nonrecursive digital filters with the aim of reducing signal delay. It is shown that a realization wherein the impulse characteristic of the filter is divided into blocks satisfies the delay requirements and is almost as economical in terms of the number of multiplications as conventional fast convolution. In addition, the block method leads to a reduction in the needed size of the memory and in the number of additions; the short-convolution procedure is substantially simplified. Finally, the block method facilitates the paralleling of computations owing to the simple transfers between subfilters.
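
    As a concrete, generic illustration of block processing of an FIR filter (standard overlap-add, not necessarily the exact block scheme of the paper): each input block is filtered separately via short FFTs and the overlapping tails are summed, so output samples can be released one block at a time rather than after the whole signal has arrived.

```python
import numpy as np

def overlap_add_fir(x, h, block_len=64):
    """Filter x with the FIR impulse response h block by block (overlap-add).
    Each block's output is available as soon as that block has arrived,
    which bounds the processing delay by the block length."""
    n_fft = int(2 ** np.ceil(np.log2(block_len + len(h) - 1)))
    H = np.fft.rfft(h, n_fft)
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        yb = np.fft.irfft(np.fft.rfft(block, n_fft) * H, n_fft)
        y[start:start + len(block) + len(h) - 1] += yb[:len(block) + len(h) - 1]
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
h = np.ones(16) / 16.0                       # a simple moving-average FIR filter
assert np.allclose(overlap_add_fir(x, h), np.convolve(x, h))
```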

  2. A simple level set method for solving Stefan problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Merriman, B.; Osher, S.

    1997-07-15

    Discussed in this paper are an implicit finite difference scheme for solving a heat equation and a simple level set method for capturing the interface between solid and liquid phases, which together are used to solve Stefan problems.
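
    The note itself is brief; purely as a generic illustration of its first ingredient (an implicit, unconditionally stable finite-difference step for the 1D heat equation; the level set machinery that tracks the phase boundary is not shown):

```python
import numpy as np

def implicit_heat_step(u, dt, dx, kappa=1.0):
    """One backward-Euler step of u_t = kappa * u_xx on a 1D grid with the
    end values held fixed (Dirichlet). The tridiagonal system is assembled
    densely here purely for brevity."""
    n = len(u)
    r = kappa * dt / dx**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1.0 + 2.0 * r)
    np.fill_diagonal(A[1:], -r)      # sub-diagonal
    np.fill_diagonal(A[:, 1:], -r)   # super-diagonal
    A[0, :], A[-1, :] = 0.0, 0.0     # pin the boundary rows
    A[0, 0] = A[-1, -1] = 1.0
    return np.linalg.solve(A, u)

# Example: a hot band cooling between two cold walls.
x = np.linspace(0.0, 1.0, 51)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
for _ in range(100):
    u = implicit_heat_step(u, dt=0.01, dx=x[1] - x[0])
print(u.max())
```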

  3. Heats of Segregation of BCC Metals Using Ab Initio and Quantum Approximate Methods

    NASA Technical Reports Server (NTRS)

    Good, Brian; Chaka, Anne; Bozzolo, Guillermo

    2003-01-01

    Many multicomponent alloys exhibit surface segregation, in which the composition at or near a surface may be substantially different from that of the bulk. A number of phenomenological explanations for this tendency have been suggested, involving, among other things, differences among the components' surface energies, molar volumes, and heats of solution. From a theoretical standpoint, the complexity of the problem has precluded a simple, unified explanation, thus preventing the development of computational tools that would enable the identification of the driving mechanisms for segregation. In that context, we investigate the problem of surface segregation in a variety of bcc metal alloys by computing dilute-limit heats of segregation using both the quantum-approximate energy method of Bozzolo, Ferrante and Smith (BFS), and all-electron density functional theory. In addition, the composition dependence of the heats of segregation is investigated using a BFS-based Monte Carlo procedure, and, for selected cases of interest, density functional calculations. Results are discussed in the context of a simple picture that describes segregation behavior as the result of a competition between size mismatch and alloying effects

  4. Outreach pharmacy service in old age homes: a Hong Kong experience.

    PubMed

    Lau, Wai-Man; Chan, Kit; Yung, Tsz-Ho; Lee, Anna See-Wing

    2003-06-01

    To explore drug-related problems in old age homes in Hong Kong through outreach pharmacy service. A standard form was used by outreach pharmacists to identify drug-related problems at old age homes. Homes were selected through random sampling, voluntary participation or adverse selection. Initial observation and assessment were performed in the first and second weeks. Appropriate advice and recommendations were given upon assessment and supplemented by a written report. Educational talks were provided to staff of the homes in addition to other drug information materials. In weeks 7 to 9, evaluations were carried out. Eighty-five homes were assessed and identified to have problems in the drug management system. These problems could generally be classified into physical storage (8.8%), quality of storage (19.2%), drug administration system (13.3%), documentation (16.4%), and drug knowledge of staff of homes (42.2%). Quality of drug storage was the most common problem found, followed by documentation and drug knowledge (73%, 50% and 44% of points assessed with problems, respectively). Apart from staff's lack of drug knowledge and unawareness of potential risks, unmet minimal professional standards may be fundamentally related to a lack of professional input and inadequate legislation. Most homes demonstrated significant improvements upon simple interventions, from a majority of homes with more than 10 problems to a majority with fewer than 5 problems. Diverse problems in drug management are common in old age homes, which warrants attention and professional input. Simple interventions and education by pharmacists are shown to be effective in improving the quality of drug management and hence care for residents. While future financing of old age home service can be reviewed within the social context to provide incentives for improvement, review of regulatory policy with enforcement may be more fundamental and effective in upholding the service standard.

  5. A practical model for pressure probe system response estimation (with review of existing models)

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.

  6. Stakeholder Perspectives on Barriers for Healthy Living for Low-Income African American Families

    PubMed Central

    Jones, Veronnie Faye; Rowland, Michael L.; Young, Linda; Atwood, Katherine; Thompson, Kirsten; Sterrett, Emma; Honaker, Sarah Morsbach; Williams, Joel E.; Johnson, Knowlton; Davis, Deborah Winders

    2014-01-01

    Background: Childhood obesity is a growing problem for children in the United States, especially for children from low-income, African American families. Objective: The purpose of this qualitative study was to understand facilitators and barriers to engaging in healthy lifestyles faced by low-income African American children and their families. Methods: This qualitative study used semi-structured focus group interviews with eight African American children clinically identified as overweight or obese (BMI ≥ 85th percentile) and their parents. An expert panel provided insights into developing culturally appropriate intervention strategies. Results: Child and parent focus group analysis revealed 11 barriers and no definitive facilitators for healthy eating and lifestyles. Parents reported confusion regarding what constitutes nutritional eating, varying needs of family members in terms of issues with weight, and difficulty in engaging the family in appropriate and safe physical activities, to name a few themes. Community experts independently suggested that nutritional information is confusing and, often, contradictory. Additionally, they recommended simple messaging and practical interventions such as helping with shopping lists, meal planning, and identifying simple and inexpensive physical activities. Conclusion: Childhood obesity in the context of low-resource families is a complex problem with no simple solutions. Culturally sensitive and family informed interventions are needed to support low-income African American families in dealing with childhood obesity. PMID:25538931

  7. Influence of the Mesh Geometry Evolution on Gearbox Dynamics during Its Maintenance

    NASA Astrophysics Data System (ADS)

    Dąbrowski, Z.; Dziurdź, J.; Klekot, G.

    2017-12-01

    Toothed gears are necessary elements of power transmission systems. They are applied both in stationary devices and in the drive systems of road vehicles, ships and craft, as well as airplanes and helicopters. One of the problems related to toothed gear usage is determining their technical state and its evolution. Assuming that the vibrations and noise generated by cooperating toothed wheels are attributable to the gear slippage velocity, a simple cooperation model of rolled wheels with skew teeth is proposed for analysing the influence of mesh evolution on gear dynamics. In addition, an example is presented of using an ordinary coherence function to investigate evolutionary mesh changes related to effects that cannot be described by the simple kinematic model.

  8. Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos

    NASA Astrophysics Data System (ADS)

    Meng, Lili; Li, Xinfu; Zhang, Guang

    2017-12-01

    In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the existence and uniqueness of positive steady-state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated using the bifurcation method, center manifold theory, bifurcation diagrams and the largest Lyapunov exponent. The results show that the pitchfork bifurcation, the flip bifurcation, and chaos all occur, and that these phenomena are caused by the simple diffusion. A theoretical analysis of the chaos would be very important; unfortunately, no such results are available yet. However, some open problems are given.

  9. The dumbest experiment in space. [problems in laboratory apparatus adaption to space environment

    NASA Technical Reports Server (NTRS)

    Prouty, C. R.

    1981-01-01

    A simple conceptual experiment is used to illustrate (1) the fundamentals of performing an experiment, including the theoretical concept, the experiment design, the performance of the experiment, and the recording of observations; (2) the increasing challenges posed by performance of the same experiment in a location remote from the experimenter, such as additional planning and equipment and their associated cost increases; and (3) the significant growth of difficulties to be overcome when the simple experiment is performed in a highly restrictive environment, such as a spacecraft in orbit, with someone else remotely operating the experiment. It is shown that performing an experiment in the remote, hostile environment of space will pose difficulties equaling or exceeding those of the experiment itself, entailing mastery of a widening range of disciplines.

  10. The effects of cumulative practice on mathematics problem solving.

    PubMed

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving.

  11. The effects of cumulative practice on mathematics problem solving.

    PubMed Central

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving. PMID:12102132

  12. Missile Interceptor Guidance System Technology (La Technologie Pour Les Systemes De Guidage Des Missiles Intercepteurs (DE Missiles Ou D’Aeronefs)

    DTIC Science & Technology

    1990-01-01

    robustness of feedback systems with structured uncertainty. Theorem (Robust Stability): F_u(G, Δ) is stable for all admissible Δ iff sup_ω μ(G_11(jω)) ≤ 1. ... through a gain K_R. The addition of other dynamics and feedback paths creates stabilization problems for this simple roll attitude feedback control ... characteristics are most useful to the designer when examined in the frequency domain. Both relative stability and robustness can be determined from an ...

  13. High-order centered difference methods with sharp shock resolution

    NASA Technical Reports Server (NTRS)

    Gustafsson, Bertil; Olsson, Pelle

    1994-01-01

    In this paper we consider high-order centered finite difference approximations of hyperbolic conservation laws. We propose different ways of adding artificial viscosity to obtain sharp shock resolution. For the Riemann problem we give simple explicit formulas for obtaining stationary one and two-point shocks. This can be done for any order of accuracy. It is shown that the addition of artificial viscosity is equivalent to ensuring the Lax k-shock condition. We also show numerical experiments that verify the theoretical results.
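
    As a generic illustration of the idea (not the authors' formulas; the coefficient eps and the grid values are assumptions), adding artificial viscosity to a centered scheme amounts to appending a term proportional to an undivided second difference. The sketch below does this for the inviscid Burgers equation with Riemann-type initial data.

    ```python
    import numpy as np

    def centered_burgers_step(u, dt, dx, eps):
        """One explicit step of u_t + (u^2/2)_x = 0 using a centered flux
        difference plus artificial viscosity eps*(u_{j+1} - 2 u_j + u_{j-1})."""
        f = 0.5 * u**2
        un = u.copy()
        # interior points only; end values are held fixed for simplicity
        un[1:-1] = (u[1:-1]
                    - dt / (2 * dx) * (f[2:] - f[:-2])        # centered flux difference
                    + eps * (u[2:] - 2 * u[1:-1] + u[:-2]))   # artificial viscosity
        return un

    # Riemann-type initial data: a right-moving shock (illustrative parameters)
    x = np.linspace(0.0, 1.0, 201)
    u = np.where(x < 0.5, 1.0, 0.0)
    for _ in range(200):
        u = centered_burgers_step(u, dt=0.002, dx=x[1] - x[0], eps=0.25)
    ```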

  14. On the Pressure of a Neutron Gas Interacting with the Non-Uniform Magnetic Field of a Neutron Star

    NASA Astrophysics Data System (ADS)

    Skobelev, V. V.

    2018-04-01

    On the basis of simple arguments, practically not going beyond the scope of an undergraduate course in general physics, we estimate the additional pressure (at zero temperature) of degenerate neutron matter due to its interaction with the non-uniform magnetic field of a neutron star. This work has methodological and possibly scientific value as an intuitive application of the content of such a course to a solution of topical problems of astrophysics.

  15. On simple aerodynamic sensitivity derivatives for use in interdisciplinary optimization

    NASA Technical Reports Server (NTRS)

    Doggett, Robert V., Jr.

    1991-01-01

    Low-aspect-ratio and piston aerodynamic theories are reviewed as to their use in developing aerodynamic sensitivity derivatives for use in multidisciplinary optimization applications. The basic equations relating surface pressure (or lift and moment) to normal wash are given and discussed briefly for each theory. The general means for determining selected sensitivity derivatives are pointed out. In addition, some suggestions in very general terms are included as to sample problems for use in studying the process of using aerodynamic sensitivity derivatives in optimization studies.

  16. Learning in tele-autonomous systems using Soar

    NASA Technical Reports Server (NTRS)

    Laird, John E.; Yager, Eric S.; Tuck, Christopher M.; Hucka, Michael

    1989-01-01

    Robo-Soar is a high-level robot arm control system implemented in Soar. Robo-Soar learns to perform simple block manipulation tasks using advice from a human. Following learning, the system is able to perform similar tasks without external guidance. It can also learn to correct its knowledge, using its own problem solving in addition to outside guidance. Robo-Soar corrects its knowledge by accepting advice about relevance of features in its domain, using a unique integration of analytic and empirical learning techniques.

  17. A review on simple assembly line balancing type-e problem

    NASA Astrophysics Data System (ADS)

    Jusop, M.; Rashid, M. F. F. Ab

    2015-12-01

    Simple assembly line balancing (SALB) is an attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the simple assembly line balancing problem of Type E (SALB-E), since it is a general and complex problem. The SALB-E problem is the variant of SALB that considers the number of workstations and the cycle time simultaneously for the purpose of maximising line efficiency. This paper reviews previous work done to optimise the SALB-E problem. In addition, it reviews the Genetic Algorithm approaches that have been used to optimise SALB-E. From the review, it was found that none of the existing works considers resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.

  18. Simple and complex mental subtraction: strategy choice and speed-of-processing differences in younger and older adults.

    PubMed

    Geary, D C; Frensch, P A; Wiley, J G

    1993-06-01

    Thirty-six younger adults (10 male, 26 female; ages 18 to 38 years) and 36 older adults (14 male, 22 female; ages 61 to 80 years) completed simple and complex paper-and-pencil subtraction tests and solved a series of simple and complex computer-presented subtraction problems. For the computer task, strategies and solution times were recorded on a trial-by-trial basis. Older Ss used a developmentally more mature mix of problem-solving strategies to solve both simple and complex subtraction problems. Analyses of component scores derived from the solution times suggest that the older Ss are slower at number encoding and number production but faster at executing the borrow procedure. In contrast, groups did not appear to differ in the speed of subtraction fact retrieval. Results from a computational simulation are consistent with the interpretation that older adults' advantage for strategy choices and for the speed of executing the borrow procedure might result from more practice solving subtraction problems.

  19. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  20. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  1. Precise computer controlled positioning of robot end effectors using force sensors

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.; Mcinnis, B. C.; Wang, J. C.

    1988-01-01

    A thorough study of combined position/force control using sensory feedback for a one-dimensional manipulator model, which may represent the spacecraft docking problem or be extended to the multi-joint robot manipulator problem, was performed. The additional degree of freedom introduced by the compliant force sensor is included in the system dynamics in the design of precise position control. State feedback based on the pole placement method and with integral control is used to design the position controller. A simple constant gain force controller is used as an example to illustrate the dependence of the stability and steady-state accuracy of the overall position/force control upon the design of the inner position controller. Supportive simulation results are also provided.

  2. Sniffer Channel Selection for Monitoring Wireless LANs

    NASA Astrophysics Data System (ADS)

    Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling

    Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.
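
    A minimal greedy sketch under stated assumptions (a generic set-cover style heuristic, not necessarily the paper's algorithms): each AP has a known channel and each sniffer a known set of APs in range; repeatedly assign the (sniffer, channel) pair that newly covers the most APs until every AP is monitored, which tends to keep the total number of listened channels small.

    ```python
    def greedy_channel_selection(aps, coverage):
        """Greedy sniffer channel selection (a generic set-cover heuristic).

        aps:      dict ap_id -> channel the AP operates on
        coverage: dict sniffer_id -> set of ap_ids within listening range
        Returns:  dict sniffer_id -> set of channels it should monitor, chosen so
                  every AP is monitored by at least one sniffer.
        """
        uncovered = set(aps)
        assignment = {s: set() for s in coverage}
        while uncovered:
            # Each candidate (sniffer, channel) covers the uncovered APs on that
            # channel within the sniffer's range; pick the most productive one.
            best, best_covered = None, set()
            for s, in_range in coverage.items():
                for ch in {aps[a] for a in in_range}:
                    covered = {a for a in in_range & uncovered if aps[a] == ch}
                    if len(covered) > len(best_covered):
                        best, best_covered = (s, ch), covered
            if not best_covered:
                raise ValueError("some AP cannot be covered by any sniffer")
            s, ch = best
            assignment[s].add(ch)
            uncovered -= best_covered
        return assignment

    # Tiny hypothetical topology
    aps = {"A": 1, "B": 6, "C": 6, "D": 11}
    coverage = {"s1": {"A", "B"}, "s2": {"B", "C", "D"}}
    print(greedy_channel_selection(aps, coverage))  # e.g. {'s1': {1}, 's2': {6, 11}}
    ```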

  3. Using Algorithms in Solving Synapse Transmission Problems.

    ERIC Educational Resources Information Center

    Stencel, John E.

    1992-01-01

    Explains how a simple three-step algorithm can aid college students in solving synapse transmission problems. Reports that not all of the students completely understood the algorithm; however, many learned a simple working model of synaptic transmission and understood why an impulse will pass across a synapse quantitatively. Students also see…

  4. Simple adaptive control for quadcopters with saturated actuators

    NASA Astrophysics Data System (ADS)

    Borisov, Oleg I.; Bobtsov, Alexey A.; Pyrkin, Anton A.; Gromov, Vladislav S.

    2017-01-01

    The stabilization problem for quadcopters with saturated actuators is considered. A simple adaptive output control approach is proposed. The control law "consecutive compensator" is augmented with the auxiliary integral loop and anti-windup scheme. Efficiency of the obtained regulator was confirmed by simulation of the quadcopter control problem.

  5. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
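
    For reference, the core Iterative Boltzmann Inversion update is commonly written as V_{i+1}(r) = V_i(r) + k_B T ln[g_i(r) / g_target(r)]. The sketch below applies one such update and, optionally, a simple linear ramp as a crude stand-in for the kind of added thermodynamic (pressure) constraint mentioned above; the array names and the ramp form are illustrative assumptions.

    ```python
    import numpy as np

    def ibi_update(V, g_current, g_target, kT, r=None, dP=None, r_cut=None):
        """One Iterative Boltzmann Inversion step:
            V_new(r) = V(r) + kT * ln(g_current(r) / g_target(r))
        Optionally adds a linear ramp A*(1 - r/r_cut) as a crude pressure
        correction (illustrative form of an extra thermodynamic constraint).
        """
        eps = 1e-12                                   # avoid log(0) in empty bins
        V_new = V + kT * np.log((g_current + eps) / (g_target + eps))
        if dP is not None and r is not None and r_cut is not None:
            A = -0.1 * kT * np.sign(dP)               # heuristic ramp amplitude
            V_new += A * (1.0 - r / r_cut)
        return V_new
    ```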

  6. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of anti-windup control scheme, the actuator saturation risks could be mitigated. Moreover, the derived conditions help to optimize estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Analysis of genome rearrangement by block-interchanges.

    PubMed

    Lu, Chin Lung; Lin, Ying Chih; Huang, Yen Lin; Tang, Chuan Yi

    2007-01-01

    Block-interchanges are a new kind of genome rearrangement that affects the gene order in a chromosome by swapping two nonintersecting blocks of genes of any length. More recently, the study of such rearrangements has become increasingly important because of its applications in molecular evolution. Usually, this kind of study requires solving a combinatorial problem, called the block-interchange distance problem, which is to find a minimum number of block-interchanges that transform one given gene order of a linear/circular chromosome into another. In this chapter, we introduce the basics of block-interchange rearrangements and of permutation groups in algebra that are useful in analyses of genome rearrangements. In addition, we present a simple algorithm on the basis of permutation groups that efficiently solves the block-interchange distance problem, as well as ROBIN, a web server for the online analysis of block-interchange rearrangements.
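
    To make the operation concrete, here is a small sketch (not taken from the chapter; the indices are illustrative) of applying a single block-interchange to a linear gene order: two nonintersecting blocks are swapped while everything between and around them stays in place.

    ```python
    def block_interchange(perm, i, j, k, l):
        """Swap the nonintersecting blocks perm[i:j] and perm[k:l]
        (0-based, half-open, requiring i < j <= k < l); the segment between
        the two blocks and the rest of the order are preserved."""
        assert 0 <= i < j <= k < l <= len(perm)
        return perm[:i] + perm[k:l] + perm[j:k] + perm[i:j] + perm[l:]

    # Example: swap blocks (2, 3) and (6) in the gene order 1..7
    print(block_interchange([1, 2, 3, 4, 5, 6, 7], 1, 3, 5, 6))
    # -> [1, 6, 4, 5, 2, 3, 7]
    ```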

  8. An improved genetic algorithm and its application in the TSP problem

    NASA Astrophysics Data System (ADS)

    Li, Zheng; Qin, Jinlei

    2011-12-01

    The concept and current state of research on genetic algorithms are introduced in detail in this paper. On that basis, the simple genetic algorithm and an improved algorithm are described and applied to an example of the TSP, where the advantage of genetic algorithms in solving NP-hard problems is clearly shown. In addition, the crossover method based on the partial matching crossover operator is improved into an extended crossover operator in order to increase efficiency when solving the TSP. In the extended crossover method, crossover can be performed between random positions of two random individuals and is not restricted by the position within the chromosome. Finally, the nine-city TSP is solved using the improved genetic algorithm with the extended crossover method; the solution process is much more efficient and the optimal solution is found much faster.
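
    A minimal sketch of the standard partially matched (PMX) crossover that the extended operator builds on, written for 0-based city indices; the cut points and tours below are illustrative, and the extended operator described in the abstract would relax how the crossover positions are chosen.

    ```python
    import random

    def pmx(parent1, parent2, i, j):
        """Partially matched crossover: copy parent1[i:j] into the child and
        fill the remaining positions from parent2, following the mapping defined
        by the copied segment so the child remains a valid permutation (tour)."""
        n = len(parent1)
        child = [None] * n
        child[i:j] = parent1[i:j]
        segment = set(parent1[i:j])
        for k in list(range(i)) + list(range(j, n)):
            gene = parent2[k]
            while gene in segment:                 # resolve conflicts via PMX mapping
                gene = parent2[parent1.index(gene)]
            child[k] = gene
        return child

    # Illustrative use: cross two random 9-city tours at random cut points
    cities = list(range(9))
    p1, p2 = random.sample(cities, 9), random.sample(cities, 9)
    i, j = sorted(random.sample(range(10), 2))
    print(pmx(p1, p2, i, j))
    ```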

  9. A genuinely discontinuous approach for multiphase EHD problems

    NASA Astrophysics Data System (ADS)

    Natarajan, Mahesh; Desjardins, Olivier

    2017-11-01

    Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric field potential. For multiphase flows, although the electric field potential is a continuous quantity, the discontinuity in the electric permittivity between the phases requires additional jump conditions at the interface for the normal and tangential components of the electric field to be satisfied. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach for the Poisson equation for multiphase flows using a Finite Volume Unsplit Volume of Fluid method. The governing equation and the jump conditions are used without assumptions to develop the method, and its efficiency is demonstrated by comparing the numerical results with canonical test problems having exact solutions.

  10. A Simple Label Switching Algorithm for Semisupervised Structural SVMs.

    PubMed

    Balamurugan, P; Shevade, Shirish; Sundararajan, S

    2015-10-01

    In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and avoiding poor local minima, which are not very useful. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.

  11. Maintaining Sexual Health throughout Gynecologic Cancer Survivorship: A Comprehensive Review and Clinical Guide

    PubMed Central

    Huffman, Laura B.; Hartenbach, Ellen M.; Carter, Jeanne; Rash, Joanne K.; Kushner, David M.

    2016-01-01

    Objective The diagnosis and treatment of gynecologic cancer can cause short- and long-term negative effects on sexual health and quality of life (QoL). The aim of this article is to present a comprehensive overview of the sexual health concerns of gynecologic cancer survivors and discuss evidence-based treatment options for commonly encountered sexual health issues. Methods A comprehensive literature search of English language studies on sexual health in gynecologic cancer survivors and the treatment of sexual dysfunction was conducted in MEDLINE databases. Relevant data are presented in this review. Additionally, personal and institutional practices are incorporated where relevant. Results Sexual dysfunction is prevalent among gynecologic cancer survivors as a result of surgery, radiation, and chemotherapy--negatively impacting QoL. Many patients expect their healthcare providers to address sexual health concerns, but most have never discussed sex-related issues with their physician. Lubricants, moisturizers, and dilators are effective, simple, non-hormonal interventions that can alleviate the morbidity of vaginal atrophy, stenosis, and pain. Pelvic floor physical therapy can be an additional tool to address dyspareunia. Cognitive behavioral therapy has been shown to be beneficial to patients reporting problems with sexual interest, arousal, and orgasm. Conclusion Oncology providers can make a significant impact on the QoL of gynecologic cancer survivors by addressing sexual health concerns. Simple strategies can be implemented into clinical practice to discuss and treat many sexual issues. Referral to specialized sexual health providers may be needed to address more complex problems. PMID:26556768

  12. GRIPs (Group Investigation Problems) for Introductory Physics

    NASA Astrophysics Data System (ADS)

    Moore, Thomas A.

    2006-12-01

    GRIPs lie somewhere between homework problems and simple labs: they are open-ended questions that require a mixture of problem-solving skills and hands-on experimentation to solve practical puzzles involving simple physical objects. In this talk, I will describe three GRIPs that I developed for a first-semester introductory calculus-based physics course based on the "Six Ideas That Shaped Physics" text. I will discuss the design of the three GRIPs we used this past fall, our experience in working with students on these problems, and students' response as reported on course evaluations.

  13. Considering context: reliable entity networks through contextual relationship extraction

    NASA Astrophysics Data System (ADS)

    David, Peter; Hawes, Timothy; Hansen, Nichole; Nolan, James J.

    2016-05-01

    Existing information extraction techniques can only partially address the problem of exploiting unreadably large amounts of text. When discussion of events and relationships is limited to simple, past-tense, factual descriptions of events, current NLP-based systems can identify events and relationships and extract a limited amount of additional information. But the simple subset of available information that existing tools can extract from text is only useful to a small set of users and problems. Automated systems need to find and separate information based on what is threatened or planned to occur, has occurred in the past, or could potentially occur. We address the problem of advanced event and relationship extraction with our event and relationship attribute recognition system, which labels generic, planned, recurring, and potential events. The approach is based on a combination of new machine learning methods, novel linguistic features, and crowd-sourced labeling. The attribute labeler closes the gap between structured event and relationship models and the complicated and nuanced language that people use to describe them. Our operational-quality event and relationship attribute labeler enables Warfighters and analysts to more thoroughly exploit information in unstructured text. This is made possible through 1) more precise event and relationship interpretation, 2) more detailed information about extracted events and relationships, and 3) more reliable and informative entity networks that acknowledge the different attributes of entity-entity relationships.

  14. Towards syntactic characterizations of approximation schemes via predicate and graph decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H.B. III; Stearns, R.E.; Jacob, R.

    1998-12-01

    The authors present a simple extensible theoretical framework for devising polynomial time approximation schemes for problems represented using natural syntactic (algebraic) specifications endowed with natural graph theoretic restrictions on input instances. Direct application of the technique yields polynomial time approximation schemes for all the problems studied in [LT80, NC88, KM96, Ba83, DTS93, HM+94a, HM+94] as well as the first known approximation schemes for a number of additional combinatorial problems. One notable aspect of the work is that it provides insights into the structure of the syntactic specifications and the corresponding algorithms considered in [KM96, HM+94]. The understanding allows them to extend the class of syntactic specifications for which generic approximation schemes can be developed. The results can be shown to be tight in many cases, i.e. natural extensions of the specifications can be shown to yield non-approximable problems. The results provide a non-trivial characterization of a class of problems having a PTAS and extend the earlier work on this topic by [KM96, HM+94].

  15. Generating compact classifier systems using a simple artificial immune system.

    PubMed

    Leung, Kevin; Cheong, France; Cheong, Christopher

    2007-10-01

    Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system called simple AIS is described. It is different from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and in addition, no population control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers.

  16. A Simple Acronym for Doing Calculus: CAL

    ERIC Educational Resources Information Center

    Hathaway, Richard J.

    2008-01-01

    An acronym is presented that provides students a potentially useful, unifying view of the major topics covered in an elementary calculus sequence. The acronym (CAL) is based on viewing the calculus procedure for solving a calculus problem P* in three steps: (1) recognizing that the problem cannot be solved using simple (non-calculus) techniques;…

  17. Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm

    DTIC Science & Technology

    1978-09-01

    deterministic equivalent form of HQ's problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in ... (1964). ... 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research

  18. The Role of Competitive Inhibition and Top-Down Feedback in Binding during Object Recognition

    PubMed Central

    Wyatte, Dean; Herd, Seth; Mingus, Brian; O’Reilly, Randall

    2012-01-01

    How does the brain bind together visual features that are processed concurrently by different neurons into a unified percept suitable for processes such as object recognition? Here, we describe how simple, commonly accepted principles of neural processing can interact over time to solve the brain’s binding problem. We focus on mechanisms of neural inhibition and top-down feedback. Specifically, we describe how inhibition creates competition among neural populations that code different features, effectively suppressing irrelevant information, and thus minimizing illusory conjunctions. Top-down feedback contributes to binding in a similar manner, but by reinforcing relevant features. Together, inhibition and top-down feedback contribute to a competitive environment that ensures only the most appropriate features are bound together. We demonstrate this overall proposal using a biologically realistic neural model of vision that processes features across a hierarchy of interconnected brain areas. Finally, we argue that temporal synchrony plays only a limited role in binding – it does not simultaneously bind multiple objects, but does aid in creating additional contrast between relevant and irrelevant features. Thus, our overall theory constitutes a solution to the binding problem that relies only on simple neural principles without any binding-specific processes. PMID:22719733

  19. Influence of different types of seals on the stability behavior of turbopumps

    NASA Technical Reports Server (NTRS)

    Diewald, W.; Nordmann, R.

    1989-01-01

    One of the main problems in designing a centrifugal pump is to achieve a good efficiency while not neglecting the dynamic performance of the machine. The first aspect leads to the design of grooved seals in order to minimize the leakage flow. But the influence of these grooves on the dynamic behavior is not well known. Experimental and theoretical results for the rotordynamic coefficients of seals with different groove shapes and depths are presented. In addition, the coefficients are applied to a simple pump model.

  20. Detection of occult abscesses with 111In-labeled leukocytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, W.R.; Gurevich, N.; Goris, M.L.

    1979-07-01

    Clinicians are frequently faced with the problem of a patient in whom they suspect an occult abscess. In such a situation, there may be no clinical signs to localize the site of the abscess and often extensive investigations do not provide additional useful information. This report illustrates the efficacy of autologous leukocytes labeled with 111In oxine in detecting the site and extent of occult abscesses in two patients. The technique of in vitro labeling of leukocytes is simple and has been mastered by all of our nuclear medicine technologists.

  1. Excess Claims and Data Trimming in the Context of Credibility Rating Procedures,

    DTIC Science & Technology

    1981-11-01

    Trimming in the Context of Credibility Rating Procedures, by Hans Bühlmann, Alois Gisler, William S. Jewell. 1. Motivation: In Ratemaking and in Experience ... work on the ETH computer. ... 2. The Basic Model: Throughout the paper we work with the most simple model in the credibility ... additional structure are summed up by stating that the density f_θ(x) has the following form: (1) f_θ(x) = (1 − r) p_0(x/θ) + r p_θ(x). 3. The Basic Problem: As

  2. On Tree-Based Phylogenetic Networks.

    PubMed

    Zhang, Louxin

    2016-07-01

    A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. We present a simple necessary and sufficient condition for tree-based networks and prove that a universal tree-based network exists for any number of taxa that contains as its base every phylogenetic tree on the same set of taxa. This answers two problems posted by Francis and Steel recently. A byproduct is a computer program for generating random binary phylogenetic networks under the uniform distribution model.

  3. Improved silver staining of nucleolar organiser regions in paraffin wax sections using an inverted incubation technique.

    PubMed Central

    Coghill, G; Grant, A; Orrell, J M; Jankowski, J; Evans, A T

    1990-01-01

    A new simple modification to the silver staining of nucleolar organiser regions (AgNORs) was devised which, by performing the incubation with the slide inverted, results in minimal undesirable background staining, a persistent problem. Inverted incubation is facilitated by the use of a commercially available plastic coverplate. This technique has several additional advantages over other published staining protocols. In particular, the method is straightforward, fast, and maintains a high degree of contrast between the background and the AgNORs. PMID:1702451

  4. Nonblocking and orphan free message logging protocols

    NASA Technical Reports Server (NTRS)

    Alvisi, Lorenzo; Hoppe, Bruce; Marzullo, Keith

    1992-01-01

    Currently existing message logging protocols demonstrate a classic pessimistic vs. optimistic tradeoff. We show that the optimistic-pessimistic tradeoff is not inherent to the problem of message logging. We construct a message-logging protocol that has the positive features of both optimistic and pessimistic protocols: our protocol prevents orphans and allows simple failure recovery, yet it requires no blocking in failure-free runs. Furthermore, this protocol does not introduce any additional message overhead as compared to one implemented for a system in which messages may be lost but processes do not crash.

  5. Nonblocking and orphan free message logging protocols

    NASA Astrophysics Data System (ADS)

    Alvisi, Lorenzo; Hoppe, Bruce; Marzullo, Keith

    1992-12-01

    Currently existing message logging protocols demonstrate a classic pessimistic vs. optimistic tradeoff. We show that the optimistic-pessimistic tradeoff is not inherent to the problem of message logging. We construct a message-logging protocol that has the positive features of both optimistic and pessimistic protocols: our protocol prevents orphans and allows simple failure recovery, yet it requires no blocking in failure-free runs. Furthermore, this protocol does not introduce any additional message overhead as compared to one implemented for a system in which messages may be lost but processes do not crash.

  6. Integrated Force Method Solution to Indeterminate Structural Mechanics Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.

    2004-01-01

    Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis, based primarily on the equilibrium concept, is well understood. Solutions of indeterminate problems require additional compatibility conditions, and their treatment has been less well understood. A solution to an indeterminate problem is traditionally generated by manipulating the equilibrium concept, either by rewriting it in the displacement variables or through the cutting and gap-closing technique of the redundant force method. Such improvisation of compatibility has made analysis cumbersome. The authors have researched and clarified the compatibility theory, so that solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM; displacements are back-calculated from the forces. The IFM equations can be manipulated to obtain the Dual Integrated Force Method (IFMD), in which displacement is the primary variable and force is back-calculated. The subject is introduced through the response variables (force, deformation, displacement) and the underlying concepts (equilibrium equations, force-deformation relations, deformation-displacement relations, and compatibility conditions). Mechanical load, temperature variation, and support settlement are equally emphasized. The basic theory is discussed, and a set of examples illustrates the new concepts. IFM- and IFMD-based finite element methods are introduced for simple problems.
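
    Schematically, and hedged here because the abstract gives no equations, the IFM system is commonly written by stacking the m equilibrium equations and r compatibility conditions into one square system in the n = m + r unknown forces (notation assumed):

    ```latex
    % Schematic IFM governing system (notation assumed):
    % [B] m x n equilibrium matrix, [C] r x n compatibility matrix,
    % [G] n x n flexibility matrix, {F} forces, {P} loads, {delta R} initial deformations
    \begin{bmatrix} B \\ C\,G \end{bmatrix}\{F\}
      = \begin{Bmatrix} P \\ \delta R \end{Bmatrix},
    \qquad n = m + r .
    ```

    Displacements are then back-calculated from the computed forces via the force-deformation relation.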

  7. Turning Points of the Spherical Pendulum and the Golden Ratio

    ERIC Educational Resources Information Center

    Essen, Hanno; Apazidis, Nicholas

    2009-01-01

    We study the turning point problem of a spherical pendulum. The special cases of the simple pendulum and the conical pendulum are noted. For simple initial conditions the solution to this problem involves the golden ratio, also called the golden section, or the golden number. This number often appears in mathematics where you least expect it. To…

  8. String and Sticky Tape Experiments: Simple Self-Lubricated Electric Motor for Elementary Physics Lab.

    ERIC Educational Resources Information Center

    Entrikin, Jerry; Griffiths, David

    1983-01-01

    The main problem in constructing functioning electric motors from simple parts is the mounting of the axle (which is too flimsy to maintain good electrical contacts or too tight, imposing excessive friction at the supports). This problem is solved by using a pencil sharpened at both ends as the axle. (JN)

  9. Special Relativity as a Simple Geometry Problem

    ERIC Educational Resources Information Center

    de Abreu, Rodrigo; Guerra, Vasco

    2009-01-01

    The null result of the Michelson-Morley experiment and the constancy of the one-way speed of light in the "rest system" are used to formulate a simple problem, to be solved by elementary geometry techniques using a pair of compasses and non-graduated rulers. The solution consists of a drawing allowing a direct visualization of all the fundamental…

  10. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as field dependent magnetization of ferromagnetic materials, but also as stress-strain-curves of materials measured by tensile tests including thermal effects, liquid-solid phase transitions, in cell biology or economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation which is based on basic spreadsheet analysis plus an easy macro code can be used by students to understand how these systems work and how the parameters influence the reactions of the system on an external field. Importantly, in the step-by-step mode, each change of the system state, compared to the last step, becomes visible. The simple program can be developed further by several changes and additions, enabling the building of a tool which is capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas, in which similar hysteresis loops occur.
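
    In the same spirit (a minimal sketch, not the authors' spreadsheet macro; all parameters are illustrative), a collection of simple two-state relay units with different switching thresholds driven by a cyclic external field traces out a hysteresis loop whose shape depends on the chosen thresholds.

    ```python
    import numpy as np

    def hysteresis_loop(n_units=50, n_steps=400, h_max=2.0, spread=1.0, seed=0):
        """Drive n_units two-state relay units through one field cycle.

        Each unit switches up at +threshold and down at -threshold; the mean of
        the unit states versus the applied field H forms a hysteresis loop.
        """
        rng = np.random.default_rng(seed)
        thresholds = np.abs(rng.normal(0.5, spread / 3, n_units))  # switching fields
        state = -np.ones(n_units)                                  # start saturated "down"
        # field sweeps up from -h_max to +h_max, then back down to -h_max
        H = np.concatenate([np.linspace(-h_max, h_max, n_steps // 2),
                            np.linspace(h_max, -h_max, n_steps // 2)])
        M = []
        for h in H:
            state[h >= thresholds] = 1.0     # switch up when field exceeds +threshold
            state[h <= -thresholds] = -1.0   # switch down when field drops below -threshold
            M.append(state.mean())
        return H, np.array(M)

    # Plotting M against H (e.g. with matplotlib) shows the loop.
    H, M = hysteresis_loop()
    ```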

  11. Definitive or conservative surgery for perforated gastric ulcer?--An unresolved problem.

    PubMed

    Sarath Chandra, Sistla; Kumar, S Siva

    2009-04-01

    Gastric ulcer perforation has not been the focus of many studies. In addition, there is a need to analyze the results of gastric perforation separately and not along with duodenal perforations, to identify the factors influencing the outcome and to develop strategies for its management. Retrospective analysis of 54 patients presenting with gastric perforation. Mean age of the patients was 44.5 years with male preponderance. Morbidity following closure of the perforation, acid reduction surgery and resection was not significantly different. Overall mortality was 16.6%, with the highest mortality (24.1%) following simple closure. Mortality following simple closure and definitive surgery was not significantly different. Univariate analysis revealed preoperative shock, associated medical illness and surgical delay to be significant factors for mortality whereas on multivariate analysis, preoperative shock was the only independent predictor of mortality. Mortality increased with increasing Boey score but the association between the type of surgery and probability of survival was not statistically significant. Boey risk score is useful in predicting the outcome of surgical treatment for gastric perforation. Definitive surgery is not associated with greater morbidity or mortality compared to simple closure.

  12. Benchmark solutions for the galactic ion transport equations: Energy and spatially dependent problems

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.

    1989-01-01

    Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.

  13. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepare this method for additional research. The potential for this method in more complicated flows is described.

  14. A Genetic Algorithm Tool (splicer) for Complex Scheduling Problems and the Space Station Freedom Resupply Problem

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Valenzuela-Rendon, Manuel

    1993-01-01

    The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.

  15. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    PubMed

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  16. Implicitly solving phase appearance and disappearance problems using two-fluid six-equation model

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-01-25

    The phase appearance and disappearance issue presents serious numerical challenges in two-phase flow simulations using the two-fluid six-equation model. Numerical challenges arise from the singular equation system when one phase is absent, as well as from the discontinuity in the solution space when one phase appears or disappears. In this work, a high-resolution spatial discretization scheme on staggered grids and fully implicit methods were applied to the simulation of two-phase flow problems using the two-fluid six-equation model. A Jacobian-free Newton-Krylov (JFNK) method was used to solve the discretized nonlinear problem. An improved numerical treatment was proposed and proved effective in handling the numerical challenges. The treatment scheme is conceptually simple, easy to implement, and does not require explicit truncations on solutions, which is essential to conserve mass and energy. Various types of phase appearance and disappearance problems relevant to thermal-hydraulics analysis have been investigated, including a sedimentation problem, an oscillating manometer problem, a non-condensable gas injection problem, a single-phase flow with heat addition problem, and a subcooled flow boiling problem. Successful simulations of these problems demonstrate the capability and robustness of the proposed numerical methods and numerical treatments. As a result, the volume fraction of the absent phase can be calculated effectively as zero.

  17. A Simple Experimental Setup for Teaching Additive Colors with Arduino

    NASA Astrophysics Data System (ADS)

    Carvalho, Paulo Simeão; Hahn, Marcelo

    2016-04-01

    The result of additive colors is always fascinating to young students. When we teach this topic to 14- to 16-year-old students, they do not usually notice we use maximum light quantities of red (R), green (G), and blue (B) to obtain yellow, magenta, and cyan colors in order to build the well-known additive color diagram of Fig. 1. But how about using different light intensities for R, G, and B? What colors do we get? This problem of color mixing has been intensively discussed for decades by several authors, as pointed out by Ruiz's "Color Addition and Subtraction Apps" work and the references included therein. An early LED demonstrator for additive color mixing dates back to 1985, and apps to illustrate color mixing are available online. In this work, we describe an experimental setup making use of a microcontroller device: the Arduino Uno. This setup is designed as a game in order to improve students' understanding of color mixing.
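
    The arithmetic behind additive mixing is simply channel-wise addition of light intensities, clamped at the display maximum. A minimal sketch of that idea (not the article's Arduino game, whose wiring and sketch are not reproduced in this record):

```python
def add_colors(*rgbs, max_level=255):
    """Additively mix RGB intensity triples, clamping each channel at max_level."""
    return tuple(min(max_level, sum(channel)) for channel in zip(*rgbs))

# Full-intensity red + green gives yellow; partial intensities give dimmer mixtures.
print(add_colors((255, 0, 0), (0, 255, 0)))             # (255, 255, 0)
print(add_colors((128, 0, 0), (0, 64, 0), (0, 0, 32)))  # (128, 64, 32)
```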

  18. Pizza again? On the division of polygons into sections with a common origin

    NASA Astrophysics Data System (ADS)

    Sinitsky, Ilya; Stupel, Moshe; Sinitsky, Marina

    2018-02-01

    The paper explores the division of a polygon into equal-area pieces using line segments originating at a common point. The mathematical background of the proposed method is very simple and belongs to secondary school geometry. Simple examples dividing a square into two, four or eight congruent pieces provide a starting point to discovering how to divide a regular polygon into any number of equal-area pieces using line segments originating from the centre. Moreover, it turns out that there are infinite ways to do the division. Discovering the basic invariant involved allows application of the same procedure to divide any tangential polygon, as after suitable adjustment, it can be used also for rectangles and parallelograms. Further generalization offers many additional solutions of the problem, and some of them are presented for the case of an arbitrary triangle and a square. Links to dynamic demonstrations in GeoGebra serve to illustrate the main results.
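
    The invariant behind the construction can be stated compactly (a standard geometric fact, sketched here for context rather than quoted from the paper): if a polygon has an inscribed circle of radius r, then a piece cut off by two segments from the incentre and a portion of the boundary of length ℓ decomposes into triangles of height r over bases lying on the boundary, so its area is

```latex
A(\ell) = \tfrac{1}{2}\, r\, \ell .
```

    Equal-area pieces therefore correspond exactly to equal boundary lengths, which is why dividing the perimeter of a tangential polygon into n equal parts divides its area into n equal sectors.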

  19. The Lippmann-Dewey "Debate" Revisited: The Problem of Knowledge and the Role of Experts in Modern Democratic Theory

    ERIC Educational Resources Information Center

    DeCesare, Tony

    2012-01-01

    With only some fear of oversimplification, the fundamental differences between Walter Lippmann and John Dewey that are of concern here can be introduced by giving attention to Lippmann's deceptively simple formulation of a central problem in democratic theory: "The environment is complex. Man's political capacity is simple. Can a bridge be built…

  20. A survey of methods of feasible directions for the solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1972-01-01

    Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems are shown as: (1) fixed time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed time problems with inequality state space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
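
    As orientation for the first class of methods surveyed, the sketch below shows a plain Frank-Wolfe iteration on a finite-dimensional, box-constrained quadratic. It is only an illustrative analogue of the conditional-gradient idea, not the optimal control formulation in the survey; the objective, bounds, and step rule are assumptions made for the example.

```python
import numpy as np

# Minimize f(x) = 0.5 * ||x - target||^2 subject to box constraints lo <= x <= hi.
rng = np.random.default_rng(0)
target = rng.normal(size=5)
lo, hi = -0.5 * np.ones(5), 0.5 * np.ones(5)

x = np.zeros(5)
for k in range(2000):
    grad = x - target
    s = np.where(grad > 0, lo, hi)      # linear subproblem over the box: pick the best vertex
    gamma = 2.0 / (k + 2.0)             # classical Frank-Wolfe step-size rule
    x = x + gamma * (s - x)

# The iterate approaches the projection of target onto the box at an O(1/k) rate.
print(np.round(x, 3))
print(np.round(np.clip(target, lo, hi), 3))   # exact solution for comparison
```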

  1. Orthodontics for the dog. Treatment methods.

    PubMed

    Ross, D L

    1986-09-01

    This article considers the prevention of orthodontic problems, occlusal adjustments, simple tooth movements, rotational techniques, tipping problems, adjustment of crown height, descriptions of common orthodontic appliances, and problems associated with therapy.

  2. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP ® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  3. Ecological principles, biodiversity, and the electric utility industry

    NASA Astrophysics Data System (ADS)

    Temple, Stanley A.

    1996-11-01

    The synthetic field of conservation biology uses principles derived from many different disciplines to address biodiversity issues. Many of these principles have come from ecology, and two simple ones that seem to relate to many issues involving the utility industry are: (1) “Everything is interconnected” (and should usually stay that way), and (2) “We can never do merely one thing.” The first principle can be applied to both the biotic and physical environments that are impacted by industrial activities. Habitat fragmentation and the loss of physical and biotic connectedness that results are frequently associated with transmission rights-of-way. These problems can be reduced—or even turned into conservation benefits—by careful planning and creative management. The second principle applies to the utility industry's programs to deal with carbon released by burning fossil fuels. Ecological knowledge can allow these programs to contribute to the preservation of biodiversity in addition to addressing a pollution problem. Without careful ecological analyses, industry could easily create new problems while implementing solutions to old ones.

  4. Unemployment, Parental Distress and Youth Emotional Well-Being: The Moderation Roles of Parent-Youth Relationship and Financial Deprivation.

    PubMed

    Frasquilho, Diana; de Matos, Margarida Gaspar; Marques, Adilson; Neville, Fergus G; Gaspar, Tânia; Caldas-de-Almeida, J M

    2016-10-01

    We investigated, in a sample of 112 unemployed parents of adolescents aged 10-19 years, the links between parental distress and change in youth emotional problems related to parental unemployment, and the moderation roles of parent-youth relationship and financial deprivation. Data were analyzed using descriptive statistics and correlations. Further, simple moderation, additive moderation, and moderated moderation models of regression were performed to analyze the effects of parental distress, parent-youth relationship and financial deprivation in predicting change in youth emotional problems related to parental unemployment. Results show that parental distress moderated by parent-youth relationship predicted levels of change in youth emotional problems related to parental unemployment. This study provides evidence that during job loss, parental distress is linked to youth emotional well-being and that parent-youth relationships play an important moderation role. This highlights the importance of further research on the impact of parental distress on youth well-being, especially during periods of high unemployment.
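
    In estimation terms, a simple moderation model of the kind described is a regression with an interaction term. The sketch below uses synthetic data and hypothetical variable names standing in for the study's measures; it is not the authors' dataset or exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with made-up effect sizes; the names only mimic the study's constructs.
rng = np.random.default_rng(1)
n = 112
distress = rng.normal(size=n)                  # parental distress (predictor)
relation = rng.normal(size=n)                  # parent-youth relationship (moderator)
outcome = 0.4 * distress - 0.3 * distress * relation + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({"distress": distress, "relation": relation, "outcome": outcome})

# "a * b" expands to a + b + a:b; the a:b coefficient is the moderation (interaction) effect.
model = smf.ols("outcome ~ distress * relation", data=df).fit()
print(model.params)
```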

  5. Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers

    NASA Astrophysics Data System (ADS)

    Lemyre Garneau, Mathieu

    A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, category), the number of constraints (from 5 to 17) and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) power plant with molten salt thermal storage. The model simulates the performance of the power plant by using a high-level modeling of each of its main components, namely a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator and an idealized powerblock. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering and spillage losses as a function of the design parameters. A Monte-Carlo integral method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the powerblock, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses or component failure. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.

  6. Identifying non-elliptical entity mentions in a coordinated NP with ellipses.

    PubMed

    Chae, Jeongmin; Jung, Younghee; Lee, Taemin; Jung, Soonyoung; Huh, Chan; Kim, Gilhan; Kim, Hyeoncheol; Oh, Heungbum

    2014-02-01

    Named entities in the biomedical domain are often written using a Noun Phrase (NP) along with a coordinating conjunction such as 'and' and 'or'. In addition, repeated words among named entity mentions are frequently omitted. It is often difficult to identify named entities. Although various Named Entity Recognition (NER) methods have tried to solve this problem, these methods can only deal with relatively simple elliptical patterns in coordinated NPs. We propose a new NER method for identifying non-elliptical entity mentions with simple or complex ellipses using linguistic rules and an entity mention dictionary. The GENIA and CRAFT corpora were used to evaluate the performance of the proposed system. The GENIA corpus was used to evaluate the performance of the system according to the quality of the dictionary. The GENIA corpus comprises 3434 non-elliptical entity mentions in 1585 coordinated NPs with ellipses. The system achieves 92.11% precision, 95.20% recall, and 93.63% F-score in identification of non-elliptical entity mentions in coordinated NPs. The accuracy of the system in resolving simple and complex ellipses is 94.54% and 91.95%, respectively. The CRAFT corpus was used to evaluate the performance of the system under realistic conditions. The system achieved 78.47% precision, 67.10% recall, and 72.34% F-score in coordinated NPs. The performance evaluations of the system show that it efficiently solves the problem caused by ellipses, and improves NER performance. The algorithm is implemented in PHP and the code can be downloaded from https://code.google.com/p/medtextmining/. Copyright © 2013. Published by Elsevier Inc.

  7. Hardware problems encountered in solar heating and cooling systems

    NASA Technical Reports Server (NTRS)

    Cash, M.

    1978-01-01

    Numerous problems in the design, production, installation, and operation of solar energy systems are discussed. Described are hardware problems, which range from simple to obscure and complex, and their resolution.

  8. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    A certain class of singular integral equations that may arise from the mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement and consequently the kernel has strong singularities of the form (t-x)^(-2) and x^(n-2)(t+x)^(-n) (n ≥ 2; 0 < x, t < b). The complex function theory is used to determine the fundamental function of the problem for the general case and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  9. Brane junctions in the Randall-Sundrum scenario

    NASA Astrophysics Data System (ADS)

    Csáki, Csaba; Shirman, Yuri

    2000-01-01

    We present static solutions to Einstein's equations corresponding to branes at various angles intersecting in a single 3-brane. Such configurations may be useful for building models with localized gravity via the Randall-Sundrum mechanism. We find that such solutions may exist only if the mechanical forces acting on the junction exactly cancel. In addition to this constraint there are further conditions that the parameters of the theory have to satisfy. We find that at least one of these involves only the brane tensions and cosmological constants, and thus cannot have a dynamical origin. We present these conditions in detail for two simple examples. We discuss the nature of the cosmological constant problem in the framework of these scenarios, and outline the desired features of the brane configurations which may bring us closer towards a resolution of the cosmological constant problem.

  10. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
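
    The sequential recipe itself amounts to two steps: solve the unconstrained denoising problem, then clip the result into the interval. The rough sketch below uses scikit-image's isotropic TV denoiser as a stand-in for the anisotropic-TV solver analysed in the paper, so it only illustrates the structure of the approach, not the exact setting of the result.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = img_as_float(data.camera())
noisy = clean + 0.1 * rng.normal(size=clean.shape)

lo, hi = 0.0, 1.0                                          # uniform interval constraint (e.g. physical bounds)
unconstrained = denoise_tv_chambolle(noisy, weight=0.1)    # step 1: unconstrained TV denoising
constrained = np.clip(unconstrained, lo, hi)               # step 2: threshold into [lo, hi]
```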

  11. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803

  12. Considerations affecting the additional weight required in mass balance of ailerons

    NASA Technical Reports Server (NTRS)

    Diehl, W S

    1937-01-01

    This paper is essentially a consideration of mass balance of ailerons from a preliminary design standpoint, in which the extra weight of the mass counterbalance is the most important phase of the problem. Equations are developed for the required balance weight for a simple aileron and this weight is correlated with the mass-balance coefficient. It is concluded that the location of the c.g. of the basic aileron is of paramount importance and that complete mass balance imposes no great weight penalty if the aileron is designed to have its c.g. inherently near to the hinge axis.

  13. Wentzel-Kramers-Brillouin method in the Bargmann representation. [of quantum mechanics

    NASA Technical Reports Server (NTRS)

    Voros, A.

    1989-01-01

    It is demonstrated that the Bargmann representation of quantum mechanics is ideally suited for semiclassical analysis, using as an example the WKB method applied to the bound-state problem in a single well of one degree of freedom. For the harmonic oscillator, this WKB method trivially gives the exact eigenfunctions in addition to the exact eigenvalues. For an anharmonic well, a self-consistent variational choice of the representation greatly improves the accuracy of the semiclassical ground state. Also, a simple change of scale illuminates the relationship of semiclassical versus linear perturbative expansions, allowing a variety of multidimensional extensions.

  14. Flow Past a Descending Balloon

    NASA Technical Reports Server (NTRS)

    Baginski, Frank

    2001-01-01

    In this report, we present our findings related to aerodynamic loading of partially inflated balloon shapes. This report will consider aerodynamic loading of partially inflated inextensible natural shape balloons and some relevant problems in potential flow. For the axisymmetric modeling, we modified our Balloon Design Shape Program (BDSP) to handle axisymmetric inextensible ascent shapes with aerodynamic loading. For a few simple examples of two dimensional potential flows, we used the Matlab PDE Toolbox. In addition, we propose a model for aerodynamic loading of strained energy minimizing balloon shapes with lobes. Numerical solutions are presented for partially inflated strained balloon shapes with lobes and no aerodynamic loading.

  15. Alumina Handling Dustiness

    NASA Astrophysics Data System (ADS)

    Authier-Martin, Monique

    Dustiness of calcined alumina is a major concern, causing undesirable working conditions and serious alumina losses. These losses occur primarily during unloading and handling or pot loading and crust breaking. The handling side of the problem is first addressed. The Perra pulvimeter constitutes a simple and reproducible tool to quantify handling dustiness and yields results in agreement with plant experience. Attempts are made to correlate dustiness with bulk properties (particle size, attrition index, …) for a large number of diverse aluminas. The characterization of the dust generated with the Perra pulvimeter is most revealing. The effect of the addition of E.S.P. dust is also reported.

  16. Quantum Approach to Cournot-type Competition

    NASA Astrophysics Data System (ADS)

    Frąckiewicz, Piotr

    2018-02-01

    The aim of this paper is to investigate Cournot-type competition in the quantum domain with the use of the Li-Du-Massar scheme for continuous-variable quantum games. We derive a formula which, in a simple way, determines a unique Nash equilibrium. The result concerns a large class of Cournot duopoly problems, including competition where the demand and cost functions are not necessarily linear. Further, we show that the Nash equilibrium converges to a Pareto-optimal strategy profile as the quantum correlation increases. In addition to illustrating how the formula works, we provide the readers with two examples.

  17. Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…

  18. Impact of simple conventional and Telehealth solutions on improving mental health in Afghanistan.

    PubMed

    Khoja, Shariq; Scott, Richard; Husyin, Nida; Durrani, Hammad; Arif, Maria; Faqiri, Faqir; Hedayat, Ebadullah; Yousufzai, Wahab

    2016-12-01

    For more than a century Afghanistan has been unstable, facing decades of war, social problems, and intense poverty. As a result, many of the population suffer from a variety of mental health problems. The Government recognises the situation and has prioritised mental health, but progress is slow and services outside of Kabul remain poor. An international collaborative implemented a project in Badakshan province of Afghanistan using conventional and simple low-cost e-Health solutions to address the four most common issues: depression, psychosis, post-traumatic stress disorder, and substance abuse. Conventional town hall meetings informed community members to raise awareness and knowledge. In addition, an android-based mobile application used the World Health Organization's Mental Health Gap Action Programme guidelines and protocols to: collect information from community healthcare workers; provide referral services to patients; provide blended learning to improve providers' mental health knowledge, skills, and practice; and to provide store-and-forward and live consultations. Preliminary evaluation of the intervention shows enhanced access to care for remote communities, decreased stigma, and improved quality of health services. Primary care workers are also able to bridge the gap in consultations for rural and remote communities, connecting them with specialists and providing better access to care. © The Author(s) 2016.

  19. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale/three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. The simple substitution of Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.
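
    The linear-algebra kernel described, a Jacobi (diagonal) preconditioned conjugate-gradient iteration applied matrix-free, can be sketched generically as follows; the small tridiagonal test operator is a placeholder, not the LSFEM system of the paper.

```python
import numpy as np

def jacobi_pcg(apply_A, diag_A, b, tol=1e-10, max_iter=1000):
    """Preconditioned CG for SPD systems using only matrix-vector products (matrix-free)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_A                      # Jacobi preconditioner: divide by the diagonal
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag_A
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem (1D Laplacian), standing in for the finite-element system.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(lambda v: A @ v, np.diag(A).copy(), b)
print(np.linalg.norm(A @ x - b))        # residual should be near the tolerance
```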

  20. Statistical methodologies for the control of dynamic remapping

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Nicol, D. M.

    1986-01-01

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.

  1. Simple algorithm for improved security in the FDDI protocol

    NASA Astrophysics Data System (ADS)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties in a fiber optic ring network. This method differs from conventional methods in that end to end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support realtime communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
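
    The cipher primitive described, modulo-two (XOR) addition of the message against keying material derived from a per-transmission initialization vector, can be sketched as below. The keystream construction here (simply repeating the IV) is deliberately naive and only illustrates the XOR mechanics; it is not a secure or FDDI-specific implementation.

```python
import os
from itertools import cycle

def xor_stream(data: bytes, iv: bytes) -> bytes:
    """Modulo-two addition of each message byte with a keystream (here: the repeated IV)."""
    return bytes(b ^ k for b, k in zip(data, cycle(iv)))

iv = os.urandom(16)                            # unique initialization vector per transmission
message = b"confidential ring traffic"
ciphertext = xor_stream(message, iv)           # encrypt at the transmitting station
assert xor_stream(ciphertext, iv) == message   # XOR is its own inverse, so this decrypts
```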

  2. Effects of blade-to-blade dissimilarities on rotor-body lead-lag dynamics

    NASA Technical Reports Server (NTRS)

    Mcnulty, M. J.

    1986-01-01

    Small blade-to-blade property differences are investigated to determine their effects on the behavior of a simple rotor-body system. An analytical approach is used which emphasizes the significance of these effects from the experimental point of view. It is found that the primary effect of blade-to-blade dissimilarities is the appearance of additional peaks in the frequency spectrum which are separated from the conventional response modes by multiples of the rotor speed. These additional responses are potential experimental problems because when they occur near a mode of interest they act as contaminant frequencies which can make damping measurements difficult. The effects of increased rotor-body coupling and a rotor shaft degree of freedom act to improve the situation by altering the frequency separation of the modes.

  3. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
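
    The workhorse inside such first-order schemes is the proximal operator of the mixed norm. For the two-level ℓ₁/ℓ₂ (group-lasso) penalty with rows as groups, it reduces to row-wise soft-thresholding, shown below as a generic illustration rather than as the MxNE solver itself.

```python
import numpy as np

def prox_l21(X, threshold):
    """Proximal operator of threshold * sum_i ||X[i, :]||_2 (row-wise group soft-thresholding)."""
    row_norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - threshold / np.maximum(row_norms, 1e-12))
    return X * scale

X = np.array([[3.0, 4.0],      # row norm 5.0  -> shrunk but kept (a "focal" row)
              [0.1, 0.2]])     # row norm 0.22 -> zeroed out entirely
print(prox_l21(X, threshold=1.0))
```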

  4. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  5. Concurrent airline fleet allocation and aircraft design with profit modeling for multiple airlines

    NASA Astrophysics Data System (ADS)

    Govindaraju, Parithi

    A "System of Systems" (SoS) approach is particularly beneficial in analyzing complex large scale systems comprised of numerous independent systems -- each capable of independent operations in their own right -- that when brought in conjunction offer capabilities and performance beyond the constituents of the individual systems. The variable resource allocation problem is a type of SoS problem, which includes the allocation of "yet-to-be-designed" systems in addition to existing resources and systems. The methodology presented here expands upon earlier work that demonstrated a decomposition approach that sought to simultaneously design a new aircraft and allocate this new aircraft along with existing aircraft in an effort to meet passenger demand at minimum fleet level operating cost for a single airline. The result of this describes important characteristics of the new aircraft. The ticket price model developed and implemented here enables analysis of the system using profit maximization studies instead of cost minimization. A multiobjective problem formulation has been implemented to determine characteristics of a new aircraft that maximizes the profit of multiple airlines to recognize the fact that aircraft manufacturers sell their aircraft to multiple customers and seldom design aircraft customized to a single airline's operations. The route network characteristics of two simple airlines serve as the example problem for the initial studies. The resulting problem formulation is a mixed-integer nonlinear programming problem, which is typically difficult to solve. A sequential decomposition strategy is applied as a solution methodology by segregating the allocation (integer programming) and aircraft design (non-linear programming) subspaces. After solving a simple problem considering two airlines, the decomposition approach is then applied to two larger airline route networks representing actual airline operations in the year 2005. The decomposition strategy serves as a promising technique for future detailed analyses. Results from the profit maximization studies favor a smaller aircraft in terms of passenger capacity due to its higher yield generation capability on shorter routes while results from the cost minimization studies favor a larger aircraft due to its lower direct operating cost per seat mile.

  6. Single-shot work extraction in quantum thermodynamics revisited

    NASA Astrophysics Data System (ADS)

    Wang, Shang-Yung

    2018-01-01

    We revisit the problem of work extraction from a system in contact with a heat bath to a work storage system, and the reverse problem of state formation from a thermal system state in single-shot quantum thermodynamics. A physically intuitive and mathematically simple approach using only elementary majorization theory and matrix analysis is developed, and a graphical interpretation of the maximum extractable work, minimum work cost of formation, and corresponding single-shot free energies is presented. This approach provides a bridge between two previous methods based respectively on the concept of thermomajorization and a comparison of subspace dimensions. In addition, a conceptual inconsistency with regard to general work extraction involving transitions between multiple energy levels of the work storage system is clarified and resolved. It is shown that an additional contribution to the maximum extractable work in those general cases should be interpreted not as work extracted from the system, but as heat transferred from the heat bath. Indeed, the additional contribution is an artifact of a work storage system (essentially a suspended ‘weight’ that can be raised or lowered) that does not truly distinguish work from heat. The result calls into question the common concept that a work storage system in quantum thermodynamics is simply the quantum version of a suspended weight in classical thermodynamics.

  7. Clairvoyant fusion: a new methodology for designing robust detection algorithms

    NASA Astrophysics Data System (ADS)

    Schaum, Alan

    2016-10-01

    Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for ALL discrete parameter problems.

  8. Problem Solving with the Elementary Youngster.

    ERIC Educational Resources Information Center

    Swartz, Vicki

    This paper explores research on problem solving and suggests a problem-solving approach to elementary school social studies, using a culture study of the ancient Egyptians and King Tut as a sample unit. The premise is that problem solving is particularly effective in dealing with problems which do not have one simple and correct answer but rather…

  9. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert

    PubMed Central

    Schmidt, Henk G.; Rikers, Remy M. J. P.; Custers, Eugene J. F. M.; Splinter, Ted A. W.; van Saase, Jan L. C. M.

    2010-01-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices’ decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases. PMID:20354726

  10. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert.

    PubMed

    Mamede, Sílvia; Schmidt, Henk G; Rikers, Remy M J P; Custers, Eugene J F M; Splinter, Ted A W; van Saase, Jan L C M

    2010-11-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices' decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases.

  11. The Synthesis of Proteins-A Simple Experiment To Show the Procedures and Problems of Using Radioisotopes in Biochemical Studies

    NASA Astrophysics Data System (ADS)

    Hawcroft, David M.

    1996-11-01

    Courses of organic chemistry frequently include studies of biochemistry and hence of biochemical techniques. Radioisotopes have played a major role in the understanding of metabolic pathways, transport, enzyme activity and other processes. The experiment described in this paper uses simple techniques to illustrate the procedures involved in working with radioisotopes when following a simplified metabolic pathway. Safety considerations are discussed and a list of safety rules is provided, but the experiment itself uses very low levels of a weak beta-emitting isotope (tritium). Plant material is suggested to reduce legal, financial and emotive problems, but the techniques are applicable to all soft-tissued material. The problems involved in data interpretation in radioisotope experiments resulting from radiation quenching are resolved by simple correction calculations, and the merits of using radioisotopes shown by a calculation of the low mass of material being measured. Suggestions for further experiments are given.

  12. Technology in rural transportation: "Simple Solutions"

    DOT National Transportation Integrated Search

    1997-10-01

    The Rural Outreach Project: Simple Solutions Report contains the findings of a research effort aimed at identifying and describing proven, cost-effective, low-tech solutions for rural transportation-related problems or needs. Through a process ...

  13. Bladder Control Problems in Women: Lifestyle Strategies for Relief

    MedlinePlus

    Bladder control: Lifestyle strategies ease problems. Simple lifestyle changes may improve bladder control or enhance response to medication. Find out what you can do to help with your bladder control problem.

  14. The 2014 Sandia Verification and Validation Challenge: Problem statement

    DOE PAGES

    Hu, Kenneth; Orient, George

    2016-01-18

    This paper presents a case study in utilizing information from experiments, models, and verification and validation (V&V) to support a decision. It consists of a simple system with data and models provided, plus a safety requirement to assess. The goal is to pose a problem that is flexible enough to allow challengers to demonstrate a variety of approaches, but constrained enough to focus attention on a theme. This was accomplished by providing a good deal of background information in addition to the data, models, and code, but directing the participants' activities with specific deliverables. In this challenge, the theme is how to gather and present evidence about the quality of model predictions, in order to support a decision. This case study formed the basis of the 2014 Sandia V&V Challenge Workshop and this resulting special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.

  15. From Feynman rules to conserved quantum numbers, I

    NASA Astrophysics Data System (ADS)

    Nogueira, P.

    2017-05-01

    In the context of Quantum Field Theory (QFT) there is often the need to find sets of graph-like diagrams (the so-called Feynman diagrams) for a given physical model. If the answer to the related problem 'Are there any diagrams with this set of external fields?' is negative, certain physical questions may be settled at once. Here the latter problem is formulated in terms of a system of linear diophantine equations derived from the Lagrangian density, from which necessary conditions for the existence of the required diagrams may be obtained. Those conditions are equalities that look like either linear diophantine equations or linear modular (i.e. congruence) equations, and may be found by means of fairly simple algorithms that involve integer computations. The diophantine equations so obtained represent (particle) number conservation rules, and are related to the conserved (additive) quantum numbers that may be assigned to the fields of the model.

  16. An investigation of rooftop STOL port aerodynamics

    NASA Technical Reports Server (NTRS)

    Blanton, J. N.; Parker, H. M.

    1972-01-01

    An investigation into aerodynamic problems associated with large building rooftop STOLports was performed. Initially, a qualitative flow visualization study indicated two essential problems: (1) the establishment of smooth, steady, attached flow over the rooftop, and (2) the generation of an acceptable crosswind profile once (1) has been achieved. This study indicated that (1) could be achieved by attaching circular-arc rounded edge extensions to the upper edges of the building and that crosswind profiles could be modified by the addition of porous vertical fences to the lateral edges of the rooftop. Important fence parameters associated with crosswind alteration were found to be solidity, fence element number, and spacing. Large scale building-induced velocity fluctuations were discovered for most configurations tested and a possible explanation for their occurrence was postulated. Finally, a simple equation relating fence solidity to the resulting velocity profile was developed and tested for non-uniform single element fences with 30 percent maximum solidity.

  17. Noise Response Data Reveal Novel Controllability Gramian for Nonlinear Network Dynamics

    PubMed Central

    Kashima, Kenji

    2016-01-01

    Control of nonlinear large-scale dynamical networks, e.g., collective behavior of agents interacting via a scale-free connection topology, is a central problem in many scientific and engineering fields. For the linear version of this problem, the so-called controllability Gramian has played an important role to quantify how effectively the dynamical states are reachable by a suitable driving input. In this paper, we first extend the notion of the controllability Gramian to nonlinear dynamics in terms of the Gibbs distribution. Next, we show that, when the networks are open to environmental noise, the newly defined Gramian is equal to the covariance matrix associated with randomly excited, but uncontrolled, dynamical state trajectories. This fact theoretically justifies a simple Monte Carlo simulation that can extract effectively controllable subdynamics in nonlinear complex networks. In addition, the result provides a novel insight into the relationship between controllability and statistical mechanics. PMID:27264780
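
    For the linear special case, the stated equivalence is easy to check numerically: the stationary covariance of a noise-driven stable linear system satisfies the same Lyapunov equation as the (infinite-horizon) controllability Gramian. A small sketch with an arbitrary made-up stable system, not a scale-free network:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # stable discrete-time dynamics (made up)
B = np.array([[1.0], [0.5]])             # channel through which noise/input enters

# Controllability Gramian W solves W = A W A^T + B B^T.
W = solve_discrete_lyapunov(A, B @ B.T)

# Monte Carlo: drive the uncontrolled system with white noise and estimate the state covariance.
x = np.zeros(2)
samples = []
for _ in range(100_000):
    x = A @ x + B @ rng.normal(size=1)
    samples.append(x.copy())
print(W)
print(np.cov(np.array(samples).T))       # approaches W as the run gets longer
```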

  18. On the solution of integral equations with a generalized Cauchy kernel

    NASA Technical Reports Server (NTRS)

    Kaya, A. C.; Erdogan, F.

    1986-01-01

    In this paper a certain class of singular integral equations that may arise from the mixed boundary value problems in nonhomogeneous materials is considered. The distinguishing feature of these equations is that in addition to the Cauchy singularity, the kernels contain terms that are singular only at the end points. In the form of the singular integral equations adopted, the density function is a potential or a displacement and consequently the kernel has strong singularities of the form (t-x)^(-2) and x^(n-2)(t+x)^(-n) (n ≥ 2; 0 < x, t < b). The complex function theory is used to determine the fundamental function of the problem for the general case and a simple numerical technique is described to solve the integral equation. Two examples from the theory of elasticity are then considered to show the application of the technique.

  19. Fostering collaboration in the medical practice: twenty-five tips.

    PubMed

    Hills, Laura

    2013-01-01

    It's a given that collaboration is an important aspect of medical practice management. But achieving genuine collaboration among the members of your medical practice team may not be as simple as it seems. This article suggests 25 practical strategies for medical practice employees and their managers to help them create and foster collaboration in their medical practices. Tips for collaborative goal setting, communication, ground rules, task delineation, sustainability, problem solving, and anticipating and handling problems are all described. In addition, this article offers a four-step strategy for dealing with a domineering collaborator and a five-step strategy for dealing with a collaboration slacker. This article also includes a 20-question self-quiz to help you and your employees evaluate your collaborative work style. Finally, this article describes 10 common collaboration pitfalls and the strategies you and your staff can use to avoid falling victim to them.

  20. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of convolutional neural network (CNN) input features based on Hough Transform. We perform morphological contrasting of the source image followed by Hough Transform, and then use it as input for some convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected. Morphological contrasting and Hough Transform are the only additional computational expenses of the introduced CNN input feature expansion. The proposed approach was demonstrated on the example of a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much additional computational effort, which can be extremely important in industrial recognition systems or difficult problems utilising CNNs, like pressure ridge analysis and classification.
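
    The feature-expansion idea, supplying a resized Hough accumulator alongside the raw image as an extra input channel, can be roughly sketched with scikit-image. The thresholding used here in place of the paper's morphological contrasting, and the channel layout, are assumptions made for illustration; the network itself is omitted.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.transform import hough_line, resize

image = img_as_float(data.camera())

# Hough accumulator of a crudely binarized image (rows = distances, columns = angles).
accumulator, angles, dists = hough_line(image > image.mean())

# Resize the accumulator to the image shape and stack it as an additional input channel.
hough_channel = resize(accumulator.astype(float), image.shape, anti_aliasing=True)
hough_channel /= hough_channel.max() + 1e-12
cnn_input = np.stack([image, hough_channel], axis=-1)   # shape (H, W, 2)
print(cnn_input.shape)
```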

  1. Relativistic quantum cryptography

    NASA Astrophysics Data System (ADS)

    Molotkov, S. N.; Nazin, S. S.

    2003-07-01

    The problem of unconditional security of quantum cryptography (i.e. the security which is guaranteed by the fundamental laws of nature rather than by technical limitations) is one of the central points in quantum information theory. We propose a relativistic quantum cryptosystem and prove its unconditional security against any eavesdropping attempts. Relativistic causality arguments allow the security of the system to be demonstrated in a simple way. Since the proposed protocol does not employ collective measurements and quantum codes, the cryptosystem can be experimentally realized with the present state of the art in fiber optics technologies. The proposed cryptosystem employs only individual measurements and classical codes and, in addition, the key distribution problem allows the choice of the state encoding scheme to be postponed until after the states are already received instead of choosing it before sending the states into the communication channel (i.e. to employ a sort of "antedate" coding).

  2. Verification of low-Mach number combustion codes using the method of manufactured solutions

    NASA Astrophysics Data System (ADS)

    Shunn, Lee; Ham, Frank; Knupp, Patrick; Moin, Parviz

    2007-11-01

    Many computational combustion models rely on tabulated constitutive relations to close the system of equations. As these reactive state-equations are typically multi-dimensional and highly non-linear, their implications on the convergence and accuracy of simulation codes are not well understood. In this presentation, the effects of tabulated state-relationships on the computational performance of low-Mach number combustion codes are explored using the method of manufactured solutions (MMS). Several MMS examples are developed and applied, progressing from simple one-dimensional configurations to problems involving higher dimensionality and solution-complexity. The manufactured solutions are implemented in two multi-physics hydrodynamics codes: CDP developed at Stanford University and FUEGO developed at Sandia National Laboratories. In addition to verifying the order-of-accuracy of the codes, the MMS problems help highlight certain robustness issues in existing variable-density flow-solvers. Strategies to overcome these issues are briefly discussed.
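
    The MMS workflow itself can be illustrated on a far simpler problem than low-Mach combustion: pick an exact solution, derive the source term it implies, feed that source to the discrete operator, and confirm the expected order of accuracy. A minimal 1D Poisson sketch (not the CDP or FUEGO setup):

```python
import numpy as np

# Manufactured solution u(x) = sin(pi x) for -u'' = f on (0, 1) with u(0) = u(1) = 0,
# which implies the source term f(x) = pi^2 sin(pi x).
u_exact = lambda x: np.sin(np.pi * x)
source  = lambda x: np.pi**2 * np.sin(np.pi * x)

def max_error(n):
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    # Standard second-order central-difference Laplacian on the interior nodes.
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, source(x[1:-1]))
    return np.max(np.abs(u - u_exact(x)))

errors = [max_error(n) for n in (16, 32, 64, 128)]
orders = [np.log2(e1 / e2) for e1, e2 in zip(errors, errors[1:])]
print(errors)
print(orders)   # should approach 2, the scheme's formal order of accuracy
```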

  3. ULTRA-SHARP solution of the Smith-Hutton problem

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1992-01-01

    Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First order methods showed significant artificial diffusion, whereas higher order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternate strategy of using non-artificially diffusive high order methods, while maintaining strictly monotonic transitions through the use of simple flux limited constraints. Limited third order upwinding is usually found to be the most cost effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.

  4. Turbopump Performance Improved by Evolutionary Algorithms

    NASA Technical Reports Server (NTRS)

    Oyama, Akira; Liou, Meng-Sing

    2002-01-01

    The development of design optimization technology for turbomachinery has been initiated using the multiobjective evolutionary algorithm under NASA's Intelligent Synthesis Environment and Revolutionary Aeropropulsion Concepts programs. As an alternative to the traditional gradient-based methods, evolutionary algorithms (EA's) are emergent design-optimization algorithms modeled after the mechanisms found in natural evolution. EA's search from multiple points, instead of moving from a single point. In addition, they require no derivatives or gradients of the objective function, leading to robustness and simplicity in coupling any evaluation codes. Parallel efficiency also becomes very high by using a simple master-slave concept for function evaluations, since such evaluations often consume the most CPU time, such as computational fluid dynamics. Application of EA's to multiobjective design problems is also straightforward because EA's maintain a population of design candidates in parallel. Because of these advantages, EA's are a unique and attractive approach to real-world design optimization problems.

  5. An Efficient, Non-iterative Method of Identifying the Cost-Effectiveness Frontier

    PubMed Central

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D.

    2015-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses, for each of which the efficient frontier must be found. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we additionally provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. PMID:25926282
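
    The authors provide R and Matlab scripts; the Python sketch below is only an illustration of the underlying relationship they exploit: every strategy that maximizes net monetary benefit for some willingness-to-pay value lies on the efficient frontier, so a sweep over willingness-to-pay values approximately recovers the frontier. It is not the authors' one-pass algorithm.

```python
# Illustrative sketch (not the authors' R/Matlab scripts): strategies that
# maximize net monetary benefit NMB = wtp*effect - cost for some
# willingness-to-pay (wtp) value lie on the cost-effectiveness frontier,
# so a sweep over wtp values approximately recovers the efficient set.
import numpy as np

def frontier_by_nmb(costs, effects, wtp_grid):
    costs, effects = np.asarray(costs, float), np.asarray(effects, float)
    frontier = set()
    for wtp in wtp_grid:
        nmb = wtp * effects - costs
        frontier.add(int(np.argmax(nmb)))    # best strategy at this wtp
    # note: strategies optimal only above the largest wtp in the grid are missed
    return sorted(frontier)

costs = [0, 100, 250, 400, 900]
effects = [0.0, 0.4, 0.9, 1.0, 1.1]
print(frontier_by_nmb(costs, effects, np.linspace(0, 2000, 2001)))
```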

  6. Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation

    PubMed Central

    Zhao, Tuo; Liu, Han

    2016-01-01

    We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
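
    For readers unfamiliar with shrinkage thresholding, the sketch below shows plain ISTA for an l1-penalized least-squares problem in Python; it only illustrates the proximal shrinkage step that PISTA/APISTA build on and omits the path-following scheme, nonconvex penalties, and coordinate descent subroutine described in the paper.

```python
# Plain iterative shrinkage-thresholding (ISTA) for a lasso-type problem:
#   min_x 0.5*||A x - b||^2 + lam*||x||_1
# Shown only to illustrate the shrinkage step underlying PISTA/APISTA.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.nonzero(ista(A, b, lam=1.0))[0])   # should roughly recover the first 5 indices
```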

  7. On the look-up tables for the critical heat flux in tubes (history and problems)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirillov, P.L.; Smogalev, I.P.

    1995-09-01

    The complexity of the critical heat flux (CHF) problem for boiling in channels is caused by the large number of variable factors and the variety of two-phase flows. The existence of several hundred correlations for the prediction of CHF demonstrates the unsatisfactory state of this problem. Phenomenological CHF models can provide only qualitative predictions of CHF, primarily in annular-dispersed flow. CHF look-up tables, which cover the results of numerous experiments, have received more recognition in the last 15 years. These tables are based on the statistical averaging of CHF values for each range of pressure, mass flux and quality. The CHF values for regions where no experimental data are available are obtained by extrapolation. The correction of these tables to account for the diameter effect is a complicated problem. There are ranges of conditions where simple correlations cannot produce reliable results; therefore, the diameter effect on CHF needs additional study. The modification of look-up table data for CHF in tubes to predict CHF in rod bundles must include a method that takes into account the nonuniformity of quality in a rod bundle cross section.

  8. Topological String Theory and Enumerative Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Y. S

    In this thesis we investigate several problems which have their roots in both topological string theory and enumerative geometry. In the former case, underlying theories are topological field theories, whereas the latter case is concerned with intersection theories on moduli spaces. A permeating theme in this thesis is to examine the close interplay between these two complementary fields of study. The main problems addressed are as follows: In considering the Hurwitz enumeration problem of branched covers of compact connected Riemann surfaces, we completely solve the problem in the case of simple Hurwitz numbers. In addition, utilizing the connection between Hurwitz numbers and Hodge integrals, we derive a generating function for the latter on the moduli space M̄_{g,2} of 2-pointed, genus-g Deligne-Mumford stable curves. We also investigate Givental's recent conjecture regarding semisimple Frobenius structures and Gromov-Witten invariants, both of which are closely related to topological field theories; we consider the case of a complex projective line P^1 as a specific example and verify his conjecture at low genera. In the last chapter, we demonstrate that certain topological open string amplitudes can be computed via relative stable morphisms in the algebraic category.

  9. A multiple process solution to the logical problem of language acquisition*

    PubMed Central

    MACWHINNEY, BRIAN

    2006-01-01

    Many researchers believe that there is a logical problem at the center of language acquisition theory. According to this analysis, the input to the learner is too inconsistent and incomplete to determine the acquisition of grammar. Moreover, when corrective feedback is provided, children tend to ignore it. As a result, language learning must rely on additional constraints from universal grammar. To solve this logical problem, theorists have proposed a series of constraints and parameterizations on the form of universal grammar. Plausible alternatives to these constraints include: conservatism, item-based learning, indirect negative evidence, competition, cue construction, and monitoring. Careful analysis of child language corpora has cast doubt on claims regarding the absence of positive exemplars. Using demonstrably available positive data, simple learning procedures can be formulated for each of the syntactic structures that have traditionally motivated invocation of the logical problem. Within the perspective of emergentist theory (MacWhinney, 2001), the operation of a set of mutually supportive processes is viewed as providing multiple buffering for developmental outcomes. However, the fact that some syntactic structures are more difficult to learn than others can be used to highlight areas of intense grammatical competition and processing load. PMID:15658750

  10. An improved exploratory search technique for pure integer linear programming problems

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1990-01-01

    The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
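
    A toy sketch of the core idea is given below, assuming a maximization problem with linear inequality constraints: start from a rounded point and repeatedly accept +/-1 moves on single variables that stay feasible and improve the objective. This illustrates only the +/-1 exploratory step, not the full Hooke-and-Jeeves-style procedure documented in the report.

```python
# Toy sketch: start from a rounded LP-relaxation point, then repeatedly try
# +/-1 moves on each variable, keeping feasible moves that improve the
# objective. Maximize c@x subject to A@x <= b, x >= 0 integer.
import itertools
import numpy as np

def feasible(x, A, b):
    return np.all(x >= 0) and np.all(A @ x <= b)

def exploratory_search(c, A, b, x_start):
    x = np.array(x_start, dtype=int)
    improved = True
    while improved:
        improved = False
        for i, step in itertools.product(range(len(x)), (+1, -1)):
            trial = x.copy()
            trial[i] += step
            if feasible(trial, A, b) and c @ trial > c @ x:
                x, improved = trial, True
    return x

# max 3x + 2y  s.t.  x + y <= 4,  x <= 3; rounded starting point (2, 1)
c = np.array([3, 2]); A = np.array([[1, 1], [1, 0]]); b = np.array([4, 3])
print(exploratory_search(c, A, b, x_start=[2, 1]))   # climbs to [3, 1]
```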

  11. Simple additive manufacturing of an osteoconductive ceramic using suspension melt extrusion.

    PubMed

    Slots, Casper; Jensen, Martin Bonde; Ditzel, Nicholas; Hedegaard, Martin A B; Borg, Søren Wiatr; Albrektsen, Ole; Thygesen, Torben; Kassem, Moustapha; Andersen, Morten Østergaard

    2017-02-01

    Craniofacial bone trauma is a leading reason for surgery at most hospitals. Large pieces of destroyed or resected bone are often replaced with non-resorbable and stock implants, and these are associated with a variety of problems. This paper explores the use of a novel fatty acid/calcium phosphate suspension melt for simple additive manufacturing of ceramic tricalcium phosphate implants. A wide variety of non-aqueous liquids were tested to determine the formulation of a storable 3D printable tricalcium phosphate suspension ink, and only fatty acid-based inks were found to work. A heated stearic acid-tricalcium phosphate suspension melt was then 3D printed, carbonized and sintered, yielding implants with controllable macroporosities. Their microstructure, compressive strength and chemical purity were analyzed with electron microscopy, mechanical testing and Raman spectroscopy, respectively. Mesenchymal stem cell culture was used to assess their osteoconductivity as defined by collagen deposition, alkaline phosphatase secretion and de-novo mineralization. After a rapid sintering process, the implants retained their pre-sintering shape with open pores. They possessed clinically relevant mechanical strength and were chemically pure. They supported adhesion of mesenchymal stem cells, and these were able to deposit collagen onto the implants, secrete alkaline phosphatase and further mineralize the ceramic. The tricalcium phosphate/fatty acid ink described here, and its 3D printing, may be sufficiently simple and effective to enable rapid, on-demand, in-hospital fabrication of individualized ceramic implants that clinicians can use to treat bone trauma. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  12. Efficient solvers for coupled models in respiratory mechanics.

    PubMed

    Verdugo, Francesc; Roth, Christian J; Yoshihara, Lena; Wall, Wolfgang A

    2017-02-01

    We present efficient preconditioners for one of the most physiologically relevant pulmonary models currently available. Our underlying motivation is to enable the efficient simulation of such a lung model on high-performance computing platforms in order to assess mechanical ventilation strategies and contribute to the design of more protective patient-specific ventilation treatments. The system of linear equations to be solved using the proposed preconditioners is essentially the monolithic system arising in fluid-structure interaction (FSI) extended by additional algebraic constraints. The introduction of these constraints leads to a saddle point problem that cannot be solved with the usual FSI preconditioners available in the literature. The key ingredient in this work is to use the idea of the semi-implicit method for pressure-linked equations (SIMPLE) to remove the saddle point structure, resulting in a standard FSI problem that can be treated with available techniques. The numerical examples show that the resulting preconditioners approach the optimal performance of multigrid methods, even though the lung model is a complex multiphysics problem. Moreover, the preconditioners are robust enough to deal with physiologically relevant simulations involving complex real-world patient-specific lung geometries. The same approach is applicable to other challenging biomedical applications where coupling between flow and tissue deformations is modeled with additional algebraic constraints. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Problematic video game play in a college sample and its relationship to time management skills and attention-deficit/hyperactivity disorder symptomology.

    PubMed

    Tolchinsky, Anatol; Jefferson, Stephen D

    2011-09-01

    Although numerous benefits have been uncovered related to moderate video game play, research suggests that problematic video game playing behaviors can cause problems in the lives of some video game players. To further our understanding of this phenomenon, we investigated how problematic video game playing symptoms are related to an assortment of variables, including time management skills and attention-deficit/hyperactivity disorder (ADHD) symptoms. Additionally, we tested several simple mediation/moderation models to better explain previous theories that posit simple correlations between these variables. As expected, the results from the present study indicated that time management skills appeared to mediate the relationship between ADHD symptoms and problematic play endorsement (though only for men). Unexpectedly, we found that ADHD symptoms appeared to mediate the relation between time management skills and problematic play behaviors; however, this was only found for women in our sample. Finally, future implications are discussed.

  14. Recoiling from a Kick in the Head-On Case

    NASA Technical Reports Server (NTRS)

    Choi, Dae-Il; Kelly, Bernard J.; Boggs, William D.; Baker, John G.; Centrella, Joan; Van Meter, James

    2007-01-01

    Recoil "kicks" induced by gravitational radiation are expected in the inspiral and merger of black holes. Recently the numerical relativity community has begun to measure the significant kicks found when both unequal masses and spins are considered. Because understanding the cause and magnitude of each component of this kick may be complicated in inspiral simulations, we consider these effects in the context of a simple test problem. We study recoils from collisions of binaries with initially head-on trajectories, starting with the simplest case of equal masses with no spin; adding spin and varying the mass ratio, both separately and jointly. We find spin-induced recoils to be significant even in head-on configurations. Additionally, it appears that the scaling of transverse kicks with spins is consistent with post-Newtonian (PN) theory, even though the kick is generated in the nonlinear merger interaction, where PN theory should not apply. This suggests that a simple heuristic description might be effective in the estimation of spin-kicks.

  15. Curcumin-Eudragit® E PO solid dispersion: A simple and potent method to solve the problems of curcumin.

    PubMed

    Li, Jinglei; Lee, Il Woo; Shin, Gye Hwa; Chen, Xiguang; Park, Hyun Jin

    2015-08-01

    Using a simple solution mixing method, curcumin was dispersed in the matrix of Eudragit® E PO polymer. The water solubility of curcumin in the curcumin-Eudragit® E PO solid dispersion (Cur@EPO) was greatly increased. Based on the results of several tests, curcumin was shown to exist in the polymer matrix in an amorphous state. The interaction between curcumin and the polymer was investigated through Fourier transform infrared spectroscopy and ^1H NMR, which implied that the OH group of curcumin and the carbonyl group of the polymer are involved in hydrogen bond formation. Cur@EPO also protected curcumin, as verified by pH challenge and UV irradiation tests. The pH value influenced the curcumin release profile, which showed a sustained release pattern. Additionally, an in vitro transdermal test was conducted to assess the potential of Cur@EPO as a vehicle to deliver curcumin through this alternative administration route. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Simple online recognition of optical data strings based on conservative optical logic

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John; Shamir, Joseph; Zavalin, Andrey I.; Silberman, Enrique; Qian, Lei; Vikram, Chandra S.

    2006-06-01

    Optical packet switching relies on the ability of a system to recognize header information on an optical signal. Unless the headers are very short with large Hamming distances, optical correlation fails and optical logic becomes attractive because it can handle long headers with Hamming distances as low as 1. Unfortunately, the only optical logic gates fast enough to keep up with current communication speeds involve semiconductor optical amplifiers and do not lend themselves to the incorporation of large numbers of elements for header recognition and would consume a lot of power as well. The ideal system would operate at any bandwidth with no power consumption. We describe how to design and build such a system by using passive optical logic. This too leads to practical problems that we discuss. We show theoretically various ways to use optical interferometric logic for reliable recognition of long data streams such as headers in optical communication. In addition, we demonstrate one particularly simple experimental approach using interferometric coinc gates.

  17. Off-axis holographic laser speckle contrast imaging of blood vessels in tissues

    NASA Astrophysics Data System (ADS)

    Abdurashitov, Arkady; Bragina, Olga; Sindeeva, Olga; Sergey, Sindeev; Semyachkina-Glushkovskaya, Oxana V.; Tuchin, Valery V.

    2017-09-01

    Laser speckle contrast imaging (LSCI) has become one of the most common tools for functional imaging in tissues. Its incomplete theoretical description and the sophisticated interpretation of measurement results are outweighed by low-cost and simple hardware, speed, consistent results, and repeatability. Besides the relatively small measuring volume, with a probing depth of around 700 μm for visible illumination, there is no depth selectivity in the conventional LSCI configuration; furthermore, in the case of a high-NA objective, the actual penetration depth of light in tissue is greater than the depth of field (DOF) of the imaging system. Thus, information about these out-of-focus regions persists in the recorded frames but cannot be retrieved owing to the intensity-based registration method. We propose a simple modification of the LSCI system based on off-axis holography to introduce an after-registration refocusing ability, overcoming both the depth-selectivity and DOF problems and offering the potential to produce a cross-sectional view of the specimen.
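
    The holographic refocusing step is beyond a short example, but the basic LSCI quantity is easy to illustrate: the sketch below computes the local speckle contrast K = sigma/mean over a sliding window, assuming NumPy and SciPy are available; the synthetic image and window size are placeholders.

```python
# The basic quantity behind LSCI: local speckle contrast K = sigma / mean,
# computed over a small sliding window of the raw speckle image. The off-axis
# holographic refocusing step is not shown; image and window size here are
# placeholders.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, window=7):
    img = image.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)
    return np.sqrt(var) / (mean + 1e-12)      # lower K ~ more blurring (faster flow)

raw = np.random.default_rng(1).poisson(50, size=(256, 256))
K = speckle_contrast(raw)
print(K.mean())
```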

  18. A MATLAB-based finite-element visualization of quantum reactive scattering. I. Collinear atom-diatom reactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warehime, Mick; Alexander, Millard H., E-mail: mha@umd.edu

    We restate the application of the finite element method to collinear triatomic reactive scattering dynamics with a novel treatment of the scattering boundary conditions. The method provides directly the reactive scattering wave function and, subsequently, the probability current density field. Visualizing these quantities provides additional insight into the quantum dynamics of simple chemical reactions beyond simplistic one-dimensional models. Application is made here to a symmetric reaction (H+H_2), a heavy-light-light reaction (F+H_2), and a heavy-light-heavy reaction (F+HCl). To accompany this article, we have written a MATLAB code which is fast, simple enough to be accessible to a wide audience, as well as generally applicable to any problem that can be mapped onto a collinear atom-diatom reaction. The code and user's manual are available for download from http://www2.chem.umd.edu/groups/alexander/FEM.

  19. Decision support system of e-book provider selection for library using Simple Additive Weighting

    NASA Astrophysics Data System (ADS)

    Ciptayani, P. I.; Dewi, K. C.

    2018-01-01

    Each library has its own criteria, and its own weighting of those criteria, when choosing an e-book provider. The large number of providers and the differing importance levels of each criterion make determining an e-book provider a complex problem that takes considerable time in decision making. The aim of this study was to implement a decision support system (DSS) to assist the library in selecting the best e-book provider based on its preferences. A DSS works by comparing the importance of each criterion and the condition of each alternative decision. Simple Additive Weighting (SAW) is a DSS method that is quite simple, fast and widely used. This study used 9 criteria and 18 providers to demonstrate how SAW works. With the DSS, decision-making time can be shortened and the calculation results can be more accurate than manual calculations.
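
    A minimal sketch of the SAW calculation itself (normalize each criterion column, then rank alternatives by the weighted sum) is given below; the weights and scores are invented for illustration and are not the study's 9 criteria and 18 providers.

```python
# Minimal Simple Additive Weighting (SAW) sketch: normalize each criterion
# column (benefit criteria by column max, cost criteria by column min), then
# rank alternatives by the weighted sum. Weights and scores are made up.
import numpy as np

def saw_rank(scores, weights, is_cost):
    scores = np.asarray(scores, float)
    norm = np.empty_like(scores)
    for j in range(scores.shape[1]):
        col = scores[:, j]
        norm[:, j] = col.min() / col if is_cost[j] else col / col.max()
    totals = norm @ np.asarray(weights, float)
    return np.argsort(-totals), totals     # best alternative first

# columns: price (cost criterion), collection size, platform usability (benefits)
scores = [[1200, 80, 7],
          [ 900, 60, 9],
          [1500, 95, 6]]
order, totals = saw_rank(scores, weights=[0.5, 0.3, 0.2], is_cost=[True, False, False])
print(order, totals.round(3))
```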

  20. Universal resilience patterns in cascading load model: More capacity is not always better

    NASA Astrophysics Data System (ADS)

    Wang, Jianwei; Wang, Xue; Cai, Lin; Ni, Chengzhang; Xie, Wei; Xu, Bo

    We study the problem of universal resilience patterns in complex networks against cascading failures. We revise the classical betweenness method and overcome its limitation in quantifying the load in cascading models. Considering that the load generated by all nodes should equal the load transported by all edges in the whole network, we propose a new method to quantify the load on an edge and construct a simple cascading model. By attacking the edge with the highest load, we show that, if the flow between two nodes is transported along the shortest paths between them, then the resilience of some networks against cascading failures actually decreases as the capacity of every edge is increased, i.e., more capacity is not always better. We also observe an abnormal fluctuation of the additional load that exceeds the capacity of each edge. Using a simple graph, we analyze the propagation of cascading failures step by step and give a reasonable explanation of this abnormal fluctuation of the cascading dynamics.
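
    As a hedged illustration of the general mechanism (not the authors' revised load definition), the sketch below runs a standard capacity-based cascade in Python, assuming networkx is available: edge load is taken from shortest-path betweenness, capacity is proportional to the initial load, the most loaded edge is attacked, and overloaded edges are removed until the cascade stops.

```python
# Generic capacity-based cascade sketch: edge load from shortest-path flow
# (edge betweenness), capacity = (1 + alpha) * initial load, attack the most
# loaded edge, then iteratively remove overloaded edges. Illustrates the
# general mechanism only, not the authors' revised load definition.
import networkx as nx

def cascade_after_attack(G, alpha=0.2):
    load = nx.edge_betweenness_centrality(G)
    capacity = {e: (1 + alpha) * l for e, l in load.items()}
    H = G.copy()
    H.remove_edge(*max(load, key=load.get))          # attack the most loaded edge
    while True:
        current = nx.edge_betweenness_centrality(H)
        overloaded = [e for e, l in current.items()
                      if l > capacity.get(e, capacity.get((e[1], e[0]), float("inf")))]
        if not overloaded:
            break
        H.remove_edges_from(overloaded)
    return H.number_of_edges() / G.number_of_edges()  # surviving fraction of edges

print(cascade_after_attack(nx.barabasi_albert_graph(200, 2, seed=0)))
```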

  1. Temporal Constraint Reasoning With Preferences

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca

    2001-01-01

    A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.

  2. Network reconstruction via graph blending

    NASA Astrophysics Data System (ADS)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.

  3. A life history approach to delineating how harsh environments and hawk temperament traits differentially shape children's problem-solving skills.

    PubMed

    Suor, Jennifer H; Sturge-Apple, Melissa L; Davies, Patrick T; Cicchetti, Dante

    2017-08-01

    Harsh environments are known to predict deficits in children's cognitive abilities. Life history theory approaches challenge this interpretation, proposing stressed children's cognition becomes specialized to solve problems in fitness-enhancing ways. The goal of this study was to examine associations between early environmental harshness and children's problem-solving outcomes across tasks varying in ecological relevance. In addition, we utilize an evolutionary model of temperament toward further specifying whether hawk temperament traits moderate these associations. Two hundred and one mother-child dyads participated in a prospective multimethod study when children were 2 and 4 years old. At age 2, environmental harshness was assessed via maternal report of earned income and observations of maternal disengagement during a parent-child interaction task. Children's hawk temperament traits were assessed from a series of unfamiliar episodes. At age 4, children's reward-oriented and visual problem-solving were measured. Path analyses revealed early environmental harshness and children's hawk temperament traits predicted worse visual problem-solving. Results showed a significant two-way interaction between children's hawk temperament traits and environmental harshness on reward-oriented problem-solving. Simple slope analyses revealed the effect of environmental harshness on reward-oriented problem-solving was specific to children with higher levels of hawk traits. Results suggest early experiences of environmental harshness and child hawk temperament traits shape children's trajectories of problem-solving in an environment-fitting manner. © 2017 Association for Child and Adolescent Mental Health.

  4. Rapid Microfluidic Mixers Utilizing Dispersion Effect and Interactively Time-Pulsed Injection

    NASA Astrophysics Data System (ADS)

    Leong, Jik-Chang; Tsai, Chien-Hsiung; Chang, Chin-Lung; Lin, Chiu-Feng; Fu, Lung-Ming

    2007-08-01

    In this paper, we present a novel active microfluidic mixer utilizing a dispersion effect in an expansion chamber and applying interactively time-pulsed driving voltages to the respective inlet fluid flows to induce electroosmotic flow velocity variations for developing a rapid mixing effect in a microchannel. Without using any additional equipment to induce flow perturbations, only a single high-voltage power source is required for simultaneously driving and mixing sample fluids, which results in a simple and low-cost system for mixing. The effects of the applied main electrical field, interactive frequency, and expansion ratio on the mixing performance are thoroughly examined experimentally and numerically. The mixing ratio can be as high as 95% within a mixing length of 3000 μm downstream from the secondary T-form when a driving electric field strength of 250 V/cm, a periodic switching frequency of 5 Hz, and the expansion ratio M=1:10 are applied. In addition, the optimization of the driving electric field, switching frequency, expansion ratio, expansion entry length, and expansion chamber length for achieving a maximum mixing ratio is also discussed in this study. The novel method proposed in this study can be used for solving the mixing problem in the field of micro-total-analysis systems in a simple manner.

  5. Estimation of ion competition via correlated responsivity offset in linear ion trap mass spectrometry analysis: theory and practical use in the analysis of cyanobacterial hepatotoxin microcystin-LR in extracts of food additives.

    PubMed

    Urban, Jan; Hrouzek, Pavel; Stys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of the a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations.

  6. Ultrasensitive, simple and solvent-free micro-assay for determining sulphite preservatives (E220-228) in foods by HS-SDME and UV-vis micro-spectrophotometry.

    PubMed

    Gómez-Otero, E; Costas, M; Lavilla, I; Bendicho, C

    2014-03-01

    A new method based on headspace single-drop microextraction in combination with UV-vis micro-spectrophotometry has been developed for the ultrasensitive determination of banned sulphite preservatives (E220-228) in fruits and vegetables. Sample acidification was used for SO2 generation, which is collected onto a 5,5'-dithiobis-(2-nitrobenzoic acid) microdrop for spectrophotometric measurement. A careful study of this reaction was necessary, including conditions for SO2 generation from different sulphating salts, drop pH, 5,5'-dithiobis-(2-nitrobenzoic acid) concentration and potential interference effects. Variables influencing mass transfer (stirring, sample volume and addition of salt) and microextraction time were also studied. A simple sulphite extraction was carried out, and problems caused by oxidation during the extraction process were addressed. A high enrichment factor (380) allows the determination of low levels of free SO2 in fruits and vegetables (limit of detection 0.06 μg g(-1), limit of quantification 0.2 μg g(-1)) with an adequate precision (repeatability, relative standard deviation 5 %). In addition, the sulphiting process was studied through the monitoring of residual SO2 in a vegetal sample, thus showing the importance of a sensitive tool for SO2 detection at low levels.

  7. Estimation of Ion Competition via Correlated Responsivity Offset in Linear Ion Trap Mass Spectrometry Analysis: Theory and Practical Use in the Analysis of Cyanobacterial Hepatotoxin Microcystin-LR in Extracts of Food Additives

    PubMed Central

    Hrouzek, Pavel; Štys, Dalibor; Martens, Harald

    2013-01-01

    Responsivity is a conversion qualification of a measurement device given by the functional dependence between the input and output quantities. A concentration-response-dependent calibration curve represents the simplest experiment for the measurement of responsivity in mass spectrometry. The cyanobacterial hepatotoxin microcystin-LR content in complex biological matrices of food additives was chosen as a model example of a typical problem. The calibration curves for pure microcystin and its mixtures with extracts of green alga and fish meat were reconstructed from the series of measurements. A novel approach for the quantitative estimation of ion competition in ESI is proposed in this paper. We define the correlated responsivity offset in the intensity values using the approximation of minimal correlation given by the matrix to the target mass values of the analyte. The estimation of the matrix influence enables the approximation of the position of the a priori unknown responsivity and was easily evaluated using a simple algorithm. The method itself is directly derived from the basic attributes of the theory of measurements. There is sufficient agreement between the theoretical and experimental values. However, some theoretical issues are discussed to avoid misinterpretations and excessive expectations. PMID:23586036

  8. New Results from Fermi-LAT and Their Implications for the Nature of Dark Matter and the Origin of Cosmic Rays

    NASA Technical Reports Server (NTRS)

    Moiseev, Alexander

    2009-01-01

    The measured spectrum is compatible with a power law within our current systematic errors. The spectral index (-3.04) is harder than expected from previous experiments and simple theoretical considerations. The "pre-Fermi" diffusive model requires a harder electron injection spectrum (by 0.12) to fit the Fermi data, but this is inconsistent with the positron excess reported by PAMELA if it extends to higher energies. An additional component of electron flux from local source(s) may solve the problem; its origin, astrophysical or exotic, is still unclear. These results also provide a valuable contribution to the calculation of the inverse-Compton (IC) component of diffuse gamma radiation.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplan, David E.; Krnjaic, Gordan Z.; Rehermann, Keith R.

    We present a simple UV completion of Atomic Dark Matter (aDM) in which heavy right-handed neutrinos decay to induce both dark and lepton number densities. This model addresses several outstanding cosmological problems: the matter/anti-matter asymmetry, the dark matter abundance, the number of light degrees of freedom in the early universe, and the smoothing of small-scale structure. Additionally, this realization of aDM may reconcile the CoGeNT excess with recently published null results and predicts a signal in the CRESST Oxygen band. We also find that, due to unscreened long-range interactions, the residual unrecombined dark ions settle into a diffuse isothermal halo.

  10. Automating Embedded Analysis Capabilities and Managing Software Complexity in Multiphysics Simulation, Part I: Template-Based Generic Programming

    DOE PAGES

    Pawlowski, Roger P.; Phipps, Eric T.; Salinger, Andrew G.

    2012-01-01

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
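
    The Trilinos approach is built on C++ templates, but the underlying idea, propagating extra quantities through an otherwise unchanged calculation via operator overloading, can be illustrated in a few lines of Python with a dual number that carries a derivative alongside each value; this is an analogy, not the Trilinos implementation.

```python
# The paper's approach uses C++ templates and operator overloading so one
# generic residual routine can also produce derivatives and other embedded
# quantities. As a language-agnostic illustration of the same idea, a dual
# number propagates a derivative through an unchanged calculation.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def residual(x):                 # "generic" model code, written once
    return x * x * x + 2 * x + 1

x = Dual(3.0, 1.0)               # seed derivative dx/dx = 1
r = residual(x)
print(r.value, r.deriv)          # 34.0 and dr/dx = 3*x^2 + 2 = 29.0
```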

  11. The Container Problem in Bubble-Sort Graphs

    NASA Astrophysics Data System (ADS)

    Suzuki, Yasuto; Kaneko, Keiichi

    Bubble-sort graphs are variants of Cayley graphs. A bubble-sort graph is suitable as a topology for massively parallel systems because of its simple and regular structure. Therefore, in this study, we focus on n-bubble-sort graphs and propose an algorithm to obtain n-1 disjoint paths between two arbitrary nodes in time bounded by a polynomial in n, the degree of the graph plus one. We estimate the time complexity of the algorithm and the sum of the path lengths after proving the correctness of the algorithm. In addition, we report the results of computer experiments evaluating the average performance of the algorithm.

  12. Physical data measurements and mathematical modelling of simple gas bubble experiments in glass melts

    NASA Technical Reports Server (NTRS)

    Weinberg, Michael C.

    1986-01-01

    In this work consideration is given to the problem of the extraction of physical data information from gas bubble dissolution and growth measurements. The discussion is limited to the analysis of the simplest experimental systems consisting of a single, one component gas bubble in a glassmelt. It is observed that if the glassmelt is highly under- (super-) saturated, then surface tension effects may be ignored, simplifying the task of extracting gas diffusivity values from the measurements. If, in addition, the bubble rise velocity is very small (or very large) the ease of obtaining physical property data is enhanced. Illustrations are given for typical cases.

  13. The Physics Workbook: A Needed Instructional Device.

    ERIC Educational Resources Information Center

    Brekke, Stewart E.

    2003-01-01

    Points out the importance of problem solving as a fundamental skill and how students struggle with problem solving in physics courses. Describes a workbook developed as a solution to students' struggles that features simple exercises and advanced problem solving. (Contains 12 references.) (Author/YDS)

  14. Simple prostatectomy

    MedlinePlus

    ... if you have: Problems emptying your bladder (urinary retention) Frequent urinary tract infections Frequent bleeding from the ... to internal organs Erection problems (impotence) Loss of sperm fertility ( infertility ) Passing semen back up into the ...

  15. Using a Modified Simple Pendulum to Find the Variations in the Value of “g”

    NASA Astrophysics Data System (ADS)

    Arnold, Jonathan P.; Efthimiou, C.

    2007-05-01

    The simple pendulum is one of the best-known and most studied systems in Newtonian mechanics. It also provides one of the most elegant and simple devices for measuring the acceleration of gravity at any location. In this presentation we revisit the problem of measuring the acceleration of gravity using a simple pendulum and present a modification to the standard technique that increases the accuracy of the measurement.
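
    The presentation's specific modification is not described in the record, but the standard estimate it improves upon is simple to state: g = 4*pi^2*L/T^2, with the period T obtained by timing many oscillations. The sketch below uses made-up timing data.

```python
# Standard small-angle pendulum estimate g = 4*pi^2 * L / T^2, where T is the
# period obtained by timing many oscillations. Timing data below are made up;
# the presentation's specific modification is not reproduced here.
import math

def estimate_g(length_m, total_time_s, n_oscillations):
    period = total_time_s / n_oscillations        # averaging reduces timing error
    return 4 * math.pi ** 2 * length_m / period ** 2

print(round(estimate_g(length_m=1.000, total_time_s=100.4, n_oscillations=50), 3))
```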

  16. The trivial function of sleep.

    PubMed

    Rial, Ruben Victor; Nicolau, María Cristina; Gamundí, Antoni; Akaârir, Mourad; Aparicio, Sara; Garau, Celia; Tejada, Silvia; Roca, Catalina; Gené, Lluis; Moranta, David; Esteban, Susana

    2007-08-01

    Rest in poikilothermic animals is an adaptation of the organism to adjust to the geophysical cycles, a doubtless valuable function for all animals. In this review, we argue that the function of sleep could be trivial for mammals and birds because sleep does not provide additional advantages over simple rest. This conclusion can be reached by using the null hypothesis and parsimony arguments. First, we develop some theoretical and empirical considerations supporting the absence of specific effects after sleep deprivation. Then, we question the adaptive value of sleep traits by using non-coding DNA as a metaphor that shows that the complexity in the design is not a definitive proof of adaptation. We then propose that few, if any, phenotypic selectable traits do exist in sleep. Instead, the selection of efficient waking has been the major determinant of the most significant aspects in sleep structure. In addition, we suggest that the regulation of sleep is only a mechanism to enforce rest, a state that was challenged after the development of homeothermy. As a general conclusion, there is no direct answer to the problem of why we sleep; only an explanation of why such a complex set of mechanisms is used to perform what seems to be a simple function. This explanation should be reached by following the evolution of wakefulness rather than that of sleep. Sleep could have additional functions secondarily added to the trivial one, although, in this case, the necessity and sufficiency of these sleep functions should be demonstrated.

  17. Prediction of Sound Waves Propagating Through a Nozzle Without/With a Shock Wave Using the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The benchmark problems in Category 1 (Internal Propagation) of the third Computational Aeroacoustics (CAA) Workshop sponsored by NASA Glenn Research Center are solved using the space-time conservation element and solution element (CE/SE) method. The first problem addresses the propagation of sound waves through a nearly choked transonic nozzle. The second concerns shock-sound interaction in a supersonic nozzle. A quasi-one-dimensional CE/SE Euler solver for a nonuniform mesh is developed and employed to solve both problems. Numerical solutions are compared with the analytical solution for both problems. It is demonstrated that the CE/SE method is capable of solving aeroacoustic problems with or without shock waves in a simple way. Furthermore, the simple nonreflecting boundary condition used in the CE/SE method, which is not based on characteristic theory, works very well.

  18. Rebecca Erikson – Solving Problems with Love for Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erikson, Rebecca

    Rebecca Erikson’s love for science began at a young age. Today, she’s a senior scientist at PNNL trying to solve problems that address national security concerns. Through one project, she developed a sleek, simple and inexpensive way to turn a cellphone into a high-powered, high-quality microscope that helps authorities determine if white powder that falls from an envelope is anthrax or something simple like baby powder. Listen as Rebecca describes her work in this Energy Department video.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrada, J.J.; Osborne-Lee, I.W.; Grizzaffi, P.A.

    Expert systems are known to be useful in capturing expertise and applying knowledge to chemical engineering problems such as diagnosis, process control, process simulation, and process advisory. However, expert system applications are traditionally limited to knowledge domains that are heuristic and involve only simple mathematics. Neural networks, on the other hand, represent an emerging technology capable of rapid recognition of patterned behavior without regard to mathematical complexity. Although useful in problem identification, neural networks are not very efficient in providing in-depth solutions and typically do not promote full understanding of the problem or the reasoning behind its solutions. Hence, applications of neural networks have certain limitations. This paper explores the potential for expanding the scope of chemical engineering areas where neural networks might be utilized by incorporating expert systems and neural networks into the same application, a process called hybridization. In addition, hybrid applications are compared with those using more traditional approaches, the results of the different applications are analyzed, and the feasibility of converting the preliminary prototypes described herein into useful final products is evaluated. 12 refs., 8 figs.

  20. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.

  1. Finite-horizon differential games for missile-target interception system using adaptive dynamic programming with input constraints

    NASA Astrophysics Data System (ADS)

    Sun, Jingliang; Liu, Chunsheng

    2018-01-01

    In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via adaptive dynamic programming technique. Besides, a suitable non-quadratic functional is utilised to encode the control constraints into a differential game problem. The single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the estimation weight error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated by using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.

  2. CamOptimus: a tool for exploiting complex adaptive evolution to optimize experiments and processes in biotechnology.

    PubMed

    Cankorur-Cetinkaya, Ayca; Dias, Joao M L; Kludas, Jana; Slater, Nigel K H; Rousu, Juho; Oliver, Stephen G; Dikicioglu, Duygu

    2017-06-01

    Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257).

  3. Differences in Memory Functioning between Children with Attention-Deficit/Hyperactivity Disorder and/or Focal Epilepsy

    PubMed Central

    Lee, Sylvia E.; Kibby, Michelle Y.; Cohen, Morris J.; Stanford, Lisa; Park, Yong; Strickland, Suzanne

    2016-01-01

    Prior research has shown that attention-deficit/hyperactivity disorder (ADHD) and epilepsy are frequently comorbid and that both disorders are associated with various attention and memory problems. Nonetheless, limited research has been conducted comparing the two disorders in one sample to determine unique versus shared deficits. Hence, we investigated differences in working memory and short-term and delayed recall between children with ADHD, focal epilepsy of mixed foci, comorbid ADHD/epilepsy and controls. Participants were compared on the Core subtests and the Picture Locations subtest of the Children’s Memory Scale (CMS). Results indicated that children with ADHD displayed intact verbal working memory and long-term memory (LTM), as well as intact performance on most aspects of short-term memory (STM). They performed worse than controls on Numbers Forward and Picture Locations, suggesting problems with focused attention and simple span for visual-spatial material. Conversely, children with epilepsy displayed poor focused attention and STM regardless of modality assessed, which affected encoding into LTM. The only loss over time was found for passages (Stories). Working memory was intact. Children with comorbid ADHD/epilepsy displayed focused attention and STM/LTM problems consistent with both disorders, having the lowest scores across the four groups. Hence, focused attention and visual-spatial span appear to be affected in both disorders, whereas additional STM/encoding problems are specific to epilepsy. Children with comorbid ADHD/epilepsy have deficits consistent with both disorders, with slight additive effects. This study suggests that attention and memory testing should be a regular part of the evaluation of children with epilepsy and ADHD. PMID:26156331

  4. Differences in memory functioning between children with attention-deficit/hyperactivity disorder and/or focal epilepsy.

    PubMed

    Lee, Sylvia E; Kibby, Michelle Y; Cohen, Morris J; Stanford, Lisa; Park, Yong; Strickland, Suzanne

    2016-01-01

    Prior research has shown that attention-deficit/hyperactivity disorder (ADHD) and epilepsy are frequently comorbid and that both disorders are associated with various attention and memory problems. Nonetheless, limited research has been conducted comparing the two disorders in one sample to determine unique versus shared deficits. Hence, we investigated differences in working memory (WM) and short-term and delayed recall between children with ADHD, focal epilepsy of mixed foci, comorbid ADHD/epilepsy and controls. Participants were compared on the Core subtests and the Picture Locations subtest of the Children's Memory Scale (CMS). Results indicated that children with ADHD displayed intact verbal WM and long-term memory (LTM), as well as intact performance on most aspects of short-term memory (STM). They performed worse than controls on Numbers Forward and Picture Locations, suggesting problems with focused attention and simple span for visual-spatial material. Conversely, children with epilepsy displayed poor focused attention and STM regardless of the modality assessed, which affected encoding into LTM. The only loss over time was found for passages (Stories). WM was intact. Children with comorbid ADHD/epilepsy displayed focused attention and STM/LTM problems consistent with both disorders, having the lowest scores across the four groups. Hence, focused attention and visual-spatial span appear to be affected in both disorders, whereas additional STM/encoding problems are specific to epilepsy. Children with comorbid ADHD/epilepsy have deficits consistent with both disorders, with slight additive effects. This study suggests that attention and memory testing should be a regular part of the evaluation of children with epilepsy and ADHD.

  5. Using Programmable Calculators to Solve Electrostatics Problems.

    ERIC Educational Resources Information Center

    Yerian, Stephen C.; Denker, Dennis A.

    1985-01-01

    Provides a simple routine which allows first-year physics students to use programmable calculators to solve otherwise complex electrostatic problems. These problems involve finding electrostatic potential and electric field on the axis of a uniformly charged ring. Modest programing skills are required of students. (DH)
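
    The calculator routine itself is not reproduced in the record; the sketch below simply evaluates the closed-form on-axis results for a uniformly charged ring, V(z) = kQ/sqrt(z^2 + R^2) and E_z(z) = kQz/(z^2 + R^2)^(3/2), with example values.

```python
# Electrostatic potential and axial field of a uniformly charged ring of
# radius R and total charge Q, evaluated on the ring's axis:
#   V(z)   = k*Q / sqrt(z^2 + R^2)
#   E_z(z) = k*Q*z / (z^2 + R^2)**1.5
# A sketch of the formulas behind the calculator routine; values are examples.
import math

K = 8.9875517923e9          # Coulomb constant, N*m^2/C^2

def ring_on_axis(Q, R, z):
    s = z ** 2 + R ** 2
    return K * Q / math.sqrt(s), K * Q * z / s ** 1.5

V, Ez = ring_on_axis(Q=2e-9, R=0.05, z=0.10)
print(f"V = {V:.2f} V, Ez = {Ez:.2f} V/m")
```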

  6. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  7. Exploring the relationship between math anxiety and gender through implicit measurement

    PubMed Central

    Rubinsten, Orly; Bialik, Noam; Solar, Yael

    2012-01-01

    Math anxiety, defined as a negative affective response to mathematics, is suggested as a strong antecedent for the low visibility of women in the science and engineering workforce. However, the assumption of gender differences in math anxiety is still being studied and results are inconclusive, probably due to the use of explicit measures such as direct questionnaires. Thus, our primary objective was to investigate the effects of math anxiety on numerical processing in males and females by using a novel affective priming task as an indirect measure. Specifically, university students (23 males and 30 females) completed a priming task in which an arithmetic equation was preceded by one of four types of priming words (positive, neutral, negative, or related to mathematics). Participants were required to indicate whether the equation (simple math facts based on addition, subtraction, multiplication, or division) was true or false. People are typically found to respond to target stimuli more rapidly after presentation of an affectively related prime than after an affectively unrelated one. In the current study, shorter response latencies for positive as compared to negative affective primes were found in the male group. An affective priming effect was found in the female group as well, but with a reversed pattern. That is, significantly shorter response latencies were observed in the female group for negative as compared to positive targets. That is, for females, negative affective primes act as affectively related to simple arithmetic problems. In contrast, males associated positive affect with simple arithmetic. In addition, only females with lower or insignificant negative affect toward arithmetic study at faculties of mathematics and science. We discuss the advantages of examining pure anxiety factors with implicit measures which are free of response factors. In addition it is suggested that environmental factors may enhance the association between math achievements and math anxiety in females. PMID:23087633

  8. Exploring the relationship between math anxiety and gender through implicit measurement.

    PubMed

    Rubinsten, Orly; Bialik, Noam; Solar, Yael

    2012-01-01

    Math anxiety, defined as a negative affective response to mathematics, is suggested as a strong antecedent for the low visibility of women in the science and engineering workforce. However, the assumption of gender differences in math anxiety is still being studied and results are inconclusive, probably due to the use of explicit measures such as direct questionnaires. Thus, our primary objective was to investigate the effects of math anxiety on numerical processing in males and females by using a novel affective priming task as an indirect measure. Specifically, university students (23 males and 30 females) completed a priming task in which an arithmetic equation was preceded by one of four types of priming words (positive, neutral, negative, or related to mathematics). Participants were required to indicate whether the equation (simple math facts based on addition, subtraction, multiplication, or division) was true or false. People are typically found to respond to target stimuli more rapidly after presentation of an affectively related prime than after an affectively unrelated one. In the current study, shorter response latencies for positive as compared to negative affective primes were found in the male group. An affective priming effect was found in the female group as well, but with a reversed pattern. That is, significantly shorter response latencies were observed in the female group for negative as compared to positive targets. That is, for females, negative affective primes act as affectively related to simple arithmetic problems. In contrast, males associated positive affect with simple arithmetic. In addition, only females with lower or insignificant negative affect toward arithmetic study at faculties of mathematics and science. We discuss the advantages of examining pure anxiety factors with implicit measures which are free of response factors. In addition it is suggested that environmental factors may enhance the association between math achievements and math anxiety in females.

  9. Durham Smith Vest-Over-Pant Technique: Simple Procedure for a Complex Problem (Post-Hypospadias Repair Fistula).

    PubMed

    Gite, Venkat A; Patil, Saurabh R; Bote, Sachin M; Siddiqui, Mohd Ayub Karam Nabi; Nikose, Jayant V; Kandi, Anitha J

    2017-01-01

    Urethrocutaneous fistula, which occurs after hypospadias surgery, is often a baffling problem and its treatment is challenging. The study aimed to evaluate the results of the simple procedure (Durham Smith vest-over-pant technique) for this complex problem (post-hypospadias repair fistula). During the period from 2011 to 2015, 20 patients with post-hypospadias repair fistulas underwent Durham Smith repair. Common age group was between 5 and 12 years. Site wise distribution of fistula was coronal 2 (10%), distal penile 7 (35%), mid-penile 7 (35%), and proximal-penile 4 (20%). Out of 20 patients, 15 had fistula of size <5 mm (75%) and 5 patients had fistula of size >5 mm (25%). All cases were repaired with Durham Smith vest-over-pant technique by a single surgeon. In case of multiple fistulas adjacent to each other, all fistulas were joined to form single fistula and repaired. We have successfully repaired all post-hypospadias surgery urethrocutaneous fistulas using the technique described by Durham Smith with 100% success rate. Durham Smith vest-over-pant technique is a simple solution for a complex problem (post hypospadias surgery penile fistulas) in properly selected patients. © 2017 S. Karger AG, Basel.

  10. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
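
    Schematically, marching-on-in-time schemes of this kind solve for the current source amplitudes from the prescribed boundary data minus the retarded contributions of earlier time steps. A rough sketch under that assumption; the influence matrices M[k] stand in for the paper's simple, dipole, or tripole formulations and are purely illustrative:

      import numpy as np

      def march_on_in_time(M, u):
          """Generic marching-on-in-time solve.

          M : list of influence matrices; M[k] maps source amplitudes k steps in the
              past to the present elemental volume velocities (M[0] is the
              instantaneous self term).
          u : array of shape (nsteps, nelements) of prescribed volume velocities.
          Returns the source-amplitude history q with the same shape as u.
          """
          nsteps, nelem = u.shape
          q = np.zeros((nsteps, nelem))
          for n in range(nsteps):
              rhs = u[n].copy()
              for k in range(1, min(n, len(M) - 1) + 1):
                  rhs -= M[k] @ q[n - k]          # subtract retarded contributions
              q[n] = np.linalg.solve(M[0], rhs)   # solve for the current amplitudes
          return q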

  11. Learning Problem-Solving Rules as Search through a Hypothesis Space

    ERIC Educational Resources Information Center

    Lee, Hee Seung; Betts, Shawn; Anderson, John R.

    2016-01-01

    Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem…

  12. Getting to the Bottom of a Ladder Problem

    ERIC Educational Resources Information Center

    McCartney, Mark

    2002-01-01

    In this paper, the author introduces a simple problem relating to a pair of ladders. A mathematical model of the problem produces an equation which can be solved in a number of ways using mathematics appropriate to "A" level students or first year undergraduates. The author concludes that the ladder problem can be used in class to develop and…

  13. Using a Five-Step Procedure for Inferential Statistical Analyses

    ERIC Educational Resources Information Center

    Kamin, Lawrence F.

    2010-01-01

    Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The…

  14. Building and Solving Odd-One-Out Classification Problems: A Systematic Approach

    ERIC Educational Resources Information Center

    Ruiz, Philippe E.

    2011-01-01

    Classification problems ("find the odd-one-out") are frequently used as tests of inductive reasoning to evaluate human or animal intelligence. This paper introduces a systematic method for building the set of all possible classification problems, followed by a simple algorithm for solving the problems of the R-ASCM, a psychometric test derived…

  15. Intellectual Abilities That Discriminate Good and Poor Problem Solvers.

    ERIC Educational Resources Information Center

    Meyer, Ruth Ann

    1981-01-01

    This study compared good and poor fourth-grade problem solvers on a battery of 19 "reference" tests for verbal, induction, numerical, word fluency, memory, perceptual speed, and simple visualization abilities. Results suggest verbal, numerical, and especially induction abilities are important to successful mathematical problem solving.…

  16. Solution of Stochastic Capital Budgeting Problems in a Multidivisional Firm.

    DTIC Science & Technology

    1980-06-01

    linear programming with simple recourse (see, for example, Dantzig or Ziemba) and has been applied to capital budgeting problems ...

  17. Moving Material into Space Without Rockets.

    ERIC Educational Resources Information Center

    Cheng, R. S.; Trefil, J. S.

    1985-01-01

    In response to conventional rocket demands on fuel supplies, electromagnetic launchers were developed to give payloads high velocity using a stationary energy source. Several orbital mechanics problems are solved, including a simple problem (radial launch with no rotation) and a complex problem involving air resistance and gravity. (DH)

  18. Reflection on solutions in the form of refutation texts versus problem solving: the case of 8th graders studying simple electric circuits

    NASA Astrophysics Data System (ADS)

    Safadi, Rafi'; Safadi, Ekhlass; Meidav, Meir

    2017-01-01

    This study compared students’ learning in troubleshooting and problem solving activities. The troubleshooting activities provided students with solutions to conceptual problems in the form of refutation texts; namely, solutions that portray common misconceptions, refute them, and then present the accepted scientific ideas. They required students to individually diagnose these solutions; that is, to identify the erroneous and correct parts of the solutions and explain in what sense they differed, and later share their work in whole class discussions. The problem solving activities required the students to individually solve these same problems, and later share their work in whole class discussions. We compared the impact of the individual work stage in the troubleshooting and problem solving activities on promoting argumentation in the subsequent class discussions, and the effects of these activities on students’ engagement in self-repair processes; namely, in learning processes that allowed the students to self-repair their misconceptions, and by extension on advancing their conceptual knowledge. Two 8th grade classes studying simple electric circuits with the same teacher took part. One class (28 students) carried out four troubleshooting activities and the other (31 students) four problem solving activities. These activities were interwoven into a twelve lesson unit on simple electric circuits that was spread over a period of 2 months. The impact of the troubleshooting activities on students’ conceptual knowledge was significantly higher than that of the problem solving activities. This result is consistent with the finding that the troubleshooting activities engaged students in self-repair processes whereas the problem solving activities did not. The results also indicated that diagnosing solutions to conceptual problems in the form of refutation texts, as opposed to solving these same problems, apparently triggered argumentation in subsequent class discussions, even though the teacher was unfamiliar with the best ways to conduct argumentative classroom discussions. We account for these results and suggest possible directions for future research.

  19. Bhopal: lessons learned

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenly, G.D. Jr.

    1986-03-01

    The risk assessment lesson learned from the Bhopal tragedy is both simple and complex. Practical planning for toxic material releases must start with an understanding of what the risks and possible consequences are. Additionally, plans must be formulated to ensure immediate decisive actions tailored to site-specific scenarios, and the possible impacts projected on both the plant and surrounding communities. Most importantly, the planning process must include the communities that could be affected. Such planning will ultimately provide significant financial savings and provide for good public relations, and this makes good business sense in both developed and developing countries. Paraphrasing the adage "a penny saved is a penny earned," a penny spent on emergency preparedness is dollars earned through public awareness. The complex aspect of these simple concepts is overcoming human inertia, i.e., overcoming the "it can't happen here" syndrome in both government and private industry. A world center of excellence (ITRAC), acting as a center for education, research, and development in the area of emergency planning and response, will be the conduit for needed technology transfer to national centers of excellence in emergency planning and response. These national emergency planning and response centers (NARACS), managed by private industry for governments, will be catalysts to action in formulating effective plans involving potentially affected communities and plant management. The ITRAC/NARAC proposal is a simple concept involving complex ideas to solve the simple problem of being prepared for the Bhopal-like emergency which, as experience has demonstrated, will have complex consequences for the unprepared.

  20. Community Involvement Manual.

    DTIC Science & Technology

    1979-05-01

    and social problems, does not lend itself to a single or simple solution. This is why we must all be involved. For this reason we believe that...of admission to decisionmaking. At times the implications of this relatively simple premise are not minor. Many people beginning community...involvement programs have found it extremely difficult to locate technical people able to translate technical reports into simple, everyday English. There

  1. HIA, the next step: Defining models and roles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putters, Kim

    If HIA is to be an effective instrument for optimising health interests in the policy making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective in which there is a systematic and orderly progression from problem formulation to solution or a network perspective in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple with clear causal pathways and responsibilities or complex with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model that is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions taking these steps in no particular order. The final model is the 'garbage can' model fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  2. Symmetries and "simple" solutions of the classical n-body problem

    NASA Astrophysics Data System (ADS)

    Chenciner, Alain

    2006-03-01

    The Lagrangian of the classical n-body problem has well known symmetries: isometries of the ambient Euclidean space (translations, rotations, reflexions) and changes of scale coming from the homogeneity of the potential. To these symmetries are associated "simple" solutions of the problem, the so-called homographic motions, which play a basic role in the global understanding of the dynamics. The classical subproblems (planar, isosceles) are also consequences of the existence of symmetries: invariance under reflexion through a plane in the first case, invariance under exchange of two equal masses in the second. In these two cases, the symmetry acts at the level of the "shape space" (the oriented one in the first case) whose existence is the main difference between the 2-body problem and the (n ≥ 3)-body problem. These symmetries of the Lagrangian imply symmetries of the action functional, which is defined on the space of regular enough loops of a given period in the configuration space of the problem. Minimization of the action under well-chosen symmetry constraints leads to remarkable solutions of the n-body problem which may also be called simple and could play after the homographic ones the role of organizing centers in the global dynamics. In [13] and [16], I have given a survey of the new classes of solutions which had been obtained in this way, mainly choreographies of n equal masses in a plane or in space and generalized Hip-Hops of at least 4 arbitrary masses in space. I give here an updated overview of the results and a quick glance at the methods of proofs.
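
    For reference, the objects referred to are the standard ones: with the gravitational constant set to one, the n-body Lagrangian and the fixed-period action functional whose symmetry-constrained minimizers give the solutions discussed are

      L(q,\dot q) = \sum_{i=1}^{n} \tfrac{1}{2}\, m_i \,\lVert \dot q_i \rVert^2 + \sum_{1 \le i < j \le n} \frac{m_i m_j}{\lVert q_i - q_j \rVert},
      \qquad
      \mathcal{A}(q) = \int_0^T L\bigl(q(t),\dot q(t)\bigr)\, dt,

    where the minimization of \mathcal{A} is carried out over sufficiently regular loops q of period T satisfying the chosen symmetry constraints.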

  3. Improving the simple, complicated and complex realities of community-acquired pneumonia.

    PubMed

    Liu, S K; Homa, K; Butterly, J R; Kirkland, K B; Batalden, P B

    2009-04-01

    This paper first describes efforts to improve the care for patients hospitalised with community-acquired pneumonia and the associated changes in quality measures at a rural academic medical centre. The results of the improvement interventions and the associated clinical realities, expected outcomes, measures, and improvement aims are then re-examined using the Glouberman and Zimmerman typology of healthcare problems--simple, complicated and complex. The typology is then used to explore the future design and assessment of improvement interventions, which may allow better matching with the types of problem healthcare providers and organisations are confronted with. Matching improvement interventions with problem category has the possibility of improving the success of improvement efforts and the reliability of care while at the same time preserving needed provider autonomy and judgement to adapt care for more complex problems.

  4. Composition of Indigo naturalis.

    PubMed

    Plitzko, Inken; Mohn, Tobias; Sedlacek, Natalie; Hamburger, Matthias

    2009-06-01

    A proposal for a European Pharmacopoeia monograph concerning Indigo naturalis has recently been published, whereby the indigo (1) and indirubin (2) content should be determined by HPLC-UV. This method was tested, but problems were seen with the dosage of indigo due to poor solubility. A quantitative assay for indigo based on (1)H-NMR was developed as an alternative. The HPLC and qNMR assays were compared with eight Indigo naturalis samples. The HPLC assay consistently gave much lower indigo concentrations because solubility was the limiting factor in sample preparation. In one sample, sucrose was identified by (1)H-NMR as an organic additive. Simple wet chemistry assays for undeclared additives such as sugars and starch were tested with artificially spiked Indigo naturalis samples to establish their limits of detection, and sulfate ash determinations were carried out in view of a better assessment of Indigo naturalis in a future European monograph.

  5. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
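
    For orientation, the underlying min-max problem has the standard form (stated generically; E is the unit interval in the real case and the relevant complex domain otherwise):

      \min_{p \in \mathcal{P}_n} \; \max_{z \in E} \, \lvert f(z) - p(z) \rvert ,

    and the explicitly solvable cases are those in which the error curve f - p^* of the best approximation p^* has a particularly simple shape.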

  6. Use of a Computer Simulation To Develop Mental Simulations for Understanding Relative Motion Concepts.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    1999-01-01

    Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…

  7. On the Beauty of Mathematics as Exemplified by a Problem in Combinatorics.

    ERIC Educational Resources Information Center

    Dence, Thomas P.

    1982-01-01

    The beauty of discovering some simple yet elegant proof either to something new or to an already established fact is discussed. A combinatorial problem that deals with covering a checkerboard with dominoes is presented as a starting point for individual investigation of similar problems. (MP)

  8. Field Theory in Cultural Capital Studies of Educational Attainment

    ERIC Educational Resources Information Center

    Krarup, Troels; Munk, Martin D.

    2016-01-01

    This article argues that there is a double problem in international research in cultural capital and educational attainment: an empirical problem, since few new insights have been gained within recent years; and a theoretical problem, since cultural capital is seen as a simple hypothesis about certain isolated individual resources, disregarding…

  9. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  10. The Dementia Services Mini-Screen: A Simple Method to Identify Patients and Caregivers Needing Enhanced Dementia Care Services

    PubMed Central

    Borson, Soo; Scanlan, James M.; Sadak, Tatiana; Lessig, Mary; Vitaliano, Peter

    2014-01-01

    Objective: The National Alzheimer’s Plan calls for targeted health system change to improve outcomes for persons with dementia and their family caregivers. We explored whether dementia-specific service needs and gaps could be predicted from simple information that can be readily acquired in routine medical care settings. Method: Primary family caregivers for cognitively impaired older adults (n=215) were asked about current stress, challenging patient behaviors, and prior-year needs and gaps in 16 medical and psychosocial services. Demographic data, caregiver stress, and patient clinical features were evaluated in regression analyses to identify unique predictors of service needs and gaps. Results: Caregiver stress and patient behavior problems together accounted for an average of 24% of the whole-sample variance in total needs and gaps. Across all analyses, including total, medical, and psychosocial services needs and gaps, all other variables combined (comorbid chronic disease, dementia severity, age, caregiver relationship, and residence) accounted for a mean of 3%, with no variable yielding more than 4% in any equation. We combined stress and behavior problem indicators into a simple screen. In early/mild dementia dyads (n=111) typical in primary care settings, the screen identified gaps in total and psychosocial care in 84% and 77%, respectively, of those with high stress/high behavior problems vs. 25% and 23%, respectively, of those with low stress/low behavior problems. Medical care gaps were dramatically higher in high stress/high behavior problem dyads (66%) than all others (12%). Conclusion: A simple tool (likely completed in 1–2 minutes) which combines caregiver stress and patient behavior problems, the Dementia Services Mini-Screen, could help clinicians rapidly identify high need, high gap dyads. Health care systems could use it to estimate population needs for targeted dementia services and facilitate their development. PMID:24315560

  11. Precise Temperature Measurement for Increasing the Survival of Newborn Babies in Incubator Environments

    PubMed Central

    Frischer, Robert; Penhaker, Marek; Krejcar, Ondrej; Kacerovsky, Marian; Selamat, Ali

    2014-01-01

    Precise temperature measurement is essential in a wide range of applications in the medical environment; however, regarding the problem of temperature measurement inside a simple incubator, neither a simple nor a low-cost solution has been proposed yet. Given that standard temperature sensors don't satisfy the necessary expectations, the problem is not measuring temperature, but rather achieving the desired sensitivity. In response, this paper introduces a novel hardware design as well as the implementation that increases measurement sensitivity in defined temperature intervals at low cost. PMID:25494352

  12. The pear thrips problem

    Treesearch

    Bruce L. Parker

    1991-01-01

    As entomologists, we sometimes like to think of an insect pest problem as simply a problem with an insect and its host. Our jobs would be much easier if that were the case, but of course, it is never that simple. There are many other factors besides the insect, and each one must be fully considered to understand the problem and develop effective management solutions....

  13. Transformation Theory, Accelerating Frames, and Two Simple Problems

    ERIC Educational Resources Information Center

    Schmid, G. Bruno

    1977-01-01

    Presents an operator which transforms quantum functions to solve problems of the stationary state wave functions for a particle and the motion and spreading of a Gaussian wave packet in uniform gravitational fields. (SL)

  14. Primer on clinical acid-base problem solving.

    PubMed

    Whittier, William L; Rutecki, Gregory W

    2004-03-01

    Acid-base problem solving has been an integral part of medical practice in recent generations. Diseases discovered in the last 30-plus years, for example, Bartter syndrome and Gitelman syndrome, D-lactic acidosis, and bulimia nervosa, can be diagnosed according to characteristic acid-base findings. Accuracy in acid-base problem solving is a direct result of a reproducible, systematic approach to arterial pH, partial pressure of carbon dioxide, bicarbonate concentration, and electrolytes. The 'Rules of Five' is one tool that enables clinicians to determine the cause of simple and complex disorders, even triple acid-base disturbances, with consistency. In addition, other electrolyte abnormalities that accompany acid-base disorders, such as hypokalemia, can be incorporated into algorithms that complement the Rules and contribute to efficient problem solving in a wide variety of diseases. Recently urine electrolytes have also assisted clinicians in further characterizing select disturbances. Acid-base patterns, in many ways, can serve as a 'common diagnostic pathway' shared by all subspecialties in medicine. From infectious disease (eg, lactic acidemia with highly active antiretroviral therapy) through endocrinology (eg, Conn's syndrome, high urine chloride alkalemia) to the interface between primary care and psychiatry (eg, bulimia nervosa with multiple potential acid-base disturbances), acid-base problem solving is the key to unlocking otherwise unrelated diagnoses. Inasmuch as the Rules are clinical tools, they are applied throughout this monograph to diverse pathologic conditions typical in contemporary practice.
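
    To make the idea of a reproducible, systematic approach concrete, the sketch below encodes a first pass at an arterial blood gas, using the usual reference ranges (pH 7.35-7.45, pCO2 about 40 mmHg, HCO3 22-26 mEq/L, anion gap about 12) and Winter's formula for expected respiratory compensation. It is illustrative only; it is not the 'Rules of Five' themselves and is not intended for clinical use:

      def classify_acid_base(ph, pco2, hco3, na=None, cl=None):
          """Very simplified first pass at an arterial blood gas (illustration only).

          ph is unitless, pco2 in mmHg, hco3 and electrolytes in mEq/L.
          """
          findings = []
          if ph < 7.35:
              findings.append("metabolic acidosis" if hco3 < 22 else "respiratory acidosis")
          elif ph > 7.45:
              findings.append("metabolic alkalosis" if hco3 > 26 else "respiratory alkalosis")
          else:
              findings.append("pH in reference range (mixed disorder still possible)")

          if "metabolic acidosis" in findings:
              # Winter's formula: expected pCO2 = 1.5*HCO3 + 8 (+/- 2)
              expected_pco2 = 1.5 * hco3 + 8
              if abs(pco2 - expected_pco2) > 2:
                  findings.append("inadequate or excessive respiratory compensation")
              if na is not None and cl is not None:
                  gap = na - (cl + hco3)
                  findings.append("high anion gap" if gap > 12 else "normal anion gap")
          return findings

      # Example: compensated high anion gap metabolic acidosis.
      print(classify_acid_base(7.28, 26, 12, na=140, cl=104))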

  15. Disease induction by human microbial pathogens in plant-model systems: potential, problems and prospects.

    PubMed

    van Baarlen, Peter; van Belkum, Alex; Thomma, Bart P H J

    2007-02-01

    Relatively simple eukaryotic model organisms such as the genetic model weed plant Arabidopsis thaliana possess an innate immune system that shares important similarities with its mammalian counterpart. In fact, some human pathogens infect Arabidopsis and cause overt disease with human symptomology. In such cases, decisive elements of the plant's immune system are likely to be targeted by the same microbial factors that are necessary for causing disease in humans. These similarities can be exploited to identify elementary microbial pathogenicity factors and their corresponding targets in a green host. This circumvents important cost aspects that often frustrate studies in humans or animal models and, in addition, results in facile ethical clearance.

  16. Splenic red pulp macrophages are intrinsically superparamagnetic and contaminate magnetic cell isolates.

    PubMed

    Franken, Lars; Klein, Marika; Spasova, Marina; Elsukova, Anna; Wiedwald, Ulf; Welz, Meike; Knolle, Percy; Farle, Michael; Limmer, Andreas; Kurts, Christian

    2015-08-11

    A main function of splenic red pulp macrophages is the degradation of damaged or aged erythrocytes. Here we show that these macrophages accumulate ferrimagnetic iron oxides that render them intrinsically superparamagnetic. Consequently, these cells routinely contaminate splenic cell isolates obtained with the use of MCS, a technique that has been widely used in immunological research for decades. These contaminations can profoundly alter experimental results. In mice deficient for the transcription factor SpiC, which lack red pulp macrophages, liver Kupffer cells take over the task of erythrocyte degradation and become superparamagnetic. We describe a simple additional magnetic separation step that avoids this problem and substantially improves purity of magnetic cell isolates from the spleen.

  17. Additions to Pollard's "fun with Gompertz".

    PubMed

    Krishnamoorthy, S; Kulkarni, P M

    1993-01-01

    "In a recent paper, Pollard (1991) has demonstrated that under the Gompertz law of mortality quick accurate or approximate answers can be obtained to many queries on survival. Some of Pollard's formulae can also be developed in the context of multiple decrement life tables so as to arrive at simple solutions to problems on the probability of death due to a given cause and the effect of the elimination of a cause of death. It is realized that the cause-specific force of mortality may not obey the Gompertz law. Still, it may be possible to group the causes in such a way that for each group the Gompertz curve provides a good approximation." excerpt

  18. A cross-sectional survey of the growth and nutrition of the Bedouin of the South Sinai Peninsula.

    PubMed

    Beverley, David; Henderson, Catriona

    2003-09-01

    A total of 271 Bedouin, 140 of them younger than 16 years and 110 of them female, were examined as part of a health survey. The Bedouin of the southern Sinai showed evidence of stunted growth. Sixty-six subjects (24 female) were clinically anaemic. This might have been nutritional or secondary to giardiasis. Simple nutritional strategies to increase the protein and iron content of the diet might help to prevent these problems. Twenty Bedouin had sensorineural hearing loss that was thought to be autosomal recessive in one family grouping. In addition, ten adults had had an uvulectomy, a traditional means of thirst quenching.

  19. MacDoctor: The Macintosh diagnoser

    NASA Technical Reports Server (NTRS)

    Lavery, David B.; Brooks, William D.

    1990-01-01

    When the Macintosh computer was first released, the primary user was a computer hobbyist who typically had a significant technical background and was highly motivated to understand the internal structure and operational intricacies of the computer. In recent years the Macintosh computer has become a widely-accepted general purpose computer which is being used by an ever-increasing non-technical audience. This has led to a large base of users which has neither the interest nor the background to understand what is happening 'behind the scenes' when the Macintosh is put to use - or what should be happening when something goes wrong. Additionally, the Macintosh itself has evolved from a simple closed design to a complete family of processor platforms and peripherals with a tremendous number of possible configurations. With the increasing popularity of the Macintosh series, software and hardware developers are producing a product for every user's need. As the complexity of configuration possibilities grows, experienced or even expert knowledge is required to diagnose problems. This presents a problem to uneducated or casual users. This problem indicates a new Macintosh consumer need; that is, a diagnostic tool able to determine the problem for the user. As the volume of Macintosh products has increased, this need has also increased.

  20. TRUMP; transient and steady state temperature distribution. [IBM360,370; CDC7600; FORTRAN IV (95%) and BAL (5%) (IBM); FORTRAN IV (CDC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables--temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state. IBM360, 370; CDC7600; FORTRAN IV (95%) and BAL (5%) (IBM); FORTRAN IV (CDC); OS/360 (IBM360), OS/370 (IBM370), SCOPE 2.1.5 (CDC7600). As dimensioned, the program requires 400K bytes of storage on an IBM370 and 145,100 (octal) words on a CDC7600.
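
    In the thermal case the governing equations described are of the familiar conduction-with-reaction form (a schematic statement, not TRUMP's exact input formulation):

      \rho\, c(T)\, \frac{\partial T}{\partial t} = \nabla \cdot \bigl( k(T)\, \nabla T \bigr) + \sum_{i=1}^{2} q_i\, A_i \exp\!\left( -\frac{E_i}{R\,T} \right) w_i ,
      \qquad
      \frac{d w_i}{d t} = -A_i \exp\!\left( -\frac{E_i}{R\,T} \right) w_i ,

    where w_i are the two reactant concentrations, q_i their heats of reaction, and A_i, E_i the Arrhenius rate parameters.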

  1. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability analysis (TAU) provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and manually computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
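
    A minimal sketch of two of the data-handling ingredients, an event record and a correction for the slower pace of think-aloud sessions, is given below; the field names, the correction value, and the flagging rule are assumptions for illustration, not the paper's schema or coding criteria:

      from dataclasses import dataclass

      @dataclass
      class InterfaceEvent:
          """Illustrative event record; field names are assumed, not the paper's schema."""
          timestamp: float   # seconds since session start
          user_id: str
          subgoal: str       # higher-order subgoal the event belongs to
          action: str        # e.g. "click", "menu_select", "system_response"

      CORRECTION = 1.3       # hypothetical think-aloud slow-down factor

      def flag_slow_subgoals(events, threshold_s):
          """Flag subgoals whose corrected duration exceeds a threshold (illustrative rule)."""
          spans = {}
          for e in events:
              start, end = spans.get(e.subgoal, (e.timestamp, e.timestamp))
              spans[e.subgoal] = (min(start, e.timestamp), max(end, e.timestamp))
          return [g for g, (s, t) in spans.items() if (t - s) / CORRECTION > threshold_s]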

  2. When Practice Doesn't Lead to Retrieval: An Analysis of Children's Errors with Simple Addition

    ERIC Educational Resources Information Center

    de Villiers, Celéste; Hopkins, Sarah

    2013-01-01

    Counting strategies initially used by young children to perform simple addition are often replaced by more efficient counting strategies, decomposition strategies and rule-based strategies until most answers are encoded in memory and can be directly retrieved. Practice is thought to be the key to developing fluent retrieval of addition facts. This…

  3. Color Counts, Too!

    ERIC Educational Resources Information Center

    Sewell, Julia H.

    1983-01-01

    Students with undetected color blindness can have problems with specific teaching methods and materials. The problem should be ruled out in children with suspected learning disabilities and taken into account in career counseling. Nine examples of simple classroom modifications are described. (CL)

  4. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.
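
    The central spin Hamiltonian in question has the standard hyperfine form (an electron Zeeman term is included here for generality):

      \hat H = \omega\, \hat S_z + \sum_{k=1}^{N} A_k\, \hat{\mathbf S} \cdot \hat{\mathbf I}_k ,

    and the quantity of interest is the electron spin correlation tensor, with components of the form R_{ab}(t) = \langle \hat S_a(t)\, \hat S_b(0) \rangle averaged over the nuclear spin bath.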

  5. Nick-free formation of reciprocal heteroduplexes: a simple solution to the topological problem.

    PubMed Central

    Wilson, J H

    1979-01-01

    Because the individual strands of DNA are intertwined, formation of heteroduplex structures between duplexes--as in presumed recombination intermediates--presents a topological puzzle, known as the winding problem. Previous approaches to this problem have assumed that single-strand breaks are required to permit formation of fully coiled heteroduplexes. This paper describes a simple, nick-free solution to the winding problem that satisfies all topological constraints. Homologous duplexes associated by their minor-groove surfaces can switch strand pairing to form reciprocal heteroduplexes that coil together into a compact, four-stranded helix throughout the region of pairing. Model building shows that this fused heteroduplex structure is plausible, being composed entirely of right-handed primary helices with Watson-Crick base pairing throughout. Its simplicity of formation, structural symmetry, and high degree of specificity are suggestive of a natural mechanism for alignment by base pairing between intact homologous duplexes. Implications for genetic recombination are discussed. PMID:291028

  6. Thinking in Terms of Sensors: Personification of Self as an Object in Physics Problem Solving

    ERIC Educational Resources Information Center

    Tabor-Morris, A. E.

    2015-01-01

    How can physics teachers help students develop consistent problem solving techniques for both simple and complicated physics problems, such as those that encompass objects undergoing multiple forces (mechanical or electrical) as individually portrayed in free-body diagrams and/or phenomenon involving multiple objects, such as Doppler effect…

  7. Helping Students with Emotional and Behavioral Disorders Solve Mathematics Word Problems

    ERIC Educational Resources Information Center

    Alter, Peter

    2012-01-01

    The author presents a strategy for helping students with emotional and behavioral disorders become more proficient at solving math word problems. Math word problems require students to go beyond simple computation in mathematics (e.g., adding, subtracting, multiplying, and dividing) and use higher level reasoning that includes recognizing relevant…

  8. Fracture mechanics and parapsychology

    NASA Astrophysics Data System (ADS)

    Cherepanov, G. P.

    2010-08-01

    The problem of postcritical deformation of materials beyond the ultimate strength is considered a division of fracture mechanics. A simple example is used to show the relationship between this problem and parapsychology, which studies phenomena and processes where the causality principle fails. It is shown that the concept of postcritical deformation leads to problems with no solution.

  9. Duality of Mathematical Thinking When Making Sense of Simple Word Problems: Theoretical Essay

    ERIC Educational Resources Information Center

    Polotskaia, Elena; Savard, Annie; Freiman, Viktor

    2015-01-01

    This essay proposes a reflection on the learning difficulties and teaching approaches associated with arithmetic word problem solving. We question the development of word problem solving skills in the early grades of elementary school. We are trying to revive the discussion because first, the knowledge in question--reversibility of arithmetic…

  10. Modular thermal analyzer routine, volume 1

    NASA Technical Reports Server (NTRS)

    Oren, J. A.; Phillips, M. A.; Williams, D. R.

    1972-01-01

    The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those containing complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of computer execution time and storage space required for a given problem. The computer time required to perform a given problem on MOTAR is approximately 40 to 50 percent of that required for the currently existing, widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.

  11. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

    In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
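
    The key ingredient is standard Richardson extrapolation across the two coarsest available grids: if the discretization is of order p, the solutions u_h and u_{2h} on the current and previous grids combine as

      u \approx u_h + \frac{u_h - u_{2h}}{2^{p} - 1},

    which cancels the leading error term; interpolating the extrapolated field to the next finer grid with quadratic finite elements is what supplies the third-order initial guess handed to the Jacobi-preconditioned conjugate gradient solver.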

  12. Brain Activation during Addition and Subtraction Tasks In-Noise and In-Quiet

    PubMed Central

    Abd Hamid, Aini Ismafairus; Yusoff, Ahmad Nazlim; Mukari, Siti Zamratol-Mai Sarah; Mohamad, Mazlyfarina

    2011-01-01

    Background: In spite of extensive research conducted to study how the human brain works, little is known about a special function of the brain that stores and manipulates information—the working memory—and how noise influences this special ability. In this study, functional magnetic resonance imaging (fMRI) was used to investigate brain responses to arithmetic problems solved in noisy and quiet backgrounds. Methods: Eighteen healthy young males performed simple arithmetic operations of addition and subtraction with in-quiet and in-noise backgrounds. The MATLAB-based Statistical Parametric Mapping (SPM8) was implemented on the fMRI datasets to generate and analyse the activated brain regions. Results: Group results showed that addition and subtraction operations evoked extended activation in the left inferior parietal lobe, left precentral gyrus, left superior parietal lobe, left supramarginal gyrus, and left middle temporal gyrus. This supported the hypothesis that the human brain relatively activates its left hemisphere more compared with the right hemisphere when solving arithmetic problems. The insula, middle cingulate cortex, and middle frontal gyrus, however, showed more extended right hemispheric activation, potentially due to the involvement of attention, executive processes, and working memory. For addition operations, there was extensive left hemispheric activation in the superior temporal gyrus, inferior frontal gyrus, and thalamus. In contrast, subtraction tasks evoked a greater activation of similar brain structures in the right hemisphere. For both addition and subtraction operations, the total number of activated voxels was higher for in-noise than in-quiet conditions. Conclusion: These findings suggest that when arithmetic operations were delivered auditorily, the auditory, attention, and working memory functions were required to accomplish the executive processing of the mathematical calculation. The respective brain activation patterns appear to be modulated by the noisy background condition. PMID:22135581

  13. The molecular matching problem

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1993-01-01

    Molecular chemistry contains many difficult optimization problems that have begun to attract the attention of optimizers in the Operations Research community. Problems including protein folding, molecular conformation, molecular similarity, and molecular matching have been addressed. Minimum energy conformations for simple molecular structures such as water clusters, Lennard-Jones microclusters, and short polypeptides have dominated the literature to date. However, a variety of interesting problems exist and we focus here on a molecular structure matching (MSM) problem.

  14. FormTracer. A mathematica tracing package using FORM

    NASA Astrophysics Data System (ADS)

    Cyrol, Anton K.; Mitter, Mario; Strodthoff, Nils

    2017-10-01

    We present FormTracer, a high-performance, general purpose, easy-to-use Mathematica tracing package which uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. FormTracer supports a wide range of syntaxes which endows it with a high flexibility. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided. Program Files doi: http://dx.doi.org/10.17632/7rd29h4p3m.1. Licensing provisions: GPLv3. Programming language: Mathematica and FORM. Nature of problem: efficiently compute traces of large expressions. Solution method: the expression to be traced is decomposed into its subspaces by a recursive Mathematica expansion algorithm. The result is subsequently translated to a FORM script that takes the traces. After FORM is executed, the final result is either imported into Mathematica or exported as optimized C/C++/Fortran code. Unusual features: the outstanding features of FormTracer are the simple interface, the capability to efficiently handle an arbitrary number of Lie groups in addition to Dirac and Lorentz tensors, and a customizable input syntax.

  15. A Simple Genetic Incompatibility Causes Hybrid Male Sterility in Mimulus

    PubMed Central

    Sweigart, Andrea L.; Fishman, Lila; Willis, John H.

    2006-01-01

    Much evidence has shown that postzygotic reproductive isolation (hybrid inviability or sterility) evolves by the accumulation of interlocus incompatibilities between diverging populations. Although in theory only a single pair of incompatible loci is needed to isolate species, empirical work in Drosophila has revealed that hybrid fertility problems often are highly polygenic and complex. In this article we investigate the genetic basis of hybrid sterility between two closely related species of monkeyflower, Mimulus guttatus and M. nasutus. In striking contrast to Drosophila systems, we demonstrate that nearly complete hybrid male sterility in Mimulus results from a simple genetic incompatibility between a single pair of heterospecific loci. We have genetically mapped this sterility effect: the M. guttatus allele at the hybrid male sterility 1 (hms1) locus acts dominantly in combination with recessive M. nasutus alleles at the hybrid male sterility 2 (hms2) locus to cause nearly complete hybrid male sterility. In a preliminary screen to find additional small-effect male sterility factors, we identified one additional locus that also contributes to some of the variation in hybrid male fertility. Interestingly, hms1 and hms2 also cause a significant reduction in hybrid female fertility, suggesting that sex-specific hybrid defects might share a common genetic basis. This possibility is supported by our discovery that recombination is reduced dramatically in a cross involving a parent with the hms1–hms2 incompatibility. PMID:16415357

  16. A simple genetic incompatibility causes hybrid male sterility in mimulus.

    PubMed

    Sweigart, Andrea L; Fishman, Lila; Willis, John H

    2006-04-01

    Much evidence has shown that postzygotic reproductive isolation (hybrid inviability or sterility) evolves by the accumulation of interlocus incompatibilities between diverging populations. Although in theory only a single pair of incompatible loci is needed to isolate species, empirical work in Drosophila has revealed that hybrid fertility problems often are highly polygenic and complex. In this article we investigate the genetic basis of hybrid sterility between two closely related species of monkeyflower, Mimulus guttatus and M. nasutus. In striking contrast to Drosophila systems, we demonstrate that nearly complete hybrid male sterility in Mimulus results from a simple genetic incompatibility between a single pair of heterospecific loci. We have genetically mapped this sterility effect: the M. guttatus allele at the hybrid male sterility 1 (hms1) locus acts dominantly in combination with recessive M. nasutus alleles at the hybrid male sterility 2 (hms2) locus to cause nearly complete hybrid male sterility. In a preliminary screen to find additional small-effect male sterility factors, we identified one additional locus that also contributes to some of the variation in hybrid male fertility. Interestingly, hms1 and hms2 also cause a significant reduction in hybrid female fertility, suggesting that sex-specific hybrid defects might share a common genetic basis. This possibility is supported by our discovery that recombination is reduced dramatically in a cross involving a parent with the hms1-hms2 incompatibility.

  17. Viewing and Editing Earth Science Metadata MOBE: Metadata Object Browser and Editor in Java

    NASA Astrophysics Data System (ADS)

    Chase, A.; Helly, J.

    2002-12-01

    Metadata is an important, yet often neglected aspect of successful archival efforts. However, generating robust, useful metadata is often a time-consuming and tedious task. We have been approaching this problem from two directions: first, by automating metadata creation, pulling from known sources of data; and second, as detailed here, by developing friendly software for human interaction with the metadata. MOBE and COBE (Metadata Object Browser and Editor, and Canonical Object Browser and Editor, respectively) are Java applications for editing and viewing metadata and digital objects. MOBE has already been designed and deployed and is currently being integrated into other areas of the SIOExplorer project. COBE is in the design and development stage, being created with the same considerations in mind as those for MOBE. Metadata creation, viewing, data object creation, and data object viewing, when taken on a small scale, are all relatively simple tasks. Computer science, however, has an infamous reputation for transforming the simple into the complex. As a system scales upwards to become more robust, new features arise and additional functionality is added to the software being written to manage the system. The software that emerges from such an evolution, though powerful, is often complex and difficult to use. With MOBE the focus is on a tool that does a small number of tasks very well. The result has been an application that enables users to manipulate metadata in an intuitive and effective way. This allows for a tool that serves its purpose without introducing additional cognitive load onto the user, an end goal we continue to pursue.

  18. Development of Schema Knowledge in the Classroom: Effects upon Problem Representation and Problem Solution of Programming.

    ERIC Educational Resources Information Center

    Tsai, Shu-Er

    Students with a semester or more of instruction often display remarkable naivety about the language that they have been studying and often prove unable to manage simple programming problems. The main purpose of this study was to create a set of problem-plan-program types for the BASIC programming language to help high school students build plans…

  19. Pathological and Sub-Clinical Problem Gambling in a New Zealand Prison: A Comparison of the Eight and SOGS Gambling Screens

    ERIC Educational Resources Information Center

    Sullivan, Sean; Brown, Robert; Skinner, Bruce

    2008-01-01

    Prison populations have been identified as having elevated levels of problem gambling prevalence, and screening for problem gambling may provide an opportunity to identify and address a behavior that may otherwise lead to re-offending. A problem gambling screen for this purpose would need to be brief, simple to score, and be able to be…

  20. Ending up with Less: The Role of Working Memory in Solving Simple Subtraction Problems with Positive and Negative Answers

    ERIC Educational Resources Information Center

    Robert, Nicole D.; LeFevre, Jo-Anne

    2013-01-01

    Does solving subtraction problems with negative answers (e.g., 5-14) require different cognitive processes than solving problems with positive answers (e.g., 14-5)? In a dual-task experiment, young adults (N=39) combined subtraction with two working memory tasks, verbal memory and visual-spatial memory. All of the subtraction problems required…

  1. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
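
    The algorithms analysed are of the simple mutate-and-select kind; a generic skeleton is sketched below, with the solution representation (cluster-based or node-based), the mutation operator, and the fitness function left to the caller. Nothing in it is specific to Hu and Raidl's operators, and the toy usage at the end is a stand-in problem, not the generalized TSP:

      import random

      def one_plus_one_ea(init, mutate, fitness, max_iters=100000):
          """Generic (1+1)-style mutate-and-select loop for minimization (accepts ties)."""
          current = init()
          best = fitness(current)
          for _ in range(max_iters):
              candidate = mutate(current)
              value = fitness(candidate)
              if value <= best:
                  current, best = candidate, value
          return current, best

      # Toy usage: minimize a permutation's total displacement (stand-in fitness).
      n = 10
      perm_init = lambda: random.sample(range(n), n)
      def perm_mutate(p):
          q = p[:]
          i, j = random.sample(range(n), 2)   # one random swap per step
          q[i], q[j] = q[j], q[i]
          return q
      perm_fitness = lambda p: sum(abs(v - i) for i, v in enumerate(p))
      print(one_plus_one_ea(perm_init, perm_mutate, perm_fitness, max_iters=5000))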

  2. Covariance expressions for eigenvalue and eigenvector problems

    NASA Astrophysics Data System (ADS)

    Liounis, Andrew J.

    There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue--eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
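
    For context, the classical first-order sensitivity result for a simple eigenvalue, stated here as the standard perturbation-theory form rather than the exact expressions derived in the thesis, reads (with v and w the right and left eigenvectors of A belonging to the eigenvalue \lambda):

        A v = \lambda v, \qquad w^{H} A = \lambda w^{H}, \qquad
        \frac{\partial \lambda}{\partial A_{ij}} = \frac{\bar{w}_{i}\, v_{j}}{w^{H} v}.

    Eigenvector derivatives require additional terms involving the remaining eigenpairs and are omitted from this sketch.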

  3. Automatic alignment for three-dimensional tomographic reconstruction

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Tristan; Maretzke, Simon; Joost Batenburg, K.

    2018-02-01

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately due to, e.g. calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
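
    Schematically, and using generic notation rather than the authors' exact formulation, the joint alignment-and-reconstruction problem described above can be posed as

        \min_{x,\,\theta} \; \tfrac{1}{2}\,\| W(\theta)\,x - p \|_{2}^{2} + \lambda\, R(x),

    where x is the unknown image, p the measured projections, W(\theta) the projection operator parameterized by the uncertain angles and offsets \theta, and R a regularizer; the algorithmic framework then alternates between updating x with an algebraic reconstruction technique and updating \theta.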

  4. A Numerical Study of Mesh Adaptivity in Multiphase Flows with Non-Newtonian Fluids

    NASA Astrophysics Data System (ADS)

    Percival, James; Pavlidis, Dimitrios; Xie, Zhihua; Alberini, Federico; Simmons, Mark; Pain, Christopher; Matar, Omar

    2014-11-01

    We present an investigation into the computational efficiency benefits of dynamic mesh adaptivity in the numerical simulation of transient multiphase fluid flow problems involving Non-Newtonian fluids. Such fluids appear in a range of industrial applications, from printing inks to toothpastes and introduce new challenges for mesh adaptivity due to the additional ``memory'' of viscoelastic fluids. Nevertheless, the multiscale nature of these flows implies huge potential benefits for a successful implementation. The study is performed using the open source package Fluidity, which couples an unstructured mesh control volume finite element solver for the multiphase Navier-Stokes equations to a dynamic anisotropic mesh adaptivity algorithm, based on estimated solution interpolation error criteria, and conservative mesh-to-mesh interpolation routine. The code is applied to problems involving rheologies ranging from simple Newtonian to shear-thinning to viscoelastic materials and verified against experimental data for various industrial and microfluidic flows. This work was undertaken as part of the EPSRC MEMPHIS programme grant EP/K003976/1.

  5. Using Direct Policy Search to Identify Robust Strategies in Adapting to Uncertain Sea Level Rise and Storm Surge

    NASA Astrophysics Data System (ADS)

    Garner, G. G.; Keller, K.

    2017-12-01

    Sea-level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with deeply uncertain sea-level projections when designing a strategy for coastal adaptation. The traditional methods have provided tremendous insight into this decision problem, but are often silent on tradeoffs as well as the effects of tail-area events and of potential future learning. Here we reformulate a simple sea-level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto-dominance in the objectives, over the traditional approach under uncertain sea-level rise projections and storm surge. Additionally, the new formulation produces high quality solutions with less computational demands than the traditional approach. Our results illustrate the utility of multi-objective adaptive formulations for the example of coastal adaptation, the value of information provided by observations, and point to wider-ranging application in climate change adaptation decision problems.

  6. Solving the MHD equations by the space time conservation element and solution element method

    NASA Astrophysics Data System (ADS)

    Zhang, Moujin; John Yu, S.-T.; Henry Lin, S.-C.; Chang, Sin-Chung; Blankson, Isaiah

    2006-05-01

    We apply the space-time conservation element and solution element (CESE) method to solve the ideal MHD equations with special emphasis on satisfying the divergence-free constraint of the magnetic field, i.e., ∇ · B = 0. In the setting of the CESE method, four approaches are employed: (i) the original CESE method without any additional treatment, (ii) a simple corrector procedure to update the spatial derivatives of magnetic field B after each time marching step to enforce ∇ · B = 0 at all mesh nodes, (iii) a constraint-transport method by using a special staggered mesh to calculate magnetic field B, and (iv) the projection method by solving a Poisson equation after each time marching step. To demonstrate the capabilities of these methods, two benchmark MHD flows are calculated: (i) a rotated one-dimensional MHD shock tube problem and (ii) an MHD vortex problem. The results show no differences among the different approaches, and all results compare favorably with previously reported data.
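
    As a reminder of the generic form of approach (iv), a projection (divergence-cleaning) step can be written as follows; this is the textbook form, given only to illustrate the idea and not the exact discretization used in the CESE implementation:

        \nabla^{2} \phi = \nabla \cdot \mathbf{B}^{*}, \qquad \mathbf{B} = \mathbf{B}^{*} - \nabla \phi,

    where B* is the magnetic field after the time-marching step, so that the corrected field satisfies ∇ · B = 0 to the accuracy of the Poisson solve.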

  7. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as design procedure, low order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone, or control optimization alone. Examples include: larger design parameter space, optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to include large order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than currently available include: multiobjective criteria and nonconvex optimization. Efficient techniques to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space as well as the design space need to be developed.

  8. Boundary condition computational procedures for inviscid, supersonic steady flow field calculations

    NASA Technical Reports Server (NTRS)

    Abbett, M. J.

    1971-01-01

    Results are given of a comparative study of numerical procedures for computing solid wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.

  9. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  10. Shared and differentiated motor skill impairments in children with dyslexia and/or attention deficit disorder: From simple to complex sequential coordination

    PubMed Central

    Morin-Moncet, Olivier; Bélanger, Anne-Marie; Beauchamp, Miriam H.; Leonard, Gabriel

    2017-01-01

    Dyslexia and attention deficit disorder (AD) are prevalent neurodevelopmental conditions in children and adolescents. They have high comorbidity rates and have both been associated with motor difficulties. Little is known, however, about what is shared or differentiated in dyslexia and AD in terms of motor abilities. Even when motor skill problems are identified, few studies have used the same measurement tools, resulting in inconsistent findings. The present study assessed increasingly complex gross motor skills in children and adolescents with dyslexia, with AD, and with both dyslexia and AD. Our results suggest normal performance on simple motor-speed tests, whereas all three groups share a common impairment on unimanual and bimanual sequential motor tasks. Children in these groups generally improve with practice to the same level as normal subjects, though they make more errors. In addition, children with AD are the most impaired on complex bimanual out-of-phase movements and with manual dexterity. These latter findings are examined in light of the Multiple Deficit Model. PMID:28542319

  11. Some anticipated contributions to core fluid dynamics from the GRM

    NASA Technical Reports Server (NTRS)

    Vanvorhies, C.

    1985-01-01

    It is broadly maintained that the secular variation (SV) of the large scale geomagnetic field contains information on the fluid dynamics of Earth's electrically conducting outer core. The electromagnetic theory appropriate to a simple Earth model has recently been combined with reduced geomagnetic data in order to extract some of this information and ascertain its significance. The simple Earth model consists of a rigid, electrically insulating mantle surrounding a spherical, inviscid, and perfectly conducting liquid outer core. This model was tested against seismology by using truncated spherical harmonic models of the observed geomagnetic field to locate Earth's core-mantle boundary, CMB. Further electromagnetic theory has been developed and applied to the problem of estimating the horizontal fluid motion just beneath CMB. Of particular geophysical interest are the hypotheses that these motions: (1) include appreciable surface divergence indicative of vertical motion at depth, and (2) are steady for time intervals of a decade or more. In addition to the extended testing of the basic Earth model, the proposed GRM provides a unique opportunity to test these dynamical hypotheses.

  12. System-level protection and hardware Trojan detection using weighted voting.

    PubMed

    Amin, Hany A M; Alkabani, Yousra; Selim, Gamal M I

    2014-07-01

    The problem of hardware Trojans is becoming more serious, especially with the proliferation of fabless design houses and design reuse. Hardware Trojans can be embedded on chip during manufacturing or in third party intellectual property cores (IPs) during the design process. Recent research has been performed to detect Trojans embedded at manufacturing time by comparing the suspected chip with a golden chip that is fully trusted. However, Trojan detection in third party IP cores is more challenging than in other logic modules, especially because there is no golden chip. This paper proposes a new methodology to detect/prevent hardware Trojans in third party IP cores. The method works by gradually building trust in suspected IP cores by comparing the outputs of different untrusted implementations of the same IP core. Simulation results show that our method achieves a higher probability of Trojan detection than a naive implementation of simple voting on the output of different IP cores. In addition, experimental results show that the proposed method requires less hardware overhead than a simple voting technique achieving the same degree of security.
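
    A minimal sketch of weighted voting over redundant, untrusted implementations of the same IP core, with weights adjusted according to agreement with the consensus, is shown below. The function names, the reward/penalty factors, and the toy outputs are hypothetical and only illustrate the general idea of gradually building trust, not the specific methodology of the paper.

        def weighted_vote(outputs, weights):
            """Return the output value carrying the largest total weight."""
            tally = {}
            for out, w in zip(outputs, weights):
                tally[out] = tally.get(out, 0.0) + w
            return max(tally, key=tally.get)

        def update_weights(outputs, weights, consensus, reward=1.1, penalty=0.5):
            """Boost cores that agree with the consensus, penalize those that do not."""
            return [w * (reward if out == consensus else penalty)
                    for out, w in zip(outputs, weights)]

        # Example: three implementations of the same IP core, the third one misbehaving.
        weights = [1.0, 1.0, 1.0]
        for outputs in [(5, 5, 9), (7, 7, 7), (3, 3, 8)]:   # outputs per test vector
            consensus = weighted_vote(outputs, weights)
            weights = update_weights(outputs, weights, consensus)
        print(weights)   # the disagreeing core ends up with a low trust weight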

  13. Effect of cholesterol and triglycerides levels on the rheological behavior of human blood

    NASA Astrophysics Data System (ADS)

    Moreno, Leonardo; Calderas, Fausto; Sanchez-Olivares, Guadalupe; Medina-Torres, Luis; Sanchez-Solis, Antonio; Manero, Octavio

    2015-02-01

    Obesity, diabetes, hyperlipidemia, and coronary diseases are common and important public health problems worldwide. These problems arise from numerous factors, such as hyper-caloric diets, sedentary habits, and other epigenetic factors. In Mexico, the population reference value of total cholesterol in plasma is around 200 mg/dL; however, a large proportion of the population has levels higher than this reference value. In this work, we analyze the rheological properties of human blood obtained from 20 donors, as a function of cholesterol and triglyceride levels, under a protocol previously approved by the health authorities. Samples with high and low cholesterol and triglyceride levels were selected and analyzed by simple-continuous and linear-oscillatory shear flow. Rheometric properties were measured and related to the structure and composition of human blood. In addition, rheometric data were modeled by using several constitutive equations: the Bautista-Manero-Puig (BMP) and multimodal Maxwell equations to predict the flow behavior of human blood. Finally, a comparison was made among various models, namely, the BMP, Carreau, and Quemada equations for simple shear flow. An important relationship was found between cholesterol, triglycerides, and the structure of human blood. Results show that blood with high cholesterol levels (400 mg/dL) has flow properties fully different (higher viscosity and a more pseudo-plastic behavior) from blood with lower levels of cholesterol (a tendency to Newtonian behavior or a viscosity plateau at low shear rates).
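
    For reference, the Carreau model mentioned above has the standard form quoted in the general rheology literature (parameters here are symbolic, not the values fitted in this study):

        \eta(\dot{\gamma}) = \eta_{\infty} + (\eta_{0} - \eta_{\infty}) \left[ 1 + (\lambda \dot{\gamma})^{2} \right]^{(n-1)/2},

    where \eta_{0} and \eta_{\infty} are the zero- and infinite-shear-rate viscosities, \lambda is a time constant, and n is the power-law index; shear-thinning blood corresponds to n < 1, while n → 1 recovers Newtonian behavior.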

  14. Rethinking Use of the OML Model in Electric Sail Development

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.

    2016-01-01

    In 1924, Irving Langmuir and H. M. Mott-Smith published a theoretical model for the complex plasma sheath phenomenon in which they identified some very special cases which greatly simplified the sheath and allowed a closed solution to the problem. The most widely used application is for an electrostatic, or "Langmuir," probe in laboratory plasma. Although the Langmuir probe is physically simple (a biased wire), the theory describing its functional behavior and its current-voltage characteristic is extremely complex and, accordingly, a number of assumptions and approximations are used in the LMS model. These simplifications, correspondingly, place limits on the model's range of application. Adapting the LMS model to real-life conditions is the subject of numerous papers and dissertations. The Orbit-Motion Limited (OML) model that is widely used today is one of these adaptations and provides a convenient means of calculating sheath effects. Since the Langmuir probe is a simple biased wire immersed in plasma, it is particularly tempting to use the OML equation in calculating the characteristics of the long, highly biased wires of an Electric Sail in the solar wind plasma. However, in order to arrive at the OML equation, a number of additional simplifying assumptions and approximations (beyond those made by Langmuir-Mott-Smith) are necessary. The OML equation is a good approximation when all conditions are met, but it would appear that the Electric Sail problem lies outside the limits of its applicability.
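
    For orientation, the commonly quoted OML expression for the attracted-species current collected by a long cylindrical probe, stated here as a standard textbook form and as an assumption about the model being discussed rather than a quotation from this paper, is

        I \approx I_{th}\, \frac{2}{\sqrt{\pi}} \sqrt{1 + \frac{e\Phi}{kT}}, \qquad I_{th} = n\, e\, A_{p} \sqrt{\frac{kT}{2\pi m}},

    where \Phi is the attracting probe bias, A_p the probe area, and n, T, and m the density, temperature, and mass of the collected species; the article's point is that the additional assumptions behind such a formula may not hold for the long, highly biased tethers of an Electric Sail in the solar wind.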

  15. Airtightness the simple(CS) way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, S.

    Builders who might buck against such time-consuming air sealing methods as polyethylene wrap and the airtight drywall approach (ADA) may respond better to current strategies. One such method, called SimpleCS, has proven especially effective. SimpleCS, pronounced simplex, stands for simple caulk and seal. A modification of the ADA, SimpleCS is an air-sealing management tool, a simplified systems approach to building tight homes. The system addresses the crucial question of when and by whom various air sealing steps should be done. It avoids the problems that often occur when later contractors cut open polyethylene wrap to drill holes in the drywall. The author describes how SimpleCS works, and the cost and training involved.

  16. Rediscovery in a Course for Nonscientists: Use of Molecular Models to Solve Classical Structural Problems

    ERIC Educational Resources Information Center

    Wood, Gordon W.

    1975-01-01

    Describes exercises using simple ball and stick models which students with no chemistry background can solve in the context of the original discovery. Examples include the tartaric acid and benzene problems. (GS)

  17. Stimulating Mathematical Reasoning with Simple Open-Ended Tasks

    ERIC Educational Resources Information Center

    West, John

    2018-01-01

    The importance of mathematical reasoning is unquestioned and providing opportunities for students to become involved in mathematical reasoning is paramount. The open-ended tasks presented incorporate mathematical content explored through the contexts of problem solving and reasoning. This article presents a number of simple tasks that may be…

  18. Eye Movements Reveal Students' Strategies in Simple Equation Solving

    ERIC Educational Resources Information Center

    Susac, Ana; Bubic, Andreja; Kaponja, Jurica; Planinic, Maja; Palmovic, Marijan

    2014-01-01

    Equation rearrangement is an important skill required for problem solving in mathematics and science. Eye movements of 40 university students were recorded while they were rearranging simple algebraic equations. The participants also reported on their strategies during equation solving in a separate questionnaire. The analysis of the behavioral…

  19. A Simple View of Linguistic Complexity

    ERIC Educational Resources Information Center

    Pallotti, Gabriele

    2015-01-01

    Although a growing number of second language acquisition (SLA) studies take linguistic complexity as a dependent variable, the term is still poorly defined and often used with different meanings, thus posing serious problems for research synthesis and knowledge accumulation. This article proposes a simple, coherent view of the construct, which is…

  20. The addition of E (Empowerment and Economics) to the ABCD algorithm in diabetes care.

    PubMed

    Khazrai, Yeganeh Manon; Buzzetti, Raffaella; Del Prato, Stefano; Cahn, Avivit; Raz, Itamar; Pozzilli, Paolo

    2015-01-01

    The ABCD (Age, Body weight, Complications, Duration of disease) algorithm was proposed as a simple and practical tool to manage patients with type 2 diabetes. Diabetes treatment, as for all chronic diseases, relies on patients' ability to cope with daily problems concerning the management of their disease in accordance with medical recommendations. Thus, it is important that patients learn to manage and cope with their disease and gain greater control over actions and decisions affecting their health. Healthcare professionals should aim to encourage and increase patients' perception about their ability to take informed decisions about disease management and to improve patient self-esteem and feeling of self-efficacy to become agents of their own health. E for Empowerment is therefore an additional factor to take into account in the management of patients with type 2 diabetes. E stands also for Economics to be considered in diabetes care. Attention should be paid to public health policies as well as to the physician faced with the dilemma of delivering the best possible care within the problem of limited resources. The financial impact of the new treatment modalities for diabetes represents an issue that needs to be addressed at multiple strata both globally and nationally. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Algebraic approach to characterizing paraxial optical systems.

    PubMed

    Wittig, K; Giesen, A; Hügel, H

    1994-06-20

    The paraxial propagation formalism for ABCD systems is reviewed and written in terms of quantum mechanics. This formalism shows that the propagation based on the Collins integral can be generalized so that, in addition, the problem of beam quality degradation that is due to aberrations can be treated in a natural way. Moreover, because this formalism is well elaborated and reduces the problem of propagation to simple algebraic calculations, it seems to be less complicated than other approaches. This can be demonstrated with an easy and unitary derivation of several results, which were obtained with different approaches, in each case matched to the specific problem. It is first shown how the canonical decomposition of arbitrary (also complex) ABCD matrices introduced by Siegman [Lasers, 2nd ed. (Oxford U. Press, London, 1986)] can be used to establish the group structure of geometric optics on the space of optical wave functions. This result is then used to derive the propagation law for arbitrary moments in general ABCD systems. Finally, a proper generalization to nonparaxial propagation operators that allows us to treat arbitrary aberration effects with respect to their influence on beam quality degradation is presented.
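
    As background, the paraxial ABCD formalism referred to above maps a ray's position and slope linearly, and the corresponding field propagation is the one-dimensional Collins (Huygens-Fresnel) integral. The forms below follow one common sign and normalization convention and are given only for orientation, not as the paper's own notation:

        \begin{pmatrix} x_{2} \\ \theta_{2} \end{pmatrix} =
        \begin{pmatrix} A & B \\ C & D \end{pmatrix}
        \begin{pmatrix} x_{1} \\ \theta_{1} \end{pmatrix},
        \qquad
        u_{2}(x_{2}) = \frac{1}{\sqrt{i \lambda B}} \int u_{1}(x_{1})
        \exp\!\left[ \frac{i\pi}{\lambda B} \left( A x_{1}^{2} - 2 x_{1} x_{2} + D x_{2}^{2} \right) \right] dx_{1}.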

  2. Intervention in Countries with Unsustainable Energy Policies: Is it Ever Justifiable?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonn, Bruce Edward

    This paper explores whether it is ever justifiable for the international community to forcibly intervene in countries that have unsustainable energy policies. The literature on obligations to future generations suggests, philosophically, that intervention might be justified under certain circumstances. Additionally, the world community has intervened in the affairs of other countries for humanitarian reasons, such as in Kosovo, Somalia, and Haiti. However, intervention to deal with serious energy problems is a qualitatively different and more difficult problem. A simple risk analysis framework is used to organize the discussion about possible conditions for justifiable intervention. If the probability of deaths resulting from unsustainable energy policies is very large, if the energy problem can be attributed to a relatively small number of countries, and if the risk of intervention is acceptable (i.e., the number of deaths due to intervention is relatively small), then intervention may be justifiable. Without further analysis and successful solution of several vexing theoretical questions, it cannot be stated whether unsustainable energy policies being pursued by countries at the beginning of the 21st century meet the criteria for forcible intervention by the international community.

  3. 5 CFR 1315.17 - Formulas.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Daily simple interest formula. (1) To calculate daily simple interest the following formula may be used... a payment is due on April 1 and the payment is not made until April 11, a simple interest... equation calculates simple interest on any additional days beyond a monthly increment. (3) For example, if...
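
    As an illustration only (the regulation specifies its own formula and day-count convention, which are elided in the snippet above), daily simple interest on a late payment is computed along these lines; the 365-day year and the figures below are assumptions for the example, not a statement of the rule:

        def daily_simple_interest(principal, annual_rate, days_late, days_in_year=365):
            """Simple (non-compounding) interest accrued over the late period."""
            return principal * annual_rate * days_late / days_in_year

        # e.g., a $10,000 payment due April 1 but made April 11, at an assumed 4% annual rate
        print(round(daily_simple_interest(10_000, 0.04, 10), 2))   # about 10.96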

  4. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    DTIC Science & Technology

    2002-03-07

    like a low energy search landscape. 2.1.1 Symbolic/Formalized Problem Domain Description. Every computer representable problem can also be embodied... method [60]. 3.4 Energy Minimization Methods. The energy landscape algorithms are based on the idea that a protein's final resting conformation is... in our GA used to search the PSP problem energy landscape). 3.5.1 Simple GA. The main routine in a sGA, after encoding the problem, builds a…

  5. Born-Oppenheimer approximation for a singular system

    NASA Astrophysics Data System (ADS)

    Akbas, Haci; Turgut, O. Teoman

    2018-01-01

    We discuss a simple singular system in one dimension, two heavy particles interacting with a light particle via an attractive contact interaction and not interacting among themselves. It is natural to apply the Born-Oppenheimer approximation to this problem. We present a detailed discussion of this approach; the advantage of this simple model is that one can estimate the error terms self-consistently. Moreover, a Fock space approach to this problem is presented where an expansion can be proposed to get higher order corrections. A slight modification of the same problem in which the light particle is relativistic is discussed in a later section by neglecting pair creation processes. Here, the second quantized description is more challenging, but with some care, one can recover the first order expression exactly.

  6. A new formulation for anisotropic radiative transfer problems. I - Solution with a variational technique

    NASA Technical Reports Server (NTRS)

    Cheyney, H., III; Arking, A.

    1976-01-01

    The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.

  7. Problem Solvers: Problem--Light It up! and Solutions--Flags by the Numbers

    ERIC Educational Resources Information Center

    Hall, Shaun

    2009-01-01

    A simple circuit is created by the continuous flow of electricity through conductors (copper wires) from a source of electrical energy (batteries). "Completing a circuit" means that electricity flows from the energy source through the circuit and, in the case described in this month's problem, causes the light bulb to light up. The presence of…

  8. Solving L-L Extraction Problems with Excel Spreadsheet

    ERIC Educational Resources Information Center

    Teppaitoon, Wittaya

    2016-01-01

    This work aims to demonstrate the use of Excel spreadsheets for solving L-L extraction problems. The key to solving the problems successfully is to be able to determine a tie line on the ternary diagram where the calculation must be carried out. This enables the reader to analyze the extraction process starting with a simple operation, the…

  9. The Potential of Automated Corrective Feedback to Remediate Cohesion Problems in Advanced Students' Writing

    ERIC Educational Resources Information Center

    Strobl, Carola

    2017-01-01

    This study explores the potential of a feedback environment using simple string-based pattern matching technology for the provision of automated corrective feedback on cohesion problems. Thirty-eight high-frequency problems, including non-target-like use of connectives and co-references, were addressed, providing both direct and indirect feedback.…

  10. Research Reporting Sections, Annual Meeting of the National Council of Teachers of Mathematics (57th, Boston, Massachusetts, April 18-21, 1979).

    ERIC Educational Resources Information Center

    Higgins, Jon L., Ed.

    This document provides abstracts of 20 research reports. Topics covered include: children's comprehension of simple story problems; field independence and group instruction; problem-solving competence and memory; spatial visualization and the use of manipulative materials; effects of games on mathematical skills; problem-solving ability and right…

  11. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
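
    A compact sketch of the classic DE/rand/1/bin scheme underlying the single-objective algorithm described above is given below; it is a generic illustration, and the Pareto-based multiobjective extension (non-dominated selection among trial and parent solutions) is omitted.

        import numpy as np

        def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
            """Minimize f over a box using DE/rand/1 mutation and binomial crossover."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            dim = len(lo)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([f(x) for x in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    idx = [j for j in range(pop_size) if j != i]
                    a, b, c = pop[rng.choice(idx, 3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 mutation
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True                # guarantee at least one mutated gene
                    trial = np.where(cross, mutant, pop[i])
                    f_trial = f(trial)
                    if f_trial <= fit[i]:                          # greedy one-to-one replacement
                        pop[i], fit[i] = trial, f_trial
            best = int(np.argmin(fit))
            return pop[best], fit[best]

        # Example: minimize the sphere function in five dimensions
        x_best, f_best = differential_evolution(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)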

  12. Reflection on Solutions in the Form of Refutation Texts versus Problem Solving: The Case of 8th Graders Studying Simple Electric Circuits

    ERIC Educational Resources Information Center

    Safadi, Rafi; Safadi, Ekhlass; Meidav, Meir

    2017-01-01

    This study compared students' learning in troubleshooting and problem solving activities. The troubleshooting activities provided students with solutions to conceptual problems in the form of refutation texts; namely, solutions that portray common misconceptions, refute them, and then present the accepted scientific ideas. They required students…

  13. The black flies of Maine

    Treesearch

    L.S. Bauer; J. Granett

    1979-01-01

    Black flies have been long-time residents of Maine and cause extensive nuisance problems for people, domestic animals, and wildlife. The black fly problem has no simple solution because of the multitude of species present, the diverse and ecologically sensitive habitats in which they are found, and the problems inherent in measuring the extent of the damage they cause...

  14. Cone-Deciphered Modes of Problem Solving Action (MPSA Cone): Alternative Perspectives on Diversified Professions.

    ERIC Educational Resources Information Center

    Lai, Su-Huei

    A conceptual framework of the modes of problem-solving action has been developed on the basis of a simple relationship cone to assist individuals in diversified professions in inquiry and implementation of theory and practice in their professional development. The conceptual framework is referred to as the Cone-Deciphered Modes of Problem Solving…

  15. Developing Physics Concepts through Hands-On Problem Solving: A Perspective on a Technological Project Design

    ERIC Educational Resources Information Center

    Hong, Jon-Chao; Chen, Mei-Yung; Wong, Ashley; Hsu, Tsui-Fang; Peng, Chih-Chi

    2012-01-01

    In a contest featuring hands-on projects, college students were required to design a simple crawling worm using planning, self-monitoring and self-evaluation processes to solve contradictive problems. To enhance the efficiency of problem solving, one needs to practice meta-cognition based on an application of related scientific concepts. The…

  16. Specific arithmetic calculation deficits in children with Turner syndrome.

    PubMed

    Rovet, J; Szekely, C; Hockenberry, M N

    1994-12-01

    Study 1 compared arithmetic processing skills on the WRAT-R in 45 girls with Turner syndrome (TS) and 92 age-matched female controls. Results revealed significant underachievement by subjects with TS, which reflected their poorer performance on problems requiring the retrieval of addition and multiplication facts and procedural knowledge for addition and division operations. TS subjects did not differ qualitatively from controls in type of procedural error committed. Study 2, which compared the performance of 10 subjects with TS and 31 controls on the Keymath Diagnostic Arithmetic Test, showed that the TS group had less adequate knowledge of arithmetic, subtraction, and multiplication procedures but did not differ from controls on Fact items. Error analyses revealed that TS subjects were more likely to confuse component steps or fail to separate intermediate steps or to complete problems. TS subjects relied to a greater degree on verbal than visual-spatial abilities in arithmetic processing while their visual-spatial abilities were associated with retrieval of simple multidigit addition facts and knowledge of subtraction, multiplication, and division procedures. Differences between the TS and control groups increased with age for Keymath, but not WRAT-R, procedures. Discrepant findings are related to the different task constraints (timed vs. untimed, single vs. alternate versions, size of item pool) and the use of different strategies (counting vs. fact retrieval). It is concluded that arithmetic difficulties in females with TS are due to less adequate procedural skills, combined with poorer fact retrieval in timed testing situations, rather than to inadequate visual-spatial abilities.

  17. PHYSICS REQUIRES A SIMPLE LOW MACH NUMBER FLOW TO BE COMPRESSIBLE

    EPA Science Inventory

    Radial, laminar, plane, low velocity flow represents the simplest, non-linear fluid dynamics problem. Ostensibly this apparently trivial flow could be solved using the incompressible Navier-Stokes equations, universally believed to be adequate for such problems. Most researchers ...

  18. Encapsulation of enzyme via one-step template-free formation of stable organic-inorganic capsules: A simple and efficient method for immobilizing enzyme with high activity and recyclability.

    PubMed

    Huang, Renliang; Wu, Mengyun; Goldman, Mark J; Li, Zhi

    2015-06-01

    Enzyme encapsulation is a simple, gentle, and general method for immobilizing enzymes, but it often suffers from one or more problems regarding enzyme loading efficiency, enzyme leakage, mechanical stability, and recyclability. Here we report a novel, simple, and efficient method for enzyme encapsulation to overcome these problems by forming stable organic-inorganic hybrid capsules. A new, facile, one-step, and template-free synthesis of organic-inorganic capsules in the aqueous phase was developed based on PEI-induced simultaneous interfacial self-assembly of Fmoc-FF and polycondensation of silicate. Addition of an aqueous solution of Fmoc-FF and sodium silicate into an aqueous solution of PEI gave a new class of organic-inorganic hybrid capsules (FPSi) with a multi-layered structure in high yield. The capsules are mechanically stable due to the incorporation of inorganic silica. Direct encapsulation of enzymes such as epoxide hydrolase SpEH and BSA along with the formation of the organic-inorganic capsules gave a high yield of enzyme-containing capsules (∼1.2 mm in diameter), >90% enzyme loading efficiency, high specific enzyme loading (158 mg protein g(-1) carrier), and low enzyme leakage (<3% after 48 h incubation). FPSi-SpEH capsules catalyzed the hydrolysis of cyclohexene oxide to give (1R, 2R)-cyclohexane-1,2-diol in high yield and concentration, with high specific activity (6.94 U mg(-1) protein) and the same high enantioselectivity as the free enzyme. The immobilized SpEH also demonstrated excellent operational stability and recyclability, retaining 87% productivity after 20 cycles with a total reaction time of 80 h. The new enzyme encapsulation method is efficient, practical, and also better than other reported encapsulation methods. © 2015 Wiley Periodicals, Inc.

  19. A novel approach to sports concussion assessment: Computerized multilimb reaction times and balance control testing.

    PubMed

    Vartiainen, Matti V; Holm, Anu; Lukander, Jani; Lukander, Kristian; Koskinen, Sanna; Bornstein, Robert; Hokkanen, Laura

    2016-01-01

    Mild traumatic brain injuries (MTBI) or concussions often result in problems with attention, executive functions, and motor control. For better identification of these diverse problems, novel approaches integrating tests of cognitive and motor functioning are needed. The aim was to characterize minor changes in motor and cognitive performance after sports-related concussions with a novel test battery, including balance tests and a computerized multilimb reaction time test. The cognitive demands of the battery gradually increase from a simple stimulus response to a complex task requiring executive attention. A total of 113 male ice hockey players (mean age = 24.6 years, SD = 5.7) were assessed before a season. During the season, nine concussed players were retested within 36 hours, four to six days after the concussion, and after the season. A control group of seven nonconcussed players from the same pool of players with comparable demographics were retested after the season. Performance was measured using a balance test and the Motor Cognitive Test battery (MotCoTe) with multilimb responses in simple reaction, choice reaction, inhibition, and conflict resolution conditions. The performance of the concussed group declined at the postconcussion assessment compared to both the baseline measurement and the nonconcussed controls. Significant changes were observed in the concussed group for the multilimb choice reaction and inhibition tests. Tapping and balance showed a similar trend, but no statistically significant difference in performance. In sports-related concussions, complex motor tests can be valuable additions in assessing the outcome and recovery. In the current study, using subtasks with varying cognitive demands, it was shown that while simple motor performance was largely unaffected, the more complex tasks induced impaired reaction times for the concussed subjects. The increased reaction times may reflect the disruption of complex and integrative cognitive function in concussions.

  20. Score Big! Pinball Project Teaches Simple Machine Basics

    ERIC Educational Resources Information Center

    Freeman, Matthew K.

    2009-01-01

    This article presents a design brief for a pinball game. The design brief helps students get a better grasp on the operation and uses of simple machines. It also gives them an opportunity to develop their problem-solving skills and use design skills to complete an interesting, fun product. (Contains 2 tables and 3 photos.)

  1. Population and Pollution in the United States

    ERIC Educational Resources Information Center

    Ridker, Ronald G.

    1972-01-01

    Analyzes a simple model relating environmental pollution to population and per capita income and concludes that no single cause is sufficient to explain.... environmental problems, and that there is little about the pollution problems.... of the next 50 years that is inevitable." (Author/AL)

  2. What Are the Signs of Alzheimer's Disease? | NIH MedlinePlus the Magazine

    MedlinePlus

    ... in behavior and personality; conduct tests of memory, problem solving, attention, counting, and language; carry out standard medical ... over and over; having trouble paying bills or solving simple math problems; getting lost; losing things or putting them in ...

  3. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as a multi-objective fuzzy linear programming problem. In this paper we give the solution to such problems with an illustrative example.
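
    For concreteness, the "simple form" mentioned above is the standard mean-variance quadratic program (a generic textbook formulation, not the specific fuzzy model of the paper):

        \min_{w} \; w^{\top} \Sigma\, w \quad \text{s.t.} \quad \mu^{\top} w \ge r_{\min}, \quad \mathbf{1}^{\top} w = 1, \quad w \ge 0,

    where w holds the asset weights, \Sigma is the covariance matrix of returns, \mu the vector of expected returns, and r_{\min} the required return; the fuzzy version replaces crisp parameters such as \mu and r_{\min} with fuzzy numbers and solves the resulting multi-objective fuzzy linear program.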

  4. Resource-Competing Oscillator Network as a Model of Amoeba-Based Neurocomputer

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki

    An amoeboid organism, Physarum, exhibits rich spatiotemporal oscillatory behavior and various computational capabilities. Previously, the authors created a recurrent neurocomputer incorporating the amoeba as a computing substrate to solve optimization problems. In this paper, considering the amoeba to be a network of oscillators coupled such that they compete for constant amounts of resources, we present a model of the amoeba-based neurocomputer. The model generates a number of oscillation modes and produces not only simple behavior to stabilize a single mode but also complex behavior to spontaneously switch among different modes, which reproduces well the experimentally observed behavior of the amoeba. To explore the significance of the complex behavior, we set a test problem used to compare the computational performance of the oscillation modes. The problem is a kind of optimization problem of how to allocate a limited amount of resources to oscillators such that conflicts among them can be minimized. We show that the complex behavior enables the network to attain a wider variety of solutions to the problem and produces better performance than the simple behavior.

  5. Equilibria of perceptrons for simple contingency problems.

    PubMed

    Dawson, Michael R W; Dupuis, Brian

    2012-08-01

    The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
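
    For reference, the Rescorla-Wagner rule discussed above updates the associative strength V_i of each cue present on a trial as (the standard form from the associative-learning literature):

        \Delta V_{i} = \alpha_{i}\, \beta \left( \lambda - \sum_{j \in \text{present}} V_{j} \right),

    where \alpha_i and \beta are cue and outcome salience parameters and \lambda is the asymptotic associative strength supported by the outcome; its formal resemblance to the delta rule for a single-layer network is exactly the equivalence that the article argues is less straightforward than usually assumed.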

  6. Friction Stir Additive Manufacturing: Route to High Structural Performance

    NASA Astrophysics Data System (ADS)

    Palanivel, S.; Sidhar, H.; Mishra, R. S.

    2015-03-01

    Aerospace and automotive industries provide the next big opportunities for additive manufacturing. Currently, the additive industry is confronted with four major challenges that have been identified in this article. These challenges need to be addressed for the additive technologies to march into new frontiers and create additional markets. Specific potential success in the transportation sectors is dependent on the ability to manufacture complicated structures with high performance. Most of the techniques used for metal-based additive manufacturing are fusion based because of their ability to fulfill the computer-aided design to component vision. Although these techniques aid in fabrication of complex shapes, achieving high structural performance is a key problem due to the liquid-solid phase transformation. In this article, friction stir additive manufacturing (FSAM) is shown as a potential solid-state process for attaining high-performance lightweight alloys for simpler geometrical applications. To illustrate FSAM as a high-performance route, manufactured builds of Mg-4Y-3Nd and AA5083 are shown as examples. In the Mg-based alloy, an average hardness of 120 HV was achieved in the built structure and was significantly higher than that of the base material (97 HV). Similarly for the Al-based alloy, compared with the base hardness of 88 HV, the average built hardness was 104 HV. A potential application of FSAM is illustrated by taking an example of a simple stiffener assembly.

  7. Understanding synergy.

    PubMed

    Geary, Nori

    2013-02-01

    Analysis of the interactive effects of combinations of hormones or other manipulations with qualitatively similar individual effects is an important topic in basic and clinical endocrinology as well as other branches of basic and clinical research related to integrative physiology. Functional, as opposed to mechanistic, analyses of interactions rely on the concept of synergy, which can be defined qualitatively as a cooperative action or quantitatively as a supra-additive effect according to some metric for the addition of different dose-effect curves. Unfortunately, dose-effect curve addition is far from straightforward; rather, it requires the development of an axiomatic mathematical theory. I review the mathematical soundness, face validity, and utility of the most frequently used approaches to supra-additive synergy. These criteria highlight serious problems in the two most common synergy approaches, response additivity and Loewe additivity, which is the basis of the isobole and related response surface approaches. I conclude that there is no adequate, generally applicable, supra-additive synergy metric appropriate for endocrinology or any other field of basic and clinical integrative physiology. I recommend that these metrics be abandoned in favor of the simpler definition of synergy as a cooperative, i.e., nonantagonistic, effect. This simple definition avoids mathematical difficulties, is easily applicable, meets regulatory requirements for combination therapy development, and suffices to advance phenomenological basic research to mechanistic studies of interactions and clinical combination therapy research.
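
    For orientation, Loewe additivity (the basis of the isobole approach discussed above) scores a combination of doses d_a and d_b through the interaction index (the standard definition; particular metrics in the literature differ in detail):

        \frac{d_{a}}{D_{a}} + \frac{d_{b}}{D_{b}} \; = 1 \;\;\text{(additive)}, \quad < 1 \;\;\text{(supra-additive, synergy)}, \quad > 1 \;\;\text{(sub-additive, antagonism)},

    where D_a and D_b are the doses of each agent alone that produce the same effect as the combination.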

  8. A New Look at Two Old Problems in Electrostatics, or Much Ado with Hemispheres

    ERIC Educational Resources Information Center

    DasGupta, Ananda

    2007-01-01

    In this paper, we take a look at two electrostatics problems concerning hemispheres. The first problem concerns the direction of the electric field on the flat cap of a uniformly charged hemisphere. We show that the symmetry and principle of superposition coupled with Gauss's law gives a delightfully simple solution and then go on to examine how…

  9. "Cast Your Net Widely": Three Steps to Expanding and Refining Your Problem before Action Learning Application

    ERIC Educational Resources Information Center

    Reese, Simon R.

    2015-01-01

    This paper reflects upon a three-step process to expand the problem definition in the early stages of an action learning project. The process created a community-powered problem-solving approach within the action learning context. The simple three steps expanded upon in the paper create independence, dependence, and inter-dependence to aid the…

  10. The King and Prisoner Puzzle: A Way of Introducing the Components of Logical Structures

    ERIC Educational Resources Information Center

    Roh, Kyeong Hah; Lee, Yong Hah; Tanner, Austin

    2016-01-01

    The purpose of this paper is to provide issues related to student understanding of logical components that arise when solving word problems. We designed a logic problem called the King and Prisoner Puzzle--a linguistically simple, yet logically challenging problem. In this paper, we describe various student solutions to the puzzle and discuss the…

  11. On some variational acceleration techniques and related methods for local refinement

    NASA Astrophysics Data System (ADS)

    Teigland, Rune

    1998-10-01

    This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966) and later generalized to multilevels (known as the additive correction multigrid method (B.R Huthchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986))) is similar to the FAC method of McCormick and Thomas (S.F McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement and suggestions for improving the performance of the method are given.

  12. Theory of Financial Risk and Derivative Pricing

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2009-01-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  13. Theory of Financial Risk and Derivative Pricing - 2nd Edition

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2003-12-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  14. Neuritogenesis: A model for space radiation effects on the central nervous system

    NASA Technical Reports Server (NTRS)

    Vazquez, M. E.; Broglio, T. M.; Worgul, B. V.; Benton, E. V.

    1994-01-01

    Pivotal to the astronauts' functional integrity and survival during long space flights are the strategies to deal with space radiations. The majority of the cellular studies in this area emphasize simple endpoints such as growth related events which, although useful to understand the nature of primary cell injury, have poor predictive value for extrapolation to more complex tissues such as the central nervous system (CNS). In order to assess the radiation damage on neural cell populations, we developed an in vitro model in which neuronal differentiation, neurite extension, and synaptogenesis occur under controlled conditions. The model exploits chick embryo neural explants to study the effects of radiations on neuritogenesis. In addition, neurobiological problems associated with long-term space flights are discussed.

  15. The viscosity of magmatic silicate liquids: A model for calculation

    NASA Technical Reports Server (NTRS)

    Bottinga, Y.; Weill, D. F.

    1971-01-01

    A simple model has been designed to allow reasonably accurate calculations of viscosity as a function of temperature and composition. The problem of predicting viscosities of anhydrous silicate liquids has been investigated since such viscosity numbers are applicable to many extrusive melts and to nearly dry magmatic liquids in general. The fluidizing action of water dissolved in silicate melts is well recognized and it is now possible to predict the effect of water content on viscosity in a semiquantitative way. Water was not incorporated directly into the model. Viscosities of anhydrous compositions were calculated, and, where necessary, the effect of added water was estimated. The model can be easily modified to incorporate the effect of water whenever sufficient additional data are accumulated.

  16. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, George

    1993-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resource.

  17. Betaine Improves Polymer-Grade D-Lactic Acid Production by Sporolactobacillus inulinus Using Ammonia as Green Neutralizer.

    PubMed

    Lv, Guoping; Che, Chengchuan; Li, Li; Xu, Shujing; Guan, Wanyi; Zhao, Baohua; Ju, Jiansong

    2017-07-06

    The traditional CaCO3-based fermentation process generates a huge amount of insoluble CaSO4 waste. To solve this problem, we have developed an efficient and green D-lactic acid fermentation process using ammonia as the neutralizer. A D-lactic acid titer of 106.7 g/L, with a yield of 0.89 g per g of consumed sugar and a high optical purity of 99.7%, was obtained with Sporolactobacillus inulinus CASD by adding 100 mg/L betaine in a simple batch fermentation process. The addition of betaine was experimentally proven to protect cells at high ammonium ion concentrations, increase the D-lactate dehydrogenase specific activity, and thus promote the production of D-lactic acid.

  18. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, Stanislav

    1992-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multi parameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.

  19. Dirac delta representation by exact parametric equations. Application to impulsive vibration systems

    NASA Astrophysics Data System (ADS)

    Chicurel-Uziel, Enrique

    2007-08-01

    A pair of closed parametric equations is proposed to represent the Heaviside unit step function. Differentiating the step equations yields two additional parametric equations, also proposed here, that represent the Dirac delta function. These equations are expressed in algebraic terms and are handled by means of elementary algebra and elementary calculus. The proposed delta representation complies exactly with the values of the definition. It also complies with the sifting property and the requisite unit area, and its Laplace transform coincides with the most general form given in the tables. Furthermore, it leads to a very simple method of solution for impulsive vibrating systems, either linear or belonging to a large class of nonlinear problems. Two example solutions are presented.
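    The proposed parametric equations themselves are not reproduced in the abstract; for reference, the standard properties the representation is said to satisfy are:

```latex
% Standard properties cited in the abstract (not the paper's parametric form itself):
\int_{-\infty}^{\infty} \delta(t)\, dt = 1,
\qquad
\int_{-\infty}^{\infty} f(t)\,\delta(t - a)\, dt = f(a)
\quad \text{(sifting property)},
\qquad
\mathcal{L}\{\delta(t - a)\} = e^{-as}, \quad a \ge 0 .
```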

  20. DShaper: An approach for handling missing low-Q data in pair distribution function analysis of nanostructured systems

    DOE PAGES

    Olds, Daniel; Wang, Hsiu -Wen; Page, Katharine L.

    2015-09-04

    In this work we discuss the potential problems and currently available solutions in modeling powder-diffraction based pair-distribution function (PDF) data from systems where morphological feature information content includes distances in the nanometer length scale, such as finite nanoparticles, nanoporous networks, and nanoscale precipitates in bulk materials. The implications of an experimental finite minimum Q-value are addressed by simulation, which also demonstrates the advantages of combining PDF data with small angle scattering data (SAS). In addition, we introduce a simple Fortran90 code, DShaper, which may be incorporated into PDF data fitting routines in order to approximate the so-called shape-function for any atomistic model.
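    DShaper itself is not reproduced here; as a minimal sketch of the kind of shape function involved, the well-known characteristic function of a solid sphere, which envelopes the PDF of a finite spherical particle, can be written as:

```python
import numpy as np

def sphere_shape_function(r, diameter):
    """Characteristic (shape) function gamma0(r) of a solid sphere of the given
    diameter: the PDF of a finite spherical particle is the bulk PDF multiplied
    by this envelope. Returns 0 for r >= diameter."""
    x = np.asarray(r, dtype=float) / diameter
    gamma0 = 1.0 - 1.5 * x + 0.5 * x**3
    return np.where(x < 1.0, gamma0, 0.0)

r = np.linspace(0.0, 60.0, 7)          # distances in angstroms
print(sphere_shape_function(r, 50.0))  # envelope for a 50-angstrom particle
```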

  1. Detection of osmotic damages in GRP boat hulls

    NASA Astrophysics Data System (ADS)

    Krstulović-Opara, L.; Domazet, Ž.; Garafulić, E.

    2013-09-01

    Infrared thermography, as a non-destructive testing tool, is a method enabling visualization and estimation of structural anomalies and differences in a structure's topography. In the presented paper, the problem of osmotic damage in submerged glass reinforced polymer structures is addressed. Osmotic damage can be detected by simple humidity gauging, but suitable testing methods for proper evaluation and estimation are restricted and hardly applicable. In this paper it is demonstrated that infrared thermography, based on estimation of heat wave propagation, can be used. Three methods are addressed: pulsed thermography, the Fast Fourier Transform, and the continuous Morlet wavelet. Additional image processing based on a gradient approach is applied to all of the addressed methods. It is shown that the continuous Morlet wavelet is the most appropriate method for detection of osmotic damage.
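    A minimal sketch of the FFT-based step mentioned (pulsed phase thermography builds a per-pixel phase image from the recorded thermal sequence); the array names and sizes are illustrative, not the authors' data:

```python
import numpy as np

# Synthetic stack of thermal images: (n_frames, height, width).
# In practice this would be the recorded cooling sequence after the heat pulse.
rng = np.random.default_rng(1)
frames = rng.normal(size=(128, 64, 64)) + np.linspace(1.0, 0.0, 128)[:, None, None]

# Per-pixel FFT along time; the phase of a low-frequency bin is the "phasegram"
# commonly inspected for subsurface defects such as osmotic blisters.
spectrum = np.fft.rfft(frames, axis=0)
phasegram = np.angle(spectrum[1])      # phase image at the first nonzero frequency

print(phasegram.shape)                 # (64, 64)
```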

  2. Addressing the unmet need for visualizing conditional random fields in biological data

    PubMed Central

    2014-01-01

    Background: The biological world is replete with phenomena that appear to be ideally modeled and analyzed by one archetypal statistical framework - the Graphical Probabilistic Model (GPM). The structure of GPMs is a uniquely good match for biological problems that range from aligning sequences to modeling the genome-to-phenome relationship. The fundamental questions that GPMs address involve making decisions based on a complex web of interacting factors. Unfortunately, while GPMs ideally fit many questions in biology, they are not an easy solution to apply. Building a GPM is not a simple task for an end user. Moreover, applying GPMs is also impeded by the insidious fact that the “complex web of interacting factors” inherent to a problem might be easy to define and also intractable to compute upon. Discussion: We propose that the visualization sciences can contribute to many domains of the bio-sciences, by developing tools to address archetypal representation and user interaction issues in GPMs, and in particular a variety of GPM called a Conditional Random Field (CRF). CRFs bring additional power, and additional complexity, because the CRF dependency network can be conditioned on the query data. Conclusions: In this manuscript we examine the shared features of several biological problems that are amenable to modeling with CRFs, highlight the challenges that existing visualization and visual analytics paradigms induce for these data, and document an experimental solution called StickWRLD which, while leaving room for improvement, has been successfully applied in several biological research projects. Software and tutorials are available at http://www.stickwrld.org/ PMID:25000815

  3. Subjective complaints after acquired brain injury: presentation of the Brain Injury Complaint Questionnaire (BICoQ).

    PubMed

    Vallat-Azouvi, Claire; Paillat, Cyrille; Bercovici, Stéphanie; Morin, Bénédicte; Paquereau, Julie; Charanton, James; Ghout, Idir; Azouvi, Philippe

    2018-04-01

    The objective of the present study was to present a new complaint questionnaire designed to assess a wide range of difficulties commonly reported by patients with acquired brain injury. Patients (n = 619) had been referred to a community re-entry service at a chronic stage after brain injury, mainly traumatic brain injury (TBI). The Brain Injury Complaint Questionnaire (BICoQ) includes 25 questions in the following domains: cognition, behavior, fatigue and sleep, mood, and somatic problems. A self and a proxy questionnaire were given. An additional question was given to the relative about the patient's awareness of his or her difficulties. The questionnaires had good internal consistency, as measured with Cronbach's alpha. The most frequent complaints were, in decreasing order, mental slowness, memory troubles, fatigue, concentration difficulties, anxiety, and dual tasking problems. Principal component analysis with varimax rotation yielded six underlying factors explaining 50.5% of total variance: somatic concerns, cognition and lack of drive, lack of control, psycholinguistic disorders, mood, and mental fatigue/slowness. About 52% of patients reported fewer complaints than their proxy, suggesting lack of awareness. The total complaint scores were not significantly correlated with any injury severity measure, but were significantly correlated with disability and poorer quality of life (note: only factor 2 [cognition/lack of drive] was significantly related to disability). The BICoQ is a simple scale that can be used in addition to traditional clinical and cognitive assessment measures, and to assess awareness of everyday life problems. © 2017 Wiley Periodicals, Inc.
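    For reference, the internal-consistency statistic cited (Cronbach's alpha) can be computed from item-level scores as sketched below; the responses are synthetic, not the BICoQ sample:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
responses = latent + rng.normal(scale=0.8, size=(200, 25))  # 25 items, as in the BICoQ
print(round(cronbach_alpha(responses), 2))
```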

  4. GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL

    EPA Science Inventory

    Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial a...

  5. Computational-hydrodynamic studies of the Noh compressible flow problem using non-ideal equations of state

    NASA Astrophysics Data System (ADS)

    Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott

    2017-06-01

    The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly studied polytropic ideal gas to more realistic equations of state (EOS), including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared to the ideal EOS. The overall spatial convergence rate remained first order.
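    The closed-form solutions themselves are not reproduced in the abstract; in their commonly quoted pressure-energy forms, the EOS families named can be sketched as follows (parameter values are illustrative, not those of the study):

```python
# Commonly quoted pressure-energy forms of the EOS families named in the abstract
# (parameter values below are illustrative, not those used in the study).

def p_ideal(rho, e, gamma=5.0 / 3.0):
    """Polytropic ideal gas: p = (gamma - 1) * rho * e."""
    return (gamma - 1.0) * rho * e

def p_stiff(rho, e, gamma=5.0 / 3.0, p_inf=1.0):
    """Stiff (stiffened) gas: p = (gamma - 1) * rho * e - gamma * p_inf."""
    return (gamma - 1.0) * rho * e - gamma * p_inf

def p_noble_abel(rho, e, gamma=5.0 / 3.0, b=0.1):
    """Noble-Abel gas with covolume b: p = (gamma - 1) * rho * e / (1 - b * rho)."""
    return (gamma - 1.0) * rho * e / (1.0 - b * rho)

rho, e = 1.0, 1.5
print(p_ideal(rho, e), p_stiff(rho, e), p_noble_abel(rho, e))
```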

  6. Diffusion of a new intermediate product in a simple 'classical-Schumpeterian' model.

    PubMed

    Haas, David

    2018-05-01

    This paper deals with the problem of new intermediate products within a simple model, where production is circular and goods enter into the production of other goods. It studies the process by which the new good is absorbed into the economy and the structural transformation that goes with it. By means of a long-period method the forces of structural transformation are examined, in particular the shift of existing means of production towards the innovation and the mechanism of differential growth in terms of alternative techniques and their associated systems of production. We treat two important Schumpeterian topics: the question of technological unemployment and the problem of 'forced saving' and the related problem of an involuntary reduction of real consumption per capita. It is shown that both phenomena are potential by-products of the transformation process.

  7. Complexity and compositionality in fluid intelligence.

    PubMed

    Duncan, John; Chylinski, Daphne; Mitchell, Daniel J; Bhandari, Apoorva

    2017-05-16

    Compositionality, or the ability to build complex cognitive structures from simple parts, is fundamental to the power of the human mind. Here we relate this principle to the psychometric concept of fluid intelligence, traditionally measured with tests of complex reasoning. Following the principle of compositionality, we propose that the critical function in fluid intelligence is splitting a complex whole into simple, separately attended parts. To test this proposal, we modify traditional matrix reasoning problems to minimize requirements on information integration, working memory, and processing speed, creating problems that are trivial once effectively divided into parts. Performance remains poor in participants with low fluid intelligence, but is radically improved by problem layout that aids cognitive segmentation. In line with the principle of compositionality, we suggest that effective cognitive segmentation is important in all organized behavior, explaining the broad role of fluid intelligence in successful cognition.

  8. Mixed Beam Murine Harderian Gland Tumorigenesis: Predicted Dose-Effect Relationships if neither Synergism nor Antagonism Occurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranart, Nopphon; Blakely, Eleanor A.; Cheng, Alden

    Complex mixed radiation fields exist in interplanetary space, and not much is known about their latent effects on space travelers. In silico synergy analysis default predictions are useful when planning relevant mixed-ion-beam experiments and interpreting their results. These predictions are based on individual dose-effect relationships (IDER) for each component of the mixed-ion beam, assuming no synergy or antagonism. For example, a default hypothesis of simple effect additivity has often been used throughout the study of biology. However, for more than a century pharmacologists interested in mixtures of therapeutic drugs have analyzed conceptual, mathematical and practical questions similar to those that arise when analyzing mixed radiation fields, and have shown that simple effect additivity often gives unreasonable predictions when the IDER are curvilinear. Various alternatives to simple effect additivity proposed in radiobiology, pharmacometrics, toxicology and other fields are also known to have important limitations. In this work, we analyze upcoming murine Harderian gland (HG) tumor prevalence mixed-beam experiments, using customized open-source software and published IDER from past single-ion experiments. The upcoming experiments will use acute irradiation and the mixed beam will include components of high atomic number and energy (HZE). We introduce a new alternative to simple effect additivity, "incremental effect additivity", which is more suitable for the HG analysis and perhaps for other end points. We use incremental effect additivity to calculate default predictions for mixture dose-effect relationships, including 95% confidence intervals. We have drawn three main conclusions from this work. 1. It is important to supplement mixed-beam experiments with single-ion experiments, with matching end point(s), shielding and dose timing. 2. For HG tumorigenesis due to a mixed beam, simple effect additivity and incremental effect additivity sometimes give default predictions that are numerically close. However, if nontargeted effects are important and the mixed beam includes a number of different HZE components, simple effect additivity becomes unusable and another method is needed such as incremental effect additivity. 3. Eventually, synergy analysis default predictions of the effects of mixed radiation fields will be replaced by more mechanistic, biophysically-based predictions. However, optimizing synergy analyses is an important first step. If mixed-beam experiments indicate little synergy or antagonism, plans by NASA for further experiments and possible missions beyond low earth orbit will be substantially simplified.
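    As a minimal illustration of the default being compared (not the paper's incremental effect additivity machinery), simple effect additivity sums each component's individual dose-effect relationship at that component's dose; the IDER forms below are hypothetical placeholders:

```python
# Simple effect additivity: sum each ion's individual dose-effect relationship
# (IDER) evaluated at that ion's share of the mixture dose. The IDER shapes and
# parameters below are hypothetical placeholders, not the published HG models.

def ider_linear(dose, slope=0.4):
    return slope * dose

def ider_saturating(dose, emax=0.5, d50=0.2):
    return emax * dose / (dose + d50)

def simple_effect_additivity(doses, iders):
    """Default prediction with no synergy/antagonism: E_mix = sum_i E_i(d_i)."""
    return sum(ider(d) for ider, d in zip(iders, doses))

mixture_doses = [0.1, 0.05]   # Gy delivered by each beam component (illustrative)
print(simple_effect_additivity(mixture_doses, [ider_linear, ider_saturating]))
```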

  9. Unconsummated marriage: clarification of aetiology; treatment with intracorporeal injection.

    PubMed

    Zargooshi, J

    2000-07-01

    To clarify the aetiological factors in unconsummated marriage in Iran, and to report the results of intracorporeal injection therapy for erectile dysfunction in these circumstances. During a 2-year period, 200 cases of unconsummated marriage were evaluated. A detailed history was obtained to clarify the circumstances of the problem; if simple measures failed to resolve the problem, intracorporeal injection with papaverine +/- phentolamine was used. The main factor associated with an unconsummated marriage was the intense social pressure to accomplish hasty coitus with an unfamiliar woman (some men having had no social contact with their new bride) and in the presence of relatives waiting nearby for evidence of the bride's virginity and confirmation of coitus. The initial problem was then further compounded with resultant erectile failure caused by anxiety about sexual performance. Inability to consummate the marriage was caused by premature ejaculation in 23%, erectile dysfunction in 61%, and a combination in 16%; 70% of patients were able to consummate the union after intracorporeal injection with papaverine +/- phentolamine. In addition to psychological causes and a lack of sexual education, the social circumstances in which partners are obliged to initiate and complete coitus are important factors in the aetiology of unconsummated marriage. Intracorporeal injection is useful in treating this problem and it should be the therapy of choice for unconsummated marriage in developing countries, where the conditions do not favour psychotherapy and where alternative erectogenic agents are expensive or unavailable.

  10. The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces

    NASA Astrophysics Data System (ADS)

    Vuik, C.; Saghir, A.; Boerstoel, G. P.

    2000-08-01

    Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.

  11. Improvements in the Approximate Formulae for the Period of the Simple Pendulum

    ERIC Educational Resources Information Center

    Turkyilmazoglu, M.

    2010-01-01

    This paper is concerned with improvements in some exact formulae for the period of the simple pendulum problem. Two recently presented formulae are re-examined and refined rationally, yielding more accurate approximate periods. Based on the improved expressions here, a particular new formula is proposed for the period. It is shown that the derived…
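    For context, the quantity being approximated is the exact large-amplitude period, which involves the complete elliptic integral of the first kind; a short check against the small-angle formula (the paper's improved formulae are not reproduced in the abstract):

```python
import numpy as np
from scipy.special import ellipk

def period_exact(theta0, length=1.0, g=9.81):
    """Exact simple-pendulum period T = 4*sqrt(L/g)*K(m), with m = sin^2(theta0/2)."""
    m = np.sin(theta0 / 2.0) ** 2
    return 4.0 * np.sqrt(length / g) * ellipk(m)

def period_small_angle(length=1.0, g=9.81):
    """Small-angle approximation T0 = 2*pi*sqrt(L/g)."""
    return 2.0 * np.pi * np.sqrt(length / g)

theta0 = np.radians(60.0)
print(period_exact(theta0), period_small_angle())  # exact period exceeds T0
```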

  12. Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes

    ERIC Educational Resources Information Center

    Chen, Frederick H.

    2010-01-01

    The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…
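    As a small worked example of the quantity illustrated graphically, the expected utility of a three-outcome gamble is the probability-weighted sum of utilities; the square-root utility function here is an assumption:

```python
import math

def expected_utility(outcomes, probabilities, utility=math.sqrt):
    """EU = sum_i p_i * u(x_i); u is an assumed square-root (risk-averse) utility."""
    return sum(p * utility(x) for x, p in zip(outcomes, probabilities))

outcomes = [100.0, 25.0, 0.0]      # three possible payoffs
probabilities = [0.2, 0.5, 0.3]
print(expected_utility(outcomes, probabilities))  # compare with u(E[x]) to see risk aversion
```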

  13. Some Marginalist Intuition Concerning the Optimal Commodity Tax Problem

    ERIC Educational Resources Information Center

    Brett, Craig

    2006-01-01

    The author offers a simple intuition that can be exploited to derive and to help interpret some canonical results in the theory of optimal commodity taxation. He develops and explores the principle that the marginal social welfare loss per last unit of tax revenue generated be equalized across tax instruments. A simple two-consumer,…

  14. Design a Contract: A Simple Principal-Agent Problem as a Classroom Experiment

    ERIC Educational Resources Information Center

    Gachter, Simon; Konigstein, Manfred

    2009-01-01

    The authors present a simple classroom experiment that can be used as a teaching device to introduce important concepts of organizational economics and incentive contracting. First, students take the role of a principal and design a contract that consists of a fixed payment and an incentive component. Second, students take the role of agents and…

  15. On the interrelation of multiplication and division in secondary school children.

    PubMed

    Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph

    2013-01-01

    Each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill levels due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school types, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to the differences for school types.

  16. Could the bug Triatoma sherlocki be vectoring Chagas disease in small mining communities in Bahia, Brazil?

    PubMed

    Almeida, C E; Folly-Ramos, E; Peterson, A T; Lima-Neiva, V; Gumiel, M; Duarte, R; Lima, M M; Locks, M; Beltrão, M; Costa, J

    2009-12-01

    Searches for Chagas disease vectors were performed at the type locality from which Triatoma sherlocki Papa et al. (Hemiptera: Reduviidae: Triatominae) was described in the municipality of Gentio do Ouro, in the state of Bahia, Brazil, and in a small artisan quarry-mining community approximately 13 km distant in a remote area of the same municipality. The latter site represents a new locality record for this species. Adults, nymphs and exuviae of T. sherlocki were found in 21% of human dwellings, indicating that the species is in the process of domiciliation. Prevalence of Trypanosoma cruzi infection in collected bugs was 10.8%. Simple predictive approaches based on environmental similarity were used to identify additional sites likely suitable for this species. The approach successfully predicted an additional five sites for the species in surrounding landscapes. Ecological and entomological indicators were combined to discuss whether this scenario likely represents an isolated case or an emerging public health problem.

  17. Oriented clay nanopaper from biobased components--mechanisms for superior fire protection properties.

    PubMed

    Carosio, F; Kochumalayil, J; Cuttica, F; Camino, G; Berglund, L

    2015-03-18

    The toxicity of the most efficient fire retardant additives is a major problem for polymeric materials. Cellulose nanofiber (CNF)/clay nanocomposites, with unique brick-and-mortar structure and prepared by simple filtration, are characterized from the morphological point of view by scanning electron microscopy and X-ray diffraction. These nanocomposites have superior fire protection properties to other clay nanocomposites and fiber composites. The corresponding mechanisms are evaluated in terms of flammability (reaction to a flame) and cone calorimetry (exposure to heat flux). These two tests provide a wide spectrum characterization of fire protection properties in CNF/montmorillonite (MTM) materials. The morphology of the collected residues after flammability testing is investigated. In addition, thermal and thermo-oxidative stability are evaluated by thermogravimetric analyses performed in inert (nitrogen) and oxidative (air) atmospheres. Physical and chemical mechanisms are identified and related to the unique nanostructure and its low thermal conductivity, high gas barrier properties and CNF/MTM interactions for char formation.

  18. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
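    A minimal sketch of the Monte Carlo step described (sample parameters within their extreme ranges, propagate them through the model, and take quantiles); the model and ranges are placeholders, not a ground-water flow code:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_output(params):
    """Placeholder for a prediction from the calibrated model (e.g., head at a well)."""
    k, recharge = params
    return 10.0 * recharge / k

# Extreme ranges for the uncertain parameters (hypothetical).
n = 10_000
k_samples = rng.uniform(1.0, 5.0, n)   # hydraulic conductivity
r_samples = rng.uniform(0.1, 0.3, n)   # recharge
outputs = np.array([model_output(p) for p in zip(k_samples, r_samples)])

# Add random error in the dependent variable for a prediction (rather than confidence) interval.
outputs_pred = outputs + rng.normal(scale=0.05, size=n)
print(np.quantile(outputs, [0.025, 0.975]))       # ~95% confidence interval
print(np.quantile(outputs_pred, [0.025, 0.975]))  # ~95% prediction interval (wider)
```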

  19. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    NASA Astrophysics Data System (ADS)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight in the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  20. Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2001-01-01

    A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
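    A minimal sketch of a real-number-encoded genetic algorithm on a simple hill-climbing problem; the selection, crossover, and mutation operators are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Simple 2-D 'hill': maximum value 1.0 at (1, -2)."""
    return np.exp(-np.sum((x - np.array([1.0, -2.0])) ** 2, axis=-1))

pop = rng.uniform(-5.0, 5.0, size=(40, 2))          # real-number-encoded population
for generation in range(100):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]              # truncation selection
    moms = parents[rng.integers(0, 20, 40)]
    dads = parents[rng.integers(0, 20, 40)]
    w = rng.uniform(size=(40, 1))
    children = w * moms + (1.0 - w) * dads          # blend (arithmetic) crossover
    children += rng.normal(scale=0.1, size=children.shape)  # Gaussian mutation
    pop = children

best = pop[np.argmax(fitness(pop))]
print(best, fitness(best))                          # should approach (1, -2), fitness ~1
```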

  1. The Diffusion Simulator - Teaching Geomorphic and Geologic Problems Visually.

    ERIC Educational Resources Information Center

    Gilbert, R.

    1979-01-01

    Describes a simple hydraulic simulator based on more complex models long used by engineers to develop approximate solutions. It allows students to visualize non-steady transfer, to apply a model to solve a problem, and to compare experimentally simulated information with calculated values. (Author/MA)

  2. Ancient Paradoxes Can Extend Mathematical Thinking

    ERIC Educational Resources Information Center

    Czocher, Jennifer A.; Moss, Diana L.

    2017-01-01

    This article presents the Snail problem, a relatively simple challenge about motion that offers engaging extensions involving the notion of infinity. It encourages students in grades 5-9 to connect mathematics learning to logic, history, and philosophy through analyzing the problem, making sense of quantitative relationships, and modeling with…

  3. Computer Simulations and Clear Observations Do Not Guarantee Conceptual Understanding

    ERIC Educational Resources Information Center

    Renken, Maggie D.; Nunez, Narina

    2013-01-01

    Evidence for cognitive benefits of simulated versus physical experiments is unclear. Seventh grade participants (n = 147) reported their understanding of two simple pendulum problems (1) before conducting an experiment, (2) immediately following experimentation, and (3) after a 12-week delay. "Problem type" was manipulated within…

  4. On the Teaching of Portfolio Theory.

    ERIC Educational Resources Information Center

    Biederman, Daniel K.

    1992-01-01

    Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…

  5. Effects of alcohol on a problem solving task.

    DOT National Transportation Integrated Search

    1972-03-01

    Twenty subjects were tested on two separate days on a simple problem-solving task. Half of the subjects received alcohol on the first day of testing and half on the second day of testing. A control group of 11 subjects was also tested on two days and...

  6. Alternatives to School Disciplinary and Suspension Problems.

    ERIC Educational Resources Information Center

    South Carolina State Dept. of Education, Columbia. Div. of Instruction.

    Policies and procedures for disciplining students should be designed to teach them responsibility, rather than simply punish them. Providing educational opportunities to behavioral deviants is a problem that does not have a simple solution. However, alternatives to suspension or expulsion must be attempted before these disciplinary actions are…

  7. Extending Parent–Child Interaction Therapy for Early Childhood Internalizing Problems: New Advances for an Overlooked Population

    PubMed Central

    Puliafico, Anthony C.; Kurtz, Steven M. S.; Pincus, Donna B.; Comer, Jonathan S.

    2014-01-01

    Although efficacious psychological treatments for internalizing disorders are now well established for school-aged children, until recently there have regrettably been limited empirical efforts to clarify indicated psychological intervention methods for the treatment of mood and anxiety disorders presenting in early childhood. Young children lack many of the developmental capacities required to effectively participate in established treatments for mood and anxiety problems presenting in older children, making simple downward extensions of these treatments for the management of preschool internalizing problems misguided. In recent years, a number of research groups have successfully adapted and modified parent–child interaction therapy (PCIT), originally developed to treat externalizing problems in young children, to treat various early internalizing problems with a set of neighboring protocols. As in traditional PCIT, these extensions target child symptoms by directly reshaping parent–child interaction patterns associated with the maintenance of symptoms. The present review outlines this emerging set of novel PCIT adaptations and modifications for mood and anxiety problems in young children and reviews preliminary evidence supporting their use. Specifically, we cover (a) PCIT for early separation anxiety disorder; (b) the PCIT-CALM (Coaching Approach behavior and Leading by Modeling) Program for the full range of early anxiety disorders; (c) the group Turtle Program for behavioral inhibition; and (d) the PCIT-ED (Emotional Development) Program for preschool depression. In addition, emerging PCIT-related protocols in need of empirical attention—such as the PCIT-SM (selective mutism) Program for young children with SM—are also considered. Implications of these protocols are discussed with regard to their unique potential to address the clinical needs of young children with internalizing problems. Obstacles to broad dissemination are addressed, and we consider potential solutions, including modular treatment formats and innovative applications of technology. PMID:25212716

  8. Extending parent-child interaction therapy for early childhood internalizing problems: new advances for an overlooked population.

    PubMed

    Carpenter, Aubrey L; Puliafico, Anthony C; Kurtz, Steven M S; Pincus, Donna B; Comer, Jonathan S

    2014-12-01

    Although efficacious psychological treatments for internalizing disorders are now well established for school-aged children, until recently there have regrettably been limited empirical efforts to clarify indicated psychological intervention methods for the treatment of mood and anxiety disorders presenting in early childhood. Young children lack many of the developmental capacities required to effectively participate in established treatments for mood and anxiety problems presenting in older children, making simple downward extensions of these treatments for the management of preschool internalizing problems misguided. In recent years, a number of research groups have successfully adapted and modified parent-child interaction therapy (PCIT), originally developed to treat externalizing problems in young children, to treat various early internalizing problems with a set of neighboring protocols. As in traditional PCIT, these extensions target child symptoms by directly reshaping parent-child interaction patterns associated with the maintenance of symptoms. The present review outlines this emerging set of novel PCIT adaptations and modifications for mood and anxiety problems in young children and reviews preliminary evidence supporting their use. Specifically, we cover (a) PCIT for early separation anxiety disorder; (b) the PCIT-CALM (Coaching Approach behavior and Leading by Modeling) Program for the full range of early anxiety disorders; (c) the group Turtle Program for behavioral inhibition; and (d) the PCIT-ED (Emotional Development) Program for preschool depression. In addition, emerging PCIT-related protocols in need of empirical attention--such as the PCIT-SM (selective mutism) Program for young children with SM--are also considered. Implications of these protocols are discussed with regard to their unique potential to address the clinical needs of young children with internalizing problems. Obstacles to broad dissemination are addressed, and we consider potential solutions, including modular treatment formats and innovative applications of technology.

  9. Clinical guidance and an evidence-based approach for restoration of worn dentition by direct composite resin.

    PubMed

    Milosevic, A

    2018-03-09

    This paper aims to provide the dentist with practical guidance on the technique for direct composite restoration of worn teeth. It is based on current evidence and includes practical advice regarding type of composite, enamel and dentine preparation, dentine bonding and stent design. The application of direct composite has the advantage of being additive, conserving as much of the remaining worn tooth as possible, ease of placement and adjustment, low maintenance and reversibility. A pragmatic approach to management is advocated, particularly as many of the cases are older patients with advanced wear. Several cases restored by direct composite build-ups illustrate what can be achieved. The restoration of the worn dentition may be challenging for many dentists. Careful planning and simple treatment strategies, however, can prove to be highly effective and rewarding. By keeping any intervention as simple as possible, problems with high maintenance are avoided and management of future failure is made easier. An additive rather than a subtractive treatment approach is more intuitive for worn down teeth. Traditional approaches of full-mouth rehabilitation with indirect cast or milled restorations may still have their place but complex treatment modalities will inevitably be more time consuming, more costly, possibly require specialist care and still have an unpredictable outcome. Composite resin restorations are a universal restorative material familiar to dentists from early-on in the undergraduate curriculum. This review paper discusses the application of composite to restore the worn dentition.

  10. Using Synchronous Boolean Networks to Model Several Phenomena of Collective Behavior

    PubMed Central

    Kochemazov, Stepan; Semenov, Alexander

    2014-01-01

    In this paper, we propose an approach for modeling and analysis of a number of phenomena of collective behavior. By collectives we mean multi-agent systems that transition from one state to another at discrete moments of time. The behavior of a member of a collective (agent) is called conforming if the opinion of this agent at the current time moment conforms to the opinion of some other agents at the previous time moment. We presume that at each moment of time every agent makes a decision by choosing from the set {0, 1} (where 1-decision corresponds to action and 0-decision corresponds to inaction). In our approach we model collective behavior with synchronous Boolean networks. We presume that in a network there can be agents that act at every moment of time. Such agents are called instigators. Also there can be agents that never act. Such agents are called loyalists. Agents that are neither instigators nor loyalists are called simple agents. We study two combinatorial problems. The first problem is to find a disposition of instigators that in several time moments transforms a network from a state where the majority of simple agents are inactive to a state with the majority of active agents. The second problem is to find a disposition of loyalists that returns the network to a state with the majority of inactive agents. Similar problems are studied for networks in which simple agents demonstrate the contrary to conforming behavior that we call anticonforming. We obtained several theoretical results regarding the behavior of collectives of agents with conforming or anticonforming behavior. In computational experiments we solved the described problems for randomly generated networks with several hundred vertices. We reduced corresponding combinatorial problems to the Boolean satisfiability problem (SAT) and used modern SAT solvers to solve the instances obtained. PMID:25526612
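    A minimal sketch of the synchronous update described, assuming conforming behavior means following the majority of active in-neighbors at the previous step (one possible instantiation, not the paper's exact rule); the network and role assignment are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
adjacency = rng.integers(0, 2, size=(n, n))   # adjacency[i, j] = 1 if agent j influences agent i
np.fill_diagonal(adjacency, 0)

roles = np.array(["simple"] * n, dtype=object)
roles[:2] = "instigator"                      # always choose 1 (action)
roles[2:4] = "loyalist"                       # always choose 0 (inaction)

state = np.zeros(n, dtype=int)
state[roles == "instigator"] = 1

for t in range(10):                           # synchronous updates
    influence = adjacency @ state             # number of active in-neighbors per agent
    degree = adjacency.sum(axis=1)            # in-degree per agent
    conform = (influence * 2 > degree).astype(int)   # conforming: follow the majority
    new_state = np.where(roles == "simple", conform, state)
    new_state[roles == "instigator"] = 1
    new_state[roles == "loyalist"] = 0
    state = new_state

print(state.sum(), "of", n, "agents active after 10 steps")
```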

  11. Nonlinear theory of magnetohydrodynamic flows of a compressible fluid in the shallow water approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimachkov, D. A., E-mail: klimchakovdmitry@gmail.com; Petrosyan, A. S., E-mail: apetrosy@iki.rssi.ru

    2016-09-15

    Shallow water magnetohydrodynamic (MHD) theory describing incompressible flows of plasma is generalized to the case of compressible flows. A system of MHD equations is obtained that describes the flow of a thin layer of compressible rotating plasma in a gravitational field in the shallow water approximation. The system of quasilinear hyperbolic equations obtained admits a complete simple wave analysis and a solution to the initial discontinuity decay problem in the simplest version of nonrotating flows. In the new equations, sound waves are filtered out, and the dependence of density on pressure on large scales is taken into account, which describes static compressibility phenomena. In the equations obtained, the mass conservation law is formulated for a variable that nontrivially depends on the shape of the lower boundary, the characteristic vertical scale of the flow, and the scale of heights at which the variation of density becomes significant. A simple wave theory is developed for the system of equations obtained. All self-similar discontinuous solutions and all continuous centered self-similar solutions of the system are obtained. The initial discontinuity decay problem is solved explicitly for compressible MHD equations in the shallow water approximation. It is shown that there exist five different configurations that provide a solution to the initial discontinuity decay problem. For each configuration, conditions are found that are necessary and sufficient for its implementation. Differences between incompressible and compressible cases are analyzed. In spite of the formal similarity between the solutions in the classical case of MHD flows of incompressible and compressible fluids, the nonlinear dynamics described by the solutions are essentially different due to the difference in the expressions for the squared propagation velocity of weak perturbations. In addition, the solutions obtained describe new physical phenomena related to the dependence of the height of the free boundary on the density of the fluid. Self-similar continuous and discontinuous solutions are obtained for a system on a slope, and a solution is found to the initial discontinuity decay problem in this case.

  12. Educational Experiences of Embry-Riddle Students through NASA Research Collaboration

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Chatman, Yadira; Ristow, James; Gangadharan, Sathya; Sudermann, James; Walker, Charles

    2007-01-01

    NASA's educational programs benefit students while increasing the overall productivity of the organization. The NASA Graduate Student Research Program (GSRP) awards fellowships for graduate study leading to both masters and doctoral degrees in several technical fields, while the Cooperative Education program allows undergraduate and graduate students the chance to gain work experience in the field. The Mission Analysis Branch of the Expendable Launch Vehicles Division at NASA Kennedy Space Center has utilized these two programs with students from Embry-Riddle Aeronautical University to conduct research in modeling and developing a parameter estimation method for spacecraft fuel slosh using simple pendulum analogs. Simple pendulum models are used to understand complicated spacecraft fuel slosh behavior. A robust parameter estimation process will help to identify the parameters that will predict the response fairly accurately during the initial stages of design. NASA's Cooperative Education Program trains the next wave of new hires while allowing graduate and undergraduate college students to gain valuable "real-world" work experience. It gives NASA a no-risk capability to evaluate the true performance of a prospective new hire without relying solely on a paper resume, while providing the students with a greater hiring potential upon graduation, at NASA or elsewhere. In addition, graduate students serve as mentors for undergraduate students and provide a unique learning environment. Providing students with a unique opportunity to work on "real-world" aerospace problems ultimately reinforces their problem solving abilities and their communication skills (in terms of interviewing, resume writing, technical writing, presentation, and peer review) that are vital for the workforce to succeed.

  13. Clinical history for diagnosis of dementia in men: Caerphilly Prospective Study

    PubMed Central

    Creavin, Sam; Fish, Mark; Gallacher, John; Bayer, Antony; Ben-Shlomo, Yoav

    2015-01-01

    Background: Diagnosis of dementia often requires specialist referral and detailed, time-consuming assessments. Aim: To investigate the utility of simple clinical items that non-specialist clinicians could use, in addition to routine practice, to diagnose all-cause dementia syndrome. Design and setting: Cross-sectional diagnostic test accuracy study. Participants were identified from the electoral roll and general practice lists in Caerphilly and adjoining villages in South Wales, UK. Method: Participants (1225 men aged 45–59 years) were screened for cognitive impairment using the Cambridge Cognitive Examination, CAMCOG, at phase 5 of the Caerphilly Prospective Study (CaPS). Index tests were a standardised clinical evaluation, neurological examination, and individual items on the Informant Questionnaire for Cognitive Disorders in the Elderly (IQCODE). Results: Two-hundred and five men who screened positive (68%) and 45 (4.8%) who screened negative were seen, with 59 diagnosed with dementia. The model comprising problems with personal finance and planning had an area under the curve (AUC) of 0.92 (95% confidence interval [CI] = 0.86 to 0.97), positive likelihood ratio (LR+) of 23.7 (95% CI = 5.88 to 95.6), negative likelihood ratio (LR−) of 0.41 (95% CI = 0.27 to 0.62). The best single item for ruling out was no problems learning to use new gadgets (LR− of 0.22, 95% CI = 0.11 to 0.43). Conclusion: This study found that three simple questions have high utility for diagnosing dementia in men who are cognitively screened. If confirmed, this could lead to less burdensome assessment where clinical assessment suggests possible dementia. PMID:26212844
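    For context on how the reported likelihood ratios would be applied, a positive or negative result updates a pre-test probability through the odds form of Bayes' rule; the 20% pre-test probability below is an assumption:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes via odds: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Assume a 20% pre-test probability in a screened-positive group (illustrative only).
print(post_test_probability(0.20, 23.7))   # positive result on the two-item model
print(post_test_probability(0.20, 0.22))   # no problems learning to use new gadgets
```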

  14. Travel Medicine Encounters of Australian General Practice Trainees-A Cross-Sectional Study.

    PubMed

    Morgan, Simon; Henderson, Kim M; Tapley, Amanda; Scott, John; van Driel, Mieke L; Spike, Neil A; McArthur, Lawrie A; Davey, Andrew R; Catzikiris, Nigel F; Magin, Parker J

    2015-01-01

    Travel medicine is a common and challenging area of clinical practice and practitioners need up-to-date knowledge and experience in a range of areas. Australian general practitioners (GPs) play a significant role in the delivery of travel medicine advice. We aimed to describe the rate and nature of travel medicine consultations, including both the clinical and educational aspects of the consultations. A cross-sectional analysis from an ongoing cohort study of GP trainees' clinical consultations was performed. Trainees contemporaneously recorded demographic, clinical, and educational details of consecutive patient consultations. Proportions of all problems/diagnoses managed in these consultations that were coded "travel-related" and "travel advice" were both calculated with 95% confidence intervals (CIs). Associations of a problem/diagnosis being "travel-related" or "travel advice" were tested using simple logistic regression within the generalized estimating equations (GEE) framework. A total of 856 trainees contributed data on 169,307 problems from 108,759 consultations (2010-2014). Travel-related and travel advice problems were managed at a rate of 1.1 and 0.5 problems per 100 encounters, respectively. Significant positive associations of travel-related problems were younger trainee and patient age; new patient to the trainee and practice; privately billing, larger, urban, and higher socioeconomic status practices; and involvement of the practice nurse. Trainees sought in-consultation information and generated learning goals in 34.7 and 20.8% of travel advice problems, respectively, significantly more than in non-travel advice problems. Significant positive associations of travel advice problems were seeking in-consultation information, generation of learning goals, longer consultation duration, and more problems managed. Our findings reinforce the importance of focused training in travel medicine for GP trainees and adequate exposure to patients in the practice setting. In addition, our findings have implications more broadly for the delivery of travel medicine in general practice. © 2015 International Society of Travel Medicine.

  15. Classical problems in computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    In relation to the expected problems in the development of computational aeroacoustics (CAA), the preliminary applications were to classical problems where the known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems such as direct simulations, acoustic analogies and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.

  16. Some Results on Proper Eigenvalues and Eigenvectors with Applications to Scaling.

    ERIC Educational Resources Information Center

    McDonald, Roderick P.; And Others

    1979-01-01

    Problems in avoiding the singularity problem in analyzing matrices for optimal scaling are addressed. Conditions are given under which the stationary points and values of a ratio of quadratic forms in two singular matrices can be obtained by a series of simple matrix operations. (Author/JKS)

  17. A Unified Approach for Solving Nonlinear Regular Perturbation Problems

    ERIC Educational Resources Information Center

    Khuri, S. A.

    2008-01-01

    This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…

  18. Probability in Action: The Red Traffic Light

    ERIC Educational Resources Information Center

    Shanks, John A.

    2007-01-01

    Emphasis on problem solving in mathematics has gained considerable attention in recent years. While statistics teaching has always been problem driven, the same cannot be said for the teaching of probability where discrete examples involving coins and playing cards are often the norm. This article describes an application of simple probability…

  19. A Laboratory Exercise with Related Rates.

    ERIC Educational Resources Information Center

    Sworder, Steven C.

    A laboratory experiment, based on a simple electric circuit that can be used to demonstrate the existence of real-world "related rates" problems, is outlined and an equation for voltage across the capacitor terminals during discharge is derived. The necessary materials, setup methods, and experimental problems are described. A student laboratory…

  20. Developing Problem-Solving Skills through Retrosynthetic Analysis and Clickers in Organic Chemistry

    ERIC Educational Resources Information Center

    Flynn, Alison B.

    2011-01-01

    A unique approach to teaching and learning problem-solving and critical-thinking skills in the context of retrosynthetic analysis is described. In this approach, introductory organic chemistry students, who typically see only simple organic structures, undertook partial retrosynthetic analyses of real and complex synthetic targets. Multiple…

  1. 42 CFR 483.136 - Evaluating whether an individual with mental retardation requires specialized services (PASARR/MR).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) The individual's medical problems; (2) The level of impact these problems have on the individual's...) Independent living development such as meal preparation, budgeting and personal finances, survival skills... the most personal care needs; (B) Understand simple commands; (C) Communicate basic needs and wants...

  2. The Geoboard Triangle Quest

    ERIC Educational Resources Information Center

    Allen, Kasi C.

    2013-01-01

    In line with the Common Core and Standards for Mathematical Practice that portray a classroom where students are engaged in problem-solving experiences, and where various tools and arguments are employed to grow their strategic thinking, this article is the story of such a student-initiated problem. A seemingly simple question was posed by…

  3. Measuring Children's Reading Development Using Leveled Texts.

    ERIC Educational Resources Information Center

    Paris, Scott G.

    2002-01-01

    Notes that the main problem with using Informal Reading Inventories (IRIs) for measuring reading growth is that running records and miscue analyses are gathered on variable levels of text that are appropriate for each child. Presents several possible solutions to the measurement problem beyond the simple profile descriptions usually reported from…

  4. Environmental Pollution, Student's Book (Experiences/Experiments/Activities).

    ERIC Educational Resources Information Center

    Weaver, Elbert C.

    Described in this student's manual are numerous experiments to acquaint the learner with community environmental problems. Experiments are relatively simple and useful in the junior high school grades. Activities are provided which emphasize some of the materials involved in pollution problems, such as carbon dioxide, sulfur compounds, and others,…

  5. Problem-Solving in the Pre-Clinical Curriculum: The Uses of Computer Simulations.

    ERIC Educational Resources Information Center

    Michael, Joel A.; Rovick, Allen A.

    1986-01-01

    Promotes the use of computer-based simulations in the pre-clinical medical curriculum as a means of providing students with opportunities for problem solving. Describes simple simulations of skeletal muscle loads, complex simulations of major organ systems and comprehensive simulation models of the entire human body. (TW)

  6. Autonomously Self-Adhesive Hydrogels as Building Blocks for Additive Manufacturing.

    PubMed

    Deng, Xudong; Attalla, Rana; Sadowski, Lukas P; Chen, Mengsu; Majcher, Michael J; Urosev, Ivan; Yin, Da-Chuan; Selvaganapathy, P Ravi; Filipe, Carlos D M; Hoare, Todd

    2018-01-08

    We report a simple method of preparing autonomous and rapid self-adhesive hydrogels and their use as building blocks for additive manufacturing of functional tissue scaffolds. Dynamic cross-linking between 2-aminophenylboronic acid-functionalized hyaluronic acid and poly(vinyl alcohol) yields hydrogels that recover their mechanical integrity within 1 min after cutting or shear under both neutral and acidic pH conditions. Incorporation of this hydrogel in an interpenetrating calcium-alginate network results in an interfacially stiffer but still rapidly self-adhesive hydrogel that can be assembled into hollow perfusion channels by simple contact additive manufacturing within minutes. Such channels withstand fluid perfusion while retaining their dimensions and support endothelial cell growth and proliferation, providing a simple and modular route to produce customized cell scaffolds.

  7. A definition of the degree of controllability - A criterion for actuator placement

    NASA Technical Reports Server (NTRS)

    Viswanathan, C. N.; Longman, R. W.; Likins, P. W.

    1979-01-01

    The unsolved problem of how to control the attitude and shape of future very large flexible satellite structures represents a challenging problem for modern control theory. One aspect of this problem is the question of how to choose the number and locations throughout the spacecraft of the control system actuators. Starting from basic physical considerations, this paper develops a concept of the degree of controllability of a control system, and then develops numerical methods to generate approximate values of the degree of controllability for any spacecraft. These results offer the control system designer a tool which allows him to rank the effectiveness of alternative actuator distributions, and hence to choose the actuator locations on a rational basis. The degree of controllability is shown to take a particularly simple form when the satellite dynamics equations are in modal form. Examples are provided to illustrate the use of the concept on a simple flexible spacecraft.
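    The paper's degree-of-controllability measure is not reproduced in the abstract; as a reference point, the underlying yes/no controllability test for a linear system x' = Ax + Bu is the rank of the Kalman controllability matrix, sketched here for a hypothetical two-mode model with one actuator:

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1)B] for x' = A x + B u."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Hypothetical two-mode flexible structure with a single actuator.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, -4.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [0.5]])

C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C), "of", A.shape[0])   # full rank -> controllable
```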

  8. Diffusion of a new intermediate product in a simple ‘classical‐Schumpeterian’ model

    PubMed Central

    2017-01-01

    Abstract This paper deals with the problem of new intermediate products within a simple model, where production is circular and goods enter into the production of other goods. It studies the process by which the new good is absorbed into the economy and the structural transformation that goes with it. By means of a long‐period method the forces of structural transformation are examined, in particular the shift of existing means of production towards the innovation and the mechanism of differential growth in terms of alternative techniques and their associated systems of production. We treat two important Schumpeterian topics: the question of technological unemployment and the problem of ‘forced saving’ and the related problem of an involuntary reduction of real consumption per capita. It is shown that both phenomena are potential by‐products of the transformation process. PMID:29695874

  9. Complexity and compositionality in fluid intelligence

    PubMed Central

    Duncan, John; Chylinski, Daphne

    2017-01-01

    Compositionality, or the ability to build complex cognitive structures from simple parts, is fundamental to the power of the human mind. Here we relate this principle to the psychometric concept of fluid intelligence, traditionally measured with tests of complex reasoning. Following the principle of compositionality, we propose that the critical function in fluid intelligence is splitting a complex whole into simple, separately attended parts. To test this proposal, we modify traditional matrix reasoning problems to minimize requirements on information integration, working memory, and processing speed, creating problems that are trivial once effectively divided into parts. Performance remains poor in participants with low fluid intelligence, but is radically improved by problem layout that aids cognitive segmentation. In line with the principle of compositionality, we suggest that effective cognitive segmentation is important in all organized behavior, explaining the broad role of fluid intelligence in successful cognition. PMID:28461462

  10. [Nationwide survey of postgraduate medical training in clinical neurology].

    PubMed

    Biesalski, A-S; Franke, C; Sturm, D; Behncke, J; Schreckenbach, T; Knauß, S; Eisenberg, H; Hillienhof, A; Sand, F; Zupanic, M

    2018-06-05

    Currently, no data are available that reflect the situation of medical doctors specializing in neurology in German hospitals. In order to secure the high standard of neurological patient care, it is essential to evaluate the working conditions and the specialty training in neurology. This nationwide survey was conducted throughout Germany with the aim of identifying problems and giving suggestions for improvements in neurological training curricula. The survey was online from February to May 2017, and 953 neurologists undergoing further training participated. More than half of the young neurologists were satisfied with their medical training. One of the main problems complicating clinical training is the workload. In addition, organizational obstacles within the clinic, such as a poor structure of education or a lack of mentors, lead to dissatisfaction among participants. The size or type of the department, as well as the prevailing service system, exerts only a minor influence on the quality of specialist training, although there were differences, especially in the self-assessment of the participants, in connection with the type of department (university hospital versus public or private hospital). Specialist training in neurology can be improved by simple arrangements, e.g., the introduction of a binding rotation scheme, internal mentoring, and structured feedback. In addition, it will be necessary to relieve medical staff of administrative duties in order to create time for training and the learning of competencies.

  11. Privacy-preserving heterogeneous health data sharing.

    PubMed

    Mohammed, Noman; Jiang, Xiaoqian; Chen, Rui; Fung, Benjamin C M; Ohno-Machado, Lucila

    2013-05-01

    Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees and makes no assumptions about an adversary's background knowledge. All existing solutions that ensure ε-differential privacy handle the problem of disclosing relational and set-valued data in a privacy-preserving manner separately. In this paper, we propose an algorithm that considers both relational and set-valued data in differentially private disclosure of healthcare data. The proposed approach makes a simple yet fundamental switch in differentially private algorithm design: instead of listing all possible records (ie, a contingency table) for noise addition, records are generalized before noise addition. The algorithm first generalizes the raw data in a probabilistic way, and then adds noise to guarantee ε-differential privacy. We showed that the disclosed data could be used effectively to build a decision tree induction classifier. Experimental results demonstrated that the proposed algorithm is scalable and performs better than existing solutions for classification analysis. The resulting utility may degrade when the output domain size is very large, making it potentially inappropriate to generate synthetic data for large health databases. Unlike existing techniques, the proposed algorithm allows the disclosure of health data containing both relational and set-valued data in a differentially private manner, and can retain essential information for discriminative analysis.
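
    For illustration only, a minimal sketch of the generalize-then-add-noise idea described above (not the authors' algorithm; the age-banding generalization, epsilon value, and counting query are hypothetical):

    ```python
    # Minimal sketch of "generalize, then add noise" for epsilon-differential privacy.
    # This illustrates the general idea only, not the algorithm from the paper;
    # the generalization hierarchy and epsilon below are hypothetical.
    import numpy as np

    def generalize_age(age: int) -> str:
        """Coarsen an exact age into a 10-year band (a toy generalization)."""
        lo = (age // 10) * 10
        return f"{lo}-{lo + 9}"

    def private_histogram(ages, epsilon=1.0, seed=0):
        """Count records per generalized band, then add Laplace(1/epsilon) noise."""
        rng = np.random.default_rng(seed)
        counts = {}
        for age in ages:
            band = generalize_age(age)
            counts[band] = counts.get(band, 0) + 1
        # Adding or removing one record changes each count by at most 1
        # (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
        # epsilon-differential privacy for this simple counting query.
        return {band: c + rng.laplace(scale=1.0 / epsilon) for band, c in counts.items()}

    print(private_histogram([23, 27, 31, 34, 35, 58, 61], epsilon=0.5))
    ```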

  12. Privacy-preserving heterogeneous health data sharing

    PubMed Central

    Mohammed, Noman; Jiang, Xiaoqian; Chen, Rui; Fung, Benjamin C M; Ohno-Machado, Lucila

    2013-01-01

    Objective Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among existing privacy models, ε-differential privacy provides one of the strongest privacy guarantees and makes no assumptions about an adversary's background knowledge. All existing solutions that ensure ε-differential privacy handle the problem of disclosing relational and set-valued data in a privacy-preserving manner separately. In this paper, we propose an algorithm that considers both relational and set-valued data in differentially private disclosure of healthcare data. Methods The proposed approach makes a simple yet fundamental switch in differentially private algorithm design: instead of listing all possible records (ie, a contingency table) for noise addition, records are generalized before noise addition. The algorithm first generalizes the raw data in a probabilistic way, and then adds noise to guarantee ε-differential privacy. Results We showed that the disclosed data could be used effectively to build a decision tree induction classifier. Experimental results demonstrated that the proposed algorithm is scalable and performs better than existing solutions for classification analysis. Limitation The resulting utility may degrade when the output domain size is very large, making it potentially inappropriate to generate synthetic data for large health databases. Conclusions Unlike existing techniques, the proposed algorithm allows the disclosure of health data containing both relational and set-valued data in a differentially private manner, and can retain essential information for discriminative analysis. PMID:23242630

  13. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each processor involved is a multicore processor with four cores, so the cluster has eight processor cores in total. The cluster runs an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test, done with a simple MPI "Hello" program written in C, verified that the computers are able to pass the required information without any problem. The performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, the same code was run on a single processor and then on 2, 4, and 8 processors. The results show that with additional processors the time required to solve the problem decreases; the calculation time is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer from common hardware that is capable of higher computing power than a single-CPU machine, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
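
    The study's tests were a C "Hello" program under MPICH2 plus a strong-scaling timing run; the sketch below illustrates the same two checks with mpi4py, which is an assumption made purely for illustration rather than the code used in the study:

    ```python
    # Illustrative communication and timing check for a small MPI cluster.
    # The study used a C "Hello" program with MPICH2; this mpi4py version is only
    # a sketch of the same idea (the workload below is hypothetical).
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Communication test: every rank reports in.
    print(f"Hello from rank {rank} of {size}")

    # Performance test: time an embarrassingly parallel workload and compare
    # wall time as the number of processes grows.
    n = 8_000_000
    chunk = n // size
    start = MPI.Wtime()
    local_sum = np.sum(np.sqrt(np.arange(rank * chunk, (rank + 1) * chunk, dtype=np.float64)))
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        print(f"processes={size}, result={total:.3e}, time={elapsed:.3f}s")
    ```

    Run with, for example, mpiexec -n 2 python cluster_check.py, then repeat with -n 4 and -n 8 to reproduce the scaling comparison (the script name here is arbitrary).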

  14. Additive Runge-Kutta Schemes for Convection-Diffusion-Reaction Equations

    NASA Technical Reports Server (NTRS)

    Kennedy, Christopher A.; Carpenter, Mark H.

    2001-01-01

    Additive Runge-Kutta (ARK) methods are investigated for application to the spatially discretized one-dimensional convection-diffusion-reaction (CDR) equations. First, accuracy, stability, conservation, and dense output are considered for the general case when N different Runge-Kutta methods are grouped into a single composite method. Then, implicit-explicit, N = 2, additive Runge-Kutta ARK2 methods from third- to fifth-order are presented that allow for integration of stiff terms by an L-stable, stiffly-accurate explicit, singly diagonally implicit Runge-Kutta (ESDIRK) method while the nonstiff terms are integrated with a traditional explicit Runge-Kutta method (ERK). Coupling error terms are of equal order to those of the elemental methods. Derived ARK2 methods have vanishing stability functions for very large values of the stiff scaled eigenvalue, z(exp [I]) goes to infinity, and retain high stability efficiency in the absence of stiffness, z(exp [I]) goes to zero. Extrapolation-type stage-value predictors are provided based on dense-output formulae. Optimized methods minimize both leading order ARK2 error terms and Butcher coefficient magnitudes as well as maximize conservation properties. Numerical tests of the new schemes on a CDR problem show negligible stiffness leakage and near classical order convergence rates. However, tests on three simple singular-perturbation problems reveal generally predictable order reduction. Error control is best managed with a PID-controller. While results for the fifth-order method are disappointing, both the new third- and fourth-order methods are at least as efficient as existing ARK2 methods while offering error control and stage-value predictors.
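
    As a much-reduced illustration of the additive (implicit-explicit) splitting idea behind these schemes, a first-order IMEX Euler step on a made-up stiff/nonstiff test equation might look like this (the ARK2 schemes in the paper are third- to fifth-order and considerably more involved):

    ```python
    # Toy illustration of the additive (IMEX) idea behind ARK schemes:
    # split du/dt = f_E(u, t) + f_I(u) into a nonstiff explicit part and a stiff
    # implicit part. This first-order IMEX Euler step is far simpler than the
    # third- to fifth-order ARK2 schemes in the paper; the test equation is invented.
    import numpy as np

    lam = 1.0e4                          # stiff decay rate, treated implicitly
    f_explicit = lambda u, t: np.cos(t)  # nonstiff forcing, treated explicitly

    def imex_euler(u0, t0, t1, dt):
        u, t = u0, t0
        while t < t1 - 1e-12:
            # Explicit Euler on the nonstiff term, implicit Euler on the stiff term:
            # u_new = u + dt*f_E(u, t) + dt*(-lam*u_new)  =>  solve for u_new.
            u = (u + dt * f_explicit(u, t)) / (1.0 + dt * lam)
            t += dt
        return u

    # A step size with dt*lam >> 1 (unstable for a fully explicit treatment of the
    # stiff term) remains stable here.
    print(imex_euler(u0=1.0, t0=0.0, t1=1.0, dt=1e-2))
    ```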

  15. Approximation algorithms for the min-power symmetric connectivity problem

    NASA Astrophysics Data System (ADS)

    Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad

    2016-10-01

    We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem occurs in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics based on the variable neighborhood search metaheuristic for the approximate solution of the problem. We performed a numerical experiment in which all proposed algorithms were executed on randomly generated test samples. For these instances, on average, our algorithms outperform the previously known heuristics.

  16. The Case of Web-Based Course on Taxation: Current Status, Problems and Future Improvement

    NASA Astrophysics Data System (ADS)

    Qin, Zhigang

    This paper introduces the case of the web-based course on taxation developed by Xiamen University. We analyze the current status, problems, and future improvement of the web-based course. The course has the basic contents and modules, but it has several problems, including unclear objectives, a lack of interaction, the absence of an examination module and a study-management module, and learning materials and navigation that are too simple. Based on these problems, we put forward measures to improve it.

  17. The `Miracle' of Applicability? The Curious Case of the Simple Harmonic Oscillator

    NASA Astrophysics Data System (ADS)

    Bangu, Sorin; Moir, Robert H. C.

    2018-05-01

    The paper discusses to what extent the conceptual issues involved in solving the simple harmonic oscillator model fit Wigner's famous point that the applicability of mathematics borders on the miraculous. We argue that although there is ultimately nothing mysterious here, as is to be expected, a careful demonstration that this is so involves unexpected difficulties. Consequently, through the lens of this simple case we derive some insight into what is responsible for the appearance of mystery in more sophisticated examples of the Wigner problem.

  18. A simple finite element method for the Stokes equations

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-03-21

    The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with far fewer unknowns. The numerical experiments indicate that the method is accurate.

  19. A simple finite element method for the Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Ye, Xiu

    The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with far fewer unknowns. The numerical experiments indicate that the method is accurate.

  20. The `Miracle' of Applicability? The Curious Case of the Simple Harmonic Oscillator

    NASA Astrophysics Data System (ADS)

    Bangu, Sorin; Moir, Robert H. C.

    2018-03-01

    The paper discusses to what extent the conceptual issues involved in solving the simple harmonic oscillator model fit Wigner's famous point that the applicability of mathematics borders on the miraculous. We argue that although there is ultimately nothing mysterious here, as is to be expected, a careful demonstration that this is so involves unexpected difficulties. Consequently, through the lens of this simple case we derive some insight into what is responsible for the appearance of mystery in more sophisticated examples of the Wigner problem.

  1. Recursive solution of number of reachable states of a simple subclass of FMS

    NASA Astrophysics Data System (ADS)

    Chao, Daniel Yuh

    2014-03-01

    This paper aims to compute the number of reachable (forbidden, live and deadlock) states for flexible manufacturing systems (FMS) without the construction of reachability graph. The problem is nontrivial and takes, in general, an exponential amount of time to solve. Hence, this paper focusses on a simple version of Systems of Simple Sequential Processes with Resources (S3PR), called kth-order system, where each resource place holds one token to be shared between two processes. The exact number of reachable (forbidden, live and deadlock) states can be computed recursively.

  2. Complex Problem Solving: What It Is and What It Is Not

    PubMed Central

    Dörner, Dietrich; Funke, Joachim

    2017-01-01

    Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems. PMID:28744242

  3. A Simple Interactive Introduction to Teaching Genetic Engineering

    ERIC Educational Resources Information Center

    Child, Paula

    2013-01-01

    In the UK, at key stage 4, students aged 14-15 studying GCSE Core Science or Unit 1 of the GCSE Biology course are required to be able to describe the process of genetic engineering to produce bacteria that can produce insulin. The simple interactive introduction described in this article allows students to consider the problem, devise a model and…

  4. English Spelling: The Simple, The Fancy, The Insane, The Tricky, and The Scrunched Up. "Great Idea" Reprint Series #612.

    ERIC Educational Resources Information Center

    McCabe, Don

    The author explains the five-pronged approach to reading and spelling through classifying words into "simple,""fancy,""insane,""tricky," and "scrunched up" categories, and reports average gains of two grade levels in one semester by junior high school students with severe behavioral problems who learned the approach. Examples of the five word…

  5. Two Magnets and a Ball Bearing: A Simple Demonstration of the Methods of Images.

    ERIC Educational Resources Information Center

    Poon, W. C. K.

    2003-01-01

    Investigates the behavior of a bar magnet with a steel ball bearing on one pole as it approaches another bar magnet. Maps the problem onto electrostatics and explains observations based on the behavior of point charges near an isolated, uncharged sphere. Offers a simple demonstration of the method of images in electrostatics. (Author/NB)

  6. How Many U.S. High School Students Have a Foreign Language Reading "Disability"? Reading without Meaning and the Simple View

    ERIC Educational Resources Information Center

    Sparks, Richard L.; Luebbers, Julie

    2018-01-01

    Conventional wisdom suggests that students classified as learning disabled will exhibit difficulties with foreign language (FL) learning, but evidence has not supported a relationship between FL learning problems and learning disabilities. The simple view of reading model posits that reading comprehension is the product of word decoding and…

  7. Ten simple rules for making research software more robust

    PubMed Central

    2017-01-01

    Software produced for research, published and otherwise, suffers from a number of common problems that make it difficult or impossible to run outside the original institution or even off the primary developer’s computer. We present ten simple rules to make such software robust enough to be run by anyone, anywhere, and thereby delight your users and collaborators. PMID:28407023

  8. Making a Simple Self-Starting Electric Motor

    ERIC Educational Resources Information Center

    Hong, Seok-In; Choi, Jung-In; Hong, Seok-Cheol

    2009-01-01

    A simple electric motor has a problem in that the current applied to the motor per se can rarely trigger its rotation. Usually such motors begin to rotate after the rotor is slightly turned by hand (i.e., manual starting). In a "self-starting" motor, the rotor starts to rotate spontaneously as soon as the current is applied. This paper describes…

  9. Erratum: Simple Seismic Tests of the Solar Core

    NASA Astrophysics Data System (ADS)

    Kennedy, Dallas C.

    2000-12-01

    In the article ``Simple Seismic Tests of the Solar Core'' by Dallas C. Kennedy (ApJ, 540, 1109 [2000]), Figures 1, 2, and 3 in the print edition of the Journal were unreadable because of problems with the electronic file format. The figures in the electronic edition were unaffected. The figures should have appeared as below. The Press sincerely regrets this error.

  10. Helping Graduate Teaching Assistants Lead Discussions with Undergraduate Students: A Few Simple Teaching Strategies

    ERIC Educational Resources Information Center

    Jensen, Murray; Farrand, Kirsten; Redman, Leanne; Varcoe, Tamara; Coleman, Leana

    2005-01-01

    Graduate Teaching Assistants (GTAs) are frequently asked to lead discussion groups. These groups generally take the form of tutorials, review sessions, or problem-based learning classes. In their preparation, what to teach is often emphasized over how to teach. The primary intent of this article is to provide a few simple teaching strategies for…

  11. Simple Verification of the Parabolic Shape of a Rotating Liquid and a Boat on Its Surface

    ERIC Educational Resources Information Center

    Sabatka, Z.; Dvorak, L.

    2010-01-01

    This article describes a simple and inexpensive way to create and to verify the parabolic surface of a rotating liquid. The liquid is water. The second part of the article deals with the problem of a boat on the surface of a rotating liquid. (Contains 1 table, 10 figures and 5 footnotes.)

  12. 20 CFR 725.608 - Interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... simple annual interest, computed from the date on which the benefits were due. The interest shall be... payment of retroactive benefits, the beneficiary shall also be entitled to simple annual interest on such... entitled to simple annual interest computed from the date upon which the beneficiary's right to additional...
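
    Purely as an illustration of simple (non-compounding) annual interest accrued from a due date, with a hypothetical rate and day-count convention not taken from the regulation:

    ```python
    # Illustrative simple-interest computation; the rate and the actual/365
    # day-count convention are hypothetical, not taken from 20 CFR 725.608.
    from datetime import date

    def simple_interest(principal: float, annual_rate: float, due: date, paid: date) -> float:
        """Simple (non-compounding) interest accrued between two dates."""
        days = (paid - due).days
        return principal * annual_rate * days / 365.0

    print(round(simple_interest(10_000.00, 0.06, date(2009, 4, 1), date(2010, 4, 1)), 2))
    ```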

  13. Simultaneous viscosity and density measurement of small volumes of liquids using a vibrating microcantilever

    PubMed Central

    Payam, A. F.; Trewby, W.

    2017-01-01

    Many industrial and technological applications require precise determination of the viscosity and density of liquids. Such measurements can be time consuming and often require sampling substantial amounts of the liquid. These problems can partly be overcome with the use of microcantilevers but most existing methods depend on the specific geometry and properties of the cantilever, which renders simple, accurate measurement difficult. Here we present a new approach able to simultaneously quantify both the density and the viscosity of microliters of liquids. The method, based solely on the measurement of two characteristic frequencies of an immersed microcantilever, is completely independent of the choice of a cantilever. We derive analytical expressions for the liquid's density and viscosity and validate our approach with several simple liquids and different cantilevers. Application of our model to non-Newtonian fluids shows that the calculated viscosities are remarkably robust when compared to measurements obtained from a standard rheometer. However, the results become increasingly dependent on the cantilever geometry as the frequency-dependent nature of the liquid's viscosity becomes more significant. PMID:28352874

  14. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  15. A stochastic process approach of the drake equation parameters

    NASA Astrophysics Data System (ADS)

    Glade, Nicolas; Ballet, Pascal; Bastien, Olivier

    2012-04-01

    The number N of detectable (i.e., communicating) extraterrestrial civilizations in the Milky Way galaxy is usually calculated by using the Drake equation. This equation was established in 1961 by Frank Drake and was the first step in quantifying the Search for ExtraTerrestrial Intelligence (SETI) field. Practically, this equation is a rather simple algebraic expression and its simplistic nature leaves it open to frequent re-expression. An additional problem of the Drake equation is the time-independence of its terms, which for example excludes the effects of the physico-chemical history of the galaxy. Recently, it has been demonstrated that the main shortcoming of the Drake equation is its lack of temporal structure, i.e., it fails to take into account various evolutionary processes. In particular, the Drake equation does not provide any error estimate of the measured quantity. Here, we propose a first treatment of these evolutionary aspects by constructing a simple stochastic process that provides both a temporal structure to the Drake equation (i.e., it introduces time in the Drake formula in order to obtain something like N(t)) and a first standard error measure.
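
    The classical equation referred to here is the product N = R* · fp · ne · fl · fi · fc · L. A minimal sketch with invented parameter ranges evaluates it and adds a crude Monte Carlo spread, only to make the "no error estimate" point concrete; this is not the stochastic-process model proposed by the authors:

    ```python
    # The classical Drake equation, N = R* * fp * ne * fl * fi * fc * L, plus a
    # crude Monte Carlo spread over hypothetical parameter ranges. This echoes the
    # paper's motivation (a point estimate carries no error bar) and is not the
    # authors' stochastic-process model.
    import numpy as np

    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    rng = np.random.default_rng(42)
    samples = drake(
        R_star=rng.uniform(1, 3, 10_000),    # star formation rate per year
        f_p=rng.uniform(0.2, 1.0, 10_000),   # fraction of stars with planets
        n_e=rng.uniform(0.5, 2.0, 10_000),   # habitable planets per such star
        f_l=rng.uniform(0.0, 1.0, 10_000),   # fraction developing life
        f_i=rng.uniform(0.0, 1.0, 10_000),   # fraction developing intelligence
        f_c=rng.uniform(0.0, 0.5, 10_000),   # fraction that communicate
        L=rng.uniform(100, 10_000, 10_000),  # communicating lifetime in years
    )
    print(f"median N = {np.median(samples):.1f}, "
          f"68% interval = [{np.percentile(samples, 16):.1f}, {np.percentile(samples, 84):.1f}]")
    ```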

  16. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.

  17. Odor Recognition vs. Classification in Artificial Olfaction

    NASA Astrophysics Data System (ADS)

    Raman, Baranidharan; Hertz, Joshua; Benkstein, Kurt; Semancik, Steve

    2011-09-01

    Most studies in chemical sensing have focused on the problem of precisely identifying chemical species that the sensor was exposed to during the training phase (the recognition problem). However, generalization of training to predict the chemical composition of untrained gases based on their similarity to analytes in the training set (the classification problem) has received very limited attention. These two analytical tasks pose conflicting constraints on the system. While correct recognition requires detection of molecular features that are unique to an analyte, generalization to untrained chemicals requires detection of features that are common across a desired class of analytes. A simple solution that addresses both issues simultaneously can be obtained from biological olfaction, where the odor class and identity information are decoupled and extracted individually over time. Mimicking this approach, we proposed a hierarchical scheme that allowed initial discrimination between broad chemical classes (e.g., contains oxygen) followed by finer refinements using additional data into sub-classes (e.g., ketones vs. alcohols) and, eventually, specific compositions (e.g., ethanol vs. methanol) [1]. We validated this approach using an array of temperature-controlled chemiresistors. We demonstrated that a small set of training analytes is sufficient to allow generalization to novel chemicals and that the scheme provides robust categorization despite aging. Here, we provide further characterization of this approach.
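
    A hedged sketch of a coarse-to-fine classifier cascade of the kind described, on synthetic two-feature "sensor" data; the analytes, class hierarchy, and models below are hypothetical stand-ins, not the paper's chemiresistor data or method:

    ```python
    # Sketch of a coarse-to-fine (hierarchical) classifier cascade: first predict
    # a broad chemical class, then refine within that class. Data, hierarchy, and
    # models are synthetic illustrations, not the paper's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_per = 50
    # Synthetic "sensor array" responses for four analytes in two broad classes.
    analytes = {("oxygenated", "ethanol"): [1.0, 0.2],
                ("oxygenated", "acetone"): [0.8, 0.6],
                ("hydrocarbon", "hexane"): [0.1, 1.0],
                ("hydrocarbon", "toluene"): [0.3, 0.9]}
    X, coarse_y, fine_y = [], [], []
    for (broad, fine), centre in analytes.items():
        X.append(rng.normal(centre, 0.1, size=(n_per, 2)))
        coarse_y += [broad] * n_per
        fine_y += [fine] * n_per
    X = np.vstack(X)

    # Stage 1: coarse class; stage 2: one refiner per coarse class.
    coarse_clf = LogisticRegression().fit(X, coarse_y)
    fine_clfs = {}
    for c in set(coarse_y):
        mask = np.array([y == c for y in coarse_y])
        fine_clfs[c] = LogisticRegression().fit(X[mask], np.array(fine_y)[mask])

    x_new = np.array([[0.95, 0.25]])
    broad_pred = coarse_clf.predict(x_new)[0]
    fine_pred = fine_clfs[broad_pred].predict(x_new)[0]
    print(broad_pred, "->", fine_pred)
    ```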

  18. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets

    PubMed Central

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision-making tasks of different natures, e.g., economics, safety engineering, ecology, and biology, are based on vague, sparse, partially inconsistent, and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions that can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision-making tasks is incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm for generating some missing input information items (IIIs), using mainly decision tree topologies, and integrating them into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g., a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e., when the decision tree topology is the only information available. In practice, however, isolated information items, e.g., some vaguely known probabilities (fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail. PMID:26158662

  19. Reconciliation of Decision-Making Heuristics Based on Decision Trees Topologies and Incomplete Fuzzy Probabilities Sets.

    PubMed

    Doubravsky, Karel; Dohnal, Mirko

    2015-01-01

    Complex decision-making tasks of different natures, e.g., economics, safety engineering, ecology, and biology, are based on vague, sparse, partially inconsistent, and subjective knowledge. Moreover, decision-making economists and engineers are usually not willing to invest too much time into the study of complex formal theories. They require decisions that can be (re)checked by human-like common sense reasoning. One important problem related to realistic decision-making tasks is incomplete data sets required by the chosen decision-making algorithm. This paper presents a relatively simple algorithm for generating some missing input information items (IIIs), using mainly decision tree topologies, and integrating them into incomplete data sets. The algorithm is based on easy-to-understand heuristics, e.g., a longer decision tree sub-path is less probable. This heuristic can solve decision problems under total ignorance, i.e., when the decision tree topology is the only information available. In practice, however, isolated information items, e.g., some vaguely known probabilities (fuzzy probabilities), are usually available. This means that a realistic problem is analysed under partial ignorance. The proposed algorithm reconciles topology-related heuristics and additional fuzzy sets using fuzzy linear programming. The case study, represented by a tree with six lotteries and one fuzzy probability, is presented in detail.

  20. From Quantum Fields to Local Von Neumann Algebras

    NASA Astrophysics Data System (ADS)

    Borchers, H. J.; Yngvason, Jakob

    The subject of the paper is an old problem of the general theory of quantized fields: When can the unbounded operators of a Wightman field theory be associated with local algebras of bounded operators in the sense of Haag? The paper reviews and extends previous work on this question, stressing its connections with a noncommutive generalization of the classical Hamburger moment problem. Necessary and sufficient conditions for the existence of a local net of von Neumann algebras corresponding to a given Wightman field are formulated in terms of strengthened versions of the usual positivity property of Wightman functionals. The possibility that the local net has to be defined in an enlarged Hilbert space cannot be ruled out in general. Under additional hypotheses, e.g., if the field operators obey certain energy bounds, such an extension of the Hilbert space is not necessary, however. In these cases a fairly simple condition for the existence of a local net can be given involving the concept of “central positivity” introduced by Powers. The analysis presented here applies to translationally covariant fields with an arbitrary number of components, whereas Lorentz covariance is not needed. The paper contains also a brief discussion of an approach to noncommutative moment problems due to Dubois-Violette, and concludes with some remarks on modular theory for algebras of unbounded operators.

  1. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine the mitigation of resolution limit problems using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100
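
    A small sketch of the two evaluation criteria named above (modularity and NMI against a ground truth), computed for a toy graph with standard library routines; it illustrates the metrics only, not the HAM algorithm:

    ```python
    # Sketch of the evaluation criteria mentioned (NMI vs. a ground truth and
    # modularity), applied to a toy graph. This illustrates the metrics only;
    # the detected partition below is a stand-in, not HAM output.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities, modularity
    from sklearn.metrics import normalized_mutual_info_score

    G = nx.karate_club_graph()
    detected = list(greedy_modularity_communities(G))   # stand-in for HAM output
    print("modularity:", round(modularity(G, detected), 3))

    # NMI compares detected labels with a known ground-truth partition.
    truth = [G.nodes[v]["club"] for v in G]              # the classic two-group split
    labels = [next(i for i, c in enumerate(detected) if v in c) for v in G]
    print("NMI:", round(normalized_mutual_info_score(truth, labels), 3))
    ```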

  2. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from its expected alignment, TRAC may give incorrect feedback for the control of the robot. A simple example: if the robot operator thinks the camera is right side up but it is actually upside down, the camera feedback will tell the operator to move in the wrong direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arm by arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.

  3. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  4. Generating subtour elimination constraints for the TSP from pure integer solutions.

    PubMed

    Pferschy, Ulrich; Staněk, Rostislav

    2017-01-01

    The traveling salesman problem (TSP) is one of the most prominent combinatorial optimization problems. Given a complete graph G = (V, E) and non-negative distances d for every edge, the TSP asks for a shortest tour through all vertices with respect to the distances d. The method of choice for solving the TSP to optimality is a branch and cut approach. Usually the integrality constraints are relaxed first and all separation processes to identify violated inequalities are done on fractional solutions. In our approach we try to exploit the impressive performance of current ILP-solvers and work only with integer solutions without ever interfering with fractional solutions. We stick to a very simple ILP-model and relax the subtour elimination constraints only. The resulting problem is solved to integer optimality, violated constraints (which are trivial to find) are added and the process is repeated until a feasible solution is found. In order to speed up the algorithm we pursue several attempts to find as many relevant subtours as possible. These attempts are based on the clustering of vertices with additional insights gained from empirical observations and random graph theory. Computational results are performed on test instances taken from the TSPLIB95 and on random Euclidean graphs.
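
    A minimal sketch of the separation step highlighted above: in an integer solution the selected edges decompose into cycles, so violated subtour elimination constraints can be found with a connected-components pass. The surrounding ILP model and solver are omitted:

    ```python
    # Sketch of the separation step described in the abstract: given the edges
    # selected by an integer solution, each connected component that does not
    # contain all vertices yields a violated subtour elimination constraint
    #   sum_{e in E(S)} x_e <= |S| - 1.
    # Only the "trivial to find" part of the loop is shown.
    import networkx as nx

    def violated_subtours(n_vertices, selected_edges):
        """Return vertex sets S of subtours in an integer TSP solution."""
        H = nx.Graph()
        H.add_nodes_from(range(n_vertices))
        H.add_edges_from(selected_edges)
        components = list(nx.connected_components(H))
        # A feasible tour is a single Hamiltonian cycle, i.e. one component with
        # all vertices; anything smaller is a subtour to cut off.
        return [S for S in components if len(S) < n_vertices]

    # Example: an "optimal" integer solution that split into two 3-cycles.
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
    for S in violated_subtours(6, edges):
        print("add constraint: sum of x_e over edges inside", sorted(S), "<=", len(S) - 1)
    ```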

  5. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    DOE PAGES

    Kilcrease, D. P.; Brookes, S.

    2013-08-19

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is to use the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert–Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross-section similar to that from the more time-consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. We also look at some further modifications to our Born Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions, which more accurately approximates convergent close coupling calculations.

  6. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    The computation of the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of a resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on limited pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.

  7. The Implementation of Problem-Solving Based Laboratory Activities to Teach the Concept of Simple Harmonic Motion in Senior High School

    NASA Astrophysics Data System (ADS)

    Iradat, R. D.; Alatas, F.

    2017-09-01

    Simple harmonic motion is considered a relatively complex concept for students to understand. This study implements laboratory activities that focus on solving contextual problems related to the concept. A group of senior high school students participated in this pre-experimental study using a one-group pretest-posttest design. The laboratory activities had a positive impact on students' scientific skills, such as formulating goals, conducting experiments, applying laboratory tools, and collecting data. This study therefore adds to the theoretical and practical knowledge needed to teach complicated physics concepts more effectively.

  8. Working memory deficits in children with reading difficulties: memory span and dual task coordination.

    PubMed

    Wang, Shinmin; Gathercole, Susan E

    2013-05-01

    The current study investigated the cause of the reported problems in working memory in children with reading difficulties. Verbal and visuospatial simple and complex span tasks, and digit span and reaction times tasks performed singly and in combination, were administered to 46 children with single word reading difficulties and 45 typically developing children matched for age and nonverbal ability. Children with reading difficulties had pervasive deficits in the simple and complex span tasks and had poorer abilities to coordinate two cognitive demanding tasks. These findings indicate that working memory problems in children with reading difficulties may reflect a core deficit in the central executive. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Multi-objective optimal dispatch of distributed energy resources

    NASA Astrophysics Data System (ADS)

    Longe, Ayomide

    This thesis is composed of two papers that investigate the optimal dispatch of distributed energy resources. In the first paper, an economic dispatch problem for a community microgrid is studied. In this microgrid, each agent pursues an economic dispatch for its personal resources. In addition, each agent is capable of trading electricity with other agents through a local energy market. In this paper, a simple market structure is introduced as a framework for energy trades in a small community microgrid such as the Solar Village. It was found that both sellers and buyers benefited by participating in this market. In the second paper, semidefinite programming (SDP) for convex relaxation of the power flow equations is used for optimal active and reactive dispatch of distributed energy resources (DER). Various objective functions, including voltage regulation, reduced transmission line power losses, and minimized reactive power charges for a microgrid, are introduced. Combinations of these goals are attained by solving a multiobjective optimization for the proposed optimal reactive power dispatch (ORPD) problem. Both centralized and distributed versions of this optimal dispatch are investigated. It was found that SDP made the optimal dispatch faster and that the distributed solution allowed for scalability.

  10. Text-interpreter language for flexible generation of patient notes and instructions.

    PubMed

    Forker, T S

    1992-01-01

    An interpreted computer language has been developed along with a windowed user interface and multi-printer-support formatter to allow preparation of documentation of patient visits, including progress notes, prescriptions, excuses for work/school, outpatient laboratory requisitions, and patient instructions. Input is by trackball or mouse with little or no keyboard skill required. For clinical problems with specific protocols, the clinician can be prompted with problem-specific items of history, exam, and lab data to be gathered and documented. The language implements a number of text-related commands as well as branching logic and arithmetic commands. In addition to generating text, it is simple to implement arithmetic calculations such as weight-specific drug dosages; multiple branching decision-support protocols for paramedical personnel (or physicians); and calculation of clinical scores (e.g., coma or trauma scores) while simultaneously documenting the status of each component of the score. ASCII text files produced by the interpreter are available for computerized quality audit. Interpreter instructions are contained in text files users can customize with any text editor.
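
    As a tiny illustration of the kind of arithmetic the language supports (weight-specific dosing), with hypothetical numbers that are not clinical guidance:

    ```python
    # Illustration of a weight-specific dosage calculation of the kind the
    # text-interpreter language supports. The per-kg dose and cap are
    # hypothetical examples, not clinical guidance.
    def weight_based_dose(weight_kg: float, mg_per_kg: float, max_mg: float) -> float:
        return min(weight_kg * mg_per_kg, max_mg)

    print(weight_based_dose(weight_kg=18.0, mg_per_kg=15.0, max_mg=500.0), "mg per dose")
    ```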

  11. Parametric study of beam refraction problems across laser anemometer windows

    NASA Technical Reports Server (NTRS)

    Owen, A. K.

    1986-01-01

    The experimenter is often required to view flows through a window with a different index of refraction than either the medium being observed or the medium that the laser anemometer is immersed in. The refraction that occurs at the window surfaces may lead to undesirable changes in probe volume position or beam crossing angle and can lead to partial or complete beam uncrossing. This report describes the results of a parametric study of this problem using a ray tracing technique to predict these changes. The windows studied were a flat plate and a simple cylinder. For the flat-plate study: (1) surface thickness, (2) beam crossing angle, (3) bisecting line - surface normal angle, and (4) incoming beam plane surface orientation were varied. For the cylindrical window additional parameters were also varied: (1) probe volume immersion, (2) probe volume off-radial position, and (3) probe volume position out of the R-theta plane of the lens. A number of empirical correlations were deduced to aid the interested reader in determining the movement, uncrossing, and change in crossing angle for a particular situation.

  12. A parametric study of the beam refraction problems across laser anemometer windows

    NASA Technical Reports Server (NTRS)

    Owen, Albert K.

    1986-01-01

    The experimenter is often required to view flows through a window with a different index of refraction than either the medium being observed or the medium that the laser anemometer is immersed in. The refraction that occurs at the window surfaces may lead to undesirable changes in probe volume position or beam crossing angle and can lead to partial or complete beam uncrossing. This report describes the results of a parametric study of this problem using a ray tracing technique to predict these changes. The windows studied were a flat plate and a simple cylinder. For the flat-plate study: (1) surface thickness, (2) beam crossing angle, (3) bisecting line - surface normal angle, and (4) incoming beam plane surface orientation were varied. For the cylindrical window additional parameters were also varied: (1) probe volume immersion, (2) probe volume off-radial position, and (3) probe volume position out of the r-theta plane of the lens. A number of empirical correlations were deduced to aid the reader in determining the movement, uncrossing, and change in crossing angle for particular situations.
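
    For the flat-plate case the underlying relations are Snell's law at each surface and the resulting lateral beam displacement; a minimal sketch with arbitrary window parameters follows (the report's full ray tracing of cylindrical windows is not reproduced here):

    ```python
    # Minimal flat-plate illustration of the refraction effects studied in the
    # report: Snell's law at each surface and the resulting lateral shift of the
    # beam. The thickness and refractive indices below are arbitrary examples.
    import math

    def lateral_shift(thickness, n_outside, n_window, incidence_deg):
        """Lateral displacement of a beam crossing a plane-parallel window."""
        ti = math.radians(incidence_deg)
        tr = math.asin(n_outside * math.sin(ti) / n_window)   # Snell's law
        return thickness * math.sin(ti - tr) / math.cos(tr)

    # A 10 mm glass window (n = 1.5) in air, beam incident at 30 degrees:
    print(f"{lateral_shift(10.0, 1.0, 1.5, 30.0):.2f} mm")
    ```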

  13. Conceptual and statistical problems associated with the use of diversity indices in ecology.

    PubMed

    Barrantes, Gilbert; Sandoval, Luis

    2009-09-01

    Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analyzing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. There is often no single statistical method that retains all the information needed to answer even a simple question. However, multivariate analyses, such as cluster analyses or multiple regressions, could be used instead of diversity indices. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on environmental variables associated with the presence and abundance of the species in a community. In addition, particular hypotheses associated with changes in species richness across localities, or changes in the abundance of one species or a group of species, can be tested using univariate, bivariate, and/or rarefaction statistical tests. The rarefaction method has proved to be robust for standardizing all samples to a common size. Even the simplest method, such as reporting the number of species per taxonomic category, possibly provides more information than a diversity index value.
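
    The Shannon-Wiener index under discussion is H' = -Σ p_i ln p_i; a minimal computation over invented abundance counts, shown only to make the criticized quantity concrete:

    ```python
    # The Shannon-Wiener index discussed in the abstract, H' = -sum(p_i * ln p_i),
    # computed for invented species abundances.
    import math

    def shannon_wiener(abundances):
        total = sum(abundances)
        return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

    community_a = [50, 30, 10, 5, 5]      # hypothetical counts per species
    community_b = [20, 20, 20, 20, 20]
    print(round(shannon_wiener(community_a), 3), round(shannon_wiener(community_b), 3))
    ```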

  14. Who will save the tokamak - Harry Potter, Arnold Schwarzenegger, or Shaquille O'Neil?

    NASA Astrophysics Data System (ADS)

    Freidberg, J.; Mangiarotti, F.; Minervini, J.

    2014-10-01

    The tokamak is the current leading contender for a fusion power reactor. The reason for the preeminence of the tokamak is its high quality plasma physics performance relative to other concepts. Even so, it is well known that the tokamak must still overcome two basic physics challenges before becoming viable as a DEMO and ultimately a reactor: (1) the achievement of non-inductive steady state operation, and (2) the achievement of robust disruption free operation. These are in addition to the PMI problems faced by all concepts. The work presented here demonstrates by means of a simple but highly credible analytic calculation that a "standard" tokamak cannot lead to a reactor - it is just not possible to simultaneously satisfy all the plasma physics plus engineering constraints. Three possible solutions to the problem, some more well-known than others, are analyzed. These visual image generating solutions are defined as (1) the Harry Potter solution, (2) the Arnold Schwarzenegger solution, and (3) the Shaquille O'Neil solution. Each solution will be described both qualitatively and quantitatively at the meeting.

  15. A solution to the biodiversity paradox by logical deterministic cellular automata.

    PubMed

    Kalmykov, Lev V; Kalmykov, Vyacheslav L

    2015-06-01

    The paradox of biological diversity is the key problem of theoretical ecology. The paradox consists in the contradiction between the competitive exclusion principle and the observed biodiversity. The principle is important as the basis for ecological theory. On a relatively simple model we show a mechanism of indefinite coexistence of complete competitors which violates the known formulations of the competitive exclusion principle. This mechanism is based on timely recovery of limiting resources and their spatio-temporal allocation between competitors. Because of the limitations of black-box modeling, it has been difficult to formulate the exclusion principle correctly. Our white-box multiscale model of two-species competition is based on logical deterministic individual-based cellular automata. This approach provides automatic deductive inference on the basis of a system of axioms and gives direct insight into the mechanisms of the studied system. It is one of the most promising methods of artificial intelligence. We reformulate and generalize the competitive exclusion principle and explain why this formulation provides a solution to the biodiversity paradox. In addition, we propose a principle of competitive coexistence.

  16. Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing

    NASA Astrophysics Data System (ADS)

    Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline

    2017-11-01

    Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.

  18. The Natural-CCD Algorithm, a Novel Method to Solve the Inverse Kinematics of Hyper-redundant and Soft Robots.

    PubMed

    Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime

    2018-03-22

    This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, these robots are underdetermined systems. Therefore, they exhibit an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD) and named natural-CCD is proposed to solve this issue. It takes its name from the fact that it generates very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
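
    For readers unfamiliar with CCD itself, the sketch below shows the classic cyclic coordinate descent step for a planar chain of equal-length revolute links. The natural-CCD refinements described in the article (for example, how much each joint is allowed to rotate per sweep) are not reproduced, and the chain length, link length and target are illustrative assumptions.

```python
import numpy as np

def forward(angles, link=1.0):
    """Return the (x, y) positions of every joint plus the end effector."""
    pts, pos, heading = [np.zeros(2)], np.zeros(2), 0.0
    for a in angles:
        heading += a
        pos = pos + link * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos)
    return np.array(pts)

def ccd_ik(target, n_joints=10, sweeps=50, tol=1e-4):
    angles = np.zeros(n_joints)
    for _ in range(sweeps):
        for j in reversed(range(n_joints)):        # iterate from tip to base
            pts = forward(angles)
            to_tip = pts[-1] - pts[j]
            to_tgt = np.asarray(target, float) - pts[j]
            # Rotate joint j so the tip vector lines up with the target vector.
            delta = np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_tip[1], to_tip[0])
            angles[j] += delta
        if np.linalg.norm(forward(angles)[-1] - target) < tol:
            break
    return angles

angles = ccd_ik(target=np.array([4.0, 5.0]))
print("end-effector position:", forward(angles)[-1])
```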

  19. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, namely job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.

  20. Hypertension in Children: Role of Obesity, Simple Carbohydrates, and Uric Acid

    PubMed Central

    Orlando, Antonina; Cazzaniga, Emanuela; Giussani, Marco; Palestini, Paola; Genovesi, Simonetta

    2018-01-01

    Over the past 60 years there has been a dramatic increase in the prevalence of overweight in children and adolescents, rising from 4% in 1975 to 18% in 2016. Recent estimates indicate that more than 340 million children and adolescents are overweight or obese. Obesity is often associated with hypertension, which is an important cardiovascular risk factor. Recent studies show that hypertension is a frequent finding in the pediatric age group, and hypertensive children easily become hypertensive adults. This phenomenon contributes to increasing cardiovascular risk in adulthood. Primary hypertension is a growing problem, especially in children and adolescents of western countries, largely because of its association with the ongoing obesity epidemic. Recently, it has been hypothesized that a dietary link between obesity and elevated blood pressure (BP) values could be simple carbohydrate consumption, particularly fructose, both in adults and in children. Excessive intake of fructose leads to increased serum uric acid (SUA), and high SUA values are independently associated with the presence of hypertension and weaken the efficacy of lifestyle modifications in children. The present review provides an update of existing data regarding the relationship between BP, simple carbohydrates (particularly fructose), and uric acid in the pediatric age group. In addition, we analyze the national policies that have been implemented over the last few years, in order to identify the best practices for limiting the socio-economic impact of excessive sugar consumption in children. PMID:29774210

  1. Hypertension in Children: Role of Obesity, Simple Carbohydrates, and Uric Acid.

    PubMed

    Orlando, Antonina; Cazzaniga, Emanuela; Giussani, Marco; Palestini, Paola; Genovesi, Simonetta

    2018-01-01

    Over the past 60 years there has been a dramatic increase in the prevalence of overweight in children and adolescents, rising from 4% in 1975 to 18% in 2016. Recent estimates indicate that more than 340 million children and adolescents are overweight or obese. Obesity is often associated with hypertension, which is an important cardiovascular risk factor. Recent studies show that hypertension is a frequent finding in the pediatric age group, and hypertensive children easily become hypertensive adults. This phenomenon contributes to increasing cardiovascular risk in adulthood. Primary hypertension is a growing problem, especially in children and adolescents of western countries, largely because of its association with the ongoing obesity epidemic. Recently, it has been hypothesized that a dietary link between obesity and elevated blood pressure (BP) values could be simple carbohydrate consumption, particularly fructose, both in adults and in children. Excessive intake of fructose leads to increased serum uric acid (SUA), and high SUA values are independently associated with the presence of hypertension and weaken the efficacy of lifestyle modifications in children. The present review provides an update of existing data regarding the relationship between BP, simple carbohydrates (particularly fructose), and uric acid in the pediatric age group. In addition, we analyze the national policies that have been implemented over the last few years, in order to identify the best practices for limiting the socio-economic impact of excessive sugar consumption in children.

  2. Solving quantum optimal control problems using Clebsch variables and Lin constraints

    NASA Astrophysics Data System (ADS)

    Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.

    2018-01-01

    Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem is modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories, the classical theory defined by the objective functional and the quantum system, is established by using a suitable version of the Lagrange multiplier theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. This yields a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional). One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be computed explicitly. This procedure can be presented as an algorithm applicable to a large class of systems. Finally, some simple examples illustrating the main features of the theory are discussed: spin control, a simple quantum Hamiltonian with an ‘Elroy beanie’ type classical model, and a controlled one-dimensional quantum harmonic oscillator.

  3. Two Methods for Teaching Simple Visual Discriminations to Learners with Severe Disabilities

    ERIC Educational Resources Information Center

    Graff, Richard B.; Green, Gina

    2004-01-01

    Simple discriminations are involved in many functional skills; additionally, they are components of conditional discriminations (identity and arbitrary matching-to-sample), which are involved in a wide array of other important performances. Many individuals with severe disabilities have difficulty acquiring simple discriminations with standard…

  4. Vegetarian versus Meat-Based Diets for Companion Animals

    PubMed Central

    Knight, Andrew; Leitsberger, Madelaine

    2016-01-01

    Simple Summary Many owners of companion animals are interested in vegetarian diets for their animals, as concerns increase about the consequences of animal farming, for health, animal welfare, and the environment. However, are vegetarian diets for cats and dogs nutritionally balanced and healthy? This article comprehensively reviews the evidence published to date from four studies that have examined the nutritional adequacy of vegetarian diets for cats and dogs. To obtain additional information, we surveyed 12 pet food companies detailed in the most recent study. We also examined the nutritional soundness of meat-based companion-animal diets, and reviewed the evidence concerning the health status of vegetarian, carnivorous and omnivorous companion animals. Both cats and dogs may thrive on vegetarian diets, but these must be nutritionally complete and reasonably balanced. Owners should also regularly monitor urinary acidity, and should correct urinary alkalinisation through appropriate dietary additives, if necessary. Abstract Companion animal owners are increasingly concerned about the links between degenerative health conditions, farm animal welfare problems, environmental degradation, fertilizers and herbicides, climate change, and causative factors; such as animal farming and the consumption of animal products. Accordingly, many owners are increasingly interested in vegetarian diets for themselves and their companion animals. However, are vegetarian canine and feline diets nutritious and safe? Four studies assessing the nutritional soundness of these diets were reviewed, and manufacturer responses to the most recent studies are provided. Additional reviewed studies examined the nutritional soundness of commercial meat-based diets and the health status of cats and dogs maintained on vegetarian and meat-based diets. Problems with all of these dietary choices have been documented, including nutritional inadequacies and health problems. However, a significant and growing body of population studies and case reports have indicated that cats and dogs maintained on vegetarian diets may be healthy—including those exercising at the highest levels—and, indeed, may experience a range of health benefits. Such diets must be nutritionally complete and reasonably balanced, however, and owners should regularly monitor urinary acidity and should correct urinary alkalinisation through appropriate dietary additives, if necessary. PMID:27657139

  5. Vervet monkeys use paths consistent with context-specific spatial movement heuristics.

    PubMed

    Teichroeb, Julie A

    2015-10-01

    Animal foraging routes are analogous to the computationally demanding "traveling salesman problem" (TSP), where individuals must find the shortest path among several locations before returning to the start. Humans approximate solutions to TSPs using simple heuristics or "rules of thumb," but our knowledge of how other animals solve multidestination routing problems is incomplete. Most nonhuman primate species have shown limited ability to route plan. However, captive vervets were shown to solve a TSP for six sites. These results were consistent with either planning three steps ahead or a risk-avoidance strategy. I investigated how wild vervet monkeys (Chlorocebus pygerythrus) solved a path problem with six equally rewarding food sites, where the site arrangement allowed assessment of whether vervets found the shortest route and/or used paths consistent with one of three simple heuristics to navigate. Single vervets took the shortest possible path in fewer than half of the trials, usually in ways consistent with the most efficient heuristic (the convex hull). When in competition, vervets' paths were consistent with different, more efficient heuristics dependent on their dominance rank (a cluster strategy for dominants and the nearest neighbor rule for subordinates). These results suggest that, like humans, vervets may solve multidestination routing problems by applying simple, adaptive, context-specific "rules of thumb." The heuristics that were consistent with vervet paths in this study are the same as some of those asserted to be used by humans. These spatial movement strategies may have common evolutionary roots and be part of a universal mental navigational toolkit. Alternatively, they may have emerged through convergent evolution as the optimal way to solve multidestination routing problems.
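
    As a concrete illustration of one of the heuristics named in the abstract, the sketch below implements the nearest-neighbour rule for a six-site round trip; the coordinates are placeholder values, not the experimental site layout.

```python
import numpy as np

# Nearest-neighbour "rule of thumb": from the current location, always move to
# the closest unvisited food site, then return to the start.

def nearest_neighbour_route(start, sites):
    route, current = [], np.asarray(start, float)
    remaining = [np.asarray(s, float) for s in sites]
    while remaining:
        dists = [np.linalg.norm(s - current) for s in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        route.append(nxt)
        current = nxt
    return route

def route_length(start, route):
    pts = [np.asarray(start, float)] + route + [np.asarray(start, float)]
    return sum(np.linalg.norm(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))

sites = [(1, 0), (2, 1), (2, 3), (0, 3), (-1, 1), (0.5, 1.5)]   # six placeholder sites
route = nearest_neighbour_route((0, 0), sites)
print("visit order:", [tuple(p) for p in route])
print("round-trip length:", round(route_length((0, 0), route), 3))
```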

  6. Excitation spectrum of a mixture of two Bose gases confined in a ring potential with interaction asymmetry

    NASA Astrophysics Data System (ADS)

    Roussou, A.; Smyrnakis, J.; Magiropoulos, M.; Efremidis, N. K.; Kavoulakis, G. M.; Sandin, P.; Ögren, M.; Gulliksson, M.

    2018-04-01

    We study the rotational properties of a two-component Bose–Einstein condensed gas of distinguishable atoms confined in a ring potential, using both the mean-field approximation and the method of diagonalization of the many-body Hamiltonian. We demonstrate that the angular momentum may be given to the system either via single-particle or via ‘collective’ excitation. Furthermore, despite the complexity of this problem, under rather typical conditions the dispersion relation takes a remarkably simple and regular form. Finally, we argue that under certain conditions the dispersion relation is determined via collective excitation. The corresponding many-body state, which minimizes the kinetic energy in addition to the interaction energy, is dictated by elementary number theory.

  7. Three-body spectrum in a finite volume: The role of cubic symmetry

    DOE PAGES

    Doring, M.; Hammer, H. -W.; Mai, M.; ...

    2018-06-15

    The three-particle quantization condition is partially diagonalized in the center-of-mass frame by using cubic symmetry on the lattice. To this end, instead of spherical harmonics, the kernel of the Bethe-Salpeter equation for particle-dimer scattering is expanded in the basis functions of different irreducible representations of the octahedral group. Such a projection is of particular importance for the three-body problem in the finite volume due to the occurrence of three-body singularities above breakup. Additionally, we study the numerical solution and properties of such a projected quantization condition in a simple model. It is shown that, for large volumes, these solutions allow for an instructive interpretation of the energy eigenvalues in terms of bound and scattering states.

  8. Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering

    NASA Technical Reports Server (NTRS)

    Bolton, Matthew L.; Bass, Ellen J.

    2009-01-01

    Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.

  9. Influence of torsional-lateral coupling on stability behavior of geared rotor systems

    NASA Technical Reports Server (NTRS)

    Schwibinger, P.; Nordmann, R.

    1987-01-01

    In high-performance turbomachinery trouble often arises because of unstable nonsynchronous lateral vibrations. The instabilities are mostly caused by oil-film bearings, clearance excitation, internal damping, annular pressure seals in pumps, or labyrinth seals in turbocompressors. In recent times the coupling between torsional and lateral vibrations has been considered as an additional influence. This coupling is of practical importance in geared rotor systems. The literature describes some field problems in geared drive trains where unstable lateral vibrations occurred together with torsional oscillations. This paper studies the influence of the torsional-lateral coupling on the stability behavior of a simple geared system supported by oil-film bearings. The coupling effect is investigated by parameter studies and a sensitivity analysis for the uncoupled and coupled systems.

  10. Three-body spectrum in a finite volume: The role of cubic symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doring, M.; Hammer, H. -W.; Mai, M.

    The three-particle quantization condition is partially diagonalized in the center-of-mass frame by using cubic symmetry on the lattice. To this end, instead of spherical harmonics, the kernel of the Bethe-Salpeter equation for particle-dimer scattering is expanded in the basis functions of different irreducible representations of the octahedral group. Such a projection is of particular importance for the three-body problem in the finite volume due to the occurrence of three-body singularities above breakup. Additionally, we study the numerical solution and properties of such a projected quantization condition in a simple model. It is shown that, for large volumes, these solutions allow for an instructive interpretation of the energy eigenvalues in terms of bound and scattering states.

  11. Choosing order of operations to accelerate strip structure analysis in parameter range

    NASA Astrophysics Data System (ADS)

    Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.

    2018-05-01

    The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. Through the analysis of four strip structures, the authors show that additional acceleration (up to a factor of 2.21) of the iterative process can be obtained when linear systems are solved repeatedly, by choosing a proper order of operations and a suitable preconditioner. The results can be used to accelerate the computer-aided design of various strip structures. The choice of the order of operations is quite simple and universal, and could be applied not only to strip structure analysis but also to a wide range of computational problems.
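
    The specific solvers, orderings and preconditioners studied in the paper are not reproduced here. The sketch below only illustrates the general idea that, when a parameter sweep yields a sequence of similar systems, building a preconditioner once and reusing it across the sequence changes the order of operations and can save time; the matrix, the perturbations and the solver choices are illustrative assumptions using SciPy's GMRES and incomplete-LU routines.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
main = 4.0 + 0.1 * np.arange(n)
A0 = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")

ilu = spla.spilu(A0)                                    # factor the first matrix once
M = spla.LinearOperator(A0.shape, matvec=ilu.solve)     # reuse it as a preconditioner

for k in range(5):                                      # sequence of perturbed systems
    A = (A0 + 0.01 * k * sp.eye(n, format="csc")).tocsc()
    b = np.ones(n)
    x, info = spla.gmres(A, b, M=M)                     # preconditioned iterative solve
    print(f"system {k}: info={info}, residual={np.linalg.norm(A @ x - b):.2e}")
```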

  12. Body-Vortex Interaction, Sound Generation and Destructive Interference

    NASA Technical Reports Server (NTRS)

    Kao, Hsiao C.

    2000-01-01

    It is generally recognized that interaction of vortices with downstream blades is a major source of noise production. To analyze this problem numerically, a two-dimensional model of inviscid flow together with the method of matched asymptotic expansions is proposed. The method of matched asymptotic expansions is used to match the inner region of incompressible flow to the outer region of compressible flow. Because of incompressibility, relatively simple numerical methods are available to treat multiple vortices and multiple bodies of arbitrary shape. Disturbances from vortices and bodies propagate outward as sound waves. Due to their interactions, either constructive or destructive interference may result. When it is destructive, the combined sound intensity can be reduced, sometimes substantially. In addition, an analytical solution to sound generation by the cascade-vortex interaction is given.

  13. Application of ion chromatography in clinical studies and pharmaceutical industry.

    PubMed

    Michalski, Rajmund

    2014-01-01

    Ion chromatography is a well-established regulatory method for analyzing anions and cations in environmental, food and many other samples. It offers an enormous range of possibilities for selecting stationary and mobile phases. Additionally, it usually helps to solve various separation problems, particularly when it is combined with different detection techniques. Ion chromatography can also be used to determine many ions and substances in clinical and pharmaceutical samples. It provides: availability of high capacity stationary phases and sensitive detectors; simple sample preparation; avoidance of hazardous chemicals; decreased sample volumes; flexible reaction options on a changing sample matrix to be analyzed; and the option to operate a fully-automated system. This paper provides a short review of the ion chromatography applications for determining different inorganic and organic substances in clinical and pharmaceutical samples.

  14. A control approach for robots with flexible links and rigid end-effectors

    NASA Technical Reports Server (NTRS)

    Barbieri, Enrique; Ozguner, Umit

    1989-01-01

    Multiarm flexible robots with dexterous end effectors are currently being considered for such tasks as satellite retrieval, servicing and repair, where a two-phase problem can be identified: Phase 1, robot positioning in space; Phase 2, object retrieval. Some issues in Phase 1 regarding modelling and control strategies for a robotic system composed of a long flexible arm and a rigid three-link end effector are presented. The control objective is to maintain the last (rigid) link stationary in space in the presence of an additive disturbance caused by the flexible energy in the first link after a positioning maneuver has been accomplished. Several configuration strategies can be considered, and optimal decentralized servocompensators can be designed. Preliminary computer simulations are included for a simple proportional controller to illustrate the approach.

  15. Investigating student understanding of simple harmonic motion

    NASA Astrophysics Data System (ADS)

    Somroob, S.; Wattanakasiwich, P.

    2017-09-01

    This study aimed to investigate students’ understanding of simple harmonic motion and to develop instructional material on the topic. Participants were 60 students taking a course on vibrations and waves, 46 students taking Physics 2, and 28 students taking Fundamental Physics 2 in the second semester of the 2016 academic year. A 16-question conceptual test and tutorial activities had been developed from previous research findings and evaluated by three physics experts in teaching mechanics before being used in a real classroom. Data collection included both qualitative and quantitative methods. Item analysis and whole-test analysis were determined from student responses to the conceptual test. The results showed that most students had misconceptions about the restoring force and had problems connecting mathematical solutions to real motions, especially the phase angle. Moreover, they had problems interpreting mechanical energy from graphs and diagrams of the motion. These results were used to develop effective instructional materials to enhance student abilities in understanding simple harmonic motion in terms of multiple representations.
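
    Since the abstract singles out the phase angle as a stumbling block, a short worked example may be useful: for x(t) = A cos(ωt + φ), the amplitude and phase follow directly from the initial position and velocity. The numbers below are illustrative, not taken from the study.

```python
import numpy as np

# x(t) = A*cos(w*t + phi):  x(0) = A*cos(phi)  and  v(0) = -A*w*sin(phi),
# so A and phi follow from the initial conditions x0 and v0.

w = 2.0                    # angular frequency, rad/s (illustrative)
x0, v0 = 0.05, -0.20       # initial position (m) and velocity (m/s)

A = np.sqrt(x0**2 + (v0 / w) ** 2)
phi = np.arctan2(-v0 / w, x0)          # phase angle consistent with both conditions

t = np.linspace(0.0, 2 * np.pi / w, 5)
x = A * np.cos(w * t + phi)
v = -A * w * np.sin(w * t + phi)
print(f"A = {A:.3f} m, phi = {phi:.3f} rad")
print("x(0) check:", round(x[0], 3), " v(0) check:", round(v[0], 3))
```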

  16. Using Three-Dimensional Printing to Fabricate a Tubing Connector for Dilation and Evacuation.

    PubMed

    Stitely, Michael L; Paterson, Helen

    2016-02-01

    This is a proof-of-concept study to show that simple instrumentation problems encountered in surgery can be solved by fabricating devices using a three-dimensional printer. The device used in the study is a simple tubing connector fashioned to connect two segments of suction tubing used in a surgical procedure, for which no commercial product was available through our usual suppliers in New Zealand. A cylindrical tubing connector was designed using three-dimensional printing design software. The tubing connector was fabricated using the Makerbot Replicator 2X three-dimensional printer. The connector was used in 15 second-trimester dilation and evacuation procedures. Data forms were completed by the primary operating surgeon. Descriptive statistics were used with the expectation that the device would function as intended in all cases. The three-dimensional printed tubing connector functioned as intended in all 15 instances. Commercially available three-dimensional printing technology can be used to overcome simple instrumentation problems encountered during gynecologic surgical procedures.

  17. Pruning Neural Networks with Distribution Estimation Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feed-forward neural network trained with standard back propagation and public-domain and artificial data sets. The pruned networks had accuracy better than or equal to that of the original fully-connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but found important differences in the execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
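
    A toy version of the GA-based pruning idea is sketched below: binary chromosomes mask the weights of a small, already-fitted linear classifier (standing in for the trained neural network), and fitness is classification accuracy with a mild sparsity bonus. The distribution estimation algorithms compared in the paper (compact GA, extended compact GA, BOA) are not implemented, and all data and GA settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_samples = 20, 400
true_w = np.concatenate([rng.normal(0, 2, 5), np.zeros(n_features - 5)])
X = rng.normal(size=(n_samples, n_features))
y = (X @ true_w + 0.5 * rng.normal(size=n_samples) > 0).astype(int)
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]        # stand-in "trained" weights

def fitness(mask):
    preds = (X @ (w * mask) > 0).astype(int)
    return (preds == y).mean() - 0.002 * mask.sum()      # accuracy, reward sparsity

pop = rng.integers(0, 2, size=(40, n_features))          # binary pruning masks
for gen in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-20:]]              # truncation selection
    cut = rng.integers(1, n_features, size=20)
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 20][c:]]
                     for i, c in enumerate(cut)])        # one-point crossover
    flip = rng.random(kids.shape) < 0.02                 # bitwise mutation
    pop = np.vstack([parents, np.where(flip, 1 - kids, kids)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("kept weights:", int(best.sum()), "of", n_features,
      "accuracy:", round(((X @ (w * best) > 0).astype(int) == y).mean(), 3))
```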

  18. On the interrelation of multiplication and division in secondary school children

    PubMed Central

    Huber, Stefan; Fischer, Ursula; Moeller, Korbinian; Nuerk, Hans-Christoph

    2013-01-01

    Multiplication and division are conceptually inversely related: each division problem can be transformed into a multiplication problem and vice versa. Recent research has indicated strong developmental parallels between multiplication and division in primary school children. In this study, we were interested in (i) whether these developmental parallels persist into secondary school, (ii) whether similar developmental parallels can be observed for simple and complex problems, (iii) whether skill level modulates this relationship, and (iv) whether the correlations are specific and not driven by general cognitive or arithmetic abilities. Therefore, we assessed the performance of 5th and 6th graders attending two secondary school types of the German educational system in simple and complex multiplication as well as division, while controlling for non-verbal intelligence, short-term memory, and other arithmetic abilities. Accordingly, we collected data from students differing in skill level due to either age (5th < 6th grade) or school type (general < intermediate secondary school). We observed moderate to strong bivariate and partial correlations between multiplication and division, with correlations being higher for simple tasks but nevertheless reliable for complex tasks. Moreover, the association between simple multiplication and division depended on students' skill levels as reflected by school type, but not by age. Partial correlations were higher for intermediate than for general secondary school children. In sum, these findings emphasize the importance of the inverse relationship between multiplication and division, which persists into later developmental stages. However, evidence for skill-related differences in the relationship between multiplication and division was restricted to differences between school types. PMID:24133476
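
    The partial-correlation logic used in the study can be illustrated with a small sketch: residualise both the multiplication and the division scores on the control variables, then correlate the residuals. The data below are synthetic placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
controls = rng.normal(size=(n, 2))                      # stand-ins for IQ, memory span
skill = rng.normal(size=n)                              # shared arithmetic skill
multiplication = skill + 0.5 * controls @ [0.6, 0.4] + 0.5 * rng.normal(size=n)
division = skill + 0.5 * controls @ [0.5, 0.5] + 0.5 * rng.normal(size=n)

def residualise(y, X):
    # Regress y on X (with intercept) and return the residuals.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r_bivariate = np.corrcoef(multiplication, division)[0, 1]
r_partial = np.corrcoef(residualise(multiplication, controls),
                        residualise(division, controls))[0, 1]
print(f"bivariate r = {r_bivariate:.2f}, partial r = {r_partial:.2f}")
```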

  19. Useful Material Efficiency Green Metrics Problem Set Exercises for Lecture and Laboratory

    ERIC Educational Resources Information Center

    Andraos, John

    2015-01-01

    A series of pedagogical problem set exercises are posed that illustrate the principles behind material efficiency green metrics and their application in developing a deeper understanding of reaction and synthesis plan analysis and strategies to optimize them. Rigorous, yet simple, mathematical proofs are given for some of the fundamental concepts,…

  20. SIMON: A Simple Instructional Monitor. Technical Report.

    ERIC Educational Resources Information Center

    Feurzeig, Wallace; And Others

    An instructional monitor is a program which tries to detect, diagnose, and possibly help overcome a student's learning difficulties in the course of solving a problem or performing a task. In one approach to building an instructional monitor, the student uses a special task- or problem-oriented language expressly designed around some particular…

  1. Split and flow: reconfigurable capillary connection for digital microfluidic devices.

    PubMed

    Lapierre, Florian; Harnois, Maxime; Coffinier, Yannick; Boukherroub, Rabah; Thomy, Vincent

    2014-09-21

    Supplying liquid to droplet-based microfluidic microsystems remains a delicate task facing the problems of coupling continuous to digital or macro- to microfluidic systems. Here, we take advantage of superhydrophobic microgrids to address this problem. Insertion of a capillary tube inside a microgrid aperture leads to a simple and reconfigurable droplet generation setup.

  2. Environmental Strategies to Prevent Alcohol Problems on College Campuses. Revised

    ERIC Educational Resources Information Center

    Stewart, Kathryn

    2011-01-01

    Alcohol problems on campuses cannot be solved with simple solutions, such as an alcohol awareness campaign. Instead, dangerous college drinking can be prevented with an array of protective measures that deal with alcohol availability, enforcement of existing laws and rules, and changes in how alcohol is promoted, sold and served. Many people,…

  3. QUALITY ASSURANCE AND QUALITY CONTROL IN THE DEVELOPMENT AND APPLICATION OF THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA) TOOL

    EPA Science Inventory

    Planning and assessment in land and water resource management are evolving from simple, local-scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial and t...

  4. Understanding Students' Epistemologies: Examining Practice and Meaning in Community Contexts

    ERIC Educational Resources Information Center

    Bang, Megan Elisabeth

    2009-01-01

    There is a great need to raise the levels of science achievement for those groups of children who have traditionally underperformed. Prior cognitive research with Native people suggests that problems with achievement for Native students may be more complicated then simple problems with knowing or not knowing content knowledge. This dissertation…

  5. Math Thinkercises. A Good Apple Math Activity Book for Students. Grades 4-8.

    ERIC Educational Resources Information Center

    Daniel, Becky

    This booklet designed for students in grades 4-8 provides 52 activities, including puzzles and problems. Activities range from simple to complex, giving learners practice in finding patterns, numeration, permutation, and problem solving. Calculators should be available, and students should be encouraged to discuss solutions with classmates,…

  6. Patterns of Student Growth in Reasoning about Multivariate Correlational Problems.

    ERIC Educational Resources Information Center

    Ross, John A.; Cousins, J. Bradley

    Previous studies of the development of correlational reasoning have focused on the interpretation of relatively simple data sets contained in 2 X 2 tables. In contrast, this study examined age trends in subjects' responses to problems involving more than two continuous variables. The research is part of a multi-year project to conceptualize…

  7. Simulation of Stagnation Region Heating in Hypersonic Flow on Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    Hypersonic flow simulations using the node based, unstructured grid code FUN3D are presented. Applications include simple (cylinder) and complex (towed ballute) configurations. Emphasis throughout is on computation of stagnation region heating in hypersonic flow on tetrahedral grids. Hypersonic flow over a cylinder provides a simple test problem for exposing any flaws in a simulation algorithm with regard to its ability to compute accurate heating on such grids. Such flaws predominantly derive from the quality of the captured shock. The importance of pure tetrahedral formulations is discussed. Algorithm adjustments for the baseline Roe / Symmetric, Total-Variation-Diminishing (STVD) formulation to deal with simulation accuracy are presented. Formulations of surface normal gradients to compute heating and diffusion to the surface as needed for a radiative equilibrium wall boundary condition and finite catalytic wall boundary in the node-based unstructured environment are developed. A satisfactory resolution of the heating problem on tetrahedral grids is not realized here; however, a definition of a test problem, and discussion of observed algorithm behaviors to date are presented in order to promote further research on this important problem.

  8. Bread enriched in lycopene and other bioactive compounds by addition of dry tomato waste.

    PubMed

    Nour, Violeta; Ionica, Mira Elena; Trandafir, Ion

    2015-12-01

    The tomato processing industry generates large amounts of waste, mainly tomato skins and seeds, which create environmental problems. These residues are attractive sources of valuable bioactive components and pigments. A relatively simple recovery technology could consist of producing powders to be directly incorporated into foods. Tomato waste coming from a Romanian tomato processing unit was analyzed for the content of several bioactive compounds such as ascorbic acid, β-carotene, lycopene, total phenolics, and mineral and trace elements. In addition, its antioxidant capacity was assayed. Results revealed that tomato waste (skins and seeds) could be successfully utilized as a functional ingredient for the formulation of antioxidant-rich functional foods. Dry tomato processing waste was used to supplement wheat flour at 6 and 10 % levels (w/w, flour basis) and the effects on the bread's physicochemical, baking and sensorial characteristics were studied. The following changes were observed: increase in moisture content, titratable acidity and bread crumb elasticity, and reduction in specific volume and bread crumb porosity. The addition of dry tomato waste at 6 % resulted in bread with good sensory characteristics and overall acceptability, but as the amount of dry tomato waste increased to 10 %, the bread was less acceptable.

  9. Midinfrared radiation energy harvesting device

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Ren; Wang, Wei-Chih

    2017-07-01

    The International Energy Agency reports a 17.6% annual growth rate in sustainable energy production. However, sustainable power generation based on environmental conditions (wind and solar) requires an infrastructure that can handle intermittent power generation. An electromagnetic thermoelectric (EMTE) device is proposed to overcome the intermittency problems of current sustainable energy technologies, providing the continuous supply unachievable by photovoltaic cells together with the portability impossible for traditional thermoelectric (TE) generators. The EMTE converts environmental electromagnetic waves to a voltage output without requiring additional input. A single cell of this TE-inspired broadband EMTE can generate a 19.50 nV output within a 7.2-μm² area, with a verified linear scalability of the output voltage through cell addition. This idea leads to a challenge: the electrical polarity of each row of cells is the same, so additional routing may be required to combine the output from each row. An innovative layout is proposed to overcome this issue by switching the electrical polarity every other row. In this scheme, the EM wave absorption spectrum is not altered, and a simple series connection can be implemented to boost the total voltage output by one order of magnitude within a limited area.

  10. Phycocyanin-encapsulating hyalurosomes as carrier for skin delivery and protection from oxidative stress damage.

    PubMed

    Castangia, Ines; Manca, Maria Letizia; Catalán-Latorre, Ana; Maccioni, Anna Maria; Fadda, Anna Maria; Manconi, Maria

    2016-04-01

    The phycobiliprotein phycocyanin, extracted from Klamath algae, possesses important biological properties but is characterized by a low bioavailability due to its high molecular weight. To overcome the bioavailability problems, phycocyanin was successfully encapsulated, using an environmentally-friendly method, into hyalurosomes, a new kind of phospholipid vesicle immobilised with hyaluronan sodium salt by the simple addition of a drug/sodium hyaluronate water dispersion to phospholipids. Liposomes were used as a comparison. Vesicles were small in size and homogeneously dispersed, the mean size always being smaller than 150 nm and the PI never higher than 0.31. Liposomes were unilamellar and spherical; the addition of the polymer slightly modified the vesicular shape, which remained spherical, while the addition of PEG improved the lamellarity, yielding multilamellar vesicles. In all cases phycocyanin was encapsulated in good amount, especially using hyalurosomes and PEG hyalurosomes (65 and 61%, respectively). In vitro penetration studies suggested that hyalurosomes favoured phycocyanin deposition in the deeper skin layers, probably thanks to their peculiar hyaluronan-phospholipid structure. Moreover, hyalurosomes were highly biocompatible and improved the antioxidant activity of phycocyanin on stressed human keratinocytes with respect to the drug solution.

  11. Energy storage and alternatives to improve train voltage on a mass transit system

    NASA Astrophysics Data System (ADS)

    Gordon, S. P.; Rorke, W. S.

    1995-04-01

    The wide separation of substations in the Bay Area Rapid Transit system's transbay tunnel contributes to voltage sag when power demand is high. In the future, expansions to the system will exacerbate this problem by increasing traffic density. Typically, this situation is remedied through the installation of additional substations to increase the system's power capacity. We have evaluated the efficacy of several alternatives to this approach - specifically, installation of an 8 megajoule energy storage system, modification of the existing substations, or reduction of the resistance of the running rails or the third rail. To support this analysis, we have developed a simple model of the traction power system in the tunnel. We have concluded that the storage system does not have sufficient capacity to deal with the expected voltage sags; in this application, the alternatives present more effective solutions. We have also investigated the potential impact of these system upgrades on expected future capital outlays by BART for traction power infrastructure additions. We have found that rail or substation upgrades may reduce the need for additional substations. These upgrades may also be effective on other parts of the BART system and on other traction power systems.

  12. Dysfunctional attitudes and poor problem solving skills predict hopelessness in major depression.

    PubMed

    Cannon, B; Mulroy, R; Otto, M W; Rosenbaum, J F; Fava, M; Nierenberg, A A

    1999-09-01

    Hopelessness is a significant predictor of suicidality, but not all depressed patients feel hopeless. If clinicians can predict hopelessness, they may be able to identify those patients at risk of suicide and focus interventions on factors associated with hopelessness. In this study, we examined potential demographic, diagnostic, and symptom predictors of hopelessness in a sample of 138 medication-free depressed outpatients (73 women and 65 men) with a primary diagnosis of major depression. The significance of predictors was evaluated in both simple and multiple regression analyses. Consistent with previous studies, we found no significant associations between demographic and diagnostic variables and greater hopelessness. Hopelessness was significantly associated with greater depression severity, poor problem solving abilities as assessed by the Problem Solving Inventory, and each of two measures of dysfunctional cognitions (the Dysfunctional Attitudes Scale and the Cognitions Questionnaire). In a stepwise multiple regression equation, however, only dysfunctional cognitions and poor problem solving offered non-redundant prediction of hopelessness scores, together accounting for 20% of the variance in these scores. This study is based on depressed patients entering an outpatient treatment protocol; all analyses were correlational in nature, and no causal links can be drawn. Our findings, identifying clinical correlates of hopelessness, provide clinicians with potential additional targets for the assessment and treatment of suicidal risk. In particular, clinical attention to dysfunctional attitudes and problem solving skills may be important for further reduction of hopelessness and perhaps suicidal risk.

  13. Phoretic drag reduction of chemically active homogeneous spheres under force fields and shear flows

    NASA Astrophysics Data System (ADS)

    Yariv, Ehud; Kaynan, Uri

    2017-01-01

    Surrounded by a spherically symmetric solute cloud, chemically active homogeneous spheres do not undergo conventional autophoresis when suspended in an unbounded liquid domain. When exposed to external flows, solute advection deforms that cloud, resulting in a generally asymmetric distribution of diffusio-osmotic slip which, in turn, modifies particle motion. Inspired by classical forced-convection analyses [Acrivos and Taylor, Phys. Fluids 5, 387 (1962), 10.1063/1.1706630; Frankel and Acrivos, Phys. Fluids 11, 1913 (1968), 10.1063/1.1692218] we illustrate this phoretic phenomenon using two prototypic configurations, one where the particle sediments under a uniform force field and one where it is subject to a simple shear flow. In addition to the Péclet number Pe associated with the imposed flow, the governing nonlinear problem also depends upon α, the intrinsic Péclet number associated with the chemical activity of the particle. As in the forced-convection problems, the small-Péclet-number limit is nonuniform, breaking down at large distances away from the particle. Calculation of the leading-order autophoretic effects thus requires use of matched asymptotic expansions, the outer region being at distances that scale inversely with Pe and Pe^(1/2) in the respective sedimentation and shear problems. In the sedimentation problem we find an effective drag reduction of fractional amount α/8; in the shear problem we find that the magnitude of the stresslet is decreased by a fractional amount α/4. For a dilute particle suspension the latter result is manifested by a reduction of the effective viscosity.

  14. The Acquisition of the Copula "Be" in Present Simple Tense in English by Native Speakers of Russian

    ERIC Educational Resources Information Center

    Unlu, Elena Antonova; Hatipoglu, Ciler

    2012-01-01

    The current research investigated the acquisition of the copula "be" in Present Simple Tense (PST) in English by native speakers of Russian. The aim of the study was to determine whether or not Russian students with different levels of English proficiency would encounter any problems while using the copula "be" in PST in English. The study also…

  15. A simple finite element method for linear hyperbolic problems

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-09-14

    Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of arbitrary shape of polygons/polyhedra. Error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.
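
    The authors' finite element method is not reproduced here. Purely for context, the sketch below solves the model first-order hyperbolic problem u_t + a u_x = 0 with the classical first-order upwind scheme, which is the standard baseline for this class of equations; the grid, CFL number and initial condition are illustrative choices.

```python
import numpy as np

# Classical first-order upwind scheme for u_t + a*u_x = 0 with a > 0 and
# periodic boundaries.  This is NOT the authors' finite element method.

a, nx, cfl = 1.0, 200, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)                   # smooth initial pulse

n_steps = int(round(0.5 / dt))                        # advect until t = 0.5
for _ in range(n_steps):
    u = u - a * dt / dx * (u - np.roll(u, 1))         # upwind difference (a > 0)

center = (0.3 + a * n_steps * dt) % 1.0               # exact pulse location
d = np.abs(x - center)
d = np.minimum(d, 1.0 - d)                            # periodic distance
exact = np.exp(-200.0 * d ** 2)
print("L2 error of upwind scheme:", np.sqrt(dx * np.sum((u - exact) ** 2)))
```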

  16. A simple finite element method for linear hyperbolic problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Ye, Xiu

    Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of arbitrary shape of polygons/polyhedra. Error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.

  17. Determination of Al Content in Commercial Samples through Stoichiometry: A Simple Experiment for an Advanced High-School Chemistry Olympiad Preparatory Course

    ERIC Educational Resources Information Center

    de Lima, Kassio M. G.; da Silva, Amison R. L.; de Souza, Joao P. F.; das Neves, Luiz S.; Gasparotto, Luiz H. S.

    2014-01-01

    Stoichiometry has always been a puzzling subject. This may be partially due to the way it is introduced to students, with stoichiometric coefficients usually provided in the reaction. If the stoichiometric coefficients are not given, students find it very difficult to solve problems. This article describes a simple 4-h laboratory experiment for…
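
    The excerpt does not specify the experiment's procedure, so the sketch below assumes the familiar reaction of aluminium with excess hydrochloric acid (2 Al + 6 HCl -> 2 AlCl3 + 3 H2) and back-calculates the aluminium content of a sample from the volume of hydrogen collected; the reaction choice and all numbers are illustrative assumptions, not taken from the article.

```python
# Illustrative stoichiometry calculation: moles of H2 from the ideal-gas law,
# then moles of Al from the 2 Al : 3 H2 mole ratio, then grams of Al.

R = 0.082057          # L*atm/(mol*K)
M_AL = 26.98          # g/mol

def aluminium_mass(v_h2_litres, temp_kelvin, pressure_atm):
    n_h2 = pressure_atm * v_h2_litres / (R * temp_kelvin)   # ideal-gas law
    n_al = n_h2 * 2 / 3                                      # 2 mol Al per 3 mol H2
    return n_al * M_AL

sample_mass = 0.250                                          # g of foil (assumed)
m_al = aluminium_mass(v_h2_litres=0.285, temp_kelvin=298.0, pressure_atm=1.0)
print(f"aluminium found: {m_al:.3f} g  ({100 * m_al / sample_mass:.1f}% of sample)")
```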

  18. Medical decision making: guide to improved CPT coding.

    PubMed

    Holt, Jim; Warsy, Ambreen; Wright, Paula

    2010-04-01

    The Current Procedural Terminology (CPT) coding system for office visits, which has been in use since 1995, has not been well studied, but it is generally agreed that the system contains much room for error. In fact, the available literature suggests that only slightly more than half of physicians will agree on the same CPT code for a given visit, and only 60% of professional coders will agree on the same code for a particular visit. In addition, the criteria used to assign a code are often related to the amount of written documentation. The goal of this study was to evaluate two novel methods of assessing whether the most appropriate CPT code is used: the level of medical decision making, or the sum of all problems mentioned by the patient during the visit. The authors (a professional coder, a residency faculty member, and a PGY-3 family medicine resident) reviewed 351 randomly selected visit notes from two residency programs in the Northeast Tennessee region for the level of documentation, the level of medical decision making, and the total number of problems addressed. The authors assigned appropriate CPT codes at each of those three levels. Substantial undercoding occurred at each of the three levels. Approximately 33% of visits were undercoded based on the written documentation. Approximately 50% of the visits were undercoded based on the level of documented medical decision making. Approximately 80% of the visits were undercoded based on the total number of problems which the patient presented during the visit. Interrater agreement was fair, and similar to that noted in other coding studies. Undercoding is not only common in a family medicine residency program but also occurs at levels that would not be evident from a simple audit of the documentation in the visit note. Undercoding also results from not exploring problems mentioned by the patient and from not documenting additional work that was performed. Family physicians may benefit from minor alterations in their documentation of office visit notes.

  19. The Role of Diffusion-Weighted Magnetic Resonance Imaging in the Differential Diagnosis of Simple and Hydatid Cysts of the Liver.

    PubMed

    Aksoy, S; Erdil, I; Hocaoglu, E; Inci, E; Adas, G T; Kemik, O; Turkay, R

    2018-02-01

    Simple and hydatid cysts of the liver are a common health problem in Turkey. The aim of the present study was to differentiate different types of hydatid cysts from simple cysts using diffusion-weighted imaging. In total, 37 hydatid cysts and 36 simple cysts of the liver were diagnosed. We retrospectively reviewed the medical records of the patients who had undergone both ultrasonography and magnetic resonance imaging. We measured the apparent diffusion coefficient (ADC) values of all the cysts and then compared the findings. There was no statistically significant difference between the ADC values of simple cysts and type 1 hydatid cysts. For the other types of hydatid cysts, however, it is possible to differentiate hydatid cysts from simple cysts using the ADC values. Although in our study we could not differentiate between type 1 hydatid cysts and simple cysts of the liver, diffusion-weighted images are very useful for differentiating the other types of hydatid cysts from simple cysts using the ADC values.

  20. Solving fully fuzzy transportation problem using pentagonal fuzzy numbers

    NASA Astrophysics Data System (ADS)

    Maheswari, P. Uma; Ganesan, K.

    2018-04-01

    In this paper, we propose a simple approach for the solution of the fuzzy transportation problem in a fuzzy environment, in which the transportation costs, supplies at sources and demands at destinations are represented by pentagonal fuzzy numbers. The fuzzy transportation problem is solved without converting it to its equivalent crisp form, using a robust ranking technique and a new fuzzy arithmetic on pentagonal fuzzy numbers. To illustrate the proposed approach, a numerical example is provided.
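
    The article's robust ranking technique and pentagonal fuzzy arithmetic are not reproduced here. The sketch below represents pentagonal fuzzy numbers as 5-tuples, defuzzifies them with a simple average-of-components ranking (an illustrative assumption, not necessarily the article's ranking), and solves the resulting crisp, balanced transportation problem as a linear program with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

def rank(p):                      # pentagonal fuzzy number -> crisp value (assumed ranking)
    return sum(p) / 5.0

costs = [[(1, 2, 3, 4, 5), (4, 6, 8, 10, 12)],
         [(3, 4, 5, 6, 7), (2, 3, 4, 5, 6)]]
supply = [(8, 9, 10, 11, 12), (13, 14, 15, 16, 17)]
demand = [(9, 10, 11, 12, 13), (12, 13, 14, 15, 16)]

C = np.array([[rank(c) for c in row] for row in costs])
s = np.array([rank(v) for v in supply])
d = np.array([rank(v) for v in demand])
m, n = C.shape

# Equality constraints: each source ships its ranked supply, each sink
# receives its ranked demand (the ranked totals balance in this example).
A_eq, b_eq = [], []
for i in range(m):                                 # supply rows
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row)
    b_eq.append(s[i])
for j in range(n):                                 # demand columns
    row = np.zeros(m * n)
    row[j::n] = 1
    A_eq.append(row)
    b_eq.append(d[j])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (m * n), method="highs")
print("optimal shipments:\n", res.x.reshape(m, n))
print("total ranked cost:", res.fun)
```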
